| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ni1o1/transbigdata | data-visualization | 79 | Error on import: cannot import name 'TopologicalError' from 'shapely.geos' | An error appears at import time: cannot import name 'TopologicalError' from 'shapely.geos' (D:\Anaconda\envs\TransBigData\lib\site-packages\shapely\geos.py)
Why is this happening? Is my shapely version installed incorrectly? But I let conda install everything automatically. | closed | 2023-07-31T03:50:39Z | 2024-01-25T11:47:20Z | https://github.com/ni1o1/transbigdata/issues/79 | [] | Dennissy23 | 1 |
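The error above typically appears when a package that still imports `TopologicalError` from `shapely.geos` runs against Shapely 2.x, where that name was removed (it moved to `shapely.errors`). Pinning shapely below 2.0 (e.g. `conda install "shapely<2"`) usually resolves the import. The version-tolerant import pattern looks roughly like this - a sketch, with a final fallback only so it also runs in an environment without Shapely installed:

```python
# Compatibility shim - an illustrative sketch, not transbigdata's actual fix.
try:
    from shapely.errors import TopologicalError  # Shapely >= 1.8 location
except ImportError:
    try:
        from shapely.geos import TopologicalError  # older Shapely releases
    except ImportError:
        TopologicalError = Exception  # no Shapely in this environment
```
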
ShishirPatil/gorilla | api | 494 | Question about AST evaluation for Java | Hello, I am testing my own model. The test set is Java. Here is an example:
The output of my model is `{'invokemethod007_runIt': {'args': ['suspend', 'log'], 'out': 'debugLog'}}`. When I execute the code, it seems that the evaluation code forces all the parameter values to be of type string: `{'invokemethod007_runIt': {'args': "['suspend', 'log']", 'out': 'debugLog'}}`, but the real expected answer is `{'invokemethod007_runIt': {'args': [['suspend', 'log']], 'out': ['debugLog']}}`.
As a result, the final evaluation reports a type-mismatch error. Do you have a solution? Thank you very much! | closed | 2024-07-01T09:37:22Z | 2024-10-16T07:35:58Z | https://github.com/ShishirPatil/gorilla/issues/494 | [
"BFCL-General"
] | GeniusYx | 3 |
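A generic way to undo the string coercion described above - a hypothetical helper, not part of the official evaluator - is to parse string-typed parameters back into Python literals before comparing:

```python
import ast

def coerce_param(value):
    """Parse a parameter back into a Python literal if it arrived as a string."""
    if isinstance(value, str):
        try:
            return ast.literal_eval(value)
        except (ValueError, SyntaxError):
            return value  # a plain string such as 'debugLog' stays a string
    return value

restored = coerce_param("['suspend', 'log']")  # parses back into a list
untouched = coerce_param("debugLog")           # left as-is
```
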
slackapi/bolt-python | fastapi | 981 | What is the difference between ts and event_ts VERSUS ts and thread_ts? | (Describe your issue and goal here)
What is the difference between ts and event_ts VERSUS ts and thread_ts?
For instance, when a reaction is added to a message in a thread, an event of the following type is generated:
```python
{
"type": "reaction_added",
"user": "xxx",
"reaction": "happy-face",
"item": {"type": "message", "channel": "xxx", "ts": "1698820283.053449"},
"item_user": "U03CVKELZU6",
"event_ts": "1698820290.008000",
}
```
Even though this event was generated when a reaction was added to a file uploaded in a thread, there is no thread_ts value?
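A short illustration of the distinction, using the payload above (field semantics as I understand the Slack Events API, so treat this as a sketch): `item.ts` identifies the message that was reacted to, `event_ts` records when the reaction event itself occurred, and `thread_ts` only appears on message events that live inside a thread - `reaction_added` payloads simply do not carry it:

```python
event = {
    "type": "reaction_added",
    "user": "xxx",
    "reaction": "happy-face",
    "item": {"type": "message", "channel": "xxx", "ts": "1698820283.053449"},
    "item_user": "U03CVKELZU6",
    "event_ts": "1698820290.008000",
}

message_ts = event["item"]["ts"]    # timestamp (and identifier) of the reacted-to message
occurred_at = event["event_ts"]     # when the reaction event itself fired
thread_ts = event.get("thread_ts")  # None here: reaction_added carries no thread info
```
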
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
(Paste the output of `pip freeze | grep slack`)
#### Python runtime version
(Paste the output of `python --version`)
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1.
2.
3.
### Expected result:
(Tell what you expected to happen)
### Actual result:
(Tell what actually happened with logs, screenshots)
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2023-11-01T06:43:37Z | 2023-11-01T08:53:30Z | https://github.com/slackapi/bolt-python/issues/981 | [
"question"
] | WhyIsItSoHardToPickAUsername | 1 |
babysor/MockingBird | deep-learning | 271 | Why does audio and mel-spectrogram preprocessing error out: python encoder_preprocess.py <datasets_root> | The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
```
Traceback (most recent call last):
  File "pre.py", line 74, in <module>
    preprocess_dataset(**vars(args))
  File "E:\PythonProject\MockingBird\synthesizer\preprocess.py", line 88, in preprocess_dataset
    print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence
```
 | open | 2021-12-14T16:03:18Z | 2023-06-27T09:03:17Z | https://github.com/babysor/MockingBird/issues/271 | [] | LiangChenStart | 7 |
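The traceback above is a symptom rather than the cause: the run found 0 utterances, so `metadata` is empty (usually a wrong `<datasets_root>` or an unexpected dataset layout), and `max()` over an empty generator then raises. A minimal sketch of the failing expression and a defensive variant - a hypothetical guard, not MockingBird's actual code:

```python
metadata = []  # simulates a preprocessing run that discovered zero utterances

# The failing pattern: max() over an empty generator raises ValueError.
# Supplying a default makes the situation explicit instead of crashing:
max_text_len = max((len(m[5]) for m in metadata), default=0)

if not metadata:
    warning = "No utterances found - check that <datasets_root> points at the dataset"
```
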
kennethreitz/responder | flask | 48 | Unable to reach subsequently defined routes | I have thrown together a quick app, and I think I discovered a bug with memoization of the does_match method in routes. The behavior I was seeing was that only the first registered route would pass the 'does_match' function, even though later routes had been registered; commenting out the @memoize decorator seemed to fix the issue. I'll submit a pull request, but feel free to disregard it if you know of a better way to fix it.
Trying to access anything other than the first registered route in an app would yield 'Not found' on the web page, because None was being returned.
```python
import responder

api = responder.API()

@api.route('/route1')
def route1(req, res):
    res.text = 'route1'

@api.route('/route2')
def route2(req, res):
    res.text = 'route2'

if __name__ == '__main__':
    api.run()
```
 | closed | 2018-10-15T11:09:02Z | 2018-10-15T19:51:12Z | https://github.com/kennethreitz/responder/issues/48 | [] | nmunro | 1 |
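To illustrate the class of bug being described - this is a deliberately simplified sketch, not responder's actual `memoize` implementation - a cache that ignores its arguments keeps returning whatever answer it computed first, which reproduces the "only the first lookup wins" symptom:

```python
def naive_memoize(fn):
    """Caches a single result and ignores the arguments - the pitfall."""
    cache = {}
    def wrapper(*args):
        if "result" not in cache:
            cache["result"] = fn(*args)
        return cache["result"]
    return wrapper

@naive_memoize
def does_match(request_path, route_path):
    return request_path == route_path

miss = does_match("/route2", "/route1")  # False - and now cached
hit = does_match("/route1", "/route1")   # also False: the real match is masked
```
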
koaning/scikit-lego | scikit-learn | 38 | documentation on github pages | locally it seems to run just fine

but github seems to not be rendering it appropriately | closed | 2019-03-20T06:07:24Z | 2019-03-20T06:25:08Z | https://github.com/koaning/scikit-lego/issues/38 | [] | koaning | 2 |
flairNLP/flair | nlp | 3,015 | connection timeout | getting connection timeout error while trying to download 'en-sentiment' model.
<img width="984" alt="image" src="https://user-images.githubusercontent.com/6858237/206390383-6d3e754d-b5ca-4916-aa93-72f07a351442.png">
| closed | 2022-12-08T07:54:53Z | 2023-09-27T10:48:14Z | https://github.com/flairNLP/flair/issues/3015 | [] | amod99 | 7 |
aeon-toolkit/aeon | scikit-learn | 2,015 | [DOC] Failed Example | ### Describe the issue linked to the documentation
Hi. In the "getting started" documentation [here](https://www.aeon-toolkit.org/en/stable/getting_started.html) there is an example for **Pipelines for aeon estimators**. This example throws the following error
```
ValueError: Multivariate data not supported by BoxCoxTransformer
```
This might be an issue with either the documentation or even the transformer, as it is a `pd.Series` being passed in, so I am unsure why this error exists.
### Suggest a potential alternative/fix
Either fix the documentation, or it might be an error with the transformer itself. | closed | 2024-08-27T14:19:28Z | 2024-11-28T11:18:31Z | https://github.com/aeon-toolkit/aeon/issues/2015 | [
"documentation"
] | twobitunicorn | 2 |
roboflow/supervision | machine-learning | 831 | useing my viedo to run speed | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi, I used my own video to run the [speed_estimation](https://github.com/roboflow/supervision/tree/develop/examples/speed_estimation) code.
I didn't change anything in the code, and my video has a small problem. Could you help me?
issue >>
AttributeError: 'NoneType' object has no attribute 'reshape'
my video [https://www.youtube.com/watch?v=8Gvz_FjWy4s](url)
my video run result [https://youtu.be/0KxiJQKj-vA?si=lVrhGR3edo499JP5](https://youtu.be/0KxiJQKj-vA?si=lVrhGR3edo499JP5)

```python
def transform_points(self, points: np.ndarray) -> np.ndarray:
    reshaped_points = points.reshape(-1, 1, 2).astype(np.float32)
    transformed_points = cv2.perspectiveTransform(reshaped_points, self.m)
    return transformed_points.reshape(-1, 2)
```
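The `'NoneType' object has no attribute 'reshape'` means `points` is `None` before the transform runs - typically because a frame failed to decode or no detections were produced for it, so there is nothing to reshape. A defensive sketch of the same step, using a pure-NumPy homography application in place of `cv2.perspectiveTransform` so the snippet runs without OpenCV (not the example's actual code):

```python
import numpy as np

def transform_points_safe(points, m):
    # Guard: a failed frame read or an empty detection set can yield None/empty input
    if points is None or len(points) == 0:
        return np.empty((0, 2), dtype=np.float32)
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 2)
    # Apply the 3x3 homography m in plain NumPy (stand-in for cv2.perspectiveTransform)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1), dtype=np.float32)])
    projected = homogeneous @ np.asarray(m, dtype=np.float32).T
    return projected[:, :2] / projected[:, 2:3]

empty = transform_points_safe(None, np.eye(3))        # no crash on missing input
same = transform_points_safe([[10.0, 20.0]], np.eye(3))  # identity leaves points unchanged
```
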
### Additional
_No response_ | closed | 2024-02-01T07:34:15Z | 2024-02-01T09:30:04Z | https://github.com/roboflow/supervision/issues/831 | [
"question"
] | althonmp | 4 |
plotly/dash | data-visualization | 2,851 | Dash 2.17.0 prevents some generated App Studio apps from running | https://github.com/plotly/notebook-to-app/actions/runs/8974757424/job/24647808759#step:9:1283
We've reverted to 2.16.1 for the time being. | closed | 2024-05-06T20:27:02Z | 2024-07-26T13:45:34Z | https://github.com/plotly/dash/issues/2851 | [
"P2"
] | hatched | 2 |
2noise/ChatTTS | python | 914 | How should polyphonic characters be handled? For example, my generated speech contains 仔; the model outputs it as "zai" by default, but I want it pronounced "zi". How do I set this? | How should polyphonic characters be handled? For example, my generated speech contains 仔; the model outputs it as "zai" by default, but I want it pronounced "zi". How do I set this? | closed | 2025-03-11T01:15:02Z | 2025-03-12T13:54:39Z | https://github.com/2noise/ChatTTS/issues/914 | [
"documentation"
] | tjh123321 | 1 |
jschneier/django-storages | django | 1,190 | wrong usage of LibCloudStorage._get_object in LibCloudStorage._read | I bumped into this bug when I was trying to make **LibCloudStorage** work with django's **ManifestFilesMixin** but that doesn't matter and it should be fixed regardless.
exact version of what it is now:
```
def _get_object(self, name):
    """Get object by its name. [Return None if object not found"""
    clean_name = self._clean_name(name)
    try:
        return self.driver.get_object(self.bucket, clean_name)
    except ObjectDoesNotExistError:
        return None

def _read(self, name):
    obj = self._get_object(name)
    # TOFIX : we should be able to read chunk by chunk
    return next(self.driver.download_object_as_stream(obj, obj.size))
```
my recommendation:
```
def _read(self, name):
    obj = self._get_object(name)
    if obj is None:
        raise FileNotFoundError(f"{name} does not exist.")
    # TOFIX : we should be able to read chunk by chunk
    return next(self.driver.download_object_as_stream(obj, obj.size))
```
and if you are curious about the exact trigger of the bug:
```
class ManifestFilesMixin(HashedFilesMixin):
    def read_manifest(self):
        try:
            with self.manifest_storage.open(self.manifest_name) as manifest:
                return manifest.read().decode()
        except FileNotFoundError:
            return None
```
| closed | 2022-10-27T14:42:55Z | 2023-02-16T15:14:11Z | https://github.com/jschneier/django-storages/issues/1190 | [] | engAmirEng | 0 |
deezer/spleeter | deep-learning | 195 | [Discussion] Ideas to improve deep learning on a particular music style | Newbie here. First approach to Git, Python/PiP and commands.
I'm really interested in the development of this tool and the use of it. My main focus is to separate stems in a particular field of music: jazzfunk. I'm not fully aware of how neural networks learn themselves and evolve. So, I need basic info to clarify how to improve Spleeter performance.
Does every song we input to Spleeter make it produce better results?
Does staying within a particular style of songs make the output results even better?
Does it have progressive learning steps? Meaning: if I input "easier" songs first, then somewhat more complicated ones, and then chaotic or freeform songs, will it learn better than if the first inputs are complex?
Thanks in advance
(If anyone can label this as "training", it would be appreciated.)
| open | 2019-12-23T16:48:42Z | 2020-01-07T13:12:55Z | https://github.com/deezer/spleeter/issues/195 | [
"question"
] | antojsan | 2 |
ets-labs/python-dependency-injector | asyncio | 750 | Fix Closing dependency resolution |
There's a PR that came up four months ago. [PR LINK](https://github.com/ets-labs/python-dependency-injector/pull/711)
I think the problem has been solved, why hasn't that PR been merged for 4 months? | open | 2023-09-26T06:52:57Z | 2023-09-26T06:52:57Z | https://github.com/ets-labs/python-dependency-injector/issues/750 | [] | HyungJunKimB | 0 |
modin-project/modin | data-science | 7,445 | Metrics interface for collecting modin frontend telemetry | Within Snowflake Pandas we want to start understanding interactive workloads as seen by the end user in modin. Since we are looking at how to balance/change the underlying engine these statistics cannot be collected from our engine plugin alone. This interface should allow us to collect this data without overriding the logging framework and parsing log messages.
| open | 2025-02-17T23:53:21Z | 2025-03-15T19:49:56Z | https://github.com/modin-project/modin/issues/7445 | [
"new feature/request 💬",
"P2"
] | sfc-gh-jkew | 1 |
microsoft/nni | deep-learning | 5,688 | How to use a customized assessor in NNI? | Describe the issue:
How to use a customized assessor in NNI?
I can run my experiment when using built-in assessors like Medianstop. But when I want to use my own customized assessor, problems start.
I learned from this: https://nni.readthedocs.io/en/stable/hpo/custom_algorithm.html
and set up config.yml as described in the manual,
but when I run the command: nnictl create --config my_config_path/config.yml --port my_customize_port
I get the following error:
```
Traceback (most recent call last):
File "/opt/conda/envs/Fusion_trans/bin/nnictl", line 8, in <module>
sys.exit(parse_args())
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/tools/nnictl/nnictl.py", line 503, in parse_args
args.func(args)
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/tools/nnictl/launcher.py", line 91, in create_experiment
exp.start(port, debug, RunMode.Detach)
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/experiment/experiment.py", line 94, in _start_impl
config = self.config.canonical_copy()
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/experiment/config/base.py", line 166, in canonical_copy
canon._canonicalize([])
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/experiment/config/experiment_config.py", line 121, in _canonicalize
_AlgorithmConfig(**algo) # pylint: disable=not-a-mapping
File "/opt/conda/envs/Fusion_trans/lib/python3.8/site-packages/nni/experiment/config/base.py", line 98, in __init__
raise AttributeError(f'{class_name} does not have field(s) {fields}')
AttributeError: _AlgorithmConfig does not have field(s) codedir, classfilename
```
I am confused about how to fix this problem.
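The error message itself points at the fix: `_AlgorithmConfig` in NNI 2.x has no `codeDir`/`classFileName` fields. Based on my reading of the NNI 2.x experiment-config schema (please verify the exact field names against your NNI version's documentation), a customized assessor is referenced with `codeDirectory` plus a `className` of the form `module.Class`, roughly:

```yaml
assessor:
  codeDirectory: /local/nni
  className: Assessor_test.CustomizedAssessor  # file Assessor_test.py, class CustomizedAssessor
  classArgs:
    epoch_num: 40
    start_up: 10
    gap: 5
    higher_is_better: true
```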
Environment:
NNI version: 2.10
Training service (local|remote|pai|aml|etc): local
Client OS: Ubuntu 18.04
Server OS (for remote mode only):
Python version: 3.8.16
PyTorch/TensorFlow version: Pytorch 1.10.1
Is conda/virtualenv/venv used?: virtualenv in anaconda
Is running in Docker?: no
Configuration:
Experiment config (remember to remove secrets!):
```
searchSpaceFile: search_space.json
trialCommand: python3 ~/my_path/train.py --my_args my_args
trialConcurrency: 2
trialGpuNumber: 4
maxTrialNumber: 100
maxExperimentDuration: 999h
experimentWorkingDirectory: "../result/"
tuner:
  name: TPE
  classArgs:
    optimize_mode: minimize
assessor:
  codeDir: /local/nni/
  classFileName: Assessor_test.py
  className: CustomizedAssessor
  # Any parameter need to pass to your Assessor class __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    epoch_num: 40
    start_up: 10
    gap: 5
    higher_is_better : True
trainingService:
  platform: local
  useActiveGpu: true
```
Log message:
no log message, i can not even run the experiment.
Thanks for any suggestions
| open | 2023-09-30T14:51:48Z | 2023-10-13T20:42:40Z | https://github.com/microsoft/nni/issues/5688 | [] | skyling0299 | 1 |
dynaconf/dynaconf | fastapi | 614 | [RFC] merge strategies/deep merge strategies for lazy objects | **Is your feature request related to a problem? Please describe.**
when performing deep merges, dynaconf always eagerly evaluates lazy objects
This can end badly in particular if one wants to configure dynaconf for usage with a loader based on config data.
**Describe the solution you'd like**
have a merge strategy that does not require to eagerly evaluate lazy objects
**Describe alternatives you've considered**
Nothing comes to mind; the issue is a tricky one.
| closed | 2021-07-12T19:15:23Z | 2024-01-08T11:00:21Z | https://github.com/dynaconf/dynaconf/issues/614 | [
"wontfix",
"Not a Bug",
"RFC"
] | RonnyPfannschmidt | 1 |
plotly/dash-table | dash | 830 | Cell with dropdown does not allow for backspace | When editing the value of a cell with a dropdown after double clicking, the value can only be appended with more characters. If a typo was made when filtering the dropdown, pressing the backspace key doesn't do anything and you must click outside the cell to clear the input. However, if you double click on a cell without a dropdown to enable cell editing and then double click on a cell with a dropdown, the backspace key works as expected. | open | 2020-09-23T18:45:01Z | 2020-09-23T18:45:01Z | https://github.com/plotly/dash-table/issues/830 | [] | blozano824 | 0 |
horovod/horovod | pytorch | 3,923 | fail to build horovod 0.28.0 from the source with gcc 12 due to gloo issue | **Environment:**
1. Framework: tensorflow 2.12.0, pytorch 2.0.1
2. Framework version:
3. Horovod version: 0.28.0
4. MPI version:
5. CUDA version: 12.1.1
6. NCCL version: 2.17.1
7. Python version: 3.11
8. Spark / PySpark version:
9. Ray version:
10. OS and version: ArchLinux
11. GCC version: 12.3.0
12. CMake version: 3.26.3
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
```bash
git clone https://github.com/horovod/horovod.git
cd horovod
git submodule update --init --recursive
# modify these environment variable as you need, see also https://github.com/horovod/horovod/blob/master/docs/install.rst
export HOROVOD_CUDA_HOME=/opt/cuda
export HOROVOD_CPU_OPERATIONS=GLOO
export HOROVOD_GPU=CUDA
export HOROVOD_GPU_ALLREDUCE=NCCL
export HOROVOD_GPU_BROADCAST=NCCL
export HOROVOD_WITH_GLOO=1
export HOROVOD_WITH_MPI=1
export HOROVOD_WITHOUT_MXNET=0
export HOROVOD_WITH_PYTORCH=1
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_BUILD_CUDA_CC_LIST="60,61,62,70,72,75,80,86,89,90"
export CC=gcc-12
export CXX=g++-12
python setup.py build
```
error logs:
```text
/build/python-horovod/src/horovod-0.28.0/third_party/gloo/gloo/mpi/context.cc:43:3: note: in expansion of macro ‘GLOO_ENFORCE_EQ’
43 | GLOO_ENFORCE_EQ(rv, MPI_SUCCESS);
| ^~~~~~~~~~~~~~~
make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo.dir/build.make:300: third_party/gloo/gloo/CMakeFiles/gloo.dir/common/linux.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/build/python-horovod/src/horovod-0.28.0/third_party/gloo/gloo/transport/tcp/device.cc: In function ‘gloo::transport::tcp::attr gloo::transport::tcp::CreateDeviceAttr(const attr&)’:
/build/python-horovod/src/horovod-0.28.0/third_party/gloo/gloo/transport/tcp/device.cc:151:39: error: aggregate ‘std::array<char, 64> hostname’ has incomplete type and cannot be defined
151 | std::array<char, HOST_NAME_MAX> hostname;
| ^~~~~~~~
make[2]: *** [third_party/gloo/gloo/CMakeFiles/gloo.dir/build.make:524: third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/device.cc.o] Error 1
make[2]: Leaving directory '/build/python-horovod/src/horovod-0.28.0/build/temp.linux-x86_64-cpython-311/RelWithDebInfo'
make[1]: *** [CMakeFiles/Makefile2:524: third_party/compatible17_gloo/gloo/CMakeFiles/compatible17_gloo.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
```
According to gloo upstream issue https://github.com/facebookincubator/gloo/issues/332, this is fixed by https://github.com/facebookincubator/gloo/commit/4a5e339b764261d20fc409071dc7a8b8989aa195. We only need to update submodule `third_party/gloo` to at least this commit. I could confirm that this works by:
```bash
cd third_party/gloo
git pull https://github.com/facebookincubator/gloo.git
```
I update the submodule, and then I could build it.
| closed | 2023-05-12T15:15:13Z | 2023-05-24T16:52:41Z | https://github.com/horovod/horovod/issues/3923 | [
"bug"
] | hubutui | 3 |
ageitgey/face_recognition | python | 863 | Unable To install face_recognition | * face_recognition version: Latest
* Python version: 3
* Operating System: Raspberry Pi 3
### Description
When I entered the command:
```
sudo pip install face_recognition
```
I got:
```
pi@raspberrypi:~ $ pip install face_recognition
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting face_recognition
Using cached https://files.pythonhosted.org/packages/3f/ed/ad9a28042f373d4633fc8b49109b623597d6f193d3bbbef7780a5ee8eef2/face_recognition-1.2.3-py2.py3-none-any.whl
Requirement already satisfied: numpy in /usr/local/lib/python3.5/dist-packages (from face_recognition) (1.16.4)
Requirement already satisfied: Pillow in /usr/lib/python3/dist-packages (from face_recognition) (4.0.0)
Requirement already satisfied: dlib>=19.7 in /usr/local/lib/python3.5/dist-packages (from face_recognition) (19.17.0)
Requirement already satisfied: Click>=6.0 in /usr/lib/python3/dist-packages (from face_recognition) (6.6)
Collecting face-recognition-models>=0.3.0 (from face_recognition)
Downloading https://www.piwheels.org/simple/face-recognition-models/face_recognition_models-0.3.0-py2.py3-none-any.whl (100.6MB)
|██████████████████████████████▊ | 96.6MB 1.2MB/s eta 0:00:04
ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
face-recognition-models>=0.3.0 from https://www.piwheels.org/simple/face-recognition-models/face_recognition_models-0.3.0-py2.py3-none-any.whl#sha256=8d6b0af2e37a17120c3f13107974bc252142a4ffcb4e58eabdfcf26608e52c24 (from face_recognition):
Expected sha256 8d6b0af2e37a17120c3f13107974bc252142a4ffcb4e58eabdfcf26608e52c24
Got 8d74fb7d6b99b175e6073af059126d07da87ee7b2f395ad1c05fbce76d60b765
```
For Python 3 I tried using pip3, but the same error is coming.
| closed | 2019-06-25T06:40:19Z | 2019-06-25T07:11:12Z | https://github.com/ageitgey/face_recognition/issues/863 | [] | bytesByHarsh | 7 |
ContextLab/hypertools | data-visualization | 155 | saving a DataGeometry object | After performing an analysis and visualizing the result, we want to save out the `geo` so that it can be shared or loaded in at a later time. After a little research, here are a few options:
+ `pickle` - this is the simplest way to save out an object. the downside is that its not an efficient way to store large arrays of data, and loading pickles created in one version of python (2/3) and loaded in the other is problematic. It is possible to save out versions that are compatible with each version separately (i.e. 1 file for python 2 and one for python 3). [a good resource](http://www.diveintopython3.net/serializing.html)
+ `joblib` - this library appears to wrap pickle, but is more efficient at handling large array data. you can also easily compress files. the downside is that it suffers from the same cross-version incompatibility issues as `pickle`
+ `json` - its possible to manually turn objects into json format, and then rebuild the objects on reload. i dont think this is a great solution for us given how variable our saved files may be (e.g. 1 or more of 20+ different scikit-learn model objects). (see [here](https://cmry.github.io/notes/serialize) for a post about converting scikit-learn objects to json)
+ `h5` - this is an efficient file format for large amounts of array data. however, as far as i can tell, python objects can not be easily saved.
+ `h5` + `pickle/joblib` - one possibility would be to save the array data in the h5 format and the rest in a pickle. we would get the benefit of storing array data with h5, and the ease of storing object data with pickle
To summarize, I don't see an elegant way to solve the cross-version (python 2/3) saving issue. So, unless we convert all the models to json and then rebuild them, we are stuck with pickle. My choice would be to go with joblib, which is like pickle but more efficient at handling large array data, and just note that you can't create a file one version of python and save it in the other. | closed | 2017-10-09T12:52:51Z | 2017-10-09T19:57:13Z | https://github.com/ContextLab/hypertools/issues/155 | [] | andrewheusser | 6 |
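A minimal sketch of the hybrid idea discussed above (hypothetical helper names, not hypertools' API): keep the heavy arrays in NumPy's binary format and pickle only the small object metadata, using pickle protocol 2 so files written under Python 3 remain loadable under Python 2:

```python
import os
import pickle
import tempfile

import numpy as np

def save_geo(prefix, data, attrs):
    """Store array data as .npy and the remaining attributes as a pickle."""
    np.save(prefix + "_data.npy", np.asarray(data))
    with open(prefix + "_attrs.pkl", "wb") as f:
        pickle.dump(attrs, f, protocol=2)  # highest protocol Python 2 can read

def load_geo(prefix):
    data = np.load(prefix + "_data.npy")
    with open(prefix + "_attrs.pkl", "rb") as f:
        attrs = pickle.load(f)
    return data, attrs

# Round-trip demo in a temporary directory
with tempfile.TemporaryDirectory() as tmp:
    prefix = os.path.join(tmp, "geo")
    save_geo(prefix, [[1.0, 2.0], [3.0, 4.0]], {"normalize": "across", "reduce": "PCA"})
    data, attrs = load_geo(prefix)
```
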
521xueweihan/HelloGitHub | python | 2,416 | [Open-source self-recommendation] GitRec - a repository recommender-system browser extension for GitHub | ## Recommended project
- Project URL: https://github.com/gorse-io/gitrec
- Category: JS (frontend), Python (backend)
- Project title: GitRec
- Project description: a repository recommender-system browser extension for GitHub
- Highlights: the GitRec browser extension inserts recommended content into GitHub web pages
1. Replaces GitHub's officially recommended repositories. The GitRec extension can replace the official repository recommendations on the GitHub homepage with content generated by GitRec, and this can be toggled in the settings.
2. Generates similar-repository recommendations for popular repositories. If a repository has more than 100 stars, GitRec shows similar repositories in its bottom-right corner.
- Sample code: (optional)
- Screenshot: (optional) gif/png/jpg

- Future update plans:
- Index more repositories to provide recommendations
- Use more user and repository information for more accurate recommendations
| closed | 2022-11-05T15:19:50Z | 2024-01-24T08:15:15Z | https://github.com/521xueweihan/HelloGitHub/issues/2416 | [
"Python 项目"
] | zhenghaoz | 0 |
LAION-AI/Open-Assistant | python | 3,251 | Unable to train model (Loss is 0.000000) | I am trying to fine-tune the LLM (OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) with my own data.
My code
```
import torch
from transformers import LineByLineTextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
                                             load_in_8bit=True,
                                             device_map="auto")

from datasets import load_dataset

# Load the dataset
dataset = load_dataset('parquet', data_files='data/dataset.parquet')

# Tokenize and format the dataset
def tokenize_function(examples):
    return tokenizer(examples['TEXT'], truncation=True, max_length=128, padding='max_length')

tokenized_dataset = dataset.map(tokenize_function, batched=True)

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=100,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    warmup_steps=500,
    weight_decay=0.01,
    logging_dir="./logs",
    logging_steps=4
)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False,
)

# Create the Trainer and train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset['train'],
    data_collator=data_collator,
)

trainer.train()

# Save the trained model
trainer.save_model("model")  # replace with the path where you want to save the model
tokenizer.save_pretrained("model")
```
Now the issue is that while training, the loss is 0.000000, meaning something is wrong with my training. Also, when I load the trained model, no answers come out at all (which should not be the case). In addition, the downloaded model's disk size is 23 GB, but my saved model is only 9.6 GB.
My raw data is in CSV, which I then converted to Parquet. My dataset has 3 columns (TEXT, source, metadata) and contains only 12 rows.
This is how I generated the Parquet file:
```
import pandas as pd

df = pd.read_csv('data/data.csv')
df.to_parquet("data/dataset.parquet", row_group_size=100, engine="pyarrow", index=False)
``` | closed | 2023-05-29T09:05:55Z | 2023-06-07T18:15:31Z | https://github.com/LAION-AI/Open-Assistant/issues/3251 | [] | ban1989ban | 2 |
pandas-dev/pandas | pandas | 60,923 | BUG: `series.reindex(mi)` behaves different for series with Index and MultiIndex | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
* Create a series with `Index` and a MultiIndex to use for reindexing later
```python
>>> series = pd.Series(
... [26.7300, 24.2550],
... index=pd.Index([81, 82], name='a')
... )
>>> series
a
81 26.730
82 24.255
dtype: float64
>>> series.index
Index([81, 82], dtype='int64', name='a')
>>> other_index = pd.MultiIndex(
... levels=[
... pd.Index([81, 82], name='a'),
... pd.Index([np.nan], name='b'),
... pd.Index([
... '2018-06-01', '2018-07-01'
... ], name='c')
... ],
... codes=[
... [0, 0, 1, 1],
... [0, 0, 0, 0],
... [0, 1, 0, 1]
... ],
... names=['a', 'b', 'c']
... )
>>> other_index
MultiIndex([(81, nan, '2018-06-01'),
(81, nan, '2018-07-01'),
(82, nan, '2018-06-01'),
(82, nan, '2018-07-01')],
names=['a', 'b', 'c'])
```
* `reindex` to `MultiIndex` (`other_index`) which expands `series.index` by two more levels.
* unfortunately the `reindex` sets all values of the original series to NaN which can be fixed by turning `series.index` into a 1-level `MultiIndex` first
```python
>>> series.reindex(other_index) # this removes all values of the series
a b c
81 NaN 2018-06-01 NaN
2018-07-01 NaN
82 NaN 2018-06-01 NaN
2018-07-01 NaN
dtype: float64
```
* apply `to_mi(...)` to turn the `series.index` into a 1-level `MultiIndex`
* rerun `reindex` on the new `series` with `MultiIndex` and the values are maintained/filled as expected
```python
>>> def to_mi(series):
... if isinstance(series.index, pd.MultiIndex):
... series_mi = series.index
... else:
... level_names = [series.index.name]
... level_values = [series.index]
... series_mi = pd.MultiIndex.from_arrays(level_values, names=level_names)
... series_with_mi = pd.Series(series.values, index=series_mi, name=series.name)
... return series_with_mi
...
>>> series_mi = to_mi(series)
>>> series_mi
a
81 26.730
82 24.255
dtype: float64
>>> series_mi.index
MultiIndex([(81,),
(82,)],
names=['a'])
>>> series_mi.reindex(other_index)
a b c
81 NaN 2018-06-01 26.730
2018-07-01 26.730
82 NaN 2018-06-01 24.255
2018-07-01 24.255
dtype: float64
```
### Issue Description
In the above case, `series.reindex(multi_index)` will turn the series values to NaN when the series has a single `Index`. However when the series index is converted to a 1-level `MultiIndex` prior to the `reindex`, the values are maintained and filled as expected.
In my opinion it shouldn't matter if a 1-level `MultiIndex` or an `Index` is used for a `reindex` - the outcomes should be the same.
As a further discussion point (here or elsewhere), this issue (and others) also begs the question why a distinction between `Index` and `MultiIndex` is necessary (I suspect there are historic reasons). I would imagine that many issues (and code) would go away if `MultiIndex` was used exclusively (even for 1-dimensional indices).
### Expected Behavior
The missing levels in `series_mi` (compared to `other_index`) are added and the values of the partial index from the original series are used to fill the places of the added indices.
```
>>> series_mi.reindex(other_index)
a b c
81 NaN 2018-06-01 26.730 # from index <81> of `series` (`series_mi`)
2018-07-01 26.730 # from index <81> of `series` (`series_mi`)
82 NaN 2018-06-01 24.255 # from index <82> of `series` (`series_mi`)
2018-07-01 24.255 # from index <82> of `series` (`series_mi`)
dtype: float64
```
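For what it's worth, there is a workaround that may avoid the manual `to_mi` conversion entirely: `reindex` accepts a `level` argument that broadcasts a flat index across one level of the target `MultiIndex`. A sketch, assuming current `reindex(..., level=...)` semantics and omitting the all-NaN `b` level from the report for brevity:

```python
import pandas as pd

series = pd.Series([26.73, 24.255], index=pd.Index([81, 82], name='a'))
target = pd.MultiIndex.from_product(
    [[81, 82], ['2018-06-01', '2018-07-01']], names=['a', 'c']
)

# Broadcast the flat 'a' index across level 'a' of the target MultiIndex
broadcast = series.reindex(target, level='a')
```
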
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 3979e954a339db9fc5e99b72ccb5ceda081c33e5
python : 3.11.11
python-bits : 64
OS : Linux
OS-release : 6.12.11-200.fc41.x86_64
Version : #1 SMP PREEMPT_DYNAMIC Fri Jan 24 04:59:58 UTC 2025
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_AU.UTF-8
LOCALE : en_AU.UTF-8
pandas : 3.0.0.dev0+1909.g3979e954a3.dirty
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 24.2
Cython : 3.0.11
sphinx : 8.1.3
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.2.0
html5lib : 1.1
hypothesis : 6.125.2
gcsfs : 2025.2.0
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 19.0.0
pyreadstat : 1.2.8
pytest : 8.3.4
python-calamine : None
pytz : 2025.1
pyxlsb : 1.0.10
s3fs : 2025.2.0
scipy : 1.15.1
sqlalchemy : 2.0.38
tables : 3.10.2
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
| open | 2025-02-13T01:25:54Z | 2025-03-06T21:59:54Z | https://github.com/pandas-dev/pandas/issues/60923 | [
"Bug",
"MultiIndex",
"Index"
] | ssche | 6 |
pydantic/pydantic | pydantic | 11,055 | field annotation not respected by mypy when updated by a decorator | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
This decorator successfully makes the field optional (there are no runtime errors from this code), but mypy complains that the field is still required. I'm unclear whether the issue lies with mypy, pydantic, the pydantic mypy plugin, Python, or this code.
Thanks!
### Example Code
```python
from typing import Type, Callable, Optional
from pydantic import BaseModel
def make_optional() -> Callable:
"""Return a decorator to make all model fields optional."""
def decorator(cls: Type[BaseModel]) -> Type[BaseModel]:
for field in cls.model_fields.values():
if not field.is_required():
continue
field.default = None
field.annotation = Optional[field.annotation] # type: ignore
cls.model_rebuild(force=True)
return cls
return decorator
@make_optional()
class MyModel(BaseModel):
my_field: int
my_partial = MyModel() # mypy throws: Missing named argument "my_field" for "MyModel"
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: /home/sbarrett2/tmp/.venv/lib/python3.11/site-packages/pydantic
python version: 3.11.7 (main, Jan 22 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)]
platform: Linux-5.14.0-427.el9.x86_64-x86_64-with-glibc2.34
related packages: typing_extensions-4.12.2 mypy-1.13.0 typing_extensions-4.12.2 mypy-1.13.0
commit: unknown
```
| closed | 2024-12-05T20:12:20Z | 2024-12-06T12:36:52Z | https://github.com/pydantic/pydantic/issues/11055 | [
"bug V2",
"pending"
] | BarrettStephen | 4 |
fastapi/sqlmodel | sqlalchemy | 495 | How to reference a foreign key in a table that is in a different (postgres) schema | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
#
# Apologies, this is not a fully reproducible example, as
# you will need to create a postgres database with schemas
# to reproduce fully.
#
import os

import sqlmodel
from sqlmodel import Session, create_engine, select

engine = create_engine(os.getenv("PGURL"), echo=True)
# Define a table in the postgres schema: schema_1
class Model1(sqlmodel.SQLModel, table=True):
metadata = sqlmodel.MetaData(schema="schema_1")
id: int = sqlmodel.Field(default=None, primary_key=True)
# Define a table in the postgres schema: schema_2
class Model2(sqlmodel.SQLModel, table=True):
metadata = sqlmodel.MetaData(schema="schema_2")
id: int = sqlmodel.Field(default=None, primary_key=True)
model1_id: int = sqlmodel.Field(default=None, foreign_key="schema_1.model1.id")
with Session(engine) as session:
# Add a new instance of model1
model1 = Model1()
session.add(model1)
session.commit()
session.refresh(model1)
# Try to add an instance of model2 referencing model1.id
model2 = Model2(model1_id=model1.id)
session.add(model2)
session.commit()
session.refresh(model2)
# Print the annotation_set table
statement = select(Model2)
model2s = session.exec(statement).all()
print(model2s)
```
### Description
* I'm defining models for a postgres database that someone else has created, using postgres schemas to organise the tables.
* I create a table `model1` and table `model2` which has a foreign key `model1.id`
* When I run the code above code I get the following error:
```sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'model2.model1_id' could not find table 'schema_1.model1' with which to generate a foreign key to target column 'id'```
* I realise that `sqlmodel` is reading `schema_1.model1` as the full table name, but I can't work out how to point it at `schema_1`. Is this possible?
Many thanks :pray:
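For reference, at the plain-SQLAlchemy layer that SQLModel builds on, a schema-qualified foreign-key target does resolve as long as both tables are registered on a single shared `MetaData`. Below is a minimal sketch (plain SQLAlchemy rather than SQLModel, so only an approximation of the setup above; the table and schema names just mirror the example):

```python
from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table

# One shared MetaData; each Table carries its own postgres schema.
metadata = MetaData()

model1 = Table(
    "model1", metadata,
    Column("id", Integer, primary_key=True),
    schema="schema_1",
)

model2 = Table(
    "model2", metadata,
    Column("id", Integer, primary_key=True),
    # Fully qualified "schema.table.column" target, as in the SQLModel code.
    Column("model1_id", ForeignKey("schema_1.model1.id")),
    schema="schema_2",
)

# The foreign key resolves to the column in the other schema.
fk = next(iter(model2.foreign_keys))
print(fk.column.table.fullname)  # schema_1.model1
```

So the `"schema_1.model1.id"` string itself looks right; the difference in behaviour may come from how the per-model `MetaData` objects are handled.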
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
Python 3.8.10
### Additional Context
_No response_ | closed | 2022-11-11T11:58:12Z | 2024-04-15T10:15:36Z | https://github.com/fastapi/sqlmodel/issues/495 | [
"question"
] | ivyleavedtoadflax | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,701 | Could not connect to deb.globaleaks.org | ### What version of GlobaLeaks are you using?
4.13.13
### What browser(s) are you seeing the problem on?
_No response_
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
We keep getting this error during the installation process
Failed to fetch http://deb.globaleaks.org/jammy/globaleaks_4.13.13_all.deb Could not connect to deb.globaleaks.org:80 (95.174.23.119), connection timed out
The error occurs on an Ubuntu 22 server.
### Proposed solution
_No response_ | closed | 2023-10-13T09:25:03Z | 2023-10-13T10:21:39Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3701 | [
"T: Bug",
"Triage"
] | alexspanos-wide | 2 |
dask/dask | numpy | 11,494 | `align_partitions` creates mismatched partitions. | **Describe the issue**:
The `divisions` attribute doesn't match between data frames even after applying `align_partitions` on them.
**Minimal Complete Verifiable Example**:
```python
import numpy as np
from distributed import Client, LocalCluster
from dask import dataframe as dd
from dask.dataframe.multi import align_partitions
def make_ltr(n_samples: int, n_features: int, max_rel: int):
rng = np.random.default_rng(1994)
X = rng.normal(0, 1.0, size=n_samples * n_features).reshape(n_samples, n_features)
y = np.sum(X, axis=1)
y -= y.min()
y = np.round(y / y.max() * max_rel).astype(np.int32)
return X, y
def main(client: Client) -> None:
X, y = make_ltr(n_samples=4096 * 4, n_features=16, max_rel=8)
dfx: dd.DataFrame = dd.from_array(X).repartition(npartitions=16)
dfy: dd.DataFrame = dd.from_dict({"y": y}, npartitions=16)
[dfx, dfy], _, _ = align_partitions(dfx, dfy)
print("dfx:", dfx.divisions, "\ndfy:", dfy.divisions)
if __name__ == "__main__":
with LocalCluster(n_workers=2) as cluster:
with Client(cluster) as client:
main(client)
```
For this particular example, there's an off-by-1 error in the resulting divisions.
```
dfx: (0, 1023, 2047, 3071, 4095, 5119, 6143, 7167, 8191, 9215, 10239, 11263, 12287, 13311, 14335, 15359, 16383)
dfy: (0, 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16383)
```
We need multiple dfs to have the same partition scheme.
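A stdlib-only check over the two division tuples printed above (no Dask needed) makes the off-by-one explicit: every interior boundary disagrees, and only the endpoints match.

```python
def division_mismatches(a, b):
    """Indices where two Dask-style division tuples disagree."""
    assert len(a) == len(b)
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

dfx_div = (0, 1023, 2047, 3071, 4095, 5119, 6143, 7167, 8191,
           9215, 10239, 11263, 12287, 13311, 14335, 15359, 16383)
dfy_div = (0, 1024, 2048, 3072, 4096, 5120, 6144, 7168, 8192,
           9216, 10240, 11264, 12288, 13312, 14336, 15360, 16383)

print(division_mismatches(dfx_div, dfy_div))  # indices 1 through 15
```

As a possible stopgap (untested here), explicitly repartitioning one frame onto the other's divisions, e.g. `dfy.repartition(divisions=dfx.divisions)`, may enforce a shared scheme, though the inconsistency above looks like it belongs to `align_partitions` itself.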
**Anything else we need to know?**:
**Environment**:
- Dask version: dask, version 2024.10.0
- Python version: Python 3.11.9
- Operating System: Ubuntu 24.04.1
- Install method (conda, pip, source): conda
| closed | 2024-11-05T12:26:04Z | 2024-11-05T13:53:43Z | https://github.com/dask/dask/issues/11494 | [
"needs triage"
] | trivialfis | 6 |
gradio-app/gradio | machine-learning | 10,256 | Freeze Panel | Hi everyone, how are you? I would like to know if there is any way to create a frozen panel with a Gradio component, similar to freezing panes in Excel, so that I can pin a section of components at the top and, as I scroll the main vertical scroll bar, only the components below it scroll.
| closed | 2024-12-26T21:37:50Z | 2024-12-27T16:24:45Z | https://github.com/gradio-app/gradio/issues/10256 | [] | elismasilva | 2 |
recommenders-team/recommenders | machine-learning | 1,256 | [FEATURE] remove TF warning messages in all TF notebooks | ### Description
```
tf.get_logger().setLevel('ERROR') # only show error messages
```
### Expected behavior with the suggested feature
### Other Comments
| closed | 2020-12-04T12:45:21Z | 2021-01-18T16:42:36Z | https://github.com/recommenders-team/recommenders/issues/1256 | [
"enhancement"
] | miguelgfierro | 0 |
TencentARC/GFPGAN | deep-learning | 505 | Ytrded |
| open | 2024-02-03T15:05:42Z | 2024-02-03T15:05:42Z | https://github.com/TencentARC/GFPGAN/issues/505 | [] | habiom | 0 |
microsoft/nni | machine-learning | 5,274 | MNIST Kubeflow Example Starts the Worker Pod then Set Status to Error | **Describe the issue**: After executing `nnictl create --config nni/examples/trials/mnist-tfv1/config_kubeflow.yml`, the experiment starts successfully and the TFJob is submitted to Kubeflow. Kubeflow then starts a worker pod that pulls the image msranni/nni:latest, runs only briefly, and then fails. After executing `kubectl describe pod nniexp` this is the output:
```
Name: nniexpcxi61rqmenvlinyj-worker-0
Namespace: default
Priority: 0
Service Account: default
Node: kind-worker/172.18.0.3
Start Time: Thu, 08 Dec 2022 14:03:29 +0100
Labels: group-name=kubeflow.org
job-name=nniexpcxi61rqmenvlinyj
replica-index=0
replica-type=worker
training.kubeflow.org/job-name=nniexpcxi61rqmenvlinyj
training.kubeflow.org/job-role=master
training.kubeflow.org/operator-name=tfjob-controller
training.kubeflow.org/replica-index=0
training.kubeflow.org/replica-type=worker
Annotations: <none>
Status: Failed
IP: 10.244.2.68
IPs:
IP: 10.244.2.68
Controlled By: TFJob/nniexpcxi61rqmenvlinyj
Containers:
tensorflow:
Container ID: containerd://698b72e5a0279c3e0bd1ae3291e9bb61ae1b436e0b402689d140d6e164f1ed4c
Image: msranni/nni:latest
Image ID: docker.io/msranni/nni@sha256:7047d1245d307bc7bb1b76e66889bff6fdcea1bb2728200e06dd845ef64fe2a9
Port: 2222/TCP
Host Port: 0/TCP
Args:
sh
/tmp/mount/nni/cxi61rqm/LinYj_run.sh
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 08 Dec 2022 14:04:02 +0100
Finished: Thu, 08 Dec 2022 14:04:05 +0100
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 8Gi
Requests:
cpu: 1
memory: 8Gi
Environment: <none>
Mounts:
/tmp/mount from nni-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d2dgc (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nni-vol:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.0.2.15
Path: /nfs/share
ReadOnly: false
kube-api-access-d2dgc:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned default/nniexpcxi61rqmenvlinyj-worker-0 to kind-worker
Normal Pulling 54s kubelet Pulling image "msranni/nni:latest"
Normal Pulled 23s kubelet Successfully pulled image "msranni/nni:latest" in 31.242920489s
Normal Created 22s kubelet Created container tensorflow
Normal Started 22s kubelet Started container tensorflow
```
**Environment**:
- NNI version: v2.10
- Training service (local|remote|pai|aml|etc): kubeflow
- Client OS: Ubuntu
- Server OS (for remote mode only):
- Python version: 3.10.8
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: Yes but by specifying the image in the config_kubeflow only
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | closed | 2022-12-08T13:08:26Z | 2023-02-20T08:42:11Z | https://github.com/microsoft/nni/issues/5274 | [] | MHGanainy | 4 |
piskvorky/gensim | data-science | 2,585 | Having issue with encoding. | #### Problem description
I am trying to process a large corpus, but `preprocess_string()` raises the error shown below.
```
Traceback (most recent call last):
File "D:/Projects/docs_handler/data_preprocessing.py", line 60, in <module>
for temp in batch(iterator,1000):
File "D:/Projects/docs_handler/data_preprocessing.py", line 30, in batch
for item in iterable:
File "D:/Projects/docs_handler/data_preprocessing.py", line 23, in iter_tokenized_documents
document = preprocess_string(open(os.path.join(root, file)).read().strip(),filters=CUSTOM_FILTERS)
File "C:\Users\koradg\AppData\Local\Programs\Python\Python36\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 16144: character maps to <undefined>
```
#### Steps/code/corpus to reproduce
```
import os

from gensim.parsing.preprocessing import preprocess_string

# CUSTOM_FILTERS is defined earlier in my script
def iter_tokenized_documents(input_directory):
    """Iterate over all documents, yielding a document (=list of utf8 tokens) at a time."""
    for root, dirs, files in os.walk(input_directory):
        for file in filter(lambda file: file.endswith('.txt'), files):
            # open() falls back to the platform default codec (cp1252 on Windows)
            document = preprocess_string(open(os.path.join(root, file)).read().strip(), filters=CUSTOM_FILTERS)
            if len(document):
                yield document
```
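The traceback shows `cp1252.py`, i.e. `open()` fell back to the Windows default codec while the corpus is presumably UTF-8. A sketch of the usual fix (an assumption on my part, since I cannot verify the corpus encoding): pass an explicit encoding, plus an error policy for genuinely malformed bytes.

```python
from pathlib import Path

def read_document(path):
    # Decode as UTF-8 instead of the Windows default (cp1252);
    # errors="replace" substitutes U+FFFD for stray bytes rather than raising.
    return Path(path).read_text(encoding="utf-8", errors="replace").strip()
```

`errors="ignore"` is an alternative if the replacement characters would upset the tokenizer.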
#### Versions
Windows-10-10.0.17763-SP0
Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)]
NumPy 1.17.0
SciPy 1.3.0
gensim 3.8.0
FAST_VERSION 0
| closed | 2019-08-28T08:23:02Z | 2019-08-28T13:57:50Z | https://github.com/piskvorky/gensim/issues/2585 | [] | gauravkoradiya | 2 |
lucidrains/vit-pytorch | computer-vision | 18 | Why only use the first patch? Thanks | I don't understand line 124 of vit_pytorch.py:
`x = self.to_cls_token(x[:, 0])`
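For context (hedged; this is the usual ViT design, and the method name `to_cls_token` in the quoted line hints at it): a learnable class token is prepended to the patch embeddings, so position 0 along the token dimension is that summary token, not the first image patch. A shape-only sketch with NumPy standing in for the tensor:

```python
import numpy as np

batch, n_patches, dim = 2, 3, 4

cls_token = np.zeros((batch, 1, dim))       # stand-in for the learnable class token
patches = np.ones((batch, n_patches, dim))  # stand-in for the patch embeddings

x = np.concatenate([cls_token, patches], axis=1)  # [batch, 1 + n_patches, dim]
cls_out = x[:, 0]  # [batch, dim]: selects the class token, not an image patch

print(x.shape, cls_out.shape)  # (2, 4, 4) (2, 4)
```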
If the first dimension of `x` is the batch, then index 0 on the second dimension should select a patch, since the shape of `x` should be `[batch, patch, feature]`. Does that mean only the first patch is used? Could anybody help me with this? Thanks a lot. | closed | 2020-10-20T19:26:01Z | 2020-10-21T19:54:13Z | https://github.com/lucidrains/vit-pytorch/issues/18 | [] | junyongyou | 3 |
freqtrade/freqtrade | python | 10,717 | FreqUI backtesting visualize history error no data found |
## Describe your environment
* Operating system: MacOS
* Python Version: Python 3.12.6 (`python -V`)
* CCXT version: ccxt==4.4.6 (`pip freeze | grep ccxt`)
* Freqtrade Version: freqtrade docker-2024.9-dev-91d9c9b4 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the problem:
I installed FreqUI using Docker. I already ran backtesting and plot-dataframe successfully. Now I want to load the backtesting result in FreqUI. I am able to see the data on "Analyze Result" and "Analyze Summary", but "Visualize Result" shows no history, as in the image below. I already checked my data folder and the mounted folder; there is no issue there, but I still get this error.
### Steps to reproduce:
1. Run download data `docker compose run --rm freqtrade download-data --timerange 20180101-20210301 -c ./user_data/config.json --timeframe 1h --trading-mode spot -p DOGE/USDT --exchange binance`
2. Run backtesting `docker compose run --rm freqtrade backtesting --export trades -s EMAPriceCrossoverWithThreshold --timeframe 1h --timerange=20180301-20200301 -p DOGE/USDT`
3. Run frequi
4. Access "Visualize result" tab
### Observed Results:
* What happened?
* Visualize Result page on FreqUI does not show data.
* What did you expect to happen?
* Visualize Result shows data.
### Relevant code exceptions or logs
```
freqtrade | 2024-09-27 09:55:49,286 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy EMAPriceCrossoverWithThreshold from '/freqtrade/user_data/strategies/EMAPriceCrossoverWithThreshold.py'...
freqtrade | 2024-09-27 09:55:49,287 - freqtrade.strategy.hyper - INFO - Found no parameter file.
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 5, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 1000.
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {}
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 1h
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.15
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: True
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
freqtrade | 2024-09-27 09:55:49,288 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60, 'emergency_exit': 'market'}
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using protections: []
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 5, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
freqtrade | 2024-09-27 09:55:49,289 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 1000
freqtrade | 2024-09-27 09:55:49,297 - freqtrade.data.history.datahandlers.idatahandler - WARNING - No history for DOGE/USDT, spot, 1h found. Use `freqtrade download-data` to download the data
freqtrade | 2024-09-27 09:55:59,253 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy EMAPriceCrossoverWithThreshold from '/freqtrade/user_data/strategies/EMAPriceCrossoverWithThreshold.py'...
freqtrade | 2024-09-27 09:55:59,254 - freqtrade.strategy.hyper - INFO - Found no parameter file.
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: USDT.
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: unlimited.
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'unfilledtimeout' with value in config file: {'entry': 5, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}.
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'max_open_trades' with value in config file: 1000.
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {}
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using timeframe: 1h
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.15
freqtrade | 2024-09-27 09:55:59,255 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: True
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_custom_stoploss: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: True
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'entry': 'limit', 'exit': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60, 'emergency_exit': 'market'}
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'entry': 'GTC', 'exit': 'GTC'}
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: USDT
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: unlimited
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using protections: []
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using unfilledtimeout: {'entry': 5, 'exit': 10, 'exit_timeout_count': 0, 'unit': 'minutes'}
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_exit_signal: True
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_only: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_entry_signal: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using exit_profit_offset: 0.0
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using disable_dataframe_checks: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_buying_expired_candle_after: 0
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using position_adjustment_enable: False
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_entry_position_adjustment: -1
freqtrade | 2024-09-27 09:55:59,256 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using max_open_trades: 1000
freqtrade | 2024-09-27 09:55:59,264 - freqtrade.data.history.datahandlers.idatahandler - WARNING - No history for DOGE/USDT, spot, 1h found. Use `freqtrade download-data` to download the data
```
This is my docker-compose file:
```
---
services:
freqtrade:
# image: freqtradeorg/freqtrade:stable_freqaitorch
# image: freqai
# image: freqtradeorg/freqtrade:develop
# Use plotting image
image: freqtradeorg/freqtrade:develop_plot
# # Enable GPU Image and GPU Resources (only relevant for freqAI)
# # Make sure to uncomment the whole deploy section
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: 1
# capabilities: [gpu]
# Build step - only needed when additional dependencies are needed
# build:
# context: .
# dockerfile: "./docker/Dockerfile.custom"
restart: unless-stopped
container_name: freqtrade
volumes:
- "./user_data:/freqtrade/user_data"
- "./user_data/data:/freqtrade/user_data/data"
- "./torch/BasePyTorchModel.py:/freqtrade/freqai/base_models/BasePyTorchModel.py"
- "./torch/PyTorchLSTMModel.py:/freqai/torch/PyTorchLSTMModel.py"
- "./torch/PyTorchModelTrainer.py:/freqai/torch/PyTorchModelTrainer.py"
- "./torch/PyTorchLSTMRegressor.py:/freqai/user_data/freqaimodels/PyTorchLSTMRegressor.py"
# Expose api on port 8080 (localhost only)
# Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
# for more information.
ports:
- "127.0.0.1:8080:8080"
# Default command used when running `docker compose up`
# trade
# --logfile /freqtrade/user_data/logs/freqtrade.log
# --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
# --config /freqtrade/user_data/config.json
# --strategy Strategy002
command: >
webserver
--config /freqtrade/user_data/config.json
--datadir /freqtrade/user_data/
--userdir /freqtrade/user_data/
```

| closed | 2024-09-27T09:59:33Z | 2024-09-27T12:13:35Z | https://github.com/freqtrade/freqtrade/issues/10717 | [
"Question"
] | jtong99 | 2 |
areed1192/interactive-broker-python-api | rest-api | 20 | issue "no module name IBW" error | <img width="259" alt="Screen Shot 2020-12-20 at 9 53 17 AM" src="https://user-images.githubusercontent.com/76019147/102720452-4c2f3300-42a9-11eb-9dd8-62d9fd1b951f.png">
I'm trying to figure out the issue but I'm having a hard time. I have an IB account, I created the config, and I installed Java 8 with brew (running on a Mac).
any help? | closed | 2020-12-20T17:55:20Z | 2021-03-04T02:29:14Z | https://github.com/areed1192/interactive-broker-python-api/issues/20 | [] | gra88hopper | 0 |
biolab/orange3 | scikit-learn | 6,879 | Python Script: example in widget help page causes warning | **What's wrong?**
The suggested code for the Zoo example in the Python Script help page causes a warning: "Direct calls to Table's constructor are deprecated and will be removed. Replace this call with Table.from_table":
**What's the solution?**
Rewrite the example code so that it conforms to tha latest version of the Orange library
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Orange version:3.37.0
| open | 2024-08-21T13:27:31Z | 2024-09-06T07:18:23Z | https://github.com/biolab/orange3/issues/6879 | [
"bug report"
] | wvdvegte | 0 |
gevent/gevent | asyncio | 1,794 | How to use asgiref sync_to_async using a patched version of gevent.threadpool.ThreadPoolExecutor? | * gevent version: Please note how you installed it: From source, from
most recent on pypi
* Python version: Please be as specific as possible. For example,
"cPython 3.8.10 downloaded from python.org"
* Operating System: Please be as specific as possible. For example,
"ubuntu 20.04"
### Description:
https://github.com/django/asgiref/issues/264
```
sync_to_async(
Books.objects.all,
thread_sensitive=False,
executor=gevent.threadpool.ThreadPoolExecutor(max_workers=1)
)
```
```python-traceback
web_1 | DEBUG:root:concurrent.futures.Future is expected, got <gevent.threadpool._FutureProxy object at 0x7fcabce2e9d0>
```
### What I've run:
```python
try:
    import concurrent.futures

    from asgiref.sync import sync_to_async
    from gevent.threadpool import ThreadPoolExecutor as GThreadPoolExecutor
    from django.conf import settings
if settings.GEVENT_DJANGO_ASYNC_ORM:
from gevent import monkey
monkey.patch_all()
def monkey_patch_the_monkey_patchers(ex):
from .patch_gevent import _FutureProxy
def submit(ex, fn, *args, **kwargs): # pylint:disable=arguments-differ
print(fn, *args, **kwargs)
with ex._shutdown_lock: # pylint:disable=not-context-manager
if ex._shutdown:
raise RuntimeError('cannot schedule new futures after shutdown')
future = ex._threadpool.spawn(fn, *args, **kwargs)
proxy_future = _FutureProxy(future)
# just fake it, maybe?
proxy_future.__class__ = concurrent.futures.Future
return proxy_future
ex.submit = submit
return ex
MonkeyPoolExecutor = monkey_patch_the_monkey_patchers(GThreadPoolExecutor)
conf = {"thread_sensitive": False, "executor": MonkeyPoolExecutor(max_workers=1)}
executor_ = MonkeyPoolExecutor
except Exception as e:
print('uhoh', e)
pass
all_the_books = await sync_to_async(
Books.objects.all,
**conf
)()
```
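The debug line above is asgiref rejecting gevent's `_FutureProxy` because it type-checks for a real `concurrent.futures.Future`. Rather than reassigning `__class__` as in the attempt above, one hedged alternative is to bridge the outcome into a genuine stdlib Future; this assumes the proxy exposes the standard `add_done_callback()`/`result()` API, which gevent's futures aim to mimic:

```python
import concurrent.futures


def as_std_future(gevent_future):
    """Copy the eventual outcome of a Future-like object into a real
    concurrent.futures.Future (which satisfies asgiref's isinstance check)."""
    std = concurrent.futures.Future()

    def _copy(src):
        try:
            std.set_result(src.result())
        except BaseException as exc:  # propagate the original failure
            std.set_exception(exc)

    gevent_future.add_done_callback(_copy)
    return std


# Stand-in demo using a stdlib Future as the "gevent" side:
src = concurrent.futures.Future()
bridged = as_std_future(src)
src.set_result(42)
print(bridged.result())  # 42
```

A patched `submit()` could then simply `return as_std_future(ex._threadpool.spawn(fn, *args, **kwargs))` instead of faking the class.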
| closed | 2021-05-29T02:27:49Z | 2021-05-31T10:43:29Z | https://github.com/gevent/gevent/issues/1794 | [] | allen-munsch | 2 |
lanpa/tensorboardX | numpy | 594 | Source archives contain byte-compiled .pyc files for Python 2.7 | **Describe the bug**
The source archives of the package as hosted on PyPI include 20 .pyc files byte compiled for Python 2.7 (`0x03f3` magic value):
```
$ file tensorboardX/proto/__init__.pyc
tensorboardX/proto/__init__.pyc: python 2.7 byte-compiled
```
**Expected behavior**
The source archives for the package would only contain uncompiled code, and defer to the user to byte compile on the used version of Python.
**Additional context**
This is reproducible with the current [tensorboardX-2.0.tar.gz](https://files.pythonhosted.org/packages/35/1a/ad6423850fffd22a30fed58a6215858c0413298efb8f5246562b8efffb49/tensorboardX-2.0.tar.gz) archive, and earlier builds. The [staging steps](https://github.com/lanpa/tensorboardX/blob/master/setup.py#L90-L99) listed in setup.py don't seem to produce these for me locally, so it may be something specific to the environment which is generating these archives. Running `uncompyle6` on one of the files provided these additional details:
```
# Python bytecode 2.7 (62211)
# Decompiled from: Python 3.7.7 (default, Mar 10 2020, 15:43:03)
# [Clang 11.0.0 (clang-1100.0.33.17)]
# Embedded file name: /Users/dexter/git/tensorboardX/tensorboardX/proto/__init__.py
# Compiled at: 2019-08-01 11:57:19
```
@lanpa it looks like that may be your host machine, based on other commits.
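For anyone else auditing an sdist, here is a small stdlib sketch that lists `.pyc` members in a tarball together with their bytecode magic (read big-endian so a CPython 2.7 file shows up as the `0x03f3` mentioned above); the archive path in the comment is only a placeholder:

```python
import struct
import tarfile

def pyc_magics(archive_path):
    """Map each .pyc member of a source tarball to its bytecode magic."""
    found = {}
    with tarfile.open(archive_path) as tar:
        for member in tar:
            if member.isfile() and member.name.endswith(".pyc"):
                head = tar.extractfile(member).read(2)
                # Big-endian view: CPython 2.7 .pyc files start 0x03 0xf3.
                found[member.name] = struct.unpack(">H", head)[0]
    return found

# e.g. pyc_magics("tensorboardX-2.0.tar.gz")
```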
Let me know if there's any other details I can track down to help. | closed | 2020-06-30T22:07:44Z | 2020-07-03T12:55:31Z | https://github.com/lanpa/tensorboardX/issues/594 | [] | scdub | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,068 | choppy stretched out audio | My spectrogram looks kinda weird and the audio sounds like heavily synthesised choppy vocals, did I install anything wrong?
<img width="664" alt="Screenshot 2022-05-23 at 07 08 02" src="https://user-images.githubusercontent.com/71672036/169755394-e387d753-f4ce-46a3-8553-bafcec526580.png">
| open | 2022-05-23T06:17:29Z | 2022-05-25T20:16:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1068 | [] | zemigm | 1 |
quokkaproject/quokka | flask | 637 | admin: actions | Enable more actions
https://github.com/rochacbruno/quokka_ng/issues/34 | open | 2018-02-07T01:56:32Z | 2018-02-07T01:56:33Z | https://github.com/quokkaproject/quokka/issues/637 | [
"1.0.0",
"hacktoberfest"
] | rochacbruno | 0 |
vitalik/django-ninja | django | 355 | [BUG] Reverse url names are not auto generated | **Describe the bug**
Reverse resolution of urls described on https://django-ninja.rest-framework.com/tutorial/urls/ does not work for me. By inspecting the generated resolver, I discovered, that views that do not explicitly specify `url_name` do not have a name generated at all. View function name is not used.
**Versions (please complete the following information):**
- Python version: 3.8
- Django version: 3.2
- Django-Ninja version: 0.17.0
| closed | 2022-02-09T14:01:11Z | 2022-06-26T16:29:26Z | https://github.com/vitalik/django-ninja/issues/355 | [] | stinovlas | 2 |
yeongpin/cursor-free-vip | automation | 189 | BUG | ℹ️ Checking Config File...
❌ Config File Not Found: C:\Users\Arda Yaşdiken (2)\AppData\Roaming\Cursor\User\globalStorage\storage.json
character encoding error | closed | 2025-03-11T01:51:55Z | 2025-03-11T08:52:32Z | https://github.com/yeongpin/cursor-free-vip/issues/189 | [
"bug",
"Completed"
] | kkatree | 1 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 501 | result is empty for any url from domain http://www.mckinsey.com | **Describe the bug**
All URLs from the domain are returning an empty result.
**To Reproduce**
Domain: http://www.mckinsey.com
URLs tested and not working:
https://www.mckinsey.com/features/mckinsey-center-for-future-mobility/our-insights/autonomous-vehicles-moving-forward-perspectives-from-industry-leaders
https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/autonomous-drivings-future-convenient-and-connected
Prompt: Summarize and find the main topics
My code:
```python
# Imports assumed from scrapegraph-ai's documented entry points
from scrapegraphai.graphs import SmartScraperGraph
from scrapegraphai.utils import prettify_exec_info

# Config the graph
graph_config = {
    "llm": {
        "api_key": GEMINI_API_KEY,
        "model": "gemini-pro",
    },
    "verbose": True,
    "headless": True,
    "max_results": True
}

# Run SmartScraperGraph instance
my_prompt = f"Summarize and find the main topics"
smart_scraper_graph = SmartScraperGraph(
    prompt=my_prompt,
    # also accepts a string with the already downloaded HTML code
    source="https://www.mckinsey.com/features/mckinsey-center-for-future-mobility/our-insights/autonomous-vehicles-moving-forward-perspectives-from-industry-leaders",
    config=graph_config
)

# Run the graph
result = smart_scraper_graph.run()
print(result)

# Get graph execution info
graph_exec_info = smart_scraper_graph.get_execution_info()
print(prettify_exec_info(graph_exec_info))
```
Steps to reproduce the behavior:
I got this from McKinsey URLs
``` python
--- Executing Fetch Node ---
--- (Fetching HTML from: https://www.mckinsey.com/features/mckinsey-center-for-future-mobility/our-insights/autonomous-vehicles-moving-forward-perspectives-from-industry-leaders) ---
--- Executing Parse Node ---
--- Executing GenerateAnswer Node ---
Processing chunks: 0%| | 0/1 [00:02<?, ?it/s]{'answer': 'I apologize, but I am unable to summarize and find the main topics of the provided content as it is empty.'}
node_name total_tokens prompt_tokens completion_tokens \
0 Fetch 0 0 0
1 Parse 0 0 0
2 GenerateAnswer 269 238 31
3 TOTAL RESULT 269 238 31
successful_requests total_cost_USD exec_time
0 0 0.0 1.640176
1 0 0.0 0.002383
2 1 0.0 2.054185
3 1 0.0 3.696744
```
**Expected behavior**
``` python
From the URL: "https://www.precedenceresearch.com/autonomous-vehicle-market"
{'Autonomous Vehicle Market Size, Share, and Trends 2024 to 2034': {'Main Topics': ['Autonomous Vehicle Market Size and Growth 2024 to 2033', 'Autonomous Vehicle Market Key Takeaways', 'Autonomous Vehicle Market Growth Factors', 'Report Scope of the Autonomous Vehicle Market', 'Autonomous Vehicle Market Drivers', 'Autonomous Vehicle Market Opportunities', 'Autonomous Vehicle Market Restraint', 'Autonomous Vehicle Market Challenge', 'Regional Insights', 'Application Insights', 'Vehicle Type Insights', 'Level of Autonomy Insights', 'Application Insights', 'Autonomous Vehicle Market Companies', 'Segments Covered in the Report', 'Frequently Asked Questions'], 'Summary': 'The global autonomous vehicle market size was estimated USD 158.31 billion in 2023 and is projected to hit around USD 2,752.80 billion by 2033, poised to grow at a compound annual growth rate (CAGR) of 33% from 2024 to 2033.\n\nU.S. autonomous vehicle market was valued at USD 59.92 billion in 2023.\n\nThe Asia-Pacific region is expected to hit at a CAGR of 35% from 2024 to 2033.\n\nBy application, the transportation segment accounted largest revenue share of 93.57% in 2023.\n\nBy vehicle type, the passenger segment accounted for 74.29% of revenue share in 2023.\n\nBy propulsion type, the semi-autonomous vehicle segment accounted for 95.13% of revenue share in 2023.\n\nBy transportation, the commercial transportation segment has accounted revenue share of 84.98% in 2023.\n\nBy Level of Automation, the Level 2 segment has accounted revenue share of 40.29% in 2023.'}}
```
| closed | 2024-08-01T19:42:28Z | 2024-08-03T20:55:49Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/501 | [] | regismvargas | 3 |
strawberry-graphql/strawberry | asyncio | 2,832 | Sanic can't set cookies: get_context returning a TemporalResponse | ## Describe the Bug
Ref.: https://strawberry.rocks/docs/integrations/sanic#get_context
I am trying to set cookies in the response as shown here:
https://strawberry.rocks/docs/integrations/asgi#setting-response-headers
```python
from typing import Any

from sanic import Sanic
from sanic.request import Request
from sanic.response import HTTPResponse as Response

from strawberry.sanic.views import GraphQLView  # import path assumed


class MyGraphQLView(GraphQLView):
    async def get_context(self, request: Request, response: Response) -> Any:
        return {"request": request, "response": response}


app = Sanic("api")
# `schema` (a strawberry.Schema) is assumed to be defined elsewhere
app.add_route(MyGraphQLView.as_view(schema=schema, graphiql=True), "/graphql")
```
and I am getting an error when setting the cookie because the context contains a TemporalResponse instead of a Sanic HTTPResponse object.
## System Information
- Operating system: linux/amd64 python3.11
- Strawberry version (if applicable): 0.183.5 | open | 2023-06-09T13:35:18Z | 2025-03-20T15:56:13Z | https://github.com/strawberry-graphql/strawberry/issues/2832 | [
"bug"
] | wedobetter | 5 |
uriyyo/fastapi-pagination | fastapi | 585 | create custom Page | Hi, I'm using beanie as ODM. The result of my query looks like this:
```json
{
  "data": [
    {
      "_id": "641c538ff9188f91c76a8b66",
      "created_at": "2023-03-23T13:26:39.289000",
      "name": "string",
      "description": "Welcome to Biscotte restaurant! Restaurant Biscotte offers a cuisine based on fresh, quality products, often local, organic when possible, and always produced by passionate producers",
      "relationships": [
        {
          "collectionName": "categories",
          "id": [
            "641c4db5b592122e212a87b3"
          ]
        }
      ]
    },
    {
      "_id": "641c5583f9188f91c76a8b67",
      "created_at": "2023-03-23T13:34:59.728000",
      "name": "string",
      "description": "Welcome to Biscotte restaurant! Restaurant Biscotte offers a cuisine based on fresh, quality products, often local, organic when possible, and always produced by passionate producers",
      "relationships": [
        {
          "collectionName": "categories",
          "id": [
            "641c4db5b592122e212a87b3",
            "641c5190b592122e212a87b4"
          ]
        }
      ]
    }
  ],
  "total": 2,
  "page": 1,
  "size": 50,
  "pages": 1
}
```
I want to populate each id of the `categories` relation using fastapi-pagination.
Can anybody help me define a custom Page?
Thanks!
| closed | 2023-03-24T17:38:33Z | 2023-03-24T18:20:06Z | https://github.com/uriyyo/fastapi-pagination/issues/585 | [] | opaniagu | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 976 | [DOCS]: Docs update Job portals code refactoring and plugin. | ### Affected documentation section
_No response_
### Documentation improvement description
ADR for Job portals code refactoring and plugin.
updating Readme, plugin creation instructions and any other relevant docs
### Why is this change necessary?
_No response_
### Additional context
_No response_ | closed | 2024-12-01T12:42:29Z | 2024-12-19T02:03:46Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/976 | [
"documentation",
"stale"
] | surapuramakhil | 2 |
pywinauto/pywinauto | automation | 1,099 | Python code freeze on exit pywinauto and tkinter . Python.exe created in crashdump folder | ## Expected Behavior
## Actual Behavior
Freezes and crashes python.
## Steps to Reproduce the Problem
```python
from tkinter import Tk
import tkinter.messagebox

import pywinauto
from pywinauto.keyboard import SendKeys


def show_msg(window_title, window_message):
    root = Tk()
    root.attributes('-alpha', 0.0)
    result = tkinter.messagebox.showinfo(window_title, window_message)
    root.destroy()
    root.mainloop()
    print("Result ")


show_msg('abc', 'abc')
```
- Pywinauto version: 0.6.8
- Python version and bitness: 3.8
- Platform and OS: Windows
| open | 2021-07-20T05:50:44Z | 2021-07-20T09:44:49Z | https://github.com/pywinauto/pywinauto/issues/1099 | [
"duplicate"
] | Botways | 1 |
xlwings/xlwings | automation | 2,265 | test_markdown.py fails due XLWINGS_LICENSE_KEY_SECRET being none | #### OS (e.g. Windows 10 or macOS Sierra)
macOS Ventura
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings - Latest,
Python 3.11
Office 365
#### Describe your issue (incl. Traceback!)
```shell
pytest tests/test_markdown.py
```
```python
tests/test_markdown.py:8: in <module>
    from xlwings.pro import Markdown, MarkdownStyle
xlwings/pro/__init__.py:14: in <module>
    from .embedded_code import dump_embedded_code, runpython_embedded_code
xlwings/pro/embedded_code.py:25: in <module>
    LicenseHandler.validate_license("pro")
xlwings/pro/utils.py:99: in validate_license
    os.getenv("XLWINGS_LICENSE_KEY_SECRET").encode(),
E   AttributeError: 'NoneType' object has no attribute 'encode'
ERROR tests/test_markdown.py - AttributeError: 'NoneType' object has no attribute 'encode'
```
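For context, the failing line reduces to `os.getenv(...)` returning `None` when the variable is unset. A minimal reproduction, plus one defensive pattern (illustrative only, not xlwings' actual fix):

```python
import os

# Reproduce the failure mode: os.getenv returns None when the variable is unset.
os.environ.pop("XLWINGS_LICENSE_KEY_SECRET", None)
value = os.getenv("XLWINGS_LICENSE_KEY_SECRET")
print(value)  # None -> value.encode() would raise AttributeError

# One defensive pattern: supply a default so .encode() always has a str.
secret = os.getenv("XLWINGS_LICENSE_KEY_SECRET", "").encode()
print(secret)  # b''
```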
Perhaps related to #2220 and #2241?
This secret is something only xlwings owns correct? Or is it a derivative from the License key?
As for the solution, I cannot wrap this part in a Try import just like #2220 can I? Otherwise I cannot run the tests. | closed | 2023-05-24T08:10:56Z | 2023-05-25T09:35:06Z | https://github.com/xlwings/xlwings/issues/2265 | [] | Jeroendevr | 3 |
thtrieu/darkflow | tensorflow | 661 | Darknet YOLO is 4 bytes off for tiny-yolo and not working for tiny-yolo-voc | I was trying to use the tiny-yolo.weights for my project and I'm sure that I'm using the correct corresponding cfg file however it keeps on giving me this error:
```
Parsing ./cfg/tiny-yolo.cfg
Parsing cfg/tiny-yolo.cfg
Loading bin/tiny-yolo.weights ...
Traceback (most recent call last):
  File "/Users/user/anaconda2/bin/flow", line 6, in <module>
    cliHandler(sys.argv)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/cli.py", line 26, in cliHandler
    tfnet = TFNet(FLAGS)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/net/build.py", line 58, in __init__
    darknet = Darknet(FLAGS)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/dark/darknet.py", line 27, in __init__
    self.load_weights()
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/dark/darknet.py", line 82, in load_weights
    wgts_loader = loader.create_loader(*args)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/utils/loader.py", line 105, in create_loader
    return load_type(path, cfg)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/utils/loader.py", line 19, in __init__
    self.load(*args)
  File "/Users/user/anaconda2/lib/python2.7/site-packages/darkflow/utils/loader.py", line 77, in load
    walker.offset, walker.size)
AssertionError: expect 44948596 bytes, found 44948600
```
When I try to run using the tiny-yolo-voc.weights my program works and the FPS fluctuates between 5 and 6 seconds but the video doesn't show.
Help anyone?
| open | 2018-03-24T15:48:37Z | 2019-04-03T17:28:14Z | https://github.com/thtrieu/darkflow/issues/661 | [] | MithilV | 6 |
jschneier/django-storages | django | 790 | . | . | closed | 2019-11-17T15:24:35Z | 2019-11-17T19:04:54Z | https://github.com/jschneier/django-storages/issues/790 | [] | niccolomineo | 0 |
horovod/horovod | pytorch | 3,955 | Segmentation fault error | **Environment:**
1. Framework: TensorFlow
2. Framework version: 1.15.0
3. Horovod version: 0.19.5
4. MPI version: 4.0.0
5. CUDA version: 10.0
6. Python version: 3.6.8
**Bug report:**
When I run the example at https://github.com/horovod/horovod/blob/master/examples/tensorflow/tensorflow_word2vec.py, it raises a segmentation fault error.
| closed | 2023-07-04T05:53:59Z | 2023-08-31T10:38:29Z | https://github.com/horovod/horovod/issues/3955 | [
"bug"
] | etoilestar | 2 |
serengil/deepface | deep-learning | 768 | please find exception stacktrace - using arcface as model | ```
    return DeepFace.find(img_path=img_path, db_path=config.tdes_images_location, align=align,
  File "/home/akhil/PycharmProjects/TDES-analytics/venv/lib/python3.10/site-packages/deepface/DeepFace.py", line 488, in find
    img_objs = functions.extract_faces(
  File "/home/akhil/PycharmProjects/TDES-analytics/venv/lib/python3.10/site-packages/deepface/commons/functions.py", line 105, in extract_faces
    img_region = [0, 0, img.shape[1], img.shape[0]]
AttributeError: 'NoneType' object has no attribute 'shape'
``` | closed | 2023-06-03T19:17:18Z | 2023-10-15T21:32:30Z | https://github.com/serengil/deepface/issues/768 | [
"question"
] | surapuramakhil | 3 |
Yorko/mlcourse.ai | numpy | 776 | Issue on page /book/topic04/topic4_linear_models_part5_valid_learning_curves.html | The first validation curve is missing

| closed | 2024-08-30T12:07:28Z | 2025-01-06T15:49:43Z | https://github.com/Yorko/mlcourse.ai/issues/776 | [] | ssukhgit | 1 |
bmoscon/cryptofeed | asyncio | 418 | Dynamically adding feeds | **Is your feature request related to a problem? Please describe.**
I'm running cryptofeed in a separate thread, and I would like to dynamically subscribe to feeds from the main thread.
Is this already supported?
**Describe the solution you'd like**
Potentially, the `add_feed` function should be callable from another thread?
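In general asyncio terms, handing a call from the main thread to an event loop running in a worker thread is done with `run_coroutine_threadsafe` or `call_soon_threadsafe`. A generic sketch (the `add_feed` below is a stand-in, not cryptofeed's API):

```python
import asyncio
import threading

# Run an event loop in a worker thread, then hand calls to it from the
# main thread; the loop-bound operation here is a stand-in for add_feed.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

feeds = []

async def add_feed(name):
    feeds.append(name)
    return name

fut = asyncio.run_coroutine_threadsafe(add_feed("COINBASE"), loop)
print(fut.result(timeout=5))  # COINBASE
loop.call_soon_threadsafe(loop.stop)
```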
Currently I get an error saying `No feeds specified` here: https://github.com/bmoscon/cryptofeed/blob/master/cryptofeed/feedhandler.py#L210 | closed | 2021-02-14T19:09:11Z | 2021-04-01T22:18:59Z | https://github.com/bmoscon/cryptofeed/issues/418 | [
"Feature Request"
] | thisiscam | 10 |
graphql-python/graphene-django | django | 1,273 | Consider supporting promise-based dataloaders in v3 | I know graphql-core dropped support for promises, but the author seems to think promise-support can be added via hooks like middleware and execution context (see response to my identical issue in https://github.com/graphql-python/graphql-core/issues/148).
Since most people using syrusakbary's promise library are probably already graphene users, if anyone is going to help make graphql-core 3 and promises play together, it makes sense for that to be done in graphene.
### Why not just use async?
I think I have a decent use-case for non-async, promise-based resolution. Async is nice and all, and having a standard is great, but many of us are just using dataloaders as an execution pattern, not because we actually have async data-sources. Moving everything to run in an async environment can have consequences.
We are calling the django ORM from our dataloaders. Because django 3.0 forces us to isolate ORM calls and wrap them in `sync_to_async`, we stuck with promise based dataloaders for syntactic reasons. Examples below:
**What we'd like to do, but django doesn't allow**
```python
class MyDataLoader(...):
    async def batch_load(self, ids):
        data_from_other_loader = await other_loader.load_many(ids)
        data_from_orm = MyModel.objects.filter(id__in=ids)  # error! can't call django ORM from async context.
        # return processed combination of orm/loader data
```
**What django would like us to do**
```python
class MyDataLoader(...):
    async def batch_load(self, ids):
        data_from_other_loader = await other_loader.load_many(ids)
        data_from_orm = await get_orm_data(ids)
        # return processed combination of orm/loader data

@sync_to_async
def get_orm_data(ids):
    return MyModel.objects.filter(id__in=ids)
```
**What we settled on instead (use generator-syntax around promises)**
```python
class MyDataLoader(...):
    def batch_load(self, ids):
        data_from_other_loader = yield other_loader.load_many(ids)
        data_from_orm = MyModel.objects.filter(id__in=ids)
        # return processed combination of orm/loader data
```
A simple `generator_function_to_promise` is used as part of dataloaders, as well as a middleware that converts generators returned from resolvers into promises. I have hundreds of dataloaders following this pattern. I don't want to be stuck isolating all the ORM calls as per django's recommendations. That would be noisy and decrease legibility.
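For illustration, such a helper can be a small trampoline that drives the generator, resolving each yielded promise and sending the result back in. The sketch below uses a tiny synchronous `Promise` stand-in so it is self-contained; the real helper would target `promise.Promise`, and all names are illustrative rather than the author's actual code:

```python
class Promise:
    """Tiny synchronous stand-in for promise.Promise, just for illustration."""
    def __init__(self, value):
        self.value = value

    @staticmethod
    def resolve(value):
        return value if isinstance(value, Promise) else Promise(value)

    def then(self, fn):
        return Promise.resolve(fn(self.value))


def generator_function_to_promise(gen):
    # Drive a generator that yields promises, feeding each resolved value
    # back in, until the generator returns (its return value is the result).
    def step(value):
        try:
            yielded = gen.send(value)
        except StopIteration as stop:
            return Promise.resolve(stop.value)
        return Promise.resolve(yielded).then(step)
    return step(None)


def batch_load(ids):
    doubled = yield Promise.resolve([i * 2 for i in ids])
    return [d + 1 for d in doubled]


result = generator_function_to_promise(batch_load([1, 2, 3]))
print(result.value)  # [3, 5, 7]
```

The same trampoline shape works unchanged when `then` schedules asynchronously, since each step only runs once the previous promise has resolved.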
It seems there may be other issues around using async dataloaders with connections. Admittedly that problem sounds more easily surmountable and wouldn't require supporting promises. | closed | 2021-11-25T22:08:40Z | 2021-11-25T22:12:11Z | https://github.com/graphql-python/graphene-django/issues/1273 | [
"✨enhancement"
] | AlexCLeduc | 1 |
mwouts/itables | jupyter | 32 | Conversion to HTML shows only header line | When running `jupyter nbconvert --to html test.ipynb`
to convert the example script from the website (see below), the resulting HTML page only shows the header of the table but no contents.
When inspecting the HTML code, the header is in plain HTML while the data are in a JavaScript block. However, they don't show in any of the browsers I've tried. Am I missing anything?
```python
import itables
import world_bank_data as wb

itables.init_notebook_mode(all_interactive=True)
df = wb.get_countries()
df
```
| closed | 2021-12-22T15:27:48Z | 2021-12-22T19:18:18Z | https://github.com/mwouts/itables/issues/32 | [] | axel-loewe | 3 |
jmcnamara/XlsxWriter | pandas | 721 | Don't work 'invert_if_negative' | Maybe I’m not using the 'invert_if_negative' attribute correctly, but I don’t see any changes when applying it.
```python
import xlsxwriter
workbook = xlsxwriter.Workbook('chart_column.xlsx')
worksheet = workbook.add_worksheet()
chart = workbook.add_chart({'type': 'column'})
worksheet.write_column('A1', [1, 2, 3, 4, 5])
worksheet.write_column('B1', [-3, 2, 3, -2, 5])
chart.add_series({'categories': '=Sheet1!$A$1:$A$5', 'values': '=Sheet1!$B$1:$B$5', 'invert_if_negative': True})
worksheet.insert_chart('G3', chart)
workbook.close()
```
Output:

| closed | 2020-05-29T18:04:39Z | 2020-05-29T19:18:01Z | https://github.com/jmcnamara/XlsxWriter/issues/721 | [
"question"
] | VladislavN | 1 |
healthchecks/healthchecks | django | 376 | [feature request] ability to specify paused ping handling through API | Thanks for implementing https://github.com/healthchecks/healthchecks/issues/369!
Would it be possible to make that feature specifiable through the API when creating a check? That way I can add it to the healthcheck manager script to solve the problem described in https://github.com/healthchecks/healthchecks/issues/343.
On an unrelated note, we are now using the [healthcheck managing script](https://gist.github.com/caleb15/1a817ef5e58e8a8caf65190cff33806e) in production and it's working well so far. I just have a small bug I need to look into. My boss said I can open-source the script, I'll let you know when it's ready. | closed | 2020-06-06T23:10:24Z | 2020-09-09T20:49:59Z | https://github.com/healthchecks/healthchecks/issues/376 | [] | caleb15 | 4 |
pywinauto/pywinauto | automation | 656 | does pywinauto support other .NET objects like Infragistics? | hi
i have a WPF application which is full of customized objects like Infragistics, tab ribbon, embedded browser ...
can pywinauto support it? | open | 2019-01-16T09:04:55Z | 2019-01-18T11:36:01Z | https://github.com/pywinauto/pywinauto/issues/656 | [
"question"
] | michaazran | 2 |
mwaskom/seaborn | matplotlib | 3,639 | catplot with numeric hue and hue_order: empty legend handles | Tested with Seaborn 0.13.2, pandas 2.2.1
When the hue values are numeric, `hue_order` isn't respected in the plot. The legend does respect the order, but with empty legend handles.
```python
import seaborn as sns
import pandas as pd
tips = sns.load_dataset('tips')
sns.catplot(tips, kind='box', x='time', y='tip', hue='size', hue_order=[2, 3, 4])
```

Making the hue column of type `pd.Categorical`, but still numeric, does respect the hue order. But again with empty legend handles. The default palette changes to categorical.
```python
import seaborn as sns
import pandas as pd
tips = sns.load_dataset('tips')
tips['size'] = pd.Categorical(tips['size'])
sns.catplot(tips, kind='box', x='time', y='tip', hue='size', hue_order=[2, 3, 4])
```

| open | 2024-02-27T07:19:45Z | 2024-03-01T17:52:27Z | https://github.com/mwaskom/seaborn/issues/3639 | [] | jhncls | 2 |
hyperspy/hyperspy | data-visualization | 3,056 | Incompatibility with a new | #### Describe the bug
Cannot load data with the new version of pint (released three days ago)
#### To Reproduce
Steps to reproduce the behavior:
```python
import hyperspy.api as hs
s = hs.load('data.hspy')
### Wild error appears
```
#### Expected behavior
pint.unit has disappeared in the new pint release. The following line causes the bug:
https://github.com/hyperspy/hyperspy/blob/6a68fc4899409ea010b13c67721abce6a507022a/hyperspy/misc/axis_tools.py#L20
Fixing it should be as simple as replacing with
```python
from pint import Unit
```
But I haven't tested it
#### Python environment:
- HyperSpy version: 1.7.2
- Python version: 3.10
- Pint 0.20.1
#### Additional context
Sorry for not giving more details, I was facing the problem when a colleague was installing hyperspy with me over zoom
| closed | 2022-10-28T14:31:05Z | 2022-10-29T10:43:57Z | https://github.com/hyperspy/hyperspy/issues/3056 | [
"type: bug"
] | LMSC-NTappy | 2 |
eralchemy/eralchemy | sqlalchemy | 80 | KeyError: '_data' | With
- Python 3.9.2
- sqlalchemy 1.4.0
I get this error:
```
Traceback (most recent call last):
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1104, in __getattr__
    return self._index[key]
KeyError: '_data'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/julian/src/bruce-leads/.venv/bin/eralchemy", line 8, in <module>
    sys.exit(cli())
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/main.py", line 31, in cli
    render_er(
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/main.py", line 231, in render_er
    tables, relationships = all_to_intermediary(input, schema=schema)
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/main.py", line 147, in all_to_intermediary
    return database_to_intermediary(filename_or_input, schema=schema)
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/sqla.py", line 82, in database_to_intermediary
    return declarative_to_intermediary(Base)
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/sqla.py", line 61, in declarative_to_intermediary
    return metadata_to_intermediary(base.metadata)
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/sqla.py", line 54, in metadata_to_intermediary
    tables = [table_to_intermediary(table) for table in metadata.tables.values()]
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/sqla.py", line 54, in <listcomp>
    tables = [table_to_intermediary(table) for table in metadata.tables.values()]
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/eralchemy/sqla.py", line 49, in table_to_intermediary
    return Table(name=table.fullname, columns=[column_to_intermediary(col) for col in table.c._data.values()])
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/sqlalchemy/sql/base.py", line 1106, in __getattr__
    util.raise_(AttributeError(key), replace_context=err)
  File "/home/julian/src/bruce-leads/.venv/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 180, in raise_
    raise exception
AttributeError: _data
```
I have investigated how to fix it for sqlalchemy but don't know about backwards compat.
https://github.com/Alexis-benoist/eralchemy/blob/d6fcdc67d6d413bb174bf008fd360044e1dff5a7/eralchemy/sqla.py#L50
needs to be replaced with:
```py
columns=[column_to_intermediary(col) for col in table.c._colset]
```
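More defensively, a helper can avoid private attributes entirely and only fall back to them for old layouts. A sketch, exercised below against duck-typed stand-ins rather than real SQLAlchemy objects:

```python
def get_columns(table_c):
    """Enumerate columns from a Table.c collection across SQLAlchemy versions."""
    try:
        return list(table_c)  # ColumnCollection is publicly iterable
    except TypeError:
        pass
    for attr in ("_colset", "_data"):  # 1.4 / pre-1.4 private layouts
        cols = getattr(table_c, attr, None)
        if cols is not None:
            return list(cols.values()) if hasattr(cols, "values") else list(cols)
    raise AttributeError("cannot enumerate columns")


class LegacyCollection:
    """Duck-typed stand-in for a collection exposing only the old _data dict."""
    def __iter__(self):
        raise TypeError("not iterable")
    _data = {"id": "col_id", "name": "col_name"}


print(get_columns(["col_id", "col_name"]))   # public-iteration path
print(get_columns(LegacyCollection()))       # private-fallback path
```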
I can create a tested PR if you are willing to merge it.
| closed | 2021-03-19T10:38:58Z | 2024-07-07T10:02:17Z | https://github.com/eralchemy/eralchemy/issues/80 | [] | julian-r | 13 |
albumentations-team/albumentations | deep-learning | 2,392 | [New feature] Add apply_to_images to AutoContrast | open | 2025-03-11T00:58:31Z | 2025-03-11T00:58:38Z | https://github.com/albumentations-team/albumentations/issues/2392 | [
"enhancement",
"good first issue"
] | ternaus | 0 | |
plotly/plotly.py | plotly | 4,131 | plotly-express fig.update_traces(root_color="black") has no effect | Graph renders with default charcoal root colour irrespective of colour set in update_traces(root_color='some color')

| closed | 2023-03-29T08:59:52Z | 2023-04-01T14:02:13Z | https://github.com/plotly/plotly.py/issues/4131 | [] | ouryperd | 2 |
coqui-ai/TTS | pytorch | 2,555 | [Bug] RuntimeError: min(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument. | ### Describe the bug
I am training a voice cloning model using VITS. My dataset is in LJSpeech Format. I am trying to train Indian English model straight from character with Phonemizer = False. The training runs for 35-40 epochs and then abruptly stops. Sometimes it runs for even longer, like 15k steps and then stops. I can share the notebook I am using for training. I have successfully completed my training with this notebook several times, but in recent times I am facing this error.
Also I am getting this warning at the beginning of the training.
```
/usr/local/lib/python3.10/dist-packages/torch/functional.py:641: UserWarning: stft with return_complex=False is deprecated. In a future pytorch release, stft will return complex tensors for all inputs, and return_complex=False will raise an error.
Note: you can still call torch.view_as_real on the complex output to recover the old return format. (Triggered internally at ../aten/src/ATen/native/SpectralOps.cpp:862.)
  return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
```
I am providing screenshots of the error I encounter every time.
<img width="1405" alt="Screenshot 2023-04-13 at 11 08 42 PM" src="https://user-images.githubusercontent.com/19510293/234267079-d8c5f39e-75eb-4f07-b6a0-da9e56499c03.png">
<img width="1362" alt="Screenshot 2023-04-13 at 11 09 00 PM" src="https://user-images.githubusercontent.com/19510293/234267091-102b5c5c-a881-4175-b1c5-b11b322af255.png">
<img width="1356" alt="Screenshot 2023-04-13 at 11 09 14 PM" src="https://user-images.githubusercontent.com/19510293/234267110-801c9e91-af08-47e6-b41f-3ac1b38cc71b.png">
### To Reproduce
https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_
I am using this colab notebook for training purpose. Every configuration regarding the training can be referred from here. Mind that training will go on for 35-40 epochs then it will stop.
### Expected behavior
Training should continue.
### Logs
_No response_
### Environment
```shell
https://colab.research.google.com/drive/1k8Fk5kfU_aZ2lM7Esih3Ud1fYtNlujOQ?authuser=0#scrollTo=A49iDwajBtu_
```
### Additional context
I have tried to resolve both the warning and the error, as I think they are related.
I tried the following solution to resolve the warning:
https://github.com/jaywalnut310/vits/issues/15
and the following to solve the error:
https://github.com/coqui-ai/TTS/discussions/1949
Looks like Torch version 1.8 is unstable and its distribution is not available. I tried 1.9 too, because the GitHub thread above prescribed it, but that distribution is not available either. | closed | 2023-04-25T11:52:05Z | 2024-01-23T15:28:31Z | https://github.com/coqui-ai/TTS/issues/2555 | [
"bug",
"wontfix"
] | offside609 | 13 |
davidsandberg/facenet | computer-vision | 1,161 | center_loss_factor and prelogits_norm_loss_factor during softmax training | In train_softmax.py, the default values of center_loss_factor and prelogits_norm_loss_factor are both 0.0. Is that the setting we should use to train facenet from scratch?

| open | 2020-06-24T04:41:18Z | 2020-06-24T04:41:18Z | https://github.com/davidsandberg/facenet/issues/1161 | [] | w19787 | 0 |
tflearn/tflearn | tensorflow | 1,137 | tensorflow.python.framework.errors_impl.FailedPreconditionError | How can i fix this error ?
please help
```python
import tflearn
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
q_inputs = tf.placeholder(tf.float32, [None, 1, 8])
net, hidden_states_1 = tflearn.lstm(q_inputs, 10, return_seq=True, return_state=True, scope="lstm_1", reuse=False)
q_values = tflearn.lstm(net, 4, return_seq=True, activation='linear', scope="lstm_2", reuse=False)
q_values = tf.stack(q_values, axis=1)  # tensor shape (None, n_timesteps, n_actions)
evaluator = tflearn.Evaluator(q_values)
inputs = np.array([[[0, 0, 0, 0, 0, 1, 0, 0]]])
res = evaluator.predict(feed_dict={q_inputs: inputs})
print(np.argmax(res))
```
```
Traceback (most recent call last):
  File "D:\Anaconda\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1022, in _do_call
    return fn(*args)
  File "D:\Anaconda\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1004, in _run_fn
    status, run_metadata)
  File "D:\Anaconda\envs\py35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "D:\Anaconda\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value lstm_2/lstm_2/BasicLSTMCell/Linear/Bias
  [[Node: lstm_2/lstm_2/BasicLSTMCell/Linear/Bias/read = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](lstm_2/lstm_2/BasicLSTMCell/Linear/Bias)]]
```
| open | 2019-10-11T08:04:53Z | 2019-10-11T08:04:53Z | https://github.com/tflearn/tflearn/issues/1137 | [] | Eroslon | 0 |
pytorch/pytorch | numpy | 149,799 | bug in pytorch/torch/nn/parameter: | ### 🐛 Describe the bug
```python
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
r"""A buffer that is not initialized.
Uninitialized Buffer is a a special case of :class:`torch.Tensor`
where the shape of the data is still unknown.
Unlike a :class:`torch.Tensor`, uninitialized parameters
hold no data and attempting to access some properties, like their shape,
will throw a runtime error. The only operations that can be performed on a uninitialized
parameter are changing its datatype, moving it to a different device and
converting it to a regular :class:`torch.Tensor`.
The default device or dtype to use when the buffer is materialized can be set
during construction using e.g. ``device='cuda'``.
"""
cls_to_become = torch.Tensor
def __new__(
cls, requires_grad=False, device=None, dtype=None, persistent=True
) -> None:
factory_kwargs = {"device": device, "dtype": dtype}
data = torch.empty(0, **factory_kwargs)
ret = torch.Tensor._make_subclass(cls, data, requires_grad)
ret.persistent = persistent
ret._is_buffer = True
return ret  # ret is not None; the `-> None` annotation above is probably the issue
# suggested fix:
class UninitializedBuffer(UninitializedTensorMixin, torch.Tensor):
# as it is
def __new__(cls, requires_grad=False, device=None, dtype=None, persistent=True):
factory_kwargs = {"device": device, "dtype": dtype}
data = torch.empty(0, **factory_kwargs)
# Ensure we are subclassing correctly
ret = super().__new__(cls, data, requires_grad)
# Set attributes
ret.persistent = persistent
ret._is_buffer = True
return ret  # note: __new__ is deliberately not annotated as returning None
```
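Independent of the PyTorch internals, the underlying Python contract is that `__new__` must return the new instance, so annotating it with `-> None` is misleading even when the body does return `ret`. A minimal stdlib-only sketch of that contract (toy class names, nothing PyTorch-specific):

```python
class Buffer:
    """Toy stand-in for the tensor subclass above; stdlib only."""

    def __new__(cls, persistent=True):
        ret = super().__new__(cls)
        # Attributes can be attached before the instance reaches __init__.
        ret.persistent = persistent
        ret._is_buffer = True
        return ret  # __new__ must return the instance, never None


b = Buffer(persistent=False)
print(type(b).__name__, b.persistent, b._is_buffer)  # Buffer False True
```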
### Versions
wget https://raw.githubusercontent.com/pytorch/pytorch/main/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py | closed | 2025-03-22T08:46:52Z | 2025-03-24T16:21:31Z | https://github.com/pytorch/pytorch/issues/149799 | [] | said-ml | 1 |
vitalik/django-ninja | pydantic | 1,319 | Content Negotiation | Is it possible to return a different response based on the "Accept" header or a query param e.g. ?format=json-ld
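For context, here is a framework-agnostic sketch of the kind of Accept-header check I mean (the `negotiate` helper and the supported types are my own illustration, not a django-ninja API):

```python
def negotiate(accept_header, supported=("application/json", "application/ld+json")):
    """Pick the first supported media type listed in an Accept header."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop q-values like ;q=0.9
        if media_type in supported:
            return media_type
    return supported[0]  # fall back to the default representation


print(negotiate("application/ld+json;q=0.9, application/json"))  # application/ld+json
```

The question is whether django-ninja exposes a built-in hook for this, or whether it has to be done by hand inside each operation.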
| open | 2024-10-16T08:03:35Z | 2024-10-17T05:25:00Z | https://github.com/vitalik/django-ninja/issues/1319 | [] | MiltosD | 5 |
minivision-ai/photo2cartoon | computer-vision | 45 | The lights of faces may cause bad results. | I train the model with the given dataset, but found that some faces with light get a bad result at the edge of face. | closed | 2020-10-19T05:57:07Z | 2020-11-19T07:38:33Z | https://github.com/minivision-ai/photo2cartoon/issues/45 | [] | CodingMice | 1 |
RobertCraigie/prisma-client-py | asyncio | 3 | Model aliases can clash | ## Problem
For example, suppose a model defines two relational fields: the first, named "categories", references a "CustomCategories" model, and the second, named "posts", references a "Post" model. The "Post" model in turn defines a relational field named "categories" that references a "Categories" model. This setup will result in incorrect aliases being defined for one of the "categories" fields.
This issue was not fixed in the original design as it is a weird edge case that can be solved by the end user by mapping the troublesome fields.
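The clash can be reproduced with a toy version of the alias map (plain dicts, not the client's real data structures): keying aliases by field name alone lets the nested "categories" overwrite the top-level one, while keying by the full relation path does not.

```python
relation_paths = [["categories"], ["posts"], ["posts", "categories"]]

flat = {}
for path in relation_paths:
    flat[path[-1]] = path  # keyed by field name alone: last writer wins

namespaced = {}
for path in relation_paths:
    namespaced[tuple(path)] = path  # keyed by full path: no clash

print(len(flat), len(namespaced))  # 2 3
```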
## Possible Solution
Each recursive model reference could be namespaced, for example, given the relation user -> posts -> categories aliases should be generated as such
```json
{
"": {},
"posts": {
"categories": {}
}
}
```
instead of the current implementation
```json
{
"": {},
"posts": {},
"categories": {}
}
``` | closed | 2021-01-13T07:29:01Z | 2021-06-18T13:55:47Z | https://github.com/RobertCraigie/prisma-client-py/issues/3 | [
"bug/2-confirmed",
"kind/bug"
] | RobertCraigie | 0 |
miguelgrinberg/Flask-Migrate | flask | 410 | Is There a way to ignore a model not to generate op.create_table for the model? | I'm using multiple databases with MySQL and BigQuery by declaring the binds
```
SQLALCHEMY_BINDS = {
'bigquery': BIGQUERY_URI,
}
```
```
app = Flask(__name__)
from models import db, migrate
db.init_app(app)
migrate.init_app(app=app, db=db)
```
The BigQuery table is already created, but when I execute `flask db migrate` it generates a create_table function for the BigQuery table in the migration file.
Is there a way to prevent or ignore models so that they do not generate a migration file?
| closed | 2021-06-02T11:18:37Z | 2021-06-03T01:58:56Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/410 | [
"question"
] | mz-ericlee | 2 |
supabase/supabase-py | fastapi | 525 | Bulk Delete by array of UUID strings | **Describe the bug**
Unable to delete using a `.filter('in', array_of_ids)` call.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a supabase python client (mine is under)
`self.commons['supabase']`
2. Have an array of uuids represented as strings ex:
`vector_ids = ["aed10938-65db-4678-8438-cf7684eaefd7", "94a58004-7f19-4e79-b189-0824e55d4c8a"]`
3. First I tried
`self.commons['supabase'].table('brains_vectors').delete().filter('vector_id', "in", vector_ids).execute()`
The error:
```
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': 'PGRST100', 'details': 'unexpected "[" expecting "("', 'hint': None, 'message': '"failed to parse filter
```
4. Then I tried
`self.commons['supabase'].table('brains_vectors').delete().filter('vector_id', "in", tuple(vector_ids)).filter('brain_id', 'eq', self.id).execute()`
The error:
`postgrest.exceptions.APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type uuid: "\'6eafc10e-6e1d-44c6-9ba9-99f2044b30d0\'"'}`
I wasn't able to convert those strings to double quotes without python changing them to single quotes.
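Based on the first error (`unexpected "[" expecting "("`), PostgREST's `in` operator apparently wants a parenthesized, comma-separated value rather than a Python list. A small helper that builds such a value (my own guess at the expected format, not verified against the supabase-py docs):

```python
def postgrest_in_value(ids):
    """Format a list of ids as the "(a,b,c)" form PostgREST's `in` operator expects."""
    return "(" + ",".join(ids) + ")"


vector_ids = ["aed10938-65db-4678-8438-cf7684eaefd7", "94a58004-7f19-4e79-b189-0824e55d4c8a"]
print(postgrest_in_value(vector_ids))
```

i.e. something like `.filter('vector_id', 'in', postgrest_in_value(vector_ids))`, with no extra quoting around the UUIDs.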
**Expected behavior**
I was hoping to be able to bulk delete a bunch of vector ids for an LLM application. Right now it takes ~0.5 seconds per vector to delete in a for loop. For a 1MB file with 237 vectors, 2 mins feels like a long time.
**Screenshots**.
**Desktop (please complete the following information):**
- OS: Mac OS Ventura
- IDE: Visual Studio Code
**Additional context**
Supabase version: `supabase==1.0.3`
If anyone knows how to do this bulk delete, any help is greatly appreciated. I'm happy to help make the changes or add to the documentation as well!
Thanks!
| closed | 2023-08-24T01:57:14Z | 2024-07-06T12:01:43Z | https://github.com/supabase/supabase-py/issues/525 | [
"enhancement"
] | levi-katarok | 2 |
iterative/dvc | machine-learning | 9,913 | `repro`: does not run stage if params are removed from dvc.yaml (in some cases) | # Bug Report
## Description
DVC runs a stage if params are changed, but in some cases the stage is considered unchanged if params are completely removed from `dvc.yaml`.
If params are changed in params.yaml, everything works as expected (stage is run).
I have not tested how DVC behaves if dependencies are removed (again, I'd expect this to be treated as a change).
### Reproduce
**Example 1:**
Here we demonstrate that a stage is not run if params are removed.
1. `mkdir dvctest; cd dvctest; git init; dvc init`
2. create a file `params.yaml` with the content:
```
name: lumbric
```
3. create a file `dvc.yaml` with the content:
```
stages:
do_something:
cmd: echo "hello world, my name is ${name}"
deps:
- in
params:
- name
```
4. `echo "some input" > in`
5. `dvc repro` will then print:
```
Running stage 'do_something':
> echo "hello world, my name is lumbric"
hello world, my name is lumbric
Generating lock file 'dvc.lock'
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
```
6. note that the lock file contains params:
```console
$ cat dvc.lock
schema: '2.0'
stages:
do_something:
cmd: echo "hello world, my name is lumbric"
deps:
- path: in
hash: md5
md5: 38debce1fb87347226e0c14d0084ef5c
size: 11
params:
params.yaml:
name: lumbric
```
7. now remove the params section from dvc.yaml, it should look like this:
```console
$ cat dvc.yaml
stages:
do_something:
cmd: echo "hello world, my name is ${name}"
deps:
- in
```
8. `dvc repro` will then print:
```console
dvc repro
Stage 'do_something' didn't change, skipping
Data and pipelines are up to date
```
9. note that dvc.lock is unchanged (see step 6)
**Example 2:**
Here we demonstrate that a stage is run if params are removed. The difference from Example 1 is that the stage has no dependency.
1. `mkdir dvctest; cd dvctest; git init; dvc init`
2. create a file `params.yaml` with the content:
```
name: lumbric
```
3. create a file `dvc.yaml` with the content:
```
stages:
do_something:
cmd: echo "hello world, my name is ${name}"
params:
- name
```
4. `dvc repro` will then print:
```
Running stage 'do_something':
> echo "hello world, my name is lumbric"
hello world, my name is lumbric
Generating lock file 'dvc.lock'
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
```
5. note that the lock file contains params:
```console
$ cat dvc.lock
schema: '2.0'
stages:
do_something:
cmd: echo "hello world, my name is lumbric"
params:
params.yaml:
name: lumbric
```
6. now remove the params section from dvc.yaml, it should look like this:
```console
$ cat dvc.yaml
stages:
do_something:
cmd: echo "hello world, my name is ${name}"
```
7. `dvc repro` will then print:
```console
Running stage 'do_something':
> echo "hello world, my name is lumbric"
hello world, my name is lumbric
Updating lock file 'dvc.lock'
To track the changes with git, run:
git add dvc.lock
To enable auto staging, run:
dvc config core.autostage true
Use `dvc push` to send your updates to remote storage.
```
8. note that the params section is removed from dvc.lock:
```console
$ cat dvc.lock
cat dvc.lock
schema: '2.0'
stages:
do_something:
cmd: echo "hello world, my name is lumbric"
```
### Expected
A stage should run if a dependency changes or if a parameter changes. Removing all params should be treated as a change regardless of whether dependencies exist. In Example 1 above the stage is not treated as changed and is not run. Example 2 behaves as expected; the only difference is the dependency in Example 1.
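To make the expectation concrete, the comparison I have in mind treats a missing `params` section exactly like a changed value (an illustrative comparison only, not DVC's actual implementation):

```python
locked = {"cmd": "echo ...", "deps": ["in"], "params": {"name": "lumbric"}}
current = {"cmd": "echo ...", "deps": ["in"]}  # params section removed from dvc.yaml


def stage_changed(locked_stage, current_stage):
    # A removed key compares as None vs. the recorded value, i.e. a change.
    keys = set(locked_stage) | set(current_stage)
    return any(locked_stage.get(k) != current_stage.get(k) for k in keys)


print(stage_changed(locked, current))  # True -> the stage should be re-run
```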
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.16.0 (conda)
---------------------------
Platform: Python 3.9.15 on Linux-5.4.0-146-generic-x86_64-with-glibc2.31
Subprojects:
dvc_data = 2.15.4
dvc_objects = 1.0.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.3.1
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3)
Config:
Global: /home/snip/.config/dvc
System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/md1
Caches: local
Remotes: None
Workspace directory: ext4 on /dev/md1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/89d1c207de8c87e2e774811a63cbdd62
``` | closed | 2023-09-05T11:47:02Z | 2023-09-08T14:58:51Z | https://github.com/iterative/dvc/issues/9913 | [
"awaiting response",
"A: pipelines",
"A: params"
] | lumbric | 4 |
schemathesis/schemathesis | graphql | 1,933 | [BUG] Error in exception handling | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I'm getting the following error when schemathesis hits a network error:
```
An internal error occurred during the test run
schemathesis-schemathesis-1 |
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/schemathesis/cli/__init__.py", line 1004, in into_event_stream
yield from runner.from_schema(
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/impl/core.py", line 111, in _generate_events
for event in self._execute(results, stop_event):
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/impl/solo.py", line 22, in _execute
for event in self._execute_impl(results):
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/impl/solo.py", line 30, in _execute_impl
yield from self._run_tests(
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/impl/core.py", line 160, in _run_tests
for event in run_test(
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/impl/core.py", line 396, in run_test
yield events.AfterExecution.from_result(
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/events.py", line 158, in from_result
result=SerializedTestResult.from_test_result(result),
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/serialization.py", line 361, in from_test_result
errors=[SerializedError.from_exception(error) for error in result.errors],
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/serialization.py", line 361, in <listcomp>
errors=[SerializedError.from_exception(error) for error in result.errors],
File "/usr/local/lib/python3.10/site-packages/schemathesis/runner/serialization.py", line 214, in from_exception
message, extras = extract_requests_exception_details(exception)
File "/usr/local/lib/python3.10/site-packages/schemathesis/exceptions.py", line 454, in extract_requests_exception_details
_, reason = exc.args[0].reason.args[0].split(":", maxsplit=1)
AttributeError: 'ProtocolError' object has no attribute 'reason'
schemathesis-schemathesis-1 |
Tip: Please consider reporting the traceback above to our issue tracker:
```
It looks like this bug was introduced with the following commit:
https://github.com/schemathesis/schemathesis/commit/8c4fe09c94e65cec9814d8e36fdf379e2810db43
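The failing line assumes `exc.args[0]` always carries a `.reason`; a defensive sketch of what the extraction could look like instead (only to illustrate the shape of a fix, not a tested patch against the codebase):

```python
def extract_reason(exc):
    """Best-effort message extraction that tolerates args[0] lacking `.reason`."""
    inner = exc.args[0] if exc.args else exc
    reason = getattr(inner, "reason", inner)  # e.g. urllib3's ProtocolError has no .reason
    text = reason.args[0] if getattr(reason, "args", None) else str(reason)
    _, _, detail = str(text).partition(":")
    return detail.strip() or str(text)


wrapped = Exception(Exception("host: connection refused"))
bare = Exception("Remote end closed connection without response")
print(extract_reason(wrapped), "|", extract_reason(bare))
```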
Here is what I got with `v3.20.0`:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 537, in _make_request
response = conn.getresponse()
File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 461, in getresponse
httplib_response = super().getresponse()
File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/usr/local/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
schemathesis-schemathesis-1 |
```
| closed | 2023-12-04T10:38:38Z | 2023-12-04T12:21:10Z | https://github.com/schemathesis/schemathesis/issues/1933 | [
"Priority: Critical",
"Type: Bug"
] | navruzm | 3 |
getsentry/sentry | django | 87,181 | Add details on differences between auto vs manual setup of Cocoa SDK | When configuring an iOS SDK, there are two options: `Auto` and `Manual`.
If choosing `Auto`, it summarizes the steps taken like this:
> The Sentry wizard will automatically patch your application:
>
> - Install the Sentry SDK via Swift Package Manager or Cocoapods
> - Update your AppDelegate or SwiftUI App Initializer with the default Sentry configuration and an example error
> - Add a new Upload Debug Symbols phase to your xcodebuild build script
> - Create .sentryclirc with an auth token to upload debug symbols (this file is automatically added to .gitignore)
> - When you're using Fastlane, it will add a Sentry lane for uploading debug symbols
For the `Manual` setup we only mention the setup of the SDK and the verification example error.
I would like to discuss two points:
1. We should add a warning at the top indicating that the manual process **does not** cover all the steps taken by the wizard.
2. We should give the setup step for uploading debug symbols a more prominent location, i.e. before `Configure SDK` there should be a section "Setup Uploading Debug Symbols" which gives a summary of the purpose and benefits, with a link to the user docs for the relevant setup steps. This would keep the guide slim while not making users skip the step altogether.
cc @jas-kas @kahest | open | 2025-03-17T15:18:20Z | 2025-03-17T15:18:20Z | https://github.com/getsentry/sentry/issues/87181 | [
"Platform: Cocoa",
"Type: Improvement"
] | philprime | 0 |
home-assistant/core | python | 140,491 | ONVIF - No registered handler for event from XX:XX:XX:XX:XX:XX - IPCAM C9F0SEZ0N0P2L0 | ### The problem
Warning for No registered handler for event
Happens after HA is restarted, and from time to time while HA is up and running, with no particular trigger as far as I have seen.
The camera streams correctly though, but I've got thousands of those warnings in the log.
This is a Chinese cam, called IPCAM C9F0SEZ0N0P2L0
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
onvif
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/onvif/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
Logger: homeassistant.components.onvif
Source: components/onvif/event.py:180
integration: ONVIF (documentation, issues)
First occurred: March 12, 2025 at 3:57:06 PM (8201 occurrences)
Last logged: 7:52:03 AM
IPCAM: Unable to parse event from 74:1A:10:B0:94:CD: { 'SubscriptionReference': None, 'Topic': { '_value_1': 'tns1:RuleEngine/CellMotionDetector/Motion', 'Dialect': 'http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet', '_attr_1': { } }, 'ProducerReference': None, 'Message': { '_value_1': { 'Source': { 'SimpleItem': [ { 'Name': 'VideoSourceConfigurationToken', 'Value': '1' } ], 'ElementItem': [], 'Extension': None, '_attr_1': None }, 'Key': None, 'Data': None, 'Extension': None, 'UtcTime': datetime.datetime(2025, 3, 13, 6, 51, 30), 'PropertyOperation': 'Changed', '_attr_1': { }, '_raw_elements': deque([<Element {http://www.onvif.org/ver10/schema}Source at 0x7f4f4e6800>, <Element {http://www.onvif.org/ver10/schema}Source at 0x7f4f4e7b00>, <Element {http://www.onvif.org/ver10/schema}Data at 0x7f4f4e4280>]) } } }
```
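For what it's worth, the payload does contain a well-formed topic, so a tolerant parser could still match it. A plain-Python illustration of pulling the topic out of such a dict (not Home Assistant's actual parsing code):

```python
event = {
    "SubscriptionReference": None,
    "Topic": {
        "_value_1": "tns1:RuleEngine/CellMotionDetector/Motion",
        "Dialect": "http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet",
    },
}


def extract_topic(evt):
    """Return the topic string, or None when the payload has no usable Topic."""
    topic = evt.get("Topic") or {}
    return topic.get("_value_1")


print(extract_topic(event))  # tns1:RuleEngine/CellMotionDetector/Motion
```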
### Additional information
_No response_ | open | 2025-03-13T08:21:59Z | 2025-03-15T15:24:18Z | https://github.com/home-assistant/core/issues/140491 | [
"integration: onvif"
] | tslpre | 7 |
explosion/spaCy | data-science | 12,590 | Incorrect lemma for "taxes" | ## Description of issue
I'm not sure if this should be an accepted side-effect of the model or not, so I apologize in advance if this is not considered a bug. When obtaining the lemma for "taxes", it returns "taxis".
## How to reproduce the behaviour
```
import spacy
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("sentencizer")
doc = nlp('taxes are high')
sentence = list(doc.sents)[0]
words = [token.lemma_ for token in sentence]
print((sentence.text, words))
```
## Actual output
```
('taxes are high', ['taxis', 'be', 'high'])
```
## Info about spaCy
- **spaCy version:** 3.2.4
- **Platform:** Linux-5.10.0-15-amd64-x86_64-with-glibc2.31
- **Python version:** 3.9.2
- **Pipelines:** en_core_web_sm (3.2.0)
| closed | 2023-05-02T17:54:35Z | 2023-06-08T00:02:17Z | https://github.com/explosion/spaCy/issues/12590 | [
"lang / en",
"feat / lemmatizer",
"resolved"
] | jaleskovec | 3 |
geopandas/geopandas | pandas | 2,654 | BUG: GeoDataFrame.to_parquet writes geometry in EWKB instead of ISO WKB required by spec | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [x] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
```python
import geopandas
geopandas.GeoDataFrame(geometry=geopandas.GeoSeries.from_wkt(['POINT Z (1 2 3)'])) \
.to_parquet('pointz.parquet', schema_version='0.4.0')
import pyarrow.parquet as pq
print(pq.read_table('pointz.parquet').to_pydict())
{'geometry': [b'\x01\x01\x00\x00\x80\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@']}
```
#### Problem description
The [0.4.0 GeoParquet specification](https://github.com/opengeospatial/geoparquet/blob/v0.4.0/format-specs/geoparquet.md#encoding) explicitly requires encoding extended geometry types in ISO WKB:
> using codes for 3D geometry types in the [1001,1007] range
The geometry was instead encoded using EWKB 0x80000001 code, making the resulting file invalid GeoParquet.
As I understand, GEOS has had `GEOSWKBWriter_setFlavor` since 3.10, which allows writing ISO WKB correctly. I understand that there are currently several different code paths implementing WKB writing, and implementing such support for all of them could be challenging. But even raising an error early on unsupported geometry types would, in my opinion, be preferable to silently writing invalid data files.
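For reference, the two encodings differ only in how the Z dimension is flagged: EWKB sets the high bit (0x80000000) on the 2D type code, while ISO WKB adds 1000 to it. A small sketch of the mapping (my own illustration of the two conventions, not geopandas code):

```python
EWKB_Z_FLAG = 0x80000000


def ewkb_type_to_iso(code):
    """Map an EWKB geometry type code to its ISO WKB equivalent."""
    base = code & 0xFF  # 2D geometry type (1 = Point, 2 = LineString, ...)
    if code & EWKB_Z_FLAG:
        return 1000 + base  # ISO 3D codes live in the [1001, 1007] range
    return base


print(hex(0x80000001), "->", ewkb_type_to_iso(0x80000001))  # 0x80000001 -> 1001
```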
#### Expected Output
`{'geometry': [b'\x01\xe9\x03\x00\x00\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@']}`
Alternatively, an error explaining that serializing POINT Z geometry type in GeoParquet is not currently supported.
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)]
executable : /home/nofitserov/.cache/pypoetry/virtualenvs/test-d9F45URs-py3.9/bin/python
machine : Linux-5.14.18-100.fc33.x86_64-x86_64-with-glibc2.32
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.8.1
GEOS lib : /usr/lib64/libgeos_c.so
GDAL : 3.4.3
GDAL data dir: /home/nofitserov/.cache/pypoetry/virtualenvs/test-d9F45URs-py3.9/lib64/python3.9/site-packages/fiona/gdal_data
PROJ : 9.1.0
PROJ data dir: /home/nofitserov/.cache/pypoetry/virtualenvs/test-d9F45URs-py3.9/lib64/python3.9/site-packages/pyproj/proj_dir/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.1
numpy : 1.23.5
pandas : 1.5.1
pyproj : 3.4.0
shapely : 1.8.5.post1
fiona : 1.8.22
geoalchemy2: None
geopy : None
matplotlib : 3.6.2
mapclassify: 2.4.3
pygeos : None
pyogrio : v0.4.2
psycopg2 : None
pyarrow : 10.0.0
rtree : None
</details>
| closed | 2022-11-23T16:12:10Z | 2023-02-11T10:36:10Z | https://github.com/geopandas/geopandas/issues/2654 | [
"bug"
] | himikof | 3 |
miguelgrinberg/Flask-Migrate | flask | 252 | sqlalchemy.exc.OperationalError: when trying to migrate existing database | Hi Miguel,
I'm having issues migrating a sqlite database (dev version; production is mysql). I needed to change a String length. The migration script is created as follows:
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.alter_column('answer', 'answer',
               existing_type=sa.VARCHAR(length=255),
               type_=sa.String(length=1024),
               existing_nullable=True)
```
however, when trying to upgrade, it gives me a whole lot of information in the console (copied over here: https://shrib.com/#rqyZSZa2WgSaMbVjOm1T), but I really cannot see anything useful pointing to the cause. The error trail ends with this:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "ALTER": syntax error [SQL: 'ALTER TABLE answer ALTER COLUMN answer TYPE VARCHAR(1024)'] (Background on this error at: http://sqlalche.me/e/e3q8)
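Incidentally, the failure reproduces with the standard library alone: SQLite simply has no `ALTER TABLE ... ALTER COLUMN` statement, so the generated DDL is invalid for it regardless of Flask-Migrate:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE answer (answer VARCHAR(255))")
try:
    conn.execute("ALTER TABLE answer ALTER COLUMN answer TYPE VARCHAR(1024)")
except sqlite3.OperationalError as exc:
    print("SQLite rejects it:", exc)  # near "ALTER": syntax error
```

(From what I've read, Alembic's batch mode, i.e. passing `render_as_batch=True` when initializing `Migrate`, is the usual workaround, since it recreates the table instead of altering the column; the MySQL production database is unaffected.)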
any advice on how to solve this?
many thanks!
| closed | 2019-02-04T20:58:25Z | 2021-07-09T13:31:10Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/252 | [
"question"
] | git-bone | 6 |
lanpa/tensorboardX | numpy | 716 | Loosening protobuf version limit breaking downstream packages | **Describe the bug**
The recent removal of the protobuf version limit (https://github.com/lanpa/tensorboardX/pull/712) has breaking implications for downstream packages that depend on tensorboardX and protobuf<3.20.
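In other words, environments that previously satisfied the old pin can now resolve to a protobuf outside the range tensorboardX actually works with. Roughly the kind of range check that is no longer enforced (the bounds here are placeholders, not the project's real pins):

```python
def version_tuple(version):
    """Turn "3.19.6" into a comparable (3, 19) tuple (major.minor only)."""
    return tuple(int(p) for p in version.split(".")[:2])


def in_range(protobuf_version, lower="3.8", upper="3.20"):
    v = version_tuple(protobuf_version)
    return version_tuple(lower) <= v < version_tuple(upper)


print(in_range("3.19.6"), in_range("4.21.0"))  # True False
```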
**Minimal runnable code to reproduce the behavior**
```
$ pip install tensorboardX==2.6.2
$ pip install "protobuf<3.20"
$ python
>>> import tensorboardX
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/__init__.py", line 5, in <module>
from .torchvis import TorchVis
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/torchvis.py", line 10, in <module>
from .writer import SummaryWriter
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/writer.py", line 16, in <module>
from .comet_utils import CometLogger
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/comet_utils.py", line 7, in <module>
from .summary import _clean_tag
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/summary.py", line 12, in <module>
from .proto.summary_pb2 import Summary
File "/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/tensorboardX/proto/summary_pb2.py", line 5, in <module>
from google.protobuf.internal import builder as _builder
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/home/lab_phe3223/anaconda3/envs/tensorboard/lib/python3.8/site-packages/google/protobuf/internal/__init__.py)
...
``` | closed | 2023-08-01T17:12:23Z | 2023-08-23T17:16:22Z | https://github.com/lanpa/tensorboardX/issues/716 | [] | psfoley | 7 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 383 | Do we have an output parser to get a certain output format | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2024-06-14T18:37:05Z | 2024-06-14T18:43:54Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/383 | [] | Vikrant-Khedkar | 1 |
rougier/from-python-to-numpy | numpy | 108 | anatomy of an array: multidimensional array start and stop from view | At the end of the section "Anatomy of an array" you give an example of how given a view of a multidimensional array we can infer the start:stop:step structure of each index. I have compiled the code and it works fine. But there is a line of code that I have trouble following:
`offset_stop = (np.byte_bounds(view)[-1] - np.byte_bounds(base)[-1]-1)//itemsize`
Following the one-dimensional array example, we are trying to find the offset at the end of the two arrays; for this reason we use the address of the last byte in the view array minus the address of the last byte in the base array. This difference is converted into a difference in terms of memory blocks by dividing by itemsize.
However there is a '-1' in the expression; initially I thought it was a mistake but if I remove it the code does not work. Still I cannot make sense of it since it does not correspond to an offset in memory blocks. I am sure I am missing something trivial that would be immediately recognized by someone with more experience. Would you be able to help? | open | 2023-01-12T20:59:02Z | 2023-02-14T10:00:08Z | https://github.com/rougier/from-python-to-numpy/issues/108 | [] | empeirikos | 5 |
flairNLP/flair | nlp | 2,773 | Fine-tuning or extended training of target language in NER few-shot transfer? | Firstly, I would like to thank everyone contributing to this easy-to-use, well structured and open-source framework.
I am currently writing my bachelor thesis and have been using the flair framework daily over the last weeks, trying out different languages in zero- and few-shot transfer in order to figure out what does or does not hinder knowledge transfer on the NER task.
Please excuse me if my questions seem self-explanatory to you, since I am a novice in this particular field.
The code below is the training I have implemented for all languages (all derived from the WikiANN dataset, downsampled to 20k/10k/10k train, dev and test sets respectively, where applicable). I have tried out both approaches of the [FLERT paper](https://arxiv.org/pdf/2011.06993.pdf) and stuck with the second one (feature-based) according to my [results](https://docs.google.com/spreadsheets/d/e/2PACX-1vRm4IqGv45NF5FxvUEVbC-LXoh5OU-jP-0Iyml8B3-EMW7vQQboxspr4d_FjbUzhGQgGCMvgh7tHpG3/pubhtml) and issue #2732.
#### Training of the source language (here English):
```ruby
# import libs
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
# set language abbreviation
lan = "en"
# path to dataset splits
data_folder = f"/content/gdrive/MyDrive/data/{lan}"
# set column scheme
columns = {0: "text", 1: "ner"}
# create the corpus
corpus: Corpus = ColumnCorpus(data_folder, columns,
train_file="train.txt",
test_file="test.txt",
dev_file="dev.txt",
tag_to_bioes=None,
column_delimiter="\t",
comment_symbol="#")
# label to predict
label_type = 'ner'
# make the label dictionary from the corpus
label_dict = corpus.make_label_dictionary(label_type=label_type)
# initialize non-fine-tuneable transformer embeddings
embeddings = TransformerWordEmbeddings(model='xlm-roberta-base',
layers="all",
subtoken_pooling="first",
fine_tune=False,
use_context=True)
# initialize sequence tagger
tagger = SequenceTagger(
embeddings=embeddings,
tag_dictionary=label_dict,
tag_type=label_type,
use_rnn=True,
hidden_size=256,
rnn_layers=1,
use_crf=True,
reproject_embeddings=True,
)
# initialize trainer
trainer = ModelTrainer(tagger, corpus)
# run training
trainer.train(f'/content/gdrive/MyDrive/models/resources/taggers/{lan}/{lan}_tagger',
learning_rate=0.1,
mini_batch_size=32,
max_epochs=300,
checkpoint=True,
embeddings_storage_mode="gpu",
write_weights=True
)
```
## Question 1 on few-shot:
Now that I have my source language trained, would it be more logical to continue training on the target language with the previously used default params (i.e. trainer.**train( )**), or to switch to the trainer.**fine_tune()** default parameters?
Or would the knowledge already acquired from the source language be overwritten in the first approach (**.train()**)?
I have tried out both, but would nevertheless very much appreciate any opinions on this matter.
#### Extended training on the target language (here Tamil with 50 sentences as training set):
```ruby
# set abbreviation of source and target language
source_lan = "en"
target_lan = "ta"
# set column scheme
columns = {0: "text", 1: "ner"}
# path to dataset splits
data_folder = f"/content/gdrive/MyDrive/data/{target_lan}"
# create the corpus
corpus: Corpus = ColumnCorpus(data_folder, columns,
train_file="train.txt",
test_file="test.txt",
dev_file="dev.txt",
tag_to_bioes=None,
column_delimiter="\t",
comment_symbol="#")
# load the model to evaluate
tagger: SequenceTagger = SequenceTagger.load(f'/content/gdrive/MyDrive/models/resources/taggers/{source_lan}/{source_lan}_tagger/best-model.pt')
# initialize trainer
trainer = ModelTrainer(tagger, corpus)
# run training
trainer.train(f'/content/gdrive/MyDrive/models/resources/taggers/{target_lan}/train_{source_lan}_to_{target_lan}_tagger',
learning_rate=0.1,
mini_batch_size=32,
max_epochs=300,
checkpoint=True,
embeddings_storage_mode="gpu",
)
```
#### Fine-tuning on the target language (here Tamil with 50 sentences as training set):
```ruby
# run fine-tuning
trainer.fine_tune(f'/content/gdrive/MyDrive/models/resources/taggers/{target_lan}/fine_tune_{source_lan}_to_{target_lan}_tagger,
learning_rate = 5e-5,
mini_batch_size = 4,
max_epochs = 10, # also tried with 20, just like in FLERT paper
embeddings_storage_mode = "gpu"
)
```
## Question 2 on few-shot:
Since I will be experimenting with different training-set sizes for the target languages (50, 100, 500, 1000 sentences) to show the performance improvement with more samples, would you recommend having a fixed dev and test set size, and if so, what would be a reasonable value (e.g. 2.8k sentences in each of the dev and test sets)?
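What I have in mind could be sketched like this: fixed, seeded dev/test splits shared across all runs, with only the training subset varying (plain-Python sketch; integers stand in for sentence objects):

```python
import random


def fixed_splits(sentences, train_size, dev_size=2800, test_size=2800, seed=38956):
    """Hold dev/test fixed across experiments; only the train subset varies."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    dev = shuffled[:dev_size]
    test = shuffled[dev_size:dev_size + test_size]
    train = shuffled[dev_size + test_size:dev_size + test_size + train_size]
    return train, dev, test


train, dev, test = fixed_splits(range(10000), train_size=50)
print(len(train), len(dev), len(test))  # 50 2800 2800
```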
## Question 3 on few-shot:
Any recommendations on literature for few-shot techniques? | closed | 2022-05-16T16:45:02Z | 2022-11-01T15:04:45Z | https://github.com/flairNLP/flair/issues/2773 | [
"question",
"wontfix"
] | i-partalas | 4 |
PaddlePaddle/ERNIE | nlp | 800 | Error when reproducing ERNIE-GEN on the Persona-chat dataset | When I run
```
MODEL="base" # base or large or large_430g
TASK="personachat" # cnndm, coqa, gigaword, squad_qg or persona-chat
sh run_seq2seq.sh ./configs/${MODEL}/${TASK}_conf
```
This is the output of the log file:
```
----------- Configuration Arguments -----------
current_node_ip: 127.0.1.1
log_prefix:
node_id: 0
node_ips: 127.0.1.1
print_config: True
split_log_path: log
training_script: ./run_seq2seq.py
training_script_args: ['--use_cuda', 'true', '--do_train', 'true', '--do_val', 'true', '--do_test', 'true', '--do_pred', 'false', '--train_set', 'datasets/persona_chat/train.tsv', '--dev_set', 'datasets/persona_chat/dev.2k.tsv', '--test_set', 'datasets/persona_chat/test.tsv', '--pred_set', 'datasets/persona_chat/', '--epoch', '10', '--tokenizer', 'FullTokenizer', '--tokenized_input', 'true', '--task_type', 'normal', '--role_type_size', '0', '--turn_type_size', '0', '--max_src_len', '192', '--max_tgt_len', '64', '--max_dec_len', '32', '--hidden_dropout_prob', '0.1', '--attention_probs_dropout_prob', '-1', '--random_noise', 'true', '--noise_prob', '0.5', '--continuous_position', 'true', '--tgt_type_id', '3', '--batch_size', '16', '--learning_rate', '3e-5', '--lr_scheduler', 'linear_warmup_decay', '--warmup_proportion', '0.1', '--weight_decay', '0.01', '--weight_sharing', 'True', '--label_smooth', '0.1', '--do_decode', 'true', '--beam_size', '5', '--length_penalty', '0.6', '--init_pretraining_params', 'ernie_gen_base/params', '--vocab_path', 'ernie_gen_base/vocab.txt', '--ernie_config_path', 'ernie_gen_base/ernie_config.json', '--checkpoints', './checkpoints', '--save_and_valid_by_epoch', 'true', '--eval_script', 'sh', './eval/tasks/persona_chat/eval.sh', '--eval_mertrics', 'rouge-1,rouge-2,rouge-l', '--random_seed', '38956']
------------------------------------------------
all_trainer_endpoints: 127.0.1.1:6170,127.0.1.1:6171,127.0.1.1:6172 , node_id: 0 , current_ip: 127.0.1.1 , num_nodes: 1 , node_ips: ['127.0.1.1'] , gpus_per_proc: 1 , selected_gpus_per_proc: [[0], [1], [2]] , nranks: 3
```
This is the conf file:
```
#load model
vocab_path="ernie_gen_base/vocab.txt"
config_path="ernie_gen_base/ernie_config.json"
init_model="ernie_gen_base/params"
#input
max_src_len=192
max_tgt_len=64
tokenized_input="true"
continuous_position="true"
batch_size=16
in_tokens="false"
tgt_type_id=3
#decode
do_decode="true"
max_dec_len=32
beam_size=5
length_penalty=0.6
use_multi_gpu_test="true"
#train
epoch=10
weight_decay=0.01
label_smooth=0.1
hidden_dropout_prob=0.1
save_and_valid_by_epoch="true"
#lr
warmup_proportion=0.1
lr_scheduler="linear_warmup_decay"
learning_rate=3e-5
#noise
random_noise="true"
noise_prob=0.5
#dataset
data_path="datasets/persona_chat"
train_set="train.tsv"
dev_set="dev.2k.tsv"
test_set="test.tsv"
do_train="true"
do_val="true"
do_test="true"
do_pred="false"
#evaluate
eval_script="sh ./eval/tasks/persona_chat/eval.sh"
#eval_mertrics="rouge-1,rouge-2,rouge-l,bleu-1,bleu-2,bleu-3,bleu-4,dist-1,dis-2,dist-3,dist-4"
eval_mertrics="rouge-1,rouge-2,rouge-l"
``` | closed | 2022-04-29T08:47:09Z | 2022-07-14T07:41:14Z | https://github.com/PaddlePaddle/ERNIE/issues/800 | [
"wontfix"
] | xiang-xiang-zhu | 3 |
thunlp/OpenPrompt | nlp | 116 | Library does not allow to extract on mask | There is no way to extract the mask prediction... the [extract_at_mask](https://thunlp.github.io/OpenPrompt/modules/base.html?highlight=mask#openprompt.pipeline_base.PromptForClassification.extract_at_mask) method is only implemented for `PromptForClassification`, which only outputs the prediction of the classification layer...
Is there a reason why we can't see the predicted masked tokens? The documentation has very good intentions, but it misses a lot of the library's functionality and does not provide clear examples. I hope someone can shed some light here; I just want to see what the predicted mask token was. | closed | 2022-02-12T15:34:25Z | 2022-03-31T02:17:27Z | https://github.com/thunlp/OpenPrompt/issues/116 | [] | gmihaila | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16664 | [Feature Request]: Added notification sound | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Not "would do", it already does. I'm not sending this up as a pull request; it's more of an FYI for anyone who wants the feature and how to do it.
Does:
Upon completion of image generation, or completion of loading a checkpoint, play a notification sound.
### Proposed workflow
shared_total_tqdm.py:
```
import winsound
...
line 36 +
sound_file = "C:\\Windows\\Media\\notify.wav" # Path to the sound file
winsound.PlaySound(sound_file, winsound.SND_FILENAME)
```
sd_models.py:
same as above, + line 348 after
` timer.record("load weights from disk")`
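A slightly more defensive variant of the same idea (a hypothetical helper, not code that currently exists in the webui) would keep the `winsound` import from breaking non-Windows installs and skip silently when the sound file is missing:

```python
import os
import sys

def play_notification(sound_file="C:\\Windows\\Media\\notify.wav"):
    """Play a notification sound; quietly do nothing where unsupported."""
    if sys.platform != "win32" or not os.path.exists(sound_file):
        return False
    import winsound  # Windows-only stdlib module
    winsound.PlaySound(sound_file, winsound.SND_FILENAME)
    return True
```

The two call sites above would then just call `play_notification()`.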
### Additional information
That little notification sound is a very small but huge QoL improvement, imo. Merge if you like/want/whatever. | open | 2024-11-18T01:46:09Z | 2024-11-19T05:59:47Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16664 | [
"enhancement"
] | hackedpassword | 3 |
errbotio/errbot | automation | 1692 | Package installation issues | Hey everyone,
I would really appreciate some help with getting errbot to connect to a mattermost-preview server running locally.
**Issue description**
I have a mattermost-preview server successfully running on a docker image, with an account set up.
I'm having trouble with cryptography and openssl.
I get this error when running errbot in the backend directory:
```
11:32:32 ERROR errbot.bootstrap Some plugins failed to load:
from cryptography.hazmat.bindings._openssl import ffi, lib
ModuleNotFoundError: No module named '_cffi_backend'
```
I had fixed it once before somehow, but I'm not sure how I did it or how I undid it:
```
PS C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost> errbot
11:32:32 INFO errbot.bootstrap Found Storage plugin: Shelf.
11:32:32 INFO errbot.bootstrap Found Backend plugin: Mattermost
11:32:32 DEBUG errbot.storage Opening storage 'repomgr'
11:32:32 DEBUG errbot.storage.shelf Open shelf storage C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\data\repomgr.db
11:32:32 DEBUG errbot.core ErrBot init.
11:32:32 DEBUG errbot.backends.base Backend init.
11:32:32 DEBUG errbot.core created a thread pool of size 10.
11:32:32 DEBUG errbot.storage Opening storage 'core'
11:32:32 DEBUG errbot.storage.shelf Open shelf storage C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\data\core.db
11:32:32 DEBUG errbot.core Initializing backend storage
11:32:32 DEBUG errbot.storage Opening storage 'mattermost_backend'
11:32:32 DEBUG errbot.storage.shelf Open shelf storage C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\data\mattermost_backend.db
11:32:32 DEBUG errbot.plugin_manager New entries added to sys.path:
11:32:32 DEBUG errbot.plugin_manager C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\core_plugins
11:32:32 DEBUG errbot.plugin_manager C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\plugins\err-example
11:32:32 DEBUG errbot.plugins.ACLs Logger for plugin ACLs initialized...
11:32:32 DEBUG errbot.plugins.Backup Logger for plugin Backup initialized...
11:32:32 DEBUG errbot.plugins.ChatRoom Logger for plugin ChatRoom initialized...
11:32:32 DEBUG errbot.plugins.CommandNot Logger for plugin CommandNotFoundFilter initialized...
11:32:32 DEBUG errbot.plugins.Flows Logger for plugin Flows initialized...
11:32:32 DEBUG errbot.plugins.Health Logger for plugin Health initialized...
11:32:32 DEBUG errbot.plugins.Help Logger for plugin Help initialized...
11:32:32 DEBUG errbot.plugins.Plugins Logger for plugin Plugins initialized...
11:32:32 DEBUG errbot.plugins.TextCmds Logger for plugin TextCmds initialized...
11:32:32 DEBUG errbot.plugins.Utils Logger for plugin Utils initialized...
11:32:32 DEBUG errbot.plugins.VersionChe Logger for plugin VersionChecker initialized...
11:32:32 DEBUG errbot.plugins.Example Logger for plugin Example initialized...
11:32:32 ERROR errbot.bootstrap Some plugins failed to load:
Traceback (most recent call last):
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\plugin_manager.py", line 289, in _load_plugins_generic
plugin_classes = plugin_info.load_plugin_classes(
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\plugin_info.py", line 100, in load_plugin_classes
    spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\core_plugins\webserver.py", line 9, in <module>
from OpenSSL import crypto
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\OpenSSL\__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\OpenSSL\crypto.py", line 16, in <module>
from OpenSSL._util import (
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\OpenSSL\_util.py", line 6, in <module>
from cryptography.hazmat.bindings.openssl.binding import Binding
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\cryptography\hazmat\bindings\openssl\binding.py", line 14, in <module>
from cryptography.hazmat.bindings._openssl import ffi, lib
ModuleNotFoundError: No module named '_cffi_backend'
11:32:32 DEBUG errbot.bootstrap Start serving commands from the mattermost backend.
11:32:32 DEBUG urllib3.connectionpool Starting new HTTP connection (1): localhost:8065
11:32:32 DEBUG urllib3.connectionpool http://localhost:8065 "POST /api/v4/users/login HTTP/1.1" 200 707
11:32:32 DEBUG urllib3.connectionpool Starting new HTTP connection (1): localhost:8065
11:32:32 DEBUG urllib3.connectionpool http://localhost:8065 "GET /api/v4/teams/name/George%20testing HTTP/1.1" 404 270
11:32:32 ERROR mattermostdriver.websocke Sorry, we could not find the page.
11:32:32 ERROR errbot.backends.base Exception occurred in serve_once:
Traceback (most recent call last):
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\backends\base.py", line 869, in serve_forever
if self.serve_once():
File "C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\src\err-backend-mattermost\err-backend-mattermost.py", line 347, in serve_once
self.teamid = self.driver.teams.get_team_by_name(name=self.team)["id"]
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\endpoints\teams.py", line 46, in get_team_by_name
return self.client.get(
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\client.py", line 193, in get
response = self.make_request('get', endpoint, options=options, params=params)
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\client.py", line 179, in make_request
raise ResourceNotFound(message,self._basepath) from None
mattermostdriver.exceptions.ResourceNotFound: [Errno Sorry, we could not find the page.] /api/v4
11:32:32 INFO errbot.backends.base Reconnecting in 1 seconds (0 attempted reconnections so far).
11:32:33 DEBUG urllib3.connectionpool Starting new HTTP connection (1): localhost:8065
11:32:33 DEBUG urllib3.connectionpool http://localhost:8065 "POST /api/v4/users/login HTTP/1.1" 200 707
11:32:33 DEBUG urllib3.connectionpool Starting new HTTP connection (1): localhost:8065
11:32:33 DEBUG urllib3.connectionpool http://localhost:8065 "GET /api/v4/teams/name/George%20testing HTTP/1.1" 404 270
11:32:33 ERROR mattermostdriver.websocke Sorry, we could not find the page.
11:32:33 ERROR errbot.backends.base Exception occurred in serve_once:
Traceback (most recent call last):
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\errbot\backends\base.py", line 869, in serve_forever
if self.serve_once():
File "C:\Users\marcu\Documents\2pi\LLMBot\err-backend-mattermost\src\err-backend-mattermost\err-backend-mattermost.py", line 347, in serve_once
self.teamid = self.driver.teams.get_team_by_name(name=self.team)["id"]
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\endpoints\teams.py", line 46, in get_team_by_name
return self.client.get(
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\client.py", line 193, in get
response = self.make_request('get', endpoint, options=options, params=params)
File "C:\Users\marcu\.conda\envs\errbot-backend\lib\site-packages\mattermostdriver\client.py", line 179, in make_request
raise ResourceNotFound(message,self._basepath) from None
mattermostdriver.exceptions.ResourceNotFound: [Errno Sorry, we could not find the page.] /api/v4
```
**Steps to reproduce**
Here's the packages installed on the errbot conda environment
```
# packages in environment at C:\Users\marcu\.conda\envs\errbot-backend:
#
# Name Version Build Channel
ansi 0.3.6 pypi_0 pypi
asyncio 3.4.3 pypi_0 pypi
autopep8 2.1.0 pypi_0 pypi
beautifulsoup4 4.12.3 pypi_0 pypi
blinker 1.8.1 pypi_0 pypi
bzip2 1.0.8 h2bbff1b_6
ca-certificates 2024.3.11 haa95532_0
certifi 2024.2.2 pypi_0 pypi
cffi 1.16.0 pypi_0 pypi
charset-normalizer 3.3.2 pypi_0 pypi
click 8.1.7 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
colorlog 6.7.0 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
deepmerge 1.1.0 pypi_0 pypi
distlib 0.3.8 pypi_0 pypi
dulwich 0.21.5 pypi_0 pypi
errbot 6.2.0 pypi_0 pypi
expat 2.6.2 hd77b12b_0
filelock 3.14.0 pypi_0 pypi
flask 2.3.3 pypi_0 pypi
idna 3.7 pypi_0 pypi
importlib-metadata 7.1.0 pypi_0 pypi
itsdangerous 2.2.0 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
libffi 3.4.4 hd77b12b_1
markdown 3.4.4 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
mattermost 6.5.0 pypi_0 pypi
mattermostdriver 7.3.2 pypi_0 pypi
openssl 3.0.13 h2bbff1b_1
pip 23.3.1 py310haa95532_0
platformdirs 4.2.1 pypi_0 pypi
pycodestyle 2.11.1 pypi_0 pypi
pycparser 2.22 pypi_0 pypi
pygments 2.16.1 pypi_0 pypi
pygments-markdown-lexer 0.1.0.dev39 pypi_0 pypi
pyopenssl 19.0.0 pypi_0 pypi
python 3.10.14 he1021f5_0
requests 2.31.0 pypi_0 pypi
setuptools 68.1.2 pypi_0 pypi
six 1.16.0 pypi_0 pypi
soupsieve 2.5 pypi_0 pypi
sqlite 3.45.3 h2bbff1b_0
tk 8.6.12 h2bbff1b_0
tomli 2.0.1 pypi_0 pypi
tzdata 2024a h04d1e81_0
urllib3 2.2.1 pypi_0 pypi
vc 14.2 h21ff451_1
virtualenv 20.26.1 pypi_0 pypi
vs2015_runtime 14.27.29016 h5e58377_2
waitress 3.0.0 pypi_0 pypi
webob 1.8.7 pypi_0 pypi
websockets 12.0 pypi_0 pypi
webtest 3.0.0 pypi_0 pypi
werkzeug 2.2.2 pypi_0 pypi
wheel 0.43.0 py310haa95532_0
xz 5.4.6 h8cc25b3_1
zipp 3.18.1 pypi_0 pypi
zlib 1.2.13 h8cc25b3_1
zope-event 5.0 pypi_0 pypi
zope-interface 6.3 pypi_0 pypi
zope-schema 7.0.1 pypi_0 pypi
```
I am using docker for the server and did have an issue when trying to connect to the server separately.
I'm running this on Windows 11
**Environment:**
- Errbot version: 6.2.0
- OS version: Windows 11
- Python version: 3.10.14
- conda version : 23.7.4
- conda-build version : 3.26.0
- python version : 3.9.17.final.0
- Using Docker: yes
| open | 2024-05-09T01:47:06Z | 2024-05-09T01:47:34Z | https://github.com/errbotio/errbot/issues/1692 | [
"type: support/question"
] | Marcusg33 | 0 |
hankcs/HanLP | nlp | 700 | Traditional-to-simplified conversion turns "陷阱" (trap) into "猫腻" (fishy business) | <!--
The checklist and version number are required; issues without them will not be answered. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
  - [Home page docs](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that an open-source community is a voluntary community of enthusiasts and takes on no responsibilities or obligations. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm all of the above.
## Version
<!-- For release builds, give the jar file name without its extension; for the GitHub repository version, state whether you are on the master or portable branch -->
The current latest version is: hanlp-1.5.2
The version I am using is: hanlp-1.5.2
<!-- The items above are required; the rest is free-form -->
## My issue
During traditional-to-simplified conversion, the simplified word "陷阱" (trap) is replaced with "猫腻" (fishy business).
System.out.println(HanLP.convertToSimplifiedChinese("瓦爾.閃電陷阱"));
Output: 瓦尔.闪电猫腻
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
## Reproducing the issue
<!-- What did you do to trigger the problem? E.g. did you modify the code? The dictionaries or models? -->
No modifications to the code, dictionaries, or models.
### Steps
### Triggering code
```
System.out.println(HanLP.convertToSimplifiedChinese("瓦爾.閃電陷阱"));
```
### Expected output
<!-- What correct result do you expect? -->
```
瓦尔.闪电陷阱
```
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
瓦尔.闪电猫腻
```
## Other information
<!-- Any potentially useful information: screenshots, logs, configuration files, related issues, etc. -->
| closed | 2017-11-29T09:10:01Z | 2017-12-02T03:17:07Z | https://github.com/hankcs/HanLP/issues/700 | [
"improvement"
] | lucifering | 2 |
postmanlabs/httpbin | api | 291 | Would it be possible to add a "/timeout" endpoint to httpbin? | My use case is building a client that understands the unreliability of distributed systems, and attempts backoffs, retries and other stuff. Having an endpoint which intentionally never returns a response or similarly represents the unreliability here would be really useful. Let me know if I can provide more detail.
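As a sketch of the client-side logic such an endpoint would exercise (generic retry-with-backoff code, not httpbin itself; `do_request`, `attempts`, and `base_delay` are made-up names for illustration):

```python
import time

def fetch_with_retries(do_request, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Call do_request() until it succeeds, backing off exponentially.

    do_request should raise on timeout/failure and return a response on
    success; sleep is injectable so tests don't actually have to wait.
    """
    for attempt in range(attempts):
        try:
            return do_request()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Against an endpoint that never responds, `do_request` would be a GET with a client-side timeout, so every attempt raises and the final one propagates.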
| closed | 2016-06-16T10:59:19Z | 2018-04-26T17:51:10Z | https://github.com/postmanlabs/httpbin/issues/291 | [] | fables-tales | 3 |
sqlalchemy/alembic | sqlalchemy | 550 | consider rationale for ModifyTableOps emitting "pass" for no operations | I was using alembic's rewrite feature to ignore comment changes (temporarily, not important for the issue):
```python
@writer.rewrites(ops.AlterColumnOp)
def rewrite_alter_column(context, revision, op):
op.modify_comment = False
if not op.has_changes():
return []
return op
```
the resulting migration file has many `pass` lines:
```python
def upgrade():
pass
pass
pass
...
```
It would be nice if alembic wouldn't add unnecessary `pass` lines.
I found the cause:
https://github.com/sqlalchemy/alembic/blob/d46de05b8b3281a85e6b107ef3f3407e232eb9e9/alembic/autogenerate/render.py#L117
---
I found a workaround by rewriting `ModifyTableOps` after the first rewriter has finished:
```python
writer1 = Rewriter()
writer2 = Rewriter()
@writer1.rewrites(ops.AlterColumnOp)
def rewrite_alter_column(context, revision, op):
op.modify_comment = False
if not op.has_changes():
return []
return op
@writer2.rewrites(ops.ModifyTableOps)
def rewrite_modify_table_ops(context, revision, op):
if op.is_empty():
return []
return op
writer = writer1.chain(writer2)
``` | closed | 2019-04-01T13:05:40Z | 2019-09-17T23:09:11Z | https://github.com/sqlalchemy/alembic/issues/550 | [
"bug",
"autogenerate - rendering"
] | RazerM | 3 |
man-group/arctic | pandas | 228 | Mongo fail-over during append can leave a Version in an inconsistent state | We can trigger this assertion:
```
...
File "/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/ahl.mongo-1.297.0-py2.7-linux-x86_64.egg/ahl/mongo/mongoose/store/version_store.py", line 105, in append
return super(VersionStore, self).append(symbol, data, metadata, prune_previous_version, upsert, **kwargs)
File "/opt/ahl/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/arctic-1.4.0-py2.7-linux-x86_64.egg/arctic/decorators.py", line 50, in f_retry
return f(*args, **kwargs)
File "/opt/ahl/app/AHL/packages/ahl.tickdownsample/1.16.0-py2.7/app/AHL/ahl.tickdownsample/lib/python2.7/site-packages/arctic-1.4.0-py2.7-linux-x86_64.egg/arctic/store/version_store.py", line 449, in append
Append not possible - please call write() to get versions back in sync''')
ArcticException: version_nums is out of sync with previous version document.
This probably means that either a version document write has previously failed, or the previous version has been deleted.
Append not possible - please call write() to get versions back in sync
```
in append, as the `version_nums` are updated before the data is actually written.
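A toy model of the failure mode (nothing arctic-specific; the class and method names are invented) shows why bumping the counter before the write leaves the two out of sync once a fail-over interrupts the operation:

```python
class ToyStore:
    def __init__(self):
        self.version_num = 0   # incremented first...
        self.versions = []     # ...then the version document is written

    def append(self, data, fail_before_write=False):
        self.version_num += 1
        if fail_before_write:          # e.g. a mongo fail-over happens here
            raise IOError("fail-over during append")
        self.versions.append(data)

    def in_sync(self):
        return self.version_num == len(self.versions)
```

After one interrupted `append`, `in_sync()` stays false, which is exactly the state the exception above refuses to append onto.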
| closed | 2016-09-19T17:22:37Z | 2016-09-20T18:38:17Z | https://github.com/man-group/arctic/issues/228 | [
"bug"
] | jamesblackburn | 3 |
vitalik/django-ninja | pydantic | 921 | [BUG] Package generates invalid openapi.json files | **Describe the bug**
The `openapi.json` file states that it is compliant with version 3.0.2, but it uses features that are only valid in 3.1.0.
Pydantic docs [say](https://docs.pydantic.dev/latest/why/): "Pydantic generates [JSON Schema version 2020-12](https://json-schema.org/draft/2020-12/release-notes.html), the latest version of the standard which is compatible with [OpenAPI 3.1](https://www.openapis.org/blog/2021/02/18/openapi-specification-3-1-released)." OpenAPI 3.0.2 supports an older JSON spec, [JSON Schema Specification Wright Draft 00](https://tools.ietf.org/html/draft-wright-json-schema-00#section-4.2).
I think the solution to this could be to just increase the OpenAPI version in the generated file to 3.1.0.
Here is an example of the errors this issue causes. This schema:
```python
class Book(Schema):
"""Used for filtering."""
title: str | None = None
```
will generate a chunk like this:
```json
"Book": {
"properties": {
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"title": "Title"
}
},
"title": "Book",
"type": "object"
}
```
`"type": "null"` is not a valid data type in OpenAPI 3.0.2.
From [here](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.3.md):
```
null is not supported as a type (see [nullable](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.3.md#schemaNullable) for an alternative solution).
```
This language is removed from the OpenAPI 3.1.0 spec [here](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.1.0.md).
To support nullable values in 3.0.2, `nullable` should be used, like this:
```json
{
"type": "string",
"nullable": true
}
```
I've also found issues with nested lists in schemas. This won't generate a valid 3.0.2 schema, because it doesn't support nested arrays:
```python
class Test(Schema):
ranks: list[tuple[conint(gt=0), confloat(ge=0, le=1)]]
```
We've been using [this package](https://github.com/python-openapi/openapi-spec-validator) to validate the OpenAPI files.
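In the meantime, output that must stay 3.0.2-valid could in principle be fixed up with a post-processing pass along these lines (a sketch, not a django-ninja API; it only handles the simple two-branch `Optional[T]` case):

```python
def downgrade_nullable(schema):
    """Rewrite anyOf: [T, {"type": "null"}] into OpenAPI 3.0-style nullable."""
    if isinstance(schema, list):
        return [downgrade_nullable(s) for s in schema]
    if not isinstance(schema, dict):
        return schema
    any_of = schema.get("anyOf")
    if any_of and len(any_of) == 2 and {"type": "null"} in any_of:
        other = next(s for s in any_of if s != {"type": "null"})
        merged = {k: downgrade_nullable(v) for k, v in schema.items() if k != "anyOf"}
        merged.update(downgrade_nullable(other))
        merged["nullable"] = True
        return merged
    return {k: downgrade_nullable(v) for k, v in schema.items()}
```

Applied to the `Book` chunk above, the `title` property comes out as `{"type": "string", "title": "Title", "nullable": true}`.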
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.2
- Django-Ninja version: 1.0.0rc
- Pydantic version: 2.5
| closed | 2023-11-13T23:02:48Z | 2023-11-16T16:35:46Z | https://github.com/vitalik/django-ninja/issues/921 | [] | scott-8 | 0 |
awesto/django-shop | django | 719 | Failed building wheel for rcssmin | ----------------------------------------
Failed building wheel for hiredis
Running setup.py clean for hiredis
Running setup.py bdist_wheel for rcssmin ... error
Complete output from command /home/c013ra/Desktop/django-shop/myenv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-24x5vmdv/rcssmin/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-ip9ie3jm --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
copying ./rcssmin.py -> build/lib.linux-x86_64-3.5
running build_ext
building '_rcssmin' extension
creating build/temp.linux-x86_64-3.5
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DEXT_MODULE=_rcssmin -UEXT_PACKAGE -I_setup/include -I/usr/include/python3.5m -I/home/c013ra/Desktop/django-shop/myenv/include/python3.5m -c rcssmin.c -o build/temp.linux-x86_64-3.5/rcssmin.o
In file included from rcssmin.c:18:0:
_setup/include/cext.h:34:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for rcssmin
Running setup.py clean for rcssmin
Running setup.py bdist_wheel for rjsmin ... error
Complete output from command /home/c013ra/Desktop/django-shop/myenv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-24x5vmdv/rjsmin/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-fe4qkzhn --python-tag cp35:
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
copying ./rjsmin.py -> build/lib.linux-x86_64-3.5
running build_ext
building '_rjsmin' extension
creating build/temp.linux-x86_64-3.5
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DEXT_MODULE=_rjsmin -UEXT_PACKAGE -I_setup/include -I/usr/include/python3.5m -I/home/c013ra/Desktop/django-shop/myenv/include/python3.5m -c rjsmin.c -o build/temp.linux-x86_64-3.5/rjsmin.o
In file included from rjsmin.c:18:0:
_setup/include/cext.h:34:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for rjsmin
Running setup.py clean for rjsmin
Failed to build hiredis rcssmin rjsmin
django-filer 1.3.0 has requirement easy-thumbnails<2.5,>=2, but you'll have easy-thumbnails 2.5 which is incompatible.
django-haystack 2.5.0 has requirement Django<1.10, but you'll have django 1.10 which is incompatible.
django-allauth 0.35.0 has requirement Django>=1.11, but you'll have django 1.10 which is incompatible.
Installing collected packages: rcssmin, rjsmin, django-compressor, Unidecode, django-polymorphic, olefile, Pillow, easy-thumbnails, django-mptt, django-filer, django-filter, django-fsm, django-fsm-admin, django-haystack, django-ipware, django-parler, jsonfield, django-post-office, redis, django-redis-cache, django-redis-sessions, djangorestframework, six, django-rest-auth, django-sass-processor, djangocms-bootstrap3, html5lib, djangocms-text-ckeditor, djangocms-cascade, python-dateutil, drf-haystack, elasticsearch, hiredis, libsass, python-openid, pytz, virtualenv, wheel, beautifulsoup4, bs4
Running setup.py install for rcssmin ... error
Complete output from command /home/c013ra/Desktop/django-shop/myenv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-24x5vmdv/rcssmin/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-y8abyajq/install-record.txt --single-version-externally-managed --compile --install-headers /home/c013ra/Desktop/django-shop/myenv/include/site/python3.5/rcssmin:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
copying ./rcssmin.py -> build/lib.linux-x86_64-3.5
running build_ext
building '_rcssmin' extension
creating build/temp.linux-x86_64-3.5
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DEXT_MODULE=_rcssmin -UEXT_PACKAGE -I_setup/include -I/usr/include/python3.5m -I/home/c013ra/Desktop/django-shop/myenv/include/python3.5m -c rcssmin.c -o build/temp.linux-x86_64-3.5/rcssmin.o
In file included from rcssmin.c:18:0:
_setup/include/cext.h:34:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/home/c013ra/Desktop/django-shop/myenv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-24x5vmdv/rcssmin/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-y8abyajq/install-record.txt --single-version-externally-managed --compile --install-headers /home/c013ra/Desktop/django-shop/myenv/include/site/python3.5/rcssmin" failed with error code 1 in /tmp/pip-install-24x5vmdv/rcssmin/
| closed | 2018-04-01T05:12:16Z | 2018-11-08T16:43:20Z | https://github.com/awesto/django-shop/issues/719 | [] | ghost | 3 |
google-research/bert | tensorflow | 580 | Model Hyper Parameters to change after pretraining on the custom dataset | I had run the BERT pretraining code on a custom dataset, and now I want to know which arguments I should change based on the pretrained model. The only argument I have changed among the three (vocab_file, config_file, init_checkpoint) is init_checkpoint, which I pointed at the latest checkpoint created by the pretraining code. But when I tried to run it I got the following error.
*(screenshot of the first error; image not preserved)*
So I tried changing the vocab_size in bert_config.json and running it again. This is the error I am getting now.

Could you tell me why I am getting this issue?
| closed | 2019-04-15T09:27:05Z | 2019-04-16T10:20:47Z | https://github.com/google-research/bert/issues/580 | [] | aswin-giridhar | 1 |
lundberg/respx | pytest | 90 | Big overhead when mocking 149 urls | I'm currently migrating from aiohttp/aioresponses to httpx/respx, and seeing a large regression in test times. An integration test where I mock 149 URLs, which took 0.5s with aioresponses, now takes 2.2s with respx. `build_request` seems to be the major culprit. I've created a flamegraph with py-spy ([full svg as gist](https://gist.github.com/konstin/f4e17d5c77ebc0b13f4e6f1a7bb27737)):

Test code
```python
from pathlib import Path

import pytest
import respx

@pytest.mark.asyncio
async def test_integration_json():
with respx.mock(
assert_all_mocked=True,
assert_all_called=True,
) as respx_mock:
for file in Path("test-data/snapshots").iterdir(): # Directory with 149 files
respx_mock.get(
"https://www.example.com/" + file.name.replace(".html", ""),
content=file.read_text(),
content_type="text/html",
)
# Actual test logic
``` | closed | 2020-10-09T10:39:37Z | 2020-10-15T08:43:25Z | https://github.com/lundberg/respx/issues/90 | [] | konstin | 10 |
fastapi/sqlmodel | fastapi | 107 | Return a Column class for relationship attributes that require it | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class UpsertByModelMixin(SQLModel):
created_by_id : Optional[int] = Field(default=None, foreign_key="users.id")
created_by: Optional["User"] = Relationship(sa_relationship_kwargs={ 'foreign_keys': [created_by_id] })
updated_by_id : Optional[int] = Field(default=None, foreign_key="users.id")
updated_by: Optional["User"] = Relationship(sa_relationship_kwargs={ 'foreign_keys': [updated_by_id] })
class Team(UpsertByModelMixin, SQLModel, table=True,):
__tablename__ = 'teams'
id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(max_length=500)
```
### Description
Using models such as the above will result in the following error, because the `foreign_keys` argument expects a `Column` (not a SQLModel `Field`):
```
| sqlalchemy.exc.ArgumentError: Column expression expected for argument 'foreign_keys'; got FieldInfo(default=PydanticUndefined, extra={'exclude': None, 'include': None}).
```
As such, it's impossible right now for anyone to deploy classes with multiple FKs to the same identity and/or use custom-named foreign key columns.
### Wanted Solution
To be able to specify foreign keys to the same related entity and/or name my fk columns however I choose.
### Wanted Code
```python
Just like above
```
### Alternatives
_No response_
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.5
### Additional Context
_No response_ | open | 2021-09-20T16:53:55Z | 2024-05-28T20:25:13Z | https://github.com/fastapi/sqlmodel/issues/107 | [
"feature"
] | ohmeow | 3 |
marshmallow-code/flask-marshmallow | sqlalchemy | 128 | `__init__() got an unexpected keyword argument 'ordered'` when subclassing mm.ModelSchema | This worked fine with marshmallow 2 but started failing with marshmallow 3rc4, on Python 2.7.15
```python
from flask_marshmallow import Marshmallow
from flask_marshmallow.sqla import SchemaOpts
from marshmallow_sqlalchemy import ModelConverter
class MyModelConverter(ModelConverter):
pass # usually i do more stuff in here, which is not relevant for this issue
class _MySchemaOpts(SchemaOpts):
def __init__(self, meta):
super(_MySchemaOpts, self).__init__(meta)
self.model_converter = getattr(meta, 'model_converter', MyModelConverter)
mm = Marshmallow()
class MyModelSchema(mm.ModelSchema):
OPTIONS_CLASS = _MySchemaOpts
```
```pytb
TypeError Traceback (most recent call last)
<ipython-input-1-bb4a7fc1f246> in <module>()
15
16 mm = Marshmallow()
---> 17 class MyModelSchema(mm.ModelSchema):
18 OPTIONS_CLASS = _MySchemaOpts
/home/adrian/dev/indico/env/lib/python2.7/site-packages/marshmallow/schema.pyc in __new__(mcs, name, bases, attrs)
106 # Set klass.opts in __new__ rather than __init__ so that it is accessible in
107 # get_declared_fields
--> 108 klass.opts = klass.OPTIONS_CLASS(meta, ordered=ordered)
109 # Add fields specifid in the `include` class Meta option
110 cls_fields += list(klass.opts.include.items())
TypeError: Error when calling the metaclass bases
__init__() got an unexpected keyword argument 'ordered'
``` | closed | 2019-03-14T16:23:41Z | 2019-03-14T16:26:54Z | https://github.com/marshmallow-code/flask-marshmallow/issues/128 | [] | ThiefMaster | 1 |
schenkd/nginx-ui | flask | 30 | flask auth? | Do I need to implement authorization based on Flask? This would protect inexperienced users from unauthorized access by default.
| closed | 2020-07-05T15:29:08Z | 2022-11-18T21:27:09Z | https://github.com/schenkd/nginx-ui/issues/30 | [] | foozzi | 4 |