| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
horovod/horovod | deep-learning | 4,020 | Horovod with TensorFlow crashed | **Environment:**
1. Framework: TensorFlow
2. Framework version: 1.15.3
3. Horovod version: 0.28.1
4. MPI version: 4.0.1
5. CUDA version:
6. NCCL version:
7. Python version: 3.6.9
8. Spark / PySpark version:
9. Ray version:
10. OS and version: Ubuntu 18.04.6 LTS
11. GCC version: 7.5.0
12. CMake version: 3.13.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? Yes
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
I did CPU training with Horovod + TensorFlow, launched with OpenMPI. Horovod always crashed with the following errors when some workers didn't process any data and directly called hvd.join() to wait for the other workers.
```
munmap_chunk(): invalid pointer
[node-0:410078] *** Process received signal ***
[node-0:410078] Signal: Aborted (6)
[node-0:410078] Signal code: (-6)
[node-0:410078] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3ef10)[0x7f77f0cebf10]
[node-0:410078] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7f77f0cebe87]
[node-0:410078] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7f77f0ced7f1]
[node-0:410078] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x89837)[0x7f77f0d36837]
[node-0:410078] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x908ba)[0x7f77f0d3d8ba]
[node-0:410078] [ 5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x58c)[0x7f77f0d44e9c]
[node-0:410078] [ 6] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN9__gnu_cxx13new_allocatorIN7horovod6common7RequestEE10deallocateEPS3_m+0x20)[0x7f77d197de04]
[node-0:410078] [ 7] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt16allocator_traitsISaIN7horovod6common7RequestEEE10deallocateERS3_PS2_m+0x2b)[0x7f77d197bac8]
[node-0:410078] [ 8] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt12_Vector_baseIN7horovod6common7RequestESaIS2_EE13_M_deallocateEPS2_m+0x32)[0x7f77d1978de0]
[node-0:410078] [ 9] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt12_Vector_baseIN7horovod6common7RequestESaIS2_EED2Ev+0x52)[0x7f77d1977d66]
[node-0:410078] [10] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6vectorIN7horovod6common7RequestESaIS2_EED1Ev+0x41)[0x7f77d1974e7b]
[node-0:410078] [11] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt4pairIKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIN7horovod6common7RequestESaISA_EEED1Ev+0x1c)[0x7f77d197f9ca]
[node-0:410078] [12] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN9__gnu_cxx13new_allocatorISt4pairIKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIN7horovod6common7RequestESaISC_EEEE7destroyISF_EEvPT_+0x1c)[0x7f77d197f9f6]
[node-0:410078] [13] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt16allocator_traitsISaISt4pairIKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIN7horovod6common7RequestESaISB_EEEEE7destroyISE_EEvRSF_PT_+0x23)[0x7f77d197e4ee]
[node-0:410078] [14] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt8__detail16_Hashtable_allocISaINS_10_Hash_nodeISt4pairIKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIN7horovod6common7RequestESaISD_EEELb1EEEEE18_M_deallocate_nodeEPSH_+0x6c)[0x7f77d197c34c]
[node-0:410078] [15] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt10_HashtableINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_St6vectorIN7horovod6common7RequestESaISB_EEESaISE_ENSt8__detail10_Select1stESt8equal_toIS5_ESt4hashIS5_ENSG_18_Mod_range_hashingENSG_20_Default_ranged_hashENSG_20_Prime_rehash_policyENSG_17_Hashtable_traitsILb1ELb0ELb1EEEE8_M_eraseEmPNSG_15_Hash_node_baseEPNSG_10_Hash_nodeISE_Lb1EEE+0x12b)[0x7f77d197da9f]
[node-0:410078] [16] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt10_HashtableINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_St6vectorIN7horovod6common7RequestESaISB_EEESaISE_ENSt8__detail10_Select1stESt8equal_toIS5_ESt4hashIS5_ENSG_18_Mod_range_hashingENSG_20_Default_ranged_hashENSG_20_Prime_rehash_policyENSG_17_Hashtable_traitsILb1ELb0ELb1EEEE5eraseENSG_20_Node_const_iteratorISE_Lb0ELb1EEE+0x62)[0x7f77d197b67e]
[node-0:410078] [17] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt10_HashtableINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt4pairIKS5_St6vectorIN7horovod6common7RequestESaISB_EEESaISE_ENSt8__detail10_Select1stESt8equal_toIS5_ESt4hashIS5_ENSG_18_Mod_range_hashingENSG_20_Default_ranged_hashENSG_20_Prime_rehash_policyENSG_17_Hashtable_traitsILb1ELb0ELb1EEEE5eraseENSG_14_Node_iteratorISE_Lb0ELb1EEE+0x45)[0x7f77d1978609]
[node-0:410078] [18] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt13unordered_mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIN7horovod6common7RequestESaIS9_EESt4hashIS5_ESt8equal_toIS5_ESaISt4pairIKS5_SB_EEE5eraseENSt8__detail14_Node_iteratorISI_Lb0ELb1EEE+0x23)[0x7f77d1975711]
[node-0:410078] [19] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN7horovod6common10Controller17ConstructResponseERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEi+0x1cf0)[0x7f77d1970eb4]
[node-0:410078] [20] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN7horovod6common10Controller19ComputeResponseListEbRNS0_18HorovodGlobalStateERNS0_10ProcessSetE+0x1c2f)[0x7f77d196e85d]
[node-0:410078] [21] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x10365e)[0x7f77d199d65e]
[node-0:410078] [22] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x102e54)[0x7f77d199ce54]
[node-0:410078] [23] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZSt13__invoke_implIvPFvRN7horovod6common18HorovodGlobalStateEEJSt17reference_wrapperIS2_EEET_St14__invoke_otherOT0_DpOT1_+0x39)[0x7f77d19ae578]
[node-0:410078] [24] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZSt8__invokeIPFvRN7horovod6common18HorovodGlobalStateEEJSt17reference_wrapperIS2_EEENSt15__invoke_resultIT_JDpT0_EE4typeEOS9_DpOSA_+0x4e)[0x7f77d19a9d68]
[node-0:410078] [25] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS4_EEEE9_M_invokeIJLm0ELm1EEEEDTcl8__invokespcl10_S_declvalIXT_EEEEESt12_Index_tupleIJXspT_EEE+0x43)[0x7f77d19c39f9]
[node-0:410078] [26] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS4_EEEEclEv+0x2c)[0x7f77d19c399a]
[node-0:410078] [27] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS5_EEEEEE6_M_runEv+0x1c)[0x7f77d19c391e]
[node-0:410078] [28] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xd44c0)[0x7f73eb5a54c0]
[node-0:410078] [29] /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7f77f0a956db]
[node-0:410078] *** End of error message ***
```
OR
```
free(): invalid next size (normal)
[node-0:391803] *** Process received signal ***
[node-0:391803] Signal: Aborted (6)
[node-0:391803] Signal code: (-6)
[node-0:391803] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3ef10)[0x7fd1dc5a7f10]
[node-0:391803] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7fd1dc5a7e87]
[node-0:391803] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7fd1dc5a97f1]
[node-0:391803] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x89837)[0x7fd1dc5f2837]
[node-0:391803] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x908ba)[0x7fd1dc5f98ba]
[node-0:391803] [ 5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x76d)[0x7fd1dc60107d]
[node-0:391803] [ 6] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0xd2550)[0x7fd1bd228550]
[node-0:391803] [ 7] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0xd23ee)[0x7fd1bd2283ee]
[node-0:391803] [ 8] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0xd2204)[0x7fd1bd228204]
[node-0:391803] [ 9] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0xd1df7)[0x7fd1bd227df7]
[node-0:391803] [10] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0xd189b)[0x7fd1bd22789b]
[node-0:391803] [11] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN7horovod6common10Controller17ConstructResponseERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEi+0x1d20)[0x7fd1bd22cee4]
[node-0:391803] [12] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZN7horovod6common10Controller19ComputeResponseListEbRNS0_18HorovodGlobalStateERNS0_10ProcessSetE+0x1c2f)[0x7fd1bd22a85d]
[node-0:391803] [13] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x10365e)[0x7fd1bd25965e]
[node-0:391803] [14] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x102e54)[0x7fd1bd258e54]
[node-0:391803] [15] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZSt13__invoke_implIvPFvRN7horovod6common18HorovodGlobalStateEEJSt17reference_wrapperIS2_EEET_St14__invoke_otherOT0_DpOT1_+0x39)[0x7fd1bd26a578]
[node-0:391803] [16] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZSt8__invokeIPFvRN7horovod6common18HorovodGlobalStateEEJSt17reference_wrapperIS2_EEENSt15__invoke_resultIT_JDpT0_EE4typeEOS9_DpOSA_+0x4e)[0x7fd1bd265d68]
[node-0:391803] [17] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS4_EEEE9_M_invokeIJLm0ELm1EEEEDTcl8__invokespcl10_S_declvalIXT_EEEEESt12_Index_tupleIJXspT_EEE+0x43)[0x7fd1bd27f9f9]
[node-0:391803] [18] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS4_EEEEclEv+0x2c)[0x7fd1bd27f99a]
[node-0:391803] [19] /opt/.sin/lib/python3.6/site-packages/horovod/tensorflow/mpi_lib.cpython-36m-x86_64-linux-gnu.so(_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJPFvRN7horovod6common18HorovodGlobalStateEESt17reference_wrapperIS5_EEEEEE6_M_runEv+0x1c)[0x7fd1bd27f91e]
[node-0:391803] [20] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xd44c0)[0x7fcdd6e614c0]
[node-0:391803] [21] /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7fd1dc3516db]
[node-0:391803] [22] /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fd1dc68a61f]
[node-0:391803] *** End of error message ***
```
What's wrong? Thanks for your help in advance!
| open | 2024-02-05T09:21:31Z | 2025-03-21T08:53:53Z | https://github.com/horovod/horovod/issues/4020 | [
"bug"
] | mythZhu | 1 |
d2l-ai/d2l-en | machine-learning | 2,123 | Typo in Ch.2 introduction. | In the second paragraph on the intro to 2. Preliminaries, change "basic" to "basics" (see the image below).

| closed | 2022-05-10T15:08:49Z | 2022-05-10T17:37:44Z | https://github.com/d2l-ai/d2l-en/issues/2123 | [] | jbritton6 | 1 |
plotly/dash | jupyter | 2,852 | [BUG] set_props called multiple times only keeps the last props. | For regular callbacks, when `set_props` is called multiple times for the same component id, only the last call is saved.
Example:
```
from dash import Dash, Input, html, set_props
app = Dash()
app.layout = [
html.Button("start", id="start"),
html.Div("initial", id="output"),
]
@app.callback(
Input("start", "n_clicks"),
)
def on_click(_):
set_props("output", {"children": "changed"})
set_props("output", {"style": {"background": "red"}})
if __name__ == "__main__":
app.run(debug=True)
```
Clicking the start button only sets the background red; the text stays at "initial". The props should be merged so that both are updated.
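Until the props are merged server-side, one workaround sketch (assuming the last-write-wins behavior described above) is to merge the updates yourself and call `set_props` once; `merge_props` is a hypothetical helper, not part of Dash:

```python
def merge_props(*updates):
    """Merge several prop-update dicts into one; later values win per prop."""
    merged = {}
    for update in updates:
        merged.update(update)
    return merged

# Hypothetical usage inside the callback -- one call instead of two:
# set_props("output", merge_props({"children": "changed"},
#                                 {"style": {"background": "red"}}))
```

This keeps both `children` and `style` in a single update, which the report shows is applied correctly.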
| closed | 2024-05-07T16:35:57Z | 2024-05-15T19:22:04Z | https://github.com/plotly/dash/issues/2852 | [
"bug",
"sev-1"
] | T4rk1n | 0 |
huggingface/peft | pytorch | 2,179 | Integration of merge-kit into PEFT | ### Feature request
Integrate merge-kit functionality into the PEFT library so that users can leverage the merging techniques it provides.
This could include additional merging techniques beyond TIES and DARE which are currently natively supported by PEFT.
References:
1)https://github.com/arcee-ai/mergekit
2)https://huggingface.co/docs/peft/main/en/developer_guides/model_merging#model-merging
### Motivation
For beginners, especially those new to fine-tuning large language models, integrating merge-kit requires familiarity with multiple merging methods and careful handling of model weights.
PEFT could bridge this gap by providing an easy-to-use, fully integrated solution for merging model weights.
### Your contribution
With ample support and guidance, I could help in the integration. | open | 2024-10-25T19:53:14Z | 2025-03-03T16:58:20Z | https://github.com/huggingface/peft/issues/2179 | [] | ParagEkbote | 21 |
statsmodels/statsmodels | data-science | 9,190 | Review: cov_params, cov_type in RLM | see also #4670
and the issue for heteroscedasticity- and correlation-robust sandwiches.
I again have problems understanding what the cov_types in RLM actually do.
We need LaTeX formulas, and some idea of how H1, H2, and H3 differ.
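As a starting point, the generic M-estimator sandwich covariance (a hedged sketch — how H1, H2, and H3 map onto estimates of these pieces still needs to be verified against the code):

```latex
\widehat{V}(\hat\beta) = \hat A^{-1}\,\hat B\,\hat A^{-1}, \qquad
\hat A = \frac{1}{n}\sum_{i=1}^{n} \psi'(r_i)\, x_i x_i^\top, \qquad
\hat B = \frac{1}{n}\sum_{i=1}^{n} \psi(r_i)^2\, x_i x_i^\top
```

with $r_i = (y_i - x_i^\top\hat\beta)/\hat\sigma$ the scaled residuals. The H-variants presumably differ in degrees-of-freedom corrections and in how $\hat A$ is estimated or approximated.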
Salini et al. list 5 cov_types in Table 1.
It's for the S-estimator, but locally they are the same as for M-estimators.
Salini, S., F. Laurini, G. Morelli, M. Riani, and A. Cerioli. 2022. “Covariance Matrices of S Robust Regression Estimators.” Journal of Statistical Computation and Simulation 92 (4): 724–47. https://doi.org/10.1080/00949655.2021.1972300.
The main question is how strongly inferential statistics like cov_params are influenced by outliers in exog when X'X is not a good estimate of the (expected or limiting) normalized_cov_params or hessian. Weights in the current RLM only handle outliers in endog.
But that extension requires more background in outlier-robust inference.
For the basic cov_type: AFAIR, HC4 has a stronger correction using diag_hat_matrix, but still computes the hat matrix in a non-robust way (i.e. similar to the nonrobust standard Mahalanobis distance).
| open | 2024-03-30T20:42:29Z | 2024-03-31T00:24:12Z | https://github.com/statsmodels/statsmodels/issues/9190 | [
"comp-docs",
"comp-robust"
] | josef-pkt | 0 |
plotly/dash | dash | 2,353 | [Feature Request][BUG] Disable html.Iframe scrolling | I classify this as a feature request + a bug because, without being able to set `scrolling="no"`, the IFrame component is extremely buggy and hard to work around with other CSS styles.
My issue is basically that I want to use html.Iframe without the scrollbar (in HTML you can set the attribute scrolling="no"). Right now my workaround is setting the height extremely high via the style attribute, but my HTML documents are of different sizes, so it's not a very effective workaround. | closed | 2022-12-06T01:59:26Z | 2023-03-10T21:08:10Z | https://github.com/plotly/dash/issues/2353 | [] | matthewyangcs | 2 |
neuml/txtai | nlp | 13 | Not accurate with long sentences | The txtai library performs less accurately when the input texts being matched are too long. | closed | 2020-08-24T05:47:37Z | 2020-08-27T13:29:28Z | https://github.com/neuml/txtai/issues/13 | [] | pradeepdev-1995 | 1 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 107 | Looking at the code, it seems the loss is the training loss — is that right? | How can I display the validation loss? | closed | 2019-04-19T09:33:26Z | 2021-11-22T13:56:56Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/107 | [] | myrainbowandsky | 3 |
mwaskom/seaborn | data-science | 2,820 | Logarithmic hist plot with multi='dodge': unequal bar width | Using a hist plot with logarithmic x-axis results in unequal bar width.
Fix PR is provided in #2819 | closed | 2022-05-25T07:05:56Z | 2022-06-11T23:42:20Z | https://github.com/mwaskom/seaborn/issues/2820 | [
"bug",
"mod:distributions"
] | infosec-it-init | 3 |
thp/urlwatch | automation | 716 | Feature request: Custom AWS (requests) Authentication | I would like to create a custom hook that sets custom authentication compatible with the requests module.
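For context, `requests` only requires that the `auth=` argument be a callable that receives the prepared request and returns it, so a hook can supply any custom scheme. A minimal sketch (the actual signing logic, e.g. AWS v4, is elided; the `Bearer` header is just a placeholder):

```python
class CustomAuth:
    """A requests-compatible auth object: any callable that takes and
    returns the prepared request works as `auth=`."""

    def __init__(self, token):
        self.token = token

    def __call__(self, request):
        # A real implementation would derive headers from the request
        # (method, URL, body) -- e.g. an AWS SigV4 signature.
        request.headers["Authorization"] = "Bearer " + self.token
        return request

# usage sketch: requests.get(url, auth=CustomAuth("secret"))
```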
For context...
I thought this would be useful when I was doing AWS v4 signing using the [aws_request_auth](https://github.com/davidmuller/aws-requests-auth) module. | open | 2022-08-08T13:42:51Z | 2022-09-17T17:56:38Z | https://github.com/thp/urlwatch/issues/716 | [
"enhancement"
] | batandwa | 7 |
cvat-ai/cvat | pytorch | 8,328 | An unexpected error occurred when I uploaded labels with more than 5,000 categories. | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
In the locally deployed CVAT, after creating a project, I imported labels into it according to a given JSON format. When the number of imported labels isn't large, it works fine. However, since my dataset contains over 5,000 categories, when I try to import all the categories, an error occurs as shown in the image. 
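One workaround to try (an assumption — whether CVAT accepts labels added incrementally via the API/UI needs verifying) is importing the 5,000+ labels in smaller batches rather than in one request; a stdlib sketch of the batching:

```python
def chunk_labels(labels, batch_size=500):
    """Split a long label list into fixed-size batches for incremental import."""
    return [labels[i:i + batch_size] for i in range(0, len(labels), batch_size)]

# e.g. upload each batch in turn instead of all 5,000+ labels at once
```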
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-08-21T11:42:34Z | 2024-09-02T11:57:27Z | https://github.com/cvat-ai/cvat/issues/8328 | [
"bug",
"need info"
] | wbh604 | 5 |
mage-ai/mage-ai | data-science | 5,359 | [BUG] Base/Compare branches are not available to select in Version Control | ### Mage version
0.9.73
### Describe the bug
The commit, push, and pull functions work correctly using the Version Control tab, but the Base branch & Compare branch lists are not showing up when trying to create a pull request.
### To reproduce
1. enter Create pull request
2. set Repository
There are no Base/Compare branches available to select, as shown below.

### Expected behavior
available to select branch in Base/Compare
### Screenshots

### Operating system
_No response_
### Additional context
_No response_ | closed | 2024-08-23T06:24:15Z | 2024-10-07T00:13:22Z | https://github.com/mage-ai/mage-ai/issues/5359 | [
"bug"
] | jx2lee | 0 |
pytorch/vision | computer-vision | 8,573 | Inconsistent Behavior with transforms.v2 for Multiple Arguments | ### 🐛 Describe the bug
I've been testing various transforms.v2 and noticed an inconsistency:
- When passing multiple PIL.Image arguments, the transformation is applied to all of them simultaneously, which is the expected behavior.
- However, when passing multiple torch.Tensor arguments, only the first argument is transformed, while the others remain unchanged. This issue isn't just with Resize; it also occurs with other transforms like Normalize.
Wouldn't it be more intuitive and consistent if transforms were applied to all provided arguments, whether they are PIL.Image or torch.Tensor? This would lead to a more predictable and natural workflow, especially when dealing with multiple tensors.
Expected Behavior:
- The transform should be applied to all provided arguments, regardless of their type (PIL.Image or torch.Tensor), in a consistent manner.
This inconsistency becomes especially confusing when using Compose to chain multiple transforms. The behavior differs depending on whether the input is a PIL.Image or a torch.Tensor, which can lead to unexpected results and confusion.
```
import torch
import torchvision.transforms.v2 as v2
import PIL.Image as Image
# PIL Image
a = Image.new("L", (2, 2), (1))
b, c = a.copy(), a.copy()
print('-- (pil.image) resize applied --')
print(a.size, b.size, c.size)
a, b, c = v2.Resize(1)(a, b, c)
print(' >> ')
print(a.size, b.size, c.size)
# torch
a = torch.ones([1, 2, 2])
b, c = a.clone(), a.clone()
print('-- (torch) resize applied --')
print(a.shape, b.shape, c.shape)
print(' >> ')
a, b, c = v2.Resize(1)(a, b, c)
print(a.shape, b.shape, c.shape)
```
```
-- (pil.image) resize applied --
(2, 2) (2, 2) (2, 2)
>>
(1, 1) (1, 1) (1, 1)
-- (torch) resize applied --
torch.Size([1, 2, 2]) torch.Size([1, 2, 2]) torch.Size([1, 2, 2])
>>
torch.Size([1, 1, 1]) torch.Size([1, 2, 2]) torch.Size([1, 2, 2])
```
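Until the dispatch is consistent, one workaround is a small wrapper that applies the transform to each argument independently. This is only a sketch: it is fine for deterministic ops like `Resize` or `Normalize`, but for random transforms it would draw new parameters per input, unlike the joint behavior the v2 API aims for.

```python
def apply_all(transform, *inputs):
    """Apply `transform` to every input independently (workaround sketch)."""
    outputs = tuple(transform(x) for x in inputs)
    return outputs[0] if len(outputs) == 1 else outputs

# e.g.: a, b, c = apply_all(v2.Resize(1), a, b, c)
```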
OS: Ubuntu 20.04.6 LTS
PyTorch: 2.3.1
Torchvison: 0.18.1
### Versions
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-97-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 535.161.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i9-14900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 6000.0000
CPU min MHz: 800.0000
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchelastic==0.2.2
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.8 py310h5eee18b_0
[conda] mkl_random 1.2.4 py310hdb19cb5_0
[conda] numpy 1.26.4 py310h5f9d8c6_0
[conda] numpy-base 1.26.4 py310hb5e798b_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch 2.3.1 py3.10_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.3.1 py310_cu118 pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtriton 2.3.1 py310 pytorch
[conda] torchvision 0.18.1 py310_cu118 pytorch | closed | 2024-08-09T08:50:24Z | 2024-12-12T13:24:49Z | https://github.com/pytorch/vision/issues/8573 | [] | sanghunpark | 1 |
gunthercox/ChatterBot | machine-learning | 1,457 | Is it possible to train the chatbot with an Excel file? | Currently we train the chatbot from YAML conversation files. Can we train the chatbot from Excel file data? Is there any way to do this? | closed | 2018-10-15T11:28:57Z | 2019-08-23T13:45:19Z | https://github.com/gunthercox/ChatterBot/issues/1457 | [] | shesadev | 2 |
ray-project/ray | machine-learning | 51,554 | [core] question about ray issue: 51051 | https://github.com/ray-project/ray/pull/51051 (Ray PR 51051) says it solves the memory leak of Ray components, but the modified code can only be called by plasma_store, and the raylet process cannot call it, so it should not solve the memory leak of the Ray process itself. Is there something wrong with my understanding? Could you explain it to me? | open | 2025-03-20T12:47:52Z | 2025-03-22T02:33:11Z | https://github.com/ray-project/ray/issues/51554 | [] | nihaoqingtuan | 1 |
databricks/koalas | pandas | 1,992 | Supporting allows_duplicate_labels for Series and DataFrame | pandas experimentally started to support `allows_duplicate_labels` when creating `Series` or `DataFrame` to control whether the index or columns can contain duplicate labels, starting from [pandas 1.2](https://pandas.pydata.org/pandas-docs/dev/whatsnew/v1.2.0.html#optionally-disallow-duplicate-labels).
```python
In [1]: pd.Series([1, 2], index=['a', 'a'])
Out[1]:
a 1
a 2
Length: 2, dtype: int64
In [2]: pd.Series([1, 2], index=['a', 'a']).set_flags(allows_duplicate_labels=False)
...
DuplicateLabelError: Index has duplicates.
positions
label
a [0, 1]
```
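The check itself is simple to reason about; a stdlib sketch (a hypothetical helper, not Koalas API) reproducing the label-to-positions mapping shown in `DuplicateLabelError`:

```python
def duplicate_positions(labels):
    """Map each duplicated label to the positions where it occurs."""
    positions = {}
    for pos, label in enumerate(labels):
        positions.setdefault(label, []).append(pos)
    return {label: ps for label, ps in positions.items() if len(ps) > 1}
```

In a distributed setting, the interesting part is performing this check lazily and cheaply across partitions, which is presumably why propagation is still experimental.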
They said,
> This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.
Thus, I think Koalas should also prepare to support this feature. | open | 2021-01-05T07:58:36Z | 2021-01-05T07:58:46Z | https://github.com/databricks/koalas/issues/1992 | [
"enhancement"
] | itholic | 0 |
dask/dask | numpy | 11,213 | Handling errors in Dask distributed | I have a data processing server that receives Dask arrays (I send them by writing them to HDF5 files). The server reads the file, performs some computations on the Dask array using the distributed framework, then writes the results to a new HDF5 file and sends this back to the client. This works as long as there are no errors during the computation; I can do this repeatedly without issue.
I wanted to add error handling to the server so that, if there is an error during the computation, the client receives the error message. This also works, with a caveat. The problem is that if there is an error and we subsequently receive a new job for a fresh computation, the computation is performed, but we can no longer write to the HDF5 file: it raises "ValueError: Can only serialize read-only h5py files".
Here is an outline of the code - it's not the full code with all the functions but it should be enough to get the idea. In particular, the function write_dataset() just uses the Dask .to_hdf() method and this is where the error happens. I believe this is somehow related to the fact that the processed_output variable is stored within Dask distributed already, so when we redo the computation, although it works, it is not rewritten and the permissions have changed. I don't know how to fix this issue, so any thoughts on how to clear the variables after a failed Dask distributed computation are appreciated.
```
import logging
import socket
import os
import h5py
import io
import json
error_file_name = 'error.log'
log_file_path = 'console.log'
file_name = 'temp_file.h5'
HOST, PORT = '0.0.0.0', 50007  # placeholder values; not defined in the original excerpt
if __name__ == '__main__':
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
while True:
s.listen()
conn, address = s.accept()
#Receive the HDF5 file in chunks
chunks = []
while True:
chunk = conn.recv(1024)
if not chunk:
break
chunks.append(chunk)
#if there is already a processed output variable, delete it.
try:
del processed_output
except:
pass
#Read the hdf5 file
with h5py.File(io.BytesIO(b''.join(chunks)), 'r') as f:
f.flush()
#read the dataset
h5_dataset = f['MyDataGroup/sid_data/sid_data']
print('Received dataset {}'.format(h5_dataset))
try:
processed_output = do_some_processing(h5_dataset)
except ValueError as e:
logging.error(f"ValueError: {e}")
except Exception as e:
logging.error(f"Unexpected error: {e}")
#If we received an error message...
if 'processed_output' not in locals():
try:
os.remove(error_file_name)
except:
pass
#Read the log
print('in error handling section')
with open(log_file_path, 'r') as f:
log_contents = f.read()
log_lines = log_contents.split('\n')
f.flush()
json_data = json.dumps(log_lines, indent=4)
with open(error_file_name, 'a') as f:
f.write(json_data)
f.flush()
with open(error_file_name, 'rb') as f:
f.flush()
img = f.read()
print('Failed; sending the error back to client')
#if not...
else:
try:
os.remove(file_name)
except:
pass
with h5py.File(file_name, 'a') as h5_f:
data_group = h5_f.create_group('MyDataGroup')
for ind,dsets in enumerate(processed_output):
print(ind, dsets)
write_dataset(dsets,data_group, main_data_name = 'processed_data_' + str(ind))
h5_f.flush()
img = h5_f.id.get_file_image()
print('now sending the data back to client')
#Now send it back
conn, address = s.accept()
conn.sendall(img)
#Close off the connections
conn.close()
```
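Independent of the Dask specifics, one thing worth trying is making the cleanup unconditional: the `try: del processed_output / except: pass` pattern only runs at the top of the next request, so a failed run can leave `processed_output` (and any Dask futures or file handles behind it) alive in the meantime. A schematic sketch of a per-request cycle with a `finally` that always runs (plain Python, no Dask):

```python
def serve_one(recv, process, send_ok, send_err, cleanup):
    """Handle one request/response cycle; `cleanup` runs on every path,
    so stale results cannot leak into the next request."""
    try:
        result = process(recv())
    except Exception as exc:
        send_err(exc)
        return False
    else:
        send_ok(result)
        return True
    finally:
        cleanup()  # e.g. release futures / close handles here
```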
| open | 2024-07-03T13:48:05Z | 2024-07-03T13:51:11Z | https://github.com/dask/dask/issues/11213 | [
"needs triage"
] | ramav87 | 0 |
litestar-org/polyfactory | pydantic | 430 | Bug: Cannot generate pydantic 1.9.0 model | ### Description
Hello,
The example I grabbed from your doc page does not seem to work:
### URL to code causing the issue
_No response_
### MCVE
```python
from pydantic import BaseModel
from polyfactory.factories.pydantic_factory import ModelFactory
class Person(BaseModel):
name: str
age: float
height: float
weight: float
class PersonFactory(ModelFactory[Person]):
__model__ = Person
PersonFactory.build()
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/k/.pyenv/versions/v1_3085/lib/python3.8/site-packages/polyfactory/factories/pydantic_factory.py", line 364, in build
processed_kwargs = cls.process_kwargs(**kwargs)
File "/home/k/.pyenv/versions/v1_3085/lib/python3.8/site-packages/polyfactory/factories/base.py", line 719, in process_kwargs
for field_meta in cls.get_model_fields():
File "/home/k/.pyenv/versions/v1_3085/lib/python3.8/site-packages/polyfactory/factories/pydantic_factory.py", line 324, in get_model_fields
cls._fields_metadata = [
File "/home/k/.pyenv/versions/v1_3085/lib/python3.8/site-packages/polyfactory/factories/pydantic_factory.py", line 325, in <listcomp>
PydanticFieldMeta.from_model_field(
File "/home/k/.pyenv/versions/v1_3085/lib/python3.8/site-packages/polyfactory/factories/pydantic_factory.py", line 202, in from_model_field
if isinstance(model_field.annotation, (DeferredType, ForwardRef))
AttributeError: 'ModelField' object has no attribute 'annotation'
```
### Release Version
polyfactory==2.11.0
pydantic==1.9.0
using wsl
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above) | closed | 2023-10-26T09:06:13Z | 2025-03-20T15:53:10Z | https://github.com/litestar-org/polyfactory/issues/430 | [
"bug"
] | kgskgs | 4 |
rthalley/dnspython | asyncio | 930 | Can't re-use asyncio UDP socket with multiple outstanding queries | Attempting to send (and await) multiple asynchronous queries using one UDP socket consistently fails with assertion error like in #843
**To reproduce:**
```python
import asyncio
import socket

import dns.asyncbackend
import dns.asyncquery
import dns.message
import dns.name
import dns.rdatatype


async def main() -> None:
    sock = await dns.asyncbackend.get_backend("asyncio").make_socket(
        socket.AF_INET, socket.SOCK_DGRAM
    )
    tasks = []
    for _ in range(5):
        query = dns.message.make_query(dns.name.from_text("example.com."), dns.rdatatype.A)
        tasks.append(asyncio.create_task(dns.asyncquery.udp(query, "1.1.1.1", timeout=5, sock=sock)))
    await asyncio.gather(*tasks)


asyncio.run(main())
```
Output:
```
Traceback (most recent call last):
File "/home/developer/src/python/dnsbug.py", line 20, in <module>
asyncio.run(main())
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/developer/src/python/dnsbug.py", line 18, in main
await asyncio.gather(*tasks)
File "/home/developer/src/python/pvenv/lib/python3.9/site-packages/dns/asyncquery.py", line 199, in udp
(r, received_time, _) = await receive_udp(
File "/home/developer/src/python/pvenv/lib/python3.9/site-packages/dns/asyncquery.py", line 138, in receive_udp
(wire, from_address) = await sock.recvfrom(65535, _timeout(expiration))
File "/home/developer/src/python/pvenv/lib/python3.9/site-packages/dns/_asyncio_backend.py", line 72, in recvfrom
assert self.protocol.recvfrom is None
AssertionError
```
**Environment**: dnspython 2.3.0 on Python 3.9.2 (packaged with Debian 11) and 3.11.3 (compiled from source).
I believe that this is related to the fact that the method DatagramSocket.recvfrom() is non-reentrant.
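Until the backend supports multiple outstanding receivers, one workaround is to serialize the send/receive pairs on the shared socket, for example with an `asyncio.Lock`. The sketch below is hypothetical and only demonstrates the serialization pattern — an `asyncio.sleep` stands in for the actual `sendto()`/`recvfrom()` pair:

```python
import asyncio


class SerializedSocket:
    """Hypothetical wrapper: allow only one in-flight query at a time."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._active = 0
        self.max_active = 0  # track the observed concurrency

    async def query(self, delay: float) -> None:
        async with self._lock:
            self._active += 1
            self.max_active = max(self.max_active, self._active)
            await asyncio.sleep(delay)  # stands in for sendto() + recvfrom()
            self._active -= 1


async def main() -> int:
    sock = SerializedSocket()
    await asyncio.gather(*(sock.query(0.01) for _ in range(5)))
    return sock.max_active


print(asyncio.run(main()))  # 1 -> the recvfrom slot is never shared
```

This trades concurrency for correctness; the queries complete one after another on the shared socket.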
| open | 2023-05-08T01:23:04Z | 2023-11-06T14:01:52Z | https://github.com/rthalley/dnspython/issues/930 | [
"Enhancement Request",
"Future"
] | aldem | 11 |
Lightning-AI/pytorch-lightning | deep-learning | 20,337 | `LightningCLI` doesn't fail when `config.yaml` contains invalid arguments | ### Bug description
I was playing around with the `LightningCLI` and I found out that it can still work even if the `config.yaml` contains invalid data types. For example, `max_epochs` for `Trainer` should be `int`. However, it still succeeds with a `str` in the `.yaml`. In the MWE, you can see that `config.yaml` contains `str` for both `seed_everything` and `max_epochs`. This is also evident when reading back the `config.yaml` file:
```python
import yaml
with open('config.yaml', 'r') as fhand:
    data = yaml.safe_load(fhand)  # plain yaml.load() requires an explicit Loader on PyYAML >= 6
print(data)
```
```python
{'seed_everything': '1042', 'trainer': {'max_epochs': '2'}} # Prints this
```
> [!NOTE]
> I am not sure if this is really a bug, since it might be the case that `LightningCLI` converts the given data types to the correct ones based on the type hints. However, I couldn't find out whether this is actually the case.
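For what it's worth, type-hint-driven parsers usually do coerce string inputs to the annotated type, which would make `"2"` behave exactly like `2`. The sketch below shows the idea with stdlib `argparse`; `LightningCLI` actually uses jsonargparse, so treat this only as an analogy, not as its real implementation:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--max_epochs", type=int)  # the declared type drives the coercion

# Command-line (and config) values always arrive as strings:
args = parser.parse_args(["--max_epochs", "2"])

print(type(args.max_epochs).__name__, args.max_epochs)  # int 2
```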
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
# main.py
from lightning.pytorch.cli import LightningCLI

# simple demo classes for your convenience
from lightning.pytorch.demos.boring_classes import DemoModel, BoringDataModule


def cli_main():
    cli = LightningCLI(DemoModel, BoringDataModule)
    # note: don't call fit!!


if __name__ == "__main__":
    cli_main()
    # note: it is good practice to implement the CLI in a function and call it in the main if block
```
```yaml
# config.yaml
seed_everything: "1042"
trainer:
  max_epochs: "2"
```
Now from the CLI:
```bash
python main.py fit --config=config.yaml
```
### Error messages and logs
```
config.yaml lightning_logs/ main.py
(aidsorb) [ansar@mofinium ligthning_bug]$ python main.py fit --config=config.yaml
Seed set to 1042
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
/home/ansar/venvir/aidsorb/lib64/python3.11/site-packages/lightning/pytorch/trainer/configuration_validator.py:68: You passed in a `val_dataloader` but have no `validation_step`. Skipping val loop.
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[rank: 1] Seed set to 1042
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]
| Name | Type | Params | Mode
----------------------------------------
0 | l1 | Linear | 330 | train
----------------------------------------
330 Trainable params
0 Non-trainable params
330 Total params
0.001 Total estimated model params size (MB)
1 Modules in train mode
0 Modules in eval mode
/home/ansar/venvir/aidsorb/lib64/python3.11/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:424: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=9` in the `DataLoader` to improve performance.
/home/ansar/venvir/aidsorb/lib64/python3.11/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (32) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
Epoch 1: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 1100.86it/s, v_num=3]`Trainer.fit` stopped: `max_epochs=2` reached.
Epoch 1: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 1045.92it/s, v_num
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- Quadro RTX 4000
- Quadro RTX 4000
- available: True
- version: 12.1
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.4.1
- torchmetrics: 1.4.3
- torchvision: 0.19.1
* Packages:
- absl-py: 2.1.0
- aidsorb: 1.0.0
- aiohappyeyeballs: 2.4.3
- aiohttp: 3.10.9
- aiosignal: 1.3.1
- ase: 3.23.0
- attrs: 24.2.0
- contourpy: 1.3.0
- cycler: 0.12.1
- docstring-parser: 0.16
- filelock: 3.16.1
- fire: 0.7.0
- fonttools: 4.54.1
- frozenlist: 1.4.1
- fsspec: 2024.9.0
- grpcio: 1.66.2
- idna: 3.10
- importlib-resources: 6.4.5
- jinja2: 3.1.4
- jsonargparse: 4.33.2
- kiwisolver: 1.4.7
- lightning: 2.4.0
- lightning-utilities: 0.11.7
- markdown: 3.7
- markupsafe: 3.0.1
- matplotlib: 3.9.2
- mpmath: 1.3.0
- multidict: 6.1.0
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 9.1.0.70
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.6.77
- nvidia-nvtx-cu12: 12.1.105
- packaging: 24.1
- pandas: 2.2.3
- pillow: 10.4.0
- pip: 24.2
- plotly: 5.24.1
- propcache: 0.2.0
- protobuf: 5.28.2
- pyparsing: 3.1.4
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.4.0
- pytz: 2024.2
- pyyaml: 6.0.2
- scipy: 1.14.1
- setuptools: 65.5.1
- six: 1.16.0
- sympy: 1.13.3
- tenacity: 9.0.0
- tensorboard: 2.18.0
- tensorboard-data-server: 0.7.2
- termcolor: 2.5.0
- torch: 2.4.1
- torchmetrics: 1.4.3
- torchvision: 0.19.1
- tqdm: 4.66.5
- triton: 3.0.0
- typeshed-client: 2.7.0
- typing-extensions: 4.12.2
- tzdata: 2024.2
- werkzeug: 3.0.4
- yarl: 1.14.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.7
- release: 5.14.0-427.16.1.el9_4.x86_64
- version: #1 SMP PREEMPT_DYNAMIC Wed May 8 17:48:14 UTC 2024
</details>
### More info
_No response_ | closed | 2024-10-11T15:32:24Z | 2024-11-08T14:47:20Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20337 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | adosar | 4 |
graphdeco-inria/gaussian-splatting | computer-vision | 330 | SIBR viewer cmake -Bbuild fails (Ubuntu 20.04) | Hi,
I have finished running `render.py` and `metrics.py`, but I ran into some errors when using CMake to build the SIBR viewer. Could you help me fix this problem?
My device: CUDA-toolkit=11.8, cmake=3.27.7.
I also tested cmake=3.24.1, which does not work either.
I am running this command:
```bash
cmake -Bbuild . -DCMAKE_BUILD_TYPE=Release
```
---
wujing@wujing-All-Series:~/Code/gaussian-splatting/SIBR_viewers$ cmake -Bbuild . -DCMAKE_BUILD_TYPE=Release
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at CMakeLists.txt:21 (message):
Untested version of cmake. If you checked everything is working properly,
please update 3.27.0 in the main CmakeLists.txt with the version you
tested.
-- Git found: /usr/bin/git
-- SIBR version :
BRANCH fossa_compatibility
COMMIT_HASH 9906b16831d215ac7fdb10a24780411a4db931fd
TAG
VERSION 0.9.6-166-g9906b16
-- Install path set to /home/wujing/Code/gaussian-splatting/SIBR_viewers/install.
Note you can provide default program options for Visual Studio target properties by either setting a value for the cmake cached variable 'SIBR_PROGRAMARGS' or by setting a new environment variable 'SIBR_PROGRAMARGS'
--
****************** Handling core dependencies ******************
Activating EGL support for headless GLFW/GLEW
There is no provided GLEW library for your compiler, relying on find_package to find it
-- FindGLEW: did not find GLEW CMake config file. Searching for libraries.
-- FindGLEW: GLEW_USE_STATIC_LIBS is undefined. Treated as FALSE.
-- FindGLEW: GLEW_INCLUDE_DIR: /usr/include
-- FindGLEW: GLEW_INCLUDE_DIRS: /usr/include
-- FindGLEW: CMAKE_FIND_LIBRARY_SUFFIXES for SHARED: .so;.a
-- FindGLEW: CMAKE_FIND_LIBRARY_SUFFIXES for STATIC: .so
-- FindGLEW: GLEW_SHARED_LIBRARY_RELEASE: /usr/lib/x86_64-linux-gnu/libGLEW.so
-- FindGLEW: GLEW_STATIC_LIBRARY_RELEASE: GLEW_STATIC_LIBRARY_RELEASE-NOTFOUND
-- FindGLEW: GLEW_SHARED_LIBRARY_DEBUG: GLEW_SHARED_LIBRARY_DEBUG-NOTFOUND
-- FindGLEW: GLEW_STATIC_LIBRARY_DEBUG: GLEW_STATIC_LIBRARY_DEBUG-NOTFOUND
-- FindGLEW: GLEW_SHARED_LIBRARY: /usr/lib/x86_64-linux-gnu/libGLEW.so
-- FindGLEW: GLEW_STATIC_LIBRARY: GLEW_STATIC_LIBRARY-NOTFOUND
-- FindGLEW: GLEW_LIBRARIES: /usr/lib/x86_64-linux-gnu/libGLEW.so
-- FindGLEW: GLEW_VERSION_MAJOR: 2
-- FindGLEW: GLEW_VERSION_MINOR: 1
-- FindGLEW: GLEW_VERSION_MICRO: 0
-- FindGLEW: GLEW_VERSION: 2.1.0
-- FindGLEW: Creating GLEW::glew imported target.
-- FindGLEW: Creating GLEW::GLEW imported target.
There is no provided ASSIMP library for your compiler, relying on find_package to find it
NO ASSIMP DIR ASSIMP_DIR
SETTING ASSIMP DIR ASSIMP_DIR
ASSIMP DIR ASSIMP_DIR
CMake Warning (dev) at /snap/cmake/1336/share/cmake-3.27/Modules/FindPackageHandleStandardArgs.cmake:438 (message):
The package name passed to `find_package_handle_standard_args` (EMBREE)
does not match the name of the calling package (embree). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
cmake/linux/Modules/Findembree.cmake:87 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
cmake/linux/dependencies.cmake:127 (find_package)
cmake/linux/include_once.cmake:20 (include)
src/CMakeLists.txt:46 (include_once)
This warning is for project developers. Use -Wno-dev to suppress it.
There is no provided OpenCV library for your compiler, relying on find_package to find it
-- Library imgui already available, skipping.
-- Library nativefiledialog already available, skipping.
-- Library mrf already available, skipping.
-- Library nanoflann already available, skipping.
-- Library picojson already available, skipping.
-- Library rapidxml already available, skipping.
-- Library xatlas already available, skipping.
CMake Deprecation Warning at extlibs/xatlas/xatlas/CMakeLists.txt:4 (cmake_minimum_required):
Compatibility with CMake < 3.5 will be removed from a future version of
CMake.
Update the VERSION argument <min> value or use a ...<max> suffix to tell
CMake that the project does not need compatibility with older versions.
-- ****************************************************************
-- Adding dataset_tools project
-- BUILD_IBR_DATASET_TOOLS is OFF
-- Adding ulr project
-- BUILD_IBR_ULR is OFF
-- Adding basic project
-- BUILD_IBR_BASIC is ON
-- Adding gaussianviewer project
-- BUILD_IBR_GAUSSIANVIEWER is ON
CMake Error at /snap/cmake/1336/share/cmake-3.27/Modules/CMakeDetermineCompilerId.cmake:753 (message):
Compiling the CUDA compiler identification source file
"CMakeCUDACompilerId.cu" failed.
Compiler: /usr/bin/nvcc
Build flags:
Id flags: --keep;--keep-dir;tmp -v
The output was:
255
#$ _SPACE_=
#$ _CUDART_=cudart
#$ _HERE_=/usr/lib/nvidia-cuda-toolkit/bin
#$ _THERE_=/usr/lib/nvidia-cuda-toolkit/bin
#$ _TARGET_SIZE_=
#$ _TARGET_DIR_=
#$ _TARGET_SIZE_=64
#$ NVVMIR_LIBRARY_DIR=/usr/lib/nvidia-cuda-toolkit/libdevice
#$
PATH=/usr/lib/nvidia-cuda-toolkit/bin:/usr/local/cuda-11.8/bin:/home/wujing/anaconda3/envs/gaussian_splatting/bin:/home/wujing/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda-11.8/bin:/usr/local/cuda-11.8/bin:/home/wujing/anaconda3/bin:/home/wujing/anaconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
#$ LIBRARIES= -L/usr/lib/x86_64-linux-gnu/stubs -L/usr/lib/x86_64-linux-gnu
#$ rm tmp/a_dlink.reg.c
#$ gcc -D__CUDA_ARCH__=300 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS
-D__CUDACC__ -D__NVCC__ -D__CUDACC_VER_MAJOR__=10 -D__CUDACC_VER_MINOR__=1
-D__CUDACC_VER_BUILD__=243 -include "cuda_runtime.h" -m64
"CMakeCUDACompilerId.cu" > "tmp/CMakeCUDACompilerId.cpp1.ii"
#$ cicc --c++14 --gnu_version=90400 --allow_managed -arch compute_30 -m64
-ftz=0 -prec_div=1 -prec_sqrt=1 -fmad=1 --include_file_name
"CMakeCUDACompilerId.fatbin.c" -tused -nvvmir-library
"/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.10.bc"
--gen_module_id_file --module_id_file_name
"tmp/CMakeCUDACompilerId.module_id" --orig_src_file_name
"CMakeCUDACompilerId.cu" --gen_c_file_name
"tmp/CMakeCUDACompilerId.cudafe1.c" --stub_file_name
"tmp/CMakeCUDACompilerId.cudafe1.stub.c" --gen_device_file_name
"tmp/CMakeCUDACompilerId.cudafe1.gpu" "tmp/CMakeCUDACompilerId.cpp1.ii" -o
"tmp/CMakeCUDACompilerId.ptx"
#$ ptxas -arch=sm_30 -m64 "tmp/CMakeCUDACompilerId.ptx" -o
"tmp/CMakeCUDACompilerId.sm_30.cubin"
ptxas fatal : Value 'sm_30' is not defined for option 'gpu-name'
# --error 0xff --
Call Stack (most recent call first):
/snap/cmake/1336/share/cmake-3.27/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
/snap/cmake/1336/share/cmake-3.27/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
/snap/cmake/1336/share/cmake-3.27/Modules/CMakeDetermineCUDACompiler.cmake:307 (CMAKE_DETERMINE_COMPILER_ID)
src/projects/gaussianviewer/apps/gaussianViewer/CMakeLists.txt:12 (project)
-- Configuring incomplete, errors occurred! | open | 2023-10-17T13:45:11Z | 2024-07-10T03:21:36Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/330 | [] | jingwu2121 | 3 |
ivy-llc/ivy | pytorch | 28,705 | Fix Frontend Failing Test: paddle - creation.jax.numpy.triu | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-29T12:18:54Z | 2024-03-29T16:45:11Z | https://github.com/ivy-llc/ivy/issues/28705 | [
"Sub Task"
] | ZJay07 | 0 |
flairNLP/flair | pytorch | 3,252 | [Bug]: Flair overriding user logging config | ### Describe the bug
During import, Flair sets the root logger to WARNING, overriding any user-provided `basicConfig`.
This issue was solved here: https://github.com/flairNLP/flair/issues/1059, but it seems to be back.
### To Reproduce
```python
import logging
logging.basicConfig(
    format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO
)
logging.info("First info")
import flair
logging.info("Second info")
logging.basicConfig(
    format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO
)
logging.info("Third info")
```
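A possible interim workaround, assuming Python 3.8+: `logging.basicConfig(..., force=True)` removes whatever handlers a library installed on the root logger and re-applies your configuration. A self-contained sketch — the first `basicConfig` call merely stands in for what flair does at import time:

```python
import logging

# Stand-in for a library configuring the root logger at import time:
logging.basicConfig(level=logging.WARNING)

# Re-apply the user configuration, discarding the earlier handlers (Python 3.8+):
logging.basicConfig(
    format="%(asctime)s : %(levelname)s : %(message)s",
    level=logging.INFO,
    force=True,
)

logging.info("Info records are emitted again")
print(logging.getLogger().level == logging.INFO)  # True
```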
### Expected behavior
All user log records should be present.

### Logs and Stack traces
_No response_
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
OS : macOS 13.3.1 (a)
Version : flair 0.12.2 | open | 2023-05-25T17:45:12Z | 2024-09-15T23:34:41Z | https://github.com/flairNLP/flair/issues/3252 | [
"bug",
"Awaiting Response"
] | BertrandMiannay | 1 |
biolab/orange3 | scikit-learn | 6,131 | Test and Score - intuitive way to add new score | ##### Issue
In Test and Score, the menu to add additional scores is hidden (regular users cannot easily find it):
<img width="786" alt="Screenshot 2022-09-09 at 10 26 18" src="https://user-images.githubusercontent.com/6421558/189306963-acb982f1-ec1a-41e9-919c-bb197b39c1dc.png">
##### Suggestion
Add a sign that would suggest to the user that it is possible to add additional scores. For example, three dots:

| closed | 2022-09-09T08:32:04Z | 2023-01-10T12:31:40Z | https://github.com/biolab/orange3/issues/6131 | [] | PrimozGodec | 4 |
nolar/kopf | asyncio | 614 | Operator doesn't respond after some time | ## Long story short
Right after deploying my controller, everything works fine: it reacts to create/delete/update events of resources. Then, after leaving the controller alone for some time (<10 min), it stops handling changes of resources and nothing is logged anymore (even when running with `--verbose`).
## Description
While the controller seems to be frozen, my liveness probe keeps passing. Also, if I restart the controller, all changes that were made in the meantime are handled.
I tested it on kopf 0.27, 0.28 and 0.28.2; all versions give the same kind of behavior.
I must be doing something wrong, but I have no idea what. I don't think the operator is still stuck in a handler, because the last lines of the log say something like this (which means the handler finished, right?):
```
[2020-12-14 16:34:02,413] kopf.objects [INFO ] [sample-project] Handler 'update_members/spec.members' succeeded.
[2020-12-14 16:34:02,414] kopf.objects [INFO ] [sample-project] Update event is processed: 1 succeeded; 0 failed.
```
Has someone seen this behavior before? Or any ideas on how I can debug this one?
## Environment
<!-- The following commands can help:
`kopf --version` or `pip show kopf`
`kubectl version`
`python --version`
-->
* Kopf version: 0.27, 0.28 and 0.28.2
* Kubernetes version: 1.16
* Python version: 3.7
* OS/platform: Ubuntu
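A process-level liveness probe stays green while the watch stream is stalled, so one mitigation is to track the time since the last handled event and fail the probe once that age grows too large. A stdlib-only sketch — wiring it into kopf (e.g. via an `@kopf.on.probe` handler) is left out and is an assumption:

```python
import time


class StalenessProbe:
    """Track how long ago the operator last handled an event."""

    def __init__(self, max_age_seconds: float = 600.0):
        self.max_age_seconds = max_age_seconds
        self._last = time.monotonic()

    def touch(self) -> None:
        """Call this from every event handler."""
        self._last = time.monotonic()

    def age(self) -> float:
        return time.monotonic() - self._last

    def healthy(self) -> bool:
        return self.age() < self.max_age_seconds


probe = StalenessProbe(max_age_seconds=600.0)
probe.touch()
print(probe.healthy())  # True right after an event was handled
```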
| closed | 2020-12-14T16:50:59Z | 2020-12-15T10:45:19Z | https://github.com/nolar/kopf/issues/614 | [
"bug"
] | gilbeckers | 2 |
napari/napari | numpy | 6,860 | Polygon faces in Shapes layer are not shown in 3D if not contained in an *axis-orthogonal* plane | ### 🐛 Bug Report
If I make polygons that are planar but not on a single z (or whatever) plane, the faces of the polygon are not rendered, only the edges.
### 💡 Steps to Reproduce
```python
import napari
viewer = napari.Viewer()
poly0 = [[0, 0, 0], [0, 1, 1], [1, 1, 1], [1, 0, 0]]
poly1 = [[2, 2, 2], [3, 3, 3], [3, 2, 2]]
viewer.add_shapes(
    [poly0, poly1],
    shape_type='polygon',
    edge_width=0.1,
    face_color='white',
)
viewer.dims.ndisplay = 3
viewer.camera.angles = (-35, 50, 35)
viewer.camera.zoom = 150
napari.run()
```
Result:
<img width="1228" alt="Screenshot 2024-04-22 at 10 11 49 AM" src="https://github.com/napari/napari/assets/492549/c78ce85b-5ac0-4684-9090-25ff7da7912e">
### 💡 Expected Behavior
The face color of the polygons should be white as specified.
### 🌎 Environment
napari: 0.5.0a2.dev640+gbdf6d644b
Platform: macOS-14.4-arm64-arm-64bit
System: MacOS 14.4
Python: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:34:54) [Clang 16.0.6 ]
Qt: 5.15.8
PyQt5: 5.15.9
NumPy: 1.26.4
SciPy: 1.13.0
Dask: 2024.4.2
VisPy: 0.14.2
magicgui: 0.8.2
superqt: 0.6.3
in-n-out: 0.2.0
app-model: 0.2.6
npe2: 0.7.5
OpenGL:
- GL version: 2.1 Metal - 88
- MAX_TEXTURE_SIZE: 16384
- GL_MAX_3D_TEXTURE_SIZE: 2048
Screens:
- screen 1: resolution 1800x1169, scale 2.0
Optional:
- numba not installed
- triangle not installed
Settings path:
- /Users/jni/Library/Application Support/napari/all_f206df4881e2999baa22e6df448393db299c70d6/settings.yaml
Plugins:
- napari: 0.5.0a2.dev640+gbdf6d644b (81 contributions)
- napari-console: 0.0.9 (0 contributions)
- napari-svg: 0.1.10 (2 contributions)
### 💡 Additional Context
I expect that this is a case of "we did the easy thing and left the hard thing till later" — only display polygons that are "easy" ie orthogonal planes, since the triangulation libraries are 2D-only. But polygons with co-planar vertices should be displayed correctly also. We could project them onto a 2D plane, do the 2D triangulation, then back-project the triangulations to 3D. | open | 2024-04-22T00:22:19Z | 2024-04-22T00:27:07Z | https://github.com/napari/napari/issues/6860 | [
"bug"
] | jni | 1 |
autogluon/autogluon | data-science | 4,970 | [BUG] ImportError: `import vowpalwabbit` failed. | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
In a fresh python session, running
```python
from autogluon.common.utils.try_import import try_import_vowpalwabbit
try_import_vowpalwabbit()
```
I have the following error:
```python
----> 1 import vowpalwabbit
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/vowpalwabbit/__init__.py:35
7 __all__ = [
8 "AbstractLabel",
9 "ActionScore",
(...) 31 "Workspace",
32 ]
34 from .version import __version__
---> 35 from . import pyvw
36 from .pyvw import (
37 AbstractLabel,
38 ActionScore,
(...) 60 Workspace,
61 )
64 def __getattr__(name):
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/vowpalwabbit/pyvw.py:6
4 from __future__ import division
5 from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union
----> 6 import pylibvw
7 import warnings
8 import inspect
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ~/miniconda3/envs/autogluon/lib/libboost_python311.so.1.82.0)
```
Running this in a fresh Python session still gives me the same error:
```python
from autogluon.common.utils.try_import import try_import_vowpalwabbit
import vowpalwabbit
```
However, doing the following, I get no error:
```python
import vowpalwabbit
from autogluon.common.utils.try_import import try_import_vowpalwabbit
import vowpalwabbit # or try_import_vowpalwabbit()
```
Finally, if I copy the `try_import_vowpalwabbit` function and run it locally, I don't get the error either:
```python
In [1]: def my_try_import_vowpalwabbit():
...: try:
...: import vowpalwabbit
...: from pkg_resources import parse_version # pylint: disable=import-outside-toplevel
...:
...: vowpalwabbit_version = parse_version(vowpalwabbit.__version__)
...: assert vowpalwabbit_version >= parse_version("9.0.0") and vowpalwabbit_version < parse_version(
...: "9.10.0"
...: ), f"Currently, we only support vowpalwabbit version >=9.0 and <9.10. Found vowpalwabbit version: {vowpalwabbit_version}"
...: except ImportError:
...: raise ImportError("`import vowpalwabbit` failed.\n" "A quick tip is to install via `pip install vowpalwabbit>=9,<9.10")
...:
In [2]: my_try_import_vowpalwabbit()
In [3]:
```
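Part of this order dependence is plain Python import caching: once a module has imported successfully, later `import` statements return the cached object from `sys.modules` without re-running its (possibly failing) native initialization. A sketch with a fabricated module name (`fake_vowpalwabbit` is purely illustrative):

```python
import sys
import types

# Pretend a module already imported successfully earlier in the session:
cached = types.ModuleType("fake_vowpalwabbit")
sys.modules["fake_vowpalwabbit"] = cached

import fake_vowpalwabbit  # served straight from sys.modules; no loader runs

print(fake_vowpalwabbit is cached)  # True
```

This only explains why a successful first import keeps working afterwards; the failure itself still appears to point at a libstdc++ version mismatch between the system and the conda environment.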
I observed this while trying the multimodal feature for tabular data.
When predicting on a test set I get the following error (I don't get the error during training):
```python
Predicting DataLoader 0: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 3125/3125 [00:47<00:00, 66.18it/s]---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/common/utils/try_import.py:165, in try_import_vowpalwabbit()
164 try:
--> 165 import vowpalwabbit
166 from pkg_resources import parse_version # pylint: disable=import-outside-toplevel File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/vowpalwabbit/__init__.py:35
34 from .version import __version__
---> 35 from . import pyvw
36 from .pyvw import (
37 AbstractLabel,
38 ActionScore,
(...) 60 Workspace,
61 )
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/vowpalwabbit/pyvw.py:6
5 from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union
----> 6 import pylibvw
7 import warnings
ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by ~/miniconda3/envs/autogluon/lib/libboost_python311.so.1.82.0)
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
File ~/run_autogluon/autogluon_script.py:248
--> 248 y_pred = predictor.predict(test_data)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py:2117, in TabularPredictor.predict(self, data, model, as_pandas, transform_features, decision_threshold)
2115 if decision_threshold is None:
2116 decision_threshold = self.decision_threshold
-> 2117 return self._learner.predict(X=data, model=model, as_pandas=as_pandas, transform_features=transform_features, decision_threshold=decision_threshold)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py:208, in AbstractTabularLearner.predict(self, X, model, as_pandas, inverse_transform, transform_features, decision_threshold)
206 decision_threshold = 0.5
207 X_index = copy.deepcopy(X.index) if as_pandas else None
--> 208 y_pred_proba = self.predict_proba(
209 X=X, model=model, as_pandas=False, as_multiclass=False, inverse_transform=False, transform_features=transform_features
210 )
211 problem_type = self.label_cleaner.problem_type_transform or self.problem_type
212 y_pred = get_pred_from_proba(y_pred_proba=y_pred_proba, problem_type=problem_type, decision_threshold=decision_threshold)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py:189, in AbstractTabularLearner.predict_proba(self, X, model, as_pandas, as_multiclass, inverse_transform, transform_features)
187 if transform_features:
188 X = self.transform_features(X)
--> 189 y_pred_proba = self.load_trainer().predict_proba(X, model=model)
190 y_pred_proba = self._post_process_predict_proba(
191 y_pred_proba=y_pred_proba, as_pandas=as_pandas, index=X_index, as_multiclass=as_multiclass, inverse_transform=inverse_transform
192 )
193 return y_pred_proba
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:837, in AbstractTrainer.predict_proba(self, X, model)
835 model = self._get_best()
836 cascade = isinstance(model, list)
--> 837 return self._predict_proba_model(X, model, cascade=cascade)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:2611, in AbstractTrainer._predict_proba_model(self, X, model, model_pred_proba_dict, cascade)
2610 def _predict_proba_model(self, X, model, model_pred_proba_dict=None, cascade=False):
-> 2611 return self.get_pred_proba_from_model(model=model, X=X, model_pred_proba_dict=model_pred_proba_dict, cascade=cascade)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:851, in AbstractTrainer.get_pred_proba_from_model(self, model, X, model_pred_proba_dict, cascade)
849 else:
850 models = [model]
--> 851 model_pred_proba_dict = self.get_model_pred_proba_dict(X=X, models=models, model_pred_proba_dict=model_pred_proba_dict, cascade=cascade)
852 if not isinstance(model, str):
853 model = model.name
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:1100, in AbstractTrainer.get_model_pred_proba_dict(self, X, models, model_pred_proba_dict, model_pred_time_dict, record_pred_time, use_val_cache, cascade, cascade_threshold)
1098 else:
1099 preprocess_kwargs = dict(infer=False, model_pred_proba_dict=model_pred_proba_dict)
-> 1100 model_pred_proba_dict[model_name] = model.predict_proba(X, **preprocess_kwargs)
1101 else:
1102 model_pred_proba_dict[model_name] = model.predict_proba(X)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/models/abstract/abstract_model.py:968, in AbstractModel.predict_proba(self, X, normalize, record_time, **kwargs)
943 """
944 Returns class prediction probabilities of X.
945 For binary problems, this returns the positive class label probability as a 1d numpy array.
(...) 964 The prediction probabilities
965 """
966 time_start = time.time() if record_time else None
--> 968 y_pred_proba = self._predict_proba_internal(X=X, normalize=normalize, **kwargs)
970 if self.params_aux.get("temperature_scalar", None) is not None:
971 y_pred_proba = self._apply_temperature_scaling(y_pred_proba)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py:454, in BaggedEnsembleModel._predict_proba_internal(self, X, normalize, **kwargs)
453 def _predict_proba_internal(self, X, *, normalize: bool | None = None, **kwargs):
--> 454 model = self.load_child(self.models[0])
455 X = self.preprocess(X, model=model, **kwargs)
456 y_pred_proba = model.predict_proba(X=X, preprocess_nonadaptive=False, normalize=normalize)
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/core/models/ensemble/bagged_ensemble_model.py:911, in BaggedEnsembleModel.load_child(self, model, verbose)
909 if isinstance(model, str):
910 child_path = self.create_contexts(os.path.join(self.path, model))
--> 911 return self._child_type.load(path=child_path, verbose=verbose)
912 else:
913 return model
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/tabular/models/vowpalwabbit/vowpalwabbit_model.py:199, in VowpalWabbitModel.load(cls, path, reset_paths, verbose)
192 @classmethod
193 def load(cls, path: str, reset_paths=True, verbose=True):
194 """
195 There are two files which needs to be loaded.
196 First is the Abstract Model pickle dump and second is the internal model file.
197 For VW, based on different problem_type/hyperparams, loading arguments will be different
198 """
--> 199 try_import_vowpalwabbit()
200 import vowpalwabbit
202 # Load Abstract Model. This is without the internal model
File ~/miniconda3/envs/autogluon/lib/python3.11/site-packages/autogluon/common/utils/try_import.py:173, in try_import_vowpalwabbit()
169 assert vowpalwabbit_version >= parse_version("9.0.0") and vowpalwabbit_version < parse_version(
170 "9.10.0"
171 ), f"Currently, we only support vowpalwabbit version >=9.0 and <9.10. Found vowpalwabbit version: {vowpalwabbit_version}"
172 except ImportError:
--> 173 raise ImportError("`import vowpalwabbit` failed.\n" "A quick tip is to install via `pip install vowpalwabbit>=9,<9.10")
ImportError: `import vowpalwabbit` failed.
A quick tip is to install via `pip install vowpalwabbit>=9,<9.10
```
**Expected behavior**
No `ImportError`
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
show_versions()
INSTALLED VERSIONS
------------------
date : 2025-03-10
time : 14:06:25.993833
python : 3.11.10.final.0
OS : Linux
OS-release : 5.4.0-186-generic
Version : #206-Ubuntu SMP Fri Apr 26 12:31:10 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 64
cpu_ram_mb : 515973.59375
cuda version : 12.530.30.02
num_gpus : 8
gpu_ram_mb : [32497, 32497, 32497, 32497, 32497, 32497, 32497, 32497]
avail_disk_size_mb : 440183
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.37.3
catboost : 1.2.7
defusedxml : 0.7.1
evaluate : 0.4.3
fastai : 2.7.18
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.5
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.10.1
mlforecast : 0.10.0
networkx : 3.4.2
nlpaug : 1.1.11
nltk : 3.9.1
nptyping : 2.4.1
numpy : 1.26.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.18.1
optimum-intel : None
orjson : 3.10.15
pandas : 2.2.3
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 75.8.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.19.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.67.1
transformers : 4.39.3
utilsforecast : 0.0.10
vowpalwabbit : 9.0.0
xgboost : 2.0.3
```
</details>
| closed | 2025-03-10T14:15:57Z | 2025-03-11T18:49:17Z | https://github.com/autogluon/autogluon/issues/4970 | [
"module: tabular",
"bug: unconfirmed",
"Needs Triage"
] | albertcthomas | 3 |
openapi-generators/openapi-python-client | fastapi | 1,091 | allOf fails if it references a type that also uses allOf with just single item | **Describe the bug**
Conditions:
- Schema A is a type with any definition.
- Schema B contains only an `allOf` with a single element referencing Schema A.
- Schema C contains an `allOf` that 1. references Schema B and 2. adds a property.
Expected behavior:
- Spec is valid. Schema B should be treated as exactly equivalent to Schema A (in other words, C becomes an extension of A with an extra property).
Observed behavior:
- Parsing fails. Error message is "Unable to process schema <path to schema C>".
**OpenAPI Spec File**
https://gist.github.com/eli-bl/8f5c7d1d872d9fda5379fa6370dab6a8
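For quick reference, the shape of the three schemas can be sketched as plain data (schema names here are illustrative placeholders, not the ones used in the gist):

```python
# Schema A: any ordinary type; B: a single-element allOf wrapping A;
# C: an allOf that references B and adds a property (the one that fails to parse).
schemas = {
    "A": {"type": "object", "properties": {"x": {"type": "string"}}},
    "B": {"allOf": [{"$ref": "#/components/schemas/A"}]},
    "C": {
        "allOf": [
            {"$ref": "#/components/schemas/B"},
            {"type": "object", "properties": {"y": {"type": "integer"}}},
        ]
    },
}
```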
**Desktop (please complete the following information):**
- OS: macOS 14.5
- Python Version: 3.8.15
- openapi-python-client version 0.21.2
| closed | 2024-08-06T18:57:33Z | 2024-08-25T02:58:03Z | https://github.com/openapi-generators/openapi-python-client/issues/1091 | [] | eli-bl | 0 |
NullArray/AutoSploit | automation | 1,156 | Unhandled Exception (b289011b0) | Autosploit version: `3.1.2`
OS information: `Linux-4.19.0-kali5-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `[Errno 2] No such file or directory: '/root/AutoSploit/hosts.txt'`
Error traceback:
```
Traceback (most recent call):
File "/root/AutoSploit/autosploit/main.py", line 116, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/AutoSploit/lib/term/terminal.py", line 598, in terminal_main_display
self.__reload()
File "/root/AutoSploit/lib/term/terminal.py", line 72, in __reload
self.loaded_hosts = open(lib.settings.HOST_FILE).readlines()
IOError: [Errno 2] No such file or directory: '/root/AutoSploit/hosts.txt'
```
Metasploit launched: `False`
| closed | 2019-08-19T16:09:27Z | 2019-09-03T21:38:12Z | https://github.com/NullArray/AutoSploit/issues/1156 | [
"bug"
] | AutosploitReporter | 0 |
psf/black | python | 4,121 | `allow_empty_first_line_before_new_block_or_comment` can lead to inconsistent formatting | **Describe the style change**
I am working on Ruff's formatter and implementing Black's preview styles. We reviewed the `allow_empty_first_line_before_new_block_or_comment` preview style and decided not to implement it because it leads to inconsistent formatting after moving or deleting code or requires more manual intervention.
**Examples when `allow_empty_first_line_before_new_block_or_comment` is enabled**
I work on a refactoring and start with the following code:
```python
def foo():
"""
Docstring
"""
# Here we go
if x:
if no_longer_needed:
print("Some complex statements")
# This is also now fine
a = 123
```
And I delete the `no_longer_needed` branch:
```python
def foo():
"""
Docstring
"""
# Here we go
if x:
# This is also now fine
a = 123
```
Black removes the empty line above the comment when the `allow_empty_first_line_before_new_block_or_comment` style is disabled.
```python
def foo():
"""
Docstring
"""
# Here we go
if x:
# This is also now fine
a = 123
```
Black doesn't remove the empty line when `allow_empty_first_line_before_new_block_or_comment` is enabled, which either results in an unintended empty line above the comment (an inconsistency) or means I have to intervene and remove the empty line manually. This feels like something I would expect a formatter to do for me (at the cost that empty lines before comments aren't possible).
**Desired style**
To keep the non-preview formatting for comments at the start of a block.
**Additional context**
* [PR](https://github.com/psf/black/pull/3967)
* [Issue](https://github.com/psf/black/issues/3551#issuecomment-1545878067)
| closed | 2023-12-22T05:37:17Z | 2024-01-20T00:58:49Z | https://github.com/psf/black/issues/4121 | [
"T: style",
"C: preview style"
] | MichaReiser | 3 |
chatanywhere/GPT_API_free | api | 324 | Error when configuring Claude in Gomoon |

| open | 2024-11-19T09:49:37Z | 2024-11-20T06:32:16Z | https://github.com/chatanywhere/GPT_API_free/issues/324 | [] | Laohu81 | 4 |
AntonOsika/gpt-engineer | python | 565 | Use mindmap instead of prompt file | I am not sure if it's practical, but it would be great to export a mindmap (created in apps like MindMeister, SimpleMind, Freemind, or XMind) to a text file (md, json, yml, whichever works best) and use that instead of the `prompt` file.
Of course, the visual structure of the mindmap will be lost when exporting it to a text file. | closed | 2023-08-02T06:53:58Z | 2023-08-05T21:23:02Z | https://github.com/AntonOsika/gpt-engineer/issues/565 | [] | sam5epi0l | 1 |
youfou/wxpy | api | 87 | When deployed on Linux, sending group messages also sends the error output as messages | Is this caused by the log module?
It keeps repeatedly sending:
Resetting dropped connection: wx2.qq.com
Connection pool is full, discarding connection: wx2.qq.com | closed | 2017-06-19T01:39:39Z | 2017-06-29T08:45:48Z | https://github.com/youfou/wxpy/issues/87 | [] | zzir | 2 |
google-deepmind/sonnet | tensorflow | 13 | Config issue on macOS | Simply following the instructions produces the following error at the `./configure` step on macOS:
Please specify the location of python. [Default is /usr/local/bin/python]:
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
sed: can't read : No such file or directory | closed | 2017-04-10T08:15:14Z | 2017-04-10T19:17:25Z | https://github.com/google-deepmind/sonnet/issues/13 | [] | davidcittadini | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 79 | Training strategies of the three models provided | Do you train RIFE, RIFE2F, and RIFE_HD with the same training strategy, e.g., the same learning rate, optimizer, and number of training epochs? | closed | 2020-12-23T04:54:51Z | 2021-02-11T03:10:49Z | https://github.com/hzwer/ECCV2022-RIFE/issues/79 | [] | tqyunwuxin | 5 |
holoviz/colorcet | plotly | 4 | Add named_palettes | Widget-based applications very often need to provide a list of colormaps/palettes for the user to select via a widget. The complete list of colorcet palettes is unwieldy for such a purpose, and the names are obscure, but there is a subset that has readable names that is probably also the most commonly needed set. Should add some code like the following to easily provide a list of useful colormaps, with a useful default at the front:
```python
from collections import OrderedDict

from colorcet import palette

def odict_to_front(odict, key):
    """Given an OrderedDict, move the item with the given key to the front."""
    front_item = [(key, odict[key])]
    other_items = [(k, v) for k, v in odict.items() if k != key]
    return OrderedDict(front_item + other_items)

default_palette = "fire"
named_palettes = {k: p for k, p in palette.items() if '_' not in k}
sorted_palettes = OrderedDict(sorted(named_palettes.items()))
typical_palettes = odict_to_front(sorted_palettes, default_palette)
```
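For a quick sanity check, the reordering helper can be exercised standalone (no colorcet dependency; note the plain `!=` key comparison so it works for arbitrary key objects, not just interned strings):

```python
from collections import OrderedDict

def odict_to_front(odict, key):
    """Move the item with the given key to the front of an OrderedDict."""
    front_item = [(key, odict[key])]
    other_items = [(k, v) for k, v in odict.items() if k != key]
    return OrderedDict(front_item + other_items)

# Tiny stand-in for the palette dict, with "fire" moved to the front.
d = OrderedDict([("bgy", 1), ("fire", 2), ("gray", 3)])
print(list(odict_to_front(d, "fire")))  # → ['fire', 'bgy', 'gray']
```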
| closed | 2017-05-19T14:59:49Z | 2017-09-25T21:34:12Z | https://github.com/holoviz/colorcet/issues/4 | [] | jbednar | 0 |
replicate/cog | tensorflow | 1497 | Allow cache type mount | I'd like to add a pip install command to the `run` field of the `cog.yaml` file. To optimize this further, I tried adding a `cache`-type mount with target set to `/root/.cache/pip`. When I tried building, this was the error output:
```
ⅹ There is a problem in your cog.yaml file.
ⅹ build.run.0.mounts.0.type must be one of the following: "secret".
ⅹ
ⅹ To see what options you can use, take a look at the docs:
ⅹ https://github.com/replicate/cog/blob/main/docs/yaml.md
ⅹ
ⅹ You might also need to upgrade Cog, if this option was added in a
ⅹ later version of Cog.
```
I am running cog v0.9.4 | open | 2024-01-25T15:44:54Z | 2024-01-25T15:44:54Z | https://github.com/replicate/cog/issues/1497 | [] | hazelnutcloud | 0 |
huggingface/peft | pytorch | 2,381 | Bug when deleting adapters of a model with modules_to_save | ### System Info
All PEFT versions.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
model_id = "facebook/opt-125m"
config = LoraConfig(task_type="SEQ_CLS")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
adapter_to_delete = "delete_me"
model = get_peft_model(model, config)
model.add_adapter(adapter_to_delete, config)
# sanity check
assert "delete_me" in model.base_model.model.score.modules_to_save
model.delete_adapter(adapter_to_delete)
assert "delete_me" not in model.base_model.model.score.modules_to_save
```
### Expected behavior
When adding, say, a LoRA adapter with `modules_to_save`, then deleting the adapter, the LoRA part is correctly removed but the `modules_to_save` part is not removed. | open | 2025-02-17T11:22:34Z | 2025-03-19T15:09:09Z | https://github.com/huggingface/peft/issues/2381 | [
"bug",
"wip"
] | BenjaminBossan | 1 |
slackapi/bolt-python | fastapi | 374 | Why am I getting a 'not_allowed_token_type' error from the Slack API? | I am trying to open a modal when a user clicks my app's **Global shortcut.**

But when I try, I receive the following error message:
```
127.0.0.1 - - [08/Jun/2021 15:54:37] "POST /slack/events HTTP/1.1" 200 -
Error creating conversation: The request to the Slack API failed.
The server responded with: {'ok': False, 'error': 'not_allowed_token_type'}
```
I'm not sure what the source of the issue is, since I am able to successfully use a separate function (`update`) to collect the text of a slack message when a user mentions my app (see `@app.event("app_mention")`.
Note: In both Interactivity & Shortcuts and Event Subscriptions, the Request URL is my ngrok.io link at `/slack/events` (i.e. `https://...ngrok.io/slack/events`).
I also checked previously logged issues in this GitHub repo and saw [this](https://github.com/slackapi/bolt-python/issues/305). Afterwards, I checked the `views.open` method page [here](https://api.slack.com/methods/views.open) and saw that it works with both **bot** and **user** token types with no scopes required.

What am I doing wrong?
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.6.1
slack-sdk==3.6.0
slackclient==2.9.3
slackeventsapi==2.2.1
#### Python runtime version
Python 3.9.2
#### OS info
ProductName: macOS
ProductVersion: 11.3.1
BuildVersion: 20E241
Darwin Kernel Version 20.4.0: Thu Apr 22 21:46:47 PDT 2021; root:xnu-7195.101.2~1/RELEASE_X86_64
#### Steps to reproduce:
1. source venv/bin/activate
2. python3 main.py
3. ./ngrok http 3000
4. Click shortcut using Bolt icon.

**Source code:**
.env
```bash
export MONDAY_API_KEY="..."
export SLACK_BOT_TOKEN="xoxb-..."
export SLACK_SIGNING_SECRET="..."
```
main.py
```python
import os
# Use the package we installed
from slack_bolt import App
from slack_sdk.errors import SlackApiError
import requests
import json
# Initializes your app with your bot token and signing secret
app = App(
token=os.environ.get("SLACK_BOT_TOKEN"),
signing_secret=os.environ.get("SLACK_SIGNING_SECRET")
)
@app.event("app_mention")
def update(client, event, logger):
try:
apiKey = os.environ.get("MONDAY_API_KEY")
apiUrl = "https://api.monday.com/v2"
headers = {"Authorization" : apiKey}
board_id = "1029994357"
query5 = 'mutation ($myItemName: String!, $columnVals: JSON!) { create_item (board_id:'+board_id+', item_name:$myItemName, column_values:$columnVals) { id } }'
vars = {
'myItemName' : event["blocks"][0]["elements"][0]["elements"][1]["text"]
}
data = {'query' : query5, 'variables' : vars}
r = requests.post(url=apiUrl, json=data, headers=headers) # make request
print(r.json())
except Exception as e:
logger.error(f"Error publishing home tab: {e}")
# The open_modal shortcut listens to a shortcut with the callback_id "open_modal"
@app.shortcut("file_bug")
def open_modal(ack, shortcut, client, logger, body):
# Acknowledge the shortcut request
ack()
try:
# Call the views_open method using the built-in WebClient
api_response = client.views_open(
trigger_id=shortcut["trigger_id"],
# A simple view payload for a modal
view=
{
"title": {
"type": "plain_text",
"text": "File a bug",
"emoji": True
},
"submit": {
"type": "plain_text",
"text": "Submit",
"emoji": True
},
"type": "modal",
"close": {
"type": "plain_text",
"text": "Cancel",
"emoji": True
},
"blocks": [
{
"type": "section",
"text": {
"type": "plain_text",
"text": "This form will submit your bug to the #dev-bugs board on monday.com",
"emoji": True
}
}
                ]
            }
        )
logger.info(api_response)
except SlackApiError as e:
logger.error("Error creating conversation: {}".format(e))
# Start your app
if __name__ == "__main__":
app.start(port=int(os.environ.get("PORT", 3000)))
```
### Expected result:
Modal is opened, as shown in ["Modal Preview"](https://app.slack.com/block-kit-builder/TMDTWEGDN#%7B%22title%22:%7B%22type%22:%22plain_text%22,%22text%22:%22Add%20info%20to%20feedback%22,%22emoji%22:true%7D,%22submit%22:%7B%22type%22:%22plain_text%22,%22text%22:%22Save%22,%22emoji%22:true%7D,%22type%22:%22modal%22,%22blocks%22:%5B%5D%7D) in Block Kit Builder.
### Actual result:
Error message in terminal tab running bolt app:
```
127.0.0.1 - - [08/Jun/2021 15:54:37] "POST /slack/events HTTP/1.1" 200 -
Error creating conversation: The request to the Slack API failed.
The server responded with: {'ok': False, 'error': 'not_allowed_token_type'}
```
Error message in Slack app:

| closed | 2021-06-08T20:16:38Z | 2021-06-09T18:21:33Z | https://github.com/slackapi/bolt-python/issues/374 | [
"question"
] | chxw | 4 |
aleju/imgaug | deep-learning | 429 | Support for RGBD Images | I have 4-channel RGBD images of type float32. How does imgaug work on these images?
Is the depth channel automatically treated as a heatmap, augmented, and then concatenated back?
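For context, this is the behaviour I'm hoping for, sketched in plain numpy with a horizontal flip standing in for an imgaug augmenter (the point being that the same geometric transform must hit both the RGB and depth channels before re-concatenating):

```python
import numpy as np

# Toy 4-channel RGBD image, float32 in [0, 1].
rgbd = np.random.rand(32, 32, 4).astype(np.float32)
rgb, depth = rgbd[:, :, :3], rgbd[:, :, 3:]

# Stand-in for an augmenter: the identical geometric op is applied to both parts.
flip = lambda a: a[:, ::-1, :]
rgb_aug, depth_aug = flip(rgb), flip(depth)

# Re-attach depth to get a 4-channel float32 result again.
rgbd_aug = np.concatenate([rgb_aug, depth_aug], axis=-1)
```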
Thank you | open | 2019-09-17T08:27:16Z | 2019-09-18T18:47:26Z | https://github.com/aleju/imgaug/issues/429 | [] | ApoorvaSuresh | 1 |
Textualize/rich | python | 3,134 | [BUG] Current window width is not respected in VS Code Jupyter | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
`rich` doesn't respect the current window width when running in VS Code Jupyter. (See the horizontal slider)

Here's the code snippet to reproduce:
```python
import logging
from rich.logging import RichHandler
FORMAT = "%(message)s"
logging.basicConfig(
level="WARNING",
format=FORMAT,
datefmt="[%X]",
handlers=[RichHandler(rich_tracebacks=True)]
)
log = logging.getLogger("test")
log.warning("This is a warning message!")
```
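For now I can work around it by pinning the width myself via the documented `console` argument of `RichHandler`, though that of course defeats the purpose of auto-detection:

```python
from rich.console import Console
from rich.logging import RichHandler

# Workaround sketch: force a fixed console width instead of relying on detection.
handler = RichHandler(console=Console(width=80), rich_tracebacks=True)
```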
**Platform**
<details>
<summary>Click to expand</summary>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #000080; text-decoration-color: #000080">╭────────────────────── </span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold"><</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">class</span><span style="color: #000000; text-decoration-color: #000000"> </span><span style="color: #008000; text-decoration-color: #008000">'rich.console.Console'</span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold">></span><span style="color: #000080; text-decoration-color: #000080"> ──────────────────────╮</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">A high level console interface.</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">╭────────────────────────────────────────────────────────────────────────╮</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">│</span> <span style="font-weight: bold"><</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">console</span><span style="color: #000000; text-decoration-color: #000000"> </span><span style="color: #808000; text-decoration-color: #808000">width</span><span style="color: #000000; text-decoration-color: #000000">=</span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">115</span><span style="color: #000000; text-decoration-color: #000000"> ColorSystem.TRUECOLOR</span><span style="font-weight: bold">></span> <span style="color: #008000; text-decoration-color: #008000">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">╰────────────────────────────────────────────────────────────────────────╯</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">color_system</span> = <span style="color: #008000; text-decoration-color: #008000">'truecolor'</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">encoding</span> = <span style="color: #008000; text-decoration-color: #008000">'utf-8'</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">file</span> = <span style="font-weight: bold"><</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">ipykernel.iostream.OutStream</span><span style="color: #000000; text-decoration-color: #000000"> object at </span><span style="color: #008080; text-decoration-color: #008080; font-weight: bold">0x7fc8e533d570</span><span style="font-weight: bold">></span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">height</span> = <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">100</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">is_alt_screen</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">is_dumb_terminal</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">is_interactive</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">is_jupyter</span> = <span style="color: #00ff00; text-decoration-color: #00ff00; font-style: italic">True</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">is_terminal</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">legacy_windows</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">no_color</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">options</span> = <span style="color: #800080; text-decoration-color: #800080; font-weight: bold">ConsoleOptions</span><span style="font-weight: bold">(</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">size</span>=<span style="color: #800080; text-decoration-color: #800080; font-weight: bold">ConsoleDimensions</span><span style="font-weight: bold">(</span><span style="color: #808000; text-decoration-color: #808000">width</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">115</span>, <span style="color: #808000; text-decoration-color: #808000">height</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">100</span><span style="font-weight: bold">)</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">legacy_windows</span>=<span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">min_width</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">1</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">max_width</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">115</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">is_terminal</span>=<span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">encoding</span>=<span style="color: #008000; text-decoration-color: #008000">'utf-8'</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">max_height</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">100</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">justify</span>=<span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">overflow</span>=<span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">no_wrap</span>=<span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">highlight</span>=<span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">markup</span>=<span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000">height</span>=<span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="font-weight: bold">)</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">quiet</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">record</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">safe_box</span> = <span style="color: #00ff00; text-decoration-color: #00ff00; font-style: italic">True</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">size</span> = <span style="color: #800080; text-decoration-color: #800080; font-weight: bold">ConsoleDimensions</span><span style="font-weight: bold">(</span><span style="color: #808000; text-decoration-color: #808000">width</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">115</span>, <span style="color: #808000; text-decoration-color: #808000">height</span>=<span style="color: #008080; text-decoration-color: #008080; font-weight: bold">100</span><span style="font-weight: bold">)</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">soft_wrap</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">stderr</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">style</span> = <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">tab_size</span> = <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">8</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">width</span> = <span style="color: #008080; text-decoration-color: #008080; font-weight: bold">115</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">╰────────────────────────────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #000080; text-decoration-color: #000080">╭─── </span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold"><</span><span style="color: #ff00ff; text-decoration-color: #ff00ff; font-weight: bold">class</span><span style="color: #000000; text-decoration-color: #000000"> </span><span style="color: #008000; text-decoration-color: #008000">'rich._windows.WindowsConsoleFeatures'</span><span style="color: #000080; text-decoration-color: #000080; font-weight: bold">></span><span style="color: #000080; text-decoration-color: #000080"> ────╮</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008080; text-decoration-color: #008080">Windows features available.</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">╭───────────────────────────────────────────────────╮</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">│</span> <span style="color: #800080; text-decoration-color: #800080; font-weight: bold">WindowsConsoleFeatures</span><span style="font-weight: bold">(</span><span style="color: #808000; text-decoration-color: #808000">vt</span>=<span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span>, <span style="color: #808000; text-decoration-color: #808000">truecolor</span>=<span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span><span style="font-weight: bold">)</span> <span style="color: #008000; text-decoration-color: #008000">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #008000; text-decoration-color: #008000">╰───────────────────────────────────────────────────╯</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">truecolor</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">│</span> <span style="color: #808000; text-decoration-color: #808000; font-style: italic">vt</span> = <span style="color: #ff0000; text-decoration-color: #ff0000; font-style: italic">False</span> <span style="color: #000080; text-decoration-color: #000080">│</span>
<span style="color: #000080; text-decoration-color: #000080">╰───────────────────────────────────────────────────────╯</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">╭────── <span style="font-weight: bold">Environment Variables</span> ───────╮
│ <span style="font-weight: bold">{</span> │
│ <span style="color: #008000; text-decoration-color: #008000">'TERM'</span>: <span style="color: #008000; text-decoration-color: #008000">'xterm-color'</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'COLORTERM'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'CLICOLOR'</span>: <span style="color: #008000; text-decoration-color: #008000">'1'</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'NO_COLOR'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'TERM_PROGRAM'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'COLUMNS'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'LINES'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'JUPYTER_COLUMNS'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'JUPYTER_LINES'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'JPY_PARENT_PID'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span>, │
│ <span style="color: #008000; text-decoration-color: #008000">'VSCODE_VERBOSE_LOGGING'</span>: <span style="color: #800080; text-decoration-color: #800080; font-style: italic">None</span> │
│ <span style="font-weight: bold">}</span> │
╰────────────────────────────────────╯
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="color: #808000; text-decoration-color: #808000">platform</span>=<span style="color: #008000; text-decoration-color: #008000">"Linux"</span>
</pre>
</details>
| closed | 2023-09-25T19:23:23Z | 2023-09-25T20:02:04Z | https://github.com/Textualize/rich/issues/3134 | [
"Needs triage"
] | ma-sadeghi | 4 |
MaartenGr/BERTopic | nlp | 1,686 | Zero Shot Modelling | Is there a way to have the model classify documents only into the zero-shot topic categories, instead of outputting a mix of both? | closed | 2023-12-11T18:28:46Z | 2023-12-11T20:46:29Z | https://github.com/MaartenGr/BERTopic/issues/1686 | [] | srikantamehta | 1 |
jstrieb/github-stats | asyncio | 69 | Bot Can't Commit |
### I have done everything the tutorial says, but it seems like the bot can't access the repo...

| closed | 2022-05-02T13:07:18Z | 2022-05-08T18:35:42Z | https://github.com/jstrieb/github-stats/issues/69 | [] | AfonsoBatista7 | 1 |
plotly/dash | plotly | 2,715 | [BUG] | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 0.42.0
dash-core-components 0.47.0
dash-html-components 0.16.0
dash-renderer 0.23.0
dash-table 3.6.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Describe the bug**
A clear and concise description of what the bug is.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
| closed | 2023-12-20T19:00:00Z | 2023-12-21T17:14:11Z | https://github.com/plotly/dash/issues/2715 | [] | dlikemobile26 | 1 |
home-assistant/core | asyncio | 140,876 | Reolink tests failing | ### The problem
The tests for reolink integration are failing on dev.
### What version of Home Assistant Core has the issue?
core-dev
### What was the last working version of Home Assistant Core?
core-76aef5b
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
reolink
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/reolink/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
FAILED tests/components/reolink/test_binary_sensor.py::test_smart_ai_sensor - AttributeError: 'NoneType' object has no attribute 'state'
```
### Additional information
Seems to be caused by https://github.com/home-assistant/core/pull/140408 | closed | 2025-03-18T15:12:25Z | 2025-03-18T15:44:41Z | https://github.com/home-assistant/core/issues/140876 | [
"integration: reolink"
] | Taraman17 | 4 |
sinaptik-ai/pandas-ai | pandas | 1,053 | No results returned error is raised often when generating dataframes | ### System Info
Azure OpenAI
Pandas AI version 2.012
ChatGPT version 3.5 Turbo
### 🐛 Describe the bug
When submitting a prompt that asks for the results to be shown as a table, I often get a "No result returned" error.
For example, for the prompt:
**show me the number of employees hired by year as a table**
I get this log:
2024-03-19 16:14:27 [INFO] Code generated:
```
import pandas as pd

# Filter the dataframe to include only the relevant columns
df_filtered = dfs[0][['Employee start date']]
# Extract the year from the 'Employee start date' column
df_filtered['Year'] = df_filtered['Employee start date'].dt.year
# Group the data by year and count the number of employees hired in each year
df_grouped = df_filtered.groupby('Year').size().reset_index(name='Number of Employees Hired')
# Sort the data by year in ascending order
df_sorted = df_grouped.sort_values('Year')
# Display the table
df_sorted
```
2024-03-19 16:14:27 [INFO]
Code running:
```
df_filtered = dfs[0][['Employee start date']]
df_filtered['Year'] = df_filtered['Employee start date'].dt.year
df_grouped = df_filtered.groupby('Year').size().reset_index(name='Number of Employees Hired')
df_sorted = df_grouped.sort_values('Year')
df_sorted
```
```
<string>:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
2024-03-19 16:14:27 [ERROR] Failed with error: Traceback (most recent call last):
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\chat\code_execution.py", line 64, in execute
    result = code_manager.execute_code(code_to_run, code_context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\helpers\code_manager.py", line 211, in execute_code
    raise NoResultFoundError("No result returned")
pandasai.exceptions.NoResultFoundError: No result returned
2024-03-19 16:14:27 [ERROR] Pipeline failed on step 5: No result returned
Traceback (most recent call last):
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\chat\generate_chat_pipeline.py", line 281, in run
    output = (self.code_generation_pipeline | self.code_execution_pipeline).run(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\pipeline.py", line 137, in run
    raise e
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\pipeline.py", line 101, in run
    step_output = logic.execute(
                  ^^^^^^^^^^^^^^
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\chat\code_execution.py", line 93, in execute
    raise e
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\pipelines\chat\code_execution.py", line 64, in execute
    result = code_manager.execute_code(code_to_run, code_context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SFDVHE\AppData\Local\anaconda3\envs\pandasAIV20\Lib\site-packages\pandasai\helpers\code_manager.py", line 211, in execute_code
    raise NoResultFoundError("No result returned")
pandasai.exceptions.NoResultFoundError: No result returned
``` | closed | 2024-03-19T15:21:05Z | 2024-08-27T16:05:55Z | https://github.com/sinaptik-ai/pandas-ai/issues/1053 | [] | epicvhbennetts | 1 |
Textualize/rich | python | 3,075 | Width measurement of align object | **Describe the bug**
`__rich_measure__` returns an invalid measurement for an `Align` object: the object's width shouldn't simply be the width of the renderable passed in.
<img width="782" alt="Screenshot 2023-08-05 at 5 54 36 PM" src="https://github.com/Textualize/rich/assets/67282231/518151c9-896c-4ac1-a232-d70ac83e318f">
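A minimal sketch that reproduces the measurement described above (assumes `rich` is importable; the string being aligned is arbitrary):

```python
from rich.align import Align
from rich.console import Console
from rich.measure import Measurement

console = Console(width=80)
# Measure the Align wrapper itself. Per the report, this comes back as the
# width of the inner renderable ("hi"), not the full available width.
measurement = Measurement.get(console, console.options, Align.center("hi"))
print(measurement)
```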
| closed | 2023-08-05T12:26:45Z | 2024-08-26T14:50:01Z | https://github.com/Textualize/rich/issues/3075 | [
"Needs triage"
] | Aradhya-Tripathi | 4 |
rpicard/explore-flask | flask | 105 | A problem with Blueprints subdomain setup | Hi, I just found a small problem with Blueprint subdomains.
According to a Stack Overflow answer, if you use a subdomain setup, you have to configure `SERVER_NAME` and set a default subdomain, such as "www":
```
app.config['SERVER_NAME'] = "localhost:5000"
app.url_map.default_subdomain = "www"
```
Without the configuration above,
`app.register_blueprint(site, subdomain='<site_subdomain>')`
would not work.
Maybe I am wrong, but I tested it, and the subdomain did not work unless I configured `SERVER_NAME` and `default_subdomain`.
My point is that in your book's section about subdomains, the code does not configure `SERVER_NAME` or `default_subdomain`, yet it seems to still work.
Would you like to discuss the subdomain issue in more detail? Thank you.
| open | 2016-09-18T06:43:23Z | 2018-11-01T05:21:54Z | https://github.com/rpicard/explore-flask/issues/105 | [] | medmin | 1 |
thtrieu/darkflow | tensorflow | 968 | Cannot Train My Own Models - Layer [convolutional]1 not implemented | python flow --model cfg/tiny-yolo-voc-1c.cfg --load bin/tiny-yolo-voc.weights --train --annotation annotations --dataset AL_ready_seen --epoch 400
When executed, the command above returns:
Parsing ./cfg/tiny-yolo-voc.cfg
Parsing cfg/tiny-yolo-voc-1c.cfg
Layer [convolutional]1 not implemented
And then it stops. | open | 2019-01-15T17:17:36Z | 2019-05-14T10:20:32Z | https://github.com/thtrieu/darkflow/issues/968 | [] | AndrejHatzi | 1 |
miLibris/flask-rest-jsonapi | sqlalchemy | 135 | Reraise JsonApiException instead of creating a new one | Sometimes it may be convenient to delegate validation to an SQLAlchemy event and raise `JsonApiException` from there. But with the current implementation in `datalayer/sql_alchemy`, the raised exception is transformed into a new `JsonApiException` again.
It would be nice, when a `JsonApiException` is raised, to re-raise it without any change.
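A minimal sketch of the proposed behavior (the exception class here is a simplified stand-in for `flask_rest_jsonapi.exceptions.JsonApiException`, and the function names are illustrative):

```python
class JsonApiException(Exception):
    """Simplified stand-in for flask_rest_jsonapi.exceptions.JsonApiException."""


def call_data_layer(func):
    """Run a data-layer operation, re-raising JsonApiException untouched."""
    try:
        return func()
    except JsonApiException:
        raise  # re-raise as-is: keep the status/title/detail the caller chose
    except Exception as exc:
        # Only genuinely unexpected errors get wrapped into a new exception.
        raise JsonApiException(str(exc)) from exc


def validating_event_hook():
    # e.g. an SQLAlchemy before_flush listener doing validation
    raise JsonApiException("validation failed in before_flush")


try:
    call_data_layer(validating_event_hook)
except JsonApiException as err:
    print(err)
```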
PR coming. | closed | 2018-12-28T15:56:23Z | 2019-01-22T14:00:24Z | https://github.com/miLibris/flask-rest-jsonapi/issues/135 | [] | kumy | 0 |
slackapi/python-slack-sdk | asyncio | 962 | Python package should use dashes in name | Python packaging naming conventions require dashes instead of underscores in package names. So while the Slack documentation asks the user to install `slack_api`, that's not the actual package name on PyPI or in pip or elsewhere in the Python packaging systems. This can lead to confusion. Please change the name of the package from `slack_api` to `slack-api` to avoid this confusion going forward. | closed | 2021-02-17T22:49:17Z | 2021-09-09T03:22:22Z | https://github.com/slackapi/python-slack-sdk/issues/962 | [
"question"
] | kislyuk | 4 |
pallets/flask | flask | 5,084 | Cannot import Markup from flask | `from flask import Markup` and `flask.Markup` both don’t work. This is merely supposed to be deprecated, not broken, in Flask 2.3.0.
Example A:
```python
import flask
print(flask.Markup('hi'))
```
```
Traceback (most recent call last):
File "/tmp/flask230/1.py", line 3, in <module>
print(flask.Markup('hi'))
File "/tmp/flask230/venv/lib/python3.10/site-packages/flask/__init__.py", line 102, in __getattr__
raise AttributeError(name)
AttributeError: Markup
```
Example B:
```python
from flask import Markup
print(Markup('hi'))
```
```
Traceback (most recent call last):
File "/tmp/flask230/2.py", line 1, in <module>
from flask import Markup
ImportError: cannot import name 'Markup' from 'flask' (/tmp/flask230/venv/lib/python3.10/site-packages/flask/__init__.py)
```
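Since Flask 2.3 the re-export is gone entirely; the replacement is to import directly from MarkupSafe, which Flask already depends on. A quick sketch of the workaround:

```python
from markupsafe import Markup, escape

trusted = Markup("<b>hi</b>")    # marked safe: rendered as-is in templates
untrusted = escape("<b>hi</b>")  # HTML-escaped instead

print(trusted)
print(untrusted)
```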
Environment:
- Python version: 3.10.10
- Flask version: 2.3.0 | closed | 2023-04-25T19:11:36Z | 2023-04-25T21:36:33Z | https://github.com/pallets/flask/issues/5084 | [] | lucaswerkmeister | 2 |
RomelTorres/alpha_vantage | pandas | 261 | API was typed wrong, yet still able to get data | I am building a Tkinter GUI for foreign exchange using the Alpha Vantage API. The issue started here:
```
if cp.verifyex('FXapi'):
root.mainloop()
else:
root.withdraw()
apifx = simpledialog.askstring('FX Currency Exchange', 'Please key in your API:', parent = root, show = '*')
if apifx:
mapi = cp(apifx)
mapi.createcpd('FXapi')
try:
cc = ForeignExchange(cp.readcpd('FXapi'))
testing = cc.get_currency_exchange_rate(from_currency = 'USD', to_currency = 'USD')
print(testing)
del cc
del testing
root.deiconify()
root.mainloop()
except:
messagebox.showinfo('FX Currency Exchange', 'Create API first!\nGet it:\nhttps://www.alphavantage.co/')
root.destroy()
else:
messagebox.showinfo('FX Currency Exchange', 'Create API first!\nGet it:\nhttps://www.alphavantage.co/')
root.destroy()
```
I typed the API key wrongly, yet when requesting the currency exchange rate, I still got a result. Can you explain? Thank you in advance 🙏.
| closed | 2020-10-07T05:48:12Z | 2020-10-07T23:53:23Z | https://github.com/RomelTorres/alpha_vantage/issues/261 | [] | kakkarja | 1 |
awesto/django-shop | django | 8 | Order of methods should be Djangonic | The order of stuff on the Models is all wrong:
http://docs.djangoproject.com/en/1.2/internals/contributing/#model-style
| closed | 2011-02-14T09:22:32Z | 2011-02-14T10:44:43Z | https://github.com/awesto/django-shop/issues/8 | [] | chrisglass | 1 |
onnx/onnx | pytorch | 6,192 | The model is converted to onnx format using dynamic batch precision collapse | Here is the code I used to convert the model to onnx format, where I enabled dynamic batch
```python
def export_onnx(onnx_model_name,net,input_dicts:dict,output_name:list,dynamic=True,simplify=False):
'''
input_dicts = {"input1":(1,3,224,224),"input2":(1,3,224,224),...}
output_name = {"output1","output2",...}
'''
inp = (torch.randn(i) for i in input_dicts.values())
# inp.to('cuda')
# net.to('cuda')
# net=torch.jit.script(net)
net.eval()
# generate ONNX model
input_name = [i for i in input_dicts]
all_names = input_name+output_name
dynamic_axes={}
for i in all_names:
dynamic_axes[i]={0:'batch'}
# Export the model
torch.onnx.export(net, # model being run
tuple(inp), # model input (or a tuple for multiple inputs)
onnx_model_name, # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=17, # the ONNX version to export the model to
# do_constant_folding=True, # whether to execute constant folding for optimization
input_names = [i for i in input_name], # the model's input names
output_names = [i for i in output_name], # the model's output names
dynamic_axes=dynamic_axes if dynamic else None,
)
print(dynamic_axes)
onnx_model = onnx.load(onnx_model_name)
onnx.checker.check_model(onnx_model)
# Verify the precision difference between PyTorch and ONNX Runtime
ort_session = onnxruntime.InferenceSession(onnx_model_name)
def to_numpy(tensor):
return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()
# compute ONNX Runtime output prediction
ort_inputs = {i.name:to_numpy(torch.randn(input_dicts[i.name])) for i in ort_session.get_inputs()}
ort_outs = ort_session.run(None, ort_inputs)
net_input=[torch.tensor(i) for i in ort_inputs.values()]
torch_out = net(*net_input)
# compare ONNX Runtime and PyTorch results
np.testing.assert_allclose(to_numpy(torch_out[0]), ort_outs[0][0], rtol=1e-04, atol=1e-05)
# np.testing.assert_allclose(to_numpy(torch_out[1]), ort_outs[1], rtol=1e-03, atol=1e-05)
print("Exported model has been tested with ONNXRuntime, and the result looks good!")
# Optionally simplify the ONNX model
if simplify:
try:
model_onnx, check = onnxsim.simplify(onnx_model_name)
assert check, 'assert check failed'
onnx.save(model_onnx, onnx_model_name)
except Exception as e:
print(f'simplifier failure: {e}')
input_dicts = {
"input":(1,3,448,448),
}
export_onnx(onnx_model_name = 'model.onnx',net = net,input_dicts=input_dicts,output_name=['output'],dynamic=True,simplify=True)
```
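Since the script above already validates the ONNX output against PyTorch with `assert_allclose`, the export itself is likely numerically fine. A frequent cause of an accuracy collapse like this is preprocessing drift between the PyTorch and ONNX Runtime inference paths. A hedged sketch of keeping the two consistent (the mean/std values are assumptions, not taken from the model above):

```python
import numpy as np

# Hypothetical normalization constants: substitute whatever the training
# pipeline actually used, since a mismatch here silently destroys accuracy.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)


def preprocess(img_uint8: np.ndarray) -> np.ndarray:
    """HWC uint8 image -> NCHW float32 batch, matching the training pipeline."""
    x = img_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    x = np.transpose(x, (2, 0, 1))  # HWC -> CHW
    return x[None, ...]             # add batch dim -> NCHW


batch = preprocess(np.zeros((448, 448, 3), dtype=np.uint8))
print(batch.shape)
```

Feeding the same `preprocess` output to both the PyTorch model and the ONNX Runtime session removes one common source of divergence.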
But when I run inference with the converted model, the accuracy is only 15%. What could cause this problem?
| open | 2024-06-19T02:21:32Z | 2024-06-20T12:37:53Z | https://github.com/onnx/onnx/issues/6192 | [
"question"
] | ffxxjj | 8 |
ultralytics/ultralytics | python | 18,771 | yolov11 train | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
```
Exception ignored in: <function InfiniteDataLoader.__del__ at 0x000001CC205E6700>
Traceback (most recent call last):
  File "G:\Learning\yolo\yolov11_ws\ultralytics\ultralytics\data\build.py", line 52, in __del__
    if hasattr(self.iterator, "_workers"):
AttributeError: 'InfiniteDataLoader' object has no attribute 'iterator'
```
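The traceback above is the classic pattern of `__del__` touching an attribute that was never assigned: `hasattr(self.iterator, ...)` itself raises because `self.iterator` does not exist when `__init__` never reached the point of setting it. A minimal defensive sketch (a toy class, not the real Ultralytics implementation):

```python
class InfiniteDataLoaderSketch:
    """Toy stand-in illustrating a defensive __del__."""

    def __init__(self, start_iterating: bool = False):
        if start_iterating:
            self.iterator = iter([])  # placeholder for the real worker iterator

    def __del__(self):
        # getattr with a default avoids AttributeError when the object is
        # torn down before self.iterator was ever assigned.
        it = getattr(self, "iterator", None)
        if it is not None and hasattr(it, "_workers"):
            pass  # real code would shut down the workers here


loader = InfiniteDataLoaderSketch()
del loader  # no "Exception ignored in __del__" message
print("clean teardown")
```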
### Additional
_No response_ | closed | 2025-01-20T08:50:55Z | 2025-01-21T09:37:28Z | https://github.com/ultralytics/ultralytics/issues/18771 | [
"question",
"fixed",
"detect"
] | wuliu-G | 3 |
chiphuyen/stanford-tensorflow-tutorials | tensorflow | 109 | Model training is taking time | I am trying to train the model by running the chatbot.py file. I am using an AWS GPU p2.xlarge (61 GB memory, Ubuntu 16) instance. It's been 19 hours, and I am wondering if I should keep waiting or if there is anything else I should try. TIA. | open | 2018-04-04T17:28:15Z | 2018-04-11T12:49:54Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/109 | [] | poojashah89 | 1 |
yt-dlp/yt-dlp | python | 12,545 | Error Downloading Facebook /share/ URL | ### Checklist
- [x] I'm reporting a bug unrelated to a specific site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
### Provide a description that is worded well enough to be understood
App version: 1.13.1 (11312)
Device information: Android 13 (API 33)
Supported ABIs: [arm64-v8a, armeabi-v7a, armeabi]
Yt-dlp version: 2025.03.05.232947
URL: https://www.facebook.com/share/p/18mdUVDAt8/
ERROR: [facebook] 1681602802426818: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
App version: 1.13.1 (11312)
Device information: Android 13 (API 33)
Supported ABIs: [arm64-v8a, armeabi-v7a, armeabi]
Yt-dlp version: 2025.03.05.232947
URL: https://www.facebook.com/share/p/18mdUVDAt8/
ERROR: [facebook] 1681602802426818: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
``` | closed | 2025-03-06T06:28:16Z | 2025-03-10T21:53:33Z | https://github.com/yt-dlp/yt-dlp/issues/12545 | [
"incomplete",
"site-bug",
"triage"
] | rizanzan9283 | 4 |
Urinx/WeixinBot | api | 97 | When calling webwxinit, the POST response is always empty | open | 2016-10-11T10:48:00Z | 2016-10-11T10:48:00Z | https://github.com/Urinx/WeixinBot/issues/97 | [] | tianser | 0 |
ray-project/ray | machine-learning | 51,478 | CI test windows://python/ray/serve/tests:test_telemetry_1 is consistently_failing | CI test **windows://python/ray/serve/tests:test_telemetry_1** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aac2-9ab7-4db5-a5c8-93acf663bf5d
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c50-447f-9210-614604a63e49
DataCaseName-windows://python/ray/serve/tests:test_telemetry_1-END
Managed by OSS Test Policy | closed | 2025-03-18T22:41:42Z | 2025-03-19T19:59:48Z | https://github.com/ray-project/ray/issues/51478 | [
"bug",
"triage",
"serve",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
zihangdai/xlnet | nlp | 45 | Getting the following error when trying to run tpu_squad_large.sh | ```
W0624 16:40:52.848234 140595823699392 __init__.py:44] file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect
from . import file_cache
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module>
'file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth')
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
I0624 16:40:53.032814 140595823699392 model_utils.py:32] Use TPU without distribute strategy.
W0624 16:40:53.034595 140595823699392 estimator.py:1924] Estimator's model_fn (<function model_fn at 0x7fded610ded8>) includes params argument, but params are not passed to Estimator.
I0624 16:40:53.035511 140595823699392 estimator.py:201] Using config: {'_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_train_distribute': None, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fded610c2d0>, '_model_dir': 'gs://question-answering/experiment/squad', '_protocol': None, '_save_checkpoints_steps': 1000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_tf_random_seed': None, '_save_summary_steps': 100, '_device_fn': None, '_cluster': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': None, '_evaluation_master': u'grpc://10.240.1.2:8470', '_eval_distribute': None, '_global_id_in_cluster': 0, '_master': u'grpc://10.240.1.2:8470'}
I0624 16:40:53.035886 140595823699392 tpu_context.py:202] _TPUContext: eval_on_tpu True
I0624 16:40:53.036292 140595823699392 run_squad.py:940] Input tfrecord file glob gs://question-answering/proc_data/squad/spiece.model.*.slen-512.qlen-64.train.tf_record
I0624 16:40:53.103672 140595823699392 run_squad.py:943] Find 0 input paths []
I0624 16:40:53.243366 140595823699392 tpu_system_metadata.py:59] Querying Tensorflow master (grpc://10.240.1.2:8470) for TPU system metadata.
2019-06-24 16:40:53.244997: W tensorflow/core/distributed_runtime/rpc/grpc_session.cc:354] GrpcSession::ListDevices will initialize the session with an empty graph and other defaults because the session has not yet been created.
I0624 16:40:53.250566 140595823699392 tpu_system_metadata.py:120] Found TPU system:
I0624 16:40:53.250852 140595823699392 tpu_system_metadata.py:121] *** Num TPU Cores: 8
I0624 16:40:53.251368 140595823699392 tpu_system_metadata.py:122] *** Num TPU Workers: 1
I0624 16:40:53.251487 140595823699392 tpu_system_metadata.py:124] *** Num TPU Cores Per Worker: 8
I0624 16:40:53.251578 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 13676165870058292740)
I0624 16:40:53.251995 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 18431886415160989968)
I0624 16:40:53.252130 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 1709911759425913454)
I0624 16:40:53.252240 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 10844450437283158931)
I0624 16:40:53.252331 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 6304466678072412335)
I0624 16:40:53.252414 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 1347834186282897648)
I0624 16:40:53.252512 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 2010934665306124677)
I0624 16:40:53.252598 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 1558411301377583255)
I0624 16:40:53.252691 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 15582409736436553171)
I0624 16:40:53.252773 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 13427578911967334923)
I0624 16:40:53.252856 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 17740777277430650014)
W0624 16:40:53.257469 140595823699392 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
I0624 16:40:53.268704 140595823699392 estimator.py:1111] Calling model_fn.
W0624 16:40:53.273418 140595823699392 deprecation.py:323] From run_squad.py:1001: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.parallel_interleave(...)`.
I0624 16:40:53.275295 140595823699392 error_handling.py:70] Error recorded from training_loop: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32, device=/job:tpu_worker/task:0/device:CPU:0)'
I0624 16:40:53.275455 140595823699392 error_handling.py:93] training_loop marked as finished
W0624 16:40:53.275588 140595823699392 error_handling.py:127] Reraising captured error
Traceback (most recent call last):
  File "run_squad.py", line 1310, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "run_squad.py", line 1209, in main
    estimator.train(input_fn=train_input_fn, max_steps=FLAGS.train_steps)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2457, in train
    rendezvous.raise_errors()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/error_handling.py", line 128, in raise_errors
    six.reraise(typ, value, traceback)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2452, in train
    saving_listeners=saving_listeners)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2251, in _call_model_fn
    config)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2547, in _model_fn
    input_holders.generate_infeed_enqueue_ops_and_dequeue_fn())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1167, in generate_infeed_enqueue_ops_and_dequeue_fn
    self._invoke_input_fn_and_record_structure())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1243, in _invoke_input_fn_and_record_structure
    self._inputs_structure_recorder, host_device, host_id))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 830, in generate_per_host_v2_enqueue_ops_fn_for_host
    inputs = _Inputs.from_input_fn(input_fn(user_context))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2423, in _input_fn
    return input_fn(**kwargs)
  File "run_squad.py", line 1001, in input_fn
    cycle_length=cycle_length))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1605, in apply
    return DatasetV1Adapter(super(DatasetV1, self).apply(transformation_func))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1127, in apply
    dataset = transformation_func(self)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/experimental/ops/interleave_ops.py", line 88, in _apply_fn
    buffer_output_elements, prefetch_input_elements)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 133, in __init__
    cycle_length, block_length)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2827, in __init__
    super(InterleaveDataset, self).__init__(input_dataset, map_func)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2798, in __init__
    map_func, self._transformation_name(), dataset=input_dataset)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2124, in __init__
    self._function.add_to_graph(ops.get_default_graph())
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 490, in add_to_graph
    self._create_definition_if_needed()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 341, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 355, in _create_definition_if_needed_impl
    whitelisted_stateful_ops=self._whitelisted_stateful_ops)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 883, in func_graph_from_py_func
    outputs = func(*func_graph.inputs)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2099, in tf_data_structured_function_wrapper
    ret = func(*nested_args)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 247, in __init__
    filenames, compression_type, buffer_size, num_parallel_reads)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 199, in __init__
    filenames = ops.convert_to_tensor(filenames, dtype=dtypes.string)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1039, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1097, in convert_to_tensor_v2
    as_ref=False)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1175, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 977, in _TensorTensorConversionFunction
    (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32, device=/job:tpu_worker/task:0/device:CPU:0)'
``` | closed | 2019-06-24T16:44:04Z | 2019-06-24T17:31:25Z | https://github.com/zihangdai/xlnet/issues/45 | [] | rakshanda22 | 2 |
pydantic/pydantic-ai | pydantic | 968 | pydantic-ai graph run_stream() | A noob question, but I just tried pydantic_ai's graph and I don't see a run_stream(); there is only run or run_sync... To get a streaming response, will making the agents stream their responses and using the graph's run() work? I am curious because in langgraph they explicitly mention graph streaming as a feature. | closed | 2025-02-22T21:15:43Z | 2025-03-02T22:18:35Z | https://github.com/pydantic/pydantic-ai/issues/968 | [] | livehop | 3 |
giotto-ai/giotto-tda | scikit-learn | 107 | Add homology_dimensions to diagrams transformers |
#### Description
In the current implementation of `diagrams.Scaler`, `Amplitude` and `PersistenceEntropy`, homology dimensions that don't appear in `fit` will not be considered in `transform`.
This can lead to unexpected results.
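A minimal stand-in sketch of that behaviour (not the actual giotto-tda implementation; real diagrams are numpy arrays, and the tuples below just play the role of (birth, death, homology_dimension) points):

```python
# Stand-in illustrating the fit/transform mismatch: dimensions are learned in
# fit, and points in any other homology dimension are silently dropped later.
class DimAwareScaler:
    def fit(self, points):
        self.homology_dimensions_ = sorted({dim for _, _, dim in points})
        return self

    def transform(self, points):
        return [p for p in points if p[2] in self.homology_dimensions_]

scaler = DimAwareScaler().fit([(0.0, 1.0, 0)])          # only H0 seen at fit time
out = scaler.transform([(0.0, 1.0, 0), (0.2, 0.9, 1)])  # the H1 point vanishes
```

An explicit `homology_dimensions` parameter would let the second call either keep the H1 point or fail loudly instead.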
#### Possible improvements
- Documentation
- Add `homology_dimensions` parameter (as done for `Filtering`) to transformers that might ignore some dimensions in transform
PS: Maybe I'm the only one who thinks that it's not clear enough | open | 2019-12-10T17:36:33Z | 2022-02-02T13:38:59Z | https://github.com/giotto-ai/giotto-tda/issues/107 | [
"documentation",
"enhancement"
] | nphilou | 1 |
encode/databases | asyncio | 115 | How to specify SQLite database fullpath as database URL | When I specify the absolute path of a SQLite database file as the database URL, as below, aiosqlite cannot open the database file (on macOS).
When I use a relative path like `sqlite:///example.db`, it works fine.
How could I specify the absolute path?
```
database = databases.Database('sqlite:////Users/otsuka/path/to/example.db')
await database.connect()
await database.fetch_all('select * from users')
.venv/lib/python3.7/site-packages/aiosqlite/core.py in connector()
300 loc = str(database)
301
--> 302 return sqlite3.connect(loc, **kwargs)
303
304 return Connection(connector, loop)
OperationalError: unable to open database file
```
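One thing worth ruling out locally: `sqlite3` raises the same `unable to open database file` error when the parent directory does not exist or is not writable. A stand-alone POSIX sketch (stdlib only, not using the `databases` package) showing that an absolute path itself is fine, and what the resulting four-slash URL looks like:

```python
import os
import sqlite3
import tempfile

# An absolute path opens fine with the stdlib driver as long as the
# parent directory exists and is writable.
db_path = os.path.join(tempfile.mkdtemp(), "example.db")
sqlite3.connect(db_path).close()
assert os.path.isfile(db_path)

# "sqlite:///" plus a path that starts with "/" gives four slashes in total,
# which is the expected URL form for absolute paths.
url = "sqlite:///" + db_path
assert url.startswith("sqlite:////")
```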
| closed | 2019-06-26T08:59:59Z | 2019-07-16T15:20:01Z | https://github.com/encode/databases/issues/115 | [
"bug"
] | otsuka | 4 |
jupyter-book/jupyter-book | jupyter | 1,695 | Error in deploy to netlify: Could not import extension myst_nb (exception: cannot import name 'AttrDict' from 'markdown_it.utils' | Hi there:
I have been trying to deploy my Jupyter-Book for the last two days with no success. I do not know what I am doing wrong, so I have decided to ask. Thanks in advance!!
The deploys before the one I tried two days ago were all successful. (https://tfe-2021-2022.netlify.app)
In the first deploy try two days ago, I had a problem: I was using Jupyter-Book v0.12.1 and Sphinx v4.4.0
I thought that the problem could be related to new versions of Jupyter-Book and/or Sphinx.
I upgraded to Sphinx v4.5.0, and I was issued the following error, from Deploy Details in Netlify
>Running Jupyter-Book v0.12.1
>Source Folder: /opt/build/repo
>Config Path: /opt/build/repo/_config.yml
>Output Path: /opt/build/repo/_build/html
>Running Sphinx v4.5.0
>Extension error:
>**Could not import extension myst_nb (exception: cannot import name 'AttrDict' from 'markdown_it.utils' (/opt/buildhome/python3.8/lib/python3.8/site-packages/markdown_it/utils.py))**
I installed myst-nb (which I already had, for sure) *via* conda: since then, myst v0.15.2.
The same error.
I googled, and found how to install markdown-it, as it appeared to me that perhaps the problem was there, from the exception raised.
I (re-)installed it: npm install markdown-it --save
The same error.
I upgraded Jupyter-Book, *via* conda: I had all requirements satisfied.
The same error.
I upgraded Jupyter-Book *via* pip: this time, upgraded to v0.12.3
The same error.
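One thing worth noting: the npm `markdown-it` package is unrelated here, because the failing import comes from the Python package `markdown-it-py`, whose newer releases removed `AttrDict` while older `myst-parser` versions still import it. A stdlib snippet to print the versions actually installed (package names taken from the traceback):

```python
from importlib import metadata

def installed_version(pkg: str) -> str:
    """Version of *pkg* in the current environment, or 'not installed'."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return "not installed"

for pkg in ("markdown-it-py", "myst-parser", "myst-nb", "jupyter-book"):
    print(pkg, installed_version(pkg))
```

If the printed `markdown-it-py` version is newer than what the installed `myst-parser` expects, pinning one or upgrading the other would be the thing to try.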
This is the complete error message I get, time and again these last two days, as mentioned, irrespective of the changes in the versions of the programs mentioned above:
>9:02:02 AM: Running Jupyter-Book v0.12.3
>9:02:02 AM: Source Folder: /opt/build/repo
>9:02:02 AM: Config Path: /opt/build/repo/_config.yml
>9:02:02 AM: Output Path: /opt/build/repo/_build/html
>9:02:02 AM: Running Sphinx v4.5.0
>9:02:02 AM: Extension error:
>9:02:02 AM: Could not import extension myst_nb (exception: cannot import name 'AttrDict' from 'markdown_it.utils' (/opt/buildhome/python3.8/lib/python3.8/site-packages/markdown_it/utils.py))
>9:02:02 AM: Traceback (most recent call last):
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/sphinx/registry.py", line 425, in load_extension
>9:02:02 AM: mod = import_module(extname)
>9:02:02 AM: File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
>9:02:02 AM: return _bootstrap._gcd_import(name[level:], package, level)
>9:02:02 AM: File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
>9:02:02 AM: File "<frozen importlib._bootstrap>", line 991, in _find_and_load
>9:02:02 AM: File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
>9:02:02 AM: File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
>9:02:02 AM: File "<frozen importlib._bootstrap_external>", line 848, in exec_module
>9:02:02 AM: File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/myst_nb/__init__.py", line 35, in <module>
>9:02:02 AM: from .parser import NotebookParser
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/myst_nb/parser.py", line 13, in <module>
>9:02:02 AM: from myst_parser.sphinx_parser import MystParser
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/myst_parser/sphinx_parser.py", line 9, in <module>
>9:02:02 AM: from markdown_it.utils import AttrDict
>9:02:02 AM: ImportError: cannot import name 'AttrDict' from 'markdown_it.utils' (/opt/buildhome/python3.8/lib/python3.8/site-packages/markdown_it/utils.py)
>9:02:02 AM: The above exception was the direct cause of the following exception:
>9:02:02 AM: Traceback (most recent call last):
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/jupyter_book/sphinx.py", line 114, in build_sphinx
>9:02:02 AM: app = Sphinx(
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/sphinx/application.py", line 223, in __init__
>9:02:02 AM: self.setup_extension(extension)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/sphinx/application.py", line 380, in setup_extension
>9:02:02 AM: self.registry.load_extension(self, extname)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/sphinx/registry.py", line 428, in load_extension
>9:02:02 AM: raise ExtensionError(__('Could not import extension %s') % extname,
>9:02:02 AM: sphinx.errors.ExtensionError: Could not import extension myst_nb (exception: cannot import name 'AttrDict' from 'markdown_it.utils' (/opt/buildhome/python3.8/lib/python3.8/site-packages/markdown_it/utils.py))
>9:02:02 AM: The above exception was the direct cause of the following exception:
>9:02:02 AM: Traceback (most recent call last):
>9:02:02 AM: File "/opt/buildhome/python3.8/bin/jupyter-book", line 8, in <module>
>9:02:02 AM: sys.exit(main())
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/click/core.py", line 829, in __call__
>9:02:02 AM: return self.main(*args, **kwargs)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/click/core.py", line 782, in main
>9:02:02 AM: rv = self.invoke(ctx)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
>9:02:02 AM: return _process_result(sub_ctx.command.invoke(sub_ctx))
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
>9:02:02 AM: return ctx.invoke(self.callback, **ctx.params)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/click/core.py", line 610, in invoke
>9:02:02 AM: return callback(*args, **kwargs)
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/jupyter_book/cli/main.py", line 323, in build
>9:02:02 AM: builder_specific_actions(
>9:02:02 AM: File "/opt/buildhome/python3.8/lib/python3.8/site-packages/jupyter_book/cli/main.py", line 535, in builder_specific_actions
>9:02:02 AM: raise RuntimeError(_message_box(msg, color="red", doprint=False)) from result
>9:02:02 AM: RuntimeError:
>9:02:02 AM: ===============================================================================
>9:02:02 AM: There was an error in building your book. Look above for the cause.
>9:02:02 AM: ===============================================================================
>9:02:03 AM:
>9:02:03 AM: ────────────────────────────────────────────────────────────────
>9:02:03 AM: "build.command" failed
>9:02:03 AM: ────────────────────────────────────────────────────────────────
I cannot guess what I am doing wrong, so I would very much appreciate some help or guidance.
Thank you very much!! | closed | 2022-04-09T07:29:57Z | 2022-04-09T08:32:09Z | https://github.com/jupyter-book/jupyter-book/issues/1695 | [] | jmigartua | 3 |
electricitymaps/electricitymaps-contrib | data-visualization | 7,671 | misunderstanding of the data displayed | ## Description
Displayed data under (1) Selection under (3) in the German version
The user interface should be clear. Too many interpret the display below 3 as the displayed data

| closed | 2025-01-01T19:28:51Z | 2025-01-03T11:44:42Z | https://github.com/electricitymaps/electricitymaps-contrib/issues/7671 | [] | mazie-78 | 1 |
mitmproxy/mitmproxy | python | 6,371 | mitmdump memory usage is constantly growing | #### Problem Description
Hello. I'm using mitmdump for a websocket connection that sends a large, continuous stream of text data. RAM consumption increases steadily over time; it is now more than 1 gigabyte and keeps growing until I kill the process. I use the precompiled binary version for Linux.
I only use "map_local" for a single URL. The websocket traffic is a side connection that I cannot route around mitmproxy, since the target application does not allow me to do so. I don't need to save any connection history.
Launch parameters:
./mitmdump -q --set map_local="|site.com|./map/site.txt" --set block_global="false" --flow-detail 0
#### System Information
Mitmproxy: 10.0.0 binary
Python: 3.11.4
OpenSSL: OpenSSL 3.1.2 1 Aug 2023
Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
| closed | 2023-09-18T17:54:58Z | 2023-10-31T12:38:01Z | https://github.com/mitmproxy/mitmproxy/issues/6371 | [
"kind/triage"
] | k0xxxx | 6 |
modelscope/data-juicer | streamlit | 80 | pip install py-data-juicer fails to install | ### Before Asking
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully.
- [X] I have pulled the latest code of the main branch to run again and the problem still existed.
### Search before asking
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar questions.
### Question
`pip install py-data-juicer` fails to install.
```
Cython.Compiler.Errors.CompileError: simhash/simhash.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for simhash-py
Running setup.py clean for simhash-py
Failed to build kenlm simhash-py
ERROR: Could not build wheels for kenlm, simhash-py, which is required to install pyproject.toml-based projects
```
See [the full error log](https://github.com/UoBzhfh/my_data_juicer_issues/blob/main/errors.txt) for the complete output.
### Additional
- Windows 11 system, with gcc, cmake, and Visual Studio 2022 MSVC 140/143 build tools installed. I am not sure whether it is the gcc/cmake versions being wrong that causes the kenlm and simhash-py errors?
- gcc (x86_64-win32-seh-rev3, Built by MinGW-W64 project) 12.1.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- cmake version 3.28.0-rc5
| closed | 2023-11-16T05:07:12Z | 2023-11-18T05:14:33Z | https://github.com/modelscope/data-juicer/issues/80 | [
"question"
] | UoBzhfh | 2 |
horovod/horovod | deep-learning | 3,642 | Tensorflow: Support int8 and uint8 allreduce | **Is your feature request related to a problem? Please describe.**
Currently, `HorovodAllreduce` tensorflow op is registered to support allreduce on `int32, int64, float16, float32, float64` types but not on `int8` even though the backends support it (ex: NCCL, MPI).
**Describe the solution you'd like**
I plan to add `int8` to the supported types of `HorovodAllreduce`.
**Describe alternatives you've considered**
An alternative approach would be to use `allgather` on `int8` tensors and then do the addition/averaging in the user code, which can be avoided with the existing tooling that we have.
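For illustration, the allgather-then-average alternative boils down to the following plain-Python stand-in (toy lists in place of `hvd.allgather` and real int8 tensors; a real implementation would accumulate in a wider dtype to avoid int8 overflow):

```python
# Payloads from 3 "workers"; each list stands in for one worker's int8 tensor.
workers = [[1, -2, 3], [4, 5, -6], [7, -8, 9]]
gathered = list(workers)                        # what an allgather would hand back
summed = [sum(col) for col in zip(*gathered)]   # elementwise sum across workers
averaged = [s // len(workers) for s in summed]  # integer mean (floor division)
print(summed, averaged)  # -> [12, -5, 6] [4, -2, 2]
```

Doing this natively inside `HorovodAllreduce` avoids both the extra user code and the allgather's larger communication volume.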
**Additional context**
With this in place, I believe we can also add an `Int8Compressor` which is extreme but people have used `1bit` compressors in the literature so `8bit` compression can definitely help.
cc: @romerojosh @chongxiaoc @maxhgerlach @tgaddair | closed | 2022-08-10T23:06:58Z | 2022-08-17T19:07:57Z | https://github.com/horovod/horovod/issues/3642 | [
"enhancement"
] | kvignesh1420 | 10 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 257 | where is the help documentation of get_all_embeddings | When I run the MNIST example provided with the package, I found
```
def get_all_embeddings(dataset, model):
    tester = testers.BaseTester()
    return tester.get_all_embeddings(dataset, model)
```
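As an aside, a general way to enumerate the public methods of a class like this with only the standard library (`BaseTesterStandIn` below is a made-up stand-in, not the real `testers.BaseTester`):

```python
import inspect

class BaseTesterStandIn:
    def get_all_embeddings(self, dataset, model): ...
    def compute_all_accuracies(self): ...
    def _private_helper(self): ...

# Collect every function defined on the class whose name is not private.
public_methods = [
    name for name, member in inspect.getmembers(BaseTesterStandIn, inspect.isfunction)
    if not name.startswith("_")
]
print(public_methods)  # -> ['compute_all_accuracies', 'get_all_embeddings']
```

Running `help(testers.BaseTester)` in a REPL shows the same list together with any available docstrings.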
I want to know all the methods provided by the class testers.BaseTester(); the documentation provided online only covers the properties of the BaseTester class, but I can't find the help documentation for its methods (for example, the function get_all_embeddings). | closed | 2021-01-05T06:00:07Z | 2022-08-14T23:30:52Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/257 | [
"documentation"
] | yuxiaokang-source | 5 |
PhantomInsights/baby-names-analysis | matplotlib | 3 | These are not baby names | Social security started in 1935. That means those born in 1880 self-reported their names at 55 years old. This makes the database tremendously biased towards those rich enough to survive to 55 years old. It is also tremendously sex-biased, as only widows of professionals were eligible at first. Working black women were not eligible for social security until the 1960s.
None of this is your fault, it is the Social Security Administration's fault for calling this dataset a "baby names dataset", when the first babies were rich babies born in 1935. Of course, a little investigating on your part would show many, many anomalies in the data until about the 1970s. Look at the ratio of male to female "babies" over time; it's pretty constant for human births. | closed | 2019-07-25T20:52:34Z | 2019-07-26T01:22:13Z | https://github.com/PhantomInsights/baby-names-analysis/issues/3 | [] | Prooffreader | 1 |
dpgaspar/Flask-AppBuilder | flask | 1,830 | Feature: Add support for read-only database views without primary key column | Databases like Postgres provide SQL Views which can be read-only.
While these can be represented within a Flask App, they take quite a lot of extra work compared with adding a CRUD table, mainly to remove all the editing functionality that comes with the default CRUD Model and ModelView base classes.
I would like to see a ReadOnlyModel base class (or similar) that can be linked to a View within a database and which doesn't enforce a primary key column or provide any data editing functionality.
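As a stopgap, FAB's documented `base_permissions` hook already lets a view drop its editing surface; below is a tiny stand-in sketch of the proposed base class (the classes here are stand-ins, not actual Flask-AppBuilder code):

```python
class ModelView:  # stand-in for flask_appbuilder.ModelView and its default CRUD surface
    base_permissions = ["can_list", "can_show", "can_add", "can_edit", "can_delete"]

class ReadOnlyModelView(ModelView):
    """Proposed base class: list/show only, no primary-key requirement, no editing."""
    base_permissions = ["can_list", "can_show"]
```

The remaining piece, not enforcing a primary key on the mapped SQL view, is what the permission hook alone cannot express today.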
| open | 2022-04-09T11:01:07Z | 2022-05-03T09:16:21Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1830 | [
"enhancement"
] | rob-hills | 1 |
harry0703/MoneyPrinterTurbo | automation | 522 | The selected voice language matches the script language, but the following error is reported. | 2024-11-13 10:01:39.427 | ERROR | app.services.task:generate_audio:85 - failed to generate audio:
1. check if the language of the voice matches the language of the video script.
2. check if the network is available. If you are in China, it is recommended to use a VPN and enable the global traffic mode.
2024-11-13 10:01:39.431 | ERROR | __main__:<module>:790 - 视频生成失败 | open | 2024-11-13T02:06:29Z | 2024-12-23T09:47:18Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/522 | [] | xuwu-001 | 9 |
davidsandberg/facenet | computer-vision | 701 | the pretrained model works badly for testing Chinese face images | I am using the pretrained model to test Chinese face verification, but the results are bad! I do not know whether the bad results come from the training data or not. | open | 2018-04-16T03:00:13Z | 2018-05-12T11:55:58Z | https://github.com/davidsandberg/facenet/issues/701 | [] | piaohe111 | 1 |
keras-team/keras | machine-learning | 20,556 | How to enable Flash-Attn in the PyTorch backend. | The 3.7.0 release documentation states that Flash Attention can optionally be used with the PyTorch backend. I now want to call the BERT model from keras_hub. How do I enable Flash Attention? | closed | 2024-11-27T14:54:10Z | 2024-11-29T13:14:22Z | https://github.com/keras-team/keras/issues/20556 | [
"type:support"
] | pass-lin | 3 |
keras-team/keras | data-science | 20,713 | Huge difference in training with Pytorch backend | I ran the following simple MNIST training code with different backends: TensorFlow, PyTorch and JAX. I get similar results with TensorFlow and JAX (between 98 and 99% test accuracy) but much lower results with PyTorch: below 90%.
```python
import os
from time import time
os.environ["KERAS_BACKEND"] = "jax"
# os.environ["KERAS_BACKEND"] = "torch"
# os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
import keras
from keras import layers, Model
print(f"Keras version: {keras.__version__}")
# Load MNIST dataset
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(f"x_train shape: {x_train.shape}")
print(f"y_train shape: {y_train.shape}")
# Preprocess data
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
x_train = x_train[..., None] # Add channel dimension
x_test = x_test[..., None]
# Define the model using Functional API
inputs = keras.Input(shape=(28, 28, 1), name="input_layer")
x = layers.Conv2D(32, kernel_size=(3, 3), activation="relu", name="conv_1")(inputs)
x = layers.MaxPooling2D(pool_size=(2, 2), name="pool_1")(x)
x = layers.Conv2D(64, kernel_size=(3, 3), activation="relu", name="conv_2")(x)
x = layers.MaxPooling2D(pool_size=(2, 2), name="pool_2")(x)
x = layers.Flatten(name="flatten")(x)
x = layers.Dense(128, activation="relu", name="dense_1")(x)
x = layers.Dropout(0.2, name="dropout_1")(x)
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)
model = Model(inputs=inputs, outputs=outputs, name="mnist_cnn_model")
model.summary()
# Compile the model
model.compile(optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"])
begin_time = time()
# Train the model
model.fit(x_train, y_train, epochs=3, batch_size=32, validation_split=0.2)
end_time = time()
print(f"Time taken: {end_time - begin_time:.2f}s")
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"Test accuracy: {test_acc:.4f}")
```
Note: I have to set PYTORCH_ENABLE_MPS_FALLBACK to 1 for the code to run with PyTorch (I am on macOS). However, it is around 6 times slower than with TensorFlow or JAX.
Are there differences in the PyTorch layers or in the optimizer (default parameters, ...) that could explain this difference in accuracy? How can I fix the code above so that it gives the same results with PyTorch?
"stat:awaiting response from contributor",
"type:performance"
] | invoxiaglo | 4 |
jofpin/trape | flask | 132 | Google Maps error | I keep getting
`Directions request failed due to REQUEST_DENIED`
any ideas? | open | 2019-02-08T01:42:57Z | 2020-08-18T21:06:15Z | https://github.com/jofpin/trape/issues/132 | [] | OstojaOfficial | 3 |
streamlit/streamlit | streamlit | 9,967 | Dev: make clean is cleaning "e2e_playwright/.streamlit/secrets.toml" even though it contains only a fake secret value | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Running `make clean` on a fresh `develop` branch cleans a `secrets.toml` test file. It looks like `make clean` removes all found `.streamlit` directories, but `e2e_playwright/.streamlit/secrets.toml` is a tracked file in git which makes me think we don't want to be cleaning it up.

### Reproducible Code Example
_No response_
### Steps To Reproduce
1. Check out `develop` branch
1. run `make clean`
### Expected Behavior
We shouldn't clean a file needed for testing
### Current Behavior
We do clean a file needed for testing
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.0
- Python version: N/A
- Operating System: Ubuntu
- Browser: N/A
### Additional Information
_No response_ | closed | 2024-12-05T04:27:13Z | 2024-12-05T14:22:55Z | https://github.com/streamlit/streamlit/issues/9967 | [
"type:bug",
"status:confirmed",
"priority:P3"
] | Asaurus1 | 2 |
lukas-blecher/LaTeX-OCR | pytorch | 6 | Use model on Android? | Hi! Your model is working great on PC, but is it possible to use it on an Android device?
As far as I know, the model has to be converted to TorchScript format to work on a mobile device, but that's not enough. We also need to port the "call_model" function from the pix2tex.py script to the Android app, because the model requires a specific image resize to work. How can we do that? Thank you :) | closed | 2021-04-29T12:22:07Z | 2022-06-21T10:03:44Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/6 | [] | pavviaz | 3 |
dask/dask | pandas | 11,805 | Make repartition a no-op when divisions match | This issue seems to be a regression of https://github.com/dask/dask/pull/9924 in dask-expr.
Repartition must be a no-op when divisions match.
**Minimal Complete Verifiable Example**:
```python
import dask
import pandas as pd
from dask.dataframe import from_pandas
dd = from_pandas(pd.Series([1., 2., 3.]), npartitions=2)
dd2 = dd.repartition(dd.divisions)
assert dd2 is dd
```
**Environment**:
- Dask version: 2025.2.0
- Python version: 3.12.7
- Operating System: RHEL 8
- Install method (conda, pip, source): pip
| closed | 2025-03-04T07:56:56Z | 2025-03-05T11:00:56Z | https://github.com/dask/dask/issues/11805 | [
"needs triage"
] | faulaire | 1 |
unit8co/darts | data-science | 2,399 | [BUG] Distributed prediction crash | **Describe the bug**
Torch model crashes on prediction with distributed strategy
**To Reproduce**
- add distributed strategy to trainer params
- call predict
**Expected behavior**
No crash
**System (please complete the following information):**
- Python version: 3.11
- darts version 0.29.0
**Additional context**
The problem is in this [line](https://github.com/unit8co/darts/blob/a4ed8b1cd36ccb8c214669625de5b74469ecc624/darts/models/forecasting/torch_forecasting_model.py#L1521).
Lightning does not return results directly in distributed mode. Rather, a subclass of [lightning.pytorch.callbacks.BasePredictionWriter](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.BasePredictionWriter.html#lightning.pytorch.callbacks.BasePredictionWriter) should be used.
| closed | 2024-06-03T18:22:47Z | 2024-06-04T09:17:01Z | https://github.com/unit8co/darts/issues/2399 | [
"bug",
"triage"
] | BohdanBilonoh | 2 |
deepset-ai/haystack | pytorch | 8,681 | Haystack should not configure root logger handlers | **Describe the bug**
Any application that imports this library cannot expect their own configuration of the Python root logger to be respected, because this library adds to the root logger's list of handlers.
This issue occurred previously in https://github.com/deepset-ai/haystack/issues/2485 and https://github.com/deepset-ai/haystack/issues/4202
**Expected behavior**
An application using this library should be able to `import haystack` and then use `logging.basicConfig()` as normal.
**Additional context**
[This issue was introduced here](https://github.com/deepset-ai/haystack/commit/2a591280ab43aba52bfd5cf61c2b0056c5655b98#diff-6de31bc13ff57e52637aeb2c3c8946b8244ae6426f5a0940a2dbf4ff331b3214R89-R97)
This is an issue because [`logging.basicConfig()` is ignored once any handlers are configured](https://docs.python.org/3/library/logging.html#logging.basicConfig). At a bare minimum, it is reasonable to expect all libraries make no modifications to the root handler. The quickest fix is to edit line 89 so as to only add the handler onto the subloggers that will be used throughout the library:
```python
haystack_logger = logging.getLogger("haystack") # only add our handler within our library's hierarchy
# avoid adding our handler twice
old_handlers = [
h for h in haystack_logger.handlers
if (isinstance(h, logging.StreamHandler) and h.name == "HaystackLoggingHandler")
]
for old_handler in old_handlers:
haystack_logger.removeHandler(old_handler)
haystack_logger.addHandler(handler)
# or more succinctly, only add if not already present
# if not old_handlers:
# haystack_logger.addHandler(handler)
```
However, it is also generally expected that the application and not the library is the arbiter of all log handlers, [as recommended in the python docs' Logging Cookbook](https://docs.python.org/3.12/howto/logging-cookbook.html#adding-handlers-other-than-nullhandler-to-a-logger-in-a-library). This would mean it is unusual for any library to implicitly add a log handler -- it is the application developer who knows best what log formats they need.
I agree that providing recommended overrides can be very convenient; one route would be to export a factory for the provided handler so that the consuming application can easily opt-in to this feature:
```python
from haystack.logging import configure_logging_handler # function to create the HaystackLoggingHandler
logging.getLogger().addHandler(configure_logging_handler()) # app dev can choose to add at the root, at the haystack level, or not at all
```
Quick blog post summary of developer expectations on this topic: http://rednafi.com/python/no_hijack_root_logger/
**To Reproduce**
Minimal repro:
```python
from haystack import Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
import logging
import pandas as pd
logging.basicConfig(level=logging.CRITICAL)
document_store = InMemoryDocumentStore()
document_store.write_documents([
# still prints a warning, because of the logging.getLogger().root changes within haystack
Document(
content="My name is Jean and I live in Paris.",
dataframe=pd.DataFrame({"name": ["Jean"], "city": ["Paris"]}),
),
])
```
**FAQ Check**
- [X] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Ubuntu 22.04
- GPU/CPU: N/A
- Haystack version (commit or version number): 2.7.0 in my testing, up to present
- DocumentStore: N/A
- Reader: N/A
- Retriever: N/A
| open | 2025-01-03T21:17:31Z | 2025-01-06T08:47:40Z | https://github.com/deepset-ai/haystack/issues/8681 | [
"P2"
] | CSRessel | 0 |
hyperspy/hyperspy | data-visualization | 2,552 | Hyperspy 1.6.1 next_minor and next_patch branch Pytest failed with error: unrecognized arguments: -n --dist loadfile | Hello,
I was just testing the upcoming 1.6.1 because I want to package it for openSUSE Tumbleweed.
I am aware that 1.6.1 is not released yet, but I just want to check that it comes out smoothly.
I built hyperspy 1.6.1 from the latest snapshots of the next_minor and next_patch branches and found the pytest error shown below.
This error occurred on both the next_minor and next_patch branches.
```
[ 83s] + pytest-3.8 --ignore=_build.python2 --ignore=_build.python3 --ignore=_build.pypy3 -v
[ 84s] ERROR: usage: pytest-3.8 [options] [file_or_dir] [file_or_dir] [...]
[ 84s] pytest-3.8: error: unrecognized arguments: -n --dist loadfile
```
Has this error been reported before? If so, I don't have anything to add.
Full build log: https://build.opensuse.org/package/live_build_log/home:andythe_great/python-hyperspy/openSUSE_Tumbleweed/x86_64
All Python dependencies installed
```
[ 15s] [253/428] cumulate python3-PTable-0.9.2-3.4
[ 15s] [254/428] cumulate python3-appdirs-1.4.4-2.1
[ 15s] [255/428] cumulate python3-asn1crypto-1.4.0-1.1
[ 15s] [256/428] cumulate python3-attrs-19.3.0-2.2
[ 15s] [257/428] cumulate python3-backcall-0.2.0-1.2
[ 15s] [258/428] cumulate python3-decorator-4.4.2-1.3
[ 15s] [259/428] cumulate python3-entrypoints-0.3-2.3
[ 15s] [260/428] cumulate python3-idna-2.10-1.3
[ 15s] [261/428] cumulate python3-iniconfig-1.0.1-1.1
[ 15s] [262/428] cumulate python3-more-itertools-8.4.0-1.2
[ 15s] [263/428] cumulate python3-olefile-0.46-2.4
[ 15s] [264/428] cumulate python3-ordered-set-3.1.1-4.2
[ 15s] [265/428] cumulate python3-parso-0.7.0-1.4
[ 15s] [266/428] cumulate python3-pickleshare-0.7.5-3.3
[ 15s] [267/428] cumulate python3-pluggy-0.13.1-1.4
[ 15s] [268/428] cumulate python3-ptyprocess-0.6.0-3.3
[ 15s] [269/428] cumulate python3-py-1.9.0-1.3
[ 15s] [270/428] cumulate python3-pyasn1-0.4.8-1.5
[ 15s] [271/428] cumulate python3-pybind11-2.5.0-2.2
[ 15s] [272/428] cumulate python3-pycparser-2.20-1.5
[ 15s] [273/428] cumulate python3-pyparsing-2.4.7-2.2
[ 15s] [274/428] cumulate python3-simplegeneric-0.8.1-9.8
[ 15s] [275/428] cumulate python3-threadpoolctl-2.1.0-1.2
[ 15s] [276/428] cumulate python3-toml-0.10.1-2.1
[ 15s] [277/428] cumulate python3-toolz-0.10.0-1.4
[ 15s] [278/428] cumulate python3-wcwidth-0.2.5-1.2
[ 15s] [287/428] cumulate python3-pytz-2020.1-1.2
[ 15s] [288/428] cumulate python3-simplejson-3.17.2-1.2
[ 15s] [289/428] cumulate python3-six-1.15.0-1.2
[ 15s] [293/428] cumulate python3-PyYAML-5.3.1-1.6
[ 15s] [294/428] cumulate python3-chardet-3.0.4-8.6
[ 15s] [295/428] cumulate python3-dill-0.3.1.1-1.6
[ 15s] [296/428] cumulate python3-future-0.18.2-1.6
[ 15s] [297/428] cumulate python3-gmpy-1.17-2.13
[ 15s] [298/428] cumulate python3-kiwisolver-1.2.0-1.3
[ 15s] [299/428] cumulate python3-llvmlite-0.32.0-1.3
[ 15s] [300/428] cumulate python3-numpy-1.19.1-1.1
[ 15s] [301/428] cumulate python3-tqdm-4.48.2-1.1
[ 15s] [310/428] cumulate python3-Cycler-0.10.0-4.4
[ 15s] [311/428] cumulate python3-ipython_genutils-0.2.0-1.10
[ 15s] [312/428] cumulate python3-jedi-0.17.2-2.1
[ 15s] [313/428] cumulate python3-networkx-2.5-1.1
[ 15s] [314/428] cumulate python3-pexpect-4.8.0-2.3
[ 15s] [315/428] cumulate python3-prompt_toolkit-3.0.5-1.3
[ 15s] [316/428] cumulate python3-python-dateutil-2.8.1-1.4
[ 15s] [317/428] cumulate python3-uncertainties-3.1.4-2.2
[ 15s] [321/428] cumulate python3-PyWavelets-1.1.1-1.6
[ 15s] [322/428] cumulate python3-blosc-1.9.1-3.2
[ 15s] [323/428] cumulate python3-cytoolz-0.10.1-2.5
[ 15s] [324/428] cumulate python3-mpmath-1.1.0-1.8
[ 15s] [325/428] cumulate python3-packaging-20.4-1.2
[ 15s] [326/428] cumulate python3-patsy-0.5.1-2.1
[ 15s] [327/428] cumulate python3-pyrsistent-0.16.0-1.2
[ 15s] [328/428] cumulate python3-traits-6.1.0-1.2
[ 15s] [330/428] cumulate python3-cffi-1.14.2-1.1
[ 15s] [331/428] cumulate python3-psutil-5.7.0-2.1
[ 15s] [332/428] cumulate python3-pyzmq-19.0.2-3.1
[ 15s] [333/428] cumulate python3-talloc-2.3.1-1.4
[ 15s] [334/428] cumulate python3-ldb-2.1.4-1.3
[ 15s] [335/428] cumulate python3-numexpr-2.7.1-1.6
[ 15s] [336/428] cumulate python3-tornado6-6.0.4-2.5
[ 15s] [337/428] cumulate python38-devel-3.8.5-2.1
[ 15s] [338/428] cumulate python3-h5py-2.10.0-2.4
[ 15s] [341/428] cumulate python3-scipy-1.5.2-1.2
[ 15s] [343/428] cumulate python3-Pillow-7.2.0-3.1
[ 15s] [344/428] cumulate python3-imagecodecs-2020.5.30-1.1
[ 15s] [345/428] cumulate python3-tornado-6.0-13.3
[ 15s] [346/428] cumulate python3-mrcz-0.5.6-19.4
[ 15s] [347/428] cumulate python3-Pint-0.14-1.3
[ 15s] [348/428] cumulate python3-lz4-3.0.2-2.3
[ 15s] [349/428] cumulate python3-traitlets-4.3.3-2.3
[ 15s] [350/428] cumulate python3-Cython-0.29.21-1.3
[ 15s] [352/428] cumulate python3-jsonschema-3.2.0-2.5
[ 15s] [354/428] cumulate python3-setuptools-44.1.1-1.1
[ 15s] [357/428] cumulate python3-matplotlib-3.3.0-1.2
[ 15s] [358/428] cumulate python3-numba-0.49.1-1.4
[ 15s] [364/428] cumulate python3-click-7.1.2-2.1
[ 15s] [365/428] cumulate python3-joblib-0.16.0-3.1
[ 15s] [366/428] cumulate python3-natsort-7.0.1-3.2
[ 15s] [367/428] cumulate python3-Pygments-2.6.1-1.5
[ 15s] [368/428] cumulate python3-jupyter-core-4.6.3-3.3
[ 15s] [369/428] cumulate python3-sparse-0.8.0-1.5
[ 15s] [370/428] cumulate python3-dask-2.25.0-1.1
[ 15s] [371/428] cumulate python3-sympy-1.6.2-1.1
[ 15s] [372/428] cumulate python3-pandas-1.1.1-1.1
[ 15s] [373/428] cumulate python3-cryptography-3.0-1.3
[ 15s] [374/428] cumulate python3-pytest-6.0.1-1.1
[ 15s] [378/428] cumulate jupyter-jupyter-core-4.6.3-3.3
[ 15s] [379/428] cumulate python3-certifi-2020.6.20-1.1
[ 15s] [380/428] cumulate python3-dask-array-2.25.0-1.1
[ 15s] [381/428] cumulate python3-pyOpenSSL-19.1.0-1.4
[ 15s] [382/428] cumulate python3-pytest-mpl-0.11-2.2
[ 15s] [383/428] cumulate python3-tifffile-2020.5.30-30.6
[ 15s] [384/428] cumulate python3-statsmodels-0.11.1-1.5
[ 15s] [385/428] cumulate python3-jupyter-client-6.1.7-1.1
[ 15s] [386/428] cumulate python3-scikit-learn-0.23.2-1.3
[ 15s] [387/428] cumulate python3-ipython-7.18.1-1.1
[ 15s] [390/428] cumulate python3-urllib3-1.25.10-1.2
[ 15s] [391/428] cumulate python3-ipykernel-5.3.4-1.2
[ 15s] [395/428] cumulate python3-ipywidgets-7.5.1-2.2
[ 15s] [396/428] cumulate python3-requests-2.24.0-1.2
[ 15s] [397/428] cumulate python3-ipyparallel-6.3.0-1.3
[ 15s] [399/428] cumulate python3-sidpy-0.0.1-6.3
[ 15s] [402/428] cumulate python3-pyUSID-0.0.9-4.3
[ 16s] [426/428] cumulate python3-imageio-ffmpeg-0.4.1-1.2
[ 16s] [427/428] cumulate python3-imageio-2.8.0-2.3
[ 16s] [428/428] cumulate python3-scikit-image-0.17.2-1.1
``` | closed | 2020-09-18T09:46:41Z | 2020-09-19T07:36:41Z | https://github.com/hyperspy/hyperspy/issues/2552 | [] | kevinsmia1939 | 3 |
flairNLP/flair | nlp | 2,841 | Suffering to make 'tars base Korean version' | I want to make a Korean base TARS model that replaces 'tars-base-v8.pt', based on a Korean dataset and a Korean BERT pretrained model from the Hugging Face Hub. The train loss curve falls as expected, but the dev loss is diverging rather than converging. The dev score also shows a saturation pattern after the initial increase.
Can you share the code or hyperparameters you used to train 'tars-base-v8.pt'? I followed the tutorial to make a 'tars base Korean version', but I think there is something I'm missing.
https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_10_TRAINING_ZERO_SHOT_MODEL.md
## datasets
https://huggingface.co/datasets/klue
- the ynat, sts, and nli subsets are used (those properly applicable to text classification)
## parameters
2022-06-27 23:44:41,830 Parameters:
2022-06-27 23:44:41,830 - learning_rate: "0.000040"
2022-06-27 23:44:41,830 - mini_batch_size: "16"
2022-06-27 23:44:41,830 - patience: "10"
2022-06-27 23:44:41,830 - anneal_factor: "0.5"
2022-06-27 23:44:41,830 - max_epochs: "100"
2022-06-27 23:44:41,830 - shuffle: "True"
2022-06-27 23:44:41,830 - train_with_dev: "False"
2022-06-27 23:44:41,830 - batch_growth_annealing: "False"
### Train Loss

### Learning Rate
Initially 4e-5

### Dev Loss

### Dev Score

## codes
```python
class BaseMetaLearner(MetaLearner):
def __init__(self, *args, **kwargs):
super(BaseMetaLearner, self).__init__(*args, **kwargs)
self._lang = 'en'
def base_learning(
self,
embedding: str = 'klue/bert-base',
down_sample: float = 1.0,
sample_missing_splits=False,
corpus_iteration: int = 3,
):
assert not self._tars_model and 0 < down_sample <= 1
corpora = [
fetch(KlueYnatDataset, sample_missing_splits=sample_missing_splits),
fetch(KlueNliDataset, sample_missing_splits=sample_missing_splits),
fetch(KlueStsDataset, sample_missing_splits=sample_missing_splits),
# fetch(PawsXDataset, sample_missing_splits=sample_missing_splits),
# fetch(NaverSentimentMovieCommentsDataset, sample_missing_splits=sample_missing_splits),
# fetch(KoreanRestaurantReviewsDataset, sample_missing_splits=sample_missing_splits),
]
tars = TARSClassifier(
embeddings=embedding,
)
# optimizer_params
# _params = list(tars.tars_model.named_parameters())
# no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
# decay = 0.01
# params = [
# {'params': [p for n, p in _params if not any(nd in n for nd in no_decay)], 'weight_decay': decay},
# {'params': [p for n, p in _params if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
# ]
results = []
for i in range(1, corpus_iteration + 1):
for c in corpora:
if 0 < down_sample < 1.0:
c = copy(c).downsample(percentage=down_sample)
logger.info(f" start training for corpus {c.name}, {i} -- iteration")
# tensorboard log directory
log_dir = self._output_path / 'tensorboard' / f'{c.name}_{i}'
log_dir.mkdir(parents=True, exist_ok=True)
if c.name in tars.list_existing_tasks():
tars.switch_to_task(c.name)
else:
label_dict = c.make_label_dictionary(c.name)
tars.add_and_switch_to_new_task(
task_name=c.name,
label_dictionary=label_dict,
label_type=c.name,
multi_label=label_dict.multi_label,
)
# initialize the text classifier trainer with corpus
trainer = ModelTrainer(tars, c)
result = trainer.train(
base_path=self._output_path / c.name, # path to store the model artifacts
learning_rate=self._learning_rate, # use very small learning rate
# optimizer=AdamW(params, lr=self._learning_rate, weight_decay=decay),
optimizer=Adam,
mini_batch_size=self._mini_batch_size, # small mini-batch size since corpus is tiny
patience=self._patience,
max_epochs=self._max_epochs, # terminate after 10 epochs
train_with_dev=self._train_with_dev,
use_tensorboard=True,
tensorboard_log_dir=log_dir,
)
results.append(result)
self._tars_model = tars # replace with fine tuned model
logger.info(f'fine tuning completed for corpora:{[c.name for c in corpora]}, results:{results}')
return results
if __name__ == "__main__":
meta = BaseMetaLearner(
model_path=None, # base learning
max_epochs=100,
mini_batch_size=16,
mini_batch_chunk_size=4,
learning_rate=4e-5,
train_with_dev=False,
)
result = meta.base_learning(down_sample=0.1, embedding="klue/bert-base")
path = meta.save_model()
print(path)
``` | closed | 2022-06-28T02:37:36Z | 2022-11-13T08:45:15Z | https://github.com/flairNLP/flair/issues/2841 | [
"question",
"wontfix"
] | yspaik | 3 |
huggingface/datasets | numpy | 7,260 | cache can't be cleaned or disabled | ### Describe the bug
I tried the following ways, but the cache can't be disabled.
I have about 2 TB of data, but I also get more than 2 TB of cache files, which puts pressure on storage. I need to disable the cache, or have it cleaned immediately after processing. None of the following ways work; please give some help!
```python
from datasets import disable_caching
from transformers import AutoTokenizer
disable_caching()
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path)
def tokenization_fn(examples):
column_name = 'text' if 'text' in examples else 'data'
tokenized_inputs = tokenizer(
examples[column_name], return_special_tokens_mask=True, truncation=False,
max_length=tokenizer.model_max_length
)
return tokenized_inputs
data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None)
data.cleanup_cache_files()
updated_dataset = data.map(tokenization_fn, load_from_cache_file=False)
updated_dataset.cleanup_cache_files()
```
### Expected behavior
no cache file generated
### Environment info
Ubuntu 20.04.6 LTS
datasets 3.0.2 | open | 2024-10-29T03:15:28Z | 2024-12-11T09:04:52Z | https://github.com/huggingface/datasets/issues/7260 | [] | charliedream1 | 1 |
axnsan12/drf-yasg | django | 47 | Data object for request_body | Is it possible to get rid of the "data" node when the request structure is defined using the "request_body" attribute?
<img width="294" alt="screen shot 2018-01-15 at 16 04 04" src="https://user-images.githubusercontent.com/3892914/34943787-3140a254-fa0e-11e7-8be9-8d2d4b303994.png">
| closed | 2018-01-15T13:07:59Z | 2020-05-26T07:00:43Z | https://github.com/axnsan12/drf-yasg/issues/47 | [] | andrenerd | 11 |
getsentry/sentry | django | 87,188 | Display possible root cause for `Failed to fetch` issues in issue details | ### Problem Statement
To make `Failed to fetch` issues more actionable, we should more prominently hint the user at possible root causes. If the error message contains no further explanation, the request was likely blocked by an ad-blocker.
E.g. the Solutions Hub already contains the following root cause description: "Direct fetch requests to `some-blocked-domain.io` are blocked by ad blockers, causing a `TypeError: Failed to fetch` error."
### Solution Brainstorm
Special case this specific kind of issue and display a hint close to the error message itself.
### Product Area
Issues | open | 2025-03-17T16:31:27Z | 2025-03-17T16:31:27Z | https://github.com/getsentry/sentry/issues/87188 | [] | chargome | 0 |
dynaconf/dynaconf | flask | 1,146 | [RFC] Implement OnFailure recover | When a validation error happens, a user may want to avoid raising an exception and instead just log it and keep the program working with defaults.
```
field: Annotated[str, OnFailure(log=logger,quiet=True, take_default=True)] = "Foo"
```
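A rough sketch of what such a marker could look like (all names here are hypothetical, not existing Dynaconf API):

```python
import logging
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class OnFailure:
    """Marker describing how to recover from a validation error."""
    log: Optional[logging.Logger] = None
    quiet: bool = False
    take_default: bool = False

    def handle(self, error: Exception, default):
        # Log instead of raising, and fall back to the declared default.
        if self.log is not None:
            self.log.warning("validation failed: %s", error)
        if self.quiet and self.take_default:
            return default
        raise error
```

Dynaconf's validator would call `handle()` when validation fails, so `quiet=True, take_default=True` yields the default instead of an exception.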
If the field is configured as `123`, instead of raising a validation error, this would log the validation error and set the default value instead. | open | 2024-07-07T14:41:59Z | 2024-07-08T18:38:23Z | https://github.com/dynaconf/dynaconf/issues/1146 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 0 |
Lightning-AI/LitServe | api | 362 | Support additional content types in post requests | <!--
⚠️ BEFORE SUBMITTING, READ:
We're excited for your request! However, here are things we are not interested in:
- Decorators.
- Doing the same thing in multiple ways.
- Adding more layers of abstraction... tree-depth should be 1 at most.
- Features that over-engineer or complicate the code internals.
- Linters, and crud that complicates projects.
-->
----
## 🚀 Feature
Add support for additional `content-type` request headers, such as `application/octet-stream`
### Motivation
It will make LitServe more flexible. It will also allow streaming binary data.
### Pitch
Basically add another option to the lines here https://github.com/Lightning-AI/LitServe/blob/c8cf6b224d68bbf2006b4b20198007ead3a58fd8/src/litserve/server.py#L351
that returns `request.body()`
```python
if self.request_type == Request:
if request.headers["Content-Type"] == "application/x-www-form-urlencoded" or request.headers[
"Content-Type"
].startswith("multipart/form-data"):
payload = await request.form()
elif request.headers["Content-Type"] == "application/octet-stream":
payload = await request.body()
else:
payload = await request.json()
```
to support the following request
```python
response = requests.post(API_URL, data = input_bytes, headers = {"Content-Type": "application/octet-stream"})
```
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| open | 2024-11-19T03:31:35Z | 2025-02-10T15:29:04Z | https://github.com/Lightning-AI/LitServe/issues/362 | [
"enhancement"
] | ktrapeznikov | 3 |
modin-project/modin | data-science | 7,342 | Modin read_csv not loading the complete file (memory leak in file reading) | I am trying to read a large file of size > 2 GB and the read_csv is not loading complete data from input while only 3000000 records are inserted into the dataframe df_ratings. Below is the code snippet of the problem:
Installing libraries & getting dataset:
```
!kaggle datasets download -d mohamedbakhet/amazon-books-reviews
!pip install -U ipykernel
!pip install modin[all]
```
Next: Reading file -
```
import modin.pandas as mpd
df_ratings = mpd.read_csv('Books_rating.csv')
df_books = mpd.read_csv('books_data.csv')
```
> UserWarning: The size of /dev/shm is too small (6133121024 bytes). The required size at least half of RAM (6804715520 bytes). Please, delete files in /dev/shm or increase size of /dev/shm with --shm-size in Docker. Also, you can can override the memory size for each Ray worker (in bytes) to the MODIN_MEMORY environment variable.
2024-07-13 19:27:51,074 INFO worker.py:1788 -- Started a local Ray instance.
Can someone please help to load the data correctly here? | closed | 2024-07-13T19:40:40Z | 2024-07-13T20:03:58Z | https://github.com/modin-project/modin/issues/7342 | [
"question ❓",
"Triage 🩹"
] | quicksid | 1 |
long2ice/fastapi-cache | fastapi | 96 | Implement `@cacheable, @cache_put, @cache_evict` like Spring cache. | This idea comes from Spring Cache, which mainly provides caching for methods or functions, similar to `functools.cache` in Python.
These features are often applied to "CRUD" methods, while the target of fastapi-cache is HTTP interface functions. Therefore, I am not sure if these features can be implemented in fastapi-cache.
- Cacheable
> Annotation indicating that the result of invoking a method (or all methods in a class) can be cached.
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/annotation/Cacheable.html
- CachePut
> In contrast to the [@Cacheable](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/annotation/Cacheable.html) annotation, this annotation does not cause the advised method to be skipped. Rather, it always causes the method to be invoked and its result to be stored in the associated cache if the [condition()](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/annotation/CachePut.html#condition()) and [unless()](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/annotation/CachePut.html#unless()) expressions match accordingly.
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/annotation/CachePut.html
- CacheEvict
> Annotation indicating that a method (or all methods on a class) triggers a [cache evict](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/Cache.html#evict(java.lang.Object)) operation.
https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/annotation/CacheEvict.html
---
If you have any further questions, please leave a message and I will reply as soon as possible.
| open | 2022-11-03T13:54:49Z | 2023-06-21T05:31:09Z | https://github.com/long2ice/fastapi-cache/issues/96 | [
"enhancement"
] | mkdir700 | 5 |
d2l-ai/d2l-en | computer-vision | 1,702 | suggestion: move sec 4.9 (Environment and Distribution Shift) out of ch 4 (MLPs) | Sec 4.9 (presumably written by Zachary :) does not logically belong in sec 4, since it has nothing to do with MLPs.
(Indeed most of the examples concern image classification.) You might want to move it to a later part of the book. (Adding practical examples with code would also be nice, and would be more in the spirit of the rest of the book :) | closed | 2021-03-29T22:11:47Z | 2021-03-30T16:32:01Z | https://github.com/d2l-ai/d2l-en/issues/1702 | [] | murphyk | 1 |
openapi-generators/openapi-python-client | rest-api | 625 | Namespace based client package generate | **Is your feature request related to a problem? Please describe.**
Currently, default client directory and package names are picked up from the schema files. E.g. For schema titled "Company API", we get a directory named `company-api-client` with package name being `company_api_client`.
**Describe the solution you'd like**
We follow a namespace-based convention throughout our company code base, i.e. instead of `from company_api_client import Client`, we would like to have `from company.api.client import Client`. By doing so, the generated package folder will also be compliant with our company GHA workflows and get automatically published to our private PyPI.
It's mostly a matter of creating those directories within directories without any `__init__.py` so that they get recognized as namespace packages. Some changes to the `pyproject.toml` and poetry config may also be needed. While I am able to override `pyproject.toml` templates, I am unable to find a way to customize the directory structure.
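For reference, the layout we are after looks roughly like this (the `company.api.client` names are just our example; file names inside the package are illustrative):

```
company-api-client/
├── pyproject.toml            # [tool.poetry] packages = [{ include = "company" }]
└── company/                  # no __init__.py → implicit namespace package (PEP 420)
    └── api/                  # no __init__.py either
        └── client/
            ├── __init__.py   # regular package holding the generated client
            ├── client.py
            └── api/
```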
Is there a possibility to restructure the generated directories as explained above without forking/changing the core.
Best | closed | 2022-06-04T06:59:34Z | 2023-08-13T01:36:42Z | https://github.com/openapi-generators/openapi-python-client/issues/625 | [
"✨ enhancement"
] | abhinavsingh | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,297 | The PredictionError can't be visualized due to the dim error | **Describe the bug**
The PredictionError plot can't be visualized due to a dimension error.
**To Reproduce**
I use the following code:
```python
visualizer = PredictionError(model)
self.y_test = self.y_test.squeeze()
visualizer.fit(self.x_train, self.y_train)
visualizer.score(self.x_test, self.y_test)
visualizer.show()
```
And I think the error happens in `yellowbrick/regressor/prediction_error.py`
```python
def score(self, X, y, **kwargs):
# super will set score_ on the visualizer
super(PredictionError, self).score(X, y, **kwargs)
y_pred = self.predict(X)
self.draw(y, y_pred)
return self.score_
```
The dimension of `y_pred` is 2, but in the `draw_best_fit` function, `y.ndim > 1` raises an error!
```python
# Verify that y is a (n,) dimensional array
if y.ndim > 1:
raise YellowbrickValueError(
"y must be a (1,) dimensional array not {}".format(y.shape)
)
```
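Until this is handled in the library, one user-side workaround is to make sure both the targets and the predictions reach Yellowbrick as 1-D arrays, e.g. with a small wrapper around the estimator (`Flatten1DWrapper` is a hypothetical helper, not Yellowbrick API):

```python
import numpy as np


class Flatten1DWrapper:
    """Delegate to a wrapped estimator but keep targets/predictions 1-D."""

    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y=None, **kwargs):
        self.estimator.fit(X, None if y is None else np.ravel(y), **kwargs)
        return self

    def predict(self, X):
        # np.ravel turns an (n, 1) column vector into the (n,) shape
        # that draw_best_fit expects.
        return np.ravel(self.estimator.predict(X))

    def score(self, X, y):
        return self.estimator.score(X, np.ravel(y))
```

Then `PredictionError(Flatten1DWrapper(model))` should see 1-D arrays on both sides; depending on Yellowbrick's estimator type checks, more attributes (e.g. `_estimator_type`) may need forwarding.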
**Traceback**
```
Traceback (most recent call last):
File "/home/PJLAB/liangyiwen/anaconda3/envs/torch181/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/PJLAB/liangyiwen/anaconda3/envs/torch181/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 322, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 136, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/PJLAB/liangyiwen/.vscode/extensions/ms-python.python-2022.16.1/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/PJLAB/liangyiwen/Even/code/OpenBaseLab-Edu/demo/boston_reg_demo.py", line 53, in <module>
boston_reg(algorithm='LinearRegression')
File "/home/PJLAB/liangyiwen/Even/code/OpenBaseLab-Edu/demo/boston_reg_demo.py", line 32, in boston_reg
mp.plot()
File "/home/PJLAB/liangyiwen/Even/code/OpenBaseLab-Edu/BaseML/BaseMetricVisual.py", line 46, in plot
self.reg_pred_error_plot()
File "/home/PJLAB/liangyiwen/Even/code/OpenBaseLab-Edu/BaseML/BaseMetricVisual.py", line 70, in reg_pred_error_plot
visualizer.score(self.x_test, self.y_test)
File "/home/PJLAB/liangyiwen/anaconda3/envs/torch181/lib/python3.7/site-packages/yellowbrick/regressor/prediction_error.py", line 168, in score
self.draw(y, y_pred)
File "/home/PJLAB/liangyiwen/anaconda3/envs/torch181/lib/python3.7/site-packages/yellowbrick/regressor/prediction_error.py", line 218, in draw
label="best fit",
File "/home/PJLAB/liangyiwen/anaconda3/envs/torch181/lib/python3.7/site-packages/yellowbrick/bestfit.py", line 142, in draw_best_fit
"y must be a (1,) dimensional array not {}".format(y.shape)
yellowbrick.exceptions.YellowbrickValueError: y must be a (1,) dimensional array not (102, 1)
```
| open | 2023-02-03T08:16:51Z | 2023-02-25T18:19:41Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1297 | [
"type: question"
] | Even-ok | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 981 | Random squeaks in output | Using MDX for vocal isolation or VR for De-echo/de-reverb results in unexpected noises in the result. This may or may not happen and that's the worst part of this issue. Sometimes it's there and sometimes not. I'm curious as to whether anyone else has faced this and got a proper solution.
I came across this previous issue as well: [#949](https://github.com/Anjok07/ultimatevocalremovergui/issues/949) | open | 2023-11-18T08:45:32Z | 2025-01-20T09:03:02Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/981 | [] | ArifAhmed1995 | 3 |
graphistry/pygraphistry | pandas | 12 | edge colors can't be set | I get errors when I do `edge_color = ...`
| closed | 2015-07-03T07:22:32Z | 2015-08-06T13:55:34Z | https://github.com/graphistry/pygraphistry/issues/12 | [
"bug"
] | lmeyerov | 1 |
Yorko/mlcourse.ai | seaborn | 354 | Topic 9 Kaggle template broken | https://www.kaggle.com/kashnitsky/topic-9-part-2-time-series-with-facebook-prophet

| closed | 2018-09-24T12:50:54Z | 2018-10-04T14:12:09Z | https://github.com/Yorko/mlcourse.ai/issues/354 | [
"minor_fix"
] | Vozf | 1 |
ultralytics/ultralytics | python | 19,824 | WeightsUnpickler error | Hello;
I am working on a project and using the Yolov8.yaml file to train the model from scratch. Everything was okay before, and I have used this repo without problems or errors, but when I returned to the project I faced this issue and am stuck.
This is the error message:
```
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.DetectionModel was not an allowed global by default. Please use `torch.serialization.add_safe_globals([DetectionModel])` or the `torch.serialization.safe_globals([DetectionModel])` context manager to allowlist this global if you trust this class/function.
```
could anyone help, please? | open | 2025-03-22T13:36:31Z | 2025-03-23T00:22:18Z | https://github.com/ultralytics/ultralytics/issues/19824 | [
"detect"
] | Salmankm93 | 2 |
cvat-ai/cvat | tensorflow | 9,089 | uvicorn-0 | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
I have a problem: when I run the command `docker logs cvat_server -f`,
the result is this:
2025-02-11 02:50:18,537 DEBG 'uvicorn-1' stderr output:
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
2025-02-11 02:50:18,538 DEBG 'uvicorn-1' stderr output:
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2025-02-11 02:50:18,551 DEBG 'uvicorn-0' stderr output:
INFO: Started server process [239]
INFO: Waiting for application startup.
2025-02-11 02:50:18,552 DEBG 'uvicorn-0' stderr output:
INFO: ASGI 'lifespan' protocol appears unsupported.
INFO: Application startup complete.
2025-02-11 02:50:18,552 DEBG 'uvicorn-0' stderr output:
INFO: Uvicorn running on socket /tmp/uvicorn.sock (Press CTRL+C to quit)
2025-02-11 02:50:24,223 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:50:39,109 DEBG 'uvicorn-1' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:50:44,747 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:50:55,475 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:51:01,982 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:51:16,271 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:51:31,081 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:51:40,585 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:51:50,247 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:02,389 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:14,372 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:26,539 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:33,909 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:40,162 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:52:52,114 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:53:00,157 DEBG 'uvicorn-1' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:53:05,508 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:53:11,413 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:53:19,740 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:53:32,238 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:53:46,399 DEBG 'uvicorn-1' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:53:55,208 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:54:07,833 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:54:17,180 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:54:29,113 DEBG 'uvicorn-1' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:54:38,314 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:54:50,221 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:55:00,745 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:55:15,147 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:55:27,412 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:55:32,492 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:55:40,971 DEBG 'uvicorn-1' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:55:54,344 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 200 OK
2025-02-11 02:56:07,673 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
2025-02-11 02:56:17,856 DEBG 'uvicorn-0' stdout output:
INFO: 172.18.0.6:0 - "GET /api/auth/rules HTTP/1.0" 304 Not Modified
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
``` | closed | 2025-02-11T02:58:57Z | 2025-02-12T13:59:09Z | https://github.com/cvat-ai/cvat/issues/9089 | [
"need info"
] | Mirshal | 6 |
dunossauro/fastapi-do-zero | sqlalchemy | 307 | Update to Poetry 2.1 | closed | 2025-02-15T18:27:51Z | 2025-02-19T13:46:45Z | https://github.com/dunossauro/fastapi-do-zero/issues/307 | [] | dunossauro | 0