| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
sunscrapers/djoser | rest-api | 468 | SAML metadata page | While following python-social-auth's [tutorial](https://python-social-auth.readthedocs.io/en/latest/backends/saml.html#basic-usage) for configuring the SAML authentication backend, I've hit a problem: I'm unable to configure the metadata serving page. Even after setting the social URLs with the `social` namespace:
`path('auth/', include(('djoser.social.urls', 'social'), namespace='social')),`
the following error occurs:
> Reverse for 'complete' not found. 'complete' is not a valid view function or pattern name.
I've tried poking around at the Djoser integration of the python-social-auth package but have not found a way to solve this problem yet.
Is there a way to make this work in the current state of this library? (I understand this integration is still in beta.) | open | 2020-02-27T18:34:00Z | 2020-02-27T18:34:00Z | https://github.com/sunscrapers/djoser/issues/468 | [] | lm-sousa | 0 |
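For reference, python-social-auth's documented Django setup mounts its own URLconf, which is what defines the `complete` pattern that the reverse lookup above fails to find. Whether this coexists cleanly with djoser's `social` namespace include is untested here; treat it as a sketch based on the social_django docs, not a confirmed fix:

```python
# urls.py -- sketch based on the social_django documentation, not on djoser.
from django.urls import include, path

urlpatterns = [
    # python-social-auth's own login/complete/disconnect views, providing
    # the "social:complete" pattern that the SAML metadata view reverses.
    path("", include("social_django.urls", namespace="social")),
]
```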
piccolo-orm/piccolo | fastapi | 865 | name 'Serial' is not defined | When creating a new migration, I think this error only occurs with tables that use `ForeignKey` columns; details below:
```
my_app/tables.py
```
```python
class Order(Table, tablename="orders"):
user = ForeignKey(
LazyTableReference(
table_class_name="Users",
module_path="users.tables"
)
)
tariff = ForeignKey(
LazyTableReference(
table_class_name="Tariff",
module_path="course.tables"
)
)
```
```
my_app/piccolo_migrations/my_app_2023_06_30t18_31_03_962663.py
```
<img src='https://github.com/piccolo-orm/piccolo/assets/73847672/01d68eba-d3a1-401e-a2d3-e588e926b9f6' width='500'>
***
<details><summary><b>Traceback</b></summary>
```
The command failed.
name 'Serial' is not defined
Traceback (most recent call last):
File "path\to\project\env\Lib\site-packages\targ\__init__.py", line 448, in run
command.call_with(arg_class)
File "path\to\project\env\Lib\site-packages\targ\__init__.py", line 229, in call_with
asyncio.run(self.command(**cleaned_kwargs))
File "path\to\programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "path\to\programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path\to\programs\Python\Python311\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 159, in forwards
response = await run_forwards(
^^^^^^^^^^^^^^^^^^^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 120, in run_forwards
response = await manager.run()
^^^^^^^^^^^^^^^^^^^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 97, in run
return await self.run_migrations(app_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\forwards.py", line 37, in run_migrations
] = self.get_migration_modules(app_config.migrations_folder_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\base.py", line 53, in get_migration_modules
modules: t.List[MigrationModule] = [
^
File "path\to\project\env\Lib\site-packages\piccolo\apps\migrations\commands\base.py", line 54, in <listcomp>
t.cast(MigrationModule, importlib.import_module(name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "path\to\programs\Python\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "path\to\project\payment\piccolo_migrations\payment_2023_07_14t19_26_53_265153.py", line 20, in <module>
class Tariff(Table, tablename="tariff", schema=None):
File "path\to\project\payment\piccolo_migrations\payment_2023_07_14t19_26_53_265153.py", line 21, in Tariff
id = Serial(
^^^^^^
NameError: name 'Serial' is not defined
```
</details> | open | 2023-07-14T14:54:22Z | 2023-07-22T19:37:58Z | https://github.com/piccolo-orm/piccolo/issues/865 | [
"bug"
] | hoosnick | 11 |
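For readers hitting the same `NameError`, one possible workaround (an assumption on my part; the proper fix presumably belongs in Piccolo's migration autogeneration) is to add the missing import at the top of the generated migration file. Sketch only; the column arguments shown are illustrative:

```python
# payment_2023_07_14t19_26_53_265153.py -- top of the generated migration.
# The Serial import is a hypothetical manual workaround; everything else
# mirrors what the autogenerated file already contains.
from piccolo.table import Table
from piccolo.columns.column_types import Serial  # <- the name the traceback says is missing


class Tariff(Table, tablename="tariff", schema=None):
    id = Serial(null=False, primary_key=True)
```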
huggingface/datasets | machine-learning | 7,377 | Support for sparse arrays with the Arrow Sparse Tensor format? | ### Feature request
AI in biology is becoming a big thing. One thing that would be a huge benefit to the field, and that Hugging Face Datasets doesn't currently have, is native support for **sparse arrays**.
Arrow has support for sparse tensors.
https://arrow.apache.org/docs/format/Other.html#sparse-tensor
It would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively.
### Motivation
This is important, for example, in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very beneficial: it would make using Hugging Face Dataset objects a lot more straightforward and clean.
### Your contribution
We can discuss this further once the team comments on what they think about the feature, whether there were previous attempts at making it work, and how hard they estimate it would be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it. | open | 2025-01-21T20:14:35Z | 2025-01-30T14:06:45Z | https://github.com/huggingface/datasets/issues/7377 | [
"enhancement"
] | JulesGM | 1 |
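To illustrate why a sparse layout pays off for mostly-zero matrices like gene-expression data, here is a minimal stdlib-only sketch of the COO (coordinate) format that Arrow's sparse tensors also use. It is illustrative only, not the Arrow or Datasets API:

```python
def to_coo(dense):
    """Convert a dense 2-D list into COO form: (values, row_idx, col_idx, shape)."""
    values, rows, cols = [], [], []
    for i, row in enumerate(dense):
        for j, v in enumerate(row):
            if v != 0:  # only non-zero entries are stored
                values.append(v)
                rows.append(i)
                cols.append(j)
    return values, rows, cols, (len(dense), len(dense[0]))


def coo_density(values, shape):
    """Fraction of stored entries relative to the dense size."""
    return len(values) / (shape[0] * shape[1])


# A toy "gene expression" matrix: most genes are not expressed (zero).
matrix = [
    [0, 0, 3, 0],
    [0, 0, 0, 0],
    [7, 0, 0, 1],
]
vals, rows, cols, shape = to_coo(matrix)
# only 3 of the 12 cells need storing (density 0.25)
```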
widgetti/solara | jupyter | 36 | Quickstart example does not render correctly with solara run | I have a fresh conda environment with python 3.9 on windows. I've used poetry to install solara.
I have a `myapp.py` file with the following code:
```python
import solara
@solara.component
def Page():
clicks, set_clicks = solara.use_state(0)
def increase_clicks():
set_clicks(clicks+1)
solara.Button(label=f"Clicked {clicks} times", on_click=increase_clicks)
```
And then run with:
```
solara run myapp.py
```
The webpage looks like this:

These are some of the errors in the console:
```
The stylesheet http://localhost:8765/_solara/cdn/font-awesome@4.5.0/css/font-awesome.min.css was not loaded because its MIME type, “text/plain”, is not “text/css”.
The script from “http://localhost:8765/_solara/cdn/mermaid@9.1.7/dist/mermaid.min.js” was loaded even though its MIME type (“text/plain”) is not a valid JavaScript MIME type
The script from “http://localhost:8765/_solara/cdn/requirejs@2.3.6/require.js” was loaded even though its MIME type (“text/plain”) is not a valid JavaScript MIME type.
The stylesheet http://localhost:8765/_solara/cdn/@widgetti/solara-vuetify-app@3.0.1/dist/main.css was not loaded because its MIME type, “text/plain”, is not “text/css”.
```
| closed | 2023-03-09T10:25:21Z | 2023-03-09T14:07:56Z | https://github.com/widgetti/solara/issues/36 | [] | Jhsmit | 11 |
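One known cause of `text/plain` MIME types on Windows is that Python's stdlib `mimetypes` module consults the Windows registry, which can carry broken content-type entries. This is offered as an assumption, not a confirmed diagnosis for solara. A quick check, plus the usual workaround of registering the correct types explicitly before the server starts:

```python
import mimetypes

# What does this interpreter think .css and .js are?
print(mimetypes.guess_type("main.css"))  # a broken registry can yield ("text/plain", None)
print(mimetypes.guess_type("require.js"))

# Workaround: register the correct types up front.
mimetypes.add_type("text/css", ".css")
mimetypes.add_type("application/javascript", ".js")
```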
mwouts/itables | jupyter | 56 | Use Jupyter Book to build the ITables documentation | We should split the README into multiple chapters of a Jupyter Book
TODO
- [x] Fix the table width and font
- [x] Fix the [issue](https://github.com/executablebooks/jupyter-book/issues/1610) with the sample dataframe notebook
- [x] Automatize the publication on GitHub pages | closed | 2022-01-22T00:41:51Z | 2022-01-24T23:29:44Z | https://github.com/mwouts/itables/issues/56 | [] | mwouts | 1 |
pytest-dev/pytest-django | pytest | 306 | Migrations fail silently | If I run `py.test --create-db` with migrations that don't work in dev, py.test just swallows them up and a lot of tests fail due to missing columns etc.
| closed | 2016-01-11T09:19:44Z | 2020-10-16T19:06:52Z | https://github.com/pytest-dev/pytest-django/issues/306 | [
"bug"
] | lee-kagiso | 4 |
jupyterlab/jupyter-ai | jupyter | 510 | Leverage `ExtensionHandlerMixin` from Jupyter Server |
### Problem
Jupyter Server provides an `ExtensionHandlerMixin` class for more complex extension handlers. We should probably take advantage of this.
See discussion in: https://github.com/jupyter-server/jupyter_server_fileid/pull/72#discussion_r1420726422
| open | 2023-12-08T16:34:49Z | 2023-12-08T16:34:49Z | https://github.com/jupyterlab/jupyter-ai/issues/510 | [
"enhancement"
] | dlqqq | 0 |
mage-ai/mage-ai | data-science | 5,618 | Instance-wide API key rotation | **Is your feature request related to a problem? Please describe.**
Our organization values strong security practices, but the application currently lacks a method to rotate the instance-wide API key. This creates a potential risk if the key is ever compromised, as there is no way to retire the compromised key or introduce a new one. This limitation also makes it challenging to adhere to security policies that mandate periodic credential rotation.
**Describe the solution you'd like**
We would like the ability to rotate the instance-wide API key through the application's interface or API. The solution could include:
- The ability to generate a new API key while keeping the existing key temporarily active to ensure a seamless transition.
- Automated expiration or deactivation of old keys after a configurable period.
- Notifications or logs indicating when a key was rotated and by whom for auditing purposes.
**Describe alternatives you've considered**
We do try to limit the key's exposure via other external controls, but this does not address the root problem of the inability to rotate the key itself.
**Additional context**
Adding this feature would align the application with modern security best practices and industry standards for credential management. | closed | 2024-12-16T18:50:27Z | 2024-12-22T21:00:21Z | https://github.com/mage-ai/mage-ai/issues/5618 | [] | the-archbishop | 1 |
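The dual-key transition described above can be sketched with the stdlib alone: mint a new key, keep the old one valid for a configurable grace period, then reject it. Hypothetical illustration, not Mage's actual API:

```python
import secrets
import time


class ApiKeyRing:
    """Holds one active key plus retiring keys that stay valid during a grace period."""

    def __init__(self, grace_seconds=3600):
        self.grace_seconds = grace_seconds
        self.active = secrets.token_urlsafe(32)
        self.retiring = {}  # old key -> expiry timestamp

    def rotate(self, now=None):
        """Issue a new active key; the old one stays usable until its grace expires."""
        now = time.time() if now is None else now
        self.retiring[self.active] = now + self.grace_seconds
        self.active = secrets.token_urlsafe(32)
        return self.active

    def is_valid(self, key, now=None):
        now = time.time() if now is None else now
        if key == self.active:
            return True
        expiry = self.retiring.get(key)
        return expiry is not None and now < expiry


ring = ApiKeyRing(grace_seconds=10)
old = ring.active
ring.rotate(now=0)  # the old key remains valid until t=10
```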
vaexio/vaex | data-science | 1,387 | Join with interval | Hi,
Is there any way to perform this operation in Vaex?

`trim2 = df1.join(df2, on=[df1['CODMUNRES']==df2['mun_geocod'], df2['Date'] >= df1['Trim2_start'], df2['Date'] <= df1['Trim2_stop']], how='left').groupBy('ID','CODMUNRES')`
| open | 2021-06-05T06:21:52Z | 2021-06-05T06:21:52Z | https://github.com/vaexio/vaex/issues/1387 | [] | erickkill | 0 |
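As far as I know, Vaex's `join` only matches on column equality, so the condition above (an equality plus a date-interval predicate) has to be emulated. A plain-Python sketch of the intended semantics, illustrative only and not Vaex API:

```python
def interval_left_join(df1_rows, df2_rows):
    """Left join: match on the code column, keeping df2 rows whose Date
    falls inside [Trim2_start, Trim2_stop]. Rows are plain dicts here."""
    out = []
    for left in df1_rows:
        matches = [
            right for right in df2_rows
            if right["mun_geocod"] == left["CODMUNRES"]
            and left["Trim2_start"] <= right["Date"] <= left["Trim2_stop"]
        ]
        if matches:
            out.extend({**left, **m} for m in matches)
        else:
            out.append(dict(left))  # left join: keep unmatched left rows
    return out


df1 = [{"CODMUNRES": 1, "Trim2_start": 10, "Trim2_stop": 20}]
df2 = [{"mun_geocod": 1, "Date": 15}, {"mun_geocod": 1, "Date": 25}]
rows = interval_left_join(df1, df2)
# only the Date=15 row falls inside the interval
```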
scikit-learn/scikit-learn | machine-learning | 30,621 | Add links to examples from the docstrings and user guide | _TLDR: Meta-issue for new contributors to add links to the examples in helpful places of the rest of the docs._
## Description
This meta-issue is a good place to start with your first contributions to scikit-learn.
This issue builds on top of #26927 and is introduced for easier maintainability. The goal is exactly the same as in the old issue.
Here, we improve the documentation by making the [Examples](https://scikit-learn.org/stable/auto_examples/index.html) more discoverable by **adding links to examples in relevant sections of the documentation in the _API documentation_ and in the _User Guide_**:
- the [API documentation](https://scikit-learn.org/stable/api/index.html) is made from the docstrings of public classes and functions which can be found in the `sklearn` folder of the project
- the [User Guide](https://scikit-learn.org/stable/user_guide.html) can be found in the `doc/modules` folder of the project
Together with the [examples](https://scikit-learn.org/stable/auto_examples/index.html) (which are in the `examples` folder of the project), these files get rendered into html when the documentation is build and then are displayed on the [scikit-learn website](https://scikit-learn.org).
**Important: We estimate that only 70% of the examples in this list will ultimately be referenced. This means part of the task is deciding which examples deserve being referenced and we are aware that this is not a trivial decision, especially for new contributors. We encourage you to share your reasoning, and a team member will make the final call. We hope this isn’t too frustrating, but please know that evaluating an example is not just an exercise for new contributors; it’s a meaningful and valuable contribution to the project, even (and especially) if the example you worked on doesn’t end up being linked.**
## Workflow
We recommend this workflow for you:
0. have `pre-commit` installed in your environment as in point 10 of _How to contribute_ in the [development guide](https://scikit-learn.org/dev/developers/contributing.html#contributing-code) (this will re-format your contribution to the standards used in scikit-learn and will spare you a lot of confusion when you are a beginner)
1. pick an example to work on
- Make sure your example of interest had not recently been claimed by someone else by looking through the discussion of this issue (you will have to load hidden items in this discussion). Hint: If somebody has claimed an example several weeks ago and then never started it, you can take it. You can also take over tasks marked as _stalled_.
- search the repo for other links to your example and check if the example is already linked in relevant parts of the docs
- how to search the repo: a) find the file name of your example in the examples folder (it starts with `plot_...`); b) use full text search of your IDE to look for where that name appears
- you can totally ignore the "Gallery examples" on the website, as it is auto-generated; look only for real links in the repo
- comment on the issue to claim an example (you don't need to wait for a team member's approval before starting to work)
2. find suitable spots in either the _API documentation_ or the _User Guide_ (or both) where users would be happy to find your example linked
- read through your example and understand where it is making its most useful statements
- how to find a good spot (careful: we are extremely picky here)
- if the example demonstrates a certain real world use case: find where in the _User Guide_ the same use case is treated or could be treated
- if the example shows how to use a certain param: the param description in the _API documentation_ might be a good spot to put the link
- if the example compares different techniques: this highly calls for mentioning it in the more theoretical parts of the _User Guide_
- not all the examples listed here need to be referenced: a link to an example on simply how to use some estimator, doesn't add enough value
- if you find an example that doesn't add enough value to be linked: please leave a comment here; this kind of contribution is highly appreciated
3. add links
- An example with the path examples/developing_estimators/sklearn_is_fitted.py would be referenced like this:
```
:ref:`sphx_glr_auto_examples_developing_estimators_sklearn_is_fitted.py`
```
- see this example PR, that shows how to add a link to the User Guide: #26926
- we aim **not** to use the `.. rubric:: Examples` section to put the example if possible, but to integrate it into the text; be aware that if you add a link like this \:ref:\`title \<link\>\`, you can change its title so that the example's title gets substituted by your picked title and the link can be fitted more nicely to the sentences
- please avoid adding your link to a list of other examples, since we strive to add the links in the most relevant places
- please avoid adding a new `.. rubric:: Examples` section
4. test build the documentation before opening your PR
- have a look into the [Documentation part of the Development Guide](https://scikit-learn.org/dev/developers/contributing.html#building-the-documentation) to learn how to locally build the documentation.
- Check if your changes are displayed as desired by opening the test build in your browser.
5. open PR
- use a PR title like `DOC add links to <name of example>` (starting with DOC)
- do not refer to this issue in the title of the PR; instead:
- in the *Reference Issues/PRs* section of your PR, refer to this issue using "Towards `#30621`" (do **not** use "Closes #..." or "Fixes #...")
6. check the CI
- After the CI tests have finished (~90 minutes) you can find a check that says "Check the rendered docs here!". In there, you can look into how the CI has built the documentation for the changed files to check if everything looks alright. You will see something like `auto_examples/path_to_example, [dev], [stable]`, where the first link is your branch's version, the second is the main dev branch and the third link is the last released scikit-learn version that is used for the stable documentation on the website.
- if the CI shows any failure, you should take action by investigating and proposing solutions; as a rule of thumb, you can find the most useful information from the CIs if you click the upper links first; in any case you need to click through several layers until you see actual test results with more information (and until it looks similar to running pytest, ruff or doctest locally)
- if the CI shows linting issues, check if you have installed and activated `pre-commit` properly, and fix the issue by the action the CI proposes (for instance adding or deleting an empty line)
- if you are lost and don't know what to do with a CI failure, look through other PRs from this issue; most things have already happened to others
- sometimes, http request errors such as 404 or 405 show up in the CI, in which case you should push an empty commit (`git commit --allow-empty -m "empty commit to re-trigger CI"`)
7. wait for reviews and be ready to adjust your contribution later on
## Expectation management for new contributors
How long will your first PR take you up until the point you open a PR?
- 8-16 hours if you have never contributed to any project and have only basic or no understanding of the workflow yet
- 2-8 hours if you know the workflow and are just new to scikit-learn (more to the shorter end if you know what linting is and a bit of sphinx)
- 1-2 hours for your 2nd, 3rd, ... PR on the same issue for everyone
How long will it take us to merge your PR?
- we strive for a scikit-learn member to look at your PR within a few days and suggest changes depending on technical quality of the PR and an assessment of added value to the user
- we strive for a maintainer to evaluate your PR within a few weeks; they might also suggest changes before approving and merging
- the whole process on average takes several weeks and can take months, depending on availability of maintainers and on how many review cycles are necessary
## ToDo
Here's a list of all the remaining examples:
- examples/applications:
- [x] plot_model_complexity_influence.py # no references need to be added: #30814
- [ ] plot_out_of_core_classification.py #30462 (stalled)
- [ ] plot_prediction_latency.py #30462 (stalled)
- [ ] plot_topics_extraction_with_nmf_lda.py
- examples/bicluster:
- [ ] plot_bicluster_newsgroups.py
- [ ] plot_spectral_coclustering.py #29606 (stalled)
- examples/calibration:
- [ ] plot_compare_calibration.py
- examples/classification:
- [ ] plot_classifier_comparison.py
- [ ] plot_digits_classification.py
- examples/cluster:
- [x] plot_agglomerative_clustering_metrics.py #30867
- [x] plot_cluster_comparison.py #30127
- [ ] plot_coin_ward_segmentation.py #30916
- [x] plot_dict_face_patches.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2716167959)
- [ ] plot_digits_agglomeration.py #30979
- [ ] plot_digits_linkage.py
- [ ] plot_face_compress.py
- [x] plot_inductive_clustering.py #30182
- [ ] plot_segmentation_toy.py #30978
- [ ] plot_ward_structured_vs_unstructured.py #30861
- examples/covariance:
- [ ] plot_mahalanobis_distances.py
- [ ] plot_robust_vs_empirical_covariance.py
- [ ] plot_sparse_cov.py
- examples/decomposition:
- [x] plot_ica_blind_source_separation.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2649370018): https://github.com/scikit-learn/scikit-learn/pull/30786
- [x] plot_ica_vs_pca.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2649370018): https://github.com/scikit-learn/scikit-learn/pull/30786
- [ ] plot_image_denoising.py #30864
- [ ] plot_sparse_coding.py
- [ ] plot_varimax_fa.py
- examples/ensemble:
- [ ] plot_bias_variance.py #30845
- [ ] plot_ensemble_oob.py
- [ ] plot_feature_transformation.py
- [ ] plot_forest_hist_grad_boosting_comparison.py
- [ ] plot_forest_importances_faces.py
- [x] plot_forest_importances.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2731163071)
- [x] plot_forest_iris.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2676356956)
- [x] plot_gradient_boosting_categorical.py #30749
- [x] plot_gradient_boosting_oob.py #30749
- [x] plot_gradient_boosting_regularization.py #30749
- [ ] plot_monotonic_constraints.py
- [ ] plot_random_forest_regression_multioutput.py
- [x] plot_stack_predictors.py #30747
- [x] plot_voting_decision_regions.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30847#discussion_r1963601795) #30847
- [x] plot_voting_probas.py #30847
- examples/feature_selection:
- [x] plot_feature_selection.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/31000#issuecomment-2728836616) #31000
- [x] plot_f_test_vs_mi.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2734809734)
- [ ] plot_rfe_with_cross_validation.py
- [ ] plot_select_from_model_diabetes.py
- examples/gaussian_process:
- [ ] plot_gpc_iris.py #30605
- [ ] plot_gpc_isoprobability.py #30605
- [ ] plot_gpc.py #30605
- [ ] plot_gpc_xor.py #30605
- [ ] plot_gpr_co2.py
- [ ] plot_gpr_noisy.py
- [x] plot_gpr_noisy_targets.py #30850
- [ ] plot_gpr_on_structured_data.py
- [ ] plot_gpr_prior_posterior.py
- examples/inspection:
- [x] plot_causal_interpretation.py #30752
- [ ] plot_linear_model_coefficient_interpretation.py
- [ ] plot_permutation_importance_multicollinear.py
- [ ] plot_permutation_importance.py
- examples/linear_model:
- [ ] plot_ard.py
- [ ] plot_huber_vs_ridge.py
- [ ] plot_iris_logistic.py
- [ ] plot_lasso_and_elasticnet.py #30587
- [ ] plot_lasso_coordinate_descent_path.py
- [ ] plot_lasso_dense_vs_sparse_data.py
- [ ] plot_lasso_lars_ic.py
- [ ] plot_lasso_lars.py
- [ ] plot_lasso_model_selection.py
- [ ] plot_logistic_l1_l2_sparsity.py
- [ ] plot_logistic_multinomial.py
- [ ] plot_logistic_path.py
- [ ] plot_logistic.py #30942
- [ ] plot_multi_task_lasso_support.py
- [ ] plot_nnls.py
- [ ] plot_ols_3d.py
- [x] plot_ols.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2600872584)
- [x] plot_ols_ridge_variance.py #30683
- [ ] plot_omp.py
- [ ] plot_poisson_regression_non_normal_loss.py
- [ ] plot_polynomial_interpolation.py
- [ ] plot_quantile_regression.py
- [ ] plot_ridge_coeffs.py
- [ ] plot_ridge_path.py
- [ ] plot_robust_fit.py
- [ ] plot_sgd_comparison.py
- [ ] plot_sgd_iris.py
- [ ] plot_sgd_separating_hyperplane.py
- [ ] plot_sgd_weighted_samples.py
- [ ] plot_sparse_logistic_regression_20newsgroups.py
- [ ] plot_sparse_logistic_regression_mnist.py
- [ ] plot_theilsen.py
- [ ] plot_tweedie_regression_insurance_claims.py
- examples/manifold:
- [ ] plot_lle_digits.py
- [ ] plot_manifold_sphere.py #30959
- [ ] plot_swissroll.py
- [ ] plot_t_sne_perplexity.py
- examples/miscellaneous:
- [ ] plot_anomaly_comparison.py
- [ ] plot_display_object_visualization.py
- [ ] plot_estimator_representation.py
- [ ] plot_johnson_lindenstrauss_bound.py
- [ ] plot_kernel_approximation.py
- [ ] plot_metadata_routing.py
- [ ] plot_multilabel.py
- [x] plot_multioutput_face_completion.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2676356956)
- [ ] plot_outlier_detection_bench.py
- [ ] plot_partial_dependence_visualization_api.py
- [ ] plot_pipeline_display.py
- [ ] plot_roc_curve_visualization_api.py
- [ ] plot_set_output.py
- examples/mixture:
- [ ] plot_concentration_prior.py
- [ ] plot_gmm_covariances.py
- [ ] plot_gmm_init.py
- [ ] plot_gmm_pdf.py
- [x] plot_gmm.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30841#issue-2855807102): #30841
- [x] plot_gmm_selection.py #30841
- [x] plot_gmm_sin.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30841#issue-2855807102): #30841
- examples/model_selection:
- [ ] plot_confusion_matrix.py #30949
- [ ] plot_cv_predict.py
- [x] plot_det.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30977#pullrequestreview-2684987302)
- [ ] plot_grid_search_digits.py
- [ ] plot_grid_search_refit_callable.py
- [ ] plot_grid_search_stats.py #30965
- [ ] plot_grid_search_text_feature_extraction.py #30974
- [ ] plot_likelihood_ratios.py
- [ ] plot_multi_metric_evaluation.py
- [ ] plot_permutation_tests_for_classification.py
- [x] plot_precision_recall.py # [no reference needs to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2669520889)
- [ ] plot_randomized_search.py
- [ ] plot_roc_crossval.py
- [ ] plot_roc.py
- [ ] plot_successive_halving_heatmap.py
- [ ] plot_successive_halving_iterations.py
- [ ] plot_train_error_vs_test_error.py
- [x] plot_underfitting_overfitting.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2681734179)
- [x] <strike>plot_validation_curve.py</strike> #had been merged with another example in #29936
- examples/neighbors:
- [ ] plot_digits_kde_sampling.py
- [ ] plot_kde_1d.py
- [ ] plot_lof_novelty_detection.py
- [ ] plot_lof_outlier_detection.py
- [x] plot_nca_classification.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [x] plot_nca_dim_reduction.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [x] plot_nca_illustration.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [ ] plot_species_kde.py
- examples/semi_supervised:
- [x] plot_label_propagation_digits_active_learning.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [x] plot_label_propagation_digits.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [x] plot_label_propagation_structure.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [ ] plot_self_training_varying_threshold.py
- [ ] plot_semi_supervised_newsgroups.py #30882
- [ ] plot_semi_supervised_versus_svm_iris.py
- examples/svm:
- [ ] plot_custom_kernel.py
- [ ] plot_iris_svc.py
- [ ] plot_linearsvc_support_vectors.py
- [ ] plot_oneclass.py
- [ ] plot_rbf_parameters.py
- [ ] plot_separating_hyperplane.py
- [ ] plot_separating_hyperplane_unbalanced.py
- [ ] plot_svm_anova.py
- [ ] plot_svm_margin.py #26969 (stalled) #30975 ([maybe remove the example](https://github.com/scikit-learn/scikit-learn/pull/30975#pullrequestreview-2684941292))
- [ ] plot_weighted_samples.py #30676
- examples/tree:
- [x] plot_iris_dtc.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30650#issuecomment-2653822241) #30650
- <strike>[x] plot_tree_regression_multioutput.py </strike> # was merged with another example in #26962
- [x] plot_unveil_tree_structure.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2626465696)
## What comes next?
- after working a bit here, you might want to further explore contributing to scikit learn
- we have #22827 and #25024 that are both also suitable for beginners, but might move forwards a little slower than here
- we are looking for people who are willing to do some intense work to improve or merge some examples; these will be PRs that will be intensely discussed and thoroughly reviewed and will probably take several months; if this sounds good to you, please open an issue with a suggestion and maintainers will evaluate your idea
- this could look like #29963 and #29962
- we also have an open issue to discuss examples that can be removed: #27151
- if you are more senior professionally, you can look through the issues with the [`help wanted`](https://github.com/scikit-learn/scikit-learn/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22help%20wanted%22) label or with the [`moderate`](https://github.com/scikit-learn/scikit-learn/labels/Moderate) label or you can take over [stalled PRs](https://github.com/scikit-learn/scikit-learn/issues?q=is%3Apr%20state%3Aopen%20label%3AStalled); these kind of contributions need to be discussed with maintainers and I would recommend seeking their approval first and not invest too much work before you get a go | open | 2025-01-10T12:29:04Z | 2025-03-24T15:21:08Z | https://github.com/scikit-learn/scikit-learn/issues/30621 | [
"Documentation",
"Sprint",
"good first issue",
"Meta-issue"
] | StefanieSenger | 101 |
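The repo search in step 1 (finding where an example is already linked) can be done with plain `grep -rn "<example_name>" doc/ examples/ sklearn/`, or with a few lines of stdlib Python. The file names in this sketch are illustrative:

```python
import pathlib
import tempfile


def find_references(root, example_name, suffixes=(".py", ".rst")):
    """Return names of files under `root` whose text mentions `example_name`."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            text = path.read_text(encoding="utf-8", errors="ignore")
            if example_name in text:
                hits.append(path.name)
    return sorted(hits)


# Toy tree standing in for doc/ and examples/ (hypothetical file contents).
root = pathlib.Path(tempfile.mkdtemp())
(root / "tree.rst").write_text(":ref:`sphx_glr_auto_examples_tree_plot_iris_dtc.py`")
(root / "unrelated.rst").write_text("nothing here")
refs = find_references(root, "plot_iris_dtc")
```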
pydantic/pydantic-core | pydantic | 853 | manylinux aarch64 wheels aren't manylinux2014 compliant | This could be due to an underlying issue with maturin, but the published `aarch64` wheels for `pydantic_core` do not appear to be `manylinux2014` compliant. This doesn't look like a huge surprise given that they appear to be built in a `manylinux_2_24` container, but somehow the final wheels haven't ended up with the correct names?
For background, I was trying to build a package that requires pydantic v2 with `cibuildwheel`, and it fails for `aarch64` and `manylinux2014` with an error importing `pydantic_core` in the test step:
```
ImportError: /tmp/tmp.jCq1ERH7jQ/venv/lib/python3.8/site-packages/pydantic_core/_pydantic_core.cpython-38-aarch64-linux-gnu.so: symbol __cxa_thread_atexit_impl, version GLIBC_2.18 not defined in file libc.so.6 with link time reference
```
Link to the error in context in the logs for the `cibuildwheel` run: https://dev.azure.com/explosion-ai/Public/_build/results?buildId=26592&view=logs&j=166da4ee-13dd-5b99-3378-c30201f23530&t=674eee91-5f01-5a8d-f3f1-8c028466f401&l=6865
The package builds fine with `cibuildwheel` using `CIBW_MANYLINUX_AARCH64_IMAGE="manylinux_2_24"`.
Selected Assignee: @dmontagu | closed | 2023-08-04T15:58:44Z | 2023-08-16T16:57:27Z | https://github.com/pydantic/pydantic-core/issues/853 | [
"unconfirmed"
] | adrianeboyd | 2 |
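A wheel advertises its compliance in the filename's platform tag (PEP 427), which is a quick thing to check before reaching for `auditwheel show`. A stdlib sketch; the wheel name below is hypothetical, and manylinux2014 corresponds to glibc 2.17 (tags `manylinux2014_*` / `manylinux_2_17_*`):

```python
def wheel_platform_tag(filename):
    """Return the platform tag of a wheel filename (the last dash-separated field)."""
    stem = filename[:-len(".whl")]
    return stem.split("-")[-1]


tag = wheel_platform_tag(
    "pydantic_core-2.0.0-cp38-cp38-manylinux_2_24_aarch64.whl"  # hypothetical name
)
# A manylinux2014-compliant aarch64 wheel would carry a glibc-2.17 tag instead.
compatible = tag.startswith(("manylinux2014", "manylinux_2_17"))
```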
adap/flower | tensorflow | 4,919 | Add a SizePartitioner with IID Distribution | ### Describe the type of feature and its functionality.
The current implementation of Flower’s partitioners provides two separate functionalities: the IidPartitioner and SizePartitioner. This proposal suggests introducing a new partitioner "SizePartitionerIID" that merges both functionalities. It will allow users to specify exact partition sizes while also ensuring that the dataset is partitioned in an IID manner by shuffling the indices before splitting.
Functionality:
- Randomly shuffle the dataset indices before partitioning (an IID variant of the existing `SizePartitioner`).
- Allow the user to pass a fixed random seed for the shuffle, to ensure reproducible results.
### Describe step by step what files and adjustments are you planning to include.
To implement the proposed feature, we first create a new class named `SizePartitionerIID` in `flwr_datasets/partitioner/size_partitioner.py`. This class will inherit from the base `Partitioner` class while keeping the overall structure of the existing `SizePartitioner`. The key modification is in the `_determine_partition_id_to_indices_if_needed` method, where we introduce IID enforcement. We first generate a list of all dataset indices and shuffle them with Python's `random` module; a user-provided random seed keeps the results reproducible. Once shuffled, we select only the first `total_desired` indices, where `total_desired` is the sum of all specified partition sizes, and divide them into segments according to each partition's defined size.
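The shuffle-then-split logic described above can be sketched in a few lines of plain Python (names such as `iid_size_partition` are illustrative, not Flower's API):

```python
import random

def iid_size_partition(num_samples, partition_sizes, seed=42):
    """Shuffle all dataset indices, then split them into the requested sizes.

    `partition_sizes` and `seed` mirror the arguments the proposed
    `SizePartitionerIID` would take; the names are illustrative, not Flower's API.
    """
    total_desired = sum(partition_sizes)
    if total_desired > num_samples:
        raise ValueError("Requested more samples than the dataset contains")
    rng = random.Random(seed)        # fixed seed -> reproducible partitions
    indices = list(range(num_samples))
    rng.shuffle(indices)             # IID: any sample can land in any partition
    partitions, start = {}, 0
    for partition_id, size in enumerate(partition_sizes):
        partitions[partition_id] = indices[start:start + size]
        start += size
    return partitions
```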
### Is there something else you want to add?
_No response_ | closed | 2025-02-07T10:49:29Z | 2025-02-20T10:25:28Z | https://github.com/adap/flower/issues/4919 | [
"feature request"
] | RedPandaY | 0 |
autogluon/autogluon | data-science | 4,218 | [tabular] Add log-scaling to regression for appropriate metrics | Reference: [Discord Thread](https://discord.com/channels/1043248669505368144/1241688613725536296)
Kudos to @giladrubin1 for starting the thread and @LennartPurucker for helping with brainstorming
## The Idea
In regression tasks for metrics such as RMSLE (root mean squared logarithmic error), it is beneficial to first log-scale the ground truth before training, and then fit with RMSE. At predict time, we can invert the log transform to get the corrected predictions.
This is because passing RMSLE directly for early stopping fails to align with the way the model's loss function is being optimized, while using RMSE on the log-transformed targets aligns much more closely.
This process should also be done for all other cases where log transforms occur (and potentially other transforms).
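The core identity behind the idea can be sketched in a few lines (names here are illustrative, not AutoGluon's `Scorer` API): fitting against RMSE on `log1p`-transformed targets optimizes the same objective as RMSLE on the original scale, and `expm1` inverts the transform at predict time.

```python
import math

def transform(y):
    # log1p handles zero-valued targets that a plain log cannot
    return [math.log1p(v) for v in y]

def inverse_transform(y_log):
    # expm1 is the exact inverse of log1p, recovering the original scale
    return [math.expm1(v) for v in y_log]

def rmse(y, p):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def rmsle(y, p):
    # RMSLE on the original scale == RMSE in log space
    return rmse(transform(y), transform(p))
```

So a model early-stopping on RMSE over `transform(y)` is, step for step, early-stopping on RMSLE over `y`, which is exactly the alignment described above; predictions just need `inverse_transform` before being returned.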
## Implementation Ideas
### Internal Handling
One candidate idea is add extra logic to `Scorer`. This logic would include `.transform(y)` and `.inverse_transform(y)`. We can then refer to these transforms when scoring predictions and returning predictions to the user.
One downside of this approach is added code complexity. If we ever forget to do the transform / inverse transform at any point, the scores would become incorrect and performance would severely degrade. We can ward against this with careful unit testing and benchmarking of the metrics this applies to.
Maybe this logic can also be tied to the Models, ensuring the transform and inverse transforms happen in the `predict`, `predict_proba` and `score` methods, among others. We will need to ensure that this doesn't happen multiple times in the cases of bagged ensembles or nested models.
We could use an `ag.metric_transform` ag_args_fit argument to enable or disable the transforms in the models.
We will need to add unit tests for the transform and inverse_transforms to ensure they work as intended.
### External Handling
Another option is to have TabularPredictor or Trainer handle the transform. TabularPredictor is less ideal because all the scores reported in logging and leaderboard would be incorrect, but it would be the easiest to make work without bugs. For Trainer, we would be able to make the logging and leaderboard correct, but things might become odd with model-specific logging.
### Detailed Thoughts
- [ ] Some models can more directly optimize for RMSLE, and we could avoid the log transforms in these cases. For example, the weighted ensemble wouldn't benefit from the log transform when fitting, and in fact it would probably harm performance.
- [ ] For multi-layer stacking, should we pass the stack features in the original or the transformed representation to the stacker models? | open | 2024-05-22T19:02:57Z | 2025-02-05T15:05:56Z | https://github.com/autogluon/autogluon/issues/4218 | [
"enhancement",
"module: tabular",
"priority: 1"
] | Innixma | 3 |
home-assistant/core | python | 141,257 | Roborock Connection | ### The problem
Hello,
I try to connect my Roborock but I get an error:
```
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/config_entries.py", line 640, in __async_setup_with_context
    result = await component.async_setup_entry(hass, self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/homeassistant/homeassistant/components/roborock/__init__.py", line 72, in async_setup_entry
    device.duid: device for device in all_devices
    ^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'duid'
```

### What version of Home Assistant Core has the issue?
core
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-24T06:37:32Z | 2025-03-24T16:31:18Z | https://github.com/home-assistant/core/issues/141257 | [
"needs-more-information",
"integration: roborock"
] | Dengo91 | 3 |
freqtrade/freqtrade | python | 11,511 | How to Enable the Backtesting Option in FreqUI When Running Freqtrade via Docker on Windows 10? | Hello Freqtrade community,
Could I kindly ask a beginner question? I’m running Freqtrade using Docker on Windows 10 and noticed in the documentation that FreqUI includes backtesting functionality. However, when I access FreqUI, I only see the options "Trade," "Dashboard," and "Chart Logs." Could you please guide me on how to enable or access the backtesting feature in FreqUI?
Any advice would be greatly appreciated! Thank you in advance for your help!
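FreqUI only exposes the Backtest section when the bot runs in webserver mode rather than trade mode, which in a Docker setup means changing the container's command. A sketch of the relevant `docker-compose.yml` fragment (image tag and paths are the defaults and may differ in your setup):

```yaml
services:
  freqtrade:
    image: freqtradeorg/freqtrade:stable
    # "webserver" instead of "trade": the API server starts without live
    # trading, and FreqUI then shows the Backtest section.
    command: >
      webserver
      --config /freqtrade/user_data/config.json
```

Switch the command back to `trade` for dry-run or live operation; only one mode runs per container.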
OS: Windows 10
Installation: Docker-based Freqtrade | closed | 2025-03-15T19:34:16Z | 2025-03-15T23:57:40Z | https://github.com/freqtrade/freqtrade/issues/11511 | [
"Question"
] | ray147291617 | 2 |
holoviz/panel | matplotlib | 7,771 | Feature: A lighter weight pn.Card | With the advent of LLMs, I see a lot of collapsible divs, e.g.
Claude:
<img width="748" alt="Image" src="https://github.com/user-attachments/assets/4721460a-6c04-4fe4-a7a2-cf8547848009" />
Cursor:
<img width="395" alt="Image" src="https://github.com/user-attachments/assets/965cd583-eedb-49a9-bbcd-0603c3d3ee34" />
These are very thin and do not occupy a lot of space, and I'd like to propose something like `Details`

From:
https://blog.holoviz.org/posts/reactivehtml/index.html
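A plain HTML `<details>`/`<summary>` pair already renders as exactly this kind of thin collapsible row, so a minimal stand-in is possible today (labels below are illustrative):

```python
def details_html(summary, body):
    """Markup for a thin, chat-style collapsible row.

    <details>/<summary> is plain HTML with built-in expand/collapse, so it can
    be dropped into `pn.pane.HTML` as-is; the labels here are illustrative.
    """
    return (
        "<details>"
        f"<summary>{summary}</summary>"
        f"<div>{body}</div>"
        "</details>"
    )
```

e.g. `pn.pane.HTML(details_html("Analyzed the data", "..."), sizing_mode="stretch_width")` renders a row roughly like the Claude/Cursor screenshots, though without Panel's Card styling hooks.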
I imagine this thinner collapsible `Details` could also be usefully nested inside `pn.Card`, e.g. nesting the following snippet:
<img width="620" alt="Image" src="https://github.com/user-attachments/assets/cc627d86-23a2-486d-bd03-073ecafc965d" /> | open | 2025-03-11T18:38:03Z | 2025-03-13T11:24:36Z | https://github.com/holoviz/panel/issues/7771 | [] | ahuang11 | 0 |
localstack/localstack | python | 12,062 | bug: EventBridge event target pointing on API destination is not expanding header_parameters | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I have EventBus event target configured similarly to the below terraform:
```
resource "aws_cloudwatch_event_api_destination" "http_destination" {
# .......
}
resource "aws_cloudwatch_event_target" "test_target" {
# Irrelevant configuration omitted
arn = aws_cloudwatch_event_api_destination.http_destination.arn
http_target {
header_parameters = {
# Below are not substituted correctly in localstack setup but working correctly on AWS
"X-Message-ID" = "$.id"
}
}
}
```
When a message is delivered to the configured API destination, it has an HTTP header containing: `x-message-id: $.id`
### Expected Behavior
When a message is delivered to the configured API destination, it has an HTTP header similar to: `x-message-id: dde026d1-38e1-4bef-c402-3f2bc6b856bd`
This behavior has been verified against AWS.
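To make the expected substitution concrete, here is a toy sketch (not LocalStack's implementation) of what AWS does with `$.`-prefixed header parameters; it only handles top-level fields, which is enough for the `x-message-id` case:

```python
def resolve_header_parameters(headers, event):
    """Toy sketch of the JSON-path substitution EventBridge applies.

    Real EventBridge supports full JSON paths into the event; this simplified
    version only resolves top-level `$.field` references, which is enough to
    show the expected `x-message-id` behavior from the report.
    """
    resolved = {}
    for name, value in headers.items():
        if value.startswith("$."):
            resolved[name] = str(event[value[2:]])  # "$.id" -> event["id"]
        else:
            resolved[name] = value                  # static values pass through
    return resolved
```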
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Deploy terraform mentioned above.
### Environment
```markdown
- OS: OSX 15.1.1 (24B91)
- LocalStack:
LocalStack version: 4.0.4.dev26
LocalStack Docker image sha: sha256:f4819f09a5a15b10c91a88026d14bc0d11287bf182676090d746541468fccb1e
LocalStack build date: 2024-12-10
LocalStack build git hash: 37a56a501
```
### Anything else?
_No response_ | open | 2024-12-22T19:31:12Z | 2025-01-03T15:26:06Z | https://github.com/localstack/localstack/issues/12062 | [
"type: bug",
"aws:events",
"status: backlog"
] | gemyago | 1 |
google-deepmind/graph_nets | tensorflow | 59 | Cannot interpret feed_dict as Tensor | Hi! I'm trying to run a model that predicts node attributes based on global and edge inputs.
I've been largely following the shortest_path.ipynb demo to write my code, and my code at the moment looks as follows (happy to include more if need be!):
```python
# train_input, train_target, test_input etc. are all lists containing nxgraphs
train_input_ph, train_target_ph = create_placeholders(train_input, train_target)
test_input_ph, test_target_ph = create_placeholders(test_input, test_target)
output_train_graphs = graph_net_module(train_input_ph)
output_test_graphs = graph_net_module(test_input_ph)
loss_train = create_loss_ops(train_target_ph, output_train_graphs)
loss_test = create_loss_ops(test_target_ph, output_test_graphs)
....
train_input_ph, train_target_ph, output_train_graphs, output_test_graphs = make_all_runnable_in_session(train_input_ph, train_target_ph, output_train_graphs, output_test_graphs)
```
In the running training section of code, I then have:
```python
for iteration in range(last_iteration, num_training_iterations):
last_iteration = iteration
train_values = sess.run({
"step": step_op,
"target": train_target_ph,
"train_loss": loss_train,
"outputs": output_train_graphs
},
feed_dict={train_input_ph: gn.utils_np.networkxs_to_graphs_tuple(train_input),
train_target_ph: gn.utils_np.networkxs_to_graphs_tuple(train_target)}
)
```
However, when I try to run the second set of code, I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1091 subfeed_t = self.graph.as_graph_element(
-> 1092 subfeed, allow_tensor=True, allow_operation=False)
1093 except Exception as e:
/miniconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3477 with self._lock:
-> 3478 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3479
/miniconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3566 raise TypeError("Can not convert a %s into a %s." % (type(obj).__name__,
-> 3567 types_str))
3568
TypeError: Can not convert a Operation into a Tensor.
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-1103-fddee2f34548> in <module>
16 },
17 feed_dict={train_input_ph: gn.utils_np.networkxs_to_graphs_tuple(train_input),
---> 18 train_target_ph: gn.utils_np.networkxs_to_graphs_tuple(train_target)}
19 )
20 the_time = time.time()
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
927 try:
928 result = self._run(None, fetches, feed_dict, options_ptr,
--> 929 run_metadata_ptr)
930 if run_metadata:
931 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1093 except Exception as e:
1094 raise TypeError(
-> 1095 'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
1096
1097 if isinstance(subfeed_val, ops.Tensor):
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a Operation into a Tensor.
```
I notice this is a similar thing to [#24](https://github.com/deepmind/graph_nets/issues/24) but when I tried the solution there of reducing make_all_runnable_in_session to only act on output_train_graphs and output_test_graphs, I get the following error instead:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1091 subfeed_t = self.graph.as_graph_element(
-> 1092 subfeed, allow_tensor=True, allow_operation=False)
1093 except Exception as e:
/miniconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in as_graph_element(self, obj, allow_tensor, allow_operation)
3477 with self._lock:
-> 3478 return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
3479
/miniconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
3566 raise TypeError("Can not convert a %s into a %s." % (type(obj).__name__,
-> 3567 types_str))
3568
TypeError: Can not convert a NoneType into a Tensor.
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-1106-fddee2f34548> in <module>
16 },
17 feed_dict={train_input_ph: gn.utils_np.networkxs_to_graphs_tuple(train_input),
---> 18 train_target_ph: gn.utils_np.networkxs_to_graphs_tuple(train_target)}
19 )
20 the_time = time.time()
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
927 try:
928 result = self._run(None, fetches, feed_dict, options_ptr,
--> 929 run_metadata_ptr)
930 if run_metadata:
931 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/miniconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1093 except Exception as e:
1094 raise TypeError(
-> 1095 'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
1096
1097 if isinstance(subfeed_val, ops.Tensor):
TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a NoneType into a Tensor.
```
What can I do with the feed_dict and session variables to make this all work? | closed | 2019-04-04T15:33:53Z | 2019-04-05T13:28:22Z | https://github.com/google-deepmind/graph_nets/issues/59 | [] | jmcs100 | 6 |
mitmproxy/pdoc | api | 268 | Get line number of classes and functions... | I implemented a README generator using pdoc here: https://github.com/boxine/bx_py_utils/pull/76
Example result is currently: https://github.com/boxine/bx_py_utils/blob/auto-doc/README.md
The idea is to add links to the github code view page, e.g.:
https://github.com/boxine/bx_py_utils/blob/auto-doc/bx_py_utils/auto_doc.py#L18
But I can't find a way to get the line number of the functions and classes from pdoc.
In short, I do:
```
from pdoc import extract
from pdoc.doc import Module

module_obj = extract.load_module(module_name)
pdoc_module = Module(module_obj)
for item in pdoc_module.classes:
    # ...
for item in pdoc_module.functions:
    # ...
```
In the end, I process a list of `pdoc.doc.Class` and `pdoc.doc.Function` instances.
Is it possible to get the line numbers of these instances?
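As a workaround that needs no pdoc support at all: `pdoc.doc.Class` and `pdoc.doc.Function` wrap plain Python objects, and for any such object the standard library can report the source line directly (a sketch; reaching the wrapped object from the pdoc instance is left to your code):

```python
import inspect

def first_line_number(obj):
    """Return the 1-based line where `obj`'s source starts.

    `obj` is the plain Python function/class (e.g. the object a pdoc doc
    instance wraps, or the same object imported directly); only the standard
    library is used here.
    """
    _, lineno = inspect.getsourcelines(obj)
    return lineno
```

Combined with `inspect.getsourcefile(obj)` for the relative path, a GitHub link is then just `f"{base_url}/{path}#L{first_line_number(obj)}"`.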
| closed | 2021-05-31T10:18:28Z | 2021-06-05T10:46:45Z | https://github.com/mitmproxy/pdoc/issues/268 | [
"enhancement"
] | jedie | 10 |
matplotlib/mplfinance | matplotlib | 203 | Bug Report: Can't display Chinese character even matplotlib can work with Chinese | Respect,
Here is a user from China. I met a problem when I updated to the latest version of mplfinance: I can't show Chinese characters anymore, even though matplotlib works with Chinese characters well.
Thank you very much | closed | 2020-07-02T08:03:38Z | 2020-08-09T18:42:43Z | https://github.com/matplotlib/mplfinance/issues/203 | [
"bug"
] | xuelangqingkong | 8 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,878 | solved | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
WebUI refuses to install torch in many restarts
### Steps to reproduce the problem
1. Start Web UI for first time
2. Wait
### What should have happened?
WebUI should install torch
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Can't generate. Another bug pops up (specs are compatible).
### Console logs
```Shell
Creating venv in directory V:\stable-diffusion-webui\venv using python "C:\Users\USER\AppData\Local\Programs\Python\Python313\python.exe"
Requirement already satisfied: pip in v:\stable-diffusion-webui\venv\lib\site-packages (24.3.1)
Collecting pip
Downloading pip-25.0.1-py3-none-any.whl.metadata (3.7 kB)
Downloading pip-25.0.1-py3-none-any.whl (1.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 3.1 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 24.3.1
Uninstalling pip-24.3.1:
Successfully uninstalled pip-24.3.1
Successfully installed pip-25.0.1
venv "V:\stable-diffusion-webui\venv\Scripts\Python.exe"
=============================================================================================================================
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.13.2.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
Use --skip-python-version-check to suppress this warning.
=============================================================================================================================
Python 3.13.2 (tags/v3.13.2:4f8bb39, Feb 4 2025, 15:23:48) [MSC v.1942 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch==2.1.2 (from versions: 2.6.0)
ERROR: No matching distribution found for torch==2.1.2
Traceback (most recent call last):
File "V:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
~~~~^^
File "V:\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
~~~~~~~~~~~~~~~~~~~^^
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "V:\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
### Additional information
first launch, renamed webui-user to Stable Diffusion, added line git pull, added argument --autolauch | closed | 2025-03-03T15:38:47Z | 2025-03-14T16:30:52Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16878 | [
"bug-report"
] | Sensanko52123 | 2 |
aimhubio/aim | data-visualization | 3,272 | The API reference spec seems missing | ## 📚 Documentation
The documentation sometimes contains links to the full API reference specs, like for instance on the [Manage Runs](https://aimstack.readthedocs.io/en/latest/using/manage_runs.html) page:
> Run class full [spec](https://aimstack.readthedocs.io/en/latest/refs/sdk.html#aim.sdk.run.Run).
Unfortunately this links to a [page](https://aimstack.readthedocs.io/en/latest/refs/sdk.html#aim-sdk-repo-module) that actually doesn't contain anything besides the sub-module structure.
I actually couldn't find any actual API reference at all on the website so far. | open | 2024-12-17T12:09:52Z | 2024-12-17T12:09:52Z | https://github.com/aimhubio/aim/issues/3272 | [
"area / docs"
] | bluenote10 | 0 |
opengeos/leafmap | streamlit | 164 | Add GUI for opening COG and STAC | This feature allows loading raster datasets onto the map without coding. | closed | 2021-12-30T15:03:51Z | 2022-01-11T05:38:21Z | https://github.com/opengeos/leafmap/issues/164 | [
"Feature Request"
] | giswqs | 1 |
wandb/wandb | data-science | 8,937 | 'wandb.tensorboard.unpatch()' missing in documentation | Hey everyone,
I log my experiments with Tensorboard and have multiple experiments per run. Thus, I need to run:
```
wandb.tensorboard.patch(root_logdir=log_directory)
wand.init()
writer = SummaryWriter(log_dir=log_directory)
...
My experiment
...
wandb.finish()
writer.close()
wandb.tensorboard.unpatch()
```
However, `unpatch()` is not documented, I only found it because your code is open-source :) If it is not called, an error occurs:
```
"Tensorboard already patched, remove `sync_tensorboard=True` "
"from `wandb.init` or only call `wandb.tensorboard.patch` once."
```
This is the function: https://github.com/wandb/wandb/blob/0bf2ea43770e7349a57fc776aeb16d3035ce4dbf/wandb/integration/tensorboard/monkeypatch.py#L18
This is the documentation: https://docs.wandb.ai/guides/integrations/tensorboard/. Here, only `patch()` is explained. In addition, the error messsage could be improved by also mentioning that `unpatch()` is a valid option.
Hope that helps! :) | open | 2024-11-22T22:43:09Z | 2024-11-28T14:46:49Z | https://github.com/wandb/wandb/issues/8937 | [
"c:docs"
] | daniel-bogdoll | 5 |
lexiforest/curl_cffi | web-scraping | 464 | Unable to download libcurl-impersonate when installing package (build.py) | Please check the following items before reporting a bug, otherwise it may be closed immediately.
- [ + ] **This is NOT a site-related "bugs"**, e.g. some site blocks me when using curl_cffi,
UNLESS it has been verified that the reason is missing pieces in the impersonation.
- [ + ] A code snippet that can reproduce this bug is provided, even if it's a one-liner.
- [ + ] Version information will be pasted as below.
**Describe the bug**
I think there is no valid archive at [lexiforest/curl-impersonate](https://github.com/lexiforest/curl-impersonate/releases/download/v0.8.2/libcurl-impersonate-v0.8.2.)
https://github.com/lexiforest/curl_cffi/blob/main/scripts/build.py#L114
**To Reproduce**
```bash
pip install --pre curl-cffi
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Versions**
- OS: Windows 10 Lite
- curl_cffi version 0.8.0b7
**Additional context**
```log
Using ./lib64 to store libcurl-impersonate
Downloading libcurl-impersonate-chrome from https://github.com/lexiforest/curl-impersonate/releases/download/v0.8.2/libcurl-impersonate-v0.8.2.x86_64-win32.tar.gz...
Traceback (most recent call last):
File "C:\Users\lord\PycharmProjects\dtekbot\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\lord\PycharmProjects\dtekbot\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\PycharmProjects\dtekbot\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\build_meta.py", line 304, in _get_build_requires
self.run_setup()
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in run_setup
exec(code, locals())
File "<string>", line 16, in <module>
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 145, in setup
_setup_distribution = dist = klass(attrs)
^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\dist.py", line 319, in __init__
_Distribution.__init__(self, dist_attrs)
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 279, in __init__
self.finalize_options()
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\dist.py", line 677, in finalize_options
ep(self)
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\setuptools\dist.py", line 697, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\cffi\setuptools_ext.py", line 216, in cffi_modules
add_cffi_module(dist, cffi_module)
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\cffi\setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "C:\Users\lord\AppData\Local\Temp\pip-build-env-nsfao6eq\overlay\Lib\site-packages\cffi\setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "scripts/build.py", line 114, in <module>
download_libcurl()
File "scripts/build.py", line 69, in download_libcurl
urlretrieve(url, file)
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 240, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 215, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 521, in open
response = meth(req, response)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 630, in http_response
response = self.parent.error(
^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 559, in error
return self._call_chain(*args)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "C:\Users\lord\AppData\Local\Programs\Python\Python312\Lib\urllib\request.py", line 639, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
[end of output]```
| closed | 2024-12-17T12:10:38Z | 2024-12-17T12:42:24Z | https://github.com/lexiforest/curl_cffi/issues/464 | [
"bug"
] | lrdcxdes | 1 |
pytorch/vision | computer-vision | 8,720 | wrap_dataset_for_transforms_v2 with transforms not working as intended | ### 🐛 Describe the bug
When using the `wrap_dataset_for_transforms_v2` wrapper for `torchvision.datasets` classes it seems that the `transform` being passed during instantiation of the dataset is not utilized properly. The issue was observed the `VOCDetection` dataset:
- when using `RandomCrop`, passing it to `VOCDetection` and wrapping it by `wrap_dataset_for_transforms_v2`, the image gets cropped, but the bounding boxes are unaffected (image in the middle)
- when not passing any transform, but instead applying the transform manually outside of the wrapped dataset, it works as intended (image on the right)
- image on the left is the original without cropping

Code to reproduce:
```python
from random import Random
from torchvision.datasets import (
wrap_dataset_for_transforms_v2,
CocoDetection,
VOCDetection,
Cityscapes
)
from torchvision.transforms import ToTensor
from torchvision.transforms.v2 import Compose, RandomCrop, ToImage, ToDtype
import torch
import matplotlib.pyplot as plt
from torchvision.utils import draw_bounding_boxes
transform1 = ToTensor()
transform2 = Compose([ToImage(), ToDtype(torch.float32, scale=True), RandomCrop((360, 360))])
ds1 = wrap_dataset_for_transforms_v2(VOCDetection("./voc", transform=transform1, download=True))
ds2 = wrap_dataset_for_transforms_v2(VOCDetection("./voc", transform=transform2))
ds3 = wrap_dataset_for_transforms_v2(VOCDetection("./voc"))
fig, ax = plt.subplots(1, 3)
# plot original
sample1 = ds1[0]
img1 = sample1[0]
boxes1 = sample1[1]["boxes"]
img1_w_boxes = draw_bounding_boxes(img1, boxes1, width=3, colors="green")
ax[0].imshow(img1_w_boxes.permute(1,2,0))
# plot wrongly transformed boxes
sample2 = ds2[0]
img2 = sample2[0]
boxes2 = sample2[1]["boxes"]
img2_w_boxes = draw_bounding_boxes(img2, boxes2, width=4, colors="green")
ax[1].imshow(img2_w_boxes.permute(1,2,0))
# properly transform manually
sample3 = ds3[0]
img3 = sample3[0]
boxes3 = sample3[1]["boxes"]
img3, boxes3 = transform2(img3, boxes3) # manual transform
img3_w_boxes = draw_bounding_boxes(img3, boxes3, width=3, colors="green")
ax[2].imshow(img3_w_boxes.permute(1,2,0))
fig.show()
```
### Versions
```bash
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 15:57:01) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.1
[pip3] torchmetrics==1.5.1
[pip3] torchvision==0.20.1
[conda] Could not collect
``` | closed | 2024-11-11T12:54:52Z | 2024-12-23T09:16:41Z | https://github.com/pytorch/vision/issues/8720 | [] | liopeer | 3 |
zappa/Zappa | flask | 386 | [Migrated] Deleted Resource resulting in `Invalid REST API identifier` | Originally from: https://github.com/Miserlou/Zappa/issues/967 by [yuric](https://github.com/yuric)
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->This is not an issue or a bug but a request for advice. On Amazon API Gateway one of the APIs deployed by Zappa was deleted.
I created a new one with matching name but of course the ID does not match. I get the error `GetRestApi operation: Invalid REST API identifier specified`. I know this is because the API was deleted. How can I fix this without having to `zappa undeploy/deploy`?
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
My initial reaction is that if I find where `zappa/boto` stores the REST API resource identifier it uses on `zappa update`, I can update the respective id manually.
Alternatively, is there a way to `zappa undeploy {stage}` without removing the Custom Domain Name entries from Amazon API Gateway? Is changing the Base Path Mappings sufficient?
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. `zappa deploy`, then manually delete the API from Amazon API Gateway.
2. Try to zappa update and voila.
| closed | 2021-02-20T08:27:46Z | 2022-08-16T00:44:23Z | https://github.com/zappa/Zappa/issues/386 | [] | jneves | 1 |
Teemu/pytest-sugar | pytest | 32 | Possible to convert live video into animated GIF? | I wonder if the cool animated video at http://pivotfinland.com/pytest-sugar/
could be converted to an animated GIF?
Because GitHub, as far as I can tell, won't let us embed a video in a README, but you can embed an animated GIF and that would let the GitHub page have the sexy video.
I think I tried to convert it myself using one or two Web sites and it crapped out (maybe because it's too large?) Maybe it could be done with some desktop tool though?
| closed | 2014-02-13T16:07:09Z | 2014-12-12T22:14:13Z | https://github.com/Teemu/pytest-sugar/issues/32 | [] | msabramo | 2 |
autogluon/autogluon | scikit-learn | 4,832 | [BUG] [timeseries] Covariate regressor creates an empty folder under `AutogluonModels/ag-{TIMESTAMP}CatBoostModel`` | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
When fitting a forecasting model with a `covariate_regressor` inside a `TimeSeriesPredictor`, AutoGluon creates an empty directory under `AutogluonModels/ag-**********ModelName`.
**Expected behavior**
No directory should be created. Even if the regressor must be saved to disk, it should be done in the folder corresponding to the predictor / model.
**To Reproduce**
```python
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
data = TimeSeriesDataFrame.from_path(
"https://autogluon.s3.amazonaws.com/datasets/timeseries/grocery_sales/test.csv",
)
predictor = TimeSeriesPredictor(
prediction_length=8,
target="unit_sales",
known_covariates_names=["scaled_price", "promotion_email", "promotion_homepage"],
path="test_predictor",
verbosity=0,
)
predictor.fit(data, hyperparameters={"Chronos": {"covariate_regressor": "CAT"}}, time_limit=10)
```
**Screenshots / Logs**
Directory structure before fit:
```
└── reproduce.py
```
Directory structure after fit:
```
├── AutogluonModels
│ └── ag-20250123_124514CatBoostModel
├── reproduce.py
└── test_predictor
├── learner.pkl
├── logs
│ └── predictor_log.txt
├── models
│ ├── Chronos[autogluon__chronos-bolt-small]
│ │ ├── W0
│ │ │ └── model.pkl
│ │ ├── model.pkl
│ │ └── utils
│ │ └── oof.pkl
│ └── trainer.pkl
├── predictor.pkl
├── utils
│ └── data
│ └── train.pkl
└── version.txt
```
Note the folder `AutogluonModels/ag-20250123_124514CatBoostModel` that was created. This happened even though the predictor was saved to `test_predictor/`.
**Root cause**
When a tabular regression model is created in
https://github.com/autogluon/autogluon/blob/608a8555d0aa541bea5a9e6d9edb7a1f6354824b/timeseries/src/autogluon/timeseries/regressor.py#L106-L113
The `path` attribute is not set. For this reason, the following code block is triggered inside `AbstractModel.__init__`:
https://github.com/autogluon/autogluon/blob/608a8555d0aa541bea5a9e6d9edb7a1f6354824b/core/src/autogluon/core/models/abstract/abstract_model.py#L115
which results in creation of the directory `AutogluonModels/ag-20250123_124514CatBoostModel`.
**Installed Versions**
v1.2.0 | closed | 2025-01-23T12:55:15Z | 2025-01-28T19:52:14Z | https://github.com/autogluon/autogluon/issues/4832 | [
"bug",
"module: timeseries",
"module: core"
] | shchur | 0 |
fugue-project/fugue | pandas | 259 | [FEATURE] Duckdb support | **Describe the solution you'd like**
We should support DuckDB as a sql backend.
| closed | 2021-10-17T22:02:50Z | 2021-10-18T20:37:34Z | https://github.com/fugue-project/fugue/issues/259 | [
"enhancement",
"Fugue SQL"
] | goodwanghan | 0 |
kizniche/Mycodo | automation | 1,308 | Unable to Generate Camera Timelapses | I am attempting to add a time-lapse using the basic function page with a working camera. I'm unable to generate any time-lapse stills now.
1. Images of the time-lapse information with the error are shown below.
2. No pictures are ever generated for the time-lapse. There were time-lapse photos from a different date, indicating it worked at one time.
3. Still images function as expected with the criteria below, both when manually generated from the Camera page and when using a widget.
4. I've attempted to re-install the camera again, and tried all of the different libraries, with no effect. Restarting the front end, back end, and system has no effect.
5. Mycodo has been updated to the latest 8.15.8, but this error also occurred on the last two versions I was running
6. Camera details: Arducam Fisheye Camera, 5MP OV5647 1080P Camera Module
If there are any other logs I can provide to assist with troubleshooting I'm happy to, but I would need a bit of direction on how to do
so.

As a side note, there is a second issue where I can no longer enter a 0 for an indefinite time-lapse. This appears to be new to 8.15.8 as this did not occur until the update.
P.S. As a note, I am unable to run it infinitely without entering 0 for the time-lapse, which I thought was no longer possible based on other error reports, but that is an aside.

| closed | 2023-05-17T20:26:37Z | 2023-08-21T18:25:01Z | https://github.com/kizniche/Mycodo/issues/1308 | [
"bug",
"Fixed and Committed"
] | robocode-LAB | 1 |
biolab/orange3 | data-visualization | 6,435 | ODBC Support | **What's your use case?**
I would like to connect through ODBC to an existing database (MonetDB here)
<!-- Is your request related to a problem, or perhaps a frustration? -->
Well, I can't connect to the said database :)
<!-- Tell us the story that led you to write this request. -->
**What's your proposed solution?**
The SQL Table Widget should allow a connection through ODBC
**Are there any alternative solutions?**
Using Excel to export the data to a file... but it's an alternative like Coca-Cola is an alternative to coffee.
| open | 2023-04-26T19:23:05Z | 2024-09-12T11:49:37Z | https://github.com/biolab/orange3/issues/6435 | [] | simonaubertbd | 3 |
jmcnamara/XlsxWriter | pandas | 212 | Content is unreadable in excel | I have the following code
```
import xlsxwriter

workbook = xlsxwriter.Workbook('test.xls')
worksheet = workbook.add_worksheet()
e_opts = ['test5', 'other']
for i in range(10):
worksheet.write('A%d'%i, 'test1')
worksheet.write('B%d'%i, 'test2')
worksheet.write('C%d'%i, 'test3')
worksheet.write('D%d'%i, 'test4')
worksheet.write('E%d'%i, 'test5')
worksheet.data_validation('E%d'%i, {'validate': 'list',
'source': e_opts})
workbook.close()
```
When trying to open the file in Excel, I get a "content is unreadable" error. After repairing, there is nothing left in the spreadsheet.
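For what it's worth, one likely culprit is an off-by-one in the cell references: A1-style references are 1-based (the first row is `1`), so `'A%d' % i` with `i` starting at `0` targets the invalid cell `A0`. A quick illustration of the off-by-one, in plain Python:

```python
# The loop in the snippet above generates these A1-style references:
refs = ['A%d' % i for i in range(10)]
print(refs[0])  # -> 'A0', which is not a valid Excel cell (rows start at 1)

# Shifting to 1-based rows yields the intended range:
fixed = ['A%d' % (i + 1) for i in range(10)]
print(fixed[0], fixed[-1])  # -> 'A1' 'A10'
```

In xlsxwriter terms that means either `'A%d' % (i + 1)` with A1 notation, or the zero-based numeric forms `worksheet.write(i, 0, 'test1')` and `worksheet.data_validation(i, 4, i, 4, {...})`. Naming the file `test.xlsx` instead of `test.xls` also avoids Excel's extension/format mismatch warning.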
| closed | 2015-01-16T18:24:51Z | 2015-01-16T20:28:04Z | https://github.com/jmcnamara/XlsxWriter/issues/212 | [
"bug"
] | SeanWhipple | 2 |
microsoft/hummingbird | scikit-learn | 287 | Add support for sklearn MLPRegressor | Extend the existing sklearn MLPClassifier to also support MLPRegressor. | closed | 2020-09-03T20:18:56Z | 2020-09-04T04:36:57Z | https://github.com/microsoft/hummingbird/issues/287 | [] | scnakandala | 1 |
Ehco1996/django-sspanel | django | 603 | (1045, "Access denied for user 'root'@'172.18.0.4' (using password: YES)") | **Description of the problem**
(1045, "Access denied for user 'root'@'172.18.0.4' (using password: YES)")
**Project configuration file**
```
#---> service port (nginx)
port=8080
#---> mysql
# MySQL database settings (db.py)
MYSQL_USER=root
MYSQL_PASSWORD=yourpass
# MySQL service settings
# keep the MySQL host commented out by default*
# MYSQL_HOS=mysql
# MySQL service password setting
MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
# just set MYSQL_PASSWORD
#---> site customization (sites.py)
# site domain setting (fill this in correctly, otherwise the subscription feature will stop working):
HOST=http://127.0.0.1
# site secret key
SECRET_KEY=aasdasdas
# whether to enable registration
ALLOW_REGISTER=True
# default theme
# the available options are in THEME_CHOICES in apps/constants.py
DEFAULT_THEME=default
# default encryption/obfuscation protocol
DEFAULT_METHOD=aes-256-cfb
# check-in traffic settings
MIN_CHECKIN_TRAFFIC=10485760
# 10 * 1024 * 1024 (10MB)
MAX_CHECKIN_TRAFFIC=209715200
# 200 * 1024 * 1024 (200MB)
# site title
TITLE=谜之屋
SUBTITLE=秘密的小屋
# user invite rebate ratio
INVITE_PERCENT=0.2
# number of invite codes a user can generate
INVITE_NUM=5
# hint text on the site's invite page
INVITEINFO=邀请码实时更新,如果用完了就没了
# token for some API endpoints
TOKEN=youowntoken
# SHORT_URL_ALPHABET: generate this randomly, and do not repeat characters
DEFAULT_ALPHABET=qwertyuiopasdfghjklzxcvbnm
# FOR SIMPLE UI
SIMPLEUI_HOME_INFO=False
SIMPLEUI_DEFAULT_ICON=False
# whether to enable expiry email notifications for users
EXPIRE_EMAIL_NOTICE=False
#---> email settings (email.py)
# whether to enable the email feature
USE_SMTP=True
EMAIL_USE_SSL=True
EMAIL_HOST=smtp.163.com
EMAIL_PORT=465
EMAIL_HOST_USER=user
EMAIL_HOST_PASSWORD=yourpass
DEFAULT_FROM_EMAIL=user # can be the same as EMAIL_HOST_USER
# FOR mailgun*
# MAILGUN_API_KEY=key
# MAILGUN_SENDER_DOMAIN=domain
#---> Alipay integration (pay.py)*
#USE_ALIPAY = False
#CHECK_PAY_REQ_IP_FROM_CN = False
#ALIPAY_APP_ID = XXXXXXXX
#ALIPAY_APP_PRIVATE_KEY_STRING = "-----BEGIN RSA PRIVATE KEY-----
#-----END RSA PRIVATE KEY-----"
#ALIPAY_PUBLIC_KEY_STRING="-----BEGIN PUBLIC KEY-----
#-----END PUBLIC KEY-----"
ALIPAY_TRADE_INFO="{}元充值码"
#---> other*
# timezone
#TZ = Asia/Shanghai
# mysql host
#MYSQL_HOST = mysql
# redis host
#REDIS_HOST = redis
# production disables debug; development enables debug
#DJANGO_ENV = production
```
**How to reproduce**
```
docker-compose run --rm web python manage.py collectstatic --noinput
```
**Related screenshots/log**
```
Pulling mysql (mysql:5.7)...
5.7: Pulling from library/mysql
ffbb094f4f9e: Pull complete
df186527fc46: Pull complete
fa362a6aa7bd: Pull complete
5af7cb1a200e: Pull complete
949da226cc6d: Pull complete
bce007079ee9: Pull complete
eab9f076e5a3: Pull complete
c7b24c3f27af: Pull complete
6fc26ff6705a: Pull complete
bec5cdb5e7f7: Pull complete
6c1cb25f7525: Pull complete
Digest: sha256:d1cc87a3bd5dc07defc837bc9084f748a130606ff41923f46dec1986e0dc828d
Status: Downloaded newer image for mysql:5.7
Starting redis ... done
Creating mysql ... done
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/site-packages/django_prometheus/db/common.py", line 45, in get_new_connection
return super().get_new_connection(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 234, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/__init__.py", line 130, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 185, in __init__
super().__init__(*args, **kwargs2)
MySQLdb._exceptions.OperationalError: (1045, "Access denied for user 'root'@'172.18.0.4' (using password: YES)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
django.setup()
File "/usr/local/lib/python3.8/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python3.8/site-packages/django/apps/registry.py", line 122, in populate
app_config.ready()
File "/usr/local/lib/python3.8/site-packages/django_prometheus/apps.py", line 23, in ready
ExportMigrations()
File "/usr/local/lib/python3.8/site-packages/django_prometheus/migrations.py", line 52, in ExportMigrations
executor = MigrationExecutor(connections[alias])
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 220, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 77, in applied_migrations
if self.has_table():
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 55, in has_table
with self.connection.cursor() as cursor:
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.8/site-packages/django_prometheus/db/common.py", line 45, in get_new_connection
return super().get_new_connection(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 234, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/__init__.py", line 130, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 185, in __init__
super().__init__(*args, **kwargs2)
django.db.utils.OperationalError: (1045, "Access denied for user 'root'@'172.18.0.4' (using password: YES)")
```
**Other information**
Docker version 20.10.11, build dea9396
docker-compose version 1.24.1, build 4667896b | closed | 2021-12-09T07:32:45Z | 2021-12-28T00:34:30Z | https://github.com/Ehco1996/django-sspanel/issues/603 | [
"bug"
] | taotecode | 5 |
strawberry-graphql/strawberry | asyncio | 3,631 | strawberry.ext.mypy_plugin Pydantic 2.9.0 PydanticModelField.to_argument error missing 'model_strict' and 'is_root_model_root' | Hello!
It seems Pydantic 2.9.0 introduced a breaking change in `PydanticModelField.to_argument`, adding two new required arguments:
https://github.com/pydantic/pydantic/commit/d6df62aaa34c21272cb5fcbcbe3a8b88474732f8
and
https://github.com/pydantic/pydantic/commit/93ced97b00491da4778e0608f2a3be62e64437a8
## Describe the Bug
This is the mypy trace
```
./my-file.py:132: error: INTERNAL ERROR -- Please try using mypy master on GitHub:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 1.11.2
Traceback (most recent call last):
File "mypy/semanal.py", line 7087, in accept
File "mypy/nodes.py", line 1183, in accept
File "mypy/semanal.py", line 1700, in visit_class_def
File "mypy/semanal.py", line 1891, in analyze_class
File "mypy/semanal.py", line 1925, in analyze_class_body_common
File "mypy/semanal.py", line 1996, in apply_class_plugin_hooks
File "/Users/victorbarroncas/code/boostsec-asset-management/.venv/lib/python3.12/site-packages/strawberry/ext/mypy_plugin.py", line 489, in strawberry_pydantic_class_callback
f.to_argument(
TypeError: PydanticModelField.to_argument() missing 2 required positional arguments: 'model_strict' and 'is_root_model_root'
./my-file.py:132: : note: use --pdb to drop into pdb
```
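For illustration, one version-tolerant pattern on the strawberry side would be to supply the new arguments only when the installed pydantic requires them — a sketch (the argument names are taken from the traceback above; the chosen `False` defaults are assumptions, not pydantic's documented semantics):

```python
import inspect

# Sketch of a version-tolerant call the plugin could make. The two extra
# argument names come straight from the traceback; the `False` defaults are
# assumptions, not pydantic's documented behavior.
def to_argument_compat(field, **kwargs):
    params = inspect.signature(field.to_argument).parameters
    for extra, default in (("model_strict", False), ("is_root_model_root", False)):
        if extra in params and extra not in kwargs:
            kwargs[extra] = default
    return field.to_argument(**kwargs)
```

This keeps one call site working across Pydantic 2.8 and 2.9, at the cost of an `inspect.signature` lookup per call.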
## System Information
- Operating system: OSX
- strawberry-graphql 0.240.3
- pydantic 2.9.1
- pydantic-core 2.23.3
- mypy 1.11.2
- mypy-extensions 1.0.0
## Additional Context
Similar issue:
https://github.com/strawberry-graphql/strawberry/issues/3560 | open | 2024-09-13T12:02:43Z | 2025-03-20T15:56:52Z | https://github.com/strawberry-graphql/strawberry/issues/3631 | [
"bug"
] | victor-nb | 0 |
timkpaine/lantern | plotly | 40 | plotly - pie | closed | 2017-10-10T01:33:08Z | 2017-10-19T04:53:44Z | https://github.com/timkpaine/lantern/issues/40 | [
"feature",
"plotly/cufflinks"
] | timkpaine | 0 | |
modelscope/modelscope | nlp | 701 | 加载本地数据集时报错NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. | ```python
from modelscope.msdatasets import MsDataset
from modelscope.utils.constant import DownloadMode
ms_train_dataset = MsDataset.load(
'./data/garbage265',
    subset_name='default', split='train')  # load the training set
```
Running the above code to load a local custom dataset raises the error:
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. | closed | 2023-12-28T08:52:28Z | 2024-06-08T01:49:39Z | https://github.com/modelscope/modelscope/issues/701 | [
"Stale"
] | 1006076811 | 3 |
qubvel-org/segmentation_models.pytorch | computer-vision | 294 | [Feature request] Add an option BatchNorm => Instance norm | I would like to be able to replace BatchNorm with Instance norm in the network at the initialization, including pre-trained backbones. | closed | 2020-12-06T17:48:37Z | 2020-12-07T15:10:36Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/294 | [] | ternaus | 1 |
tflearn/tflearn | data-science | 356 | Is there any way to disable printing loss? | When I use the `fit` function, the loss is printed to the console at each step.
I guess that if I can disable printing it, the learning process will be faster.
However, I don't know how to disable it... (it's possible to disable printing the accuracy, though)
Is there any way?
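Absent a dedicated flag, one generic workaround is to silence stdout around the `fit` call — a plain-Python sketch (note: this is not a tflearn API, and it suppresses everything the call prints, not only the loss line):

```python
import contextlib
import io

# Generic Python workaround (not a tflearn API): suppress stdout for the
# duration of the fit() call, swallowing the per-step progress output.
def fit_quietly(fit_fn, *args, **kwargs):
    with contextlib.redirect_stdout(io.StringIO()):
        return fit_fn(*args, **kwargs)

# hypothetical usage: fit_quietly(model.fit, X, Y, n_epoch=10, show_metric=False)
```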
| closed | 2016-09-23T15:38:16Z | 2016-09-25T14:28:11Z | https://github.com/tflearn/tflearn/issues/356 | [] | y-rok | 2 |
deepset-ai/haystack | pytorch | 8,692 | Document ID doesn't updated upon metadata update | **Describe the bug**
If you assign the `meta` field to a `Document` after initialization, the document's id doesn't get updated.
This is e.g. done in the [PyPDFConverter](https://github.com/deepset-ai/haystack/blob/28ad78c73d6c11c9b77089aba42799508178a2fa/haystack/components/converters/pypdf.py#L225).
Documents having the same ID although they have different metadata leads to issues with document stores and duplicate policy `OVERWRITE` as all documents end up as the same document then and even overwrite each other.
**Error message**
Error that was thrown (if available)
**Expected behavior**
The ID should update itself if the metadata is changed. Same applies to the other properties.
**Additional context**
Ideally we find a solution that the ID is automatically updated but also can be overridden manually?
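For illustration, the expected behavior amounts to a content-addressed id that also covers the metadata — a minimal sketch (the hashing scheme here is hypothetical, not Haystack's actual one):

```python
import hashlib

# Hypothetical sketch only — not Haystack's actual id scheme. A
# content-addressed id that covers both content and metadata:
def doc_id(content, meta):
    h = hashlib.sha256()
    h.update(repr(content).encode("utf-8"))
    h.update(repr(sorted(meta.items())).encode("utf-8"))
    return h.hexdigest()

old_id = doc_id("", {})
new_id = doc_id("", {"test": 10})
print(new_id != old_id)  # -> True: changing meta changes the id
```

Sorting the metadata items keeps the id stable under dict ordering, which matters for the `OVERWRITE` duplicate policy described above.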
**To Reproduce**
```python
def test_set_meta_afterwards():
doc = Document()
old_id = doc.id
doc.meta = {"test": 10}
assert doc.meta == {"test": 10}
assert doc.id != old_id
```
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
| closed | 2025-01-09T12:23:59Z | 2025-02-13T09:01:32Z | https://github.com/deepset-ai/haystack/issues/8692 | [
"P3"
] | wochinge | 2 |
NullArray/AutoSploit | automation | 634 | Unhandled Exception (a2bc5a14e) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-parrot1-20t-amd64-x86_64-with-Parrot-4.6-stable`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/lnx-crew/3xploit/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/lnx-crew/3xploit/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
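For reference, the traceback bottoms out in a simple typo: `Except` is not a built-in name, so the `except` clause itself raises `NameError` — the built-in base class is `Exception`. A minimal sketch of what `load_exploits` in `lib/jsonize.py` presumably intends (the file handling and signature here are simplified assumptions):

```python
import json

# `Except` is not a Python name; the built-in base class is `Exception`.
# Simplified sketch of the intended fallback-to-empty-list behavior:
def load_exploits(path):
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:  # the failing line in lib/jsonize.py reads `except Except:`
        return []
```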
Metasploit launched: `False`
| closed | 2019-04-07T00:24:37Z | 2019-04-18T17:31:35Z | https://github.com/NullArray/AutoSploit/issues/634 | [] | AutosploitReporter | 0 |
raphaelvallat/pingouin | pandas | 408 | Remove call to sns.despine in paired_plot | ### Discussed in https://github.com/raphaelvallat/pingouin/discussions/407
<div type='discussions-op-text'>
<sup>Originally posted by **timobage** February 17, 2024</sup>
Hi,
how can I change the y-axis limits of the plot_paired() function?
If I use ax.set_ylim() it only shrinks/expands the existing plots but the limits of the underlying point and boxplot stay the same. I tried to give several different kwargs arguments but nothing worked.
Thank you for your help!</div> | closed | 2024-02-20T19:43:00Z | 2024-03-02T12:05:59Z | https://github.com/raphaelvallat/pingouin/issues/408 | [
"bug :boom:"
] | raphaelvallat | 0 |
modin-project/modin | data-science | 7,391 | whats the fastest way to add a new column that already has the same partitions (probably)? | There are a bunch of ways to add a column to a dataframe..
what is the fastest with modin?
say we get a new column by applying a function to another one:
```py
new_c = df['column'].apply(lambda x: abs(x))
```
the resulting series should have the same partitions as the dataframe, right?
we can use...
merge, or concat, or just do
```py
df['new_col'] = new_c
```
which is the most readable IMO
and probably a few other ways
but what is the fastest?
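For reference, a few of the ways mentioned above side by side — sketched with plain pandas as a stand-in for `modin.pandas`, since the user-facing API is the same (which one is fastest on partitioned data is exactly the open question):

```python
import pandas as pd  # stand-in for modin.pandas; the user-facing API matches

df = pd.DataFrame({"column": [-1, 2, -3]})
new_c = df["column"].apply(abs)

# 1) plain assignment (the readable option)
df1 = df.copy()
df1["new_col"] = new_c

# 2) concat along the column axis
df2 = pd.concat([df, new_c.rename("new_col")], axis=1)

# 3) assign, which returns a new frame
df3 = df.assign(new_col=new_c)

print(df1["new_col"].tolist())  # -> [1, 2, 3]
```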
Thank you! | open | 2024-09-06T16:26:28Z | 2024-09-06T16:58:22Z | https://github.com/modin-project/modin/issues/7391 | [
"question ❓",
"Triage 🩹"
] | Liquidmasl | 1 |
thtrieu/darkflow | tensorflow | 1,197 | How to put a darkflow model into Android Studio | I created an object detection model that I trained through darkflow, and I converted it to the tflite format to put into Android Studio.
But when I try to add metadata to this file, the following error occurs:
`ValueError: The number of output tensors (1) should match the number of output tensor metadata (4)`
For reference, the code for adding metadata was based on the below site.
(https://www.tensorflow.org/lite/convert/metadata)
I want to know if there is any other way to add metadata to the darkflow model or if my model or code is wrong :sob:
| open | 2021-05-15T07:27:20Z | 2021-05-15T07:27:20Z | https://github.com/thtrieu/darkflow/issues/1197 | [] | M1nseoPark | 0 |
sinaptik-ai/pandas-ai | data-visualization | 1,343 | Questions about the train function | Thanks for the great work.
I have several questions about the instruction `train` function:
1. May I know what the vector DB does during training? Does it act as RAG?
2. After training, is there any way to save the trained model or related artifacts? Or does it require calling the train function with the prompt every time?
3. For the cache, it seems to generate a new one when I restart the kernel. Does it store the previous prompts and responses?
Thank you very much. | closed | 2024-08-30T02:21:56Z | 2025-02-11T16:00:11Z | https://github.com/sinaptik-ai/pandas-ai/issues/1343 | [] | mrgreen3325 | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 394 | Is there any code for segmentation networks to read the COCO dataset? | I'd like to ask: is there any code for the segmentation networks to read the COCO dataset? | closed | 2021-11-06T09:25:17Z | 2021-11-13T07:25:25Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/394 | [] | ily666666 | 1 |
StackStorm/st2 | automation | 5,087 | NO_PROXY environment variable is not considered while installing the pack dependencies | ## SUMMARY
I have the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables set in my setup, and I also maintain an internal PyPI hosting most of the Python dependencies. Now, I want the `st2 pack install file:///path/to/pack/folder` command to download Python dependencies from this internal PyPI instead of the official PyPI, so I added the internal PyPI URL to the NO_PROXY environment variable. But `st2 pack install` always tries to fetch the dependencies via the HTTP(S)_PROXY only.
## STACKSTORM VERSION
st2 3.1.0, on Python 3.6.9
## Steps to reproduce the problem
Set HTTP_PROXY and HTTPS_PROXY environment variables and make sure NO_PROXY environment variable contain the internal PyPI URL. Now, install a pack using `st2 pack install` and it will still try to fetch the dependencies via the proxy instead of avoiding it.
## Expected Results
Ideally during the pack installation, the dependencies should be pulled directly from the internal PyPI avoiding the HTTP(S)_PROXY as the internal PyPI URL is part of the NO_PROXY environment variable.
## Actual Results
Downloads the dependencies using the HTTP(S)_PROXY or FAILS if the HTTP(s)_PROXY URL is not reachable.
What happened? What output did you get?
I made the proxies set via HTTP(S)_PROXY unreachable, and running `st2 pack install file:///path/to/pack/folder` failed while installing the dependencies.
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden',))': /internal/blobs/six/
Ideally it should never check for the HTTP(S)_PROXY and pull the dependencies from the internal PyPI.
## Possible bug:
I see that we always set `--proxy` flag on the `pip install` command if the HTTP(S)_PROXY env. variable is set or the `http_proxy` or `https_proxy` are set in the config file as described here: https://docs.stackstorm.com/packs.html#installing-packs-from-behind-a-proxy and NO_PROXY is not checked.
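For illustration, a rough sketch of the kind of NO_PROXY check that could gate the `--proxy` flag — the helper name, matching rules, and env handling here are all hypothetical, and real NO_PROXY parsing has more corner cases (ports, CIDRs, `*`):

```python
import os
from urllib.parse import urlparse

# Hypothetical guard before appending `--proxy` to the pip command line.
def proxy_args_for(index_url, env=None):
    env = os.environ if env is None else env
    proxy = (env.get("https_proxy") or env.get("HTTPS_PROXY")
             or env.get("http_proxy") or env.get("HTTP_PROXY"))
    if not proxy:
        return []
    host = urlparse(index_url).hostname or ""
    exempt = [h.strip() for h in env.get("NO_PROXY", env.get("no_proxy", "")).split(",") if h.strip()]
    if any(host == h or host.endswith("." + h.lstrip(".")) for h in exempt):
        return []  # index host is exempted: let pip connect directly
    return ["--proxy", proxy]
```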
https://github.com/StackStorm/st2/blob/master/st2common/st2common/util/virtualenvs.py#L244 | open | 2020-11-20T11:54:24Z | 2025-02-14T07:48:47Z | https://github.com/StackStorm/st2/issues/5087 | [
"bug"
] | RaviTezu | 9 |
plotly/dash | flask | 3,008 | Dash slow pattern-matching performance on many elements | Hello,
First, thank you for your amazing work on Dash.
**Describe your context**
```
dash 2.17.1
dash_ag_grid 31.2.0
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
jupyter-dash 0.4.2
```
Browser: Firefox / Chrome (not related) on MacOS 14 (not related either).
**Describe the bug**
I recently came across a performance limitation of Dash that really made my app slow, and at some point, unusable.
The idea is that I needed to have N checkboxes that my user can check or uncheck. N can sometimes be 20, but it could actually go up to 500 or 1000. That’s where I hit a performance issue with Dash.
**Expected behavior**
I expect to have N > 1.000 checkboxes rendered in my webpage with no struggle. A similar Vanilla JavaScript code can handle more than 100.000 checkboxes before the page starts to get slow. I imagine that using React comes with some performance downgrade, but it seems to be optimizable.
**Screenshots**
Here is a visual example for the problem (for 500) ; you can see a lag between my clicks and checkboxes actually being checked:

**Reproducible code**
We create 500 checkboxes and get their value with one callback.
```python
import uuid
import random
import dash
from dash import dcc, html
from dash.dependencies import Input, Output, ALL
app = dash.Dash(__name__)
# We create 500 checkboxes
checkboxes = [
dcc.Checklist(
options=[{"label": f"Checkbox {i+1}", "value": "checked"}],
id={"type": "checkbox", "group": random.choice(["a", "b", "c"]), "index": str(uuid.uuid4())},
inline=True
) for i in range(500)
]
app.layout = html.Div([
html.Div(id="output", style={"position": "fixed", "top": 0, "left": "200px"}),
html.Div(checkboxes)
])
# Client-side callback to illustrate that network is not the bottleneck here
app.clientside_callback(
"""
function(checkbox_values, checkbox_ids) {
const groupCounts = { a: 0, b: 0, c: 0 };
checkbox_values.forEach((value, i) => {
if (value && value.includes('checked')) {
const group = checkbox_ids[i].group;
groupCounts[group]++;
}
});
const totalChecked = groupCounts.a + groupCounts.b + groupCounts.c;
return `${totalChecked} checkboxes checked (Group A: ${groupCounts.a}, Group B: ${groupCounts.b}, Group C: ${groupCounts.c})`;
}
""",
Output("output", "children"),
Input({"type": "checkbox", "group": ALL, "index": ALL}, "value"),
Input({"type": "checkbox", "group": ALL, "index": ALL}, "id"),
)
# Run the app
if __name__ == "__main__":
app.run_server(debug=True, port=8060)
```
The issue is not with the callback itself, but occurs prior to the callback. It just takes some time to gather all the checkboxes, and then fires the callback (which runs fast).
Here are some things I noticed:
- performance drops after 100s of elements.
- it’s still slow even if I get rid of the groups and replace string indexes with integers
- CPU is between 90-100% and at some point the browser may ask to stop the script
- the problem is not related to the browser. I tested on Chrome and Firefox, recent versions.
- a similar vanilla JavaScript page (with pattern-matching-like IDs being parsed to JSON) can handle more than 100,000 checkboxes without any performance drop…
Here is the performance report (Firefox). I clicked 4 times, the CPU was around 90% all the time, making the page unresponsive:

One of the bottlenecks seems to be `getReadyCallbacks` from dash_renderer.
I hope this can be solved!
| open | 2024-09-19T08:25:15Z | 2024-12-21T11:10:35Z | https://github.com/plotly/dash/issues/3008 | [
"performance",
"bug",
"P3"
] | Spriteware | 4 |
zappa/Zappa | flask | 708 | [Migrated] `certify` custom domain doesn't take effect until `update` is called | Originally from: https://github.com/Miserlou/Zappa/issues/1796 by [pickledish](https://github.com/pickledish)
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
Not really a bug, just something that took a lot of time for me to figure out -- I set up the ACM certificate with my custom domain and updated the `zappa-settings.json` with the ARN, and called `zappa certify` with success, but the cloudfront URL (which the custom domain was CNAMEd to) didn't work until I updated the stage.
## Expected Behavior
<!--- Tell us what should happen -->
Would maybe be a good idea to have a little
```
Please call "zappa update" to see your custom URL take effect
```
prompt at the end of the `zappa certify` so people don't waste as much time as I did!
## Actual Behavior
<!--- Tell us what happens instead -->
I wasted a few hours trying to figure out what Cloudfront is and why the URL wasn't working :P
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
(see expected behavior)
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47.1
* Operating System and Python version: OSX El Capitan, with Python 3.6
(PS, thanks for the work you guys do on this project, it's pretty amazing imo)
| closed | 2021-02-20T12:40:55Z | 2024-04-13T18:14:20Z | https://github.com/zappa/Zappa/issues/708 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
tensorflow/tensor2tensor | machine-learning | 1,472 | Dependency mismatch for Cloud ML Engine runtime 1.12 | ### Description
TensorFlow Probability has just released 0.6.0 to PyPI, which expects TF 1.13.1. The current Runtime Version on Cloud ML Engine comes with TF 1.12.0. Without either:
1. upgrading the default tensorflow version to 1.13.1 on ML Engine, or
2. downgrading the version of tensorflow-probability to 0.5.0,
jobs cannot currently be submitted to ML Engine.
FWIW, I already tried specifying both of these in the requirements.txt file of my t2t_usr_dir package with no change in error output, so I'm assuming this requirements.txt is superseded by the versions imposed by PyPI.
### Environment information
Cloud ML Engine Runtime Version 1.12
### For bugs: reproduction and error logs
#### Steps to reproduce:
Submit any `t2t-trainer` job to Cloud ML Engine.
#### Error logs:
```
The replica master 0 exited with a non-zero status of 1.
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/.local/lib/python3.5/site-packages/tensor2tensor/bin/t2t_trainer.py", line 24, in <module>
from tensor2tensor import models # pylint: disable=unused-import
File "/root/.local/lib/python3.5/site-packages/tensor2tensor/models/__init__.py", line 25, in <module>
from tensor2tensor.layers import modalities # pylint: disable=g-import-not-at-top
File "/root/.local/lib/python3.5/site-packages/tensor2tensor/layers/modalities.py", line 22, in <module>
from tensor2tensor.layers import common_attention
File "/root/.local/lib/python3.5/site-packages/tensor2tensor/layers/common_attention.py", line 31, in <module>
from tensor2tensor.layers import common_layers
File "/root/.local/lib/python3.5/site-packages/tensor2tensor/layers/common_layers.py", line 30, in <module>
import tensorflow_probability as tfp
File "/root/.local/lib/python3.5/site-packages/tensorflow_probability/__init__.py", line 68, in <module>
_ensure_tf_install()
File "/root/.local/lib/python3.5/site-packages/tensorflow_probability/__init__.py", line 65, in _ensure_tf_install
present=tf.__version__))
ImportError: This version of TensorFlow Probability requires TensorFlow version >= 1.13.1; Detected an installation of version 1.12.0. Please upgrade TensorFlow to proceed.
```
| closed | 2019-02-26T23:21:49Z | 2020-12-14T07:46:21Z | https://github.com/tensorflow/tensor2tensor/issues/1472 | [] | jvmncs | 5 |
biolab/orange3 | pandas | 6,250 | File widget: URL is lost when saving and reopening OWS file |
**What's wrong?**
If I specify a (specific type of?) URL referring to a data file in the File widget, then save the workflow and reopen it, the URL field is empty and the widget produces an error "No file selected"
**How can we reproduce the problem?**
Place a File widget in the canvas, double-click on it, select URL and enter https://drive.google.com/uc?export=download&id=1dyrDfu2yow5ydnbOkgiJW_NUMPnKEpu2 and press Reload. The preview will now correctly show the variables in the file. Now close the widget, save the workflow and reopen it. The URL has gone and the widget produces an error "No file selected".
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Mac OS 13.0.1 on Silicon
- Orange version: 3.33.0
- How you installed Orange: from DMG
| closed | 2022-12-09T11:34:56Z | 2023-01-20T07:41:15Z | https://github.com/biolab/orange3/issues/6250 | [
"bug"
] | wvdvegte | 1 |
keras-team/keras | pytorch | 20,706 | Loaded Keras Model Throws Error While Predicting (Likely Issues with Masking) | I am currently developing and testing a RNN that relies upon a large amount of data for training, and so have attempted to separate my training and testing files. I have one file where I create, train, and save a tensorflow.keras model to a file 'model.keras' I then load this model in another file and predict some values, but get the following error: Failed to convert elements of {'class_name': '__tensor__', 'config': {'dtype': 'float64', 'value': [0.0, 0.0, 0.0, 0.0]}} to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes
By the way, I have tried running model.predict with this exact same data in the file where I train the model, and it works smoothly. The model loading must be the problem, not the data used to predict.
This mysterious float64 tensor is the value I passed into the masking layer. I don't understand why keras is unable to recognize this JSON object as a Tensor and apply the masking operation as such. I have included snippets of my code below, edited for clarity and brevity:
model_generation.py:
```
# Create model
model = tf.keras.Sequential([
tf.keras.layers.Input((352, 4)),
tf.keras.layers.Masking(mask_value=tf.convert_to_tensor(np.array([0.0, 0.0, 0.0, 0.0]))),
tf.keras.layers.GRU(50, return_sequences=True, activation='tanh'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GRU(50,activation='tanh'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(units=1, activation='sigmoid')])
# Compile Model...
# Train Model...
model.save('model.keras')
model.predict(data) # Line works here
```
model_testing.py
```
model = tf.keras.models.load_model('model.keras')
model.predict(data) # this line generates the error
```
I have tried to re-load the model in the `model_generation.py` file and I get the exact same issue. | closed | 2024-12-31T20:46:49Z | 2025-01-23T21:57:52Z | https://github.com/keras-team/keras/issues/20706 | [
"type:Bug"
] | JoeDoyle12 | 4 |
ets-labs/python-dependency-injector | asyncio | 58 | Add docs for @inject decorator | closed | 2015-05-08T14:48:03Z | 2015-08-05T14:48:10Z | https://github.com/ets-labs/python-dependency-injector/issues/58 | [
"docs"
] | rmk135 | 0 | |
geex-arts/django-jet | django | 87 | how to enable multiselect actions in pages that open in popup windows? | When opening an admin list in a popup window, the column with checkboxes for multiselection does not appear, and the actions combo box does not appear either...
I've done a search and I'm not finding exactly how I can enable this... basically it's related to the GET var `?_popup=1`, but I can't find the files I should change/override to change this...
Any ideas? Thanks!
| closed | 2016-07-11T21:58:02Z | 2016-08-27T12:58:42Z | https://github.com/geex-arts/django-jet/issues/87 | [] | carlosfvieira | 7 |
proplot-dev/proplot | data-visualization | 300 | x axis inverts when using plot with negative y | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
Within a cartesian grid, using plot, the x axis inverts when the y data starts with a negative number.
### Steps to reproduce
```python
import numpy as np
import proplot as pplt

print(pplt.__version__)
fig = pplt.figure(suptitle='test')
ax = fig.subplot(111, title="A-B")
ax.format(xlim=(-2, 2),
          xticks=0.5,
          xloc=('data', -2),
          xlabel='x axis',
          xlabelloc='bottom',
          ylim=(-2, 2),
          yticks=0.5,
          yloc=('data', -2),
          ylabel='y axis')
ax.set_aspect('equal')
ax.scatter(-1, 1, color="b")
ax.text(-1, 1, "A")
ax.scatter(1, -1, color="b")
ax.text(1, -1, "B")
x_set = [-1, 1]
y_set = [1, -1]
ax.plot(x_set, y_set, color="r")
pplt.show()
```
**Expected behavior**:
The above code gives a grid as expected, with the 2 points A and B and a red line between them

**Actual behavior**: [What actually happened]
**However, when changing the x_set/y_set lines to**
x_set = [1, -1]
y_set = [-1, 1]
the x-axis inverts (i.e. the whole grid) starting with x = 2 going to x = -2

### Equivalent steps in matplotlib
With `ax.invert_xaxis()` the grid flips back to the expected presentation.
### Proplot version 0.9.5
| closed | 2021-11-01T19:17:22Z | 2021-11-03T14:39:00Z | https://github.com/proplot-dev/proplot/issues/300 | [
"wontfix",
"documentation"
] | dewinkelwaar | 4 |
falconry/falcon | api | 2,314 | Get rid of `setup.cfg` | `setup.cfg` does not really contribute much value anymore, almost everything can be migrated to `pyproject.toml`.
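For illustration, a minimal PEP 621-style sketch of what the migrated metadata could look like (the field values here are placeholders for illustration, not Falcon's actual configuration):

```toml
[project]
name = "falcon"
description = "The Falcon Web Framework"
readme = "README.rst"
requires-python = ">=3.5"
# version stays dynamic since it is computed at build time
dynamic = ["version"]
```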
We will have to continue augmenting `pyproject.toml` with `setup.py` though, because it is where we programmatically manage our build process. | closed | 2024-08-31T09:06:16Z | 2024-08-31T19:44:16Z | https://github.com/falconry/falcon/issues/2314 | [
"breaking-change",
"maintenance"
] | vytas7 | 0 |
quantmind/pulsar | asyncio | 158 | test suite does not stop with ctrl-C | The reason is that all test classes are loaded in the event loop.
| closed | 2015-09-07T13:14:52Z | 2015-10-02T13:46:45Z | https://github.com/quantmind/pulsar/issues/158 | [
"bug",
"test"
] | lsbardel | 3 |
ultralytics/yolov5 | machine-learning | 13,019 | Issue when try to validate openvino format model | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Validation
### Bug
The following script, to validate a trained yolov5 model, works well:
!python ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/val.py --weights yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/runs/train/exp5/weights/best.pt --imgsz 1280 --data "./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/100KBDD/data.yaml"
To convert to ONNX format I use (works well):
!python ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/export.py --weights ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/runs/train/exp5/weights/best.pt --include onnx --imgsz 736 1280 --data ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/100KBDD/data.yaml --batch-size 1
To validate the ONNX model I used:
!python ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/val.py --weights yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/runs/train/exp5/weights/best.onnx --imgsz 1280 --data "./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/100KBDD/data.yaml" --batch-size 1
Here there is a problem:
```
Traceback (most recent call last):
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 438, in <module>
    main(opt)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 409, in main
    run(**vars(opt))
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\project\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 236, in run
    preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\project\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\project\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\models\common.py", line 666, in forward
    y = self.session.run(self.output_names, {self.session.get_inputs()[0].name: im})
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\project\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
 index: 2 Got: 1280 Expected: 736
Please fix either the inputs/outputs or the model.
```
But val.py doesn't accept the --rect parameter. How do I fix this?
Lastly, I used OpenVINO to quantize my model (NNCF, etc.).
Then when I tried to validate this new int8 model, I get:
!python ./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/val.py --weights yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/runs/train/exp5/weights/int8_openvino_model --imgsz 1280 --data "./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/100KBDD/data.yaml" --batch-size 1
The problem here is different:
```
val: data=./yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/100KBDD/data.yaml, weights=['yolov5_train100kbdd/yolov5s_originsize_300epochs_lr_001/runs/train/exp5/weights/int8_openvino_model'], batch_size=1, imgsz=1280, conf_thres=0.001, iou_thres=0.6, max_det=300, task=val, device=, workers=8, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\runs\val, name=exp, exist_ok=False, half=False, dnn=False
YOLOv5 v7.0-294-gdb125a20 Python-3.10.9 torch-2.2.2+cpu CPU
Loading yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\runs\train\exp5\weights\int8_openvino_model for OpenVINO inference...
Traceback (most recent call last):
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 438, in <module>
    main(opt)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 409, in main
    run(**vars(opt))
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\project\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\val.py", line 165, in run
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
  File "c:\Users\franco\OneDrive\Escritorio\Final_project\yolov5_train100kbdd\yolov5s_originsize_300epochs_lr_001\models\common.py", line 643, in __init__
    if names[0] == "n01440764" and len(names) == 1000: # ImageNet
TypeError: 'NoneType' object is not subscriptable
```
But my data.yaml worked well in the first case, why does it happen?
data.yaml:
#test: 100KBDD/test/images
train: 100KBDD/train/images
val: 100KBDD/valid/images
names:
0: 0
1: 1
2: 2
3: 3
4: 4
5: 5
6: 6
7: 7
nc: 8
#roboflow:
# license: CC BY 4.0
# project: car_part2
# url: https://universe.roboflow.com/carpart2-gj01d/car_part2/dataset/1
# version: 1
# workspace: carpart2-gj01d
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-05-16T17:36:09Z | 2024-06-30T00:24:58Z | https://github.com/ultralytics/yolov5/issues/13019 | [
"bug",
"Stale"
] | FrancoArtale | 4 |
reloadware/reloadium | flask | 98 | Reloadium doesn't work on M1 mac with Python 3.9 | ## Describe the bug
On some M1 macs, reloadium is working fine with Python 3.9. On a few other macs, it is failing to run.
## To Reproduce
Not able to reproduce the bug on all M1 macs.
## Expected behavior
Reloadium should run the application.
## Screenshots
<img width="1280" alt="Screenshot 2023-02-06 at 08 41 19" src="https://user-images.githubusercontent.com/4463796/216875865-f3b49259-5f70-4d67-a56f-90466ab9f44c.png">
## Desktop or remote (please complete the following information):
- OS: Mac
- OS version: 13.0.1
- M1 chip: yes
- Reloadium package version: 0.9.10
- PyCharm plugin version: 0.9.5
- Editor: PyCharm
- Python Version: 3.9.12
- Python Architecture: 64
- Run mode: Debug
| closed | 2023-02-06T03:16:06Z | 2023-02-06T06:36:05Z | https://github.com/reloadware/reloadium/issues/98 | [] | ChillarAnand | 7 |
pandas-dev/pandas | pandas | 60,322 | BUG: Specifying `hour` param, but not year, month, day in pandas.Timestamp() sets hour-value as minutes | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# returns False
pd.Timestamp(2020, 1, 1, hour=1) == pd.Timestamp(year=2020, month=1, day=1, hour=1)
```
### Issue Description
When `year`, `month` and `day` are not explicitly passed as keyword args but `hour` is, the value provided for `hour` ends up set as the minutes instead. The values for year, month and day are still correct, though.
### Expected Behavior
I'd expect the value provided in the `hour` param to be set to the hour, and not the minutes.
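A workaround that sidesteps the ambiguity (my own suggestion, not something from the pandas docs) is to pass every datetime component as a keyword, so nothing can be interpreted positionally:

```python
import pandas as pd

# All components passed as keywords; nothing is interpreted positionally.
ts = pd.Timestamp(year=2020, month=1, day=1, hour=1)
print(ts)  # 2020-01-01 01:00:00
```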
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.10
python-bits : 64
OS : Darwin
OS-release : 23.6.0
Version : Darwin Kernel Version 23.6.0: Mon Jul 29 21:13:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6020
machine : x86_64
processor : i386
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.4
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
</details>
| closed | 2024-11-15T11:40:32Z | 2024-11-17T13:41:11Z | https://github.com/pandas-dev/pandas/issues/60322 | [
"Bug",
"Duplicate Report",
"Timestamp"
] | christoffer-hk | 2 |
ml-tooling/opyrator | pydantic | 4 | Finalize docker export capabilities | **Feature description:**
Finalize capabilities to export an opyrator to a Docker image.
The export can be executed via command line:
```bash
opyrator export my_opyrator:hello_world --format=docker my-opyrator-image:latest
```
_💡 The Docker export requires that Docker is installed on your machine._
After the successful export, the Docker image can be run as shown below:
```bash
docker run -p 8080:8080 my-opyrator-image:latest
```
Running your Opyrator within this Docker image has the advantage that only a single port is required to be exposed. The separation between UI and API is done via URL paths: `http://localhost:8080/api` (API); `http://localhost:8080/ui` (UI). The UI is automatically configured to use the API for all function calls.
| closed | 2021-04-19T10:01:47Z | 2023-07-27T14:30:30Z | https://github.com/ml-tooling/opyrator/issues/4 | [
"feature",
"stale"
] | lukasmasuch | 6 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 583 | Email | @blue-fish Hey I have some question regarding the repo, would it be possible to get in touch with you through email? | closed | 2020-10-31T10:10:32Z | 2020-10-31T17:47:57Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/583 | [] | JakubReha | 1 |
flairNLP/flair | nlp | 3,356 | [Question]: How to train an end-to-end Entity Linking model? | ### Question
Hello,
I am interested in implementing an end-to-end Entity Linking model using a custom dataset. I would greatly appreciate any guidance on how to proceed, particularly concerning the required input data format, the training procedure, and the inference process.
Thank you in advance for your time and assistance. | open | 2023-10-25T10:06:57Z | 2024-08-06T09:01:09Z | https://github.com/flairNLP/flair/issues/3356 | [
"question"
] | anna-shopova | 6 |
PeterL1n/BackgroundMattingV2 | computer-vision | 22 | problem running inference_images.py | It seems that there are multiple threads running at the same time, since I got this override question many times:
This is what I see when answering no (the output folder does not exist before)
```
(bgm2) C:\ZeroBox\src\BackgroundMattingV2> python inference_images.py --model-type mattingrefine --model-backbone mobilenetv2 --model-checkpoint PyTorch\pytorch_mobilenetv2.pth --images-src Group15BOriginals --images-bgr Group15BBackground --output-dir output-images --output-type com fgr pha
0%| | 0/1 [00:00<?, ?it/s]Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: n
0%| | 0/1 [00:40<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 872, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\queue.py", line 178, in get
raise Empty
_queue.Empty
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "inference_images.py", line 123, in <module>
for i, (src, bgr) in enumerate(tqdm(dataloader)):
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\tqdm\std.py", line 1171, in __iter__
for obj in iterable:
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
data = self._next_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 1024, in _get_data
success, data = self._try_get_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 885, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 54132) exited unexpectedly
```
If I answer yes (the first yes is normal since the output folder now exists):
```
(bgm2) C:\ZeroBox\src\BackgroundMattingV2> python inference_images.py --model-type mattingrefine --model-backbone mobilenetv2 --model-checkpoint PyTorch\pytorch_mobilenetv2.pth --images-src Group15BOriginals --images-bgr Group15BBackground --output-dir output-images --output-type com fgr pha
Directory output-images already exists. Override? [Y/N]: y
0%| | 0/1 [00:00<?, ?it/s]Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: Directory output-images already exists. Override? [Y/N]: y
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 125, in _main
prepare(preparation_data)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\ZeroBox\src\BackgroundMattingV2\inference_images.py", line 123, in <module>
for i, (src, bgr) in enumerate(tqdm(dataloader)):
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\tqdm\std.py", line 1171, in __iter__
for obj in iterable:
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
return self._get_iterator()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
w.start()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
0%| | 0/1 [00:25<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 872, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\queue.py", line 178, in get
raise Empty
_queue.Empty
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "inference_images.py", line 123, in <module>
for i, (src, bgr) in enumerate(tqdm(dataloader)):
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\tqdm\std.py", line 1171, in __iter__
for obj in iterable:
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
data = self._next_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 1024, in _get_data
success, data = self._try_get_data()
File "C:\Users\jinzi\miniconda3\envs\bgm2\lib\site-packages\torch\utils\data\dataloader.py", line 885, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 45488) exited unexpectedly
```
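For what it's worth, the second traceback itself suggests the standard fix: on Windows, multiprocessing uses spawn, which re-imports the script in each DataLoader worker, so the entry point needs to be guarded. A minimal sketch of the guard (the function name and body are hypothetical, not the actual inference_images.py code); passing `num_workers=0` to the DataLoader is another hedged alternative if the guard alone doesn't help:

```python
import multiprocessing as mp

def main():
    # dataset construction and the "for i, (src, bgr) in enumerate(...)"
    # loop from inference_images.py would go inside this function
    return "done"

if __name__ == "__main__":
    # Required on Windows: without this guard, each spawned DataLoader
    # worker re-imports the script and tries to start workers of its own.
    mp.freeze_support()
    main()
```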
I do have a GPU and I was able to run the other two inference scripts without any issue. | closed | 2020-12-31T03:51:41Z | 2020-12-31T21:29:15Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/22 | [] | jinzishuai | 5 |
horovod/horovod | pytorch | 2,957 | Large Batch Simulation Breaks with Mixed Precision | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.3
3. Horovod version: 0.22.0
7. Python version: 3.7
**Checklist:**
1. Did you search issues to find if somebody asked this question before? yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
When running large-batch simulation with mixed precision, the accumulated gradients cannot recover from a NaN gradient. As a result, training will get stuck.
The reason is the use of the following code to reset the aggregated grads:
```python
def _clear_vars(self):
    self.counter.assign(0)
    for idx in self.locally_aggregated_grads.keys():
        self.locally_aggregated_grads[idx].assign(
            -1 * self.locally_aggregated_grads[idx])
```
I have fixed this by changing the code to:
```python
def _clear_vars(self):
    self.counter.assign(0)
    for idx in self.locally_aggregated_grads.keys():
        self.locally_aggregated_grads[idx].assign(
            tf.zeros_like(self.locally_aggregated_grads[idx]))
```
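The failure mode can be reproduced with plain Python floats: negating a NaN leaves it NaN, so the `-1 * grads` reset can never recover once a NaN has been aggregated, while a fresh zero (what `tf.zeros_like` produces) always does:

```python
import math

aggregated = float("nan")      # a NaN gradient was accumulated

aggregated = -1 * aggregated   # old reset: negation keeps the NaN
assert math.isnan(aggregated)

aggregated = 0.0               # zeros_like-style reset: clean slate
aggregated = aggregated + 0.5  # the next step's gradient accumulates normally
assert aggregated == 0.5
```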
| open | 2021-06-07T19:43:15Z | 2021-06-21T17:07:35Z | https://github.com/horovod/horovod/issues/2957 | [
"bug"
] | czmrand | 2 |
apachecn/ailearning | python | 432 | 第9章_树回归 - ApacheCN | http://ailearning.apachecn.org/ml/9.TreeRegression/
ApacheCN 专注于优秀项目维护的开源组织 | closed | 2018-08-24T07:10:06Z | 2021-09-07T17:40:30Z | https://github.com/apachecn/ailearning/issues/432 | [
"Gitalk",
"162d4b1d1067f1f6264103c55a777526"
] | jiangzhonglian | 0 |
jonaswinkler/paperless-ng | django | 324 | Quotation marks in PAPERLESS_FILENAME_FORMAT cause malformed filenames | Hi Jonas,
I just tried to change the filename format by setting (in docker-compose.env)
`PAPERLESS_FILENAME_FORMAT="{created_year}/{correspondent}/{created_year}-{created_month}-{created_day} - {title}"`.
Paperless does not complain and starts up as usual. I let paperelss consume a new file, which also works but results in the following filename
`/paperless/media/documents/originals/\"2009/CorespondentXYZ/2009-06-16\ -\ totallySecretDocument\".pdf`
So the variables work fine but paperless also interprets the quotation marks as part of the filename.
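A likely fix (assuming Docker Compose env-file semantics, where values are taken literally and quotes are not stripped) is to drop the quotation marks in docker-compose.env:

```
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{created_year}-{created_month}-{created_day} - {title}
```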
Best regards
Tobi | closed | 2021-01-11T18:14:01Z | 2021-02-03T00:20:16Z | https://github.com/jonaswinkler/paperless-ng/issues/324 | [] | niarbx | 4 |
FujiwaraChoki/MoneyPrinter | automation | 142 | [BUG] | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'front page '
2. Click on ' Generate'
3. Scroll down to '....'
4. See error 'An error occurred. Please try again later.'
**Expected behavior**
**Screenshots**
**Desktop (please complete the following information):**
- OS: Windows
- Browser chrome
- Python Version [e.g. 3.9]

| closed | 2024-02-10T10:02:32Z | 2024-02-10T10:45:51Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/142 | [] | FranklinOP-IND | 3 |
apache/airflow | data-science | 47,521 | Test Deployment via breeze in KinD/K8s needs to relax Same Origin Policy | ### Body
When a local K8s cluster (via KinD) is started through `breeze k8s ...` and you attempt to open the exposed URL in the browser (it uses a random forwarded port, e.g. http://localhost:13582/ ) then you get the following errors:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8080/static/assets/index-gP-zsNBT.js. (Reason: CORS request did not succeed). Status code: (null).
Module source URI is not allowed in this document: “http://localhost:8080/static/assets/index-gP-zsNBT.js”.
Therefore the KinD/K8s deployment in breeze needs to be enhanced so that the forwarded ports are accepted by the web server / UI.
(I am not an expert in this area, but this might be a general K8s deployment problem outside breeze as well? Or does breeze just need to set the forwarded port / hostname in the config dynamically?)
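For illustration, the usual shape of such a fix is to allow-list the forwarded origins and echo the requesting one back; everything below is an invented sketch, not actual Airflow or breeze code:

```python
# Hypothetical sketch (ports and names invented): the browser blocks the
# forwarded-port origin because the server only vouches for
# http://localhost:8080.  Echoing allow-listed origins back unblocks it.
ALLOWED_ORIGINS = {"http://localhost:8080", "http://localhost:13582"}

def cors_headers(request_origin):
    """Return the CORS header for an allow-listed origin, else nothing."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

print(cors_headers("http://localhost:13582"))  # header granted
print(cors_headers("http://evil.example"))     # {} -> request stays blocked
```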
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-07T22:29:05Z | 2025-03-09T08:18:50Z | https://github.com/apache/airflow/issues/47521 | [
"kind:bug",
"area:dev-env",
"area:dev-tools",
"kind:meta",
"area:UI"
] | jscheffl | 2 |
chmp/ipytest | pytest | 123 | Command line / CI-CD / IDE execution | Firstly, thanks for an awesome tool. This is exactly what I was looking for.
The docstring to `_impl.run` states:
```
**NOTE:** In the default configuration `ipytest.run()` will not raise
exceptions, when tests fail. To raise exceptions on test errors, e.g.,
inside a CI/CD context, use `ipytest.autoconfig(raise_on_error=True)`.
```
Suggesting there _may be_ some way to execute ipytest from a command line. This would give me the chance to include the notebook(s) in a github workflow _and_ in vscode test explorer.
In return for a hint on how to get that working, I'd be happy to raise documentation PR to let others know for the future.
Thanks!
PS: I'm trying to help my son learn to code and using jupyter notebooks is a great simple way to start, but I also want him to learn to have tests from the start
| closed | 2025-02-01T08:41:27Z | 2025-02-17T07:09:44Z | https://github.com/chmp/ipytest/issues/123 | [] | MusicalNinjaDad | 6 |
pallets/flask | python | 5,245 | NameError: name 'Response' is not defined | I have a view like this:
```
@pydantic.validate_arguments(config={"arbitrary_types_allowed": True})
def my_view(
)-> flask.typing.ResponseReturnValue:
...
```
This view is failing during tests:
```
@pd.validate_arguments(config={"arbitrary_types_allowed": True})
pydantic/decorator.py:36: in pydantic.decorator.validate_arguments.validate
???
pydantic/decorator.py:78: in pydantic.decorator.ValidatedFunction.__init__
???
pydantic/typing.py:78: in pydantic.typing.get_all_type_hints
???
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:2373: in get_type_hints
hints[name] = _eval_type(value, globalns, localns)
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:371: in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:882: in _evaluate
self.__forward_value__ = _eval_type(
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:385: in _eval_type
ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:385: in <genexpr>
ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:371: in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
/nix/store/yk7nrvvz4hjgzzlhr3mg8wl2hn56h187-python3-3.11.4/lib/python3.11/typing.py:877: in _evaluate
eval(self.__forward_code__, globalns, localns),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E NameError: name 'Response' is not defined
```
Looks like the problem is in flask.typing:
```
if t.TYPE_CHECKING:  # pragma: no cover
    from _typeshed.wsgi import WSGIApplication  # noqa: F401
    from werkzeug.datastructures import Headers  # noqa: F401
    from werkzeug.wrappers import Response  # noqa: F401
```
Flask imports Response only during type checking, but pydantic tries to get the type hints at runtime...
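The mechanism can be reproduced with the standard library alone (`Decimal` below is just a stand-in for `Response`):

```python
# A name imported only under TYPE_CHECKING does not exist at runtime, so
# typing.get_type_hints() raises NameError when it resolves the string
# annotation -- the same failure pydantic hits with flask.typing.
import typing

if typing.TYPE_CHECKING:
    from decimal import Decimal  # exists for the type checker only

def view() -> "Decimal":
    pass

try:
    typing.get_type_hints(view)
    raised = False
except NameError:
    raised = True

print(raised)  # True
```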
Environment:
- Python version: Python 3.11.4
- Flask version: Flask==2.3.2
| closed | 2023-08-31T07:11:47Z | 2023-09-15T00:05:39Z | https://github.com/pallets/flask/issues/5245 | [] | warvariuc | 4 |
sqlalchemy/alembic | sqlalchemy | 1,386 | Autogenerate renders TypeDecorator instance instead of underlying impl type | **Describe the bug**
This isn't a bug _per se_, but a small improvement for autogenerate when using TypeDecorator.
When a TypeDecorator is used in a column definition, e.g.:
```py
"""
File: app/models/foo.py
"""
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.types import TypeDecorator
...
class JSONBData(TypeDecorator):
impl = JSONB
foo = Table("foo", MetaData(), Column("data", JSONBData))
```
The auto-generated migration ends up referencing the TypeDecorator:
```py
op.add_column("foo", sa.Column("data", app.models.foo.JSONBData(), nullable=True))
```
which is annoying for two reasons:
1. The import is not automatically rendered.
2. The migration file has an unnecessary dependency on `app`, e.g. if the app/models/foo.py is refactored, we may need to update this migration file... when that could have been avoided if instead of rendering `app.models.foo.JSONBData`, alembic directly rendered the underlying impl: `postgresql.JSONB`.
I'm not sure if there are any scenarios where it is actually preferable to have the TypeDecorator in the autogenerated file. If there are use cases for it, would it be sensible to make this a config option instead of unconditional?
**Expected behavior**
Ideally, we'd generate the same thing as when `foo = Table("foo", MetaData(), Column("data", JSONB))` is provided, i.e.:
```py
op.add_column("foo", sa.Column("data", postgresql.JSONB(astext_type=Text()), nullable=True))
```
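For illustration, the unwrapping I'm asking for can be sketched with stub stand-ins (these are not real SQLAlchemy or Alembic classes):

```python
# Stub classes only -- the idea is to render the underlying impl instead of
# the TypeDecorator subclass's module path.
class Integer:
    def __repr__(self):
        return "sa.Integer()"

class TypeDecorator:
    impl = None

class JSONBData(TypeDecorator):
    impl = Integer

def render_type(t):
    # walk TypeDecorator subclasses down to their impl before rendering
    if isinstance(t, type) and issubclass(t, TypeDecorator) and t.impl is not None:
        return repr(t.impl())
    return repr(t() if isinstance(t, type) else t)

print(render_type(JSONBData))  # sa.Integer()
```

As a workaround today, Alembic's `render_item` hook in `env.py` can host custom rendering logic along these lines.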
**To Reproduce**
Test case:
```diff
diff --git a/tests/test_autogen_render.py b/tests/test_autogen_render.py
index eeeb92e..9755869 100644
--- a/tests/test_autogen_render.py
+++ b/tests/test_autogen_render.py
@@ -33,6 +33,7 @@ from sqlalchemy.sql import false
from sqlalchemy.sql import literal_column
from sqlalchemy.sql import table
from sqlalchemy.types import TIMESTAMP
+from sqlalchemy.types import TypeDecorator
from sqlalchemy.types import UserDefinedType
from alembic import autogenerate
@@ -1078,6 +1079,21 @@ class AutogenRenderTest(TestBase):
"server_default='5', nullable=True))",
)
+ def test_render_add_column_type_decorator(self):
+ self.autogen_context.opts["user_module_prefix"] = None
+
+ class MyType(TypeDecorator):
+ impl = Integer
+
+ op_obj = ops.AddColumnOp(
+ "foo", Column("x", MyType, server_default="5")
+ )
+ eq_ignore_whitespace(
+ autogenerate.render_op_text(self.autogen_context, op_obj),
+ "op.add_column('foo', sa.Column('x', sa.Integer(), "
+ "server_default='5', nullable=True))",
+ )
+
@testing.emits_warning("Can't validate argument ")
def test_render_add_column_custom_kwarg(self):
col = Column(
```
**Error**
When running `tox -e py-sqlalchemy -- tests/test_autogen_render.py` with the above patch:
```
File "/Users/saif/contrib/alembic/tests/test_autogen_render.py", line 1091, in test_render_add_column_type_decorator
eq_ignore_whitespace(
File "/Users/saif/contrib/alembic/alembic/testing/assertions.py", line 111, in eq_ignore_whitespace
assert a == b, msg or "%r != %r" % (a, b)
^^^^^^
AssertionError: "op.add_column('foo', sa.Column('x', tests.test_autogen_render.MyType(), server_default='5', nullable=True))" != "op.add_column('foo', sa.Column('x', sa.Integer(), server_default='5', nullable=True))"
```
**Versions.**
- OS: macOS 14.1.2
- Python: 3.11.6
- Alembic: 1.13.1
- SQLAlchemy: 1.3.24 / 2.0.23
- Database: Postgres
- DBAPI: psycopg2
**Have a nice day!**
| open | 2024-01-05T05:50:59Z | 2024-09-18T12:28:11Z | https://github.com/sqlalchemy/alembic/issues/1386 | [
"autogenerate - rendering",
"use case",
"cookbook requested"
] | saifelse | 7 |
scikit-multilearn/scikit-multilearn | scikit-learn | 12 | Implement a general ensemble of classifiers classifier | J. Read, B. Pfahringer, G. Holmes, E. Frank, Classifier chains for multi-label classification, in: Proceedings of the 20th European Conference on Machine Learning, 2009, pp. 254–269.
Ensembles of classifier chains (ECC) [16] are an ensemble multi-label classification technique that uses classifier chains as a base classifier. ECC trains m CC classifiers C1, C2, …, Cm. Each Ck is trained with a random chain ordering (of ℒ) and a random subset of 𝒳. Hence each Ck model is likely to be unique and able to give different multi-label predictions. These predictions are summed per label so that each label receives a number of votes. A threshold is used to select the most popular labels, which form the final predicted multi-label set.
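The voting step described above can be sketched in a few lines (the binary predictions are made up for illustration):

```python
# ECC voting: m chains each predict a binary labelset; votes are summed
# per label and thresholded to form the final multi-label prediction.
predictions = [
    [1, 0, 1, 0],   # chain C1
    [1, 1, 0, 0],   # chain C2
    [1, 0, 1, 1],   # chain C3
]
m = len(predictions)
votes = [sum(col) for col in zip(*predictions)]         # per-label vote counts
threshold = 0.5
final = [1 if v / m > threshold else 0 for v in votes]  # keep popular labels
print(final)  # [1, 0, 1, 0]
```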
| open | 2014-12-06T14:30:17Z | 2023-03-14T16:56:55Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/12 | [
"enhancement",
"help wanted"
] | niedakh | 3 |
coqui-ai/TTS | deep-learning | 3,656 | [Bug] bug in tts_to_file | ### Describe the bug
```python
from TTS.api import TTS
tts_ins = TTS('tts_models/multilingual/multi-dataset/xtts_v2')
tts_ins.tts_to_file(
text='因为树脂这个材料它比较容易染色所以久了之后呢树脂贴片就没有刚开始那么漂亮了那再来就是树脂这个材料它比较软。',
file_path="output.wav",
    speaker="Ana Florence",  # use the default voice
language="zh-cn",
split_sentences=True,
)
```
When I run this code, the output.wav will most likely repeat the last few words of the text. What is going on here? Thanks.
### To Reproduce
```python
from TTS.api import TTS

tts_ins = TTS('tts_models/multilingual/multi-dataset/xtts_v2')
tts_ins.tts_to_file(
    text='因为树脂这个材料它比较容易染色所以久了之后呢树脂贴片就没有刚开始那么漂亮了那再来就是树脂这个材料它比较软。',
    file_path="output.wav",
    speaker="Ana Florence",  # use the default voice
    language="zh-cn",
    split_sentences=True,
)
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.2.1",
"TTS": "0.22.0",
"numpy": "1.26.4"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.11.5",
"version": "Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8112"
}
}
```
### Additional context
none | closed | 2024-04-01T09:18:39Z | 2024-06-26T16:49:22Z | https://github.com/coqui-ai/TTS/issues/3656 | [
"bug",
"wontfix"
] | forthcoming | 1 |
FactoryBoy/factory_boy | django | 965 | KeyError: 'locale' | #### Description
KeyError: 'locale' generated when calling Faker.
factory/faker.py:46: KeyError
It seems that the `extra` dictionary is empty, but a `locale` key is expected.
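The last frame of the traceback reduces to a plain-dict behavior:

```python
# dict.pop without a default raises KeyError when the key is missing --
# exactly what factory/faker.py hits when it receives extra={}.
extra = {}
try:
    extra.pop("locale")
    raised = False
except KeyError:
    raised = True

print(raised)               # True

# a defensive variant the library could use instead:
locale = extra.pop("locale", None)
print(locale)               # None
```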
#### To Reproduce
running unit test with tox
python 3.10; factory-boy==3.2.1 faker==13.15.0
##### Model / Factory code
```python
@pytest.fixture
def unique_dominant_factors():
return list(set(DominantFactorsFactory.create_batch(3)))
@pytest.fixture
def portfolio_values_per_dominant_factors(unique_dominant_factors):
scenario_ids = np.array([3, 4, 5, 6, 7, 8, 9, 10])
portfolio_values_per_dominant_factors = PortfolioValuesPerDominantFactors(
scenario_ids=scenario_ids, dominant_factors=unique_dominant_factors
)
rng = np.random.default_rng(seed=2021)
for i in range(len(unique_dominant_factors) - 1):
portfolio_values_per_dominant_factors.add_instrument_values_by_factor_name(
unique_dominant_factors[i], rng.normal(size=8)
)
portfolio_values_per_dominant_factors.add_instrument_values_by_factor_name(
unique_dominant_factors[-1], np.zeros(8) # Portfolio value 0 can also occur
)
return portfolio_values_per_dominant_factors
```
##### The issue
```python
@pytest.fixture
def unique_dominant_factors():
> return list(set(DominantFactorsFactory.create_batch(3)))
tests/repositories/test_portfolio_values_per_dominant_factors_repository.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.tox/py3/lib/python3.10/site-packages/factory/base.py:540: in create_batch
return [cls.create(**kwargs) for _ in range(size)]
.tox/py3/lib/python3.10/site-packages/factory/base.py:540: in <listcomp>
return [cls.create(**kwargs) for _ in range(size)]
.tox/py3/lib/python3.10/site-packages/factory/base.py:528: in create
return cls._generate(enums.CREATE_STRATEGY, kwargs)
.tox/py3/lib/python3.10/site-packages/factory/base.py:465: in _generate
return step.build()
.tox/py3/lib/python3.10/site-packages/factory/builder.py:258: in build
step.resolve(pre)
.tox/py3/lib/python3.10/site-packages/factory/builder.py:199: in resolve
self.attributes[field_name] = getattr(self.stub, field_name)
.tox/py3/lib/python3.10/site-packages/factory/builder.py:344: in __getattr__
value = value.evaluate_pre(
.tox/py3/lib/python3.10/site-packages/factory/declarations.py:477: in evaluate_pre
choice = self.decider.evaluate(instance=instance, step=step, extra={})
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <factory.faker.Faker object at 0x7f3843200df0>
instance = <Resolver for <BuildStep for <StepBuilder(<FactoryOptions for DominantFactorsFactory>, strategy='create')>>>
step = <BuildStep for <StepBuilder(<FactoryOptions for DominantFactorsFactory>, strategy='create')>>, extra = {}
def evaluate(self, instance, step, extra):
> locale = extra.pop('locale')
E KeyError: 'locale'
```
#### Notes
Factory-boy version 3.0.1 works properly
| closed | 2022-07-17T06:20:52Z | 2024-06-18T07:42:50Z | https://github.com/FactoryBoy/factory_boy/issues/965 | [
"Bug"
] | radohristov | 16 |
snarfed/granary | rest-api | 208 | [reddit] support @self, @friends, and @all in get_activities() | right now it only supports search and activity_id, so bridgy only finds posts that link to your site. this would fill in the rest of bridgy's backfeed functionality for people who POSSE their posts to reddit without backlinks. cc @stedn | open | 2020-04-30T23:14:42Z | 2020-04-30T23:14:42Z | https://github.com/snarfed/granary/issues/208 | [] | snarfed | 0 |
2noise/ChatTTS | python | 246 | Why does it add characters on its own? | For example: `白 日 依 山 尽 , 黄 河 入 海 流。 欲 穷 千 里目 , 更 上 一 层 楼 。`
The generated text is: `白 日 依 山 尽 [uv_break] , 黄 河 入 海 流 [uv_break] 。 欲 穷 千 里 目 [uv_break] , 那 更 上 一 层 楼 [uv_break] 。`
Why does it add the character **那**, and is there a way to keep the original text? | closed | 2024-06-04T04:45:58Z | 2024-06-19T03:50:12Z | https://github.com/2noise/ChatTTS/issues/246 | [] | zhouhao27 | 3 |
netbox-community/netbox | django | 18,577 | Translation Error in Device View: "No está atormentado" should be "No tiene rack asignado" | ### Language
Spanish
### ISO 639-1 code
es
### Volunteer
Yes
### Comments
The correct translation for this line should be:
"**No tiene rack asignado**" instead of "**No está atormentado**."
"Atormentado" translates to "tormented" and is not related to the rack device's status. The intended meaning refers to whether a rack has been assigned. | closed | 2025-02-05T14:35:50Z | 2025-02-05T14:42:02Z | https://github.com/netbox-community/netbox/issues/18577 | [] | SorianoTech | 1 |
healthchecks/healthchecks | django | 847 | SMTP session exception | Hello together,
unfortunately, the update to version 2.9.2 works not for me. Some hours after this update, i got a error message with smtp session exception. With version 2.8.1 i don't get any error messages.
The error log says that the db isn't reachable, but other application that use the same db still work as aspected.
When i change back to 2.8.1 everything works as normal.
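For illustration, the pattern in the log (the first query after a long idle period fails, and the worker never recovers) can be modeled with toy classes; `Connection`, `GoneAway`, and the timeout value below are all invented stand-ins, not Django or MySQL APIs:

```python
# Toy model of the failure: a long-lived worker keeps one connection past
# the server's idle timeout, then every query raises until it reconnects.
class GoneAway(Exception):
    pass

class Connection:
    def __init__(self):
        self.idle_seconds = 0
    def query(self, sql):
        if self.idle_seconds > 600:     # stand-in for MySQL's wait_timeout
            raise GoneAway("MySQL server has gone away")
        return "ok"

def query_with_refresh(holder, sql):
    try:
        return holder[0].query(sql)
    except GoneAway:
        holder[0] = Connection()        # drop the stale handle, reconnect
        return holder[0].query(sql)

stale = Connection()
stale.idle_seconds = 9999               # simulate hours of idleness
holder = [stale]
result = query_with_refresh(holder, "SELECT 1")
print(result)  # ok -- but only because we reconnected
```

In Django deployments this is typically addressed with the `CONN_MAX_AGE` setting or by calling `django.db.close_old_connections()` in long-running workers, so stale connections are discarded before each unit of work.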
Here is my config:
```yaml
services:
healthchecks:
# image: healthchecks/healthchecks:v2.8.1
image: healthchecks/healthchecks:latest
container_name: healthchecks
user: "1001:1001"
restart: always
environment:
ADMINS: admin@domain.tld
# Used for running as non prod
DEBUG: "False"
# Disable http request logging
UWSGI_DISABLE_LOGGING: "1"
# Database settings
DB: mysql
DB_HOST: mariadb
DB_NAME: healthchecks
DB_USER: healthchecks
DB_PASSWORD: "xxxxxxxx"
# Email settings
EMAIL_HOST: smtp.domain.tld
EMAIL_PORT: 587
EMAIL_HOST_USER: admin@domain.tld
EMAIL_HOST_PASSWORD: "xxxxxxxx"
EMAIL_USE_TLS: "True"
DEFAULT_FROM_EMAIL: healthchecks@domain.tld
# Force email verifications
EMAIL_USE_VERIFICATION: "True"
# Set to FQDN for container
ALLOWED_HOSTS: hc.domain.tld
# Ping body size (email attached max. log files size)
PING_BODY_LIMIT: 5000000
# Domain for ping email generation
PING_EMAIL_DOMAIN: hc.domain.tld
PUSHOVER_API_TOKEN: xxxxxxxxxxxx
PUSHOVER_SUBSCRIPTION_URL: https://pushover.net/subscribe/xxxxxxx
# Open registration for everyone
REGISTRATION_OPEN: "False"
# WebUI settings
SITE_LOGO_URL: https://www.domain.tld
SITE_NAME: Healthchecks
SITE_ROOT: https://hc.domain.tld
# Random creted secret key
SECRET_KEY: "xxxxxxx"
SMTPD_PORT: 2525
ports:
# - "8000:8000"
- "2525:2525"
networks:
- npm
- sql
networks:
npm:
name: npm
external: true
sql:
name: sql
external: true
```
And this is the error log:
```
[uWSGI] getting INI configuration from /opt/healthchecks/docker/uwsgi.ini
[uwsgi-static] added check for static-collected/
*** Starting uWSGI 2.0.21 (64bit) on [Thu Jun 15 09:03:56 2023] ***
compiled with version: 10.2.1 20210110 on 05 June 2023 20:42:11
os: Linux-5.15.0-1030-oracle #36-Ubuntu SMP Wed Feb 15 05:57:14 UTC 2023
nodename: df60ccb22ff9
machine: aarch64
clock source: unix
detected number of CPU cores: 4
current working directory: /opt/healthchecks
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /opt/healthchecks
your memory page size is 4096 bytes
detected max file descriptor number: 10000
!!! no /etc/mime.types file found !!!
lock engine: pthread robust mutexes
thunder lock: enabled
uwsgi socket 0 bound to TCP address :8000 fd 3
Python version: 3.11.3 (main, May 23 2023, 08:54:51) [GCC 10.2.1 20210110]
Python main interpreter initialized at 0xffffbd699900
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
setting request body buffering size to 16192 bytes
mapped 445560 bytes (435 KB) for 4 cores
*** Operational MODE: preforking ***
running "exec:./manage.py migrate" (pre app)...
System check identified some issues:
WARNINGS:
api.Check: (models.W037) MariaDB does not support indexes with conditions.
HINT: Conditions will be ignored. Silence this warning if you don't care about it.
api.Flip: (models.W037) MariaDB does not support indexes with conditions.
HINT: Conditions will be ignored. Silence this warning if you don't care about it.
Operations to perform:
Apply all migrations: accounts, admin, api, auth, contenttypes, payments, sessions
Running migrations:
No migrations to apply.
Your models in app(s): 'api' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xffffbd699900 pid: 1 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 9, cores: 1)
spawned uWSGI worker 2 (pid: 10, cores: 1)
spawned uWSGI worker 3 (pid: 11, cores: 1)
spawned uWSGI worker 4 (pid: 12, cores: 1)
[uwsgi-daemons] spawning "./manage.py sendalerts" (uid: 1001 gid: 1001)
[uwsgi-daemons] spawning "./manage.py smtpd --port 2525" (uid: 1001 gid: 1001)
[uwsgi-daemons] spawning "./manage.py sendreports --loop" (uid: 1001 gid: 1001)
sendalerts is now running
sendreports is now running
Starting SMTP listener on 0.0.0.0:2525 ...
Not an UUID: erorshimiciaw5
('xxx.xxx.xxx.xxx', 65190) unrecognised: GET
('xxx.xxx.xxx.xxx', 65190) unrecognised: HOST:
('xxx.xxx.xxx.xxx', 65190) unrecognised: USER-AGENT:
Not an UUID: erorshimiciaw5
Not an UUID: josewhiltawsky
Not an UUID: josewhiltawsky
('xxx.xxx.xxx.xxx', 58480) unrecognised: GET
('xxx.xxx.xxx.xxx', 58480) unrecognised: HOST:
('xxx.xxx.xxx.xxx', 58480) unrecognised: USER-AGENT:
('xxx.xxx.xxx.xxx', 55272) unrecognised: MGLNDD_130.61.79.228_2525
Processed ping for 91fa4cdd-a854-4813-8189-9bf353a1877c
Processed ping for 19e998bf-961b-49bf-a3cc-5732869d2e7d
Processed ping for f8290de5-b561-456e-97a7-9304bc1871f4
Processed ping for 547e6ac3-8e9d-4d3a-a57a-f02bcf965506
('xxx.xxx.xxx.xxx', 45280) unrecognised: ��8��'45.79.181.179', 45280) unrecognised: �
('xxx.xxx.xxx.xxx', 62002) unrecognised: GET
('xxx.xxx.xxx.xxx', 62002) unrecognised: HOST:
('xxx.xxx.xxx.xxx', 62002) unrecognised: USER-AGENT:
Processed ping for bb7136bb-24a1-4a99-832a-55c3a032b3c6
Processed ping for ce0f9218-bebe-4a7e-9979-85b27840f997
Processed ping for 1aac8e46-d37b-4719-9814-cb8137618f87
Processed ping for 3f63731e-e458-479f-b14c-e2a2c89b5cc7
Processed ping for 35110aac-aebc-4fc6-871c-c47cb84d4356
('xxx.xxx.xxx.xxx', 59603) unrecognised: �
Sending alert, status=down, code=180f4857-80b4-4dd6-b79e-a04cfd4c4166
* OK 0.3s po 9145f266-aa64-4e72-9da6-22f568bb5430
Sending took 0.3s, code=180f4857-80b4-4dd6-b79e-a04cfd4c4166
Sending alert, status=down, code=63a34eeb-878d-4cd4-84d1-71c5dedf2ce5
* OK 0.7s po 9145f266-aa64-4e72-9da6-22f568bb5430
Sending took 0.7s, code=63a34eeb-878d-4cd4-84d1-71c5dedf2ce5
('xxx.xxx.xxx.xxx', 5893) SMTP session exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/mysql/base.py", line 75, in execute
return self.cursor.execute(query, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.11/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb.OperationalError: (2006, 'MySQL server has gone away')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 741, in _handle_client
await method(arg)
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 1460, in smtp_DATA
status = await self._call_handler_hook('DATA')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 473, in _call_handler_hook
status = await hook(self, self.session, self.envelope, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/healthchecks/hc/api/management/commands/smtpd.py", line 94, in handle_DATA
result = await self.process_message(remote_addr, mailfrom, mailto, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 479, in __call__
ret: _R = await loop.run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 538, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/healthchecks/hc/api/management/commands/smtpd.py", line 56, in _process_message
check = Check.objects.get(code=code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 633, in get
num = len(clone)
^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 380, in __len__
self._fetch_all()
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1881, in _fetch_all
self._result_cache = list(self._iterable_class(self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/mysql/base.py", line 75, in execute
return self.cursor.execute(query, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.11/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (2006, 'MySQL server has gone away')
('88.65.237.58', 6571) SMTP session exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/mysql/base.py", line 75, in execute
return self.cursor.execute(query, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.11/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb.OperationalError: (2006, 'MySQL server has gone away')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 741, in _handle_client
await method(arg)
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 1460, in smtp_DATA
status = await self._call_handler_hook('DATA')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/aiosmtpd/smtp.py", line 473, in _call_handler_hook
status = await hook(self, self.session, self.envelope, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/healthchecks/hc/api/management/commands/smtpd.py", line 94, in handle_DATA
result = await self.process_message(remote_addr, mailfrom, mailto, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 479, in __call__
ret: _R = await loop.run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 538, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/healthchecks/hc/api/management/commands/smtpd.py", line 56, in _process_message
check = Check.objects.get(code=code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 633, in get
num = len(clone)
^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 380, in __len__
self._fetch_all()
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1881, in _fetch_all
self._result_cache = list(self._iterable_class(self))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/mysql/base.py", line 75, in execute
return self.cursor.execute(query, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.11/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (2006, 'MySQL server has gone away')
('130.61.79.228', 54626) SMTP session exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/backends/mysql/base.py", line 75, in execute
return self.cursor.execute(query, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.11/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb.OperationalError: (2006, 'MySQL server has gone away')
```
Thanks for your time!
| closed | 2023-06-20T10:01:02Z | 2023-07-02T09:37:52Z | https://github.com/healthchecks/healthchecks/issues/847 | [] | techsolo12 | 5 |
MaartenGr/BERTopic | nlp | 1,240 | Output size for supervised topic modeling (multi-class classification) | I am trying to change the output size of my model to allow the classification of multiple classes. Following a common PyTorch/CNN strategy, this would be the equivalent of returning something like `nn.Linear(K, 30)`, in which I could compare the output of my model [0.0123, 0.245, 0.113, ..., 0.136] of shape `[30]` with my ground-truth labels [0, 1, 0, ..., 1] which also has shape `[30]`.
I've modeled this approach on BERTopic and successfully achieved some interesting results using the tips given in #295; however, the multi-class aspect is not taken into consideration during training, meaning that, given an input `X`, the output always has shape `[1]`.
```
>>> topics, _ = topic_model.fit_transform(docs, y=y)
>>> topic, _ = topic_model.transform([test_sample])
>>> topic.shape
<<< [0]
```
Currently, `y` is a list of integers with the classes:
```
>>> print(docs[0], y[0])
<<< A man wearing a purple shirt and a tie. 7
```
Ideally, I would want to use y as a list of one-hot encoded labels to train my model:
```
>>> print(docs[0], y[0])
<<< A man wearing a purple shirt and a tie. [0. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
```
However, when I try to feed this to my model, the method raises `TypeError: unhashable type: 'list'`.
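The `TypeError: unhashable type: 'list'` suggests the supervised path expects hashable per-document labels. As a hedged, stdlib-only sketch (not BERTopic API), one could keep flat integer labels for `fit_transform` and derive the one-hot vectors only for evaluation; `one_hot` and `to_index` below are hypothetical helpers:

```python
def one_hot(label, num_classes):
    """Encode an integer class as a one-hot vector (plain Python floats)."""
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

def to_index(vec):
    """Invert one_hot: index of the largest entry (argmax)."""
    return max(range(len(vec)), key=vec.__getitem__)

y = [7, 3, 7]                            # hashable int labels, safe as y=
y_onehot = [one_hot(c, 30) for c in y]   # one-hot view for evaluation only
assert [to_index(v) for v in y_onehot] == y
```

The `to_index` mapping would also turn one-hot ground truth back into the flat integer form that `fit_transform(docs, y=y)` accepted in the first example.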
Is it possible to adapt the supervised pipeline for multi-class classification? Even though I am using the probabilities to do the multi-class classification, it would be very nice if one could use this to train the model. | closed | 2023-05-08T15:04:15Z | 2023-09-27T08:56:42Z | https://github.com/MaartenGr/BERTopic/issues/1240 | [] | wlcosta | 2 |
iperov/DeepFaceLab | machine-learning | 667 | Remove lens distortion from src images before training? | Question: Would it make sense to remove lens distortion before you extract the faces and start training? When src images are collected from a huge variety of sources, lens distortion is quite a big variable.
I guess today's method is to train with uncorrected images, and then rebuild uncorrected.
It would be interesting to know if likeness would be any better if all src images were corrected for lens distortion before you train.
In most compositing workflows, you always undistort -> -apply changes -> redistort.
Not sure if this applies to machine learning?
| closed | 2020-03-21T17:41:39Z | 2020-03-22T18:32:06Z | https://github.com/iperov/DeepFaceLab/issues/667 | [] | Tesla32X | 2 |
graphistry/pygraphistry | jupyter | 304 | [FEA] List support in hypergraph entity extraction | **Is your feature request related to a problem? Please describe.**
When plotting some event data, some of the columns had entity lists ("topics", "attending_groups", ...) that it'd help to auto-link
**Describe the solution you'd like**
```python
df = pd.DataFrame({'event': ['e1', 'e2'], 'attendees': [ ['a', 'b'], ['a', 'c', 'd'] ] })
graphistry.hypergraph(df)['graph'].plot()
```
Links: `e1->a`, `e1->b`, `e2->a`, `e2->c`, `e2->d`
The hypergraph code is a bit gnarly so may not be a good first issue, not sure
**Describe alternatives you've considered**
Current workaround uses explode:
```python
graphistry.hypergraph(df.explode('attendees'), direct=True)['graph'].plot()
```
**Additional context**
Explode duplicates rows, so need to use direct=True with a manual event id col (like `event` above). Overall... janky! | open | 2022-02-03T05:42:05Z | 2022-02-03T05:42:05Z | https://github.com/graphistry/pygraphistry/issues/304 | [
"enhancement"
] | lmeyerov | 0 |
rthalley/dnspython | asyncio | 612 | Allow configuring dns.rdata._chunksize | [`dns.rdata._chunksize`](https://github.com/rthalley/dnspython/blob/65c4a968686dd0c3bca1ed1a5fd13fa0d2f1b441/dns/rdata.py#L35) currently is a private setting.
However, some other applications require certain chunk sizes (or no chunking at all, as is achieved when `_chunksize` is falsy). For example, the PowerDNS API requires the base64-encoded key part of (C)DNSKEY records to not contain any whitespace.
To solve this, I suggest to make `dns.rdata._chunksize` configurable, by exposing it through some public interface.
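For context, `_chunksize` controls how presentation-format output is split into whitespace-separated chunks; here is a stdlib sketch of that behavior (`chunked` is a hypothetical helper, not the dnspython implementation):

```python
def chunked(text, chunksize):
    # falsy chunksize -> no splitting at all (what PowerDNS-style output needs)
    if not chunksize:
        return text
    return " ".join(text[i:i + chunksize] for i in range(0, len(text), chunksize))

b64 = "QUJDREVGRw=="                      # base64 of b"ABCDEFG"
assert chunked(b64, 4) == "QUJD REVG Rw=="
assert chunked(b64, 0) == b64             # chunksize 0 leaves the key intact
```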
It seems to me that such a setting would belong to the constructor arguments of `dns.wire.Parser` (but that's merely a quick idea, not sure if it's the best place). | closed | 2020-12-14T17:21:24Z | 2021-01-05T20:02:34Z | https://github.com/rthalley/dnspython/issues/612 | [] | peterthomassen | 3 |
matplotlib/cheatsheets | matplotlib | 38 | Legend placement error | Legend placement numbers in the cheatsheet are inconsistent with matplotlib documentation. (e.g. for lower-right, loc=3 in the cheatsheet instead of loc=4).
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.legend.html | closed | 2020-07-20T09:33:56Z | 2020-07-28T08:06:33Z | https://github.com/matplotlib/cheatsheets/issues/38 | [] | bertrandcz | 1 |
huggingface/diffusers | pytorch | 11,144 | FlaxUNet2DConditionModel is not initialized with correct dtypes | ### Describe the bug
The FlaxUNet2DConditionModel allows specifying the dtype of the weights. Supplying a dtype different from float32 does not seem to propagate to the actual model. This is imo different from https://github.com/huggingface/diffusers/issues/2068 since, afaik, the code initializes dtypes correctly, but the result is still incorrect. So this is not connected to loading FP32 weights or something similar.
### Reproduction
```python
import jax
import diffusers
from jax import random, numpy as jnp
dummy_input = jnp.zeros((2, 4, 32, 32), dtype=jnp.bfloat16)
dummy_t = jnp.zeros(2, dtype=jnp.bfloat16)
model = diffusers.FlaxUNet2DConditionModel(dtype=jnp.bfloat16)
key1, key2 = random.split(random.key(0))
params = model.init(key1, dummy_input, dummy_t, None)
print(jax.tree_util.tree_map(jnp.dtype, params))
```
### Logs
```shell
{'params': {'conv_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_norm_out': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'conv_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'down_blocks_0': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': 
dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'downsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'down_blocks_1': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': 
dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'downsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'down_blocks_2': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 
'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'downsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 
'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'down_blocks_3': {'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'mid_block': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': 
dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'time_embedding': {'linear_1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'linear_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'up_blocks_0': {'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_2': 
{'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'upsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'up_blocks_1': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 
'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_2': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': 
{'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_2': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'upsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'up_blocks_2': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': 
{'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_2': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'resnets_0': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': 
dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_2': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'upsamplers_0': {'conv': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}, 'up_blocks_3': {'attentions_0': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 
'scale': dtype('float32')}}}, 'attentions_1': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'attentions_2': {'norm': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'proj_in': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'proj_out': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'transformer_blocks_0': {'attn1': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'attn2': {'to_k': {'kernel': dtype('float32')}, 'to_out_0': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'to_q': {'kernel': dtype('float32')}, 'to_v': {'kernel': dtype('float32')}}, 'ff': {'net_0': {'proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'net_2': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm3': {'bias': dtype('float32'), 'scale': dtype('float32')}}}, 'resnets_0': {'conv1': {'bias': 
dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_1': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}, 'resnets_2': {'conv1': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv2': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'conv_shortcut': {'bias': dtype('float32'), 'kernel': dtype('float32')}, 'norm1': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'norm2': {'bias': dtype('float32'), 'scale': dtype('float32')}, 'time_emb_proj': {'bias': dtype('float32'), 'kernel': dtype('float32')}}}}}
```
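Until the model respects `dtype` at init, one interim workaround is to cast the initialized tree yourself — in JAX, `jax.tree_util.tree_map(lambda p: p.astype(jnp.bfloat16), params)`. Below is a stdlib-only sketch of the same recursive cast over a nested dict (dtype strings stand in for arrays):

```python
def tree_cast(tree, cast):
    # recursively apply `cast` to every leaf of a nested dict, like tree_map
    if isinstance(tree, dict):
        return {k: tree_cast(v, cast) for k, v in tree.items()}
    return cast(tree)

params = {"conv_in": {"kernel": "float32", "bias": "float32"}}
casted = tree_cast(params, lambda dt: "bfloat16" if dt == "float32" else dt)
assert casted == {"conv_in": {"kernel": "bfloat16", "bias": "bfloat16"}}
```

Note that in Flax linen modules, `dtype` conventionally controls the computation dtype while `param_dtype` controls parameter storage (defaulting to float32), which may explain the observed float32 params — worth checking whether `FlaxUNet2DConditionModel` exposes a separate `param_dtype`.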
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.11.11
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.4 (gpu)
- Jax version: 0.5.2
- JaxLib version: 0.5.1
- Huggingface_hub version: 0.29.3
- Transformers version: 4.49.0
- Accelerate version: 1.5.2
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | open | 2025-03-23T17:47:33Z | 2025-03-24T15:59:04Z | https://github.com/huggingface/diffusers/issues/11144 | [
"bug"
] | wittenator | 3 |
jupyterlab/jupyter-ai | jupyter | 381 | azure-chat-openai | <!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue.
  I want to connect the Jupyter AI magic commands to Azure OpenAI models. Any suggestions on how to pass the deployment ID and engine name?
###
Before creating a new issue:
* Search for relevant issues
* Follow the issue reporting guidelines:
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html
-->
## Description
<!--Describe the bug clearly and concisely. Include screenshots if possible-->
## Reproduce
<!--Describe step-by-step instructions to reproduce the behavior-->
1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error '...'
<!--Describe how you diagnosed the issue. See the guidelines at
https://jupyterlab.readthedocs.io/en/latest/getting_started/issue.html -->
## Expected behavior
<!--Describe what you expected to happen-->
## Context
<!--Complete the following for context, and add any other relevant context-->
- Operating System and version: <!-- e.g. Linux Ubuntu 21.04 -->
- Browser and version: <!-- e.g. Chrome 92 -->
- JupyterLab version: <!-- e.g. 3.1.7 -->
<!--The more content you provide, the more we can help!-->
<details><summary>Troubleshoot Output</summary>
<pre>
Paste the output from running `jupyter troubleshoot` from the command line here.
You may want to sanitize the paths in the output.
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
Paste the output from your command line running `jupyter lab` here, use `--debug` if possible.
</pre>
</details>
<details><summary>Browser Output</summary>
<!--See https://webmasters.stackexchange.com/a/77337 for how to access the JavaScript console-->
<pre>
Paste the output from your browser Javascript console here, if applicable.
</pre>
</details>
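For what it's worth, a sketch of how Azure OpenAI is typically wired into the `%%ai` magics: the `azure-chat-openai` provider id takes the deployment name as its model id, and credentials come from environment variables. The exact variable names and provider id are assumptions and may differ between jupyter-ai versions:

```
%load_ext jupyter_ai_magics

# Credentials for the Azure endpoint (names may vary by jupyter-ai version)
%env OPENAI_API_KEY=<your-azure-openai-key>
%env OPENAI_API_BASE=https://<your-resource>.openai.azure.com/
%env OPENAI_API_TYPE=azure
%env OPENAI_API_VERSION=2023-05-15
```

Then the deployment name goes after the provider id in the cell magic:

```
%%ai azure-chat-openai:<your-deployment-name>
Write a haiku about notebooks
```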
| open | 2023-09-07T11:08:21Z | 2024-02-09T04:29:50Z | https://github.com/jupyterlab/jupyter-ai/issues/381 | [
"bug"
] | bmshambu | 7 |
ExpDev07/coronavirus-tracker-api | fastapi | 210 | Reimplementation of country_code - why? | The reimplementation of the `country_code` function in commit 73f02fb does at least 3 dictionary lookups for any input: 3 lookups in the best-case and 5 lookups in worst-case scenarios:
```python
# 1. # 2.
if not country in is_3166_1 and country in synonyms:
country = synonyms[country] # 3.
# Return code if country was found.
if country in is_3166_1: # 4.
return is_3166_1[country] # 5.
```
Where the original implementation does only 2 lookups in the best-case and only 4 lookups in worst-case scenarios:
```python
if country in is_3166_1: # 1.
return is_3166_1[country] # 2.
else:
if country in synonyms: # 2.
synonym = synonyms[country] # 3.
return is_3166_1[synonym] # 4.
else:
if verbose:
print ("No country_code found for '" + country + "'. Using '" + default_code + "'")
return default_code
```
To me it looks like the old implementation is more efficient, right?
Yes, I know: the code gets optimized, dictionaries get cached, processor instructions get reordered, etc., by some "magic". Either way, this function gets computed quite a lot, and one cannot predict "magic"... So why such a change? Have you done any measurements?
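For what it's worth, a `dict.get` variant bounds this at one probe for a direct hit and three for a synonym hit; a sketch with stand-in tables (the real module's dictionaries are assumed to have this shape):

```python
is_3166_1 = {"Germany": "DE", "France": "FR"}   # stand-in for the real ISO table
synonyms = {"Deutschland": "Germany"}           # stand-in synonyms table
DEFAULT_CODE = "XX"

def country_code(country):
    code = is_3166_1.get(country)       # probe 1: membership test + fetch in one step
    if code is not None:
        return code                     # common case: one dict probe total
    synonym = synonyms.get(country)     # probe 2: resolve a synonym
    if synonym is not None:
        return is_3166_1[synonym]       # probe 3: map the synonym to a code
    return DEFAULT_CODE
```

`dict.get` folds the membership test and the value fetch into a single probe, which makes either version of the function cheaper to express.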
| closed | 2020-03-26T23:38:11Z | 2020-03-28T10:35:04Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/210 | [
"enhancement"
] | Bost | 11 |
chainer/chainer | numpy | 7,838 | `chainer.backend.copyto` cannot copy chainerx array to cupy | * Code to reproduce
```python
import chainer
import numpy
for dst_device in ['@numpy', '@cupy:0', '@intel64']:
for src_device in ['native', 'cuda:0']:
print((dst_device, src_device))
dst = chainer.get_device(dst_device).send(
numpy.array([1, 2], numpy.float32))
src = chainer.get_device(src_device).send(
numpy.array([3, 4], numpy.float32))
try:
chainer.backend.copyto(dst, src)
except Exception as e:
print(repr(e))
else:
print('ok')
```
* Error messages, stack traces, or logs
```
('@numpy', 'native')
ok
('@numpy', 'cuda:0')
ok
('@cupy:0', 'native')
TypeError('object array cannot be set to float32 array')
('@cupy:0', 'cuda:0')
TypeError('object array cannot be set to float32 array')
('@intel64', 'native')
ok
('@intel64', 'cuda:0')
ok
```
| closed | 2019-07-31T00:48:09Z | 2019-08-07T04:29:16Z | https://github.com/chainer/chainer/issues/7838 | [
"cat:bug",
"prio:high",
"pr-ongoing"
] | toslunar | 0 |
huggingface/datasets | computer-vision | 6,897 | datasets template guide :: issue in documentation YAML | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the document remains functional
### Expected behavior
I think the YAML block should be displayed or ignored.
### Environment info
N/A | closed | 2024-05-13T17:33:59Z | 2024-05-16T14:28:17Z | https://github.com/huggingface/datasets/issues/6897 | [] | bghira | 2 |
PeterL1n/BackgroundMattingV2 | computer-vision | 102 | Can this do distributed (multi-GPU) computation? | Hello, I currently have 8 RTX 3090 GPUs, and I modified the following code
model = model.to(device).eval()
model.load_state_dict(torch.load(args.model_checkpoint, map_location=device), strict=False)
to
model = model.to(device).eval()
BM = model.load_state_dict(torch.load(args.model_checkpoint, map_location=device), strict=False)
BM = nn.DataParallel(BM)
I am not sure whether this change is correct.
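For reference, `load_state_dict` returns a result object (the missing/unexpected keys), not the model, so `nn.DataParallel` should wrap the model itself; a minimal sketch with a stand-in module (the checkpoint loading in the real script is assumed unchanged):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                  # stand-in for the matting model
state = model.state_dict()               # stand-in for torch.load(checkpoint)

result = model.load_state_dict(state, strict=False)  # `result` is NOT the model
model = nn.DataParallel(model)           # wrap the module itself, not `result`
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()          # replicas then span all visible GPUs
```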
While running the program I monitored GPU utilization and found that only one card is in use. Any guidance would be appreciated. | closed | 2021-05-20T07:07:27Z | 2021-08-10T15:11:04Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/102 | [] | zhanghonglishanzai | 8 |
reloadware/reloadium | pandas | 198 | [Feature Request] attach/detach `reloadium` at runtime | It would be nice if it was possible to attach `reloadium` at runtime:
- e.g. when you're debugging a script as usual and realize it would be helpful to have reloadium features, you could load it, debug the issue, unload it, and keep working
- in my case I work on a Blender (3D software with a Python API) addon and I don't start the Python process myself; it's started internally by Blender. So I can't just replace `py my_script.py` with `reloadium run my_script.py` to attach reloadium, and an option to do it somehow at runtime would help | open | 2024-07-14T17:55:41Z | 2024-07-15T15:04:09Z | https://github.com/reloadware/reloadium/issues/198 | [] | Andrej730 | 2 |
graphql-python/graphene-django | graphql | 1,065 | CSRF cookie being set regardless of whether csrf middleware is used | My project does not use CSRF middleware:
```python
MIDDLEWARE = [
# 'django.middleware.csrf.CsrfViewMiddleware'
]
```
And the view is set as exempt from CSRF:
```python
urlpatterns = [
path("graphql", csrf_exempt(GraphQLView.as_view())),
]
```
But despite this, a CSRF token is still *always* set when using the `/graphql` endpoint:
```python
<RequestsCookieJar[<Cookie csrftoken=WcZCnfoWHlIj7fPR4lxT5ftb1XHumcSdv2QnHWoRgu2KKXnZYxx8MCrF8UCN4y3M for 127.0.0.1/>]>
```
Only requests that go via graphene_django have this problem. Checking the source code shows that the problem is this line in views.py:
https://github.com/graphql-python/graphene-django/blob/55769e814f3fc3da6c6d39696d6d1460fd8c9c89/graphene_django/views.py#L141
Why is the `ensure_csrf_cookie` decorator used? graphene_django should respect the settings.py configuration and only use CSRF if it is enabled for the project. There is currently no way, as far as I can tell, to stop this cookie being set.
I am using django 2.2.16, graphene_django 2.8.2.
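Until that changes upstream, one workaround is to strip the cookie again after the view runs; a minimal Django-style middleware sketch (the path and cookie name are assumptions matching the setup above):

```python
class StripCsrfCookieMiddleware:
    """Drop the csrftoken cookie that GraphQLView's ensure_csrf_cookie forces on."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # Only touch the GraphQL endpoint; leave every other view alone.
        if request.path == "/graphql" and "csrftoken" in response.cookies:
            del response.cookies["csrftoken"]
        return response
```

This relies only on Django's standard middleware protocol, so the GraphQLView itself is left untouched.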
| open | 2020-11-21T14:50:13Z | 2020-11-21T14:50:13Z | https://github.com/graphql-python/graphene-django/issues/1065 | [
"🐛bug"
] | samirelanduk | 0 |
ivy-llc/ivy | pytorch | 28,100 | Fix Frontend Failing Test: tensorflow - attribute.paddle.real | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-01-28T19:04:22Z | 2024-01-29T13:08:52Z | https://github.com/ivy-llc/ivy/issues/28100 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
521xueweihan/HelloGitHub | python | 2,756 | [Open-source self-recommendation] A bpf-based real-time DNS query tracing tool | ## Recommended Project
<!-- This is the submission entry for HelloGitHub monthly recommendations. Self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/chenjiandongx/dnstrack
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: Go
<!-- Describe what it does in about 20 characters, like an article title that makes it clear at a glance -->
- Project title: A bpf-based real-time DNS query tracing tool
<!-- What is the project, what can it be used for, what features does it have or what pain point does it solve, which scenarios does it suit, and what can beginners learn from it? Length: 32-256 characters -->
- Project description: dnstrack uses libpcap to listen on the machine's network interfaces and filter DNS queries. This tool is mainly used to discover whether any process is continuously querying the DNS service at high frequency.
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights: cross-platform compatibility
- Example code: (optional)
The dnstrack command needs to be run in privileged mode or as the root user.
```shell
> dnstrack -h
# A dns-query tracking tool written in go
Usage:
dnstrack [flags]
Examples:
# list all the net-devices
$ dnstrack -l
# filters google dns server packet attached in lo0 dev and output with json format
$ dnstrack -s 8.8.8.8 -o j -d '^lo0$'
Flags:
-a, --all-devices listen all devices if present (default true)
-d, --devices string devices regex pattern filter
-h, --help help for dnstrack
-l, --list list all devices name
-o, --output-format string output format [json(j)|yaml(y)|question(q)|verbose(v)] (default "verbose")
-s, --server string dns server filter
-t, --type string dns query type filter [A/AAAA/CNAME/...]
-v, --version version for dnstrack
```
The verbose output format:
```shell
> dnstrack -d '^lo$|^ens'
--------------------
; <ens160>@172.16.22.2:53, ID: 49390, OpCpde: Query, Status: Success
;; When: 2024-05-29T00:42:52+08:00
;; Query Time: 57.667µs
;; Msg Size: 292B
;; Question Section:
google.com. A
;; Answer Section:
google.com. 5 A INET 93.46.8.90
;; Authority Section:
google.com. NS INET ns2.google.com.
google.com. NS INET ns1.google.com.
google.com. NS INET ns4.google.com.
google.com. NS INET ns3.google.com.
;; Additional Section:
ns2.google.com. AAAA INET 2001:4860:4802:34::a
ns4.google.com. AAAA INET 2001:4860:4802:38::a
ns3.google.com. AAAA INET 2001:4860:4802:36::a
ns1.google.com. AAAA INET 2001:4860:4802:32::a
ns2.google.com. A INET 216.239.34.10
ns4.google.com. A INET 216.239.38.10
ns3.google.com. A INET 216.239.36.10
ns1.google.com. A INET 216.239.32.10
```
The question output format:
```shell
> dnstrack -d '^lo$|^ens' -oq
2024-05-29T00:44:02+08:00 <ens160>@172.16.22.2:53 A 44.959µs facebook.com.
2024-05-29T00:44:02+08:00 <lo>@127.0.0.53:53 A 16.416µs facebook.com.
2024-05-29T00:44:02+08:00 <lo>@127.0.0.53:53 A 33.125µs facebook.com.
2024-05-29T00:44:04+08:00 <lo>@127.0.0.53:53 A 35.125µs twitter.com.
2024-05-29T00:44:04+08:00 <lo>@127.0.0.53:53 A 59.166µs twitter.com.
2024-05-29T00:44:04+08:00 <ens160>@172.16.22.2:53 A 72.373058ms twitter.com.
2024-05-29T00:44:08+08:00 <ens160>@172.16.22.2:53 A 72.008765ms google.com.
2024-05-29T00:44:08+08:00 <lo>@127.0.0.53:53 A 72.072515ms google.com.
2024-05-29T00:44:08+08:00 <lo>@127.0.0.53:53 A 72.309974ms google.com.
2024-05-29T00:44:13+08:00 <ens160>@172.16.22.2:53 A 80.584µs x.com.
2024-05-29T00:44:13+08:00 <lo>@127.0.0.53:53 A 39.667µs x.com.
2024-05-29T00:44:13+08:00 <lo>@127.0.0.53:53 A 72.417µs x.com.
```
- Screenshots: (optional) gif/png/jpg
- Future update plans:
| open | 2024-05-29T05:02:15Z | 2024-05-29T06:05:47Z | https://github.com/521xueweihan/HelloGitHub/issues/2756 | [] | chenjiandongx | 0 |
localstack/localstack | python | 11,983 | bug: Batch of ECS FargateContainer doesn't pass secrets to environment variable | ### Is there an existing issue for this?
- [X] I have searched the existing issues
- #8492
- But the difference is that this issue is about AWS Batch.
### Current Behavior
The job definition is set up with an EcsFargateContainerDefinition.
The `environment` option of the EcsFargateContainerDefinitionProps is passed through to environment variables, but the `secrets` option is not.
I attempted to access the container directly to check the environment variables by searching for the cluster and task ID. While the compute environment contained an ecsClusterArn, when I checked the actual cluster it was in a failures state with the reason MISSING, making it impossible to verify the cluster and access the container.
If this is not a bug, please let me know with an example of how to set this up.
### Expected Behavior
AWS Batch with a job definition using EcsFargateContainerDefinition passes `secrets` through to environment variables.
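To make the expectation concrete, a sketch of the container-properties shape involved: in AWS's job-definition model, each entry under `secrets` should surface as an environment variable named `name` whose value is resolved from `valueFrom` (the image and ARN below are hypothetical):

```python
# Hypothetical job-definition fragment; LocalStack should inject DB_PASSWORD
# into the container environment alongside the plain `environment` entries.
container_properties = {
    "image": "my-app:latest",
    "environment": [
        {"name": "STAGE", "value": "test"},   # this part works today
    ],
    "secrets": [
        {
            "name": "DB_PASSWORD",            # env var the container should see
            "valueFrom": "arn:aws:secretsmanager:us-east-1:000000000000:secret:db-pass",
        }
    ],
}
```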
### How are you starting LocalStack?
With a docker-compose file
### Environment
```markdown
- OS:
- LocalStack:localstack/localstack-pro
LocalStack version:latest
```
### Anything else?
_No response_ | open | 2024-12-04T03:10:42Z | 2024-12-05T13:40:04Z | https://github.com/localstack/localstack/issues/11983 | [
"type: bug",
"aws:batch",
"aws:ecs",
"aws:fargate",
"status: backlog"
] | BrianKim-git | 0 |
JaidedAI/EasyOCR | deep-learning | 366 | Error in downloading Languages in EasyOCR- OS Error and URL Error- Help | I face this error, kindly help, I am new to this OCR topic...



| closed | 2021-02-04T14:05:16Z | 2021-02-22T01:04:37Z | https://github.com/JaidedAI/EasyOCR/issues/366 | [] | thanveerkamal | 1 |
deepset-ai/haystack | nlp | 9,071 | idea: documentation on choosing between Rankers | This is an interesting request that came in through documentation comments.
We could create a guide, similar to what we have for [Generators](https://docs.haystack.deepset.ai/docs/choosing-the-right-generator) or [Embedders](https://docs.haystack.deepset.ai/docs/choosing-the-right-embedder), on how to choose the right Ranker. | open | 2025-03-19T16:02:42Z | 2025-03-20T15:44:51Z | https://github.com/deepset-ai/haystack/issues/9071 | [
"type:documentation",
"P2"
] | dfokina | 0 |
aiogram/aiogram | asyncio | 927 | Task was destroyed but it is pending | ## Context
I'm using aiogram to build my Telegram bot, but as the load increased, I started getting a weird bug.
* Operating System: Linux (Heroku-22)
* Python Version: 3.9.13
* aiogram version: 2.20
* aiohttp version: 3.8.1
* uvloop version (if installed): 0.16.0
## Expected Behavior
I expect no such error to occur.
## Current Behavior
While this error is occurring, the bot ignores requests.
## Failure Information (for bugs)
Task was destroyed but it is pending!
task: <Task pending name='Task-88390' coro=<Dispatcher._process_polling_updates() done, defined at /app/.heroku/python/lib/python3.9/site-packages/aiogram/dispatcher/dispatcher.py:407> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fb1052d0ac0>()]>>
### Steps to Reproduce
Run the bot under load.
### Failure Logs
```
2022-06-18T22:30:32.381817+00:00 app[worker.1]: aiogram.contrib.middlewares.logging - INFO - Process update [ID:844341375]: [failed] (in 607 ms)
2022-06-18T22:30:32.381955+00:00 app[worker.1]: Exception ignored in: <coroutine object Dispatcher.process_update at 0x7fb103ee80c0>
2022-06-18T22:30:32.381968+00:00 app[worker.1]: Traceback (most recent call last):
2022-06-18T22:30:32.381973+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.9/site-packages/aiogram/dispatcher/dispatcher.py", line 256, in process_update
2022-06-18T22:30:32.382784+00:00 app[worker.1]: return await self.message_handlers.notify(update.message)
2022-06-18T22:30:32.382785+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.9/site-packages/aiogram/dispatcher/handler.py", line 126, in notify
2022-06-18T22:30:32.382785+00:00 app[worker.1]: current_handler.reset(ctx_token)
2022-06-18T22:30:32.382786+00:00 app[worker.1]: ValueError: <Token var=<ContextVar name='current_handler' at 0x7fb10a5aba90> at 0x7fb103e5d9c0> was created in a different Context
2022-06-18T22:30:32.382787+00:00 app[worker.1]: Exception ignored in: <coroutine object Handler.notify at 0x7fb103ee8040>
2022-06-18T22:30:32.382787+00:00 app[worker.1]: Traceback (most recent call last):
2022-06-18T22:30:32.382787+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.9/site-packages/aiogram/dispatcher/handler.py", line 126, in notify
2022-06-18T22:30:32.382788+00:00 app[worker.1]: current_handler.reset(ctx_token)
2022-06-18T22:30:32.382788+00:00 app[worker.1]: ValueError: <Token var=<ContextVar name='current_handler' at 0x7fb10a5aba90> at 0x7fb103e5d740> was created in a different Context
2022-06-18T22:30:32.382789+00:00 app[worker.1]: asyncio - ERROR - Task was destroyed but it is pending!
2022-06-18T22:30:32.382791+00:00 app[worker.1]: task: <Task pending name='Task-86501' coro=<Dispatcher._process_polling_updates() running at /app/.heroku/python/lib/python3.9/site-packages/aiogram/dispatcher/dispatcher.py:415> wait_for=<_GatheringFuture pending cb=[<TaskWakeupMethWrapper object at 0x7fb0fedbe5b0>()]>>
2022-06-18T22:30:32.392733+00:00 app[worker.1]: aiologger - ERROR - Error write data to Redis in timer. Details: coroutine ignored GeneratorExit
2022-06-18T22:30:32.392784+00:00 app[worker.1]: aiologger - ERROR - Error send content (selector). Details: coroutine ignored GeneratorExit
```
| closed | 2022-06-18T22:41:29Z | 2023-08-04T18:23:53Z | https://github.com/aiogram/aiogram/issues/927 | [
"needs triage",
"2.x"
] | koval01 | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,136 | [Bug]: /sdapi/v1/txt2img returns 404 with --nowebui argument | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I was running sdwebui on a serverless endpoint with a Docker image. I was using sdwebui at ref 5ef669de080814067961f28357256e8fe27544f4.
However, after updating to version 1.9.4, the /sdapi/v1/txt2img endpoint returns 404 Not Found. It looks like, with these arguments, it just starts a FastAPI app. Some endpoints I tried, like `/sdapi/v1/memory`, work, but I couldn't generate any image.
### Steps to reproduce the problem
I start the API with: `python -u /stable-diffusion-webui/webui.py --skip-python-version-check --skip-torch-cuda-test --skip-install --ckpt /model.safetensors --opt-sdp-attention --disable-safe-unpickle --api --nowebui --port 3000 --skip-version-check --no-hashing --no-download-sd-model`
App logs:
```
2024-07-03T10:05:04.760583190Z Startup time: 9.0s (import torch: 4.0s, import gradio: 0.9s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.8s, load scripts: 0.7s, lora_script.py: 1.1s).
2024-07-03T10:05:04.855311421Z INFO: Started server process [7]
2024-07-03T10:05:04.855348054Z INFO: Waiting for application startup.
2024-07-03T10:05:04.855817648Z INFO: Application startup complete.
2024-07-03T10:05:04.857897685Z INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
2024-07-03T10:05:04.891046567Z Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
2024-07-03T10:05:04.936282133Z INFO: 127.0.0.1:50592 - "GET /sdapi/v1/memory HTTP/1.1" 200 OK
2024-07-03T10:05:04.937890089Z Service OK.
2024-07-03T10:05:04.937906145Z WebUI API Service is ready. Starting RunPod...
2024-07-03T10:05:04.937909790Z --- Starting Serverless Worker | Version 1.2.1 ---
2024-07-03T10:05:04.973740388Z /usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
2024-07-03T10:05:04.973766529Z warnings.warn(
2024-07-03T10:05:05.615202925Z INFO | 3e94c472-aaae-46ca-ab7d-b24ba49723c5-u1 | Started
2024-07-03T10:05:05.622631156Z INFO: 127.0.0.1:50604 - "POST /sdapi/v1/txt2img HTTP/1.1" 404 Not Found
```
### What should have happened?
It should be returning 200 success for /sdapi/v1/txt2img
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
I can't get sysinfo from the serverless endpoint.
### Console logs
```Shell
2024-07-03T10:05:04.760583190Z Startup time: 9.0s (import torch: 4.0s, import gradio: 0.9s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.8s, load scripts: 0.7s, lora_script.py: 1.1s).
2024-07-03T10:05:04.855311421Z INFO: Started server process [7]
2024-07-03T10:05:04.855348054Z INFO: Waiting for application startup.
2024-07-03T10:05:04.855817648Z INFO: Application startup complete.
2024-07-03T10:05:04.857897685Z INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
2024-07-03T10:05:04.891046567Z Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
2024-07-03T10:05:04.936282133Z INFO: 127.0.0.1:50592 - "GET /sdapi/v1/memory HTTP/1.1" 200 OK
2024-07-03T10:05:04.937890089Z Service OK.
2024-07-03T10:05:04.937906145Z WebUI API Service is ready. Starting RunPod...
2024-07-03T10:05:04.937909790Z --- Starting Serverless Worker | Version 1.2.1 ---
2024-07-03T10:05:04.973740388Z /usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
2024-07-03T10:05:04.973766529Z warnings.warn(
2024-07-03T10:05:05.615202925Z INFO | 3e94c472-aaae-46ca-ab7d-b24ba49723c5-u1 | Started
2024-07-03T10:05:05.622631156Z INFO: 127.0.0.1:50604 - "POST /sdapi/v1/txt2img HTTP/1.1" 404 Not Found
```
### Additional information
_No response_ | closed | 2024-07-03T10:31:29Z | 2024-07-03T13:47:57Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16136 | [
"bug-report"
] | ibrahimsn98 | 1 |