| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
AirtestProject/Airtest | automation | 1,228 | Can a before hook & after hook be set for each Airtest and Poco step? | As the title says.
| open | 2024-07-15T13:14:37Z | 2024-07-15T13:14:37Z | https://github.com/AirtestProject/Airtest/issues/1228 | [] | lzus | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 266 | Install requirement error | I'm stuck on step 2 after cloning the files. When I enter the install requirements command I get:
Could not open requirements file (Errno 2). No such file or directory:
| closed | 2020-01-19T19:00:53Z | 2020-07-04T23:18:50Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/266 | [] | jgraf451 | 18 |
ClimbsRocks/auto_ml | scikit-learn | 47 | add_ convention for all Pipeline Transformers that add a feature | if a transformer adds a feature, start that transformer name in the Pipeline with `add_`.
These will each also have an attribute called "added_feature_name_".
Then, when grabbing the feature names inside ml_for_analytics, iterate through all these transformers and grab the `added_feature_name_`.
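The convention could be sketched like this (plain Python, no sklearn dependency; the transformer and feature names are illustrative):

```python
class AddLogFeature:
    """A transformer that adds a feature; its Pipeline name starts with add_."""

    added_feature_name_ = "log_value"  # convention: attribute with trailing underscore

    def transform(self, rows):
        # Append the new feature to each row (toy logic for illustration).
        return [dict(row, log_value=len(str(row))) for row in rows]


pipeline = [("add_log_feature", AddLogFeature())]

# ml_for_analytics-style collection of the added feature names:
added = [
    step.added_feature_name_
    for name, step in pipeline
    if name.startswith("add_")
]
assert added == ["log_value"]
assert "log_value" in AddLogFeature().transform([{"value": 3}])[0]
```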
| closed | 2016-08-22T16:38:31Z | 2016-08-24T22:48:17Z | https://github.com/ClimbsRocks/auto_ml/issues/47 | [] | ClimbsRocks | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 1,384 | Bootstrap 4 integration? | Is there a plan for integrating bootstrap 4? | closed | 2020-06-02T10:44:28Z | 2020-09-07T12:03:58Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1384 | [
"stale"
] | HenkVanHoek | 1 |
koxudaxi/datamodel-code-generator | pydantic | 2,018 | Generate Annotated types instead of root types for string constraints | **Is your feature request related to a problem? Please describe.**
given a schema like
```yaml
components:
schemas:
HereBeDragons:
type: string
pattern: here
PassMe:
type: object
properties:
mangled_string:
anyOf:
- $ref: '#/components/schemas/PettyNumber'
- $ref: '#/components/schemas/HereBeDragons'
PettyNumber:
type: string
pattern: '[0-9]{2,8}'
```
multiple root types are created, which in turn introduce object indirection
```python
from pydantic import BaseModel, Field, RootModel, constr
class HereBeDragons(RootModel[constr(pattern=r"here")]):
root: constr(pattern=r"here")
class MoreValues(BaseModel):
name: str
age: int | None = None
class PettyNumber(RootModel[constr(pattern=r"[0-9]{2,8}")]):
root: constr(pattern=r"[0-9]{2,8}")
class PassMe(BaseModel):
mangled_string: PettyNumber | HereBeDragons | None = None
```
**Describe the solution you'd like**
declare the types more directly, like
```python
# using python 3.12+ syntax
from pydantic import BaseModel, Field, StringConstraints
type HereBeDragons = Annotated[str, StringConstraints(pattern=r"here")]
type PettyNumber = Annotated[str, StringConstraints(pattern=r"[0-9]{2,8}")]
class PassMe(BaseModel):
mangled_string: PettyNumber | HereBeDragons | None = None
```
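The runtime difference can be seen without pydantic at all; a stdlib-only sketch (a bare compiled regex stands in for `StringConstraints`) showing that the `Annotated` form keeps the value a plain `str` with the constraint carried as inspectable metadata:

```python
import re
from typing import Annotated, get_args

# Stand-in for pydantic's StringConstraints: a bare compiled regex as metadata.
PettyNumber = Annotated[str, re.compile(r"[0-9]{2,8}")]

base_type, pattern = get_args(PettyNumber)
assert base_type is str                      # no wrapper class, just str + metadata
assert pattern.fullmatch("42") is not None   # constraint is still recoverable
assert pattern.fullmatch("x") is None
```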
| open | 2024-06-28T10:20:31Z | 2024-06-28T10:20:31Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2018 | [] | RonnyPfannschmidt | 0 |
piskvorky/gensim | nlp | 2,936 | PerplexityMetric example from docs doesn't print to terminal in my conda environment |
#### Problem description
> What are you trying to achieve?
I am trying to print the perplexity of an LDA model to stdout/stderr in a shell.
> What is the expected result?
Some floating point number printed to stdout/stderr
> What are you seeing instead?
I see no output in either stderr or stdout
#### Steps/code/corpus to reproduce
```sh
$ cat test.py
from gensim.models.callbacks import PerplexityMetric
from gensim.models.ldamodel import LdaModel
from gensim.test.utils import common_corpus, common_dictionary
# Log the perplexity score at the end of each epoch.
perplexity_logger = PerplexityMetric(corpus=common_corpus, logger='shell')
lda = LdaModel(common_corpus, id2word=common_dictionary, num_topics=5, callbacks=[perplexity_logger])
$ python test.py
$
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
```
Python 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform; print(platform.platform())
Linux-5.4.0-7642-generic-x86_64-with-debian-bullseye-sid
>>> import sys; print("Python", sys.version)
Python 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.19.1
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.5.2
>>> import gensim; print("gensim", gensim.__version__)
gensim 3.8.3
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 0
```
I have also dumped my conda env to a pastebin, [here is the link](https://pastebin.com/Du3kJ5QC).
----
I feel like I'm missing something obvious here, but as far as I can tell this example *should* be printing perplexity in my terminal, but it isn't. Many thanks in advance to the people who take the time to reply to this issue. | open | 2020-09-08T09:15:57Z | 2020-09-10T06:35:16Z | https://github.com/piskvorky/gensim/issues/2936 | [] | dwrodri | 1 |
gradio-app/gradio | deep-learning | 10,846 | Model3D: Use Babylon Viewer ESM | Model3D uses Babylon.js as a UMD module. Updating to ESM would allow tree-shaking and a reduced bundle size.
Also, Babylon.js's new viewer can help make the code smaller; it comes with UI elements like loading bars, animation controls, etc.
Current Canvas3D size: 4.6 MB
Updated Canvas3D size: 1.2 MB | open | 2025-03-20T16:52:47Z | 2025-03-21T11:04:30Z | https://github.com/gradio-app/gradio/issues/10846 | [
"enhancement"
] | CedricGuillemet | 0 |
scrapy/scrapy | python | 5,874 | Scrapy does not decode base64 MD5 checksum from GCS |
### Description
Incorrect GCS Checksum processing
### Steps to Reproduce
1. Obtain the checksum for an up-to-date file.
**Expected behavior:**
matches the checksum of the file downloaded
**Actual behavior:**
NOT matches the checksum of the file downloaded
**Reproduces how often:**
Always
### Versions
current
### Additional context
https://cloud.google.com/storage/docs/json_api/v1/objects
> MD5 hash of the data, encoded using [base64](https://datatracker.ietf.org/doc/html/rfc4648#section-4).
But Scrapy does not decode the MD5 hash from GCS.
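The missing step can be sketched in a few lines (a hypothetical helper, not Scrapy's actual code): the `md5Hash` field is base64 per RFC 4648, so it must be decoded before comparing against a locally computed digest:

```python
import base64
import hashlib

def gcs_md5_matches(md5_hash_b64: str, data: bytes) -> bool:
    # GCS returns the MD5 digest base64-encoded; decode it and compare
    # against the raw 16-byte digest of the downloaded data.
    return base64.b64decode(md5_hash_b64) == hashlib.md5(data).digest()

data = b"example object body"
gcs_field = base64.b64encode(hashlib.md5(data).digest()).decode()
assert gcs_md5_matches(gcs_field, data)
assert not gcs_md5_matches(gcs_field, b"different data")
```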
| closed | 2023-03-27T05:55:22Z | 2023-04-11T16:25:43Z | https://github.com/scrapy/scrapy/issues/5874 | [
"bug",
"good first issue"
] | namelessGonbai | 12 |
opengeos/streamlit-geospatial | streamlit | 32 | AttributeError |
File "C:\Users\Rojesh Thapa\PycharmProjects\streamlit-geospatial\multiapp.py", line 46, in run
k: v[0] if isinstance(v, list) else v for k, v in app_state.items()
AttributeError: 'str' object has no attribute 'items' | closed | 2022-02-08T10:28:53Z | 2022-02-23T10:23:47Z | https://github.com/opengeos/streamlit-geospatial/issues/32 | [] | RojeshThapa | 0 |
kubeflow/katib | scikit-learn | 1,904 | Remove unused `pkg/suggestion/v1beta1/bayesianoptimization` | /kind feature
**Describe the solution you'd like**
There is Python code for the Bayesian optimization suggestion service in [`pkg/suggestion/v1beta1/bayesianoptimization`](https://github.com/kubeflow/katib/tree/master/pkg/suggestion/v1beta1/bayesianoptimization), but since this code is unused, untested, and has no existing Docker images, we are not sure whether it is compatible with current Katib.
So, it would be better to remove `pkg/suggestion/v1beta1/bayesianoptimization`.
What do you think about this? @kubeflow/wg-automl-leads
---
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| closed | 2022-06-22T17:22:00Z | 2023-12-06T02:46:07Z | https://github.com/kubeflow/katib/issues/1904 | [
"kind/feature",
"lifecycle/frozen"
] | tenzen-y | 4 |
thtrieu/darkflow | tensorflow | 365 | Stop Darkflow producing a summary | Is there a flag to prevent Darkflow from producing a events summary every time it is run? | closed | 2017-08-01T16:45:55Z | 2018-01-10T03:55:07Z | https://github.com/thtrieu/darkflow/issues/365 | [] | jubjamie | 4 |
jina-ai/serve | machine-learning | 6,103 | TransformerTorchEncoder Install Cannot Find torch Version | **Describe the bug**
I am attempting to leverage a TransformerTorchEncoder in my flow to index documents. When I run flow.index(docs, show_progress=True) with this encoder, it fails when attempting to install with the message:
CalledProcessError: Command '['/home/ec2-user/anaconda3/envs/python3/bin/python', '-m', 'pip', 'install', '--compile', '--default-timeout=1000', 'torch==1.9.0+cpu', 'transformers>=4.12.0', '-f', 'https://download.pytorch.org/whl/torch_stable.html']' returned non-zero exit status 1.
This is the detailed info I can find in the error:
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cpu (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2, 2.1.0, 2.1.0+cpu, 2.1.0+cpu.cxx11.abi, 2.1.0+cu118, 2.1.0+cu121, 2.1.0+cu121.with.pypi.cudnn, 2.1.0+rocm5.5, 2.1.0+rocm5.6)
ERROR: No matching distribution found for torch==1.9.0+cpu
Is there any way to update this version or ignore the version so that this can install correctly?
---
**Environment**
- jina 3.22.4
- docarray 0.21.0
- jcloud 0.3
- jina-hubble-sdk 0.39.0
- jina-proto 0.1.27
- protobuf 4.24.3
- proto-backend upb
- grpcio 1.47.5
- pyyaml 6.0
- python 3.10.12
- platform Linux
- platform-release 5.10.192-183.736.amzn2.x86_64
- platform-version #1 SMP Wed Sep 6 21:15:41 UTC 2023
- architecture x86_64
- processor x86_64
- uid 3083049066853
- session-id 20a184b4-7982-11ee-9c84-02cdd40b6165
- uptime 2023-11-02T13:17:09.019222
- ci-vendor (unset)
- internal False
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_EARLY_STOP (unset)
* JINA_FULL_CLI (unset)
* JINA_GATEWAY_IMAGE (unset)
* JINA_GRPC_RECV_BYTES 4120095
* JINA_GRPC_SEND_BYTES 3172175
* JINA_HUB_NO_IMAGE_REBUILD (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_MP_START_METHOD (unset)
* JINA_OPTOUT_TELEMETRY (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_LOCKS_ROOT (unset)
* JINA_K8S_ACCESS_MODES (unset)
* JINA_K8S_STORAGE_CLASS_NAME (unset)
* JINA_K8S_STORAGE_CAPACITY (unset)
* JINA_STREAMER_ARGS (unset)
| closed | 2023-11-02T13:17:48Z | 2024-02-16T00:17:18Z | https://github.com/jina-ai/serve/issues/6103 | [
"Stale"
] | dkintgen | 6 |
NVIDIA/pix2pixHD | computer-vision | 336 | Edge2face experiment with CelebA-HQ | Can you post how you train the Pix2Pix HD model to learn Edge 2 face ? Like what ngf and ndf did you use ? What batch sizes? learning rate? How long did it take for your model to learn? What size were you using ? I have been training a Pix2Pix HD model with both ngf and ndf set to 128 for 7 days now some images are close where other way off. I am doing a size of 1028 X 1028. I trying to see what I need to do to get better outputs.
here is my logfile:
https://docs.google.com/spreadsheets/d/1iaYbW5LLdJMvTSfX_D38V6H_2s9NI8QYywD2VNUkb9Q/edit?usp=sharing
so any help would be great | open | 2024-03-29T18:28:07Z | 2024-03-29T18:28:07Z | https://github.com/NVIDIA/pix2pixHD/issues/336 | [] | Code-Author | 0 |
chaoss/augur | data-visualization | 2,641 | New Contributors Closing Issues metric API | The canonical definition is here: https://chaoss.community/?p=3615 | open | 2023-11-30T18:06:55Z | 2023-11-30T18:18:59Z | https://github.com/chaoss/augur/issues/2641 | [
"API",
"first-timers-only"
] | sgoggins | 0 |
psf/black | python | 3,663 | string_processing: Better whitespace allocation for an assert statement | **Describe the style change**
Strange indentation of a long `assert` statement in `--preview` mode (it works correctly in the stable style).
**Examples in the current (preview) _Black_ style**
```python
assert (
result
== "Lorem ipsum dolor sit amet,\n"
"consectetur adipiscing elit,\n"
"sed doeiusmod tempor incididunt\n"
)
```
**Desired style**
```python
assert result == (
"Lorem ipsum dolor sit amet,\n"
"consectetur adipiscing elit,\n"
"sed doeiusmod tempor incididunt\n"
)
```
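For reference, the two layouts are semantically identical; a quick plain-Python check that both compare against the same concatenated string (the requested change is purely visual):

```python
result = (
    "Lorem ipsum dolor sit amet,\n"
    "consectetur adipiscing elit,\n"
    "sed doeiusmod tempor incididunt\n"
)
# Implicit string concatenation happens at parse time, so both layouts
# compare result against one single string; only the indentation differs.
assert (
    result
    == "Lorem ipsum dolor sit amet,\n"
    "consectetur adipiscing elit,\n"
    "sed doeiusmod tempor incididunt\n"
)
assert result == (
    "Lorem ipsum dolor sit amet,\n"
    "consectetur adipiscing elit,\n"
    "sed doeiusmod tempor incididunt\n"
)
```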
**Additional context**
Similar to #3409
| open | 2023-04-26T20:52:37Z | 2024-01-30T18:27:58Z | https://github.com/psf/black/issues/3663 | [
"T: style",
"F: strings",
"C: preview style"
] | st-pasha | 1 |
Miserlou/Zappa | django | 1,602 | Don't exclude *.egg-info from lambda package |
## Context
Excluding `*.egg-info` from the package deployed to lambda, when `slim_handler: true` causes random "DistributionNotFound" errors for packages that do not have `dist-info` but have `egg-info`.
Related issues:
- https://github.com/Miserlou/Zappa/issues/1555
- https://github.com/Miserlou/Zappa/issues/1601
I had exactly the problem described in https://github.com/Miserlou/Zappa/issues/1555 and after investigation it turned out that `pkg_resources` ignores packages that do not have `dist-info` or `egg-info` directories. After removing `*.egg-info` from `ZIP_EXCLUDES` in Zappa's `core.py`, the code executes without issues!
## Expected Behavior
All required files from venv should be included in the lambda package
## Actual Behavior
`.egg-info` directories are missing which is causing `DistributionNotFound` errors.
## Possible Fix
Remove '*.egg-info' from ZIP_EXCLUDES.
## Steps to Reproduce
1. `docker run -it amazonlinux:1 bash`
2. `export LANG=en_US.utf8`
3. `yum install -y python36-pip python36-devel vim`
4. `pip-3.6 install zappa awscli`
5.
```
ln -fs /usr/bin/python3 /usr/bin/python
ln -fs /usr/bin/pip-3.6 /usr/bin/pip
```
6. `mkdir /tmp/myproject && cd /tmp/myproject`
7. `python -m venv .env`
8. `. .env/bin/activate`
9. `pip install sagemaker`
10.
```
export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=xxx
export AWS_DEFAULT_REGION=eu-west-1
```
11. Create `lambda_function.py` with:
```
import sagemaker
def lambda_handler():
print("Hello")
```
12. Create `zappa_settings.json` (see below)
13. `zappa deploy dev`
14. Execute Lambda in AWS and get this error:
```
The 'sagemaker' distribution was not found and is required by the application: DistributionNotFound
Traceback (most recent call last):
File "/var/task/handler.py", line 567, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 237, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 129, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/tmp/test-zappa-egginfo/lambda_function.py", line 1, in <module>
import sagemaker
File "/tmp/test-zappa-egginfo/sagemaker/__init__.py", line 15, in <module>
from sagemaker import estimator # noqa: F401
File "/tmp/test-zappa-egginfo/sagemaker/estimator.py", line 23, in <module>
from sagemaker.analytics import TrainingJobAnalytics
File "/tmp/test-zappa-egginfo/sagemaker/analytics.py", line 22, in <module>
from sagemaker.session import Session
File "/tmp/test-zappa-egginfo/sagemaker/session.py", line 28, in <module>
from sagemaker.user_agent import prepend_user_agent
File "/tmp/test-zappa-egginfo/sagemaker/user_agent.py", line 22, in <module>
SDK_VERSION = pkg_resources.require('sagemaker')[0].version
File "/tmp/test-zappa-egginfo/pkg_resources/__init__.py", line 892, in require
needed = self.resolve(parse_requirements(requirements))
File "/tmp/test-zappa-egginfo/pkg_resources/__init__.py", line 778, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'sagemaker' distribution was not found and is required by the application
```
15. Modify `/usr/local/lib/python3.6/site-packages/zappa/core.py`: remove `'*.egg-info'` from `ZIP_EXCLUDES` in line 201
16. `zappa update dev`
17. Execute Lambda again, get successful execution.
## Your Environment
* Zappa version used: 0.46.2
* Operating System and Python version: Amazon Linux, Python 3.6.5.
Similar issue happens on MacOS 10.12.6 with Python 3.6.5 for other packages.
* The output of `pip freeze`:
```
boto3==1.8.6
botocore==1.11.6
docutils==0.14
jmespath==0.9.3
numpy==1.15.1
protobuf==3.6.1
protobuf3-to-dict==0.1.5
python-dateutil==2.7.3
PyYAML==3.13
s3transfer==0.1.13
sagemaker==1.9.2
scipy==1.1.0
six==1.11.0
urllib3==1.23
```
* Link to your project (optional): N/A
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "lambda_function.lambda_handler",
"aws_region": "eu-west-1",
"project_name": "test-zappa-egginfo",
"runtime": "python3.6",
"s3_bucket": "some-bucket-here",
"manage_roles": false,
"role_arn": "role-arn-here",
"keep_warm": false,
"slim_handler": true,
"log_level": "DEBUG",
"debug": true,
"touch": false,
"num_retained_versions": 1
}
}
```
| open | 2018-09-04T12:26:53Z | 2018-09-17T11:44:52Z | https://github.com/Miserlou/Zappa/issues/1602 | [] | paulina-mudano | 1 |
MycroftAI/mycroft-core | nlp | 2,423 | pes | # How to submit an Issue to a Mycroft repository
When submitting an Issue to a Mycroft repository, please follow these guidelines to help us help you.
## Be clear about the software, hardware and version you are running
For example:
* I'm running a Mark 1
* With version 0.9.10 of the Mycroft software
* With the standard Wake Word
## Try to provide steps that we can use to replicate the Issue
For example:
1. Burn the 0.9.10 image to Micro SD card using Etcher
2. Seat the Micro SD card in the RPi 3
3. Boot Picroft
4. Wait 3 minutes
5. The red light will come on indicating that the RPi 3 is overheating
6. Running `htop` via the command line indicates a number of Zombie'd processes
## Be as specific as possible about the expected condition, and the deviation from expected condition.
This is called _object-deviation format_. Specify the object, then the deviation of the object from an expected condition.
Example 1:
* When I say "Hey Mycroft, set your eyes to cadet blue", the eyes turn purple instead of blue.
Example 2:
* When I say "Hey Mycroft, what time is it in Paris", the time spoken is out by one hour - it's not observing daylight savings time.
Example 3:
* When I run `msm default` on my Mark 1, I receive lots of Git 'locked file' errors on the command line.
## Provide log files or other output to help us see the error
We will normally require log files or other troubleshooting information to assist you with your Issue.
This [documentation](https://mycroft.ai/documentation/troubleshooting/) explains how to find log files.
As of version 0.9.10, the [Support Skill](https://github.com/MycroftAI/skill-support) also helps to automate gathering support information.
Simply say:
* "Create a support ticket" _or_
* "You're not working!" _or_
* "Send me debug info"
and the Skill will put together a support package which you can email to us.
## Upload any files to the Issue that will be useful in helping us to investigate
Please ensure you upload any relevant files - such as screenshots - which will aid us investigating.
| closed | 2019-12-11T18:38:51Z | 2019-12-11T19:00:08Z | https://github.com/MycroftAI/mycroft-core/issues/2423 | [] | tsigalko2003 | 1 |
NVlabs/neuralangelo | computer-vision | 131 | W&B Logging Multiple Images | I see another issue: in W&B there are sliders at the bottom of the images where it seems you can see a history/progression of images, but for some reason whenever I run Neuralangelo on my computer and connect it to W&B it never shows these sliders, and it only ever shows the very first image.
Do you know what setting might be wrong that would make more than a single image show up (so those sliders appear, as shown in the example image below)?

| open | 2023-09-29T13:22:28Z | 2024-10-30T10:48:24Z | https://github.com/NVlabs/neuralangelo/issues/131 | [] | Chadt54 | 2 |
rthalley/dnspython | asyncio | 651 | The AMTRELAY type code 259 is wrong | https://github.com/rthalley/dnspython/blob/585add9d96ed981ab6a23b373a9166d8986620ab/dns/rdatatype.py#L99
See AMTRELAY RRType Definition in [Section 4.1 of [RFC8777]](https://tools.ietf.org/html/rfc8777#section-4.1) | closed | 2021-03-16T18:03:15Z | 2021-03-16T18:29:41Z | https://github.com/rthalley/dnspython/issues/651 | [] | SeaHOH | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1214 | SocketIO Peer to Peer | Is there any socket.io-p2p functionality coming soon? As you know, WebRTC is becoming more reliable, and socket.io-p2p is possible through Node.js, as their examples demonstrate here: https://github.com/socketio/socket.io-p2p. I'm not sure if Flask-SocketIO has any implementation of this that I'm unaware of. If so, could you explain? | closed | 2020-03-21T20:17:54Z | 2020-03-22T00:12:51Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1214 | [
"question"
] | absurdprofit | 3 |
Lightning-AI/LitServe | api | 47 | Only enable either `predict` or `stream-predict` API based on `stream` argument to LitServer | LitServe uses `/predict` for regular and `/stream-predict` endpoint path with streaming mode.
We should only register either of these to FastAPI endpoint route or alternatively maybe just use `/predict` path for both the tasks by passing the relevant API function.
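A sketch of the single-path option (plain Python, with a dict standing in for FastAPI's route table; the handler names are hypothetical):

```python
def build_routes(stream: bool) -> dict:
    # Choose the handler by mode and expose a single path either way,
    # instead of registering both /predict and /stream-predict.
    handler = "stream_predict_fn" if stream else "predict_fn"
    return {"/predict": handler}

# Only one route exists regardless of mode:
assert build_routes(stream=False) == {"/predict": "predict_fn"}
assert build_routes(stream=True) == {"/predict": "stream_predict_fn"}
```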
<img width="1509" alt="image" src="https://github.com/Lightning-AI/litserve/assets/21018714/daf062e2-d9b6-43a7-95ca-4e723ee8ad6d">
### To Reproduce
Steps to reproduce the behavior:
Run a server with `LitServer(..., stream=True)`
| closed | 2024-04-16T16:45:49Z | 2024-05-17T22:43:41Z | https://github.com/Lightning-AI/LitServe/issues/47 | [
"bug",
"help wanted"
] | aniketmaurya | 2 |
netbox-community/netbox | django | 17,826 | Make it possible to add relations to plugin models to GraphQL types | ### NetBox version
v4.1.4
### Feature type
New functionality
### Triage priority
N/A
### Proposed functionality
Currently the relations that can be retrieved with the GraphQL `TenantType` object type are hard-coded in `tenancy/graphql/types.py`.
I suggest adding a way to register plugin models with GraphQL so that it's possible to retrieve plugin objects together with tenants, like this:
```
query { tenant_list { plugin_object { id } } }
```
Currently only the opposite direction works:
```
query { plugin_object_list { tenant { name } } }
```
### Use case
I am maintaining a plugin that supports tenancy for some of its models. It would be useful to be able to get a list of plugin objects when querying a tenant.
### Database changes
None
### External dependencies
_No response_ | closed | 2024-10-22T10:55:45Z | 2025-01-31T03:02:10Z | https://github.com/netbox-community/netbox/issues/17826 | [
"status: duplicate",
"type: feature",
"netbox"
] | peteeckel | 3 |
indico/indico | sqlalchemy | 6,381 | Room reservation: Make whole location settings menu item clickable | 
It would be more intuitive if the whole item *Rooms* (*Orte*), and not just the settings wheel, were clickable. | closed | 2024-06-07T10:16:48Z | 2024-06-10T09:59:12Z | https://github.com/indico/indico/issues/6381 | [
"enhancement"
] | paulmenzel | 1 |
yt-dlp/yt-dlp | python | 11,901 | [RFE] Support visir.is | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Germany
### Example URLs
- Single video: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
### Provide a description that is worded well enough to be understood
Just what it says on the tin. Visir.is is apparently not currently supported, and the generic fallback does not work. Would appreciate it if support was added. Thanks!
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.23 from yt-dlp/yt-dlp [65cf46cdd] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-107213-gfed07efcde-20220622 (setts), ffprobe N-107213-gfed07efcde-20220622
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.23 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.23 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
[generic] briet-x-birnir-1000-ord: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] briet-x-birnir-1000-ord: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1634, in wrapper
File "yt_dlp\YoutubeDL.py", line 1769, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.visir.is/k/3bb38186-f16d-481f-8608-4c404aab0455-1727694046807/briet-x-birnir-1000-ord
```
| open | 2024-12-24T23:21:22Z | 2025-01-04T14:46:03Z | https://github.com/yt-dlp/yt-dlp/issues/11901 | [
"site-request"
] | africola | 6 |
chainer/chainer | numpy | 8,038 | ChainerX build failure on macOS | It fails to build because of the warning
```
In file included from /path/to/chainerx_cc/chainerx/native/native_device/misc.cc:12:
/path/to/chainerx_cc/chainerx/numeric.h:72:52: error: taking the absolute value of unsigned type 'unsigned char' has no effect [-Werror,-Wabsolute-value]
```
after #7319 is merged (https://travis-ci.org/chainer/chainer/jobs/575281021#L3288-L3311).
I have no idea whether the warning is valid. | closed | 2019-08-26T10:55:00Z | 2019-08-28T06:21:27Z | https://github.com/chainer/chainer/issues/8038 | [
"cat:bug",
"prio:high"
] | toslunar | 3 |
ultralytics/yolov5 | pytorch | 13,079 | Get segmentation from yolov5 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I get segmentations in the form [[670, 35], [6, 305], [60, 3]] from a yolov5 segmentation model?
### Additional
Thanks in advance | closed | 2024-06-10T22:04:08Z | 2024-10-20T19:47:31Z | https://github.com/ultralytics/yolov5/issues/13079 | [
"question",
"Stale"
] | manoj-samal | 7 |
TracecatHQ/tracecat | fastapi | 128 | Evaluate structlog and loguru | # Evaluation
## Resources used
- https://betterstack.com/community/guides/logging/structlog/
- https://betterstack.com/community/guides/logging/loguru/
- Both libraries' docs
## Breakdown
| Function | Loguru | Structlog|
|--------------|---------|----------|
|Attach structured context| Yes, call `.bind` or `.contextualize` to attach variables | Yes, call `.bind` |
| Add new logger | Call `.add` | Likely some additional config |
| Colored logs | Yes | Yes |
| Setup complexity | Low, literally just use one logger | Mid-High , requires creation of pipeline-like object with processors applied onto the logger |
| f-string syntax | Yes | No |
| JSON logs | Yes, use `serialize` | Yes, use processor |
|Structured exceptions/traces | No | Yes |
|Exception diagnostics | Yes, use `diagnose` flag | No |
|Async-Await syntax compatibility | ? | Available|
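The "Attach structured context" row above is the `.bind` pattern both libraries share: binding returns a new logger whose extra context is merged into every subsequent record. A stdlib-only toy version of that pattern (an illustration only, not either library's implementation):

```python
import json

class ToyLogger:
    """Minimal stand-in for the .bind() pattern loguru and structlog share."""

    def __init__(self, context=None):
        self.context = context or {}

    def bind(self, **kwargs):
        # Like loguru/structlog: return a NEW logger with extra context attached.
        return ToyLogger({**self.context, **kwargs})

    def info(self, message):
        # Emit a JSON log line with the bound context merged in.
        return json.dumps({"event": message, **self.context})

log = ToyLogger().bind(request_id="abc123")
print(log.info("user logged in"))
# → {"event": "user logged in", "request_id": "abc123"}
```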
# Verdict
Loguru appears simpler to get started with. Structlog apparently needs more config before it's actually usable.
They mostly offer the same features but loguru is more featureful.
-> going with Loguru | closed | 2024-05-03T23:48:01Z | 2024-05-04T17:13:05Z | https://github.com/TracecatHQ/tracecat/issues/128 | [
"enhancement",
"engine"
] | daryllimyt | 0 |
PokeAPI/pokeapi | api | 1,153 | Lack of distinction in Evolution and Reminder moves | Sorry if this is an invalid issue, but I don't really know the consensus behind it.
I'm finding lots of inconsistencies with [reminder moves](https://bulbapedia.bulbagarden.net/wiki/Move_Reminder#Reminder-exclusive_moves) among the moves in the Scarlet and Violet move version group in Pokémon movesets.
Reminder moves are listed under the `egg` `move-learn-method` with no distinction between them and... actual egg moves. This is true even for legendaries (I checked Uxie and Entei).
I expected them to be listed as level 0 or level 1 moves, and could only find it appearing as such under Dazzling Gleam in Alolan Ninetales' moves.
Evolution moves seem to be consistently listed as level 0 `level-up` moves. But even so, that doesn't specifically tell you they're learned on evolution.
Would a new boolean field within the move objects in the moves array for whether it's an evolution or reminder move, or entirely new `move-learn-method` endpoints, be a solution? | open | 2024-10-21T06:46:21Z | 2024-10-22T03:39:28Z | https://github.com/PokeAPI/pokeapi/issues/1153 | [] | Deleca7755 | 1 |
thp/urlwatch | automation | 645 | Telegram Reporter: Any way of setting the "parse_mode"? | Hi,
is there any way to set the "parse_mode" option for the telegram reporter?
It would be cool to have the telegram chat recognize HTML tags like "a" as clickable links, especially for very long links
This feature is described here: https://core.telegram.org/bots/api#html-style
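For context, on the Telegram Bot API side this is just one extra field in the `sendMessage` request body; the open question is only how urlwatch's reporter config should expose it. A sketch of the payload itself (request body only, field names per the Bot API page linked above):

```python
def build_send_message_payload(chat_id, html_text):
    """Build a Telegram sendMessage body that renders HTML tags like <a>."""
    return {
        "chat_id": chat_id,
        "text": html_text,
        "parse_mode": "HTML",  # per https://core.telegram.org/bots/api#html-style
    }

payload = build_send_message_payload(
    12345, '<a href="https://example.com/very/long/url">changed page</a>'
)
print(payload["parse_mode"])  # → HTML
```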
Also I want to say thank you for such an awesome project!
If this "issue" is more of a feature request, can I somehow contribute to making this feature happen?
Thanks in advance for a reply | open | 2021-05-27T15:55:36Z | 2022-03-04T20:43:44Z | https://github.com/thp/urlwatch/issues/645 | [] | c0deing | 2 |
tensorflow/tensor2tensor | deep-learning | 1,265 | Why initialize from same checkpoint but first loss is different? | ### Description
I am trying to do transfer learning on top of the tensor2tensor translation problem.
I use the warm_start_from param. The code shows it triggers initialize_from_ckpt and is supposed to init variables based on my checkpoint (which is my base model) passed in the warm_start_from param.
One thing I don't get here is the first loss differing between training attempts when all the training models are based on the same checkpoint.
ie:
model1:
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into C:\Users\admin\Desktop\project\transferlearning\childmodel\transfer_1128model_3_output\model.ckpt.
INFO:tensorflow:loss = 5.121408, step = 0
model2:
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into C:\Users\admin\Desktop\project\transferlearning\childmodel\transfer_1128model_2_output\model.ckpt.
INFO:tensorflow:loss = 4.8216796, step = 0
model 1 and model 2 use warm_start_from with the same checkpoint (base model). Since their initialized variables are the same, why are the first losses different?
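One plausible (unconfirmed) explanation: even with identical initial weights, the first reported loss depends on which examples land in the first batch, and shuffled input pipelines order data differently per run. A toy, framework-free illustration of that effect:

```python
def loss_on_batch(weight, batch):
    # Toy "loss": mean squared distance of batch values from the weight.
    return sum((x - weight) ** 2 for x in batch) / len(batch)

weight = 0.5  # identical "checkpoint" weights in both runs

run1_first_batch = [0, 1, 2, 3]  # shuffle order in attempt 1
run2_first_batch = [6, 7, 8, 9]  # shuffle order in attempt 2

l1 = loss_on_batch(weight, run1_first_batch)
l2 = loss_on_batch(weight, run2_first_batch)
print(l1, l2)  # → 2.25 50.25 — same weights, different first losses
```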
### Environment information
```
OS: win10
$ pip freeze | grep tensor
# your output here
$ python -V
# your output here
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2018-12-03T08:20:57Z | 2018-12-08T14:45:50Z | https://github.com/tensorflow/tensor2tensor/issues/1265 | [] | happylittlebunny | 11 |
recommenders-team/recommenders | data-science | 1,387 | MemoryError: Unable to allocate 75.6 GiB for an array with shape (63577, 319205) and data type float32 | If my data is very large, how can I get it to run? I have no way to increase memory or to trim the dataset. | closed | 2021-04-29T07:15:55Z | 2021-04-29T12:55:05Z | https://github.com/recommenders-team/recommenders/issues/1387 | [
"help wanted"
] | rookiexiao123 | 1 |
horovod/horovod | deep-learning | 3,568 | BaseHorovodWorker | Hi, I am trying to send a dataset that I processed locally over to the Ray executor, but 8 out of 10 times I'm getting an error that the object owner has died. Is there any way to prevent this?
error sample:

```
error ray::BaseHorovodWorker.execute() (pid=132, ip=10.0.3.38, repr=<horovod.ray.worker.BaseHorovodWorker object at 0x7f5b555e1210>)
microservice_distributedtraining.1.lclmxgm94n89@template-maquina-learning-1 | At least one of the input arguments for this task could not be computed:
microservice_distributedtraining.1.lclmxgm94n89@template-maquina-learning-1 | ray.exceptions.OwnerDiedError: Failed to retrieve object 0084dccab86986a08bfb38ae278b2ccc11f06e8e0100000002000000. To see information about where this ObjectRef was created in Python, set the environment variable RAY_record_ref_creation_sites=1 during ray start and ray.init().
microservice_distributedtraining.1.lclmxgm94n89@template-maquina-learning-1 |
microservice_distributedtraining.1.lclmxgm94n89@template-maquina-learning-1 | The object’s owner has exited. This is the Python worker that first created the ObjectRef via .remote() or ray.put(). Check cluster logs (/tmp/ray/session_latest/logs/*01000000ffffffffffffffffffffffffffffffffffffffffffffffff* at IP address 172.18.0.3) for more information about the Python worker failure.
``` | closed | 2022-06-07T13:38:13Z | 2022-09-16T02:02:56Z | https://github.com/horovod/horovod/issues/3568 | [
"wontfix"
] | joaoderocha | 1 |
ets-labs/python-dependency-injector | asyncio | 572 | How to traverse containers up to find other provider | I have not found a good explanation in the docs on how to achieve the following:
There is an application that is organised in bounded contexts which have no overlap, each has a set of controllers (to simplify).
However, it should be possible to reference a controller from context "other" in the controllers container of context "one".
What can I do to make the test pass?
<details>
<summary>Click to toggle self-contained example</summary>
```py
from types import SimpleNamespace

from dependency_injector import containers, providers


class Controller:
    def __init__(self, context):
        self.context = context


class OneContextControllers(containers.DeclarativeContainer):
    controller: providers.Singleton = providers.Singleton(Controller, "one")
    controller_from_other_context: providers.Singleton = providers.Singleton(
        lambda: SimpleNamespace(context="What to do here?")
    )


class OtherContextControllers(containers.DeclarativeContainer):
    controller: providers.Singleton = providers.Singleton(Controller, "other")


class OneContext(containers.DeclarativeContainer):
    controllers: providers.Provider[OneContextControllers] = providers.Container(
        OneContextControllers
    )


class OtherContext(containers.DeclarativeContainer):
    controllers: providers.Container[OtherContextControllers] = providers.Container(
        OtherContextControllers
    )


class AppContainer(containers.DeclarativeContainer):
    one_context: providers.Container[OneContext] = providers.Container(OneContext)
    other_context: providers.Container[OtherContext] = providers.Container(OtherContext)


app = AppContainer()


def test_controllers__ok():
    assert app.one_context.controllers.controller().context == "one"
    assert app.other_context.controllers.controller().context == "other"


def test_access_controller_from_other_context():
    assert (
        app.one_context.controllers.controller_from_other_context().context == "other"
    )
```
</details>
| open | 2022-03-29T08:37:32Z | 2022-03-29T08:37:32Z | https://github.com/ets-labs/python-dependency-injector/issues/572 | [] | chbndrhnns | 0 |
davidsandberg/facenet | computer-vision | 669 | RetVal error on changing batch size | I made some changes to this git repo and applied it to speaker verification. But whenever I change the batch size from 30, I get a RetVal error. Here is the error -
```
Traceback (most recent call last):
File "train_tripletloss.py", line 485, in <module>
main(parse_arguments(sys.argv[1:]))
File "train_tripletloss.py", line 169, in main
args.embedding_size, anchor, positive, negative, triplet_loss)
File "train_tripletloss.py", line 256, in train
err, _, step, emb, lab = sess.run([loss, train_op, global_step, embeddings, labels_batch], feed_dict=feed_dict)
File "/home/abhishek_dandona/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/home/abhishek_dandona/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1137, in _run
feed_dict_tensor, options, run_metadata)
File "/home/abhishek_dandona/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1355, in _do_run
options, run_metadata)
File "/home/abhishek_dandona/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1374, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[1] does not have value
```
And Here is my github repo
[https://github.com/ad349/Deep_Speaker/tree/wip](https://github.com/ad349/Deep_Speaker/tree/wip) | open | 2018-03-23T07:39:29Z | 2018-03-23T07:39:29Z | https://github.com/davidsandberg/facenet/issues/669 | [] | ad349 | 0 |
nteract/papermill | jupyter | 125 | Papermill hangs if kernel is killed | Today papermill is hanging when kernels get OOM killed instead of returning with an error status code within some reasonable timeframe.
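A liveness check on the kernel process is what's missing here; a stdlib-only sketch (POSIX assumed, and not papermill's actual code) showing that a parent process can notice a `kill -9`'d child:

```python
import signal
import subprocess
import sys
import time

# Spawn a stand-in "kernel" that would otherwise sleep indefinitely.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])
time.sleep(0.5)

proc.send_signal(signal.SIGKILL)  # simulate the OOM killer
proc.wait(timeout=5)

# On POSIX, a killed child reports the negative signal number.
print(proc.returncode == -signal.SIGKILL)  # → True
```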
Steps to reproduce:
- Make a notebook which sleeps indefinitely.
- Call papermill on the notebook
- Kill -9 the kernel process
- See papermill not exit (ever) | closed | 2018-03-28T19:34:15Z | 2019-04-25T05:44:14Z | https://github.com/nteract/papermill/issues/125 | [
"bug"
] | MSeal | 5 |
keras-team/keras | tensorflow | 20,274 | Incompatibility in 'tf.GradientTape.watch' of TensorFlow 2.17 in Keras 3.4.1 | I read issue 19155 (https://github.com/keras-team/keras/issues/19155), but I still have the problem.
I am trying to perform gradient descent on the model's trainable variables, but I get errors regarding `model.trainable_variables`.
TensorFlow version is 2.17.0
Keras version is 3.4.1
```python
def get_grad(model, X_train, data_train):
    with tf.GradientTape(persistent=True) as tape:
        # This tape is for derivatives with
        # respect to trainable variables
        tape.watch(model.trainable_variables.value)  # added .value from issue 19155
        loss = compute_loss(model, X_train, data_train)
    g = tape.gradient(loss, model.trainable_variables.value)
    del tape
    return loss, g
```
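For what it's worth, the error itself reproduces with no TensorFlow at all: in Keras 3, `model.trainable_variables` is a plain Python list, and plain lists have no `.value` attribute (stdlib-only illustration, assuming that list type):

```python
# model.trainable_variables in Keras 3 is a plain Python list;
# a plain list has no ".value" attribute, hence the AttributeError below.
variables = ["w1", "w2"]  # stand-ins for the actual variables

try:
    variables.value
    error_message = None
except AttributeError as exc:
    error_message = str(exc)

print(error_message)  # → 'list' object has no attribute 'value'
```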
###################
Error:

```
AttributeError: in user code:

File "<ipython-input-13-fc15d0ce6166>", line 7, in train_step  *
loss, grad_theta = get_grad(model, X_train, data_train)
File "<ipython-input-11-cca40e4543b3>", line 6, in get_grad  *
tape.watch(model.trainable_variables.value)

AttributeError: 'list' object has no attribute 'value'
``` | closed | 2024-09-20T06:20:02Z | 2024-10-22T02:03:06Z | https://github.com/keras-team/keras/issues/20274 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | yajuna | 4 |
waditu/tushare | pandas | 1,333 | get_tick_data returns None | When using tushare.get_tick_data with the date '2020-04-08' it returns None, but when the date is changed to '2020-04-07' or other dates, data is returned. Has the interface changed? | open | 2020-04-08T18:01:48Z | 2020-04-29T07:20:46Z | https://github.com/waditu/tushare/issues/1333 | [] | zhjsoftware | 9 |
sinaptik-ai/pandas-ai | data-science | 1,211 | There is a code error in the document | ### System Info
pandasai==2.1
python==3.12
### 🐛 Describe the bug
https://docs.pandas-ai.com/llms#langchain-models
```python
from pandasai import SmartDataframe
from langchain_openai import OpenAI
langchain_llm = OpenAI(openai_api_key="my-openai-api-key")
df = SmartDataframe("data.csv", config={"llm": langchain_llm})
```
*OpenAI* should be **ChatOpenAI** | closed | 2024-06-06T13:07:35Z | 2024-09-12T16:06:31Z | https://github.com/sinaptik-ai/pandas-ai/issues/1211 | [
"documentation"
] | tocer | 1 |
iterative/dvc | data-science | 10,475 | dvc queue not found dvc.yaml in .dvc/tmp/exps/tmp*/dvc.yaml | # Bug Report
## Issue name
ERROR: '<my path>/mnist/.dvc/tmp/exps/tmpxo1bv22y/dvc.yaml' does not exist
## Description
I built a dvc queue. When I try `dvc queue start` or `dvc exp run --run-all`, I get an error that dvc.yaml cannot be found. I checked the tmp directory the queue makes after running `dvc exp run --queue ...`:
there is a copy of my repo without dvc.yaml, so it cannot find it when it wants to start the run. There is train.py (the file I want to run in the stage), params.yaml, and ... but there is no dvc.yaml.
### Reproduce
This is my repo structure:

```
├── data
│   └── MNIST
│       └── raw
│           ├── t10k-images-idx3-ubyte
│           ├── t10k-images-idx3-ubyte.gz
│           ├── t10k-labels-idx1-ubyte
│           ├── t10k-labels-idx1-ubyte.gz
│           ├── train-images-idx3-ubyte
│           ├── train-images-idx3-ubyte.gz
│           ├── train-labels-idx1-ubyte
│           └── train-labels-idx1-ubyte.gz
├── dvc.lock
├── dvc.yaml
├── nohup.out
├── params.yaml
├── results
│   ├── metrics.json
│   └── plots
│       └── metrics
│           ├── accuracy.tsv
│           └── loss.tsv
└── src
    ├── dvc_experiment.ipynb
    ├── modules.py
    ├── __pycache__
    │   ├── modules.cpython-39.pyc
    │   ├── tasks.cpython-310.pyc
    │   └── tasks.cpython-39.pyc
    └── train.py
```
This is params.yaml:

```yaml
mnist:
  epochs: 1
  lr: 0.001
  momentum: 0.5
```
This is dvc.yaml:

```yaml
stages:
  train:
    cmd: python mnist/src/train.py
    deps:
    - src/train.py
    params:
    - mnist.epochs
    - mnist.lr
    - mnist.momentum
    metrics:
    - results/metrics.json
    plots:
    - results/plots/metrics:
        x: step
```
Example:
1. dvc exp run --queue -S "mnist.lr=0.001" -S "mnist.momentum=0.2, 0.5"
2. dvc exp run --run-all -> got error (not found dvc.yaml in /mnist/.dvc/tmp/exps/tmpxo1bv22y/dvc.yaml)
3. or use dvc queue start -> got an error, but it does not describe the bug
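As an aside on step 1 above: passing a value list like `-S "mnist.momentum=0.2, 0.5"` together with `--queue` makes DVC enqueue one experiment per value combination. A toy sketch of that expansion (illustration only, not DVC's code):

```python
from itertools import product

def expand_grid(params):
    """Expand param-name -> value-list mappings into one dict per experiment."""
    keys = list(params)
    return [dict(zip(keys, combo)) for combo in product(*(params[k] for k in keys))]

queue = expand_grid({"mnist.lr": [0.001], "mnist.momentum": [0.2, 0.5]})
print(len(queue))  # → 2
```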
### Expected
Run the queue and all of the experiments, just as when running `dvc exp run` directly.
### Environment information
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**

```console
$ dvc doctor
DVC version: 3.48.4 (pip)
-------------------------
Platform: Python 3.10.12 on Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Subprojects:
	dvc_data = 3.14.1
	dvc_objects = 5.1.0
	dvc_render = 1.0.1
	dvc_task = 0.3.0
	scmrepo = 3.2.0
Supports:
	http (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
	https (aiohttp = 3.9.3, aiohttp-retry = 2.8.3),
	ssh (sshfs = 2023.10.0)
Config:
	Global: /home/fteam4/.config/dvc
	System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/sda1
Caches: local
Remotes: local
Workspace directory: ext4 on /dev/sda1
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/3ca893080cef4a8f55e4a30b7747face
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
| closed | 2024-07-08T13:57:17Z | 2024-07-08T13:58:13Z | https://github.com/iterative/dvc/issues/10475 | [] | Hamzeluie | 0 |
TheAlgorithms/Python | python | 11,627 | Minecraft Kings Finaler Code | ### What would you like to share?
Kkkk
### Additional information
Bhbzh | closed | 2024-10-01T14:39:04Z | 2024-10-01T16:01:42Z | https://github.com/TheAlgorithms/Python/issues/11627 | [
"awaiting triage"
] | Masterboubi | 0 |
kensho-technologies/graphql-compiler | graphql | 715 | Validate that OrientDB datetime arguments are tz aware | We currently allow tz naive datetime arguments for OrientDB, but OrientDB will throw a very cryptic error if we pass in tz naive datetime arguments. So we should have explicit validation in the compiler requiring that OrientDB datetime arguments are tz aware. | closed | 2020-01-06T19:57:49Z | 2020-05-14T17:48:28Z | https://github.com/kensho-technologies/graphql-compiler/issues/715 | [
"enhancement"
] | pmantica1 | 1 |
Farama-Foundation/Gymnasium | api | 1,324 | [Proposal] Can the dependency `box2d-py==2.3.8` be replaced with `Box2D==2.3.10`, which will simplify the installation? | ### Proposal
Replace the dependency `box2d-py==2.3.8` with `Box2D==2.3.10`; this will simplify the installation.
hoping that more professional and trustworthy community members can test its function and check its security.
### Motivation
During the process of learning gym, I am always blocked by the dependency `box2d-py` every time I install it. The official has stopped maintaining this project, and there are very few built distributions in the official pypi repository. There are a large number of similar installation errors in issues and on the internet. A c++ compilation environment is required to install the `box2d-py` dependency normally, which is actually unnecessary.
The community provides the Box2D package (https://pypi.org/project/Box2D/). I downloaded the project code, changed the dependency configuration in `pyproject.toml` to `Box2D ==2.3.10`, and made no other changes. Installation was very easy, and there were no issues with initial use. This should reduce the difficulty for many beginners.
This may require more testing. I'm not very sure about the security of the `Box2D` package. I'm just making a suggestion, hoping that more professional and trustworthy community members can test its function and check its security.
### Pitch
```pyproject.toml
....
[project.optional-dependencies]
# Update dependencies in `all` if any are added or removed
atari = ["ale_py >=0.9"]
box2d = ["Box2D ==2.3.10", "pygame >=2.1.3", "swig ==4.*"]
....
all = [
# All dependencies above except accept-rom-license
# NOTE: No need to manually remove the duplicates, setuptools automatically does that.
# atari
"ale_py >=0.9",
# box2d
"Box2D ==2.3.10",
....
]
....
```
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| open | 2025-03-01T16:54:01Z | 2025-03-05T16:20:33Z | https://github.com/Farama-Foundation/Gymnasium/issues/1324 | [
"enhancement"
] | moretouch | 3 |
AntonOsika/gpt-engineer | python | 296 | Is GPT-3.5 supported | I'm on the GPT-4 waitlist and I'm trying to use the tool with it defaulting to 3.5 but when I run the script it starts generating a program specification that goes like this:
```
(venv) ➜ gpt-engineer git:(main) gpt-engineer projects/payments-test
INFO:openai:error_code=model_not_found error_message="The model 'gpt-4' does not exist" error_param=model error_type=invalid_request_error message='OpenAI API error received' stream_error=False
Model gpt-4 not available for provided API key. Reverting to gpt-3.5-turbo. Sign up for the GPT-4 wait list here: https://openai.com/waitlist/gpt-4-api
Program Specification:
The program is a command-line tool that takes a CSV file as input and outputs a JSON file. The CSV file contains information about employees, including their name, department, and salary. The program should calculate the total salary for each department and output it in the JSON file.
The program should have the following features:
- The user should be able to specify the input CSV file path and the output JSON file path as command-line arguments.
^C
Aborted.
```
Where is this CSV program coming from? Is it because the API is using 3.5? | closed | 2023-06-21T14:42:10Z | 2023-06-23T22:05:43Z | https://github.com/AntonOsika/gpt-engineer/issues/296 | [] | salazarm | 5 |
mars-project/mars | numpy | 3,282 | [BUG] CI install conda failed | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
CI install conda failed

**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-10-20T02:59:23Z | 2022-10-20T03:02:50Z | https://github.com/mars-project/mars/issues/3282 | [] | chaokunyang | 0 |
scanapi/scanapi | rest-api | 376 | Move the community communication channel away from Spectrum | Hello Everyone 👋🏾
Spectrum will become read-only on August 10, 2021. [Official announcement post](https://spectrum.chat/spectrum/general/join-us-on-our-new-journey~e4ca0386-f15c-4ba8-8184-21cf5fa39cf5).
Since Spectrum will become read-only, I suggest moving to some other platform; a dev channel really comes in handy for interacting with folks asynchronously, and for conversations that aren't a fit for GitHub issues, since a community is more than just about closing PRs.
I had suggested Zulip in the past and here is the [thread](https://spectrum.chat/scanapi/general/can-we-move-to-zulip~91d57217-4a5f-49fd-b00d-ab65621fe98c) on Spectrum.
Also, having a communication channel helps folks hang around after a sprint (winks at the EuroPython 2020 sprints), and new folks can join the conversation or give feedback after using scanapi or just hearing about it in a talk.
Opening this thread so we can have more people chip into the conversation.
PS: Here is a quick tour of [Zulip](https://zulip.com/hello/) | closed | 2021-05-26T10:42:08Z | 2021-06-21T14:47:26Z | https://github.com/scanapi/scanapi/issues/376 | [] | Pradhvan | 9 |
sgl-project/sglang | pytorch | 3,715 | [Bug] Could not find a version that satisfies the requirement sgl-kernel>=0.0.3.post6 | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.
### Describe the bug
I use the Dockerfile to build the image, but I get an error:
```
359.9 INFO: pip is looking at multiple versions of sglang[srt] to determine which version is compatible with other requirements. This could take a while.
359.9 ERROR: Could not find a version that satisfies the requirement sgl-kernel>=0.0.3.post6; extra == "srt" (from sglang[srt]) (from versions: 0.0.1)
362.0 ERROR: No matching distribution found for sgl-kernel>=0.0.3.post6; extra == "srt"
```
### Reproduction
docker build -t sglang .
### Environment
docker | closed | 2025-02-20T02:55:38Z | 2025-02-20T05:32:16Z | https://github.com/sgl-project/sglang/issues/3715 | [] | zwdgit | 3 |
autogluon/autogluon | scikit-learn | 4,337 | [BUG] Attribute Error when fitting a TimeSeriesPredictor instance | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [ x ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
I'm running into an `AttributeError` when trying to fit an AutoGluon time series model in Databricks. I'm following the example code in [this documentation](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) and reaching the error specifically when calling `.fit()` on the `TimeSeriesPredictor` instance.
**Expected behavior**
I'm hoping to have the `.fit()` command run entirely with no error so that I can kick-start an AutoGluon time series project.
**To Reproduce**
```python
single_series_training_df = pd.read_csv('/dbfs/FileStore/data.csv')
single_series_training_df['id'] = single_series_training_df.index
single_series_training_df['Date'] = pd.to_datetime(single_series_training_df['Date'])

single_series_train_data = TimeSeriesDataFrame.from_data_frame(
    single_series_training_df,
    id_column="id",
    timestamp_column="Date"
)

predictor = TimeSeriesPredictor(
    prediction_length=7,
    path="/dbfs/FileStore/autogluon-path",
    target="Sales",
    eval_metric="RMSE",
    freq='D'
)

predictor.fit(
    single_series_train_data,
    time_limit=600
)
```
Databricks cluster configuration
- Policy: Single User
- Databricks Runtime Version: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
- Use photon acceleration
- Worker type: E8as_v4 (workers=1, 64 GB memory, 8 cores)
- Driver type: E8as_v4 (64 GB memory, 8 cores)
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-07-23
time : 18:38:39.267847
python : 3.10.12.final.0
OS : Linux
OS-release : 5.15.0-1061-azure
Version : #70~20.04.1-Ubuntu SMP Mon Apr 8 15:38:58 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 8
cpu_ram_mb : 58770.0
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 203698
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.24.28
catboost : 1.2.5
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.15
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.2.0
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.5.2
mlforecast : 0.10.0
networkx : 3.3
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.24.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.18.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.0
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.9.1
seqeval : 1.2.2
setuptools : 63.4.1
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.4
transformers : 4.39.3
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
**Here's the output from the run:**
```
Beginning AutoGluon training... Time limit = 600s
AutoGluon will save models to '/dbfs/FileStore/autogluon-path'
System Info
AutoGluon Version: 1.1.1
Python Version: 3.10.12
Operating System: Linux
Platform Machine: x86_64
Platform Version: #70~20.04.1-Ubuntu SMP Mon Apr 8 15:38:58 UTC 2024
CPU Count: 8
GPU Count: 0
Memory Avail: 45.07 GB / 57.39 GB (78.5%)
Disk Space Avail: 1048576.00 GB / 1048576.00 GB (100.0%)
Fitting with arguments:
{'enable_ensemble': True,
'eval_metric': RMSE,
'freq': 'D',
'hyperparameters': 'default',
'known_covariates_names': [],
'num_val_windows': 1,
'prediction_length': 7,
'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'random_seed': 123,
'refit_every_n_windows': 1,
'refit_full': False,
'skip_model_selection': False,
'target': 'Sales',
'time_limit': 600,
'verbosity': 2}
```
**And here's the error message:**
```
AttributeError: 'DataFrame' object has no attribute 'freq'
AttributeError Traceback (most recent call last)
File <command-1861354523862313>, line 1
----> 1 predictor = TimeSeriesPredictor(prediction_length=7, target='Sales', freq='D').fit(single_series_train_data)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/core/utils/decorators.py:31, in unpack.<locals>._unpack_inner.<locals>._call(*args, **kwargs)
28 @functools.wraps(f)
29 def _call(*args, **kwargs):
30 gargs, gkwargs = g(*other_args, *args, **kwargs)
---> 31 return f(*gargs, **gkwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/timeseries/predictor.py:701, in TimeSeriesPredictor.fit(self, train_data, tuning_data, time_limit, presets, hyperparameters, hyperparameter_tune_kwargs, excluded_model_types, num_val_windows, val_step_size, refit_every_n_windows, refit_full, enable_ensemble, skip_model_selection, random_seed, verbosity)
698 logger.info("\nFitting with arguments:")
699 logger.info(f"{pprint.pformat({k: v for k, v in fit_args.items() if v is not None})}\n")
--> 701 train_data = self._check_and_prepare_data_frame(train_data, name="train_data")
702 logger.info(f"Provided train_data has {self._get_dataset_stats(train_data)}")
704 if val_step_size is None:
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/timeseries/predictor.py:314, in TimeSeriesPredictor._check_and_prepare_data_frame(self, data, name)
312 logger.info(f"Inferred time series frequency: '{df.freq}'")
313 else:
--> 314 if df.freq != self.freq:
315 logger.warning(f"{name} with frequency '{df.freq}' has been resampled to frequency '{self.freq}'.")
316 df = df.convert_frequency(freq=self.freq)
File /databricks/python/lib/python3.10/site-packages/pandas/core/generic.py:5575, in NDFrame.__getattr__(self, name)
5568 if (
5569 name not in self._internal_names_set
5570 and name not in self._metadata
5571 and name not in self._accessors
5572 and self._info_axis._can_hold_identifiers_and_holds_name(name)
5573 ):
5574 return self[name]
-> 5575 return object.__getattribute__(self, name)
```
| closed | 2024-07-23T18:40:17Z | 2024-11-08T15:49:49Z | https://github.com/autogluon/autogluon/issues/4337 | [
"bug: unconfirmed",
"Needs Triage"
] | ANNIKADAHLMANN-8451 | 5 |
vitalik/django-ninja | rest-api | 671 | Does Django Ninja support async? (from Django 4.1) | As the Django documentation describes, Django 4.1 already supports a lot of the async ORM functions, and I used them in my code, but I still had errors working with Django Ninja.
After checking the errors and logs, I discovered that it is Django Ninja having problems with supporting async functions.
Does it support native Django 4.1 async functions? If not, is there any working code or plan for this?
Thanks.
| closed | 2023-01-30T12:31:47Z | 2023-04-13T15:33:16Z | https://github.com/vitalik/django-ninja/issues/671 | [] | azataiot | 25 |
OpenBB-finance/OpenBB | python | 6,879 | [🕹️] Side Quest: Starry Eyed supporter | ### What side quest or challenge are you solving?
Starry Eyed Supporter
### Points
150
### Description
_No response_
### Provide proof that you've completed the task
### Description
Completed the challenge by gathering five friends to star the repository.
### GitHub Usernames of Friends
1. [@virugamacoder](https://github.com/virugamacoder)
2. [@harshp421](https://github.com/harshp421)
3. [@Felixcoder308](https://github.com/Felixcoder308)
4. [@nandani-1411](https://github.com/nandani-1411)
5. [@gtlYashParmar](https://github.com/gtlYashParmar)
### Screenshots
1. [@virugamacoder](https://github.com/virugamacoder)

2. [@harshp421](https://github.com/harshp421)

3. [@Felixcoder308](https://github.com/Felixcoder308)

4. [@nandani-1411](https://github.com/nandani-1411)

5. [@gtlYashParmar](https://github.com/gtlYashParmar)

| closed | 2024-10-25T19:38:59Z | 2024-10-30T20:53:43Z | https://github.com/OpenBB-finance/OpenBB/issues/6879 | [] | Yash-1511 | 2 |
mitmproxy/mitmproxy | python | 6,743 | won't log localhost traffic on windows | I'm attempting to execute what I would think to be a very common use case - intercepting and logging all HTTP traffic on localhost servers for debugging/demonstration purposes. For example, I want to run an OAuth confidential client (HTTP server), an authorization server and a resource server on localhost and capture all frontend and backend HTTP traffic to demonstrate and/or troubleshoot an OAuth code grant use case/flow. For example, the code for all of these is available from
https://github.com/oauthinaction/oauth-in-action-code/
Specifically
https://github.com/oauthinaction/oauth-in-action-code/tree/master/exercises/ch-3-ex-1
But of course it shouldn't matter if this is node, python, .NET or whatever.
When running "mitmproxy", none of the traffic shows up - only traffic/noise I'm not interested in, nothing from my "localhost".
#### Steps to reproduce the behavior:
1. create a python virtual environment ("python -m venv") and install with "pip install mitmproxy"
2. install the mitmproxy certs as documented - I won't reproduce the instructions here
3. open a windows command prompt and/or terminal, activate the virtual environment and run "mitmproxy"
4. run all apps (e.g. "node some_node_js_script.js")
5. configure your browser to use the proxy, which defaults to "localhost:8080"
6. navigate the browser to the OAuth confidential client - in my case localhost:9000
7. observe that no traffic is captured in the mitmproxy output
I've tried various modes including transparent ("mitmproxy --mode transparent"), which only results in many messages messing up the screen with "Warning: Previously unseen connection from proxy ..." - not helpful; maybe in a log file, but not on the screen.
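One thing worth checking (an educated guess, not a confirmed diagnosis): browsers commonly refuse to send localhost traffic through a configured proxy, which would produce exactly this symptom of capturing only non-localhost noise. Firefox exposes a pref for this, and Chrome has an equivalent flag:

```
# Firefox about:config (Firefox 67+): allow proxying of localhost
network.proxy.allow_hijacking_localhost = true

# Chrome/Chromium launch flag with the same effect
--proxy-bypass-list="<-loopback>"
```

An alternative workaround is to browse to a non-loopback alias of the machine (e.g. its LAN IP) instead of localhost.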
#### System Information
Mitmproxy: 10.2.4
Python: 3.12.1
OpenSSL: OpenSSL 3.2.1 30 Jan 2024
Platform: Windows-11-10.0.22631-SP0
| open | 2024-03-19T20:14:37Z | 2025-01-16T04:22:28Z | https://github.com/mitmproxy/mitmproxy/issues/6743 | [
"kind/triage"
] | cmedcoff | 12 |
koxudaxi/fastapi-code-generator | fastapi | 277 | Special input parameter support | An error is reported when a function parameter name is a Python keyword, like this:
```text
"parameters": [
{
"name": "from",
"in": "query",
        "description": "Start point coordinates, 39.071510,117.190091",
"required": true,
"schema": {
"type": "string"
}
},
...
```
Maybe you should generate something like this:
```
from_: str=Field(alias="from"),
```
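More generally, the generator could detect reserved words with the stdlib `keyword` module and emit an alias for the renamed parameter. A rough sketch (the helper name `safe_param_name` is hypothetical, not part of fastapi-code-generator):

```python
from __future__ import annotations

import keyword


def safe_param_name(name: str) -> tuple[str, str | None]:
    """Return a Python-safe identifier plus the alias to emit, if any.

    If the OpenAPI parameter name collides with a Python keyword (e.g.
    "from" or "in"), append an underscore and keep the original name so
    the generated code can use e.g. `from_: str = Query(alias="from")`.
    """
    if keyword.iskeyword(name):
        return f"{name}_", name
    return name, None


print(safe_param_name("from"))   # ('from_', 'from')
print(safe_param_name("limit"))  # ('limit', None)
```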
| open | 2022-09-11T04:04:23Z | 2022-09-11T04:04:23Z | https://github.com/koxudaxi/fastapi-code-generator/issues/277 | [] | Chise1 | 0 |
nteract/papermill | jupyter | 375 | Add docs on how to pipe to papermill | From Slack:
Piping to papermill is on `master` :drooling_face:
`cat original.ipynb | papermill > result.ipynb` | open | 2019-06-06T23:53:59Z | 2019-06-06T23:54:30Z | https://github.com/nteract/papermill/issues/375 | [
"docs"
] | willingc | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 525 | [BUG]: <Bot doesn't run my personal resume> | ### Describe the bug
\Auto_Jobs_Applier_AIHawk>python main.py --resume xxx_resume.pdf After launching this, the script simply doesn't do anything. Anybody with a similar issue?
### Steps to reproduce
\Auto_Jobs_Applier_AIHawk>python main.py --resume xxx_resume.pdf
- nothing
### Expected behavior
To run the script and open the page with LinkedIn.
### Actual behavior
nothing
### Branch
None
### Branch name
_No response_
### Python version
3.10.10
### LLM Used
ChatGPT
### Model used
GPT-4o-mini
### Additional context
_No response_ | closed | 2024-10-13T09:40:00Z | 2024-11-08T11:47:00Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/525 | [
"bug"
] | Leof137 | 1 |
Miserlou/Zappa | flask | 2,179 | zappa init not creating zappa_settings.json file | Nothing happens when I type "zappa init" in my terminal window; however, I'm not getting an error message either. The cursor just goes to a new command line. It's not creating a zappa_settings.json file. My IDE is PyCharm and as far as I can tell everything is set up correctly.
Additional details:
zappa version: 0.52.0
operating system: windows 10
python version: 3.7
Thanks | closed | 2020-10-24T04:20:09Z | 2020-10-25T05:42:30Z | https://github.com/Miserlou/Zappa/issues/2179 | [] | MasterShake20 | 1 |
miguelgrinberg/Flask-Migrate | flask | 164 | Best place to import models when using a factory function? | I have something like this in my appmod.py file...
```
db = SQLAlchemy()
migrate = Migrate()
def create_app(config):
app = Flask(__name__, template_folder='_templates', static_folder='_static')
app.config.from_mapping(config)
db.init_app(app)
migrate.init_app(app, db)
from webapp.viewsbp import main
app.register_blueprint(main)
return app
```
And something like this in a management file which I use the flask command with...
```
from tweaks import tweaks
from webapp.config import create_config
from webapp.appmod import create_app
tweaks['DEBUG'] = True
config = create_config(tweaks)
app = create_app(config)
```
If I run flask db migrate, with the above code, it doesn't detect my models which are in a separate file. If I import them at the top of the management script, it does pick them up. I can also import them inside the factory function, and that seems to work also. I'm just not sure where the "right" place is? Perhaps adding something to the documentation for factory functions would be nice. | closed | 2017-07-13T16:07:20Z | 2019-01-13T22:20:36Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/164 | [
"question",
"auto-closed"
] | coreybrett | 3 |
adap/flower | scikit-learn | 4,339 | Add Client Metadata Transmission in gRPC Communication | ### Describe the type of feature and its functionality.
To enhance federated learning (FL) using gRPC communication, this feature will enable the transmission of client metadata to the ClientProxy, such as CPU and memory specifications. This addition will allow the client_manager to selectively choose clients based on device capability, enabling optimized FL strategies, including heterogeneous federated learning (heteroFL), by making better use of diverse device capacities.
### Describe step by step what files and adjustments are you planning to include.
1. Add a new attribute to the FlowerServiceStub.Join function (a grpc._StreamStreamMultiCallable class). This function has a metadata attribute in its __call__ function. We will retrieve this data on the server side using the invocation_metadata function. The affected files will be:
- client.grpc_client.connection.py
- server.superlink.fleet.grpc_bidi.flower_service_servicer.py
2. Modify the attributes of other transport types for compatibility with the API. This is mainly for consistency across transport methods. The affected file will be:
- client.app.py
3. Register the metadata to the GrpcClientProxy as a class attribute during initialization. This will allow the server to store and use client metadata effectively. The affected files will be:
- server.superlink.fleet.grpc_bidi.flower_service_servicer.py
- server.superlink.fleet.grpc_bidi.grpc_client_proxy.py
4. Add documentation in the docstrings to explain the new metadata handling and how it integrates with the existing functionality.
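Once the proxies carry this metadata, selection could look something like the following sketch (class and method names are illustrative, not Flower's actual API):

```python
# Illustrative sketch: shows how a client manager could sample only clients
# whose gRPC Join() metadata reports enough memory. Names are hypothetical.
import random


class ClientProxy:
    def __init__(self, cid, metadata):
        self.cid = cid
        # e.g. invocation metadata sent by the client at Join() time
        self.metadata = metadata


def sample_capable_clients(clients, min_memory_mb, num_clients):
    """Sample up to num_clients proxies whose reported memory meets the bar."""
    capable = [
        c for c in clients
        if int(c.metadata.get("memory_mb", "0")) >= min_memory_mb
    ]
    return random.sample(capable, min(num_clients, len(capable)))


clients = [
    ClientProxy("a", {"memory_mb": "1024"}),
    ClientProxy("b", {"memory_mb": "4096"}),
    ClientProxy("c", {"memory_mb": "8192"}),
]
chosen = sample_capable_clients(clients, min_memory_mb=2048, num_clients=2)
print(sorted(c.cid for c in chosen))  # ['b', 'c']
```

This is the kind of heteroFL-style selection that transmitting device capabilities is meant to enable.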
### Is there something else you want to add?
_No response_ | open | 2024-10-17T15:08:34Z | 2024-12-11T18:54:27Z | https://github.com/adap/flower/issues/4339 | [
"feature request",
"stale",
"part: communication"
] | kuchidareo | 0 |
itamarst/eliot | numpy | 280 | How to make eliot work with asyncio? | Question as in the title. I have a pretty big project with logging handled by eliot.
I had to move to PY3.6+asyncio recently.
Now, from eliot's point of view, every coroutine call nests within all the other "opened" calls.
This yields a decent mindf**k; a line from eliot-tree:
> comm_api:execute_request/2/2/3/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/2/3 ⇒ failed
Do I need to downgrade my logging back to Python's builtin logger or can I work around this somehow? | closed | 2017-10-16T22:58:55Z | 2018-09-22T20:59:22Z | https://github.com/itamarst/eliot/issues/280 | [
"bug",
"in progress"
] | x0zzz | 10 |
benbusby/whoogle-search | flask | 694 | [FEATURE] Add https://whoogle.lunar.icu/ to Public List | Hello, can you add https://whoogle.lunar.icu/ to your Public List?
Thanks,
Maximilian | closed | 2022-03-22T19:50:54Z | 2022-03-25T18:53:05Z | https://github.com/benbusby/whoogle-search/issues/694 | [
"enhancement"
] | MaximilianGT500 | 4 |
dgtlmoon/changedetection.io | web-scraping | 2,179 | [feature] mouse over when 'checking now' should show number of seconds/minutes its been checking for already | mouse over when 'checking now' should show number of seconds/minutes its been checking for already | open | 2024-02-11T23:05:08Z | 2024-02-11T23:05:08Z | https://github.com/dgtlmoon/changedetection.io/issues/2179 | [
"enhancement"
] | dgtlmoon | 0 |
nolar/kopf | asyncio | 415 | [archival placeholder] | This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | closed | 2020-08-18T20:06:00Z | 2020-08-18T20:06:01Z | https://github.com/nolar/kopf/issues/415 | [
"archive"
] | kopf-archiver[bot] | 0 |
gradio-app/gradio | data-visualization | 9,938 | DataFrame styling doesn't work when it's used as input to another component | ### Describe the bug
When you format a dataframe with a pandas Styler and then use it in `gr.DataFrame`, the style only applies if the dataframe is not used as input to callbacks or other components.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
If I do this:
```python
import gradio as gr
import pandas as pd
df = (
pd.DataFrame(
{
"values": [
1.333333333333333,
2.44444444444444444,
3.55555555555555555,
6.77777777777777777,
],
"other_values": [
1.333333333333333,
2.44444444444444444,
3.55555555555555555,
6.77777777777777777,
],
}
)
.style.format({"values": "{:.2f}", "other_values": "{:.0f}"})
.highlight_min("values")
.highlight_max("other_values")
)
with gr.Blocks() as demo:
data = gr.DataFrame(df)
demo.launch()
```
then the table is properly styled as expected:

If I add a Markdown component that takes the dataframe as its input, the styling disappears.
```python
def create_text(df):
n_rows = len(df.index)
return f"# This is some markdown\nThere are {n_rows} rows in the dataframe"
with gr.Blocks() as demo:
data = gr.DataFrame(df)
gr.Markdown(create_text, inputs=[data])
```

### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.1
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.25.2
orjson: 3.10.7
packaging: 24.0
pandas: 2.2.2
pillow: 10.3.0
pydantic: 2.7.0
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.1
ruff: 0.3.7
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.11.0
urllib3: 2.2.1
uvicorn: 0.30.6
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.2
huggingface-hub: 0.26.1
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
```
### Severity
Blocking usage of gradio | closed | 2024-11-11T14:47:14Z | 2024-12-02T07:26:42Z | https://github.com/gradio-app/gradio/issues/9938 | [
"bug",
"pending clarification"
] | x-tabdeveloping | 4 |
huggingface/datasets | machine-learning | 6,496 | Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again. | **Describe the bug**
Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.
```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)
A commit has happened since. Please refresh and try again.
```
**Steps to reproduce the bug**
This is a minimal reproducer:
```
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets
huggingface_hub.login(token=os.getenv("HF_TOKEN"))
data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)
schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)
dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```
**Expected behavior**
Would expect to write to the hub without any problem.
**Environment info**
```
datasets==2.15.0
huggingface-hub==0.19.4
```
| open | 2023-12-14T11:24:54Z | 2023-12-14T12:22:21Z | https://github.com/huggingface/datasets/issues/6496 | [] | GeorgesLorre | 1 |
pallets/flask | python | 5,070 | pip install flask-sqlalchemy Fails | On the newest version of Python (3.11), using PyCharm with a venv environment, I am trying to attach SQLAlchemy.
pip install flask-wtf works fine.
pip install flask-sqlalchemy fails: it fails to build greenlet (the wheel does not build successfully).
Using the same steps in Python 3.8, all works fine.
Is it not compatible with version 3.11?
Any help is appreciated.
Thanks,
Digdug
Environment:
- Python version:
- Flask version:
| closed | 2023-04-17T15:36:45Z | 2023-05-02T00:05:34Z | https://github.com/pallets/flask/issues/5070 | [] | DigDug10 | 3 |
strawberry-graphql/strawberry | asyncio | 2,846 | `strawberry.directive` type hints appear to be incorrect. | <!-- Provide a general summary of the bug in the title above. -->
## Describe the Bug
A function like:
```py
@strawberry.directive(
locations=[DirectiveLocation.FRAGMENT_DEFINITION],
description="Do nothing, but add a base class for query generation in the python client.",
)
def identity(value: str, a: str, b: str | None = None) -> str:
"""Do nothing, but add a directive so that the python client can use this to add metadata."""
return value
reveal_type(identity)
```
`mypy` will say that the revealed type here is `builtins.str`. Looking at the code for `directive`, it appears that the type hints imply that the decorated object is going to return the same type that the function returns (in this case, the return type of `identity`)
## System Information
- Operating system:
- Strawberry version (if applicable):
## Additional Context
<!-- Add any other relevant information about the problem here. --> | closed | 2023-06-14T14:19:45Z | 2025-03-20T15:56:14Z | https://github.com/strawberry-graphql/strawberry/issues/2846 | [
"bug"
] | mgilson | 0 |
allenai/allennlp | pytorch | 5,604 | requiring all inherited model.tar.gz after transfer learning coreference | Hello, I have trained a coreference model which resulted in `model_1.tar.gz`. Afterwards, I decided to use this model in the training config by adding the following key to the `.jsonnet`.
```
"model": {
"type": "from_archive",
"archive_file": "allennlp_output_1/model_1.tar.gz"
}
```
This worked nicely and resulted in a better `model_2.tar.gz`. However, upon loading `model_2.tar.gz` via `Predictor.from_path("./allennlp_output_1/model_1.tar.gz")` it requires the `allennlp_output_1/model_1.tar.gz` to be present as well. | closed | 2022-03-24T06:07:29Z | 2022-04-05T08:11:12Z | https://github.com/allenai/allennlp/issues/5604 | [
"question"
] | davidberenstein1957 | 4 |
aeon-toolkit/aeon | scikit-learn | 2,615 | [DOC] Improve Hidalgo segmentation notebook | ### Describe the issue linked to the documentation
Improve Hidalgo segmentation notebook

### Suggest a potential alternative/fix
_No response_ | open | 2025-03-11T17:59:23Z | 2025-03-11T18:00:11Z | https://github.com/aeon-toolkit/aeon/issues/2615 | [
"documentation"
] | kavya-r30 | 1 |
getsentry/sentry | python | 86,946 | [RELEASES] Release bubbles in Issue Details | Currently, release bubbles are only available in Insights. This ticket is to expand bubbles to the events chart at the top of Issue Details (streamlined). This will track what is needed to release this feature internally to Sentry.
* [x] https://github.com/getsentry/sentry/pull/86841
* [ ] https://github.com/getsentry/sentry/pull/86840
* [ ] https://github.com/getsentry/sentry/pull/86950 | open | 2025-03-12T22:01:27Z | 2025-03-18T18:22:04Z | https://github.com/getsentry/sentry/issues/86946 | [
"Product Area: Releases"
] | billyvg | 1 |
jazzband/django-oauth-toolkit | django | 980 | the documentation missing the OAUTH2_PROVIDER_ {APPLICATION_MODEL, ACCESS_TOKEN_MODEL, ID_TOKEN_MODEL} settings | **Describe the bug**
The package doesn't work without it.
**To Reproduce**
remove the following (twice, because the config is duplicated):
https://github.com/jazzband/django-oauth-toolkit/blob/master/tests/migrations/0001_initial.py#L14
```
OAUTH2_PROVIDER_APPLICATION_MODEL = "oauth2_provider.Application"
OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL = "oauth2_provider.AccessToken"
OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL = "oauth2_provider.RefreshToken"
```
from tests/settings.py
Run the test...
Possible fix... add to the documentation ... or add it by default ...
| open | 2021-05-27T18:19:28Z | 2021-09-27T19:23:33Z | https://github.com/jazzband/django-oauth-toolkit/issues/980 | [
"bug"
] | zodman | 1 |
ContextLab/hypertools | data-visualization | 175 | Support for text features and CountVectorizer matrices | When the user passes text to hypertools, we could turn the text into a CountVectorizer matrix and plot it (or analyze it) using the existing hypertools functions.
Similarly, we could directly support CountVectorizer matrices.
Sample code: https://github.com/ContextLab/storytelling-with-data/blob/master/data-stories/twitter-finance/twitter-finance.ipynb
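As a sketch of the underlying idea, this is roughly the matrix that CountVectorizer produces; a real implementation would just call sklearn's `CountVectorizer.fit_transform`, but a stdlib-only stand-in makes the shape clear:

```python
from collections import Counter


def count_matrix(docs):
    """Build a document-by-term count matrix plus its vocabulary.

    Stdlib stand-in for sklearn's CountVectorizer output (as a dense list of
    rows); this is the kind of matrix hypertools could plot or reduce.
    """
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    rows = []
    for doc in tokenized:
        counts = Counter(doc)
        rows.append([counts.get(tok, 0) for tok in vocab])
    return rows, vocab


matrix, vocab = count_matrix(["the cat sat", "the cat and the dog"])
print(vocab)   # ['and', 'cat', 'dog', 'sat', 'the']
print(matrix)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```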
This would be especially useful in conjunction with using LDA or NMF to cluster or reduce the data (see [this issue](https://github.com/ContextLab/hypertools/issues/174)). For example, the user could pass in a list of lists of strings (one list per theme, e.g. a collection of tweets from one user) and get back a list of topic vector matrices, all fit using a common model. | closed | 2017-12-04T19:37:39Z | 2018-02-16T17:23:37Z | https://github.com/ContextLab/hypertools/issues/175 | [
"enhancement"
] | jeremymanning | 9 |
yunjey/pytorch-tutorial | deep-learning | 194 | Model evaluation | How do I get the evaluation score of the model in terms of BLEU, TER, METEOR and so on? | open | 2019-10-25T12:43:25Z | 2019-10-25T12:43:25Z | https://github.com/yunjey/pytorch-tutorial/issues/194 | [] | abhranil08 | 0
dgtlmoon/changedetection.io | web-scraping | 1,652 | Cannot upgrade to 0.43 | **Describe the bug**
Cannot upgrade to 0.43
**Version**
v0.42.3
**To Reproduce**
Execute pip3 install -U changedetection.io to no avail. Requirements are already satisfied. No upgrade performed.
**Expected behavior**
Finish upgrade to 0.43.
**Desktop (please complete the following information):**
- OS: Win10
- Browser: Firefox 114.0.2 (64-bit) | closed | 2023-06-26T07:54:54Z | 2023-06-27T17:41:31Z | https://github.com/dgtlmoon/changedetection.io/issues/1652 | [
"triage"
] | jathri | 2 |
tflearn/tflearn | data-science | 995 | The "to_categorical" function in dcgan | I found that the "to_categorical" function is different between this GitHub repository and the installed tflearn package.
Both in data_utils.py
```python
def to_categorical(y, nb_classes=None):
    """ to_categorical.

    Convert class vector (integers from 0 to nb_classes)
    to binary class matrix, for use with categorical_crossentropy.

    Arguments:
        y: `array`. Class vector to convert.
        nb_classes: `unused`. Used for older code compatibility.
    """
    return (y[:, None] == np.unique(y)).astype(np.float32)
```

```python
def to_categorical(y, nb_classes):
    """ to_categorical.

    Convert class vector (integers from 0 to nb_classes)
    to binary class matrix, for use with categorical_crossentropy.

    Arguments:
        y: `array`. Class vector to convert.
        nb_classes: `int`. Total number of classes.
    """
    y = np.asarray(y, dtype='int32')
    if not nb_classes:
        nb_classes = np.max(y) + 1
    Y = np.zeros((len(y), nb_classes))
    Y[np.arange(len(y)), y] = 1.
    return Y
```
Because of this, we need to add the nb_classes parameter in dcgan.py.
Either writing a new "to_categorical" function like the first, or just adding "nb_classes=some number" in dcgan.py, can solve the problem.
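A pure-Python sketch (no numpy, hypothetical helper names) of why the two versions disagree: the newer one infers one column per distinct value actually present in y, while the older one allocates one column per class id, which is what fixed-size one-hot labels in dcgan.py need:

```python
def to_categorical_unique(y):
    # mirrors (y[:, None] == np.unique(y)): one column per *distinct* value
    classes = sorted(set(y))
    return [[1.0 if v == c else 0.0 for c in classes] for v in y]


def to_categorical_fixed(y, nb_classes):
    # mirrors the older API: one column per class id in range(nb_classes)
    return [[1.0 if v == c else 0.0 for c in range(nb_classes)] for v in y]


y = [0, 2, 2]  # class 1 never occurs
print(to_categorical_unique(y))    # 2 columns, for classes {0, 2}
print(to_categorical_fixed(y, 3))  # 3 columns, for classes 0, 1, 2
```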
Please let me know if someone has solved the problem; my tflearn version is 0.3.2.
I am not sure which version this GitHub repository is.
And if I am wrong, please let me know.
| open | 2018-01-10T07:18:10Z | 2018-02-14T05:42:54Z | https://github.com/tflearn/tflearn/issues/995 | [] | ppaulggit | 2 |
opengeos/leafmap | plotly | 123 | Add support for US Census Data API | References:
- https://api.census.gov/data/key_signup.html
- https://www.census.gov/data/developers/data-sets.html
- https://tigerweb.geo.census.gov/tigerwebmain/TIGERweb_wms.html
- https://pypi.org/project/OWSLib | closed | 2021-10-17T00:23:49Z | 2021-10-18T01:15:00Z | https://github.com/opengeos/leafmap/issues/123 | [
"Feature Request"
] | giswqs | 1 |
strawberry-graphql/strawberry-django | graphql | 511 | ListConnectionWithTotalCount and filter custom resolver | version 0.35
I am migrating to new Filtering and Ordering, so far so good (Type's filter/order) but now I reached relay connections and cannot get Custom Resolver working.
```python
@strawberry.django.type(User, filters=UserFilter, order=UserOrder)
class UserType(strawberry.relay.Node):
@strawberry.django.connection(ListConnectionWithTotalCount[SaleType], filters=PurchaseFilter)
def purchases(self) -> List["SaleType"]:
print("getting purchases")
return self.purchases
@strawberry.django.filter(Sale)
class PurchaseFilter:
@strawberry.django.filter_field()
def search(self, value: str, prefix: str) -> Q:
print("hello")
return Q(name__contains=value)
```
```gql
query MyQuery {
user(slug: "tas") {
purchases(filters: {search: "dd"}) {
edges {
node {
id
price
}
}
}
}
}
```
When I query `purchases`, the custom resolver's `search` function never gets called. I can see the filters showing up properly in the schema via GraphiQL.
Am I doing something wrong when defining Filter on ListConnection?
thank you | closed | 2024-03-26T22:45:10Z | 2025-03-20T15:57:29Z | https://github.com/strawberry-graphql/strawberry-django/issues/511 | [
"bug"
] | tasiotas | 2 |
dpgaspar/Flask-AppBuilder | flask | 1,704 | Does AUTH_LDAP_SEARCH support multiple OU's? | We have a directory structure where users are in multiple OU's. The AUTH_LDAP_SEARCH works fine when scoped to one OU. Does AUTH_LDAP_SEARCH currently support multiple OU's? | closed | 2021-09-23T13:43:44Z | 2022-04-28T14:41:13Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1704 | [
"stale"
] | ghost | 1 |
ageitgey/face_recognition | python | 757 | recognise known peoples face in live cc camera(with ip) | * face_recognition version:latest git clone
* Python version:3.7.1 (anaconda distribution)
* Operating System:ubuntu 16.04
### Description
1) I have the IP of the CC camera, e.g. 10.2.0.10
2) I have pictures of my friend's face (in the known people folder)
3) I want to recognize his face from the stream I get from the CC camera (just like a webcam)
What changes can I make for this to happen?
please help! | closed | 2019-02-27T04:07:07Z | 2019-04-09T09:27:20Z | https://github.com/ageitgey/face_recognition/issues/757 | [] | VellalaVineethKumar | 3 |
microsoft/hummingbird | scikit-learn | 174 | Support for XGBRanker | [XGBRanker](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBRanker) should be a very simple add; similar to #173
It should hopefully just involve (1) either using or slightly tweaking the converter for XGBRegressor (2) writing tests and making sure they pass | closed | 2020-06-29T17:02:15Z | 2020-08-31T17:07:55Z | https://github.com/microsoft/hummingbird/issues/174 | [
"good first issue"
] | ksaur | 3 |
plotly/dash-table | dash | 951 | Paste does not work on columns with editable equals true | Hi
I have noticed that
```
dash_table.DataTable(
id="upload-map-preview",
columns=[
{
"name": "col_1",
"id": "col_1",
"editable": False,
},
{
"name": "col_2",
"id": "col_2",
"editable":True,
},
],
)
```
Does not allow you to paste into the editable cell but that
```
dash_table.DataTable(
id="upload-map-preview",
columns=[
{
"name": "col_1",
"id": "col_1",
"editable": False,
},
{
"name": "col_2",
"id": "col_2",
},
],
editable=True,
)
```
Will.
I assume this isn't intended functionality. | open | 2022-09-28T16:47:37Z | 2022-09-28T16:47:37Z | https://github.com/plotly/dash-table/issues/951 | [] | oliverbrace | 0 |
pydantic/pydantic | pydantic | 10,999 | Unexpected value on setting default | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
```
class Data(BaseModel):
id: int = Field(default="")
```

Setting the type for property `id` is not working properly; when no value is given, it will set the default to "". It seems that there is missing validation for the default value in pydantic.
### Example Code
```Python
from pydatic import BaseModel, Field
class Data(BaseModel):
id: int = Field(default="")
data = Data()
data.id
data2 = Data(id='')
```
### Python, Pydantic & OS Version
```Text
I'm using Ubuntu 22, Python 3.12 and pydantic >= 2.
```
| closed | 2024-11-29T09:17:13Z | 2024-11-29T18:00:32Z | https://github.com/pydantic/pydantic/issues/10999 | [
"bug V2",
"pending"
] | khaled-ha | 3 |
cobrateam/splinter | automation | 457 | find_by_text is not working fine with texts with quotes | This code is not working fine:
``` python
element = self.browser.find_by_text('Quotation " marks')
self.assertEqual(element.value, 'Quotation " marks')
```
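This looks like an XPath quoting problem in the generated locator: XPath 1.0 string literals have no escape sequences, so text containing a double quote cannot simply be wrapped in double quotes. A sketch of the usual workaround (hypothetical helper, not splinter's actual code):

```python
def xpath_literal(s):
    """Build an XPath 1.0 string literal that survives embedded quotes."""
    if '"' not in s:
        return '"{}"'.format(s)
    if "'" not in s:
        return "'{}'".format(s)  # e.g. 'Quotation " marks'
    # Mixed quotes: stitch the pieces together with concat().
    pieces = []
    for i, part in enumerate(s.split('"')):
        if i:
            pieces.append("'\"'")
        if part:
            pieces.append('"{}"'.format(part))
    return "concat({})".format(", ".join(pieces))


text = 'Quotation " marks'
print('//*[text()={}]'.format(xpath_literal(text)))
# //*[text()='Quotation " marks']
```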
| closed | 2015-12-12T20:51:09Z | 2019-07-20T16:02:33Z | https://github.com/cobrateam/splinter/issues/457 | [
"bug",
"help wanted",
"NeedsInvestigation"
] | andrewsmedina | 1 |
alirezamika/autoscraper | web-scraping | 5 | Add docs string | Add docstrings to the functions to make them more self-explanatory and easier to understand when working with code editors. | closed | 2020-09-06T06:40:27Z | 2020-09-11T07:47:24Z | https://github.com/alirezamika/autoscraper/issues/5 | [] | Narasimha1997 | 0
airtai/faststream | asyncio | 2,076 | Bug: UUID and Datetime Variables not Serializing | When subscribers receive messages with UUID and datetime values, a warning is raised because they arrive as str values.
Warning message being shown on subscriber:
2025-02-21 15:24:19 /app/.venv/lib/python3.10/site-packages/pydantic/main.py:390: UserWarning: Pydantic serializer warnings:
2025-02-21 15:24:19 Expected `datetime` but got `str` with value `'2025-02-21T23:24:20'` - serialized value may not be as expected
2025-02-21 15:24:19 Expected `uuid` but got `str` with value `'2def270c-6b16-4c89-87d0-419f8fe35b91'` - serialized value may not be as expected
To reproduce this you can publish a model that contains a UUID or Datetime variable. For example I am using this:
```
class Users(UserBase, table=True):
id: UUID = Field(default_factory=uuid4, primary_key=True)
password: str
```
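The mechanics can be reproduced without a broker at all: JSON, which the message body carries, has no UUID or datetime types, so the values cross the wire as strings and that is what the subscriber's model then sees. A stdlib-only illustration with a hypothetical payload:

```python
import json
import uuid
from datetime import datetime

payload = {
    "id": uuid.uuid4(),
    "created_at": datetime(2025, 2, 21, 23, 24, 20),
}

# JSON can't represent UUID/datetime, so they are rendered as strings...
wire = json.dumps({k: str(v) for k, v in payload.items()})

# ...and the subscriber decodes plain str values, which is exactly what
# triggers pydantic's "Expected `uuid` but got `str`" serializer warning.
decoded = json.loads(wire)
print(type(decoded["id"]).__name__)  # str
print(decoded["created_at"])         # 2025-02-21 23:24:20
```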
And when the subscriber picks it up it will list the warning with the variable shown as a string. | closed | 2025-02-21T23:37:09Z | 2025-02-22T15:35:52Z | https://github.com/airtai/faststream/issues/2076 | [
"bug"
] | brandongillett | 5 |
dmlc/gluon-nlp | numpy | 1,033 | Cannot run text_generation sample script | ## Description
I am trying the text generation example with GPT-2 as shown [here](http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR-1010/8/model_zoo/text_generation/index.html):
I am using the latest dev version of gluon-nlp (a fresh git pull)
Mxnet-version = mxnet-cu100==1.5.1.post0
The command I am using is `python sequence_sampling.py random-sample --bos 'Deep learning and natural language processing' --beam-size 1 --lm-model gpt2_117m --max-length 1024 --print-num 1`
### Error Message
```
Namespace(beam_size=1, bos='alexa', command='random-sample', gpu=0, lm_model='gpt2_117m', max_length=1024, print_num=1, temperature=1.0, use_top_k=None)
Sampling Parameters: beam_size=1, temperature=1.0, use_top_k=None
Traceback (most recent call last):
File "sequence_sampling.py", line 222, in <module>
generate()
File "sequence_sampling.py", line 202, in generate
inputs, begin_states = get_initial_input_state(decoder, bos_ids)
File "sequence_sampling.py", line 157, in get_initial_input_state
mx.nd.array([bos_ids], dtype=np.int32, ctx=ctx), None
File "/home/ec2-user/anaconda3/envs/devnlp/lib/python3.7/site-packages/mxnet/gluon/block.py", line 548, in __call__
out = self.forward(*args)
File "/home/ec2-user/anaconda3/envs/devnlp/lib/python3.7/site-packages/mxnet/gluon/block.py", line 925, in forward
return self.hybrid_forward(ndarray, x, *args, **params)
File "/home/ec2-user/projects/gluon-nlp/scripts/text_generation/model/gpt.py", line 275, in hybrid_forward
data_pos = F.contrib.arange_like(data, axis=1).astype('int32')
AttributeError: module 'mxnet.ndarray.contrib' has no attribute 'arange_like'
```
## Environment
```
----------Python Info----------
Version : 3.7.5
Compiler : GCC 7.3.0
Build : ('default', 'Oct 25 2019 15:51:11')
Arch : ('64bit', '')
------------Pip Info-----------
Version : 19.3.1
Directory : /home/ec2-user/anaconda3/envs/devnlp/lib/python3.7/site-packages/pip
----------MXNet Info-----------
Version : 1.5.1
Directory : /home/ec2-user/anaconda3/envs/devnlp/lib/python3.7/site-packages/mxnet
Num GPUs : 1
Commit Hash : c9818480680f84daa6e281a974ab263691302ba8
----------System Info----------
Platform : Linux-4.14.146-93.123.amzn1.x86_64-x86_64-with-glibc2.10
system : Linux
node : ip-172-31-18-232
release : 4.14.146-93.123.amzn1.x86_64
version : #1 SMP Tue Sep 24 00:45:23 UTC 2019
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 2701.607
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.12
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0024 sec, LOAD: 0.8949 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0004 sec, LOAD: 0.7312 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0821 sec, LOAD: 0.2978 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0134 sec, LOAD: 0.2999 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0170 sec, LOAD: 0.1823 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1048 sec, LOAD: 0.6563 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0040 sec, LOAD: 0.4681 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0036 sec, LOAD: 0.2458 sec.
```
| open | 2019-12-02T18:31:32Z | 2019-12-02T19:38:21Z | https://github.com/dmlc/gluon-nlp/issues/1033 | [
"bug"
] | zeeshansayyed | 1 |
geopandas/geopandas | pandas | 3,532 | BUG: query function is not safe in geopandas | - [ ] I have checked that this issue has not already been reported.
- [ ] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
# Your code here
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point
# 创建一个简单的 GeoDataFrame
data = {'name': ['Location1', 'Location2', 'Location3'],
'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)]}
gdf = gpd.GeoDataFrame(data, crs="EPSG:4326")
# Display the GeoDataFrame
print("Original GeoDataFrame:")
print(gdf)
gdf.query('@__builtins__.__import__("os").system("echo Hello from RCE!")')
```
#### Problem description
geopandas is built on top of pandas, and this problem has already been discovered in pandas:
https://huntr.com/bounties/a49baae1-4652-4d6c-a179-313c21c41a8d
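The underlying class of bug can be reproduced with nothing but the standard library. This hypothetical sketch (the `naive_query` name is illustrative, not a geopandas or pandas internal) shows why handing a user-controlled expression to Python's evaluator is dangerous even when `__builtins__` is emptied:

```python
# Hypothetical sketch of the eval-injection class behind this report.
# A naive query engine evaluates the expression with the row namespace:
def naive_query(expr, env):
    return eval(expr, {"__builtins__": {}}, env)  # "sandboxed" -- or so it seems

print(naive_query("x > 1", {"x": 2}))  # ordinary use: True

# Attribute chains climb from any literal back to arbitrary classes,
# so emptying __builtins__ does not close the hole:
payload = "().__class__.__bases__[0].__subclasses__()"
reachable = naive_query(payload, {})
print(type(reachable) is list and len(reachable) > 0)  # True
```

pandas has since added checks that reject this style of dunder access in `query`/`eval` strings (exact behavior varies by version), and geopandas inherits whatever those checks enforce.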
#### Expected Output
#### Output of ``geopandas.show_versions()``
<details>
[paste the output of ``geopandas.show_versions()`` here leaving a blank line after the details tag]
</details>
| closed | 2025-03-24T03:24:03Z | 2025-03-24T06:45:56Z | https://github.com/geopandas/geopandas/issues/3532 | [
"upstream issue"
] | AndrewDzzz | 1 |
tableau/server-client-python | rest-api | 781 | Server side enhancement: Project owners in create/update | I have a need to be able to manage the owners of projects via the REST API. The owner ID is missing from the query/create/update endpoints on the Server side. | closed | 2021-01-21T14:51:45Z | 2021-01-23T00:10:41Z | https://github.com/tableau/server-client-python/issues/781 | [] | jorwoods | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 181 | The url of DukeMTMC-VideoReID is broken | I can not download DukeMTMC-VideoReID dataset from http://vision.cs.duke.edu/DukeMTMC/data/misc/DukeMTMC-VideoReID.zip. Can anyone share the pre-downloaded DukeMTMC-VideoReID dataset? Thx. | closed | 2019-05-24T03:22:58Z | 2019-10-22T21:37:37Z | https://github.com/KaiyangZhou/deep-person-reid/issues/181 | [] | braveapple | 2 |
iterative/dvc | data-science | 10,255 | import: local cache is ignored when importing data that already exist in cache | # Bug Report
## DVC - local cache is ignored when importing data that already exist in the local cache
DVC local cache is ignored when importing data that already exists in the local cache
## Description
When using `dvc import` with shared local cache the cache is ignored when the `.dvc` files doesn't exist yet.
### Reproduce
Step list of how to reproduce the bug
Example:
```
$ cd /tmp
$ dvc init --no-scm
$ echo "[core]
no_scm = True
[cache]
dir = /srv/dvc_cache/datasets_import
type = \"reflink,symlink,copy\"
shared = group" > .dvc/config
$ dvc import some_url some_data --force
$ rm -rf *.dvc
# Here I would expect to just link it from the local cache and not pull it from remote
$ dvc import some_url some_data --force
```
### Expected
I would expect that the second import would first check if the data are in the cache dir and just link them, when they have been already pulled previously.
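The cache-first behavior being requested can be sketched in a few lines of stdlib Python. This is an illustrative model of a content-addressed cache layout (`<first two hash chars>/<rest>`), not DVC's actual implementation; `expected_md5` stands in for the hash recorded in the `.dvc` file or remote metadata:

```python
import os
import tempfile

def materialize(expected_md5, dst, cache_dir, download):
    """Link dst from the shared cache if the object is already there;
    fall back to `download` (a callable) only on a cache miss."""
    cached = os.path.join(cache_dir, expected_md5[:2], expected_md5[2:])
    if not os.path.exists(cached):                 # cache miss: fetch once
        os.makedirs(os.path.dirname(cached), exist_ok=True)
        download(cached)
    if os.path.lexists(dst):
        os.remove(dst)
    os.symlink(cached, dst)                        # cache hit path: no network I/O

# Tiny demonstration with a fake "remote":
calls = []
def fake_download(path):
    calls.append(path)
    with open(path, "w") as f:
        f.write("payload")

tmp = tempfile.mkdtemp()
cache = os.path.join(tmp, "cache")
md5 = "d41d8cd98f00b204e9800998ecf8427e"  # any hex digest shape works here
materialize(md5, os.path.join(tmp, "a"), cache, fake_download)
materialize(md5, os.path.join(tmp, "b"), cache, fake_download)
print(len(calls))  # 1 -- the second materialize is served from the cache
```

The second `materialize` call never touches the fake remote, which is exactly the behavior the second `dvc import` is expected to have.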
### Environment information
This is required to ensure that we can reproduce the bug.
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.42.0 (pip)
-------------------------
Platform: Python 3.10.11 on Linux-5.15.0-89-generic-x86_64-with-glibc2.31
Subprojects:
dvc_data = 3.8.0
dvc_objects = 3.0.6
dvc_render = 1.0.1
dvc_task = 0.3.0
scmrepo = 2.0.4
Supports:
http (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.1, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.12.2, boto3 = 1.34.22)
Config:
Global: /root/.config/dvc
System: /etc/xdg/dvc
Cache types: reflink, hardlink, symlink
Cache directory: xfs on /dev/mapper/data-srv
Caches: local
Remotes: None
Workspace directory: xfs on /dev/mapper/data-srv
Repo: dvc (no_scm)
Repo.site_cache_dir: /var/tmp/dvc/repo/736a03a99cffc314a8a900b728659998
```
**Additional Information (if any):**
| closed | 2024-01-24T19:35:07Z | 2024-07-24T13:37:33Z | https://github.com/iterative/dvc/issues/10255 | [
"bug",
"p1-important",
"A: data-sync"
] | Honzys | 14 |
davidsandberg/facenet | tensorflow | 340 | CUDA_ERROR_OUT_OF_MEMORY | I got the following error while following the classifier-training guide for Inception-ResNet v1:
2017-06-21 12:44:33.329547: E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to allocate 2.41G (2588370176 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
Can someone help me? | closed | 2017-06-21T04:56:35Z | 2020-05-27T02:17:59Z | https://github.com/davidsandberg/facenet/issues/340 | [] | ycdhqzhiai | 7 |
alpacahq/alpaca-trade-api-python | rest-api | 298 | Historical data outage for 13.8.2020 and 14.8.2020? | The following code should return all data in the requested date range, but sometimes the bars for these specific dates are missing.
It seems completely random: the data is missing on one call, and the same call a few seconds later returns it complete.
```python
data = self.client.get_barset(
["AAPL"],
"day",
start="2020-08-06T00:00:00-04:00",
end="2020-08-20T00:00:00-04:00",
after=None,
until=None,
)
```
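When debugging this kind of intermittent gap, it helps to diff the returned bar dates against the expected weekday calendar. A hedged stdlib sketch (it ignores market holidays, so treat flagged dates as candidates, not certainties; the `got` set below is hypothetical partial-response data):

```python
from datetime import date, timedelta

def missing_weekdays(start, end, returned_dates):
    """Weekdays in [start, end] with no bar in `returned_dates`."""
    have = set(returned_dates)
    gaps, d = [], start
    while d <= end:
        if d.weekday() < 5 and d not in have:  # Mon-Fri only
            gaps.append(d)
        d += timedelta(days=1)
    return gaps

# Bars that came back from a (hypothetical) partial response:
got = {date(2020, 8, 10), date(2020, 8, 11), date(2020, 8, 12),
       date(2020, 8, 17), date(2020, 8, 18)}
print(missing_weekdays(date(2020, 8, 10), date(2020, 8, 18), got))
# [datetime.date(2020, 8, 13), datetime.date(2020, 8, 14)]
```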
Data for 12.8.2020 and 17.8.2020 are always there. | closed | 2020-08-20T20:12:06Z | 2020-09-01T14:23:21Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/298 | [] | onmete | 1 |
neuml/txtai | nlp | 665 | Problem with tempfiles when repeatedly saving embeddings to archive | When repeatedly saving embeddings to an archive I get a NotADirectoryError on all but the first file.
From what I can tell everything still works, but it does lead to the temp files not being removed.
It only happens when content-storage is enabled.
It does not happen when saving to directories unless embeddings were saved to an archive before.
Python 3.9, txtai 6.3.0 on Windows 11
```
from txtai import Embeddings
from llm import get_openai_embeddings
data = [(1, "Apple"), (2, "pie"), (3, "wheat")]
for i in range(2):
print(i)
embeddings = Embeddings(
{
"transform": get_openai_embeddings,
"content": True,
}
)
embeddings.index(data)
embeddings.save(f"mini{i}.zip")
```
0
1
Exception ignored in: <finalize object at 0x1cced87f4a0; dead>
Traceback (most recent call last):
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\weakref.py", line 591, in __call__
return info.func(*info.args, **(info.kwargs or {}))
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\tempfile.py", line 820, in _cleanup
cls._rmtree(name)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\tempfile.py", line 816, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\shutil.py", line 759, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\shutil.py", line 629, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\tempfile.py", line 808, in onerror
cls._rmtree(path)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\tempfile.py", line 816, in _rmtree
_shutil.rmtree(name, onerror=onerror)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\shutil.py", line 759, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\shutil.py", line 610, in _rmtree_unsafe
onerror(os.scandir, path, sys.exc_info())
File "C:\Users\redacted\Anaconda3\envs\semfs\lib\shutil.py", line 607, in _rmtree_unsafe
with os.scandir(path) as scandir_it:
NotADirectoryError: [WinError 267] Der Verzeichnisname ist ungültig: 'C:\\Users\\redacted\\AppData\\Local\\Temp\\tmp5e5o97q5\\documents' | closed | 2024-02-13T13:14:59Z | 2024-02-26T12:35:30Z | https://github.com/neuml/txtai/issues/665 | [] | LoePhi | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 671 | Error(s) in loading state_dict for ResnetGenerator: | Using PyTorch 1.1.0, Python 3 (non-conda version)
It throws the following error:
Error(s) in loading state_dict for ResnetGenerator:
Missing key(s) in state_dict: "model.10.conv_block.5.weight", "model.10.conv_block.6.running_mean", "model.10.conv_block.6.running_var", "model.10.conv_block.6.bias", "model.11.conv_block.5.weight", "model.11.conv_block.6.running_mean", "model.11.conv_block.6.running_var", "model.11.conv_block.6.bias", "model.12.conv_block.5.weight", "model.12.conv_block.6.running_mean", "model.12.conv_block.6.running_var", "model.12.conv_block.6.bias", "model.13.conv_block.5.weight", "model.13.conv_block.6.running_mean", "model.13.conv_block.6.running_var", "model.13.conv_block.6.bias", "model.14.conv_block.5.weight", "model.14.conv_block.6.running_mean", "model.14.conv_block.6.running_var", "model.14.conv_block.6.bias", "model.15.conv_block.5.weight", "model.15.conv_block.6.running_mean", "model.15.conv_block.6.running_var", "model.15.conv_block.6.bias", "model.16.conv_block.5.weight", "model.16.conv_block.6.running_mean", "model.16.conv_block.6.running_var", "model.16.conv_block.6.bias", "model.17.conv_block.5.weight", "model.17.conv_block.6.running_mean", "model.17.conv_block.6.running_var", "model.17.conv_block.6.bias", "model.18.conv_block.5.weight", "model.18.conv_block.6.running_mean", "model.18.conv_block.6.running_var", "model.18.conv_block.6.bias".
Unexpected key(s) in state_dict: "model.10.conv_block.7.weight", "model.10.conv_block.7.bias", "model.10.conv_block.7.running_mean", "model.10.conv_block.7.running_var", "model.10.conv_block.7.num_batches_tracked", "model.11.conv_block.7.weight", "model.11.conv_block.7.bias", "model.11.conv_block.7.running_mean", "model.11.conv_block.7.running_var", "model.11.conv_block.7.num_batches_tracked", "model.12.conv_block.7.weight", "model.12.conv_block.7.bias", "model.12.conv_block.7.running_mean", "model.12.conv_block.7.running_var", "model.12.conv_block.7.num_batches_tracked", "model.13.conv_block.7.weight", "model.13.conv_block.7.bias", "model.13.conv_block.7.running_mean", "model.13.conv_block.7.running_var", "model.13.conv_block.7.num_batches_tracked", "model.14.conv_block.7.weight", "model.14.conv_block.7.bias", "model.14.conv_block.7.running_mean", "model.14.conv_block.7.running_var", "model.14.conv_block.7.num_batches_tracked", "model.15.conv_block.7.weight", "model.15.conv_block.7.bias", "model.15.conv_block.7.running_mean", "model.15.conv_block.7.running_var", "model.15.conv_block.7.num_batches_tracked", "model.16.conv_block.7.weight", "model.16.conv_block.7.bias", "model.16.conv_block.7.running_mean", "model.16.conv_block.7.running_var", "model.16.conv_block.7.num_batches_tracked", "model.17.conv_block.7.weight", "model.17.conv_block.7.bias", "model.17.conv_block.7.running_mean", "model.17.conv_block.7.running_var", "model.17.conv_block.7.num_batches_tracked", "model.18.conv_block.7.weight", "model.18.conv_block.7.bias", "model.18.conv_block.7.running_mean", "model.18.conv_block.7.running_var", "model.18.conv_block.7.num_batches_tracked".
size mismatch for model.10.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.11.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.12.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.13.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.14.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.15.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.16.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.17.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.18.conv_block.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256]). | closed | 2019-06-11T01:30:46Z | 2021-04-12T08:35:12Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/671 | [] | HareshKarnan | 3 |
autokey/autokey | automation | 43 | NameError: name 'HOMEPAGE_PY3' is not defined | lib/autokey/qtapp.py, line 65:
```diff
     aboutData_py3 = KAboutData(APP_NAME, CATALOG, PROGRAM_NAME_PY3, VERSION, DESCRIPTION_PY3,
-                               LICENSE, COPYRIGHT_PY3, TEXT, HOMEPAGE_PY3, BUG_EMAIL_PY3)
+                               LICENSE, COPYRIGHT_PY3, TEXT)
```
lib/autokey/qtui/configwindow.py, line 1111:
```diff
-        self.__createAction("about", i18n("About AutoKey"), "help-about", self.on_about)
+        self.__createAction("about", i18n("About AutoKey_PY3"), "help-about", self.on_about_py3)
```
| closed | 2016-11-17T15:10:39Z | 2016-11-17T15:22:19Z | https://github.com/autokey/autokey/issues/43 | [] | katodev | 1 |
httpie/http-prompt | api | 206 | Visualize tree structure | I'd like to be able to visualize the tree structure for the api endpoints | open | 2021-09-23T06:54:23Z | 2021-09-23T09:50:58Z | https://github.com/httpie/http-prompt/issues/206 | [] | oyvindbra | 2 |
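A tree view like the one requested can be sketched with a small stdlib helper that nests endpoint path segments into a dict and renders it with indentation (the endpoint paths below are illustrative):

```python
def print_tree(paths):
    """Render a set of endpoint paths as an indented tree string."""
    tree = {}
    for p in paths:
        node = tree
        for part in p.strip("/").split("/"):
            node = node.setdefault(part, {})  # descend, creating levels lazily

    def walk(node, depth=0):
        lines = []
        for name in sorted(node):
            lines.append("  " * depth + name)
            lines.extend(walk(node[name], depth + 1))
        return lines

    return "\n".join(walk(tree))

print(print_tree(["/users/1/repos", "/users/1/orgs", "/repos"]))
# repos
# users
#   1
#     orgs
#     repos
```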
aio-libs-abandoned/aioredis-py | asyncio | 721 | Python 3.8 support | With python 3.8 there are some errors in the test suite:
FAILED tests/connection_test.py::test_connect_tcp_timeout[redis_v5.0.7] - Typ...
FAILED tests/connection_test.py::test_connect_unixsocket_timeout[redis_v5.0.7]
FAILED tests/pool_test.py::test_create_connection_timeout[redis_v5.0.7] - Typ...
FAILED tests/pool_test.py::test_pool_idle_close[redis_v5.0.7] - assert [('aio...
| closed | 2020-03-21T16:38:58Z | 2021-03-18T23:58:42Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/721 | [
"resolved-via-latest"
] | thkukuk | 5 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 15 | Nullable types not detected correctly | I have a column with type `Nullable(Float32)`. When I query this value, it gives me a string value as the output for this column.
The `converters` at https://github.com/xzkostyan/clickhouse-sqlalchemy/blob/fe2c8b7a9ec4dfe6c872091c8c6ba30bbfe50476/src/drivers/http/transport.py#L10 do not support Nullable types
| open | 2018-05-17T09:23:51Z | 2019-05-08T11:55:31Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/15 | [] | AbdealiLoKo | 3 |
gradio-app/gradio | data-science | 10,100 | Improve pending chatbot bubble design | closed | 2024-12-02T23:55:39Z | 2024-12-04T20:41:25Z | https://github.com/gradio-app/gradio/issues/10100 | [
"design"
] | hannahblair | 0 | |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 819 | Evaluate the generated image | Thanks for this awesome work!
I tried to use pix2pix for photo-->label translation on my own dataset, and I want to measure the accuracy of the generated images.
Could you tell me how to do this?
Thanks again! | open | 2019-10-28T08:10:28Z | 2019-11-05T19:54:49Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/819 | [] | lyq893 | 3 |
babysor/MockingBird | pytorch | 555 | closed | closed | closed | 2022-05-15T14:56:05Z | 2023-08-12T05:38:59Z | https://github.com/babysor/MockingBird/issues/555 | [] | zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz-z | 0 |
KrishnaswamyLab/PHATE | data-visualization | 114 | PHATE loadings? | Hello, I have lately been using PHATE for dimensionality reduction and manifold visualization, and I would like to retrieve the loadings of the PHATE components on the features. I tried projecting back from the PHATE space to the feature space to get the loadings, as one would with PCA, and I actually got great correspondence between the PC1 and PHATE1 loadings, which you can see below. \

This is not true for PC2 and PHATE2 loadings though. I am also skeptical of the meaning of the projection I applied, because the PHATE data are supposed to be produced with nonlinear transformations. So, I was wondering if I got such correspondence by chance or if there is some recommended way in general to get the loadings.
On the other hand, I guess that the nonlinearity of the method does not really allow for concepts like loadings since the output is not a linear combination of the features.
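For what it's worth, a common stand-in for loadings with nonlinear embeddings is the per-feature Pearson correlation against each embedding component — a descriptive association measure, not a true back-projection. A minimal pure-Python sketch (toy numbers, not actual PHATE output):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A feature column and a (hypothetical) embedding component over 5 samples:
feature = [1.0, 2.0, 3.0, 4.0, 5.0]
phate1  = [0.1, 0.2, 0.3, 0.4, 0.5]   # perfectly aligned -> correlation 1
print(round(pearson(feature, phate1), 6))  # 1.0
```

Running the same computation over every feature column against each component gives a loadings-like matrix of shape (n_features, n_components).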
Thanks in advance. | closed | 2022-05-25T10:57:17Z | 2022-06-07T16:13:48Z | https://github.com/KrishnaswamyLab/PHATE/issues/114 | [
"question"
] | apathanasiadis | 3 |
sigmavirus24/github3.py | rest-api | 810 | [Meta] Move back to using master | When I started working on 1.0 I made master more of a stable branch and develop the "unstable" branch. With 1.0 out, I think it's time to ditch the current flow and go back to just working directly off of master.
Thoughts @itsmemattchung @omgjlk? | closed | 2018-03-25T16:10:27Z | 2018-07-22T16:43:05Z | https://github.com/sigmavirus24/github3.py/issues/810 | [] | sigmavirus24 | 2 |