| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
mljar/mercury | data-visualization | 279 | ModuleNotFoundError | closed | 2023-05-18T05:49:31Z | 2023-05-18T13:45:42Z | https://github.com/mljar/mercury/issues/279 | [] | max-poltora | 0 | |
gradio-app/gradio | deep-learning | 10,116 | Apps breaking after updating to gradio 5.x when run behind app proxy | ### Describe the bug
After updating from gradio 4.44 to gradio 5.7.1 I observe two breaking changes. I think this is related to me working behind a reverse proxy.
1. Starting from Gradio 5.x, files are not accessible anymore, even if they are in a subfolder of the project root. I updated my code from `file=...` to `/gradio_api/file=...` but it's not working. I think this is related to the `root_path`: the expected behavior is to find the file at `root_path/gradio_api/file=...`, but Gradio tries to load it directly from `/gradio_api/file=...`, which causes Gradio not to find the file. When I manually update the path to `root_path/gradio_api/file=...` the file displays correctly, but I can't do that for all of my users!
2. The streaming ChatInterface is not working anymore. I think this is also related to the reverse proxy, because it works locally.
PS : I can't modify / access the NGINX server parameter
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
Working locally but not behind a reverse proxy.
```python
import gradio as gr

# NOTE: `client` is assumed to be an already-initialized chat inference client
# (e.g. huggingface_hub.InferenceClient); it is not defined in the original report.


def respond(
    message,
    history: list[tuple[str, str]],
    system_message,
    max_tokens,
    temperature,
    top_p,
):
    messages = [{"role": "system", "content": system_message}]
    for val in history:
        if val[0]:
            messages.append({"role": "user", "content": val[0]})
        if val[1]:
            messages.append({"role": "assistant", "content": val[1]})
    messages.append({"role": "user", "content": message})
    response = ""
    for message in client.chat_completion(
        messages,
        max_tokens=max_tokens,
        stream=True,
        temperature=temperature,
        top_p=top_p,
    ):
        token = message.choices[0].delta.content
        response += token
        yield response


with gr.Blocks() as demo:
    gr.HTML("""<img src="/gradio_api/file=logo.png">""")
    gr.ChatInterface(fn=respond)

runurl = ""  # your root path
port = 8888
demo.launch(server_port=port, root_path=runurl, allowed_paths=["."])
```
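The expected URL scheme from point 1 above can be captured in a one-line helper. This is just a sketch of the *expected* behavior described in the report, not part of Gradio's API; the helper name is illustrative:

```python
def expected_file_url(root_path: str, relative_path: str) -> str:
    # Expected behavior per the report: behind a reverse proxy, the file
    # route should be served under the configured root path, i.e.
    # <root_path>/gradio_api/file=<path>
    return f"{root_path.rstrip('/')}/gradio_api/file={relative_path}"

print(expected_file_url("/myapp", "logo.png"))  # /myapp/gradio_api/file=logo.png
```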
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio==5.7.1
Working behind a reverse proxy.
```
### Severity
Blocking usage of gradio | closed | 2024-12-04T12:01:55Z | 2025-01-29T00:24:16Z | https://github.com/gradio-app/gradio/issues/10116 | [
"bug",
"cloud"
] | greg2705 | 4 |
seleniumbase/SeleniumBase | web-scraping | 2,225 | Struggling with Remote Debugging port | I am trying to run a script against a previously opened browser, but I cannot connect to it; instead, my script opens a new browser with each run. The browser opened -> [https://imgur.com/a/afBEAJX](https://imgur.com/a/afBEAJX)
I use this to spawn my browser:
`cd C:\Program Files\Google\Chrome\Application`
`chrome.exe --remote-debugging-port=0249 --user-data-dir="C:\browser"`
My current code:
```python
# simple.py
from seleniumbase import Driver
from seleniumbase.undetected import Chrome
from selenium.webdriver.chrome.options import Options
from seleniumbase.undetected import ChromeOptions

chrome_options = Options()
chrome_options.add_experimental_option("debuggerAddress", "localhost:0249")
web = Driver(browser='chrome', chromium_arg=chrome_options, remote_debug=True)
body = web.find_element("/html/body")
print(body.text)
```
Things I have tried:
- not changing the port (`9222`)
- `pytest simple.py --remote-debug`
- `uc=True` with `uc.ChromeOptions`
- `chrome_options = Options()`
- `chrome_options = ChromeOptions()`
- `browser="remote"`
- removing `chromium_arg` and adding `port="9222"`
- `chromium_arg="remote-debugging-port=9222"`
- the example solution with `SB` in #2049
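Before involving SeleniumBase at all, it can help to confirm that Chrome's DevTools endpoint is actually listening on the chosen port (note that `--remote-debugging-port=0249` is parsed as decimal 249, not 9222). A stdlib-only probe, offered as a hedged sketch:

```python
import json
import urllib.request
import urllib.error


def devtools_version(host="localhost", port=9222):
    """Return Chrome's /json/version payload, or None if nothing is listening."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/json/version", timeout=2) as resp:
            return json.loads(resp.read().decode())
    except (urllib.error.URLError, OSError):
        return None


info = devtools_version(port=249)  # must match the --remote-debugging-port value
print("DevTools reachable:", info is not None)
```

If this prints `False`, no debugger-connection option in SeleniumBase (or plain Selenium) can succeed, so it separates launch problems from attach problems.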
I am converting my large Selenium project into SeleniumBase and being able to test in this way would be invaluable | closed | 2023-10-30T08:04:30Z | 2023-11-02T20:38:14Z | https://github.com/seleniumbase/SeleniumBase/issues/2225 | [
"question",
"UC Mode / CDP Mode"
] | Dylgod | 8 |
noirbizarre/flask-restplus | flask | 674 | Tests not in tarball | It would be helpful to add `tests` dir to manifest. This is mainly for downstream packaging. While the tests can be seen passing in travis it's nice to run the test suite inside each image version that it will be packaged for. This ensures the package runs against the package dependency versions which are provided in each image.
For now pulling the tarball from GitHub does provide the tests dir. | closed | 2019-07-19T15:53:42Z | 2019-10-31T17:46:03Z | https://github.com/noirbizarre/flask-restplus/issues/674 | [] | smarlowucf | 1 |
jupyter/nbviewer | jupyter | 776 | No Output provided by Jupyter Notebook | Whichever input I give, it does not produce any output. Please help.
![error](https://user-images.githubusercontent.com/40686853/42080058-079b0de6-7b9f-11e8-92ed-45a1b8009b26.jpeg)
Please help me.
"type:Question",
"tag:Other Jupyter Project"
] | Ruthz47 | 1 |
OthersideAI/self-operating-computer | automation | 240 | requirements.txt File: dependencies required for the project. |
`requirements.txt` file: dependencies required for the project.
Must be run on Python 3.12.
Location: `C:\Users\user\self-operating-computer\requirements.txt`
Contents: `numpy==1.26.2`
Other libraries might need to be included as well.
```shell
C:\Users\user>pip install numpy==1.26.1
Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement numpy==1.26.1 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1.23.4, 1.23.5, 1.24.0, 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.25.0, 1.25.1, 1.25.2, 1.26.2, 1.26.3, 1.26.4, 2.0.0, 2.0.1, 2.0.2, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.2.0rc1, 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.2.4)
ERROR: No matching distribution found for numpy==1.26.1

C:\Users\user>pip install numpy==1.26.2
Defaulting to user installation because normal site-packages is not writeable
Collecting numpy==1.26.2
  Downloading numpy-1.26.2.tar.gz (15.7 MB)
     ---------------------------------------- 15.7/15.7 MB 2.0 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: numpy
  Building wheel for numpy (pyproject.toml) ... done
  Created wheel for numpy: filename=numpy-1.26.2-cp313-cp313-win_amd64.whl size=20691844 sha256=25adab590be9b12ebdd75af2cb09faf55d83b899e5a1900e7ee402a30e2f6f18
  Stored in directory: c:\users\user\appdata\local\pip\cache\wheels\c9\15\32\8370c1b87f23602d92aa9dd11d143dee8df8b5fc2fdbf2b40b
Successfully built numpy
Installing collected packages: numpy
Successfully installed numpy-1.26.2
```
You also need pip installed; add the pip path to your environment variables.

Add the Scripts directory to PATH:

1. Locate your Python Scripts directory: `C:\Users\user\AppData\Roaming\Python\Python313\Scripts`. This is where `pip.exe` and other Python script executables are located.
2. Add the directory to PATH:
   - Open the Start Menu, search for Environment Variables, and select Edit the system environment variables.
   - In the System Properties window, click Environment Variables.
   - Under User variables, find the PATH variable and click Edit.
   - Click New and paste the path: `C:\Users\user\AppData\Roaming\Python\Python312\Scripts`
   - Click OK to save changes.
3. Verify the PATH update:
   - Restart your Command Prompt or PowerShell for the changes to take effect.
   - Test by running `pip --version`. It should now work without warnings.
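A quick way to check whether the PATH change took effect is to ask Python where (or whether) `pip` resolves on PATH; a stdlib-only sketch:

```python
import shutil


def on_path(executable="pip"):
    """Return the resolved location of `executable` if it is on PATH, else None."""
    return shutil.which(executable)


# None means the Scripts directory is not on PATH yet (or pip isn't installed).
print(on_path("pip"))
```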
Alternative Temporary Fix
If you prefer not to modify your PATH, you can explicitly call pip using its full path:
C:\Users\user\AppData\Roaming\Python\Python312\Scripts\pip install <package> | open | 2025-03-23T07:55:45Z | 2025-03-24T13:42:27Z | https://github.com/OthersideAI/self-operating-computer/issues/240 | [] | sprinteroz | 0 |
graphql-python/graphene-django | graphql | 1,291 | `DjangoObjectType` using the same django model do not resolve to correct relay object | > [!NOTE]
> This issue is a duplicate of #971 but includes a full description for searchability and links to history on the tracker itself.
## What is the Current Behavior?
Assume a fixed schema with two (or more) different GraphQL object types using `graphene_django.DjangoObjectType` linked to the same Django model:
```python
import graphene_django

from .models import Org as OrgModel


class Org(graphene_django.DjangoObjectType):
    class Meta:
        model = OrgModel
        fields = (
            "id",
            "name",
            "billing"
        )


class AnonymousOrg(graphene_django.DjangoObjectType):
    class Meta:
        model = OrgModel
        fields = (
            "id",
            "name",
        )
```
Assume a query to `Org` of ID `7eca71ed-ff04-4473-9fd1-0a587705f885`.
```js
btoa('Org:7eca71ed-ff04-4473-9fd1-0a587705f885')
'T3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ=='
```
```graphql
{
node(id: "T3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ==") {
id
__typename
... on Org {
id
}
}
}
```
Response (incorrect):
```js
{
"data": {
"node": {
"id": "QW5vbnltb3VzT3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ==",
"__typename": "AnonymousOrg"
}
}
}
```
It returns the other object type `'AnonymousOrg:7eca71ed-ff04-4473-9fd1-0a587705f885'`, despite the relay ID specifying it was an `Org` object.
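For reference, these opaque Relay global IDs are nothing more than base64 of `"<TypeName>:<id>"`; a stdlib-only sketch mirroring what `graphql_relay`'s `to_global_id`/`from_global_id` compute:

```python
import base64


def to_global_id(type_name: str, obj_id: str) -> str:
    # Relay global IDs are base64("<TypeName>:<id>")
    return base64.b64encode(f"{type_name}:{obj_id}".encode()).decode()


def from_global_id(global_id: str) -> tuple[str, str]:
    type_name, _, obj_id = base64.b64decode(global_id).decode().partition(":")
    return type_name, obj_id


gid = to_global_id("Org", "7eca71ed-ff04-4473-9fd1-0a587705f885")
print(gid)                  # T3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ==
print(from_global_id(gid))  # ('Org', '7eca71ed-ff04-4473-9fd1-0a587705f885')
```

So the requested type is unambiguously present in the ID; the bug is that `graphene_django` ignores it when several object types share the same model.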
## What is the Expected Behavior?
Should return the object type specified in the relay ID.
Return (expected):
```js
{
"data": {
"node": {
"id": "T3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ==",
"__typename": "Org"
}
}
}
```
## Motivation / Use Case for Changing the Behavior
- For `node(id: "")` based queries to handle object types based on the same Django model.
- To resolve miscommunication and confusion between other issues and StackOverflow.
## Environment
- Version: 2.4.0
- Platform: graphene 2.1.4
## History
- **May 24, 2020**: Issue #971 posted, merely linking a complete description. While it's good to have a recreation, the lack of an inline description effectively made it unsearchable and hidden to many trying to look it up (StackOverflow posts and comments are being made, and none of them cite any bug).
- **Feb 2, 2017**: PR #104 by @Tritlo.
- **Feb 6, 2017**: Bug reported by @nickhudkins #107.
- **Feb 12, 2017**: #107 [closed](https://github.com/graphql-python/graphene-django/issues/107#issuecomment-279243056) by @syrusakbary:
> Right now you can make this work with using a new registry for the second definition.
>
> ```python
> from graphene_django.registry import Registry
>
> class ThingB(DjangoObjectType):
> class Meta:
> registry = Registry()
> ```
>
> Also, this issue #104 might be related :)
- **Feb 20, 2017**: Replaced by #115 by @syrusakbary:
Merged to master https://github.com/graphql-python/graphene-django/commit/c635db5e5a83bb777c99514f06e3c906163eb57b.
However, no history of it remains in trunk. It seems to have been rebased out of master without any revert or explanation: [docs/registry.rst](https://github.com/graphql-python/graphene-django/commits/main/docs/registry.rst) is removed.
It's not clear what the registry does, but it looks like different issues are being conflated with this one.
When a relay ID is passed, it should return the object of the type encoded in the ID, e.g.
```js
btoa('Org:7eca71ed-ff04-4473-9fd1-0a587705f885')
'T3JnOjdlY2E3MWVkLWZmMDQtNDQ3My05ZmQxLTBhNTg3NzA1Zjg4NQ=='
```
This would return the GraphQL type `Org`. But instead it is non-deterministic: it will return _any_ GraphQL object type using the same model and disregard the type encoded in the ID.
## Other
- StackOverflow question: https://stackoverflow.com/questions/70826464/graphene-django-determine-object-type-when-multiple-graphql-object-types-use-th
## Workaround
### Graphene 2
#### Version 1
@boolangery [posted a workaround](https://github.com/graphql-python/graphene-django/issues/971#issuecomment-633507631) on May 25, 2020:
```python
class FixRelayNodeResolutionMixin:
    @classmethod
    def get_node(cls, info, pk):
        instance = super(FixRelayNodeResolutionMixin, cls).get_node(info, pk)
        setattr(instance, "graphql_type", cls.__name__)
        return instance

    @classmethod
    def is_type_of(cls, root, info):
        if hasattr(root, "graphql_type"):
            return getattr(root, "graphql_type") == cls.__name__
        return super(FixRelayNodeResolutionMixin, cls).is_type_of(root, info)


class PublicUserType(FixRelayNodeResolutionMixin, DjangoObjectType):
    class Meta:
        model = User
        interfaces = (graphene.relay.Node,)
        fields = ['id', 'first_name', 'last_name']


class UserType(FixRelayNodeResolutionMixin, DjangoObjectType):
    class Meta:
        model = User
        interfaces = (graphene.relay.Node,)
        fields = ['id', 'first_name', 'last_name', 'profile']
```
#### Version 2
```python
class FixRelayNodeResolutionMixin:
    """
    Fix issue where DjangoObjectType using same model aren't returned in node(id: )

    WARNING: This needs to be listed _before_ SecureDjangoObjectType when inherited.

    Credit: https://github.com/graphql-python/graphene-django/issues/971#issuecomment-633507631
    Bug: https://github.com/graphql-python/graphene-django/issues/1291
    """

    @classmethod
    def is_type_of(cls, root: Any, info: graphene.ResolveInfo) -> bool:
        # Special handling for the Relay `Node`-field, which lives at the root
        # of the schema. Inside the `graphene_django` type resolution logic
        # we have very little type information available, and therefore it'll
        # often resolve to an incorrect type. For example, a query for `Book:<UUID>`
        # would return a `LibraryBook`-object, because `graphene_django` simply
        # looks at `LibraryBook._meta.model` and sees that it is a `Book`.
        #
        # Here we use the `id` variable from the query to figure out which type
        # to return.
        #
        # See: https://github.com/graphql-python/graphene-django/issues/1291

        # Check if the current path is evaluating a relay Node field
        if info.path == ['node'] and info.field_asts:
            # Support variable keys other than id. E.g., 'node(id: $userId)'
            # Since `node(id: ...)` is a standard relay idiom we can depend on `id` being present
            # and the value field's name being the key we need from info.variable_values.
            argument_nodes = info.field_asts[0].arguments
            if argument_nodes:
                for arg in argument_nodes:
                    if arg.name.value == 'id':
                        # Catch direct ID lookups, e.g. 'node(id: "U3RvcmU6MQ==")'
                        if isinstance(arg.value, graphql.language.ast.StringValue):
                            global_id = arg.value.value
                            _type, _id = from_global_id(global_id)
                            return _type == cls.__name__
                        # Catch variable lookups, e.g. 'node(id: $projectId)'
                        variable_name = arg.value.name.value
                        if variable_name in info.variable_values:
                            global_id = info.variable_values[variable_name]
                            _type, _id = from_global_id(global_id)
                            return _type == cls.__name__
        return super().is_type_of(root, info)
```
### Graphene 3
via August 19th, 2024, adaptation of above:
```python
class FixRelayNodeResolutionMixin:
    """
    Fix issue where DjangoObjectType using same model aren't returned in node(id: )

    Credit: https://github.com/graphql-python/graphene-django/issues/971#issuecomment-633507631
    Bug: https://github.com/graphql-python/graphene-django/issues/1291
    """

    @classmethod
    def is_type_of(cls, root: Any, info: graphene.ResolveInfo) -> bool:
        # Special handling for the Relay `Node`-field, which lives at the root
        # of the schema. Inside the `graphene_django` type resolution logic
        # we have very little type information available, and therefore it'll
        # often resolve to an incorrect type. For example, a query for `Book:<UUID>`
        # would return a `LibraryBook`-object, because `graphene_django` simply
        # looks at `LibraryBook._meta.model` and sees that it is a `Book`.
        #
        # Here we use the `id` variable from the query to figure out which type
        # to return.
        #
        # See: https://github.com/graphql-python/graphene-django/issues/1291

        # Check if the current path is evaluating a relay Node field
        if info.path.as_list() == ['node'] and info.field_nodes:
            # Support variable keys other than id. E.g., 'node(id: $userId)'
            # Since `node(id: ...)` is a standard relay idiom we can depend on `id` being present
            # and the value field's name being the key we need from info.variable_values.
            argument_nodes = info.field_nodes[0].arguments
            if argument_nodes:
                for arg in argument_nodes:
                    if arg.name.value == 'id':
                        # Catch direct ID lookups, e.g. 'node(id: "U3RvcmU6MQ==")'
                        if isinstance(arg.value, graphql.language.ast.StringValueNode):
                            global_id = arg.value.value
                            _type, _id = from_global_id(global_id)
                            return _type == cls.__name__
                        # Catch variable lookups, e.g. 'node(id: $projectId)'
                        variable_name = arg.value.name.value
                        if variable_name in info.variable_values:
                            global_id = info.variable_values[variable_name]
                            _type, _id = from_global_id(global_id)
                            return _type == cls.__name__
        return super().is_type_of(root, info)
``` | open | 2022-01-23T22:47:56Z | 2024-08-19T15:42:51Z | https://github.com/graphql-python/graphene-django/issues/1291 | [
"🐛bug"
] | tony | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 20,480 | Model diverges or struggles to converge with complex-valued tensors in DDP | ### Bug description
Hello,
I am using Lightning to train complex-valued neural networks with complex-valued tensors. When I use single-GPU training, there is no issue. When I train on multiple GPUs with DDP, my training diverges. Even when I train on only one GPU while still declaring `strategy='ddp'` in the trainer, the training also diverges.
I've tried to reproduce the issue with the code sample below. The MNIST dataset and the model defined in this sample are simpler than those in my current work, so the model won't diverge outright, but it really struggles to converge. To check whether the issue happens, just comment out the `strategy='ddp'` line in the trainer.
This seems to be related to [#55375](https://github.com/pytorch/pytorch/issues/55375) and [#60931](https://github.com/pytorch/pytorch/issues/60931)
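For context, DDP averages each gradient across ranks, and for complex parameters that average must be taken component-wise on the real and imaginary parts (the subject of the linked PyTorch issues). A dependency-free sketch of the expected reduction, using Python's built-in complex numbers:

```python
def allreduce_mean(grads_per_rank):
    """Average one (complex) gradient value across ranks, component-wise.

    This is the result a correct all-reduce of complex gradients should give;
    if DDP's collective mishandles the complex dtype, training can diverge.
    """
    n = len(grads_per_rank)
    total = sum(grads_per_rank)  # Python complex sums component-wise
    return total / n


ranks = [1 + 2j, 3 - 4j]
print(allreduce_mean(ranks))  # (2-1j)
```

A common workaround in real code is to communicate complex tensors as their real view (`torch.view_as_real`) so collectives only ever see float tensors.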
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
from typing import List

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms.v2 as v2_transforms
import lightning as L
import torchcvnn.nn as c_nn
from torchmetrics.classification import Accuracy
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor, ModelCheckpoint
from lightning.pytorch.loggers import TensorBoardLogger
from lightning.pytorch.callbacks.progress import TQDMProgressBar
from lightning.pytorch.callbacks.progress.tqdm_progress import Tqdm
from lightning.pytorch.utilities import rank_zero_only


def conv_block(in_c: int, out_c: int, cdtype: torch.dtype) -> List[nn.Module]:
    return [
        nn.Conv2d(in_c, out_c, kernel_size=3, stride=1, padding=1, dtype=cdtype),
        c_nn.BatchNorm2d(out_c),
        c_nn.Cardioid(),
        nn.Conv2d(out_c, out_c, kernel_size=3, stride=1, padding=1, dtype=cdtype),
        c_nn.BatchNorm2d(out_c),
        c_nn.Cardioid(),
        c_nn.AvgPool2d(kernel_size=2, stride=2, padding=0),
    ]


class TBLogger(TensorBoardLogger):
    @rank_zero_only
    def log_metrics(self, metrics, step):
        metrics.pop('epoch', None)
        metrics = {k: v for k, v in metrics.items() if ('step' not in k) and ('val' not in k)}
        return super().log_metrics(metrics, step)


class CustomProgressBar(TQDMProgressBar):
    def get_metrics(self, trainer, model):
        items = super().get_metrics(trainer, model)
        items.pop("v_num", None)
        return items

    def init_train_tqdm(self) -> Tqdm:
        """Override this to customize the tqdm bar for training."""
        bar = super().init_train_tqdm()
        bar.ascii = ' >'
        return bar

    def init_validation_tqdm(self):
        bar = super().init_validation_tqdm()
        bar.ascii = ' >'
        return bar


class cMNISTModel(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.ce_loss = nn.CrossEntropyLoss()
        self.model = self.configure_model()
        self.accuracy = Accuracy(task='multiclass', num_classes=10)
        self.train_step_outputs = {}
        self.valid_step_outputs = {}

    def configure_model(self):
        conv_model = nn.Sequential(
            *conv_block(1, 16, torch.complex64),
            *conv_block(16, 16, torch.complex64),
            *conv_block(16, 32, torch.complex64),
            *conv_block(32, 32, torch.complex64),
            nn.Flatten(),
        )
        with torch.no_grad():
            conv_model.eval()
            dummy_input = torch.zeros((64, 1, 28, 28), dtype=torch.complex64, requires_grad=False)
            out_conv = conv_model(dummy_input).view(64, -1)
        lin_model = nn.Sequential(
            nn.Linear(out_conv.shape[-1], 124, dtype=torch.complex64),
            c_nn.Cardioid(),
            nn.Linear(124, 10, dtype=torch.complex64),
            c_nn.Mod(),
        )
        return nn.Sequential(conv_model, lin_model)

    def forward(self, x):
        return self.model(x)

    def configure_optimizers(self):
        return torch.optim.Adam(params=self.parameters(), lr=3e-4)

    def training_step(self, batch, batch_idx):
        data, label = batch
        logits = self(data)
        loss = self.ce_loss(logits, label)
        acc = self.accuracy(logits, label)
        self.log('step_loss', loss, prog_bar=True, sync_dist=True)
        self.log('step_metrics', acc, prog_bar=True, sync_dist=True)
        if not self.train_step_outputs:
            self.train_step_outputs = {
                'step_loss': [loss],
                'step_metrics': [acc]
            }
        else:
            self.train_step_outputs['step_loss'].append(loss)
            self.train_step_outputs['step_metrics'].append(acc)
        return loss

    def validation_step(self, batch: torch.Tensor, batch_idx: int):
        images, labels = batch
        logits = self(images)
        loss = self.ce_loss(logits, labels)
        acc = self.accuracy(logits, labels)
        self.log('step_loss', loss, prog_bar=True, sync_dist=True)
        self.log('step_metrics', acc, prog_bar=True, sync_dist=True)
        if not self.valid_step_outputs:
            self.valid_step_outputs = {
                'step_loss': [loss],
                'step_metrics': [acc]
            }
        else:
            self.valid_step_outputs['step_loss'].append(loss)
            self.valid_step_outputs['step_metrics'].append(acc)

    def on_train_epoch_end(self) -> None:
        _log_dict = {
            'Loss/loss': torch.tensor(self.train_step_outputs['step_loss']).mean(),
            'Metrics/accuracy': torch.tensor(self.train_step_outputs['step_metrics']).mean()
        }
        self.loggers[0].log_metrics(_log_dict, self.current_epoch)
        self.train_step_outputs.clear()

    def on_validation_epoch_end(self) -> None:
        mean_loss_value = torch.tensor(self.valid_step_outputs['step_loss']).mean()
        mean_metrics_value = torch.tensor(self.valid_step_outputs['step_metrics']).mean()
        _log_dict = {
            'Loss/loss': mean_loss_value,
            'Metrics/accuracy': mean_metrics_value
        }
        self.loggers[1].log_metrics(_log_dict, self.current_epoch)
        self.log('val_loss', mean_loss_value, sync_dist=True)
        self.log('val_Accuracy', mean_metrics_value, sync_dist=True)
        self.valid_step_outputs.clear()


def train():
    batch_size = 64
    epochs = 10

    torch.set_float32_matmul_precision('high')

    # Dataloading
    train_dataset = torchvision.datasets.MNIST(
        root="./data",
        train=True,
        download=True,
        transform=v2_transforms.Compose([v2_transforms.PILToTensor(), torch.fft.fft]),
    )
    valid_dataset = torchvision.datasets.MNIST(
        root="./data",
        train=False,
        download=True,
        transform=v2_transforms.Compose([v2_transforms.PILToTensor(), torch.fft.fft]),
    )

    # Train dataloader
    train_loader = torch.utils.data.DataLoader(
        train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=4,
        persistent_workers=True,
        pin_memory=True
    )
    # Valid dataloader
    valid_loader = torch.utils.data.DataLoader(
        valid_dataset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=4,
        persistent_workers=True,
        pin_memory=True
    )

    model = cMNISTModel()
    trainer = L.Trainer(
        max_epochs=epochs,
        strategy='ddp_find_unused_parameters_true',
        num_sanity_val_steps=0,
        benchmark=True,
        enable_checkpointing=True,
        callbacks=[
            CustomProgressBar(),
            EarlyStopping(
                monitor='val_loss',
                verbose=True,
                patience=5,
                min_delta=0.005
            ),
            LearningRateMonitor(logging_interval='epoch'),
            ModelCheckpoint(
                dirpath='weights_storage_/',
                monitor='val_Accuracy',
                verbose=True,
                mode='max'
            )
        ],
        logger=[
            TBLogger('training_logs_', name=None, sub_dir='train'),
            TBLogger('training_logs_', name=None, sub_dir='valid')
        ]
    )
    trainer.fit(model, train_dataloaders=train_loader, val_dataloaders=valid_loader)


if __name__ == "__main__":
    train()
```
### Error messages and logs
_No response_
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version: 2.4.0
#- PyTorch Version: 2.5.1
#- Python version: 3.12.7
#- OS: Linux Ubuntu 24.04.1 or Slurm
#- CUDA/cuDNN version: 12.4
#- GPU models and configuration: RTX 4090 (Ubuntu pc), NVIDIA A100 40G (Slurm)
#- How you installed Lightning: pip
```
</details>
### More info
[@jeremyfix](https://github.com/jeremyfix) [@QuentinGABOT](https://github.com/QuentinGABOT) might also be interested in this issue | open | 2024-12-09T13:19:49Z | 2025-01-31T07:50:21Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20480 | [
"bug",
"3rd party",
"ver: 2.4.x"
] | ouioui199 | 5 |
svc-develop-team/so-vits-svc | deep-learning | 101 | During training, GPU utilization is very low even though VRAM can be fully used. How can I run with more threads? (I tried changing num_works to 24, but it was slower than before.) | closed | 2023-03-29T03:03:57Z | 2023-04-07T12:31:18Z | https://github.com/svc-develop-team/so-vits-svc/issues/101 | [
"not urgent"
] | TQG1997 | 6 | |
huggingface/transformers | machine-learning | 36,231 | Add Evolla model | ### Model description
# Model Name: Evolla
## Model Specifications
* **Model Type:** Protein-language generative model
* **Parameters:** 80 billion
* **Training Data:** AI-generated dataset with 546 million protein question-answer pairs and 150 billion word tokens
## Architecture
Multimodal model integrating a protein language model (PLM) as the encoder, a large language model (LLM) as the decoder, and a sequence compressor/aligner module.
## Key Features
- Decodes the molecular language of proteins through natural language dialogue
- Generates precise, contextually nuanced insights into protein function
- Trained on extensive data to capture protein complexity and functional diversity
## Applications
* **Protein Function Annotation:** Provides detailed functional insights for proteins
* **Enzyme Commission (EC) Number Prediction:** Assists in classifying enzymatic activities
* **Gene Ontology (GO) Annotation:** Helps in understanding protein roles in biological processes
* **Subcellular Localization Prediction:** Predicts where proteins are located within a cell
* **Disease Association Analysis:** Identifies potential links between proteins and diseases
* **Other Protein Function Characterization Tasks:** Supports various research needs in proteomics and functional genomics
## Performance
Demonstrates expert-level insights, advancing research in proteomics and functional genomics.
## Availability
* **Evolla-10B Weights:** [Hugging Face](https://huggingface.co/westlake-repl/Evolla-10B)
* **Code Repository:** [GitHub](https://github.com/westlake-repl/Evolla)
* **Webserver:** [Chat-Protein](http://www.chat-protein.com/)
## License
MIT License
## Contact
For inquiries, contact the corresponding author(s) via email (e.g., yuanfajie@westlake.edu.cn).
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
_No response_ | open | 2025-02-17T10:18:40Z | 2025-02-17T10:18:40Z | https://github.com/huggingface/transformers/issues/36231 | [
"New model"
] | zhoubay | 0 |
FlareSolverr/FlareSolverr | api | 1,451 | 500 internal server error : YggTorrent | ### Have you checked our README?
- [x] I have checked the README
### Have you followed our Troubleshooting?
- [x] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [x] I have checked older issues, open and closed
### Have you checked the discussions?
- [x] I have read the Discussions
### Have you ACTUALLY checked all these?
YES
### Environment
```markdown
- FlareSolverr version: nodriver
- Last working FlareSolverr version: nodriver
- Operating system:
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue: ygg.re
```
### Description
Hello,
Since this morning I can't use YggTorrent with FlareSolverr; I get the error below.
What can I do?
### Logged Error Messages
```text
2025-02-22 17:49:17 ERROR ReqId 23200974116544 Error: Error solving the challenge. Timeout after 120.0 seconds.
2025-02-22 17:49:17 DEBUG ReqId 23200974116544 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 120.0 seconds.', 'startTimestamp': 1740242836860, 'endTimestamp': 1740242957223, 'version': '3.4.0'}
2025-02-22 17:49:17 INFO ReqId 23200974116544 Response in 120.363 s
2025-02-22 17:49:17 INFO ReqId 23200974116544 192.168.1.61 POST http://192.168.1.61:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_ | closed | 2025-02-22T16:53:19Z | 2025-02-23T09:44:04Z | https://github.com/FlareSolverr/FlareSolverr/issues/1451 | [] | bajire72 | 6 |
onnx/onnx | scikit-learn | 6,339 | Python 3.13 support | Python 3.13 is going to be released in October. | closed | 2024-09-02T13:36:04Z | 2025-03-17T18:08:22Z | https://github.com/onnx/onnx/issues/6339 | [
"contributions welcome"
] | justinchuby | 6 |
django-import-export/django-import-export | django | 1,901 | Handle confirm_form validation errors gracefully | **Describe the bug**
In case the `confirm_form` is overridden in the admin (as described [here](https://django-import-export.readthedocs.io/en/latest/admin_integration.html#customize-admin-import-forms)) and the form is invalid, an unhelpful exception is currently raised. Example:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/usr/local/lib/python3.10/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.10/site-packages/django/utils/decorators.py", line 188, in _view_wrapper
result = _process_exception(request, e)
File "/usr/local/lib/python3.10/site-packages/django/utils/decorators.py", line 186, in _view_wrapper
response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/django/views/decorators/cache.py", line 81, in _view_wrapper
add_never_cache_headers(response)
File "/usr/local/lib/python3.10/site-packages/django/utils/cache.py", line 293, in add_never_cache_headers
patch_response_headers(response, cache_timeout=-1)
File "/usr/local/lib/python3.10/site-packages/django/utils/cache.py", line 284, in patch_response_headers
if not response.has_header("Expires"):
Exception Type: AttributeError at /main/somemodel/process_import/
Exception Value: 'NoneType' object has no attribute 'has_header'
```
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the [Customize admin import forms](https://django-import-export.readthedocs.io/en/latest/admin_integration.html#customize-admin-import-forms) tutorial
2. Add a field to the confirmation form that can be filled with invalid data by the user
3. Submit the form with invalid data
4. Exception occurs
**Expected behavior**
The proper way of handling this would be to redirect the user back to the confirm form, with the appropriate validation errors displayed.
**Additional context**
The root cause is that the `confirm_form.is_valid() == False` case is not handled; on this path, `None` is implicitly returned from the view method.
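To make the failure mode concrete, here is a framework-free sketch of why falling through `is_valid()` hands `None` back to the middleware (the class and function names below are invented for illustration, not django-import-export's actual API):

```python
# Minimal stand-in for a bound form with one required extra field.
class FakeForm:
    def __init__(self, data):
        self.data = data
        self.errors = {}

    def is_valid(self):
        if not self.data.get("extra_field"):
            self.errors["extra_field"] = ["This field is required."]
        return not self.errors


def buggy_view(form):
    if form.is_valid():
        return "import-result-response"
    # falls through: implicitly returns None, which later blows up
    # in middleware with "'NoneType' object has no attribute 'has_header'"


def fixed_view(form):
    if form.is_valid():
        return "import-result-response"
    # redisplay the confirm step with the bound validation errors
    return ("confirm-template", form.errors)


assert buggy_view(FakeForm({})) is None
assert fixed_view(FakeForm({})) == (
    "confirm-template", {"extra_field": ["This field is required."]}
)
assert fixed_view(FakeForm({"extra_field": "x"})) == "import-result-response"
```

The same shape of fix (render the confirm template with the invalid form instead of returning nothing) should apply inside the linked view method.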
https://github.com/django-import-export/django-import-export/blob/main/import_export/admin.py#L157-L178 | closed | 2024-07-09T22:16:46Z | 2024-07-20T19:30:06Z | https://github.com/django-import-export/django-import-export/issues/1901 | [
"bug"
] | 19greg96 | 10 |
liangliangyy/DjangoBlog | django | 418 | Elasticsearch 7.0 has problems: doc_type is not supported | <!--
If you do not check the items below carefully, I may close your issue directly.
Before asking, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [ ] [The DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [ ] [The configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [ ] [Other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am filing** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] New feature or functionality request
- [ ] Technical support request
| closed | 2020-07-13T09:52:43Z | 2021-08-31T05:50:52Z | https://github.com/liangliangyy/DjangoBlog/issues/418 | [] | niweiwei789 | 0 |
PaddlePaddle/models | computer-vision | 5,737 | Compiled with WITH_GPU, but no GPU found in runtime | 
FROM paddlepaddle/paddle:2.4.2-gpu-cuda11.7-cudnn8.4-trt8.4
I have used the above image as the base image.
RUN python -m pip install --no-cache-dir paddlepaddle-gpu==2.4.2.post117 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html
I am using the above version of Paddle only because I get an error during export to ONNX with other versions. https://github.com/PaddlePaddle/Paddle2ONNX/issues/1147
The code runs fine on my local GPU, but on an ml.p2.xlarge instance with AWS SageMaker Docker I get the above error. I have tried many combinations of images and still hit the same issue. Can you help me with this? | open | 2023-09-11T06:15:40Z | 2024-02-26T05:07:42Z | https://github.com/PaddlePaddle/models/issues/5737 | [] | mahesh11T | 0 |
polakowo/vectorbt | data-visualization | 121 | Simulating on different price series? | Hi @polakowo, first of all, thanks for this great simulator. I have tried both this and backtesting.py, and I can say that vbt is by far more flexible and faster.
Now, to my point: in the `simulate_best_params` method of the _WalkForwardOptimization_ example:
https://github.com/polakowo/vectorbt/blob/5fe7e0e6e485f58c15a2474056602940c2c859c7/examples/WalkForwardOptimization.ipynb#L440-L446
I believe that the code should test over the variable `price` series in lines _441_ and _442_.
Although the results are the same in this case, it might confuse other developers, like me :)
```
fast_ma = vbt.MA.run(price, window=best_fast_windows, per_column=True)
slow_ma = vbt.MA.run(price, window=best_slow_windows, per_column=True)
```
Is this correct?
Regards | closed | 2021-04-04T22:27:00Z | 2021-04-06T19:45:28Z | https://github.com/polakowo/vectorbt/issues/121 | [] | emiliobasualdo | 1 |
graphql-python/graphene-django | django | 920 | Make graphene.Decimal consistent to models.DecimalField | Django's [DecimalField](https://docs.djangoproject.com/en/3.0/ref/models/fields/#decimalfield) `class DecimalField(max_digits=None, decimal_places=None, **options)` with `max_digits` and `decimal_places` is super useful. `graphene.Decimal` does not yet seem to support defining the number of digits before and after the decimal point. What about making `graphene.Decimal` more consistent with `DecimalField`? | open | 2020-04-03T15:02:51Z | 2022-07-04T09:38:36Z | https://github.com/graphql-python/graphene-django/issues/920 | [
"wontfix"
] | fkromer | 5 |
keras-rl/keras-rl | tensorflow | 376 | ValueError: probabilities contain NaN in policy.py | Hey community,
I made an environment with OpenAI Gym and now I am trying different settings and agents.
I started with the agent from the dqn_cartpole example (https://github.com/wau/keras-rl2/blob/master/examples/dqn_cartpole.py). At some point the calculation of the q-values failed because of a NaN value. I added my traceback and my small changes to the settings below.
My settings in comparison to the dqn_cartpole example:
Dense layers: instead of 16, 16, 16 I chose 256, 64, 16

```python
policy = BoltzmannQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=50000, target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])
dqn.fit(env, nb_steps=500000, visualize=False, verbose=2)
```
• Last training episode before error: 497280/500000: episode: 2960, duration: 13.926s, episode steps: 168, steps per second: 12, episode reward: 47056.579, mean reward: 280.099 [-10229.000, 8998.000], mean action: 45.298 [0.000, 96.000], loss: 60564033920565248.000000, mae: 3245972224.000000, mean_q: 3358134016.000000
```
Traceback (most recent call last):
  File "~environment.py", line 125, in <module>
    dqn.fit(env, nb_steps=500000, visualize=False, verbose=2)
  File "~\python_env\lib\site-packages\rl\core.py", line 169, in fit
    action = self.forward(observation)
  File "~\python_env\lib\site-packages\rl\agents\dqn.py", line 227, in forward
    action = self.policy.select_action(q_values=q_values)
  File "~\python_env\lib\site-packages\rl\policy.py", line 227, in select_action
    action = np.random.choice(range(nb_actions), p=probs)
  File "mtrand.pyx", line 928, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN
```
I do not get this error when I am using EpsGreedyQPolicy. Is there any way to understand why NaNs are produced and how to avoid them?
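For what it's worth, the q-values in the log above (mean_q ≈ 3.4e9, loss ≈ 6e16) suggest the network diverged, and at those magnitudes a plain Boltzmann softmax overflows `exp` and yields NaN probabilities. Below is a small numeric sketch of that effect and of the usual max-subtraction fix; it is not keras-rl's implementation (which, as far as I can tell, clips `q/tau` before exponentiating, so persistent NaNs usually mean the q-values are already NaN). Rescaling the rewards (they reach ±10000 here) is another common remedy:

```python
import numpy as np

def naive_boltzmann(q_values, tau=1.0):
    # exp() overflows to inf for huge q, and inf / sum(inf) -> nan
    with np.errstate(over="ignore", invalid="ignore"):
        exp_values = np.exp(q_values / tau)
        return exp_values / np.sum(exp_values)

def stable_boltzmann(q_values, tau=1.0):
    z = q_values / tau
    z = z - np.max(z)          # shift so the largest exponent is exactly 0
    exp_values = np.exp(z)
    return exp_values / np.sum(exp_values)

q = np.array([3.0e9, 2.9e9, 1.0e9])   # magnitudes like the logged mean_q
assert np.isnan(naive_boltzmann(q)).any()
probs = stable_boltzmann(q)
assert not np.isnan(probs).any()
assert abs(float(probs.sum()) - 1.0) < 1e-9
```

The shift does not change the resulting distribution, only the floating-point range of the intermediate values.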
Kind regards, Jonas
| closed | 2021-06-07T09:34:36Z | 2023-06-20T21:22:04Z | https://github.com/keras-rl/keras-rl/issues/376 | [
"wontfix"
] | ghost | 5 |
sinaptik-ai/pandas-ai | data-visualization | 1,340 | _is_malicious_code doesn't look for whole word | ### System Info
Pandas AI version: 2.2.14
Python Version: 3.10.0
### 🐛 Describe the bug
I was trying to run a query where I had mentioned OSE, and I got the error
```
"Code shouldn't use 'os', 'io' or 'chr', 'b64decode' functions as this could lead to malicious code execution."
```
So I went to [code_cleaning.py](https://github.com/Sinaptik-AI/pandas-ai/blob/main/pandasai/pipelines/chat/code_cleaning.py) and saw this line of code
```python
return any(module in code for module in dangerous_modules)
```
This was looking for just the presence of the substring instead of matching the whole word,
so it returned true for OSE, because it contains "os".
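A possible fix (a sketch reusing the same `dangerous_modules` list; the function names here are made up for illustration) is to match whole identifiers with a regex word boundary instead of a bare substring:

```python
import re

dangerous_modules = ["os", "io", "chr", "b64decode"]

def is_malicious_substring(code: str) -> bool:
    # current behavior: bare substring check
    return any(module in code for module in dangerous_modules)

def is_malicious_word(code: str) -> bool:
    # proposed behavior: flag whole identifiers only
    pattern = r"\b(?:%s)\b" % "|".join(map(re.escape, dangerous_modules))
    return re.search(pattern, code) is not None

# "close" contains the substring "os", so the current check flags harmless code
assert is_malicious_substring("avg = df['close'].mean()")
assert not is_malicious_word("avg = df['close'].mean()")
# real uses of the module are still caught
assert is_malicious_word("import os\nos.system('ls')")
```

`re.escape` keeps the pattern safe if entries like `b64decode` ever grow regex metacharacters.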
| closed | 2024-08-28T14:37:16Z | 2024-12-19T08:35:12Z | https://github.com/sinaptik-ai/pandas-ai/issues/1340 | [
"bug"
] | shoebham | 5 |
assafelovic/gpt-researcher | automation | 1,210 | ModuleNotFoundError for the module zendriver while attempting to import it in the file nodriver_scraper.py | Full error:
gpt-researcher-1 | Traceback (most recent call last):
gpt-researcher-1 | File "/usr/local/bin/uvicorn", line 8, in <module>
gpt-researcher-1 | sys.exit(main())
gpt-researcher-1 | ^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
gpt-researcher-1 | return self.main(*args, **kwargs)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1082, in main
gpt-researcher-1 | rv = self.invoke(ctx)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
gpt-researcher-1 | return ctx.invoke(self.callback, **ctx.params)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/click/core.py", line 788, in invoke
gpt-researcher-1 | return __callback(*args, **kwargs)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 412, in main
gpt-researcher-1 | run(
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 579, in run
gpt-researcher-1 | server.run()
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in run
gpt-researcher-1 | return asyncio.run(self.serve(sockets=sockets))
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
gpt-researcher-1 | return runner.run(main)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
gpt-researcher-1 | return self._loop.run_until_complete(task)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
gpt-researcher-1 | return future.result()
gpt-researcher-1 | ^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 70, in serve
gpt-researcher-1 | await self._serve(sockets)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 77, in _serve
gpt-researcher-1 | config.load()
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 435, in load
gpt-researcher-1 | self.loaded_app = import_from_string(self.app)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 22, in import_from_string
gpt-researcher-1 | raise exc from None
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 19, in import_from_string
gpt-researcher-1 | module = importlib.import_module(module_str)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
gpt-researcher-1 | return _bootstrap._gcd_import(name[level:], package, level)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
gpt-researcher-1 | File "<frozen importlib._bootstrap_external>", line 940, in exec_module
gpt-researcher-1 | File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
gpt-researcher-1 | File "/usr/src/app/main.py", line 31, in <module>
gpt-researcher-1 | from backend.server.server import app
gpt-researcher-1 | File "/usr/src/app/backend/__init__.py", line 1, in <module>
gpt-researcher-1 | from multi_agents import agents
gpt-researcher-1 | File "/usr/src/app/multi_agents/__init__.py", line 3, in <module>
gpt-researcher-1 | from .agents import (
gpt-researcher-1 | File "/usr/src/app/multi_agents/agents/__init__.py", line 1, in <module>
gpt-researcher-1 | from .researcher import ResearchAgent
gpt-researcher-1 | File "/usr/src/app/multi_agents/agents/researcher.py", line 1, in <module>
gpt-researcher-1 | from gpt_researcher import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/__init__.py", line 1, in <module>
gpt-researcher-1 | from .agent import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/agent.py", line 11, in <module>
gpt-researcher-1 | from .skills.researcher import ResearchConductor
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/skills/__init__.py", line 1, in <module>
gpt-researcher-1 | from .context_manager import ContextManager
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/skills/context_manager.py", line 5, in <module>
gpt-researcher-1 | from ..actions.utils import stream_output
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/__init__.py", line 4, in <module>
gpt-researcher-1 | from .web_scraping import scrape_urls
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/web_scraping.py", line 5, in <module>
gpt-researcher-1 | from ..scraper import Scraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/__init__.py", line 6, in <module>
gpt-researcher-1 | from .browser.nodriver_scraper import NoDriverScraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/browser/nodriver_scraper.py", line 9, in <module>
gpt-researcher-1 | import zendriver
gpt-researcher-1 | ModuleNotFoundError: No module named 'zendriver'
gpt-researcher-1 exited with code 0
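For anyone hitting this, a minimal stdlib-only probe (nothing project-specific) confirms whether the module is present in the container; the likely fix is adding `zendriver` to the image's Python requirements:

```python
import importlib.util

# True when the package imported by nodriver_scraper.py is absent from the image
missing = importlib.util.find_spec("zendriver") is None
print("zendriver importable:", not missing)
if missing:
    print("likely fix: pip install zendriver")
```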
gpt-researcher-1 | from gpt_researcher import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/__init__.py", line 1, in <module>
gpt-researcher-1 | from .agent import GPTResearcher
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/agent.py", line 11, in <module>
gpt-researcher-1 | from .skills.researcher import ResearchConductor
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/skills/__init__.py", line 1, in <module>
gpt-researcher-1 | from .context_manager import ContextManager
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/skills/context_manager.py", line 5, in <module>
gpt-researcher-1 | from ..actions.utils import stream_output
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/__init__.py", line 4, in <module>
gpt-researcher-1 | from .web_scraping import scrape_urls
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/web_scraping.py", line 5, in <module>
gpt-researcher-1 | from ..scraper import Scraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/__init__.py", line 6, in <module>
gpt-researcher-1 | from .browser.nodriver_scraper import NoDriverScraper
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/scraper/browser/nodriver_scraper.py", line 9, in <module>
gpt-researcher-1 | import zendriver
gpt-researcher-1 | ModuleNotFoundError: No module named 'zendriver'
gpt-researcher-1 exited with code 1
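The restart loop above is caused by an unconditional `import zendriver` executing at module import time in `nodriver_scraper.py`, so a single missing optional dependency crashes the whole server. A minimal sketch of the usual mitigation, treating the browser backend as optional (the guard and helper below are an assumption for illustration, not the project's actual code):

```python
# Hypothetical guard mirroring the failing import in nodriver_scraper.py.
# If zendriver is not installed, the scraper is disabled instead of the
# import error propagating up and killing the uvicorn process at startup.
try:
    import zendriver  # optional browser-automation backend
except ImportError:
    zendriver = None  # sentinel checked before the scraper is used


def zendriver_available() -> bool:
    """Report whether the optional zendriver backend can be used."""
    return zendriver is not None
```

Callers would then skip or fall back from `NoDriverScraper` when `zendriver_available()` returns `False`, rather than relying on the package always being installed in the image.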
| closed | 2025-02-26T07:12:14Z | 2025-02-27T12:02:26Z | https://github.com/assafelovic/gpt-researcher/issues/1210 | [] | cristianstoica | 2
MaartenGr/BERTopic | nlp | 1,403 | Does BERTopic rely on *both* sentence_embeddings and word_embeddings | When exploring relationships *between* topics (2D visualisations, hierarchy) we need to represent *each topic* as a summary vector (cluster-level embedding).
The BERTopic source code states
> `topic_embeddings_ (np.ndarray) : The embeddings for each topic. It is calculated by taking the weighted average of word embeddings in a topic based on their c-TF-IDF values.`
This seems to imply BERTopic needs *both* a sentence-level embedding model and a word-level embedding model.
Is this the case? Where is this specified in the source code please?
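For intuition, the weighted average the docstring describes can be sketched in a few lines of NumPy (the words and weights below are invented for illustration; this is not BERTopic's actual implementation):

```python
import numpy as np

# Toy word embeddings for one topic's top words (made-up values).
word_embeddings = np.array([[1.0, 0.0],   # "battery"
                            [0.0, 1.0],   # "charger"
                            [1.0, 1.0]])  # "power"
# Made-up c-TF-IDF scores for those words within the topic.
ctfidf = np.array([0.5, 0.3, 0.2])

# Topic embedding = c-TF-IDF-weighted average of the word embeddings.
topic_embedding = (ctfidf[:, None] * word_embeddings).sum(axis=0) / ctfidf.sum()
print(topic_embedding)  # [0.7 0.5]
```

If that matches the source, computing `topic_embeddings_` this way would indeed require word-level embeddings in addition to the sentence-level document embeddings used for clustering.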
| open | 2023-07-12T10:15:59Z | 2023-07-12T19:41:15Z | https://github.com/MaartenGr/BERTopic/issues/1403 | [] | matthewnour | 1 |
babysor/MockingBird | deep-learning | 208 | Complete beginner here: running ceshi produces this problem and I don't know why; asking the experts for help | 
| open | 2021-11-10T09:19:30Z | 2021-11-11T01:29:27Z | https://github.com/babysor/MockingBird/issues/208 | [] | SSSSwater | 3 |
google-research/bert | nlp | 1,224 | Fine-tuning BERT for long-text news classification: loss does not drop, training makes no progress, everything is classified into one category | When fine-tuning BERT on a long-text news classification task, the loss does not drop, training makes no progress, and the model always predicts the same single class. Why?
I would be very grateful for any expert help. | open | 2021-04-28T07:50:35Z | 2021-04-28T07:50:35Z | https://github.com/google-research/bert/issues/1224 | [] | iamsuarez | 0
tensorflow/tensor2tensor | machine-learning | 1,262 | Error with hparams.proximity_bias=True | ### Description
Error in `common_attention.attention_bias_to_padding` when setting `hparams.proximity_bias=True`.

In `transformer_layers.transformer_encoder` (line 152), the operation `padding = common_attention.attention_bias_to_padding` performs `tf.squeeze(x, [1, 2])`, but the attention_bias tensor returned with proximity_bias=True has shape [batch_size, 1, seq_len, seq_len], not the [batch_size, 1, 1, seq_len] produced when proximity_bias=False.

I am not sure how effective proximity_bias is, but since the option exists it should presumably work.
https://github.com/tensorflow/tensor2tensor/blob/d2b6b3a0885dcba995d74fe97f33c2e4b5ce2cf8/tensor2tensor/layers/common_attention.py#L919
I am fixing it by changing the above line to this
`return tf.squeeze(tf.to_float(tf.less(tf.reduce_sum(attention_bias, 2,keepdims=True), -1)), axis=[1, 2])`
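To see why the extra reduction is needed, here is a small NumPy sketch of the shape logic only (batch of 1, seq_len of 4, one padded position; the bias values are invented):

```python
import numpy as np

L = 4
# With proximity_bias=True the bias has shape [batch, 1, seq_len, seq_len].
bias = np.zeros((1, 1, L, L), dtype=np.float32)
bias[0, 0, :, 3] = -1e9  # key position 3 is padding in every query row

# The original squeeze assumed shape [batch, 1, 1, seq_len]; summing over the
# query axis first restores that shape before squeezing:
reduced = bias.sum(axis=2, keepdims=True)                         # -> (1, 1, 1, 4)
padding = (reduced < -1).astype(np.float32).squeeze(axis=(1, 2))  # -> (1, 4)
print(padding)  # [[0. 0. 0. 1.]]
```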
### For bugs: reproduction and error logs
```
#1 : set hparams.proximity_bias=True
...
```
```
# Error logs:
InvalidArgumentError (see above for traceback): Can not squeeze dim[2], expected a dimension of 1, got 832
...
```
| closed | 2018-11-30T07:00:33Z | 2018-11-30T07:17:56Z | https://github.com/tensorflow/tensor2tensor/issues/1262 | [] | Leechung | 0 |
holoviz/panel | plotly | 7,583 | `panel compile <path>` error: Could not resolve "./Calendar" | I wanted to compile: https://github.com/panel-extensions/panel-full-calendar/tree/main
```bash
panel compile src/panel_full_calendar/main.py
```
```bash
Running command: npm install
npm output:
added 7 packages, and audited 8 packages in 2s
1 package is looking for funding
run `npm fund` for details
found 0 vulnerabilities
An error occurred while running esbuild: ✘ [ERROR] Could not resolve "./Calendar"
index.js:1:26:
1 │ import * as Calendar from "./Calendar"
╵ ~~~~~~~~~~~~
``` | closed | 2025-01-04T01:01:31Z | 2025-01-17T17:04:45Z | https://github.com/holoviz/panel/issues/7583 | [] | ahuang11 | 6 |
postmanlabs/httpbin | api | 645 | Migrate from brotlipy to brotlicffi | `brotlipy` has not seen updates for 4 years: https://pypi.org/project/brotlipy/
It looks like the latest work is published under `brotlicffi`: https://github.com/python-hyper/brotlicffi
- https://pypi.org/project/brotlicffi/
https://github.com/postmanlabs/httpbin/blob/f8ec666b4d1b654e4ff6aedd356f510dcac09f83/setup.py#L38 | open | 2021-06-07T12:57:22Z | 2021-12-15T12:24:12Z | https://github.com/postmanlabs/httpbin/issues/645 | [] | johnthagen | 1 |
nteract/papermill | jupyter | 161 | consider black for code formatting | [black](https://black.readthedocs.io/en/stable/index.html) is awesome! We use [prettier](https://prettier.io/) in the monorepo and it is 😍. If folks are interested, it could be a good small project for a new contributor at the sprints 😄
| closed | 2018-07-30T16:26:40Z | 2018-08-14T16:45:55Z | https://github.com/nteract/papermill/issues/161 | [] | alexandercbooth | 6 |
Anjok07/ultimatevocalremovergui | pytorch | 601 | error when trying to seperate stems on demucs | Last Error Received:
Process: Demucs
If this error persists, please contact the developers with the error details.
Raw Error Details:
AssertionError: ""
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 478, in seperate
File "separate.py", line 622, in demix_demucs
File "demucs/apply.py", line 185, in apply_model
File "demucs/apply.py", line 211, in apply_model
File "demucs/apply.py", line 245, in apply_model
File "demucs/utils.py", line 490, in result
File "demucs/apply.py", line 260, in apply_model
File "/Applications/Ultimate Vocal Remover.app/Contents/MacOS/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "demucs/htdemucs.py", line 538, in forward
File "demucs/htdemucs.py", line 435, in _spec
File "demucs/hdemucs.py", line 36, in pad1d
"
Error Time Stamp [2023-06-06 12:18:34]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 1
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-06-06T11:20:55Z | 2023-06-06T11:20:55Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/601 | [] | lastdaysofpompeii | 0 |
hankcs/HanLP | nlp | 672 | How do I enable debug output when calling HanLP from Python? | The current latest version is: 1.5.0
The version I am using is: 1.5.0 portable
When calling from Python, I want to enable debug output to check whether the dictionaries loaded successfully. After reading the relevant jpype documentation, I found that because `Config` is an inner class, it cannot be accessed in the usual way:
https://github.com/hankcs/HanLP/blob/master/src/main/java/com/hankcs/hanlp/HanLP.java#L53
`Config = JClass('com.hankcs.hanlp.HanLP$Config')`
How should I enable the debug output?
PS:
```
Config = JClass('com.hankcs.hanlp.HanLP$Config')
Config.enableDebug()
```
After updating to 1.5.0 the problem is solved, but it may not have been a version issue; more likely I had written one of my calls incorrectly.
| closed | 2017-11-13T07:49:52Z | 2017-11-13T07:52:14Z | https://github.com/hankcs/HanLP/issues/672 | [] | dofine | 0 |
openapi-generators/openapi-python-client | rest-api | 498 | Generate Pydoc on model properties. | **Is your feature request related to a problem? Please describe.**
I like well-documented APIs, and I can't figure out a way to get Pydoc (or even Python comments) attached to properties of a generated model class.
**Describe the solution you'd like**
OpenAPI objects in the `schemas` dict get turned into Python classes, with Pydoc matching the `description` field value.
I would like the `properties` of these objects to have the same behavior: if an entry in `properties` has a `description`, it should be attached to the attribute in the generated class as Pydoc.
**Describe alternatives you've considered**
Alternately, having simple Python comments on the generated class would be fine.
**Additional context**
Example truncated OpenAPI file:
```json
{
"components": {
"schemas": {
"SomeObject": {
"type": "object",
"required": ["my_prop"],
"description": "This is my Object.",
"properties": { "my_prop": { "type": "string", "description": "This is my property." } }
}
}
}
```
Desired output option 1:
```python3
class SomeObject:
"""This is my Object."""
"""This is my property."""
my_prop: str
```
Desired output option 2:
```python3
class SomeObject:
"""This is my Object."""
# This is my property.
my_prop: str
``` | closed | 2021-09-21T01:44:13Z | 2022-01-29T23:13:31Z | https://github.com/openapi-generators/openapi-python-client/issues/498 | [
"✨ enhancement"
] | jkinkead | 3 |
Farama-Foundation/Gymnasium | api | 1,061 | Setting up seed properly in Custom env | ### Question
```python
# Create the training environment
train_env = gym.make("BabyAI-GoToLocal-v0", max_episode_steps=512)  # , render_mode="human"
train_env = RGBImgPartialObsWrapper(train_env)
train_env = CustomEnv(train_env, mission)
train_env = ImgObsWrapper(train_env)
train_env = Monitor(train_env)
train_env = DummyVecEnv([lambda: train_env])
train_env = VecTransposeImage(train_env)

# Create the evaluation environment
eval_env = gym.make("BabyAI-GoToLocal-v0", max_episode_steps=512)  # , render_mode="human"
eval_env = RGBImgPartialObsWrapper(eval_env)
eval_env = CustomEnv(eval_env, mission)
eval_env = ImgObsWrapper(eval_env)
eval_env = Monitor(eval_env)
eval_env = DummyVecEnv([lambda: eval_env])
eval_env = VecTransposeImage(eval_env)

save_path = f"main_code/New_model/PPO/{mission.replace(' ', '_')}_model"
eval_callback = EvalCallback(eval_env, callback_on_new_best=stop_callback, eval_freq=8192,
                             best_model_save_path=save_path, verbose=1, n_eval_episodes=30)

model = PPO("CnnPolicy", train_env, policy_kwargs=policy_kwargs, verbose=1,
            learning_rate=0.0005, tensorboard_log="./logs/PPO2/",
            batch_size=2048,
            n_epochs=100, seed=42)
model.learn(2.5e6, callback=eval_callback)

# Close the environments
train_env.close()
eval_env.close()
```
I'm trying to train a BabyAI env on specific instructions, but when I set a seed it gets stuck on the first frame and doesn't do anything. The issue seems to be with `CustomEnv`, because when I run the code without it the seed works fine. Help from the community would be much appreciated :) | closed | 2024-05-22T15:48:10Z | 2024-09-25T10:03:55Z | https://github.com/Farama-Foundation/Gymnasium/issues/1061 | [
"question"
] | Chainesh | 1 |
LAION-AI/Open-Assistant | machine-learning | 2,805 | Missing Authorize on some API endpoints | Looking at the API, this endpoint does not ask for credentials:

nor does this one:

Is there a reason for that? If not, I can update them.
| open | 2023-04-21T08:13:26Z | 2023-05-11T12:11:47Z | https://github.com/LAION-AI/Open-Assistant/issues/2805 | [
"inference"
] | JonanOribe | 5 |
healthchecks/healthchecks | django | 1,093 | [Feature Request] do not display multiple timezones as an option if they are identical | I had a situation where the check's time zone was "Etc/UTC", and the browser's time zone was also UTC, but I had all 3 displayed:
UTC, Etc/UTC, and the browser's time zone. This was very confusing. In reality, clicking any of these changed nothing, because they all represented the same time zone.

| open | 2024-11-28T22:48:30Z | 2025-02-20T13:15:06Z | https://github.com/healthchecks/healthchecks/issues/1093 | [] | seidnerj | 4 |
huggingface/datasets | deep-learning | 7,419 | Import order crashes script execution | ### Describe the bug
Hello,
I'm trying to convert an HF dataset into a TFRecord so I'm importing `tensorflow` and `datasets` to do so.
Depending in what order I'm importing those librairies, my code hangs forever and is unkillable (CTRL+C doesn't work, I need to kill my shell entirely).
Thank you for your help
🙏
### Steps to reproduce the bug
If you run the following script, this will hang forever :
```python
import tensorflow as tf
import datasets
dataset = datasets.load_dataset("imagenet-1k", split="validation", streaming=True)
print(next(iter(dataset)))
```
however running the following will work fine (I just changed the order of the imports) :
```python
import datasets
import tensorflow as tf
dataset = datasets.load_dataset("imagenet-1k", split="validation", streaming=True)
print(next(iter(dataset)))
```
### Expected behavior
I'm expecting the script to reach the end and my case print the content of the first item in the dataset
```
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=408x500 at 0x70C646A03110>, 'label': 91}
```
### Environment info
```
$ datasets-cli env
- `datasets` version: 3.3.2
- Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35
- Python version: 3.11.7
- `huggingface_hub` version: 0.29.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
```
I'm also using `tensorflow==2.18.0`. | open | 2025-02-24T17:03:43Z | 2025-02-24T17:03:43Z | https://github.com/huggingface/datasets/issues/7419 | [] | DamienMatias | 0 |
jina-ai/serve | deep-learning | 6,127 | `StableLM` example from the homepage doesn't work properly. | I was going through the small example on the [homepage of the docs](https://docs.jina.ai/), and it gives me a weird error:
```console
WARNI… gateway@6246 Getting endpoints failed: failed to connect to all [12/09/23 07:58:35]
addresses. Waiting for another trial
WARNI… gateway@6246 Getting endpoints failed: failed to connect to all [12/09/23 07:59:16]
addresses. Waiting for another trial
WARNI… gateway@6246 Getting endpoints failed: failed to connect to all [12/09/23 08:03:15]
addresses. Waiting for another trial
WARNI… gateway@6166 <jina.orchestrate.pods.Pod object at 0x7e03081072e0> timeout [12/09/23 08:08:30]
after waiting for 600000ms, if your executor takes time to load, you may
increase --timeout-ready
WARNI… gateway@6246 Getting endpoints failed: failed to connect to all [12/09/23 08:11:47]
addresses. Waiting for another trial
INFO gateway@6246 start server bound to 0.0.0.0:12345 [12/09/23 08:11:48]
Traceback (most recent call last):
File "/content/deployment.py", line 6, in <module>
with dep:
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/orchestrator.py", line 14, in __enter__
return self.start()
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/deployments/__init__.py", line 1157, in start
self._wait_until_all_ready()
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/deployments/__init__.py", line 1095, in _wait_until_all_ready
asyncio.get_event_loop().run_until_complete(wait_for_ready_coro)
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/deployments/__init__.py", line 1212, in async_wait_start_success
await asyncio.gather(*coros)
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/pods/__init__.py", line 221, in async_wait_start_success
self._fail_start_timeout(_timeout)
File "/usr/local/lib/python3.10/dist-packages/jina/orchestrate/pods/__init__.py", line 140, in _fail_start_timeout
raise TimeoutError(
TimeoutError: jina.orchestrate.pods.Pod:gateway can not be initialized after 600000.0ms
```
Just for reference, here's the code for the `executor.py` and `deployment.py` scripts:
`executor.py`:
```python
from jina import Executor, requests
from docarray import DocList, BaseDoc
from transformers import pipeline
class Prompt(BaseDoc):
text: str
class Generation(BaseDoc):
prompt: str
text: str
class StableLM(Executor):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.generator = pipeline(
'text-generation', model='stabilityai/stablelm-base-alpha-3b'
)
@requests
def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:
generations = DocList[Generation]()
prompts = docs.text
llm_outputs = self.generator(prompts)
for prompt, output in zip(prompts, llm_outputs):
generations.append(Generation(prompt=prompt, text=output))
return generations
```
`deployment.py`:
```python
from jina import Deployment
from executor import StableLM
dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)
with dep:
dep.block()
```
And I'm running the deployment script simply by doing:
```console
python3 deployment.py
```
Am I missing something or does this example need to be updated? | closed | 2023-12-09T08:21:00Z | 2023-12-11T12:20:43Z | https://github.com/jina-ai/serve/issues/6127 | [] | codetalker7 | 14 |
zappa/Zappa | flask | 1,035 | Updating a deployment fails at update_lambda_configuration | Updating a deployment fails at update_lambda_configuration due to a KeyError
## Context
I've had no problems updating this deployment until today. Python version is 3.8.10.
## Expected Behavior
Zappa should complete the update of the deployment successfully.
## Actual Behavior
Zappa fails with the following output:
```
$ zappa update prod
Calling update for stage prod..
Downloading and installing dependencies..
- pyyaml==5.4.1: Downloading
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 662k/662k [00:00<00:00, 6.57MB/s]
Packaging project as zip.
Uploading tellya2-prod-1631583880.zip (21.9MiB)..
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 23.0M/23.0M [00:04<00:00, 4.86MB/s]
Updating Lambda function code..
Updating Lambda function configuration..
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "/Users/mark/.virtualenvs/tellya2_project/lib/python3.8/site-packages/zappa/cli.py", line 3422, in handle
sys.exit(cli.handle())
File "/Users/mark/.virtualenvs/tellya2_project/lib/python3.8/site-packages/zappa/cli.py", line 588, in handle
self.dispatch_command(self.command, stage)
File "/Users/mark/.virtualenvs/tellya2_project/lib/python3.8/site-packages/zappa/cli.py", line 641, in dispatch_command
self.update(
File "/Users/mark/.virtualenvs/tellya2_project/lib/python3.8/site-packages/zappa/cli.py", line 1165, in update
self.lambda_arn = self.zappa.update_lambda_configuration(
File "/Users/mark/.virtualenvs/tellya2_project/lib/python3.8/site-packages/zappa/core.py", line 1395, in update_lambda_configuration
if lambda_aws_config["PackageType"] != "Image":
KeyError: 'PackageType'
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Zappa/Zappa
And join our Slack channel here: https://zappateam.slack.com
Love!,
~ Team Zappa!
```
## Possible Fix
None that I'm aware of.
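One defensive workaround for the check at `core.py` line 1395 would be `dict.get` instead of indexing (hedged: untested against Zappa itself, and treating `"Zip"` as the default is an assumption):

```python
# Minimal reproduction of the failing check and a tolerant variant.
lambda_aws_config = {"FunctionName": "tellya2-prod"}  # response lacks "PackageType"

# Original (raises KeyError when the key is absent):
#   if lambda_aws_config["PackageType"] != "Image": ...
package_type = lambda_aws_config.get("PackageType", "Zip")  # assumed default
print(package_type != "Image")  # True -> take the non-image code path
```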
## Steps to Reproduce
Unfortunately this is a private code repository so I cannot provide any links.
## Your Environment
* Zappa version used: 0.53.0
* Operating System and Python version: MacOS 11.5.1 - Python 3.8.10
* The output of `pip freeze`:
argcomplete==1.12.3
asgiref==3.3.4
boto3==1.17.97
botocore==1.20.97
certifi==2021.5.30
cfn-flip==1.2.3
chardet==4.0.0
click==8.0.1
Django==3.2.4
django-appconf==1.0.4
django-compressor @ git+https://github.com/django-compressor/django-compressor.git@f533dc6dde9ed90626382d78f3eb37b84d848027
django-formtools==2.3
django-libsass==0.9
django-otp==1.0.6
django-phonenumber-field==5.2.0
django-storages==1.11.1
django-tenants==3.3.1
django-two-factor-auth==1.13.1
durationpy==0.5
future==0.18.2
hjson==3.0.2
idna==2.10
jmespath==0.10.0
kappa==0.6.0
libsass==0.21.0
pbr==5.6.0
pep517==0.10.0
phonenumbers==8.12.25
pip-tools==6.1.0
placebo==0.9.0
psycopg2==2.9.1
psycopg2-binary==2.9.1
PyJWT==1.7.1
python-dateutil==2.8.1
python-slugify==5.0.2
pytz==2021.1
PyYAML==5.4.1
qrcode==6.1
rcssmin==1.0.6
requests==2.25.1
rjsmin==1.1.0
s3transfer==0.4.2
six==1.15.0
sqlparse==0.4.1
stevedore==3.3.0
text-unidecode==1.3
toml==0.10.2
tqdm==4.61.1
troposphere==2.7.0
twilio==6.61.0
urllib3==1.26.5
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.53.0
zappa-django-utils==0.4.1
* Your `zappa_settings.json`:
```
{
"dev": {
"aws_region": "ap-northeast-1",
"django_settings": "config.settings.dev",
"profile_name": "default",
"project_name": "tellya2",
"runtime": "python3.8",
"s3_bucket": "tellya2-dev",
"keep_warm": false,
"vpc_config": {
"SubnetIds": ["subnet-085938da970945e07"],
"SecurityGroupIds": ["sg-0353477a152c17539"]
},
"layers": ["arn:aws:lambda:ap-northeast-1:898466741470:layer:psycopg2-py38:1"],
"delete_local_zip": true,
"exclude": ["static", "psycopg2", "*.pyc", "pip", "pip-tools", "setuptools", "*.dist-info", "*.so"],
"events": [{
"function": "tellya2.lambda_functions.check_messages",
"expression": "rate(1 minute)"
}]
},
"prod": {
"aws_region": "ap-northeast-1",
"django_settings": "config.settings.prod",
"profile_name": "default",
"project_name": "tellya2",
"runtime": "python3.8",
"s3_bucket": "tellya2-prod",
"keep_warm": true,
"vpc_config": {
"SubnetIds": ["subnet-0042d4fa27f9253ae", "subnet-0768c029b7e2d32c3"],
"SecurityGroupIds": ["sg-446b8d00"]
},
"layers": ["arn:aws:lambda:ap-northeast-1:898466741470:layer:psycopg2-py38:1"],
"delete_local_zip": true,
"exclude": ["static", "psycopg2", "pip*", "pip-tools*", "setuptools", "*.pyc", "*.dist-info", "*.so"],
"events": [{
"function": "tellya2.lambda_functions.check_messages",
"expression": "rate(1 minute)"
}],
"certificate_arn": "arn:aws:acm:us-east-1:214849182730:certificate/19237cc2-0571-4045-bc10-face4e581f3d",
"domain": "[redacted]"
}
}
```
| closed | 2021-09-14T01:55:52Z | 2022-03-16T06:43:55Z | https://github.com/zappa/Zappa/issues/1035 | [] | mdunc | 25 |
fastapi/sqlmodel | pydantic | 176 | How to accomplish Read/Write transactions with a one to many relationship | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import List, Optional
from sqlmodel import Field, Relationship, SQLModel
from sqlalchemy.orm import RelationshipProperty

class User(SQLModel):
    __tablename__ = "users"
    id: Optional[str]
    cars: List["Car"] = Relationship(sa_relationship=RelationshipProperty("Car", back_populates="user"))

class Car(SQLModel):
    ...
    user_id: str = Field(default=None, foreign_key="users.id")
    user: User = Relationship(sa_relationship=RelationshipProperty("User", back_populates="cars"))
    is_main_car: bool
```
### Description
I have two tables with a many-to-one relationship, such as the one described above. Any given user can have only a single car where `is_main_car` is true. Additionally, the first car a user gets must be the main car.
I am trying to determine how the transactional semantics work with this relationship within a Session. If I read the `user`, and then use the `user.cars` field to determine if the user has 0 cars or already has a main car, can I rely on that condition still being true when I write my new main `Car` row to the `Cars` table (assuming it is all within a single Session)?
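As a rough illustration of the pattern (stdlib `sqlite3`, not SQLModel; real engines differ, and on PostgreSQL/MySQL you would typically use `SELECT ... FOR UPDATE` or a partial unique index instead), the check-then-insert can be kept atomic by taking the write lock before reading:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE cars (user_id TEXT, is_main_car INTEGER)")

con.execute("BEGIN IMMEDIATE")  # grab the write lock up front
(count,) = con.execute(
    "SELECT COUNT(*) FROM cars WHERE user_id = ? AND is_main_car = 1", ("u1",)
).fetchone()
if count == 0:  # condition cannot be invalidated by another writer before commit
    con.execute("INSERT INTO cars VALUES (?, 1)", ("u1",))
con.execute("COMMIT")

print(con.execute("SELECT COUNT(*) FROM cars").fetchone()[0])  # 1
```

Whether plain read-then-write within a single Session gives you this guarantee depends on the database's isolation level, not on SQLModel itself.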
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.9.7
### Additional Context
_No response_ | open | 2021-12-03T22:09:13Z | 2021-12-03T22:10:06Z | https://github.com/fastapi/sqlmodel/issues/176 | [
"question"
] | br-follow | 0 |
kennethreitz/responder | flask | 578 | [2025] Releasing version 3.0.0 | ## About
We are intending to release Responder 3.0.0.
## Notes
- responder 3.0.0 is largely compatible with responder 2.0.0,
and unlocks using it with Python 3.11 and higher.
- All subsystems have been refactored to be true extensions,
see `responder.ext.{cli,graphql,openapi,schema}`.
## Preview
Pre-release packages are available on PyPI. Feedback is very much welcome.
- **PyPI:** [responder==3.0.0.dev0](https://pypi.org/project/responder/3.0.0.dev0/)
- **Documentation:** https://responder.readthedocs.io/
- **Installation:**
```
uv pip install --upgrade 'responder>=3.0.0.dev0'
```
## Demo
Demonstrate package downloading and invocation works well.
```
uvx --with='responder[cli]>=3.0.0.dev0' responder --version
```
<details>
<summary>variants</summary>
#### uv solo
```shell
uv run --with='responder[cli]>=3.0.0.dev0' -- responder --version
uv run --python=3.8 --with='responder[cli]>=3.0.0.dev0' -- sh -c "python -V; responder --version"
uv run --python=3.13 --with='responder[cli]>=3.0.0.dev0' -- sh -c "python -V; responder --version"
```
#### uv+Docker
```shell
export PYTHON=python3.8
export PYTHON=python3.13
docker run "ghcr.io/astral-sh/uv:${PYTHON}-bookworm-slim" \
uv run --with='responder[cli]>=3.0.0.dev0' -- responder --version
```
</details>
## Downstream
Updates to responder 3.0.0, validated on downstream projects.
- https://github.com/daq-tools/vasuki/pull/28
## Details
<details>
<summary>What's inside</summary>
### Maintenance
Currently, the package can't be installed on current systems, i.e. Python 3.11+.
The next release intends to improve this situation.
- GH-470
- GH-496
- GH-518
- GH-522
- GH-525
### Code wrangling
Some subsystem modules got lost on the `main` branch.
Those patches bring them back into `responder.ext`.
- GH-547
- GH-549
- GH-576
- GH-551
- GH-554
- Feature: Bring back OpenAPI extension
- GH-555
### Documentation
The documentation is on RTD now.
- GH-564
- https://responder.readthedocs.io/
</details>
| open | 2025-01-18T21:43:41Z | 2025-02-07T00:49:59Z | https://github.com/kennethreitz/responder/issues/578 | [] | amotl | 15 |
tensorflow/tensor2tensor | deep-learning | 1,379 | Defining a new Multi-Task Problem | Hello,
I am trying to define a new multi-task problem that predicts the secondary structure of a protein from its amino acid sequence. I am using the transformer base model.
I am facing several problems/questions:
1) After training, when I change the decoding problem id, I get the same result for both problems; the output doesn't change. Is there a bug in the multi-task decoding?

2) How can I force the decoder to always produce the same length as the input?

3) With the current code I find a lot of the input vocabulary in the output vocabulary, which drastically reduces accuracy. How can I force the model not to share vocabulary?

4) The accuracy is really low compared to a simple CNN that I made in PyTorch. Is there something wrong in the code below?

This is my code for the unlabeled data ("like language modelling"):
```
@registry.register_problem
class LanguagemodelUniref50C8k(text_problems.Text2SelfProblem):

    @property
    def approx_vocab_size(self):
        return 2**13  # 8192

    def is_generate_per_split(self):
        return False

    @property
    def dataset_splits(self):
        return [{
            "split": problem.DatasetSplit.TRAIN,
            "shards": 99,
        }, {
            "split": problem.DatasetSplit.EVAL,
            "shards": 1,
        }]

    def generate_samples(self, data_dir, tmp_dir, dataset_split):
        filepath = tmp_dir + '/uniref50_protein_unlabeled.txt'
        for line in tf.gfile.Open(filepath):
            yield {"targets": line}
```
This is my code for the labeled dataset:
```
@registry.register_problem
class TranslateAminoProtinTokensSharedVocab(text_problems.Text2TextProblem):

    @property
    def approx_vocab_size(self):
        return 2**13  # 8192

    @property
    def is_generate_per_split(self):
        return False

    @property
    def dataset_splits(self):
        return [{
            "split": problem.DatasetSplit.TRAIN,
            "shards": 9,
        }, {
            "split": problem.DatasetSplit.EVAL,
            "shards": 1,
        }]

    def generate_samples(self, data_dir, tmp_dir, dataset_split):
        datasetdf = pd.read_csv(tmp_dir + '/complete_train_dataset_seperated.csv')
        for index, row in datasetdf.iterrows():
            yield {
                "inputs": row['input'],
                "targets": row['output'],
            }
```
This is my code for the Multi-task:
```
@registry.register_problem
class MultiUniref50C8kTranslateAminoProtin(multi_problem.MultiProblem):

    def __init__(self, was_reversed=False, was_copy=False):
        super(MultiUniref50C8kTranslateAminoProtin, self).__init__(was_reversed, was_copy)
        self.task_list.append(uniref.LanguagemodelUniref50C8k())
        self.task_list.append(translate_amino_protin.TranslateAminoProtinTokensSharedVocab())

    @property
    def vocab_type(self):
        return text_problems.VocabType.SUBWORD
``` | closed | 2019-01-17T13:57:13Z | 2019-08-13T09:39:44Z | https://github.com/tensorflow/tensor2tensor/issues/1379 | [] | agemagician | 3
matplotlib/mplfinance | matplotlib | 18 | Trendlines | Hi Daniel,
Thanks for the work you put into this. I tried working with it, and I want to do something that I'm not sure is supported: I want to draw trendlines over the chart (I just have a list of values). Is this possible? | closed | 2020-01-25T15:00:44Z | 2020-01-27T21:20:19Z | https://github.com/matplotlib/mplfinance/issues/18 | [
"question"
] | VaseSimion | 3 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,808 | I am getting File "src/pymssql/_pymssql.pyx", line 479, in pymssql._pymssql.Cursor.execute pymssql._pymssql.OperationalError: (3971, b'The server failed to resume the transaction. Desc:4900000003.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n') | ### Describe the use case
File "src/pymssql/_pymssql.pyx", line 479, in pymssql._pymssql.Cursor.execute
pymssql._pymssql.OperationalError: (3971, b'The server failed to resume the transaction. Desc:4900000003.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
### Databases / Backends / Drivers targeted
mssql server 16, SQL Alchemy 1.4, Python 3.8 and pymysql 2.27
### Example Use
Traceback (most recent call last):
File "src/pymssql/_pymssql.pyx", line 459, in pymssql._pymssql.Cursor.execute
File "src/pymssql/_mssql.pyx", line 1087, in pymssql._mssql.MSSQLConnection.execute_query
File "src/pymssql/_mssql.pyx", line 1118, in pymssql._mssql.MSSQLConnection.execute_query
File "src/pymssql/_mssql.pyx", line 1251, in pymssql._mssql.MSSQLConnection.format_and_run_query
File "src/pymssql/_mssql.pyx", line 1789, in pymssql._mssql.check_cancel_and_raise
File "src/pymssql/_mssql.pyx", line 1835, in pymssql._mssql.raise_MSSQLDatabaseException
pymssql._mssql.MSSQLDatabaseException: (3971, b'The server failed to resume the transaction. Desc:4900000003.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3256, in _wrap_pool_connect
return fn()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 310, in connect
return _ConnectionFairy._checkout(self)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 901, in _checkout
result = pool._dialect.do_ping(fairy.dbapi_connection)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 699, in do_ping
cursor.execute(self._dialect_specific_select_one)
File "src/pymssql/_pymssql.pyx", line 479, in pymssql._pymssql.Cursor.execute
pymssql._pymssql.OperationalError: (3971, b'The server failed to resume the transaction. Desc:4900000003.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/op_general/icici_medical_api_V2/portal/routes/file_upload/controller.py", line 150, in post
header = APIHeader.query.filter_by(claim_header_id=header_id).first()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2810, in first
return self.limit(1)._iter().first()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 2894, in _iter
result = self.session.execute(
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1691, in execute
conn = self._connection_for_bind(bind)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1532, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind
conn = bind.connect()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3210, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 96, in __init__
else engine.raw_connection()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3289, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3259, in _wrap_pool_connect
Connection._handle_dbapi_exception_noconnection(
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2106, in _handle_dbapi_exception_noconnection
util.raise_(
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3256, in _wrap_pool_connect
return fn()
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 310, in connect
return _ConnectionFairy._checkout(self)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 901, in _checkout
result = pool._dialect.do_ping(fairy.dbapi_connection)
File "/home/op_general/miniconda/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 699, in do_ping
cursor.execute(self._dialect_specific_select_one)
File "src/pymssql/_pymssql.pyx", line 479, in pymssql._pymssql.Cursor.execute
sqlalchemy.exc.OperationalError: (pymssql._pymssql.OperationalError) (3971, b'The server failed to resume the transaction. Desc:4900000003.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n')
### Additional context
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.pool import QueuePool
from portal import create_app

db = SQLAlchemy()

def init_app(app):
    db_connection_string = app.config["DB_CONNECTION_STRING"]
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    app.config['SQLALCHEMY_BINDS'] = {
        'db': db_connection_string
    }
    app.config['SQLALCHEMY_DATABASE_URI'] = db_connection_string
    app.config['SQLALCHEMY_POOL_RECYCLE'] = 299
    app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
        "poolclass": QueuePool,
        "pool_size": 1000,
        "pool_pre_ping": True,
        "pool_recycle": 300,
        "max_overflow": 1000,
| closed | 2024-08-30T11:18:43Z | 2024-08-30T12:01:09Z | https://github.com/sqlalchemy/sqlalchemy/issues/11808 | [
"SQL Server"
] | manmayaray | 1 |
tflearn/tflearn | data-science | 391 | Image Preprocessing to finetune/predict using pre trained VGG model | The example at https://github.com/tflearn/tflearn/blob/master/examples/images/vgg_network_finetuning.py shows how to fine tune the pre-trained VGG16 model.
However, the pre-trained model might have applied mean subtraction and standard-deviation normalization, where the mean/std values were calculated over the entire training dataset.
If we need to fine-tune the model (or use it as-is for classification), shouldn't we apply the exact same preprocessing to new images?
The fine-tuning example above applies normalization to the input image, but this will not match the statistics computed over the entire pre-training dataset.
| closed | 2016-10-12T13:01:44Z | 2016-10-13T15:56:14Z | https://github.com/tflearn/tflearn/issues/391 | [] | sudhashbahu | 4 |
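To make the point above concrete: the original VGG16 was trained with fixed per-channel mean subtraction (the commonly cited ImageNet means are roughly R=123.68, G=116.779, B=103.939), so fine-tuning or inference should reuse those same constants rather than statistics recomputed on the new data. A minimal sketch; the exact constants should be verified against the checkpoint being fine-tuned:

```python
# Sketch: reuse the training-set channel means of the pre-trained model
# (assumed here to be the commonly cited ImageNet means) instead of
# recomputing statistics on the new images.

VGG_MEAN = (123.68, 116.779, 103.939)  # R, G, B means over ImageNet

def preprocess_pixel(rgb, mean=VGG_MEAN):
    """Subtract the pre-training channel means from one RGB pixel."""
    return tuple(c - m for c, m in zip(rgb, mean))

white = preprocess_pixel((255, 255, 255))
print(white)
```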
tensorly/tensorly | numpy | 190 | Parafac with non-negativity breaks down if orthogonalize = True and normalize_factors = True? | Tensorly version 0.4.5
I am using parafac with non-negativity and orthogonality constraints and noticed that the non-negativity isn't respected when I set normalize_factors = True.
Here's a small reproducible snippet.
```
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

random_tensor = np.random.rand(10, 10, 10)
num_components = 5
(weights, parafac_factors), errors = parafac(tl.tensor(random_tensor, dtype='float64'),
                                             rank=num_components,
                                             normalize_factors=True,
                                             orthogonalise=True,
                                             non_negative=True,
                                             init='random',
                                             verbose=0,
                                             return_errors=True)

print('Check for negative values in each factor matrix -')
print(np.any(parafac_factors[0] < 0))
print(np.any(parafac_factors[1] < 0))
print(np.any(parafac_factors[2] < 0))
```
I am assuming that orthogonalise=True orthogonalizes factors at every iteration. | closed | 2020-08-13T05:56:05Z | 2020-11-19T13:09:34Z | https://github.com/tensorly/tensorly/issues/190 | [] | rutujagurav | 1 |
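A likely explanation for the observation above (my reading, not confirmed by the tensorly maintainers): orthogonalisation and non-negativity are conflicting constraints in general, because orthogonalising non-negative vectors that are not already orthogonal necessarily introduces negative entries. A one-step Gram-Schmidt on plain Python lists shows it:

```python
# Why the two constraints clash: removing from v its projection onto u
# (one Gram-Schmidt step) produces a negative entry even though both
# input vectors are non-negative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_step(u, v):
    """Remove from v its projection onto u."""
    coef = dot(u, v) / dot(u, u)
    return [b - coef * a for a, b in zip(u, v)]

u = [1.0, 1.0]
v = [1.0, 0.0]
v_orth = gram_schmidt_step(u, v)
print(v_orth)  # [0.5, -0.5]
```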
NullArray/AutoSploit | automation | 333 | Unhandled Exception (98afa004a) | Autosploit version: `3.0`
OS information: `Linux-4.18.0-parrot20-amd64-x86_64-with-Parrot-4.4-stable`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/lam/AutoSploit/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/lam/AutoSploit/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
| closed | 2019-01-04T20:05:01Z | 2019-01-14T18:09:05Z | https://github.com/NullArray/AutoSploit/issues/333 | [] | AutosploitReporter | 0 |
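For context on the report above: the root cause visible in the traceback is the line `except Except:` in lib/jsonize.py. `Except` is not a Python built-in, so the except clause itself raises NameError the moment an exception propagates through it; the intended spelling is presumably `except Exception:`. A minimal reproduction of that failure mode:

```python
# Minimal reproduction of the bug class: `Except` is not a built-in name, so
# evaluating `except Except:` raises NameError when an exception propagates
# through the handler. The presumable fix is `except Exception:`.

def buggy():
    try:
        raise ValueError("boom")
    except Except:  # noqa: F821 - intentionally undefined name
        return "handled"

def fixed():
    try:
        raise ValueError("boom")
    except Exception:
        return "handled"

try:
    buggy()
except NameError as e:
    print("buggy handler:", e)  # name 'Except' is not defined

print("fixed handler:", fixed())  # fixed handler: handled
```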
Anjok07/ultimatevocalremovergui | pytorch | 617 | Computer literally reboots itself when using CPU conversion | Does anyone know why that happens? I have 16 GB of RAM and my CPU is an Intel Core i5-8600
Is it because of both CPU and RAM overflow? | open | 2023-06-16T07:15:09Z | 2023-06-16T07:15:09Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/617 | [] | kriterin | 0 |
cvat-ai/cvat | pytorch | 8,365 | "Cannot read properties of undefined (reading 'length')" when returning serverless predictions | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
I have been working from https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio and copying it as closely as possible.
I have a serverless function which at the end has:
```python
context.Response(body=json.dumps(results), headers={},
                 content_type='application/json', status_code=200)
```
Where `results` is defined as:
```python
results = [
    {"confidence": None,
     "label": "blood",
     "points": contour_blood.ravel().tolist(),
     "mask": cvat_mask_blood,
     "type": "mask"},
    {"confidence": None,
     "label": "myocardium",
     "points": contour_myo.ravel().tolist(),
     "mask": cvat_mask_myo,
     "type": "mask"}
]
```
But in CVAT when I use it as an interactor to get masks or points, I get `Interaction error occurred - Cannot read properties of undefined (reading 'length')`
Note I have tried returning
- both points and mask (with type set to `mask` as shown here)
- only points (with type set to `points` and `polygon`)
- only the mask (with type set to `mask`)
In the network tab of the developer tools I see a request to `http://100.76.3.138:8080/api/lambda/functions/lgep2d?org=`, and it has the following response:
```json
[
{
"confidence": null,
"label": "blood",
"points": [
183,
217,
182,
218,
...,
218,
204,
218,
203,
217
],
"mask": [
0,
0,
0,
0,
...,
0,
0,
0,
0,
0,
152,
217,
226,
284
],
"type": "mask"
},
{
"confidence": null,
"label": "myocardium",
"points": [
190,
203,
189,
204,
188,
204,
187,
205,
184,
...,
205,
207,
204,
203,
204,
202,
203
],
"mask": [
0,
0,
0,
0,
0,
0,
0,
1,
1,
0,
...,
0,
0,
0,
0,
147,
203,
242,
298
],
"type": "mask"
}
]
```
### Expected Behavior
Masks and/or points to appear in my labelled image.
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.17.0
Core version: 15.1.1
Canvas version: 2.20.8
UI version: 1.64.5
commit a69e1228ac8ab120ca9f1274348e83ddcb89d4be (HEAD -> develop, origin/develop, origin/HEAD)
Author: Andrey Zhavoronkov <andrey@cvat.ai>
Date: Fri Aug 16 14:06:44 2024 +0300
Update dependencies (#8308)
Updated:
backend python packages
golang image
frontend nginx base image
Client: Docker Engine - Community
Version: 27.1.2
API version: 1.46
Go version: go1.21.13
Git commit: d01f264
Built: Mon Aug 12 11:50:12 2024
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 27.1.2
API version: 1.46 (minimum version 1.24)
Go version: go1.21.13
Git commit: f9522e5
Built: Mon Aug 12 11:50:12 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.7.20
GitCommit: 8fc6bcff51318944179630522a095cc9dbf9f353
runc:
Version: 1.1.13
GitCommit: v1.1.13-0-g58aa920
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
| closed | 2024-08-28T08:56:52Z | 2024-08-28T09:43:07Z | https://github.com/cvat-ai/cvat/issues/8365 | [
"question"
] | jphdotam | 2 |
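One thing worth double-checking in the issue above (an assumption based on CVAT's serverless examples, not a confirmed diagnosis): in those examples the `mask` field is the binary mask cropped to its bounding box, flattened row-major, with the box `[x1, y1, x2, y2]` appended as the last four values, which matches the trailing four numbers in the JSON shown. A pure-Python sketch of that layout:

```python
# Sketch (assumption based on CVAT serverless examples): the "mask" field is
# the binary mask cropped to its bounding box, flattened row-major, with the
# box [x1, y1, x2, y2] appended as the last four values. Verify this layout
# against the CVAT version you run.

def to_cvat_mask(box, full_mask):
    """box = (x1, y1, x2, y2); full_mask = 2-D list of 0/1 over the image."""
    x1, y1, x2, y2 = box
    flat = []
    for row in full_mask[y1:y2 + 1]:
        flat.extend(row[x1:x2 + 1])
    return flat + [x1, y1, x2, y2]

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(to_cvat_mask((1, 1, 2, 2), mask))  # [1, 1, 1, 0, 1, 1, 2, 2]
```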
deeppavlov/DeepPavlov | tensorflow | 1599 | Doesn't work with recent version of pytorch-crf | **DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 1.0.0
**Python version**: 3.9.5
**Operating system** (ubuntu linux, windows, ...): Windows 11
**Issue**: Error when trying a modified example from the readme.
**Content or a name of a configuration file**:
See below
**Command that led to error**:
```
model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True)
```
**Error (including full traceback)**:
```
2022-11-10 18:35:28.686 INFO in 'deeppavlov.download'['download'] at line 138: Skipped http://files.deeppavlov.ai/v1/ner/ner_rus_bert_coll3_torch.tar.gz download because of matching hashes
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package perluniprops to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package perluniprops is already up-to-date!
[nltk_data] Downloading package nonbreaking_prefixes to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package nonbreaking_prefixes is already up-to-date!
2022-11-10 18:35:31.569 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 112: [loading vocabulary from C:\Users\Ellsel\.deeppavlov\models\ner_rus_bert_coll3_torch\tag.dict]
Traceback (most recent call last):
File "c:\Users\Ellsel\Desktop\Automation\conversation.py", line 4, in <module>
model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\commands\infer.py", line 53, in build_model
component = from_params(component_config, mode=mode)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\params.py", line 92, in from_params
obj = get_model(cls_name)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 74, in get_model
return cls_from_str(_REGISTRY[name])
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 42, in cls_from_str
return getattr(importlib.import_module(module_name), cls_name)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 855, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\torch_transformers_sequence_tagger.py", line 28, in <module>
from deeppavlov.models.torch_bert.crf import CRF
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\crf.py", line 4, in <module>
from torchcrf import CRF as CRFbase
ModuleNotFoundError: No module named 'torchcrf'
```
`pip install pytorch-crf==0.4.0` is needed to fix this. | closed | 2022-11-10T18:07:51Z | 2022-11-11T14:49:33Z | https://github.com/deeppavlov/DeepPavlov/issues/1599 | [
"bug"
] | claell | 3 |
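A small, generic pre-flight check related to the issue above (a plain-Python helper, not DeepPavlov API): verifying that an optional dependency such as `torchcrf` (provided by `pip install pytorch-crf==0.4.0`) is importable before `build_model` runs gives a clearer error than the mid-build ModuleNotFoundError:

```python
# Generic helper (illustrative, not part of DeepPavlov): fail early with an
# actionable hint when a required module such as `torchcrf` is missing.
import importlib.util

def require(module: str, hint: str) -> bool:
    if importlib.util.find_spec(module) is None:
        raise ImportError(f"missing module {module!r}; try: {hint}")
    return True

print(require("json", "(stdlib, always available)"))  # True
# require("torchcrf", "pip install pytorch-crf==0.4.0")  # raises if absent
```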
tortoise/tortoise-orm | asyncio | 1115 | source_field breaks __iter__ | **Describe the bug**
When a field specifies `source_field`, `Model.__iter__()` no longer works.
**To Reproduce**
```python
from tortoise import Model
from tortoise.fields import BigIntField

class SampleModel(Model):
    id_ = BigIntField(pk=True, source_field='id')

a = SampleModel(id_=1)
[i for i in a]
```
**Expected behavior**
This iteration should complete without raising an error.
**Additional context**
Python 3.10.4
tortoise-orm==0.19.0
| open | 2022-04-28T15:50:32Z | 2022-04-28T16:04:43Z | https://github.com/tortoise/tortoise-orm/issues/1115 | [] | enneamer | 2 |
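An illustration of what could be going wrong above (a guess at the failure mode, not Tortoise's actual code): if `__iter__` resolves values by the database column name (`source_field`) while the instance stores them under the Python attribute name, every such field breaks iteration:

```python
# Illustrative sketch (not Tortoise's actual implementation): if __iter__
# resolves values by the database column name (`source_field`) while the
# instance stores them under the Python attribute name, iteration breaks.

class BrokenIter:
    fields_db_projection = {"id_": "id"}  # python name -> db column

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __iter__(self):
        # BUG: looks up by the db column name instead of the python name
        for _py_name, db_name in self.fields_db_projection.items():
            yield getattr(self, db_name)

class FixedIter(BrokenIter):
    def __iter__(self):
        for py_name, _db_name in self.fields_db_projection.items():
            yield getattr(self, py_name)

a = BrokenIter(id_=1)
try:
    list(a)
except AttributeError as e:
    print("broken:", e)

print("fixed:", list(FixedIter(id_=1)))  # fixed: [1]
```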
tatsu-lab/stanford_alpaca | deep-learning | 10 | Example of Instruction-Tuning Training | Hello, thank you for open-sourcing this work. We are now interested in generating our own instructions to fine-tune the Llama model based on your documentation and approach. Could you please advise on any resources or references we can use? Also, are these codes available on Hugging Face? | closed | 2023-03-14T06:09:43Z | 2023-03-15T16:35:39Z | https://github.com/tatsu-lab/stanford_alpaca/issues/10 | [] | BowieHsu | 5 |
seleniumbase/SeleniumBase | web-scraping | 3,510 | Redirect to "chrome-extension" URL? | Hi. My seleniumbase scraper has suddenly started redirecting URL requests to a strange URL. It only seems to this for a particular site.
I've got it down to this MRE:
```
import seleniumbase as sb

def main():
    driver = sb.Driver(uc=True, headless=True)
    url = "https://www.atptour.com/scores/results-archive?year=1896&tournamentType=atpgs"
    driver.get(url)
    print(driver.current_url)
    driver.quit()

if __name__ == "__main__":
    main()
```
The `print` statement shows that the driver's current URL becomes:
`chrome-extension://nkeimhogjdpnpccoofpliimaahmaaome/background.html`
Worth mentioning that it sometimes takes a couple of runs for the issue to manifest.
I've tested this with "https://www.seleniumbase.io/" and can't replicate the issue.
Interestingly, I tested out adding a couple more requests to the chain and removing `headless` mode as follows:
```
import seleniumbase as sb

def main():
    driver = sb.Driver(uc=True)
    url = "https://www.atptour.com/scores/results-archive?year=1896&tournamentType=atpgs"
    driver.get(url)
    print(driver.current_url)
    url = "https://www.atptour.com/scores/results-archive?year=1897&tournamentType=atpgs"
    driver.get(url)
    print(driver.current_url)
    url = "https://www.atptour.com/scores/results-archive?year=1898&tournamentType=atpgs"
    driver.get(url)
    print(driver.current_url)
    driver.quit()

if __name__ == "__main__":
    main()
```
Stepping through this in the debugger, each page is loaded as expected; however, the print statements record:
```
chrome-extension://nkeimhogjdpnpccoofpliimaahmaaome/background.html
https://www.atptour.com/scores/results-archive?year=1896&tournamentType=atpgs
https://www.atptour.com/scores/results-archive?year=1897&tournamentType=atpgs
```
So it looks like after the first anomalous response the driver is running one URL behind.
Any ideas on what might be happening? | closed | 2025-02-12T13:07:46Z | 2025-02-12T13:21:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3510 | [
"can't reproduce",
"UC Mode / CDP Mode"
] | gottogethelp | 2 |
lensacom/sparkit-learn | scikit-learn | 29 | Spark 1.4 and python 3 support | Spark 1.4 now supports python 3 https://spark.apache.org/releases/spark-release-1-4-0.html
| closed | 2015-06-12T10:22:31Z | 2015-06-17T15:36:54Z | https://github.com/lensacom/sparkit-learn/issues/29 | [
"enhancement",
"priority"
] | kszucs | 0 |
TracecatHQ/tracecat | automation | 371 | [DOCS] Missing / outdated section on formulas | closed | 2024-08-28T19:17:25Z | 2024-11-06T01:19:59Z | https://github.com/TracecatHQ/tracecat/issues/371 | [
"documentation"
] | topher-lo | 2 | |
abhiTronix/vidgear | dash | 238 | Remove range from publication year in Copyright Notice | <!--
Please note that your issue will be fixed much faster if you spend about
half an hour preparing it, including the exact reproduction steps and a demo.
If you're in a hurry or don't feel confident, it's fine to report bugs with
less details, but this makes it less likely they'll get fixed soon.
If the important info is missing we'll add the 'Needs more information' label
or may choose to close the issue until there is enough information provided.
-->
## Description
<!-- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
VidGear's license contains a publication date range in its copyright notice, which is not ideal for an open-source license. [US copyright notice rules defined by Title 17, Chapter 4 (Visually perceptible copies)](https://www.copyright.gov/title17/92chap4.html) recommend putting only the year of first publication of the compilation or derivative work.
### Acknowledgment
<!-- By posting an issue you acknowledge the following: (Put an `x` in all the boxes that apply(important)) -->
- [x] I have searched the [issues](https://github.com/abhiTronix/vidgear/issues) for my issue and found nothing related or helpful.
- [x] I have read the [Documentation](https://abhitronix.github.io/vidgear).
- [x] I have read the [Issue Guidelines](https://abhitronix.github.io/vidgear/contribution/issue/#submitting-an-issue-guidelines).
### Environment
<!-- Include as many relevant details about the environment you experienced the bug in -->
* VidGear version: <!-- Run command `python -c "import vidgear; print(vidgear.__version__)"` --> v0.2.2
* Branch: <!-- Select between: Master | Testing | Development | PyPi --> Development | closed | 2021-08-11T11:46:09Z | 2021-08-12T02:52:37Z | https://github.com/abhiTronix/vidgear/issues/238 | [
"BUG :bug:",
"SOLVED :checkered_flag:",
"DOCS :scroll:",
"META :thought_balloon:"
] | abhiTronix | 1 |
jpadilla/django-rest-framework-jwt | django | 253 | Authentication Validation | Hello
I have been using this great framework for a few months now. Just recently I started digging a little deeper into the code and noticed that when login credentials are checked, a ValidationError is raised. Is there a reason why this is preferred over AuthenticationFailed (which would give a 401 or 403 instead of a 400)?
Thanks,
-Peter
| open | 2016-08-17T08:48:11Z | 2017-11-05T13:57:09Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/253 | [] | ghost | 6 |
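For readers landing here, the distinction being asked about: in DRF, `serializers.ValidationError` renders as HTTP 400 while `exceptions.AuthenticationFailed` renders as 401. A tiny pure-Python sketch of that mapping (illustrative only, not DRF's actual classes):

```python
# Pure-Python sketch of the semantics in question (not DRF's actual code):
# a ValidationError reads as "the request data was malformed" (HTTP 400),
# while AuthenticationFailed reads as "the credentials were wrong" (HTTP 401).

class APIException(Exception):
    status_code = 500

class ValidationError(APIException):
    status_code = 400

class AuthenticationFailed(APIException):
    status_code = 401

def status_for(exc: APIException) -> int:
    return exc.status_code

print(status_for(ValidationError()), status_for(AuthenticationFailed()))  # 400 401
```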
Gerapy/Gerapy | django | 153 | What's the progress of V2 development? | closed | 2020-06-20T02:52:48Z | 2020-07-06T14:56:14Z | https://github.com/Gerapy/Gerapy/issues/153 | [
"bug"
] | haroldrandom | 1 | |
labmlai/annotated_deep_learning_paper_implementations | deep-learning | 218 | Wrong output of shift right example in the comments? | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/9a42ac2697cddc1bae83968eecb0ffa72cfbd714/labml_nn/transformers/xl/relative_mha.py#L28
This should be `[[1, 2, 3], [0, 4, 5], [6, 0, 7]]` | closed | 2023-10-25T03:40:47Z | 2023-11-07T09:28:52Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/218 | [] | harveyaot | 1 |
dunossauro/fastapi-do-zero | pydantic | 322 | Add logging for errors | The course does not cover logging for errors; it would be nice to add that, in the simplest possible way, even just with the uvicorn handler.
Besides adding more material, it raises the level of best practices.
The logging should be tested using caplog | closed | 2025-02-26T19:19:09Z | 2025-03-11T20:07:47Z | https://github.com/dunossauro/fastapi-do-zero/issues/322 | [] | dunossauro | 2 |
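A minimal sketch of the simplest approach suggested above (names such as `handle_error` are illustrative): route error logs through the `uvicorn.error` logger that uvicorn already configures, and capture records for verification. In a real project the function would be a FastAPI exception handler and the capture would be pytest's `caplog` fixture:

```python
# Minimal sketch (names are illustrative): log unexpected errors through the
# logger uvicorn already configures, so no extra handler setup is needed.
# In a FastAPI app this would live inside an exception handler and be
# asserted with pytest's `caplog` fixture.
import logging

logger = logging.getLogger("uvicorn.error")

def handle_error(exc: Exception) -> dict:
    logger.exception("unhandled error: %s", exc)
    return {"detail": "Internal server error"}

# standalone demo: capture the log record without pytest
records = []

class _Capture(logging.Handler):
    def emit(self, record):
        records.append(record)

logger.addHandler(_Capture())
logger.setLevel(logging.DEBUG)

body = handle_error(ValueError("boom"))
print(body, records[0].levelname)
```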
matterport/Mask_RCNN | tensorflow | 2,146 | AP is zero for every IoU | I am trying to run the nucleus detection model with the original data used in the paper.
I tried to use both resnet50 and coco weights, but they both show AP 0 for every IoU, and the detection is not working.
What can be the problem?


| open | 2020-04-25T01:13:49Z | 2022-12-19T16:29:01Z | https://github.com/matterport/Mask_RCNN/issues/2146 | [] | shanisa11 | 2 |
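One quick sanity check when AP is exactly zero at every threshold (a debugging suggestion, not a confirmed diagnosis): verify that predictions overlap the ground truth at all; class-ID mismatches, swapped coordinate order, or scale errors all drive IoU to zero. A minimal IoU helper for axis-aligned boxes in Mask R-CNN's `(y1, x1, y2, x2)` order:

```python
# Sanity-check helper: IoU of two axis-aligned boxes given as
# (y1, x1, y2, x2), the coordinate order Mask R-CNN uses. If every
# predicted/ground-truth pair gives ~0 here, AP will be 0 at every threshold.

def iou(a, b):
    y1 = max(a[0], b[0]); x1 = max(a[1], b[1])
    y2 = min(a[2], b[2]); x2 = min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0
```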
ni1o1/transbigdata | data-visualization | 69 | How should bounds be chosen? | For a given dataset, how should the bounds value be chosen? | closed | 2023-05-08T11:41:07Z | 2023-05-15T14:28:41Z | https://github.com/ni1o1/transbigdata/issues/69 | [] | ybxgood | 4 |
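A sketch answering the question above under one assumption, namely that transbigdata expects `bounds` as `[lon1, lat1, lon2, lat2]` (lower-left then upper-right corner; check the docs of the version in use): a common choice is simply the data's own extent plus a small buffer:

```python
# Sketch: derive `bounds` from the data itself, i.e. the min/max longitude
# and latitude of the points, padded with a small buffer. transbigdata is
# assumed to expect bounds as [lon1, lat1, lon2, lat2]; double-check the
# order against the version you use.

def bounds_from_points(points, buffer=0.01):
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    return [min(lons) - buffer, min(lats) - buffer,
            max(lons) + buffer, max(lats) + buffer]

pts = [(113.75, 22.4), (114.05, 22.6), (113.9, 22.55)]
print(bounds_from_points(pts, buffer=0.0))  # [113.75, 22.4, 114.05, 22.6]
```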
aminalaee/sqladmin | fastapi | 849 | SessionMiddleware for auth | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Hi, I was working on a task and ran into an issue where my application initializes two sessions.
AuthenticationBackend class creates its own session in the init method:
```python
class AuthenticationBackend:
    """Base class for implementing the Authentication into SQLAdmin.

    You need to inherit this class and override the methods:
    `login`, `logout` and `authenticate`.
    """

    def __init__(self, secret_key: str) -> None:
        from starlette.middleware.sessions import SessionMiddleware

        self.middlewares = [
            Middleware(SessionMiddleware, secret_key=secret_key),
        ]
```
But if I also need a session in my application, I initialize it when starting the app:
```python
fastapi_app = FastAPI()
fastapi_app.add_middleware(SessionMiddleware, secret_key="some")
```
and then I can't use the session object from the AuthBackend.
### Describe the solution you would like.
I think it would be more transparent to pass this middleware to the AuthenticationBackend, for example like this:
```python
# or middlewares: list[Middleware]
def __init__(self, session_middleware: Middleware) -> None:
    self.middlewares = [
        session_middleware,
    ]
```
and initialize this middleware on application startup:
```python
session = Middleware(SessionMiddleware, secret_key="some-key")
fastapi_app = FastAPI(middleware=[session,])
admin = Admin(
    authentication_backend=AdminAuthBackend(session_middleware=session),
)
```
I hope this helps someone spend less time looking for the problem than it took me to find it 🥲 | open | 2024-10-29T14:31:54Z | 2024-11-11T08:27:05Z | https://github.com/aminalaee/sqladmin/issues/849 | [] | xodiumx | 3 |
plotly/dash | data-science | 3,106 | Search param removes values after ampersand, introduced in 2.18.2 | I pass to one of my pages a search string, like:
?param1=something&param2=something2
Accessing it using:
def layout(**kwargs):
In 2.18.1, this works for values that included ampersand. For example:
?param1=something&bla&param2=something2
would result in
kwargs[param1]='something&bla'
With 2.18.2 I get just:
kwargs[param1]='something'
with anything after the ampersand removed.
I would guess this is related to #2991.
To be clear, I specifically downgraded dash to 2.18.1 and the issue went away. | open | 2024-12-12T11:43:21Z | 2025-01-17T14:05:16Z | https://github.com/plotly/dash/issues/3106 | [
"regression",
"bug",
"P2"
] | ezwc | 6 |
tensorpack/tensorpack | tensorflow | 1503 | Export with tensorpack error when using tensorflow serving |
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
```
use tensorflow serving and get
2021-01-14 12:03:30.625847: E tensorflow_serving/util/retrier.cc:37] Loading servable: {name: prediction_pipeline version: 1} failed: Not found: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: `saved_model_cli`
and code is here
from tensorpack import PredictConfig, SmartInit
from audio_cls_tensorpack.tensorpack_vggish import VggModelInferOnly
from tensorpack.tfutils.export import ModelExporter
import tensorflow as tf
if __name__ == '__main__':
    voice_second = 10
    num_categories = 2
    model_path = './model.ckpt-23745'
    pred_config = PredictConfig(
        session_init=SmartInit(model_path),
        model=VggModelInferOnly(),
        input_names=['inputs'],
        output_names=['predictions']
    )
    ModelExporter(pred_config).export_serving('./served_model/1')

tensorflow serving config bash

model_name="check"
docker kill $model_name
docker rm $model_name
docker run \
    -t --rm \
    --name=$model_name \
    -p 8800:8800 -p 8801:8801 \
    -e CUDA_VISIBLE_DEVICES=4 \
    -v "/pb_models/served_model:/models/${model_name}" \
    -e MODEL_NAME=$model_name \
    tensorflow/serving:1.14.0-gpu \
    --port=8800 --rest_api_port=8801
```
### 4. Your environment:
I installed tensorpack using `pip install tensorpack` (on Aliyun).
```
(py3) ➜ ~ pip list|grep tensor
tensorboard 1.14.0
tensorflow-estimator 1.14.0
tensorflow-gpu 1.14.0
tensorpack 0.10.1
```
| closed | 2021-01-14T12:06:04Z | 2021-02-17T23:53:08Z | https://github.com/tensorpack/tensorpack/issues/1503 | [
"unrelated"
] | xealml | 1 |
facebookresearch/fairseq | pytorch | 4,702 | Why parameters are still updated even if I set their requires_grads equal to "False" ? Fairseq transformer | I implemented a function in fairseq_cli/train.py, to freeze the parameters,
```
def freeze_param_grad_zero(model):
    for name, param in model.named_parameters():
        if "fc1" in name or "fc2" in name:
            print("========= start freezing =========")
            param.requires_grad = False
    return model
```
and I found the training logs printed:
`num. model params: 33830912 (num. trained: 14140928)`
But I found that the parameters **are still updated** after training is done.
| open | 2022-09-07T08:42:00Z | 2022-09-07T08:42:00Z | https://github.com/facebookresearch/fairseq/issues/4702 | [
"bug",
"needs triage"
] | robotsp | 0 |
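A plausible explanation for the fairseq question above (an assumption, not a confirmed fairseq behaviour): `requires_grad = False` only stops new gradients; an optimizer that already holds state for the parameter, such as momentum buffers, or weight decay applied to the parameter list it captured at construction, can keep changing it. A pure-Python simulation of SGD-with-momentum moving a weight whose gradient is zero:

```python
# Pure-Python illustration (not fairseq/PyTorch code): momentum keeps moving
# a parameter after its gradient drops to zero, because the velocity buffer
# built up earlier is still non-zero. Freezing via requires_grad does not
# clear optimizer state.

def sgd_momentum_step(w, v, grad, lr=0.5, mu=0.5):
    v = mu * v + grad
    return w - lr * v, v

w, v = 1.0, 0.0
w, v = sgd_momentum_step(w, v, grad=1.0)  # normal step: w=0.5, v=1.0
w, v = sgd_momentum_step(w, v, grad=0.0)  # gradient now "frozen" at zero
print(w)  # 0.25 - the weight still moved
```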
simple-login/app | flask | 2,398 | Notification emails are not encrypted using PGP ("on behalf of" copies) | ## Prerequisites
- [x] I have searched open and closed issues to make sure that the bug has not yet been reported.
## Bug report
**Describe the bug**
1. Set up on alias, say a@example.com
2. Set the alias to forward to more than one mailbox, at least one of which has PGP setup, say encrypted@example.com
3. The alias receives an email, which gets forwarded to all mailboxes
4. Reply from an mailbox other than encrypted@example.com, say m@example.com
5. encrypted@example.com receives a copy of the reply with added text "Email sent on behalf of alias a@example.com using mailbox m@example.com".
The copy received in the last step is not encrypted at all. The added text, reply, and any quoted original text are all in plain text. The title is not replaced by the generic title either.
**Expected behavior**
The copy sent to encrypted@example.com should be encrypted by PGP since that mailbox has enabled PGP, just like how the forwarded email in Step 3 is encrypted before sent to encrypted@example.com.
**Screenshots**
I don't have a good screenshot for this, but emails like this are easy to spot, since they'll be the only plain text emails in an inbox full of encrypted ones. They have a header like this, in case someone needs to search:
```
**** Don't forget to remove this section if you reply to this email ****
Email sent on behalf of alias {alias.email} using mailbox {mailbox.email}
```
Searching for the text above reveals this code location: https://github.com/simple-login/app/blob/7e77afa4fc47c8c727a22fb2b84d6cdf7fb877c4/email_handler.py#L1302-L1305
**Additional context**
Since this feature involves PGP it requires a premium plan to test and repro this bug. I believe it's an important issue since it affects paying customers.
On docs like https://simplelogin.io/docs/mailbox/pgp-encryption/ it says "In order to prevent Gmail, Hotmail, Yahoo from reading your emails, [...]" but this issue means that they'll see the email anyway as soon as there's a reply. It kinda defeats the purpose, since the reply will usually contain the whole thread in quoted text.
In a team setting, if one person sets up PGP for their own mailbox, and another person replies on behalf of the alias, the first person gets the plaintext notification email too. Assuming the first person is concerned with security or privacy (since they set up PGP), they'd have no way to verify whether the second person actually sent that email or it is spoofed. The first person has no control over this behavior in SimpleLogin settings. | open | 2025-02-26T01:45:36Z | 2025-02-26T01:45:36Z | https://github.com/simple-login/app/issues/2398 | [] | utf8please | 0 |
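A fix would presumably mirror what the forward path already does: check the destination mailbox's PGP settings before sending the "on behalf of" copy. A hypothetical sketch of the guard (the field names are assumptions for illustration, not SimpleLogin's actual model):

```python
def should_pgp_encrypt(mailbox):
    """Hypothetical guard: encrypt any copy delivered to a mailbox that has a
    PGP key configured and has not explicitly disabled PGP."""
    has_key = bool(getattr(mailbox, "pgp_finger_print", None))
    return has_key and not getattr(mailbox, "disable_pgp", False)

# the notification path would then, roughly, encrypt the message before
# delivery whenever should_pgp_encrypt(mailbox) is true, just like Step 3 does.
```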
ivy-llc/ivy | numpy | 28,242 | Fix Ivy Failing Test: paddle - elementwise.equal | ToDo List:https://github.com/unifyai/ivy/issues/27501 | closed | 2024-02-10T18:45:45Z | 2024-02-25T10:50:47Z | https://github.com/ivy-llc/ivy/issues/28242 | [
"Sub Task"
] | marvlyngkhoi | 0 |
netbox-community/netbox | django | 18,535 | Allow NetBox to start cleanly if incompatible plugins are present | ### NetBox version
v4.2.2
### Feature type
Data model extension
### Proposed functionality
Currently we raise an `ImproperlyConfigured` exception and refuse to start NetBox at all if any plugin has `min_version` or `max_version` specified such that the installed version of NetBox is outside the compatible range.
The proposal is to allow NetBox to recognize incompatible plugins at startup and skip loading them (while emitting an appropriate warning), rather than raising an exception.
This should probably be done with a custom exception type that inherits from `ImproperlyConfigured` so that these exceptions can be caught separately.
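A minimal sketch of that shape (names are illustrative, and the version comparison below is naive string comparison for brevity; a real implementation would parse versions properly):

```python
class IncompatiblePluginError(Exception):
    """In NetBox this would inherit from django.core.exceptions.ImproperlyConfigured
    so existing handlers can still catch it."""


def check_compatibility(plugin, netbox_version):
    min_v, max_v = plugin.get("min_version"), plugin.get("max_version")
    if min_v and netbox_version < min_v:
        raise IncompatiblePluginError(f"{plugin['name']} requires NetBox >= {min_v}")
    if max_v and netbox_version > max_v:
        raise IncompatiblePluginError(f"{plugin['name']} supports NetBox <= {max_v}")


def load_plugins(plugins, netbox_version, warn=print):
    loaded = []
    for plugin in plugins:
        try:
            check_compatibility(plugin, netbox_version)
        except IncompatiblePluginError as exc:
            warn(f"Skipping plugin: {exc}")  # warn and continue instead of refusing to start
            continue
        loaded.append(plugin["name"])
    return loaded
```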
### Use case
It is not always easy to upgrade plugins at the same time as a NetBox installation, or a plugin might not yet be updated to the current NetBox version; this often leads to support issues where plugins must be painstakingly removed from the local configuration during an upgrade. This change would make it possible for plugins' published compatibility ranges to be enforced through the invalid plugins simply being disabled rather than interfering with the overall application's startup.
Note that plugins will need to be more diligent about defining `max_version` and pinning it to the most recent NetBox release against which it has been tested.
### Database changes
N/A
### External dependencies
N/A | closed | 2025-01-29T23:20:00Z | 2025-03-10T14:52:09Z | https://github.com/netbox-community/netbox/issues/18535 | [
"status: accepted",
"type: feature",
"complexity: medium"
] | bctiemann | 1 |
sktime/pytorch-forecasting | pandas | 1,207 | Pytorch-Forecasting imports deprecated property from transient dependency on numpy | - PyTorch-Forecasting version: 0.10.3
- PyTorch version: 1.12.1
- Python version: 3.8
- Operating System: Ubuntu 20.04
### Expected behavior
I created a simple `TimeSeriesDataSet` without specifying an explicit `target_normalizer`. I expected it to simply create a default normalizer deduced from the other arguments, as explained in the documentation.
### Actual behavior
An exception was raised of the type `AttributeError` by the `numpy` package. The cause is that aliases like `numpy.float` and `numpy.int` have been deprecated as of numpy `1.20` which has been out for almost two years. The deprecation is explained [here](https://numpy.org/doc/stable/release/1.20.0-notes.html#deprecations). This dependency on `numpy<=1.19` is not specified by `pytorch-forecasting` as described in #1130 .
### Code to reproduce the problem
Create an environment with the above versions and install numpy >= 1.20. Then run the first few cells of the tutorial in the documentation for TFT: [https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html)
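For anyone hitting this before the dependency pin is fixed: `np.float` was only an alias for the builtin `float`, so the failing call in `encoders.py` is equivalent to `np.finfo(float).eps` on any numpy version. As a sanity check, the same float64 epsilon can even be derived without numpy:

```python
def machine_eps():
    """Smallest float64 eps with 1.0 + eps != 1.0, i.e. what np.finfo(float).eps
    (and the deprecated np.finfo(np.float).eps) returns."""
    eps = 1.0
    while 1.0 + eps / 2 != 1.0:
        eps /= 2
    return eps
```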
<details><summary>STACKTRACE</summary>
<p>
#### The stacktrace from a simple `TimeSeriesDataSet` creation.
```python
---------------------------------------------------------------------------
NotFittedError Traceback (most recent call last)
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/pytorch_forecasting/data/timeseries.py:753, in TimeSeriesDataSet._preprocess_data(self, data)
752 try:
--> 753 check_is_fitted(self.target_normalizer)
754 except NotFittedError:
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/sklearn/utils/validation.py:1380, in check_is_fitted(estimator, attributes, msg, all_or_any)
1379 if not fitted:
-> 1380 raise NotFittedError(msg % {"name": type(estimator).__name__})
NotFittedError: This GroupNormalizer instance is not fitted yet. Call 'fit' with appropriate arguments before using this estimator.
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 training = TimeSeriesDataSet(
2 df_h[df_h["local_hour_start"].dt.year == 2021],
3 group_ids=["meter_id"],
4 time_idx="local_hour_idx",
5 target="energy_kwh",
6 target_normalizer=GroupNormalizer(groups=["meter_id"]),
7 max_encoder_length=24 * 7,
8 min_prediction_length=3, # One hour plus 2 buffer hours
9 max_prediction_length=7, # Five hours plus 2 buffer hours
10 time_varying_unknown_categoricals=[],
11 time_varying_unknown_reals=["energy_kwh"],
12 time_varying_known_categoricals=["is_event_hour"],
13 time_varying_known_reals=[],
14 )
16 # validation = TimeSeriesDataSet.from_dataset(training, df_h, predict=True, stop_randomization=True)
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/pytorch_forecasting/data/timeseries.py:476, in TimeSeriesDataSet.__init__(self, data, time_idx, target, group_ids, weight, max_encoder_length, min_encoder_length, min_prediction_idx, min_prediction_length, max_prediction_length, static_categoricals, static_reals, time_varying_known_categoricals, time_varying_known_reals, time_varying_unknown_categoricals, time_varying_unknown_reals, variable_groups, constant_fill_strategy, allow_missing_timesteps, lags, add_relative_time_idx, add_target_scales, add_encoder_length, target_normalizer, categorical_encoders, scalers, randomize_length, predict_mode)
473 data = data.sort_values(self.group_ids + [self.time_idx])
475 # preprocess data
--> 476 data = self._preprocess_data(data)
477 for target in self.target_names:
478 assert target not in self.scalers, "Target normalizer is separate and not in scalers."
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/pytorch_forecasting/data/timeseries.py:758, in TimeSeriesDataSet._preprocess_data(self, data)
756 self.target_normalizer.fit(data[self.target])
757 elif isinstance(self.target_normalizer, (GroupNormalizer, MultiNormalizer)):
--> 758 self.target_normalizer.fit(data[self.target], data)
759 else:
760 self.target_normalizer.fit(data[self.target])
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/pytorch_forecasting/data/encoders.py:771, in GroupNormalizer.fit(self, y, X)
760 """
761 Determine scales for each group
762
(...)
768 self
769 """
770 y = self.preprocess(y)
--> 771 eps = np.finfo(np.float).eps
772 if len(self.groups) == 0:
773 assert not self.scale_by_group, "No groups are defined, i.e. `scale_by_group=[]`"
File /opt/conda/envs/leap-dsr-rd/lib/python3.8/site-packages/numpy/__init__.py:284, in __getattr__(attr)
281 from .testing import Tester
282 return Tester
--> 284 raise AttributeError("module {!r} has no attribute "
285 "{!r}".format(__name__, attr))
AttributeError: module 'numpy' has no attribute 'float'
```
</p>
</details> | open | 2022-12-20T09:27:25Z | 2022-12-22T22:11:56Z | https://github.com/sktime/pytorch-forecasting/issues/1207 | [] | JeroenPeterBos | 1 |
python-gino/gino | sqlalchemy | 599 | Primary Key Columns with differing database names | * GINO version: 0.8.5
* Python version: 3.8
### Description
The current logic assumes that every primary key is named exactly the same way as the corresponding database column.
This seems to be because of the logic on the lines https://github.com/fantix/gino/blob/3109577271e59ab9cde169b5884403d8f41caa8b/gino/crud.py#L573-L574
which use the database column name `c.name` as an attribute.
This causes lookup failures for models such as
```
class ModelWithCustomColumnNames(db.Model):
    __tablename__ = '...'

    id = db.Column('other', db.Integer(), primary_key=True)
    field = db.Column(db.Text())
```
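In SQLAlchemy's declarative layer the Python attribute name is exposed as `column.key` (here `'id'`) while `column.name` is the database-side name (`'other'`), so the lookup presumably wants `c.key` rather than `c.name`. A tiny stand-in illustrating the distinction (the `Column` class below is a mock, not SQLAlchemy's):

```python
class Column:
    """Mock of the two relevant SQLAlchemy Column attributes."""
    def __init__(self, name, key=None):
        self.name = name        # database column name, e.g. 'other'
        self.key = key or name  # Python attribute name, e.g. 'id'


def primary_key_values(obj, pk_columns):
    # using c.key instead of c.name keeps the lookup working when they differ
    return {c.key: getattr(obj, c.key) for c in pk_columns}
```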
| closed | 2019-11-20T13:50:04Z | 2019-12-26T23:52:15Z | https://github.com/python-gino/gino/issues/599 | [
"feature request"
] | tr11 | 0 |
apify/crawlee-python | web-scraping | 442 | item_count double incremented when reloading dataset | When reusing a dataset with metadata, `item_count` is incremented after being loaded from the metadata file. This leads to non-continuous file increments and breaks multiple Dataset functions (export to file, etc.).
Issue: code paths with/without metadata overlap in `create_dataset_from_directory` | closed | 2024-08-19T14:09:24Z | 2024-08-30T12:24:03Z | https://github.com/apify/crawlee-python/issues/442 | [
"bug",
"t-tooling"
] | cadlagtrader | 1 |
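To illustrate the `item_count` pattern described in the crawlee report above (the class below is illustrative, not Crawlee's actual code):

```python
class DatasetCounter:
    """Illustrative only: the counter is restored as-is on reload and bumped
    exactly once per pushed item; a reload path that increments it again
    would make the next pushed file skip a number."""

    def __init__(self, restored_count=0):
        self.item_count = restored_count  # the buggy path would store restored_count + 1

    def next_filename(self):
        self.item_count += 1
        return f"{self.item_count:09d}.json"
```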
SALib/SALib | numpy | 278 | Modify sample matrix before Sobol Analysis / Modificar matriz de muestra antes del análisis de Sobol | Hi everyone!
I'm doing Sobol analysis using SALib, getting the sample matrix with the saltelli.sample function. The equation I'm analyzing includes an inverse error function. Before the Sobol analysis it is necessary to exclude some of the samples generated by the Saltelli sampling method so that the function domain makes sense. I have modified the sample matrix by removing the samples that don't comply with the domain requirements, but I get weird Sobol indices.
Since you can't use existing data to calculate Sobol indices, as was asked in issue #211 (like exporting the modified sample), is there any way to modify the sample matrix before the Sobol analysis (sobol.analyze)?
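For context: `sobol.analyze` assumes `Y` has exactly `N*(2D+2)` rows (or `N*(D+2)` with `calc_second_order=False`) laid out in Saltelli's block structure, so deleting rows scrambles the index estimates, which would explain the weird values. Rather than removing samples, one option is to clamp each input into the open domain of the inverse error function inside the model itself, keeping the matrix shape intact. A minimal sketch:

```python
def clamp_into_domain(x, lo=-1.0, hi=1.0, eps=1e-9):
    """Map a sampled value into the open interval (lo, hi) so that e.g.
    erfinv(x) stays defined, without dropping the row from the sample."""
    return min(max(x, lo + eps), hi - eps)

# hypothetical use inside the model evaluated over the Saltelli sample:
# y = erfinv(clamp_into_domain(x))
```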
Thanks!
¡Gracias! | closed | 2019-12-17T01:38:31Z | 2019-12-17T04:26:20Z | https://github.com/SALib/SALib/issues/278 | [] | rodrigojara | 1 |
coqui-ai/TTS | python | 3,573 | [Feature request] Appropriate intonation using xtts_v2 und voice cloning | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
Appropriate intonation using xtts_v2 und voice cloning
**Solution**
There is a certain structure to intonation that gives a natural flow, the same with using pauses. So the sentences spoken should also be analyzed for what a speaker intonates and when he uses pauses to adapt to new contexts semantically. | closed | 2024-02-11T14:31:56Z | 2025-01-03T09:48:04Z | https://github.com/coqui-ai/TTS/issues/3573 | [
"wontfix",
"feature request"
] | Bardo-Konrad | 1 |
modelscope/modelscope | nlp | 933 | There is a problem with the timeline | Versions
```
funasr 1.1.4
modelscope 1.16.1
```
The code I ran (your official example code):
```
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

if __name__ == '__main__':
    audio_in = 'https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_speaker_demo.wav'
    output_dir = "./results"
    inference_pipeline = pipeline(
        task=Tasks.auto_speech_recognition,
        model='iic/speech_paraformer-large-vad-punc-spk_asr_nat-zh-cn',
        model_revision='v2.0.4',
        vad_model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', vad_model_revision="v2.0.4",
        punc_model='iic/punc_ct-transformer_cn-en-common-vocab471067-large', punc_model_revision="v2.0.4",
        output_dir=output_dir,
    )
    rec_result = inference_pipeline(audio_in, batch_size_s=300, batch_size_token_threshold_s=40)
    print(rec_result)
```
Run log (one of them):
`[{'key': 'asr_speaker_demo', 'text': '非常高兴哈能够和几位的话呢一起来讨论互联网企业如何决胜全球化新高地这个话题。然后第二块其实是游戏平台。所谓游戏平台,它主要是呃简单来说就是一个商店加社区的这样一个模式。而这么多年我们随着整个业务的拓张呢会发现跟阿里云有非常紧密的联系。因为刚开始伟光在介绍的时候也讲阿里云也是阿里巴巴的云。所以这个过程中一会儿也可以稍微展开。跟大家讲一下我们跟云是怎么一路走来的。其实的确的话呢,就对我们互联网公司来说,如果不能够问当地的人口的话,我想我们可能这个整个的就失去了后边所有的这个动力。不知道你们各位怎么看,就是我们最大的这个问题是不是效率优先?Yes, oh no.然后如果是讲一个最关键的,你们是怎么来克服这些挑战的啊?因因因为其我们最近一直在做海外业务,嗯,就是所以说这呃我们碰到了些问题,可以一起分享出来给大家,其实一起探讨一下。嗯嗯,其实海外外就就我我们是这个强观的说是呃,无论你准备工作做的有多充分,嗯,无论你有就是呃学习能力有多强。嗯,你一个中国企业负责人其实在出海的时候,呃,他整体还是一个强试错的过程。嗯,后来退到德国或者拓大,新加坡、印尼、越南等等这些地方,那每一个地方走过去。都面临的一个问题是建站的效率怎么样能够快速的把这个站站能建起来。一方面我们当初刚好从一四年刚好开始要出去的时候呢,去国内就是三个北上广深。那当在海外呢要同时开服北美、美东美西,对吧?欧洲日本。那我还记得那个时候,那我们在海外如何去建立这种IDC的勘探,建设基础设施建设、云服务的部署,那都是一个全新的挑战。', 'timestamp': [[50, 130], [130, 250], [250, 410], [410, 650], [650, 890], [1190, 1430], [1430, 1670], [1870, 2110], [2230, 2430], [2430, 2670], [2690, 2850], [2850, 3090], [3150, 3390], [3810, 3990], [3990, 4230], [4230, 4450], [4450, 4650], [4650, 4890], [5450, 5670], [5670, 5790], [5790, 6030], [6050, 6270], [6270, 6490], [6490, 6690], [6690, 6890], [6890, 7130], [7170, 7410], [7790, 8029], [8070, 8310], [8310, 8550], [8570, 8790], [8790, 8970], [8970, 9170], [9170, 9270], [9270, 9390], [9390, 9570], [9570, 9810], [10290, 10410], [10410, 10530], [10530, 10650], [10650, 10830], [10830, 11070], [11150, 11250], [11250, 11370], [11370, 11530], [11530, 11670], [11670, 11810], [11810, 11910], [11910, 12150], [12790, 13030], [13050, 13290], [13290, 13410], [13410, 13550], [13550, 13650], [13650, 13890], [14010, 14210], [14210, 14370], [14370, 14490], [14490, 14730], [15330, 15570], [15790, 15930], [15930, 16110], [16110, 16290], [16290, 16470], [16470, 16630], [16630, 16830], [16830, 16930], [16930, 17150], [17150, 17290], [17290, 17530], [17530, 17690], [17690, 17890], [17890, 18010], [18010, 18190], [18190, 18290], [18290, 18370], [18370, 18470], [18470, 18550], [18550, 18670], [18670, 18910], [19370, 19590], [19590, 19690], [19690, 19830], [19830, 19990], [19990, 20230], [20250, 20410], 
[20410, 20550], [20550, 20710], [20710, 20850], [20850, 21010], [21010, 21130], [21130, 21250], [21250, 21330], [21330, 21450], [21450, 21610], [21610, 21790], [21790, 21990], [21990, 22190], [22190, 22330], [22330, 22450], [22450, 22590], [22590, 22690], [22690, 22870], [22870, 23110], [23690, 23930], [24090, 24250], [24250, 24490], [24570, 24730], [24730, 24910], [24910, 25050], [25050, 25210], [25210, 25330], [25330, 25430], [25430, 25670], [25990, 26230], [26270, 26450], [26450, 26690], [26710, 26810], [26810, 26990], [26990, 27090], [27090, 27170], [27170, 27290], [27290, 27410], [27410, 27510], [27510, 27590], [27590, 27670], [27670, 27910], [28230, 28430], [28430, 28550], [28550, 28790], [28790, 28910], [28910, 29030], [29030, 29110], [29110, 29230], [29230, 29330], [29330, 29450], [29450, 29570], [29570, 29770], [29770, 29930], [29930, 30170], [30330, 30470], [30470, 30590], [30590, 30710], [30710, 30830], [30830, 30950], [30950, 31030], [31030, 31130], [31130, 31210], [31210, 31310], [31310, 31390], [31390, 31490], [31490, 31570], [31570, 31750], [31750, 31910], [31910, 32030], [32030, 32170], [32170, 32270], [32270, 32390], [32390, 32509], [32509, 32630], [32630, 32730], [32730, 32810], [32810, 32990], [32990, 33070], [33070, 33270], [33270, 33450], [33450, 33550], [33550, 33710], [33710, 33910], [33910, 34315], [35110, 35270], [35270, 35510], [35510, 35750], [36070, 36210], [36210, 36350], [36350, 36510], [36510, 36710], [36710, 36870], [36870, 37110], [37170, 37290], [37290, 37410], [37410, 37530], [37530, 37610], [37610, 37710], [37710, 37830], [37830, 37910], [37910, 38030], [38030, 38190], [38190, 38370], [38370, 38490], [38490, 38730], [38750, 38850], [38850, 38970], [38970, 39050], [39050, 39130], [39130, 39310], [39310, 39550], [39590, 39730], [39730, 39970], [39970, 40210], [40250, 40450], [40450, 40650], [40650, 40810], [40810, 41050], [41250, 41410], [41410, 41590], [41590, 41670], [41670, 41850], [41850, 42010], [42010, 42250], [42750, 42970], 
[42970, 43210], [43290, 43510], [43510, 43750], [43750, 43990], [43990, 44230], [44290, 44390], [44390, 44570], [44570, 44710], [44710, 44870], [44870, 45110], [45150, 45290], [45290, 45470], [45470, 45590], [45590, 45670], [45670, 45790], [45790, 45950], [45950, 46130], [46130, 46210], [46210, 46290], [46290, 46470], [46470, 46610], [46610, 46810], [46810, 46970], [46970, 47210], [47270, 47370], [47370, 47490], [47490, 47730], [48190, 48390], [48390, 48630], [48650, 48750], [48750, 48850], [48850, 49050], [49050, 49230], [49230, 49370], [49370, 49470], [49470, 49610], [49610, 49770], [49770, 50010], [50170, 50370], [50370, 50490], [50490, 50730], [50950, 51150], [51150, 51350], [51350, 51510], [51510, 51590], [51590, 51830], [52290, 52850], [52850, 53175], [54000, 54200], [54200, 54440], [54460, 54600], [54600, 54760], [54760, 55145], [56990, 57230], [57290, 57450], [57450, 57590], [57590, 57770], [57770, 57990], [57990, 58210], [58210, 58450], [58550, 58750], [58750, 58870], [58870, 59050], [59050, 59150], [59150, 59270], [59270, 59510], [59530, 59750], [59750, 59990], [60610, 60850], [60870, 61110], [61510, 61750], [61770, 62010], [62070, 62310], [62310, 62635], [64610, 64750], [64750, 64850], [64850, 65090], [65110, 65190], [65190, 65390], [65390, 65470], [65470, 65570], [65570, 65670], [65670, 65850], [65850, 65950], [65950, 66050], [66050, 66210], [66210, 66350], [66350, 66450], [66450, 66590], [66590, 66750], [66750, 66990], [67110, 67330], [67330, 67430], [67430, 67570], [67570, 67670], [67670, 67790], [67790, 68030], [68210, 68450], [68450, 68550], [68550, 68690], [68690, 68790], [68790, 68910], [68910, 68990], [68990, 69070], [69070, 69150], [69150, 69250], [69250, 69410], [69410, 69510], [69510, 69750], [69930, 70110], [70110, 70250], [70250, 70350], [70350, 70530], [70530, 70650], [70650, 70750], [70750, 70890], [70890, 71010], [71010, 71250], [71270, 71430], [71430, 71670], [71690, 71790], [71790, 71970], [71970, 72090], [72090, 72230], [72230, 72330], 
[72330, 72450], [72450, 72690], [73110, 73350], [73590, 73770], [73770, 73910], [73910, 74130], [74130, 74370], [74990, 75230], [75390, 75510], [75510, 75650], [75650, 75750], [75750, 75870], [75870, 75990], [75990, 76110], [76110, 76190], [76190, 76330], [76330, 76470], [76470, 76710], [76750, 76950], [76950, 77190], [77790, 78030], [78250, 78390], [78390, 78570], [78570, 78810], [79350, 79530], [79530, 79710], [79710, 79830], [79830, 79970], [79970, 80070], [80070, 80190], [80190, 80270], [80270, 80410], [80410, 80550], [80550, 80790], [80910, 81150], [81370, 81510], [81510, 81710], [81710, 81950], [82050, 82290], [82690, 82890], [82890, 83130], [83350, 83590], [83630, 83730], [83730, 83910], [83910, 83990], [83990, 84090], [84090, 84210], [84210, 84290], [84290, 84410], [84410, 84650], [84950, 85110], [85110, 85210], [85210, 85310], [85310, 85410], [85410, 85550], [85550, 85650], [85650, 85770], [85770, 85870], [85870, 85950], [85950, 86130], [86130, 86330], [86330, 86430], [86430, 86550], [86550, 86690], [86690, 86850], [86850, 87030], [87030, 87150], [87150, 87270], [87270, 87510], [88050, 88290], [88350, 88550], [88550, 88650], [88650, 88770], [88770, 88890], [88890, 89010], [89010, 89090], [89090, 89190], [89190, 89370], [89370, 89490], [89490, 89670], [89670, 89810], [89810, 89910], [89910, 90150], [90390, 90630], [90750, 90870], [90870, 91030], [91030, 91170], [91170, 91370], [91370, 91530], [91530, 91770], [91810, 91950], [91950, 92150], [92150, 92310], [92310, 92450], [92450, 92610], [92610, 92790], [92790, 93030], [93110, 93330], [93330, 93530], [93530, 93690], [93690, 93890], [93890, 93990], [93990, 94190], [94190, 94290], [94290, 94430], [94430, 94530], [94530, 94770], [95070, 95310], [95610, 95850], [95850, 95930], [95930, 96030], [96030, 96150], [96150, 96269], [96269, 96390], [96390, 96570], [96570, 96810], [96830, 97070], [97130, 97290], [97290, 97410], [97410, 97550], [97550, 97650], [97650, 97850], [97850, 97950], [97950, 98130], [98130, 98370], 
[98550, 98730], [98730, 98910], [98910, 99010], [99010, 99150], [99150, 99390], [99490, 99630], [99630, 99750], [99750, 99870], [99870, 99950], [99950, 100070], [100070, 100230], [100230, 100350], [100350, 100430], [100430, 100530], [100530, 100650], [100650, 100750], [100750, 100830], [100830, 101010], [101010, 101130], [101130, 101250], [101250, 101430], [101430, 101570], [101570, 101670], [101670, 101790], [101790, 101970], [101970, 102090], [102090, 102170], [102170, 102270], [102270, 102510], [102790, 102930], [102930, 103050], [103050, 103230], [103230, 103350], [103350, 103470], [103470, 103650], [103650, 103770], [103770, 103910], [103910, 104130], [104130, 104250], [104250, 104430], [104430, 104550], [104550, 104670], [104670, 104790], [104790, 104910], [104910, 105030], [105030, 105270], [105550, 105790], [105990, 106090], [106090, 106250], [106250, 106490], [106490, 106610], [106610, 106730], [106730, 106910], [106910, 107150], [107370, 107570], [107570, 107690], [107690, 108085], [108750, 108970], [108970, 109090], [109090, 109270], [109270, 109430], [109430, 109610], [109610, 109730], [109730, 109870], [109870, 109970], [109970, 110170], [110170, 110330], [110330, 110570], [110770, 110990], [110990, 111230], [111510, 111650], [111650, 111890], [111910, 112070], [112070, 112310], [112450, 112670], [112670, 112850], [112850, 113030], [113030, 113270], [113410, 113610], [113610, 113850], [114190, 114390], [114390, 114490], [114490, 114590], [114590, 114690], [114690, 114810], [114810, 114930], [114930, 115030], [115030, 115110], [115110, 115230], [115230, 115470], [115490, 115590], [115590, 115710], [115710, 115830], [115830, 115950], [115950, 116130], [116130, 116230], [116230, 116350], [116350, 116490], [116490, 116589], [116589, 116730], [116730, 116830], [116830, 116950], [116950, 117370], [117370, 117470], [117470, 117670], [117670, 117910], [118030, 118270], [118270, 118510], [118730, 118870], [118870, 119050], [119050, 119170], [119170, 119290], 
[119290, 119450], [119450, 119690], [119890, 120130], [120130, 120290], [120290, 120410], [120410, 120510], [120510, 120650], [120650, 120890], [121370, 121570], [121570, 121730], [121730, 121870], [121870, 121970], [121970, 122050], [122050, 122150], [122150, 122270], [122270, 122410], [122410, 122570], [122570, 122805]], 'sentence_info': [{'text': '非常高兴哈能够和几位的话呢一起来讨论互联网企业如何决胜全球化新高地这个话题。', 'start': 9570, 'end': 9810, 'timestamp': [[50, 130], [130, 250], [250, 410], [410, 650], [650, 890], [1190, 1430], [1430, 1670], [1870, 2110], [2230, 2430], [2430, 2670], [2690, 2850], [2850, 3090], [3150, 3390], [3810, 3990], [3990, 4230], [4230, 4450], [4450, 4650], [4650, 4890], [5450, 5670], [5670, 5790], [5790, 6030], [6050, 6270], [6270, 6490], [6490, 6690], [6690, 6890], [6890, 7130], [7170, 7410], [7790, 8029], [8070, 8310], [8310, 8550], [8570, 8790], [8790, 8970], [8970, 9170], [9170, 9270], [9270, 9390], [9390, 9570], [9570, 9810]], 'spk': 0},...]`
BUG:
1. Why do 'start': 9570 and 'end': 9810 in the `sentence_info` field take the start and end time of the `last character`? Shouldn't the `sentence_info` field give the start and end time of the whole sentence? Shouldn't it be 'start': 50, 'end': 9810?
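As a workaround until this is fixed, the real sentence span can be recovered from the word-level timestamps, i.e. the first word's onset and the last word's offset (in milliseconds):

```python
def sentence_span(sentence_info):
    """Return (start, end) of the whole sentence from its word timestamps,
    instead of the last word's span reported in 'start'/'end'."""
    ts = sentence_info["timestamp"]
    return ts[0][0], ts[-1][1]
```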
@wenmengzhou @tastelikefeet @wangxingjun778 @Jintao-Huang @Firmament-cyou | closed | 2024-07-29T06:40:07Z | 2024-09-04T01:57:42Z | https://github.com/modelscope/modelscope/issues/933 | [
"Stale"
] | Lixi20 | 6 |
sunscrapers/djoser | rest-api | 563 | French translations are not applied | I have been trying out djoser with django rest framework. Using 2.0.5 at the moment.
I noticed that the activation email was only partially translated, and digging into it I realized some of it is translated because by coincidence some of the translation keys used are also used by the admin django app.
Take the translation key: {% blocktrans %}Account activation on {{ site_name }}{% endblocktrans %}
This one does not get translated because it is not part of any other installed app.
The following translation in the same email template gets translated, albeit not to the translation specified in the djoser locale file, but rather to the one specified in the admin locale file.
{% trans "Thanks for using our site!" %}
Grepping the site-packages I find the translation which gets applied:
./lib/python3.6/site-packages/django/contrib/admin/locale/fr/LC_MESSAGES/django.po:msgstr "Merci d'utiliser notre site !"
There is no sign of the equivalent file for djoser.
I can indeed see it exists in the git repo at https://github.com/sunscrapers/djoser/blob/master/djoser/locale/fr/LC_MESSAGES/django.po
Grepping for one of the translation keys I discovered that there are translations for Polish:
ls ./lib/python3.6/site-packages/djoser/locale/
pl
There are no other locales in this directory. | open | 2020-12-06T18:46:02Z | 2020-12-31T18:37:50Z | https://github.com/sunscrapers/djoser/issues/563 | [] | connorsml | 2 |
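Worth noting for the djoser report above: Django loads compiled `.mo` catalogs at runtime and ignores `.po` sources, so the repo's French `.po` does nothing unless it is compiled (e.g. with `msgfmt`) and shipped in the package. A quick way to list which locales of an installed package are actually usable:

```python
from pathlib import Path

def compiled_locales(package_dir):
    """Locales that ship a compiled django.mo; anything else stays silently
    untranslated at runtime even if a django.po exists in the repo."""
    locale_dir = Path(package_dir) / "locale"
    return sorted(mo.parent.parent.name
                  for mo in locale_dir.glob("*/LC_MESSAGES/django.mo"))
```

Compiling the repo's `fr` catalog into `djoser/locale/fr/LC_MESSAGES/django.mo` should make the French strings appear.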
dgtlmoon/changedetection.io | web-scraping | 1,707 | New version available notif showing up even when there is no new version | **Describe the bug**
I'm not sure what's going on here, but it seems that Change Detection is suggesting an update is available even though I am on the latest release.
**Version**
*Exact version* in the top right area: v0.44
**To Reproduce**
Steps to reproduce the behavior:
1. Selfhost (via docker)
2. Open homepage
3. See error
**Expected behavior**
I would expect this not to happen, as I am on the latest release. This could be intentional, to account for dev versions, but I feel that would be misleading and confusing for some (like me) who appear to be on the latest version already.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 10
- Browser Firefox
- Version 115.0.3 (64-bit)
**Additional context**
I am self-hosting via docker but I am sure I am on the [latest image available](https://github.com/dgtlmoon/changedetection.io/pkgs/container/changedetection.io/110236280?tag=latest) and I am using the `latest` tag.
This could all be incorrect and I am missing something (as I have not looked into how the message appears in the code) and if that is the case that's my bad 😅. | closed | 2023-07-31T03:10:40Z | 2023-08-02T13:42:45Z | https://github.com/dgtlmoon/changedetection.io/issues/1707 | [
"triage"
] | k4deng | 2 |
ydataai/ydata-profiling | jupyter | 1,299 | feature: Enable users to control the path for the generated visuals | ### Current Behaviour
Use the following codes to reproduce:
```python
import numpy as np
import pandas as pd
from ydata_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
profile = ProfileReport(df, title="Profiling Report", html={'inline':False})
profile.to_file("path/to/report")
```
Then it outputs the report HTML file and the assets dir (because `inline` is set to `False` here) in the `path/to/report` dir:
- `report.html`
- `report_assets/`
---
### ISSUE
The problem is that the `report.html` file cannot load the `css` files, because it tries to load them from a folder named `_assets`. HOWEVER, there is no `_assets` folder.
### Expected Behaviour
Since the `report.html` file is auto-generated, it should adapt the asset links to the CORRECT dir, such that the report style can be displayed/rendered properly.
### Data Description
Pseudo data.
```python
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
```
### Code that reproduces the bug
```Python
import numpy as np
import pandas as pd
from ydata_profiling import ProfileReport
df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
profile = ProfileReport(df, title="Profiling Report", html={'inline':False})
profile.to_file("path/to/report")
```
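Until the generator emits the right prefix, a possible interim workaround (assuming, as observed above, the generated HTML references `_assets/` while the folder on disk is named `report_assets/`) is to rewrite the links after generation:

```python
from pathlib import Path

def fix_asset_links(report_path, assets_dir_name):
    """Rewrite the '_assets/' prefix in the generated HTML to the assets
    directory actually written next to the report."""
    report = Path(report_path)
    html = report.read_text(encoding="utf-8")
    report.write_text(html.replace("_assets/", f"{assets_dir_name}/"),
                      encoding="utf-8")

# hypothetical usage with the paths from the repro above:
# fix_asset_links("path/to/report/report.html", "report_assets")
```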
### pandas-profiling version
v4.1.2
### Dependencies
```Text
attrs==22.2.0
certifi==2022.12.7
charset-normalizer==3.1.0
colorama==0.4.6
contourpy==1.0.7
cycler==0.11.0
fonttools==4.39.2
htmlmin==0.1.12
idna==3.4
ImageHash==4.3.1
Jinja2==3.1.2
joblib==1.2.0
kiwisolver==1.4.4
MarkupSafe==2.1.2
matplotlib==3.6.3
multimethod==1.9.1
networkx==3.0
numpy==1.23.5
packaging==23.0
pandas==1.5.3
patsy==0.5.3
phik==0.12.3
Pillow==9.4.0
pydantic==1.10.7
pyparsing==3.0.9
python-dateutil==2.8.2
pytz==2023.2
PyWavelets==1.4.1
PyYAML==6.0
requests==2.28.2
scipy==1.9.3
seaborn==0.12.2
six==1.16.0
statsmodels==0.13.5
tangled-up-in-unicode==0.2.0
tqdm==4.64.1
typeguard==2.13.3
typing_extensions==4.5.0
urllib3==1.26.15
visions==0.7.5
ydata-profiling==4.1.2
```
### OS
Windows 11
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-03-28T12:15:02Z | 2023-04-12T16:58:17Z | https://github.com/ydataai/ydata-profiling/issues/1299 | [
"feature request 💬"
] | kaimo455 | 1 |
vaexio/vaex | data-science | 2,320 | Issue on page /faq.html | I am not able to install vaex via `pip install vaex`.
I installed Python version 3.11.1; the error I am getting when I try to install is given below:
```
Using cached numba-0.56.4.tar.gz (2.4 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\HP\AppData\Local\Temp\pip-install-ksldzaij\numba_1712998ee6f0470e807e228c6b892e9b\setup.py", line 51, in <module>
_guard_py_ver()
File "C:\Users\HP\AppData\Local\Temp\pip-install-ksldzaij\numba_1712998ee6f0470e807e228c6b892e9b\setup.py", line 48, in _guard_py_ver
raise RuntimeError(msg.format(cur_py, min_py, max_py))
RuntimeError: Cannot install on Python version 3.11.1; only versions >=3.7,<3.11 are supported.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
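To make the guard in the traceback concrete, here is my own restatement of the check that rejects 3.11.1 (this is not numba's actual code):

```python
# My own restatement of the version guard from the traceback
# (not numba's actual code): Python must satisfy >=3.7,<3.11.
def numba_py_supported(version_info):
    return (3, 7) <= tuple(version_info[:2]) < (3, 11)

print(numba_py_supported((3, 10, 9)))  # True
print(numba_py_supported((3, 11, 1)))  # False -> why the install fails
```

So any Python 3.7 through 3.10 interpreter should get past this guard.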
| open | 2023-01-04T11:13:47Z | 2023-03-13T19:08:15Z | https://github.com/vaexio/vaex/issues/2320 | [] | navikaran1 | 1 |
coqui-ai/TTS | deep-learning | 3,641 | cannot import name 'magphase' from 'librosa' | ### Describe the bug
cannot import name 'magphase' from 'librosa'
I saw that yesterday there was an update to librosa. In my project I replaced the library with version 0.9.0 and everything worked.
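For anyone hitting the same thing, this is the pin I used (assuming a requirements.txt-style setup; the exact version that works for you may differ):

```
librosa==0.9.0
```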
### To Reproduce
```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)
tts.tts_to_file(text=my_text,
                file_path="output.wav",
                speaker_wav="/content/audio.wav",
                language="ru")
```
### Expected behavior
Everything was supposed to work
### Logs
```shell
no logs
```
### Environment
```shell
- TTS version
```
### Additional context
No | closed | 2024-03-20T08:57:36Z | 2025-01-03T08:48:56Z | https://github.com/coqui-ai/TTS/issues/3641 | [
"bug",
"wontfix"
] | Simaregele | 4 |
horovod/horovod | machine-learning | 3,482 | Gathering non-numeric values | Hello! We are using hvd.allreduce to gather tensor values from different GPUs. What if I have an evaluation function which produces an answer dictionary, for example {"0b014789":"yes", "0b458796":"in the school"}? Naturally we will have multiple dictionaries; is there an example of how to gather these values from multiple GPUs? I hope I expressed my idea clearly. | closed | 2022-03-21T13:34:02Z | 2022-03-21T20:03:13Z | https://github.com/horovod/horovod/issues/3482 | [] | Arij-Aladel | 0
tflearn/tflearn | tensorflow | 474 | Allow empty for restorer_trainvars saver | In line 140 of [tflearn/helpers/trainer.py](https://github.com/tflearn/tflearn/blob/master/tflearn/helpers/trainer.py#L138), the saver should be created with `allow_empty=True`. Otherwise, it will raise an error complaining that there are no variables to restore when doing last-layer fine-tuning.
In last-layer fine-tuning, we set all the bottom layers to `trainable=False` and the last layer to `restore=False`. As a result, this
```
to_restore_trainvars = [item for item in tf.trainable_variables()
if check_restore_tensor(item, excl_vars)]
```
is an empty list. | open | 2016-11-18T16:12:03Z | 2016-11-23T16:51:55Z | https://github.com/tflearn/tflearn/issues/474 | [] | pluskid | 1 |
samuelcolvin/watchfiles | asyncio | 169 | Errors on `WSL` and docker with windows | ### Description
`watchfiles` appears to be fully pip installable for windows, but I can't seem to get a minimal example working on Linux. The below MRE is straight from the docs and works great on windows. That is, when I change the foobar.py file the simple server is relaunched.
However if I try the same MRE on via wsl (ubuntu) or docker (debian) I don't see any change events firing. This was discovered while updating `uvicorn` 0.17x->0.18x in which they transitioned to this package for --reload file-watching rather than `watchgod`.
The following MRE was tested in new conda environments on both windows (it worked well) and Ubuntu (it didn't detect changes).
---
_foobar.py_
```python
import os, json
from aiohttp import web
async def handle(request):
# get the most recent file changes and return them
changes = os.getenv("WATCHFILES_CHANGES")
changes = json.loads(changes)
return web.json_response(dict(changes=changes))
app = web.Application()
app.router.add_get("/", handle)
def main():
web.run_app(app, port=8000)
```
---
_requirements.txt_
```shell
aiohttp
watchfiles==0.15.0
```
---
_shell_
```shell
$watchfiles foobar.main
[17:05:44] watchfiles 👀 path="$HOME$/sources/watchfiles-bug" target="foobar.main" (function) filter=DefaultFilter...
======== Running on http://0.0.0.0:8000 ========
(Press CTRL+C to quit)
```
On Linux, this output is never updated when `foobar.py` changes, but on windows the expected notifications of the changes occur and the module reloads correctly.
Is this expected? Are there other dependencies on Linux that need to be installed for the binaries to work, e.g., do I need to install rust?
It's entirely likely that this is a me-issue, but if this is verified to be an issue for others I wonder if it's possible to catch this sort of thing in the CI action with a build step that pip installs the built package from PyPI in a clean environment (one without the Rust compiler) and then runs `watchfiles` with a simple example like this one from the docs.
The tough part (to me) seems like it'd be making the change and detecting that it was correctly caught by `watchfiles`, but I'd be happy to brainstorm how to handle that if this issue ends up even needing a correction.
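As a first stab at that brainstorm, here is a rough stdlib-only sketch: make a change from a background thread, then poll for it. Polling file size here is only a stand-in for the real assertion, which would instead check that `watchfiles` reported the change; none of these names come from the watchfiles API.

```python
import os
import tempfile
import threading
import time

def change_later(path, delay):
    # Simulate the "edit" a CI test would make to the watched file.
    time.sleep(delay)
    with open(path, "a") as f:
        f.write("# touched\n")

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "foobar.py")
    with open(path, "w") as f:
        f.write("x = 1\n")
    size_before = os.path.getsize(path)

    t = threading.Thread(target=change_later, args=(path, 0.2))
    t.start()

    observed = False
    deadline = time.time() + 5  # fail the test if nothing is seen in time
    while time.time() < deadline:
        if os.path.getsize(path) != size_before:
            observed = True
            break
        time.sleep(0.05)
    t.join()

print(observed)  # True
```

The real version would replace the polling loop with a `watchfiles` watcher and assert on the changes it reports.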
### Example Code
```Python
-- see description
```
### Example Code Output
```Text
-- see description
```
### Operating System
linux
### Environment
docker (debian), WSL (ubuntu)
### Watchfiles Version
0.15.0
### Python Version
3.9.13
### Rust & Cargo Version
-- | closed | 2022-07-19T00:15:05Z | 2024-08-31T20:32:55Z | https://github.com/samuelcolvin/watchfiles/issues/169 | [
"bug"
] | austinorr | 14 |
ghtmtt/DataPlotly | plotly | 185 | TypeError: setChecked(self, bool): argument 1 has unexpected type 'NoneType' | Hi, I got a new problem after upgrading to 3.2. It is linked to the #183 fix, since I had that issue before upgrading.
It seems to occur when I open a project which was created before I upgraded this plugin. It does not occur when I start a new project in QGIS.
```
Traceback (most recent call last):
  File "C:\Users\dpe\AppData\Roaming\QGIS\QGIS3\profiles\Daniel\python\plugins\DataPlotly\gui\plot_settings_widget.py", line 1393, in read_project
    self.set_settings(settings)
  File "C:\Users\dpe\AppData\Roaming\QGIS\QGIS3\profiles\Daniel\python\plugins\DataPlotly\gui\plot_settings_widget.py", line 1080, in set_settings
    self.violinBox.setChecked(settings.properties.get('violin_box', None))
TypeError: setChecked(self, bool): argument 1 has unexpected type 'NoneType'
```
 | closed | 2020-02-10T09:29:25Z | 2020-02-10T10:25:04Z | https://github.com/ghtmtt/DataPlotly/issues/185 | [] | danpejobo | 5
jazzband/django-oauth-toolkit | django | 724 | Missing self argument when trying to create Application from code | How would I create an application from a view? I've tried copying the management command as closely as I could, like this:
```
from oauth2_provider.models import get_application_model
Application = get_application_model()
Application(client_id=serializer.data['client_id'], user=request.user, redirect_uris="https://google.com", client_type="Public", authorization_grant_type="Authorization code", name="Modeltest")
Application.save()
```
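(As a sanity check on what that message usually means, here is a minimal plain-Python illustration, with no Django involved, of calling a method on the class itself rather than on an instance:)

```python
# Plain-Python stand-in (not Django): a method called on the class
# receives no instance, so Python reports `self` as a missing argument.
class Widget:
    def __init__(self, name):
        self.name = name

    def save(self):
        return f"saved {self.name}"

w = Widget("demo")
print(w.save())  # bound call on an instance: works

try:
    Widget.save()  # called on the class: no instance is passed
except TypeError as exc:
    print(exc)  # ... missing 1 required positional argument: 'self'
```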
But I keep getting `TypeError: full_clean() missing 1 required positional argument: 'self'`. Why is this? I've initialised Application, right? | closed | 2019-07-25T16:27:53Z | 2019-07-25T16:30:35Z | https://github.com/jazzband/django-oauth-toolkit/issues/724 | [] | sometimescool22 | 1 |
xorbitsai/xorbits | numpy | 717 | BUG: `LogisticRegression.fit` has poor performance on speed | ### Describe the bug
`LogisticRegression.fit` never finishes once the data gets a bit *larger*.
### To Reproduce
When `max_iter=1` everything works fine
```python
from xorbits._mars.learn.glm import LogisticRegression
import numpy as np
n_rows = 100
n_cols = 2
X = np.random.randn(n_rows, n_cols)
y = np.random.randint(0, 2, n_rows)
lr = LogisticRegression(max_iter=1)
lr.fit(X, y)
```
However, just increasing `max_iter` to 100 makes the program seemingly never stop (at least not within 1 min, which is weird):
```python
lr = LogisticRegression(max_iter=100)
lr.fit(X, y)
```
1. Your Python version: 3.10.2
2. The version of Xorbits you use: HEAD, install on my local device.
3. I'm working on my Macbook with m1 pro chip
| open | 2023-09-24T11:03:42Z | 2024-12-16T01:52:34Z | https://github.com/xorbitsai/xorbits/issues/717 | [
"bug"
] | JiaYaobo | 2 |
PaddlePaddle/models | computer-vision | 4,802 | CUDA error in the AI Studio environment | The model being trained is SimNet, with no modifications at all; it reports the error below:
```
ExternalError: Cuda error(38), no CUDA-capable device is detected.
[Advise: This indicates that no CUDA-capable devices were detected by the installed CUDA driver. ] at (/paddle/paddle/fluid/platform/gpu_info.cc:65)
```
 | closed | 2020-08-16T10:26:26Z | 2020-08-18T01:15:31Z | https://github.com/PaddlePaddle/models/issues/4802 | [] | hlzonWang | 1
chaoss/augur | data-visualization | 2,888 | Deal with Repo Group Placement Issue: https://github.com/oss-aspen/8Knot/issues/698 | This issue on 8Knot's repo: https://github.com/oss-aspen/8Knot/issues/698
is about how Augur puts new repos into repo_groups. We need to discuss the design and how it works.
kennethreitz/responder | flask | 202 | Unable to include "parameters" in autogenerated documentation | Hello.
First of all, I would like to say thanks for providing this package.
I'm stuck trying to include a parameters section in the docstring associated to my route functions.
Using the Pets example, I always get the following error for docstrings containing the parameters section, when visiting the /docs path:
> 😱 Could not render this component, see the console.
If no parameters section is included, no error is fired.
I'm including below the whole source code, for your reference.
I have made some slight modifications:
- added "Pet" class, which seems to be needed according to marshmallow documentation.
- added a new route, with pet_id as a path param.
- added a summary section to the docstring
By the way, when you use the directive @api.schema('Pet'), I assume that it adds the schema class following it to the api instance, right? But, what does the param "Pet" do? I thought that it was used to
automatically create the Pet class (the regular class, not the schema class), but I guess that it is only
a unique name or so...
I'm using python 3.7, and installed the latest stable release with:
> $ pipenv install responder --pre
```
#! /usr/bin/env python
import responder
from marshmallow import Schema, fields
api = responder.API(title='Web Service', version='1.0', openapi='3.0.0',
docs_route='/docs')
class Pet():
def __init__(self, pet_id, pet_name):
self.pet_id = pet_id
self.pet_name = pet_name
@api.schema('Pet')
class PetSchema(Schema):
pet_id = fields.Str()
pet_name = fields.Str()
@api.route('/pets')
def get_all_pets(req, resp):
'''A cute furry animal endpoint.
---
get:
summary: Animal endpoint.
description: Get all pets.
responses:
200:
description: All pets to be returned
schema:
$ref = '#/components/schemas/Pet'
'''
resp.media = PetSchema().dump({'name': 'little orange'}).data
@api.route('/pets/{pet_id}')
def get_one_pet(req, resp, *, pet_id):
'''A cute furry animal endpoint.
---
get:
summary: Animal endpoint.
description: Get a random pet.
parameters:
- name: pet_id
in: path
description: Pet ID
type: integer
required: true
responses:
200:
description: A pet to be returned
schema:
$ref = '#/components/schemas/Pet'
'''
pet = Pet('pet_id1', 'freddie')
resp.media = PetSchema().dump(pet).data
if __name__ == '__main__':
api.run()
```
Thanks a lot.
-Bob V | closed | 2018-11-06T11:38:45Z | 2018-11-06T12:28:34Z | https://github.com/kennethreitz/responder/issues/202 | [] | emacsuser123 | 5 |
lucidrains/vit-pytorch | computer-vision | 160 | Delete Issue | Delete Comment. | open | 2021-10-01T03:09:08Z | 2021-10-01T03:28:59Z | https://github.com/lucidrains/vit-pytorch/issues/160 | [] | nsriniva03 | 0 |
huggingface/datasets | numpy | 7,472 | Label casting during `map` process is canceled after the `map` process | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels, since [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and the forward function of models in the transformers package internally uses `BCEWithLogitsLoss`.
However, the casting was canceled after the `.map` process, and the label values still use int values, which leads to an error:
```
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward
loss = loss_fct(logits, labels)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward
return F.binary_cross_entropy_with_logits(
File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits
return torch.binary_cross_entropy_with_logits(
RuntimeError: result type Float can't be cast to the desired output type Long
```
This seems to happen only when the original labels are int values (see examples below).
### Steps to reproduce the bug
If the original dataset uses a list of int labels, it will cancel the int->float casting
```python
from datasets import Dataset
data = {
'text': ['text1', 'text2', 'text3', 'text4'],
'labels': [[0, 1, 2], [3], [3, 4], [3]]
}
dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}
def multi_labels_to_ids(labels):
ids = [0.0] * len(label2idx)
for label in labels:
ids[label2idx[label]] = 1.0
return ids
def preprocess(examples):
result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
print('"labels" are int', examples['labels'])
result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
print('"labels" were converted to multi-label format with float values', result['labels'])
return result
preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1, 1, 1, 0, 0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
If the original dataset uses non-int labels, it works as expected.
```python
from datasets import Dataset
data = {
'text': ['text1', 'text2', 'text3', 'text4'],
'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
}
dataset = Dataset.from_dict(data)
label_set = set([label for labels in data['labels'] for label in labels])
label2idx = {label: idx for idx, label in enumerate(sorted(label_set))}
def multi_labels_to_ids(labels):
ids = [0.0] * len(label2idx)
for label in labels:
ids[label2idx[label]] = 1.0
return ids
def preprocess(examples):
result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]}
print('"labels" are int', examples['labels'])
result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']]
print('"labels" were converted to multi-label format with float values', result['labels'])
return result
preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text'])
print(preprocessed_dataset[0]['labels'])
# Output: "[1.0, 1.0, 1.0, 0.0, 0.0]"
# Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]"
```
Note that the only difference between these two examples is
> 'labels': [[0, 1, 2], [3], [3, 4], [3]]
v.s
> 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']]
### Expected behavior
Even if the original dataset uses a list of int labels, the int->float casting during the `.map` process should not be canceled, as shown in the above example.
### Environment info
OS Ubuntu 22.04 LTS
Python 3.10.11
datasets v3.4.1 | open | 2025-03-21T07:56:22Z | 2025-03-21T07:58:14Z | https://github.com/huggingface/datasets/issues/7472 | [] | yoshitomo-matsubara | 0 |
lucidrains/vit-pytorch | computer-vision | 2 | Using masks as preprocessing for classification [FR] | Maybe it's a little bit too early to ask for this but could it be possible to specify regions within an image for `ViT` to perfom the prediction? I was thinking on a binary mask, for example, which could be used for the tiling step in order to obtain different images sequences.
I am thinking on a pipeline where, in order to increase resolution, you could specify the regions to perform the training based on whatever reason you find it suitable (previous attention maps for example :smile:). | closed | 2020-10-07T12:31:23Z | 2020-10-09T05:29:47Z | https://github.com/lucidrains/vit-pytorch/issues/2 | [] | Tato14 | 8 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 388 | US Sponsorship now or in the future Bug | Configured the `plain_test.yaml` as such:
> legal_authorization:
eu_work_authorization: "No"
us_work_authorization: "Yes"
requires_us_visa: "Yes "
requires_us_sponsorship: "Yes"
requires_eu_visa: "Yes"
legally_allowed_to_work_in_eu: "No"
legally_allowed_to_work_in_us: "Yes"
requires_eu_sponsorship: "Yes"
I am only applying to jobs in the US, but it keeps answering "No" to "needs sponsorship now or in the future in the US" when it should say "Yes".
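One thing I want to flag while re-reading my config, though this is purely a guess and I haven't confirmed it matters: the stray trailing space in `requires_us_visa: "Yes "`. A cleaned-up fragment for reference:

```yaml
legal_authorization:
  us_work_authorization: "Yes"
  requires_us_visa: "Yes"
  requires_us_sponsorship: "Yes"
```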
| closed | 2024-09-15T17:09:02Z | 2024-11-19T23:51:30Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/388 | [
"bug"
] | abrohit | 6 |
deezer/spleeter | tensorflow | 906 | [Discussion] numpy 1.22.4 is missing as a dependency from the pyproject.toml? | using Python 3.19.9 on Fedora Linux
`pip install spleeter` gives the following error:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
pandas 1.5.3 requires numpy>=1.20.3; python_version < "3.10", but you have numpy 1.19.5 which is incompatible.
scipy 1.13.1 requires numpy<2.3,>=1.22.4, but you have numpy 1.19.5 which is incompatible.
tensorflow 2.9.3 requires numpy>=1.20, but you have numpy 1.19.5 which is incompatible.
Successfully installed numpy-1.19.5
```
Running `pip install numpy==1.22.4`, I was able to install spleeter via pip | open | 2024-09-05T11:02:48Z | 2024-09-05T11:02:48Z | https://github.com/deezer/spleeter/issues/906 | [
"question"
] | vin-cf | 0 |
dropbox/PyHive | sqlalchemy | 357 | Slow performance reading large Hive table in comparison with RJDBC | I'm trying to read a large table from Hive in Python using pyhive; the table has about 16 million rows, but reading it takes about **33 minutes**. When I read the same table in R with RJDBC it takes about **13 minutes** to read the whole table. Here is my code.
``` R
library(RJDBC)
driver <- try(JDBC("org.apache.hive.jdbc.HiveDriver", paste0(jar_dir, '/hive-jdbc-3.1.2-standalone.jar')))
con_hive <- RJDBC::dbConnect(driver, "jdbc:hive2://hive_ip:10000/dev_perm")
query <- "SELECT * FROM my_table WHERE periodo='2020-02-01'"
replica_data <- dbGetQuery(con_hive, query)
```
And in python my code is
``` python
from pyhive import hive
import pandas as pd

conn = hive.Connection(host=ip_hive)
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table WHERE periodo='2020-02-01'")
results = pd.DataFrame(cursor.fetchall(), columns=[desc[0] for desc in cursor.description])
```
I already tried several values of `cursor.arraysize` in Python, but it doesn't improve performance; I also noticed that when I set an arraysize greater than 10000, Hive ignores it and sets 10000. The default value is 1000.
What can I do to improve my performance reading Hive tables in Python?
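For completeness, another pattern I have been experimenting with is batched fetching via the DB-API `fetchmany` instead of one giant `fetchall`. It is sketched below against stdlib sqlite3 purely for illustration (pyhive's cursor exposes the same DB-API methods, but I have not confirmed this helps with Hive throughput):

```python
import sqlite3

# sqlite3 stands in for the pyhive connection here; both follow the DB-API,
# so the fetchmany loop is the part that carries over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (periodo TEXT, valor INTEGER)")
conn.executemany(
    "INSERT INTO my_table VALUES (?, ?)",
    [("2020-02-01", i) for i in range(25)],
)

cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table WHERE periodo='2020-02-01'")

rows = []
while True:
    batch = cursor.fetchmany(10)  # pull fixed-size batches instead of everything at once
    if not batch:
        break
    rows.extend(batch)

print(len(rows))  # 25
```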
| open | 2020-07-31T16:50:13Z | 2020-07-31T16:50:30Z | https://github.com/dropbox/PyHive/issues/357 | [] | DXcarlos | 0 |
tartiflette/tartiflette | graphql | 127 | (SDL / Execution) Handle "input" type for mutation | Hello,
As specified in the specification, we have to use the `input` type for complex inputs in mutations, instead of the regular object type, which can contain interfaces, unions, or arguments.
## What we should do
```graphql
input RecipeInput {
id: Int
name: String
cookingTime: Int
}
type Mutation {
updateRecipe(input: RecipeInput): Recipe
}
```
instead of
```graphql
type RecipeInput {
id: Int
name: String
cookingTime: Int
}
type Mutation {
updateRecipe(input: RecipeInput): Recipe
}
```
Request sample
```graphql
mutation {
updateRecipe(input: {
id: 1,
name: "The best Tartiflette by Eric Guelpa",
cookingTime: 12
}) {
id
name
cookingTime
}
}
```
* **Tartiflette version:** 0.4.0
* **Python version:** 3.7.1
* **Executed in docker:** No
* **Is a regression from a previous versions?** No
| closed | 2019-02-26T16:34:49Z | 2019-03-04T16:22:59Z | https://github.com/tartiflette/tartiflette/issues/127 | [
"bug"
] | tsunammis | 0 |
jowilf/starlette-admin | sqlalchemy | 145 | Bug and proposal: build_full_text_search_query does not use request; extend get_search_query | 1. You can safely remove `request` in build_full_text_search_query and its nested function.
2. I need a way to easily customize search by fields.
My case:
```python
def get_search_query(term: str, model) -> Any:
"""Return SQLAlchemy whereclause to use for full text search"""
clauses = []
for field_name, field in model.__fields__.items():
if field.type_ in [
str,
int,
float,
uuid_pkg.UUID,
] or hasattr(field.type_, 'numerator'): # Pydantic fields like price: int = Field(...
attr = getattr(model, field.name)
clauses.append(cast(attr, String).ilike(f"%{term}%"))
return or_(*clauses)
```
I adapted the code to work with SQLModel; it works fine!
Thx bro! | closed | 2023-03-29T16:11:23Z | 2023-03-30T15:48:41Z | https://github.com/jowilf/starlette-admin/issues/145 | [
"bug"
] | MatsiukMykola | 1 |
Baiyuetribe/kamiFaka | flask | 88 | After Heroku wakes from sleep, the database resets to its initial state | After Heroku wakes from sleep, the database resets to its initial state: the login password, site settings, and products all revert to the fresh-install defaults. I'm not sure whether this is a problem with the script or with Heroku itself. | closed | 2021-06-26T23:40:29Z | 2021-06-30T16:59:02Z | https://github.com/Baiyuetribe/kamiFaka/issues/88 | [
"bug",
"good first issue",
"question"
] | winww22 | 1 |
fastapi/sqlmodel | fastapi | 37 | FastAPI and Pydantic - Relationships Not Working | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import List, Optional
from fastapi import Depends, FastAPI, HTTPException, Query
from sqlmodel import Field, Relationship, Session, SQLModel, create_engine, select
class TeamBase(SQLModel):
name: str
headquarters: str
class Team(TeamBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
heroes: List["Hero"] = Relationship(back_populates="team")
class TeamCreate(TeamBase):
pass
class TeamRead(TeamBase):
id: int
class TeamUpdate(SQLModel):
id: Optional[int] = None
name: Optional[str] = None
headquarters: Optional[str] = None
class HeroBase(SQLModel):
name: str
secret_name: str
age: Optional[int] = None
team_id: Optional[int] = Field(default=None, foreign_key="team.id")
class Hero(HeroBase, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
team: Optional[Team] = Relationship(back_populates="heroes")
class HeroRead(HeroBase):
id: int
class HeroCreate(HeroBase):
pass
class HeroUpdate(SQLModel):
name: Optional[str] = None
secret_name: Optional[str] = None
age: Optional[int] = None
team_id: Optional[int] = None
class HeroReadWithTeam(HeroRead):
team: Optional[TeamRead] = None
class TeamReadWithHeroes(TeamRead):
heroes: List[HeroRead] = []
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(sqlite_url, echo=True, connect_args=connect_args)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
def get_session():
with Session(engine) as session:
yield session
app = FastAPI()
@app.on_event("startup")
def on_startup():
create_db_and_tables()
@app.post("/heroes/", response_model=HeroRead)
def create_hero(*, session: Session = Depends(get_session), hero: HeroCreate):
db_hero = Hero.from_orm(hero)
session.add(db_hero)
session.commit()
session.refresh(db_hero)
return db_hero
@app.get("/heroes/", response_model=List[HeroRead])
def read_heroes(
*,
session: Session = Depends(get_session),
offset: int = 0,
limit: int = Query(default=100, lte=100),
):
heroes = session.exec(select(Hero).offset(offset).limit(limit)).all()
return heroes
@app.get("/heroes/{hero_id}", response_model=HeroReadWithTeam)
def read_hero(*, session: Session = Depends(get_session), hero_id: int):
hero = session.get(Hero, hero_id)
if not hero:
raise HTTPException(status_code=404, detail="Hero not found")
return hero
@app.patch("/heroes/{hero_id}", response_model=HeroRead)
def update_hero(
*, session: Session = Depends(get_session), hero_id: int, hero: HeroUpdate
):
db_hero = session.get(Hero, hero_id)
if not db_hero:
raise HTTPException(status_code=404, detail="Hero not found")
hero_data = hero.dict(exclude_unset=True)
for key, value in hero_data.items():
setattr(db_hero, key, value)
session.add(db_hero)
session.commit()
session.refresh(db_hero)
return db_hero
@app.delete("/heroes/{hero_id}")
def delete_hero(*, session: Session = Depends(get_session), hero_id: int):
hero = session.get(Hero, hero_id)
if not hero:
raise HTTPException(status_code=404, detail="Hero not found")
session.delete(hero)
session.commit()
return {"ok": True}
@app.post("/teams/", response_model=TeamRead)
def create_team(*, session: Session = Depends(get_session), team: TeamCreate):
db_team = Team.from_orm(team)
session.add(db_team)
session.commit()
session.refresh(db_team)
return db_team
@app.get("/teams/", response_model=List[TeamRead])
def read_teams(
*,
session: Session = Depends(get_session),
offset: int = 0,
limit: int = Query(default=100, lte=100),
):
teams = session.exec(select(Team).offset(offset).limit(limit)).all()
return teams
@app.get("/teams/{team_id}", response_model=TeamReadWithHeroes)
def read_team(*, team_id: int, session: Session = Depends(get_session)):
team = session.get(Team, team_id)
if not team:
raise HTTPException(status_code=404, detail="Team not found")
return team
@app.patch("/teams/{team_id}", response_model=TeamRead)
def update_team(
*,
session: Session = Depends(get_session),
team_id: int,
team: TeamUpdate,
):
db_team = session.get(Team, team_id)
if not db_team:
raise HTTPException(status_code=404, detail="Team not found")
team_data = team.dict(exclude_unset=True)
for key, value in team_data.items():
setattr(db_team, key, value)
session.add(db_team)
session.commit()
session.refresh(db_team)
return db_team
@app.delete("/teams/{team_id}")
def delete_team(*, session: Session = Depends(get_session), team_id: int):
team = session.get(Team, team_id)
if not team:
raise HTTPException(status_code=404, detail="Team not found")
session.delete(team)
session.commit()
return {"ok": True}
```
### Description
Are relationships working for anyone?
I either get null or an empty list.
OK, so I've copied the last full file preview from https://sqlmodel.tiangolo.com/tutorial/fastapi/relationships/
Ran it, and it creates the DB and the foreign key.
Then I inserted the data into the DB.
Checking the docs UI, everything looks great:
<img width="1368" alt="Screenshot 2021-08-26 at 23 33 55" src="https://user-images.githubusercontent.com/11464425/131044799-26f45765-95bf-4528-8353-4277dcfceb3e.png">
But when I do a request for a hero, `team` is `null`
<img width="1400" alt="Screenshot 2021-08-26 at 23 36 39" src="https://user-images.githubusercontent.com/11464425/131044990-e773fe1f-3b3a-48e4-9204-74ce0b14718c.png">
I'm really not sure what's going on, especially when all I have done is copy the code example with no changes.
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.8.2
### Additional Context
_No response_ | closed | 2021-08-26T22:40:52Z | 2024-08-22T16:54:39Z | https://github.com/fastapi/sqlmodel/issues/37 | [
"question"
] | Chunkford | 24 |
amisadmin/fastapi-amis-admin | fastapi | 147 | How to customize the UI component for a particular search_fields field | Taking the sample code from the tutorial as an example: filtering by category needs a dropdown list, but the system defaults to a text input. How can I override and customize this field without affecting the automatic generation of the other search_fields fields?
```
from fastapi_amis_admin import admin
from fastapi_amis_admin.models.fields import Field
class Article(SQLModel, table=True):
id: int = Field(default=None, primary_key=True, nullable=False)
title: str = Field(title='ArticleTitle', max_length=200)
description: Optional[str] = Field(default='', title='ArticleDescription', max_length=400)
status: bool = Field(None, title='status')
content: str = Field(title='ArticleContent')
category_id: Optional[int] = Field(default=None, foreign_key="category.id", title='CategoryId')
is_active: bool = False
@site.register_admin
class ArticleAdmin(admin.ModelAdmin):
page_schema = 'article management'
model = Article
# Set the fields to display
list_display = [Article.id, Article.title, Article.description, Article.status, Category.name]
# Set up fuzzy search field
search_fields = [Article.title, Category.name]
# custom base selector
async def get_select(self, request: Request) -> Select:
stmt = await super().get_select(request)
return stmt.outerjoin(Category, Article.category_id == Category.id)
``` | open | 2023-12-05T16:29:45Z | 2024-02-25T03:53:09Z | https://github.com/amisadmin/fastapi-amis-admin/issues/147 | [] | lifengmds | 3 |
run-llama/rags | streamlit | 16 | Can multiple PDF agents be defined? | Hi,
I succeeded in creating a PDF document agent. I wonder if it's possible to create an agent over several PDFs, or to create one agent per PDF and reference them by name (or in a similar way) in the "Generated RAG Agent" chat? | open | 2023-11-23T15:47:16Z | 2023-12-11T03:19:16Z | https://github.com/run-llama/rags/issues/16 | [] | snassimr | 4
ivy-llc/ivy | numpy | 28,526 | Fix Frontend Failing Test: tensorflow - math.paddle.diff | To-do List: https://github.com/unifyai/ivy/issues/27499 | closed | 2024-03-09T20:57:12Z | 2024-04-02T09:25:05Z | https://github.com/ivy-llc/ivy/issues/28526 | [
"Sub Task"
] | ZJay07 | 0 |
mljar/mercury | jupyter | 278 | Create widgets in the loop | Hi there,
I'm currently trying to figure out a way to dynamically add in `Select` widgets based on the categorical columns of a dataframe. Basically, I am trying to plot the data for different interactions of different categorical data. However, the columns between loaded dataframes might have different categorical column names, or any number of them, and thus the `Select` widgets can vary according to whatever is in the data.
The current approach I have is to throw everything in the a dictionary, like in the example below:
```python
import numpy as np
import mercury as mr
data = ...  # some pandas dataframe
categorical_column_names = ...  # list of categorical columns (determined via a separate function)
widgets = {}
for i in categorical_column_names:
categorical_levels = np.unique(data[i])
widgets[i] = mr.Select(label=i, value=categorical_levels[0], choices=categorical_levels)
```
This runs totally fine in the notebook. However, the locally hosted mercury app doesn't render all widgets, and fails to plot the data that the widgets are used for. Any thoughts? Thanks so much for mercury! | closed | 2023-05-17T04:18:10Z | 2023-05-24T13:59:52Z | https://github.com/mljar/mercury/issues/278 | [
"enhancement"
] | danjgale | 5 |