| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tensorflow/tensor2tensor | deep-learning | 1,013 | Language model decoding issues | I have trained a language model on the problem languagemodel_ptb10k. I want it to decode a sentence given some starting words. For example, given the words 'be sure', it should decode 'to review the contributing guidelines'. However, t2t-decoder does not give the right results when the model is a language model. Same issue: #884
I have dug into the source code and found two minor issues:
- the demo problem `languagemodel_ptb10k` generates a vocabulary file in which the word `the` has id 0, so `<pad>`'s id is 1 and `<EOS>`'s id is 2. This line therefore passes the wrong `eos_id` to the `beam_search` decoding process, which results in a wrong terminal state. https://github.com/tensorflow/tensor2tensor/blob/1de75bda4bd4c98ca50bcdbcf5e94b388bf9a044/tensor2tensor/models/transformer.py#L812
- the language model problem has only `targets`, so when the model decodes those target words, they are stripped; see this line:
https://github.com/tensorflow/tensor2tensor/blob/57444300243f068bad88eb5ed51a9793c4bde172/tensor2tensor/models/transformer.py#L442 . However, in preprocessing, `<EOS>` is automatically added to the `targets`, so the model then always decodes `<pad>` after `<EOS>`. Thus nothing is output. | open | 2018-08-23T10:55:29Z | 2018-10-26T15:46:34Z | https://github.com/tensorflow/tensor2tensor/issues/1013 | [] | Chanrom | 1 |
django-oscar/django-oscar | django | 3,619 | TypeError creating voucher without end-date | In Oscar 3.0 beta/master, creating a coupon/voucher without specifying an end date results in
> TypeError: '>' not supported between instances of 'datetime.datetime' and 'NoneType' | closed | 2021-01-23T07:57:23Z | 2021-03-11T02:37:00Z | https://github.com/django-oscar/django-oscar/issues/3619 | [
"☛ Needs more info"
] | jayvdb | 4 |
twopirllc/pandas-ta | pandas | 294 | tsignals generating wrong signals with more than 2 indicators in strategy | **Which version are you running? The latest version is on Github. Pip is for major releases.**
```
pandas-ta=0.2.75b
```
**Describe the bug**
The tsignals indicator gives a few wrong trade entries/exits when multiple indicators are used. I tried MACD with two SMAs, and the results vary from the chart.
**To Reproduce**
```python
# dump the attached csv file (it has a close column)
dump_df #with strategy applied data
cnd = (dump_df['MACD_13_21_8'] >= dump_df['MACDs_13_21_8']) & (dump_df['close'] >= dump_df['SMA_13']) & (dump_df['close'] >= dump_df['SMA_21'])
dump_df.ta.tsignals(trend=cnd, append=True)
```
**Expected behavior**
The column generated through np.where in the attached sheet has the correct trades. tsignals should match the same values.
```
e.g. since it is an AND condition, the final signal (s) should only be valid if all the indicator signals are the same
s = (s_1 & s_2 & s_3)
```
**Additional context**
Note: the problem occurs with more than 2 indicators in a strategy. I generated the actual signals through np.where with the condition below, producing column s from columns s_0, s_1, s_2, which are the signals for each indicator respectively. This gives the expected result.
```python
dump_df['signal'] = np.where((dump_df['s_1'].astype(int) == dump_df['s_0'].astype(int)) & (dump_df['s_2'].astype(int) == dump_df['s_1'].astype(int)), dump_df['s_2'], 0)
```
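For reference, the entries/exits that tsignals should produce are derivable from the boolean trend alone; here is a minimal pandas-only sketch (toy trend values, independent of pandas-ta) of that derivation:

```python
import pandas as pd

# toy boolean trend, standing in for the AND of the three indicator conditions
trend = pd.Series([False, True, True, False, False, True])

trades = trend.astype(int).diff().fillna(0)  # +1 on the entry bar, -1 on the exit bar
entries = trades == 1
exits = trades == -1

print(trades.tolist())  # [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]
```

Any bar where the combined condition flips from False to True should be an entry, and True to False an exit; a mismatch against this would confirm the bug.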
```
S* = 0 (No Trade)
S* = 1 (Buy Trade) #in tsignal terminology entry
S* = -1 (Short Trade) #in tsignal terminology exit
```
Thanks in advance !!
[test-signal.xlsx](https://github.com/twopirllc/pandas-ta/files/6526970/test-signal.xlsx)
| open | 2021-05-22T19:14:54Z | 2024-07-15T15:32:02Z | https://github.com/twopirllc/pandas-ta/issues/294 | [
"bug",
"good first issue"
] | codesutras | 10 |
ray-project/ray | deep-learning | 51,506 | CI test windows://python/ray/tests:test_multi_tenancy is consistently_failing | CI test **windows://python/ray/tests:test_multi_tenancy** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_multi_tenancy-END
Managed by OSS Test Policy | closed | 2025-03-19T00:07:58Z | 2025-03-19T21:53:33Z | https://github.com/ray-project/ray/issues/51506 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
zalandoresearch/fashion-mnist | computer-vision | 174 | convolution network mean acc achieves 0.9765 |
```python
import argparse

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms


class ConvBlocck(nn.Module):
    def __init__(self, inchannel, outchannel, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(inchannel, outchannel, 1, 1),
            nn.BatchNorm2d(outchannel),
            nn.GELU(),
        )
        self.conv1 = nn.Sequential(
            nn.Conv2d(outchannel, outchannel, kernel_size=kernel_size,
                      padding=kernel_size // 2, stride=stride, groups=outchannel),
            nn.BatchNorm2d(outchannel),
            nn.GELU(),
        )
        self.kernel_size = kernel_size
        self.stride = stride

    def forward(self, x):
        out = self.conv(x)
        out = out + self.conv1(out)
        return out


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = ConvBlocck(1, 20, 5, 1)
        self.conv2 = ConvBlocck(20, 50, 5, 1)
        self.conv3 = nn.Sequential(
            ConvBlocck(50, 100, 5, 1),
            ConvBlocck(100, 100, 7, 1),
        )
        self.conv4 = nn.Sequential(
            ConvBlocck(100, 200, 5, 1),
            ConvBlocck(200, 200, 5, 1),
        )
        self.fc1 = nn.Linear(200, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.max_pool2d(x, 2, 2)
        x = self.conv2(x)
        x = F.max_pool2d(x, 2, 2)
        x = self.conv3(x)
        x = self.conv4(x)
        x = F.avg_pool2d(x, kernel_size=7, stride=1, padding=0)
        # import pdb; pdb.set_trace()
        x = x.view(-1, 200)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test(args, model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=128, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=140, metavar='N',
                        help='number of epochs to train (default: 10)')
    parser.add_argument('--lr', type=float, default=0.1, metavar='LR',
                        help='learning rate (default: 0.01)')
    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
                        help='SGD momentum (default: 0.5)')
    parser.add_argument('--no-cuda', action='store_true', default=False,
                        help='disables CUDA training')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true', default=False,
                        help='For Saving the current Model')
    args = parser.parse_args()

    use_cuda = not args.no_cuda and torch.cuda.is_available()
    torch.manual_seed(args.seed)
    device = torch.device("cuda" if use_cuda else "cpu")
    kwargs = {'num_workers': 4, 'pin_memory': True} if use_cuda else {}

    train_loader = torch.utils.data.DataLoader(
        datasets.FashionMNIST('./fashionmnist_data/', train=True, download=False,
                              transform=transforms.Compose([
                                  transforms.RandomCrop(28, padding=4),
                                  transforms.RandomHorizontalFlip(),
                                  transforms.ToTensor(),
                                  transforms.Normalize((0.1307,), (0.3081,)),
                                  # transforms.RandomErasing(p=0.5, scale=(0, 0.4), ratio=(0.5, 2), value='random'),
                              ])),
        batch_size=args.batch_size, shuffle=True, **kwargs)
    test_loader = torch.utils.data.DataLoader(
        datasets.FashionMNIST('./fashionmnist_data/', train=True, transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])),
        batch_size=args.test_batch_size, shuffle=True, **kwargs)

    model = Net().to(device)
    optimizer = optim.AdamW(model.parameters(), eps=1e-8, betas=(0.9, 0.99),
                            lr=5e-4, weight_decay=5e-2)
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 120], gamma=0.1)

    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(args, model, device, test_loader)

    if (args.save_model):
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == '__main__':
    main()
```
Just run it with `python -u main.py`.
Some log output:
Test set: Average loss: 0.0810, Accuracy: 58119/60000 (97%)
Train Epoch: 138 [0/60000 (0%)] Loss: 0.103322
Train Epoch: 138 [1280/60000 (2%)] Loss: 0.124087
Train Epoch: 138 [2560/60000 (4%)] Loss: 0.105898
Train Epoch: 138 [3840/60000 (6%)] Loss: 0.114831
Train Epoch: 138 [5120/60000 (9%)] Loss: 0.055228
Train Epoch: 138 [6400/60000 (11%)] Loss: 0.057790
Train Epoch: 138 [7680/60000 (13%)] Loss: 0.077030
Train Epoch: 138 [8960/60000 (15%)] Loss: 0.104552
Train Epoch: 138 [10240/60000 (17%)] Loss: 0.098626
Train Epoch: 138 [11520/60000 (19%)] Loss: 0.095885
Train Epoch: 138 [12800/60000 (21%)] Loss: 0.066495
Train Epoch: 138 [14080/60000 (23%)] Loss: 0.053589
Train Epoch: 138 [15360/60000 (26%)] Loss: 0.092867
Train Epoch: 138 [16640/60000 (28%)] Loss: 0.116169
Train Epoch: 138 [17920/60000 (30%)] Loss: 0.107934
Train Epoch: 138 [19200/60000 (32%)] Loss: 0.116899
Train Epoch: 138 [20480/60000 (34%)] Loss: 0.095697
Train Epoch: 138 [21760/60000 (36%)] Loss: 0.112671
Train Epoch: 138 [23040/60000 (38%)] Loss: 0.075007
Train Epoch: 138 [24320/60000 (41%)] Loss: 0.083380
Train Epoch: 138 [25600/60000 (43%)] Loss: 0.136541
Train Epoch: 138 [26880/60000 (45%)] Loss: 0.098393
Train Epoch: 138 [28160/60000 (47%)] Loss: 0.156382
Train Epoch: 138 [29440/60000 (49%)] Loss: 0.120168
Train Epoch: 138 [30720/60000 (51%)] Loss: 0.102728
Train Epoch: 138 [32000/60000 (53%)] Loss: 0.093192
Train Epoch: 138 [33280/60000 (55%)] Loss: 0.067673
Train Epoch: 138 [34560/60000 (58%)] Loss: 0.118263
Train Epoch: 138 [35840/60000 (60%)] Loss: 0.063559
Train Epoch: 138 [37120/60000 (62%)] Loss: 0.107007
Train Epoch: 138 [38400/60000 (64%)] Loss: 0.097562
Train Epoch: 138 [39680/60000 (66%)] Loss: 0.067643
Train Epoch: 138 [40960/60000 (68%)] Loss: 0.119229
Train Epoch: 138 [42240/60000 (70%)] Loss: 0.153711
Train Epoch: 138 [43520/60000 (72%)] Loss: 0.103719
Train Epoch: 138 [44800/60000 (75%)] Loss: 0.120675
Train Epoch: 138 [46080/60000 (77%)] Loss: 0.092273
Train Epoch: 138 [47360/60000 (79%)] Loss: 0.148049
Train Epoch: 138 [48640/60000 (81%)] Loss: 0.096311
Train Epoch: 138 [49920/60000 (83%)] Loss: 0.067373
Train Epoch: 138 [51200/60000 (85%)] Loss: 0.084663
Train Epoch: 138 [52480/60000 (87%)] Loss: 0.149150
Train Epoch: 138 [53760/60000 (90%)] Loss: 0.069273
Train Epoch: 138 [55040/60000 (92%)] Loss: 0.050591
Train Epoch: 138 [56320/60000 (94%)] Loss: 0.059370
Train Epoch: 138 [57600/60000 (96%)] Loss: 0.132310
Train Epoch: 138 [58880/60000 (98%)] Loss: 0.084755
Test set: Average loss: 0.0648, Accuracy: 58591/60000 (98%) | closed | 2021-05-19T23:56:26Z | 2023-02-22T09:53:11Z | https://github.com/zalandoresearch/fashion-mnist/issues/174 | [] | liubo0902 | 0 |
indico/indico | flask | 6,471 | Unable to save contribution `Timetable inconsistent: Entry ends after its parent block` | Indico 3.3.2
```
2024-08-07 13:30:06,701 28a52a0aba3d4e08 2 indico.flask - ERROR errors.py:110 -- Timetable inconsistent: Entry ends after its parent block
Traceback (most recent call last):
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1094, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit
dbapi_connection.commit()
psycopg2.DatabaseError: Timetable inconsistent
DETAIL: Entry ends after its parent block
CONTEXT: PL/pgSQL function events.check_timetable_consistency() line 76 at RAISE
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/project/initindico/home/.venv/lib/python3.12/site-packages/indico/web/rh.py", line 303, in process
db.session.commit()
File "<string>", line 2, in commit
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 1454, in commit
self._transaction.commit(_to_root=self.future)
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 839, in commit
trans.commit()
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2469, in commit
self._do_commit()
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2659, in _do_commit
self._connection_commit_impl()
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2630, in _connection_commit_impl
self.connection._commit_impl()
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1096, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
util.raise_(
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1094, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/project/initindico/home/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 686, in do_commit
dbapi_connection.commit()
sqlalchemy.exc.DatabaseError: (psycopg2.DatabaseError) Timetable inconsistent
DETAIL: Entry ends after its parent block
CONTEXT: PL/pgSQL function events.check_timetable_consistency() line 76 at RAISE
(Background on this error at: https://sqlalche.me/e/14/4xp6)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/project/initindico/home/.venv/lib/python3.12/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/project/initindico/home/.venv/lib/python3.12/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/project/initindico/home/.venv/lib/python3.12/site-packages/indico/web/flask/util.py", line 80, in wrapper
return obj().process()
^^^^^^^^^^^^^^^
File "/project/initindico/home/.venv/lib/python3.12/site-packages/indico/web/rh.py", line 309, in process
handle_sqlalchemy_database_error() # this will re-raise an exception
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/project/initindico/home/.venv/lib/python3.12/site-packages/indico/core/db/sqlalchemy/core.py", line 59, in handle_sqlalchemy_database_error
raise ConstraintViolated(msg, exc.orig) from exc
indico.core.db.sqlalchemy.core.ConstraintViolated: Timetable inconsistent: Entry ends after its parent block
{'data': {'get': {'day': '2024/09/25', 'session_block_id': '4'},
'headers': {'Accept': 'application/json, text/javascript, */*; '
'q=0.01',
'Accept-Encoding': 'gzip, deflate, br, zstd',
'Accept-Language': 'en-US,en;q=0.5',
'Content-Length': '327',
'Content-Type': 'application/x-www-form-urlencoded; '
'charset=UTF-8',
'Cookie': 'indico_session=XXX',
'Host': 'init-events.molgen.mpg.de',
'Origin': 'https://init-events.molgen.mpg.de/',
'Priority': 'u=0',
'Referer': 'https://init-events.molgen.mpg.de/event/1/manage/timetable/',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin',
'Te': 'trailers',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:131.0) '
'Gecko/20100101 Firefox/131.0',
'X-Csrf-Token': '13263da9-38bd-413e-911c-40e5c0ec10e0',
'X-Requested-With': 'XMLHttpRequest'},
'json': None,
'post': {'board_number': '',
'code': '',
'csrf_token': '13263da9-38bd-413e-911c-40e5c0ec10e0',
'description': '',
'duration': '52200',
'location_data': '{"address":"","inheriting":true,"venue_id":1,"venue_name":"Faßberg-Campus"}',
'person_link_data': '[]',
'references': '[]',
'time': '13:30',
'title': 'Wanderung zum Faßberg',
'type': '9'},
'url': {'event_id': 1}},
'endpoint': 'timetable.add_contribution',
'id': '28a52a0aba3d4e08',
'ip': 'XXX',
'method': 'POST',
'referrer': 'https://init-events.molgen.mpg.de/event/1/manage/timetable/',
'rh': 'RHLegacyTimetableAddContribution',
'time': '2024-08-07T13:30:06.835863',
'url': 'https://init-events.molgen.mpg.de/event/1/manage/timetable/add-contribution?day=2024/09/25&session_block_id=4',
'user': {'email': '[pmenzel@molgen.mpg.de](mailto:pmenzel@molgen.mpg.de)', 'id': 2, 'name': 'Paul Menzel'},
'user_agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:131.0) Gecko/20100101 '
'Firefox/131.0'}
``` | open | 2024-08-07T13:49:19Z | 2024-08-07T13:53:15Z | https://github.com/indico/indico/issues/6471 | [
"bug"
] | paulmenzel | 3 |
d2l-ai/d2l-en | machine-learning | 2,440 | Wrong Epanechnikov kernel | Chapter 11.2: it is a triangular kernel. | open | 2023-02-12T15:50:31Z | 2023-02-12T15:50:31Z | https://github.com/d2l-ai/d2l-en/issues/2440 | [] | yongduek | 0 |
huggingface/datasets | pytorch | 6,977 | load json file error with v2.20.0 | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = paj.read_json(
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1997, in _prepare_split_single
for _, table in generator:
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 155, in _generate_tables
df = pd.read_json(f, dtype_backend="pyarrow")
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/t1.py", line 11, in <module>
load_dataset(path=data_path, data_files="./t2.json")
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2616, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1029, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1124, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1884, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 2040, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
```
import pandas as pd
with open("./test.json", "r") as f:
df = pd.read_json(f, dtype_backend="pyarrow")
```
```
Traceback (most recent call last):
File "/app/t3.py", line 3, in <module>
df = pd.read_json(f, dtype_backend="pyarrow")
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 211, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
TypeError: read_json() got an unexpected keyword argument 'dtype_backend'
```
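For what it's worth, my guess (an assumption, not confirmed anywhere in this report): the `dtype_backend` keyword only exists in `pandas.read_json` from pandas 2.0 on, while the environment info below reports pandas 1.5.3. A version-guard sketch (`supports_dtype_backend` is a made-up helper, not a datasets or pandas API):

```python
def supports_dtype_backend(pandas_version: str) -> bool:
    # assumption: the dtype_backend keyword was added in pandas 2.0
    return int(pandas_version.split(".")[0]) >= 2

kwargs = {"dtype_backend": "pyarrow"} if supports_dtype_backend("1.5.3") else {}
print(kwargs)  # {}
# df = pd.read_json(f, **kwargs)  # would avoid the TypeError on older pandas
```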
### Steps to reproduce the bug
.
### Expected behavior
.
### Environment info
```
datasets 2.20.0
pandas 1.5.3
``` | closed | 2024-06-18T08:41:01Z | 2024-06-18T10:06:10Z | https://github.com/huggingface/datasets/issues/6977 | [] | xiaoyaolangzhi | 2 |
gee-community/geemap | jupyter | 1,288 | Support Python 3.11 | [Python 3.11](https://docs.python.org/3.11/whatsnew/3.11.html) is now in release candidate phase and will be released in two weeks. Support is ramping up, both in the scientific suite (with NumPy, Pandas, SciPy, Matplotlib and Seaborn already supporting 3.11, among others), as well in the geospatial suite (with Shapely CI is passing and Fiona working on it). It would be great if geemap also supported Python 3.11. | closed | 2022-10-10T11:31:33Z | 2022-10-11T15:04:05Z | https://github.com/gee-community/geemap/issues/1288 | [
"Feature Request"
] | EwoutH | 5 |
klen/mixer | sqlalchemy | 38 | Exceptions thrown from related model being represented as coming from the relationship property itself. | I came across an error when using Mixer, but the stack trace sent me in the wrong direction at first. The trace indicated there was a problem processing a SQLAlchemy relationship definition. I eventually discovered that the error wasn't on the relationship itself, but one of the columns in the related table was creating issues. The misdirection in this stack trace made it very difficult to debug.
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 576, in blend
return type_mixer.blend(**values)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 125, in blend
for name, value in defaults.items()
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 125, in <genexpr>
for name, value in defaults.items()
File "/usr/local/lib/python2.7/site-packages/mixer/mix_types.py", line 220, in gen_value
return type_mixer.gen_field(field)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 202, in gen_field
return self.gen_value(field.name, field, unique=unique)
File "/usr/local/lib/python2.7/site-packages/mixer/main.py", line 255, in gen_value
field_name, self.__scheme.__name__, exc))
ValueError: Mixer (myproject.models.Order): Generation for customer (Order) has been stopped. Exception: 'NoneType' object has no attribute '__bases__'
```
That last line should read something like this, with perhaps a reference that it was coming from `myproject.models.Order`:
```
ValueError: Mixer (myproject.models.Customer): Generation for time_created (Customer) has been stopped. Exception: 'NoneType' object has no attribute '__bases__'
```
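A toy reproduction of the misdirection (all names hypothetical, not mixer's actual internals): the field-level failure is caught and re-raised with the parent scheme's field name, so the related model's column never shows up in the message:

```python
def gen_field(scheme, field):
    try:
        # imagine this AttributeError actually originates in Customer.time_created
        raise AttributeError("'NoneType' object has no attribute '__bases__'")
    except Exception as exc:
        # the wrapper names the relationship field, hiding the real source
        raise ValueError(
            "Mixer ({0}): Generation for {1} ({0}) has been stopped. "
            "Exception: {2}".format(scheme, field, exc)
        ) from exc

try:
    gen_field("Order", "customer")
except ValueError as e:
    message = str(e)

print(message)  # mentions 'customer (Order)', never 'time_created (Customer)'
```

Including the originating model and column in the wrapped message would make these errors much easier to trace.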
| closed | 2015-02-06T09:44:16Z | 2017-08-17T22:08:26Z | https://github.com/klen/mixer/issues/38 | [] | wolverdude | 2 |
inventree/InvenTree | django | 8,354 | Not permitted to add/modify comment on attachment in part | ### Please verify that this bug has NOT been raised before.
- [x] I checked and didn't find a similar issue
### Describe the bug*
When trying to modify the comment of an attachment in part I get this error:

also is not permitted to delete the attachment it tell me that is prohibited (the user is admin)
### Steps to Reproduce
1. go to attachments in a selected part
2. press edit
3. add a comment
4. press submit
### Expected behaviour
Pressing submit should apply the updated data
### Deployment Method
- [x] Docker
- [ ] Package
- [ ] Bare metal
- [ ] Other - added info in Steps to Reproduce
### Version Information
# Version Information:
InvenTree-Version: 0.16.5
Django Version: 4.2.15
Commit Hash: 6e37f0c
Commit Date: 2024-10-07
Database: postgresql
Debug-Mode: False
Deployed using Docker: True
Platform: Linux-5.15.0-124-generic-x86_64-with
Installer: DOC
Active plugins: [{'name': 'InvenTreeBarcode', 'slug': 'inventreebarcode', 'version': '2.1.0'}, {'name': 'InvenTreeCoreNotificationsPlugin', 'slug': 'inventreecorenotificationsplugin', 'version': '1.0.0'}, {'name': 'InvenTreeCurrencyExchange', 'slug': 'inventreecurrencyexchange', 'version': '1.0.0'}, {'name': 'InvenTreeLabel', 'slug': 'inventreelabel', 'version': '1.1.0'}, {'name': 'InvenTreeLabelMachine', 'slug': 'inventreelabelmachine', 'version': '1.0.0'}, {'name': 'InvenTreeLabelSheet', 'slug': 'inventreelabelsheet', 'version': '1.0.0'}, {'name': 'DigiKeyPlugin', 'slug': 'digikeyplugin', 'version': '1.0.0'}, {'name': 'LCSCPlugin', 'slug': 'lcscplugin', 'version': '1.0.0'}, {'name': 'MouserPlugin', 'slug': 'mouserplugin', 'version': '1.0.0'}, {'name': 'TMEPlugin', 'slug': 'tmeplugin', 'version': '1.0.0'}, {'name': 'KiCadLibraryPlugin', 'slug': 'kicad-library-plugin', 'version': '1.4.3'}]
### Please verify if you can reproduce this bug on the demo site.
- [ ] I can reproduce this bug on the demo site.
### Relevant log output
_No response_ | closed | 2024-10-24T09:12:09Z | 2024-10-29T07:42:47Z | https://github.com/inventree/InvenTree/issues/8354 | [
"bug",
"question"
] | simoneamadori | 12 |
huggingface/datasets | nlp | 6,603 | datasets map `cache_file_name` does not work | ### Describe the bug
In the documentation, the `datasets.Dataset.map` argument `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
It should tell you that the filename you specified does not exist, or it should generate a new file and tell you that the filename did not exist.
### Environment info
- `datasets` version: 2.16.0
- Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.12.2 | open | 2024-01-18T23:08:30Z | 2024-01-28T04:01:15Z | https://github.com/huggingface/datasets/issues/6603 | [] | ChenchaoZhao | 2 |
datadvance/DjangoChannelsGraphqlWs | graphql | 27 | Cannot access info.context.scope | Accessing info.context.scope throws an attribute error.
But I read in the changelog:
> Changed
> Channels scope is now stored in info.context.scope as dict. (Previously info.context was a copy of scope wrapped into the types.SimpleNamespace). The thing is the GraphQL info.context and Channels scope are different things. The info.context is a storage for a single GraphQL operation, while scope is a storage for the whole WebSocket connection. For example now use info.context.scope["user"] to get access to the Django user model.
Is there a special circumstance in which I can use info.context.scope? | closed | 2019-10-04T05:08:18Z | 2020-04-04T22:45:03Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/27 | [] | ghost | 3 |
nvbn/thefuck | python | 467 | Fuck doesn't work on Fedora 22 | > [@localhost ~] $ pwf
> bash: pwf: command not found...
> [@localhost ~] $ fuck
> pwd [enter/↑/↓/ctrl+c]
> [@localhost ~] $
Just nothing happens.
I've installed via install script and checked all needed aliases for `bash`. They are ok.
Any ideas?
| closed | 2016-02-23T16:06:10Z | 2016-04-13T20:16:42Z | https://github.com/nvbn/thefuck/issues/467 | [] | kiddten | 5 |
litestar-org/litestar | pydantic | 2,992 | Enhancement: Multiple TestClients / explicit test app running | ### Summary
Sometimes, it would be nice to initialize multiple `TestClient`s, but only do the blocking portal magic once. The request routing magic lives in `TestClient(Transport)`, so simply copying the `base_url` to a `httpx.Client` won't work (found out by trying).
An example where this could be needed is testing the API from the perspective of 2 different users. Using just one client might not be feasible due to cookies that the server could set (they should be different for the two users), and in general it would be nice to keep the two separated.
```py
client = TestClient(app=app)
client.get(..., auth=user1_auth) # requires also...
client.get(..., auth=user2_auth) # ...this repetition
```
### Basic Example
Maybe something like:
```py
with create_portal(app) as portal:
user1 = TestClient(portal=portal, auth=...)
user2 = TestClient(portal=portal, auth=...)
user1.get(...)
user2.get(...)
```
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | open | 2024-01-17T21:20:25Z | 2025-03-20T15:54:20Z | https://github.com/litestar-org/litestar/issues/2992 | [
"Enhancement",
"Needs MCVE"
] | mtvx | 4 |
dmlc/gluon-cv | computer-vision | 1,709 | VideoClsCustom can't load dataset with heterogeneous frame naming pattern | I am trying to execute the **Fine-tuning SOTA video models on your own dataset** tutorial with a custom dataset that collects videos from different datasets like UCF101 and HMDB51.
The videos are already decoded into frames. However, the frames do not have a common naming pattern: some videos have frames named "frameXXX.jpeg" and others simply "XXX.jpeg". With such a dataset, it is impossible to use `VideoClsCustom`.
The [class doc](https://github.com/dmlc/gluon-cv/blob/master/gluoncv/data/video_custom/classification.py) states:
```
name_pattern : str, default None.
The naming pattern of the decoded video frames.
For example, img_00012.jpg.
```
However, `name_pattern` is initialized by default to `'img_%05d.jpg'`. Thus, `VideoClsCustom` does not work if the naming differs from the default and `name_pattern` is not set. Moreover, it does not work with a heterogeneous naming pattern like mine. | closed | 2021-10-16T18:37:54Z | 2021-12-08T20:46:25Z | https://github.com/dmlc/gluon-cv/issues/1709 | [] | seraogianluca | 2 |
akfamily/akshare | data-science | 5,564 | Scale data for some funds cannot be obtained | With akshare version 1.15.80:
Problem: the scale (assets under management) data of some funds still cannot be obtained.
Part of them are treasury-bond ETFs, e.g. '511130', the 30-year treasury bond ETF, with a current scale of about 3 billion CNY.
Others are commodity ETFs, e.g. '159985', the soybean meal ETF, with a current scale of about 4 billion CNY.
1. The API with the problem:
`fund_individual_basic_info_xq`

It is probably the Xueqiu API that is broken; this ETF still exists and is tradable.

2. A possible workaround already in akshare: `ak.fund_scale_open_sina(symbol)`
symbol="股票型基金"; choice of {"股票型基金", "混合型基金", "债券型基金", "货币型基金", "QDII基金"} (equity / hybrid / bond / money-market / QDII funds).
However, commodity ETF data is still missing.
3. Expected result:
Scrape from f"https://fundf10.eastmoney.com/jbgk_{'symbols'}.html".
Reference code:
```python
import re

import requests
from bs4 import BeautifulSoup

url = f"https://fundf10.eastmoney.com/jbgk_{'511130'}.html"
r = requests.get(url)
r1 = BeautifulSoup(r.text, 'html.parser')
r2 = r1.find_all('label')[8]
re.findall(pattern=r'\d+\.\d+.*元', string=r2.text)[0]
```


I don't have a formal background in this field, so please excuse anything that doesn't follow conventions.
| closed | 2025-02-04T14:38:05Z | 2025-02-05T12:33:40Z | https://github.com/akfamily/akshare/issues/5564 | [
"bug"
] | adsxadsx | 1 |
tflearn/tflearn | data-science | 868 | IndexError: list index out of range | Hi, I have been having issues similar to #360 and #408. I am attempting to classify a single pixel into one of three categories. All solutions provided to #360 and #408 do not solve my problem. I restarted, added the reset_default_graph(), and in the second section of code converted the data to a list.
**Original problem:**
>>> net = tflearn.input_data(shape=[None,1, 3])
>>> net = tflearn.lstm(net, 9)
>>> net = tflearn.fully_connected(net, 3, activation='softmax')
>>> net = tflearn.regression(net, optimizer='adam',
... loss='categorical_crossentropy', name="output1")
>>> model = tflearn.DNN(net, tensorboard_verbose=2)
>>> model.fit(data, labels, n_epoch=5, validation_set=0.1, show_metric=True,snapshot_step=100)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/models/dnn.py", line 183, in fit
self.targets)
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/utils.py", line 283, in feed_dict_builder
feed_dict[net_inputs[i]] = x
IndexError: list index out of range
**Attempt after converting to lists:**
>>> model.fit(X, Y, n_epoch=5, validation_set=0.1, show_metric=True,snapshot_step=100)
---------------------------------
Run id: ORFXDS
Log directory: /tmp/tflearn_logs/
Exception in thread Thread-8:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/data_flow.py", line 187, in fill_feed_dict_queue
data = self.retrieve_data(batch_ids)
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/data_flow.py", line 222, in retrieve_data
utils.slice_array(self.feed_dict[key], batch_ids)
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/utils.py", line 187, in slice_array
return X[start]
IndexError: index 1091614 is out of bounds for axis 0 with size 1536
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/models/dnn.py", line 215, in fit
callbacks=callbacks)
File "/Library/Frameworks/Python.framework/Versions/Current/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 281, in fit
vd = val_feed_dicts[i] if val_feed_dicts else None
IndexError: list index out of range
Is there an easy solution here?
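The second traceback hints that the first dimensions of X and Y disagree (X has 1536 rows while the sampled batch ids go up to ~1.09M, i.e. the label array is far longer). A stdlib-only sanity check before calling `fit` can catch this; the helper below is a generic sketch, not tflearn API, and the sample data is illustrative:

```python
def nested_shape(x):
    """Return the shape of a nested list, e.g. [[1, 2, 3]] -> (1, 3)."""
    shape = []
    while isinstance(x, (list, tuple)):
        shape.append(len(x))
        x = x[0] if x else None
    return tuple(shape)

X = [[[0.1, 0.2, 0.3]]] * 4  # 4 samples shaped (1, 3) for input_data([None, 1, 3])
Y = [[1, 0, 0]] * 4          # 4 one-hot labels over 3 classes

assert nested_shape(X) == (4, 1, 3)
assert nested_shape(Y) == (4, 3)
assert nested_shape(X)[0] == nested_shape(Y)[0]  # same number of samples
```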
Thank you
| open | 2017-08-10T17:03:23Z | 2018-01-16T20:05:31Z | https://github.com/tflearn/tflearn/issues/868 | [] | abarrington | 5 |
hpcaitech/ColossalAI | deep-learning | 6,171 | [FEATURE]: Support for Large Parameter Models | ### Describe the feature
Hello, I would like to ask if there are plans to support larger parameter models like Llama-405B in future versions? | closed | 2024-12-25T06:54:23Z | 2024-12-25T06:56:11Z | https://github.com/hpcaitech/ColossalAI/issues/6171 | [
"enhancement"
] | huangmengasd | 0 |
deepset-ai/haystack | machine-learning | 8,862 | Improve Type Validation in Pipelines: Configurable Strictness and Errors vs. Warnings | **Is your feature request related to a problem? Please describe.**
Currently, Haystack enforces strict type checking for pipeline connection validation, meaning users cannot run a pipeline if their type annotations do not align exactly with the expected types. While this validation is intended to help users catch potential issues early, it can be overly restrictive—especially for advanced users—leading to unintuitive errors and forcing workarounds like bypassing the pipeline run method. Additionally, the current implementation does not allow users to configure the strictness level, and it is unclear how best to align with best practices from other Python libraries like Pydantic, FastAPI, or Typer.
**Describe the solution you'd like**
Introduce configurable options for type validation in pipeline connections:
1. **Strict vs. lax type comparison** – Allow users to choose whether type checking should be strict (e.g., `Optional[str] → str` fails) or more permissive (e.g., `Optional[str] → str` passes).
2. **Error vs. warning vs. disable option** – Give users the ability to configure whether type validation should raise an error, issue a warning, or be disabled entirely.
3. **Alignment with broader ecosystem** – Investigate how established Python libraries handle similar type validation scenarios and determine if there are best practices or patterns that Haystack should adopt.
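A minimal sketch of what configurable strictness could look like, using only the stdlib `typing` helpers (this is illustrative, not Haystack's actual connection-validation code):

```python
from typing import Optional, Union, get_args, get_origin

def compatible(sender, receiver, strict=True):
    """Return True if a value typed `sender` may feed an input typed `receiver`."""
    if sender == receiver:
        return True
    if not strict and get_origin(sender) is Union:
        # Permissive mode: Optional[T] (i.e. Union[T, None]) may feed T
        # as long as every non-None member is itself compatible.
        members = [a for a in get_args(sender) if a is not type(None)]
        return all(compatible(m, receiver, strict=False) for m in members)
    return False

print(compatible(Optional[str], str))                # False (strict default)
print(compatible(Optional[str], str, strict=False))  # True
```

The error/warning/disable switch could then wrap this check, raising on mismatch, calling `warnings.warn`, or skipping validation entirely.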
**Additional context**
Looser type validation (e.g., allowing `Optional[str]` to be passed where `str` is expected) can make Haystack more user-friendly while still providing helpful validation for common mistakes. Making type checking configurable ensures flexibility for different use cases, from beginner-friendly strict validation to more advanced, customizable behavior.
Also related to these issues raised by the community
- https://github.com/deepset-ai/haystack/issues/8524
- https://github.com/deepset-ai/haystack/issues/8494
cc @mathislucka who made a more permissive version of the type checker in haystack-experimental when creating SuperComponents | closed | 2025-02-14T13:33:49Z | 2025-03-03T15:11:44Z | https://github.com/deepset-ai/haystack/issues/8862 | [
"P1"
] | sjrl | 0 |
iperov/DeepFaceLab | deep-learning | 834 | How to reproduce quantitative results in your paper? | Thank you very much for the great work!
I would like to reproduce quantitative results in your paper, and I have some questions as below:
Dataset:
you described "To be statistically significant, we compute the mean and variance of those measurements on the 100 frames (uniform sampling over time) of the first 500 videos in FaceForensics++"
**Question 1 : As I know, your model support faceswap from src to dst (one face pair at a time). How to feed multi-face pairs into the training model?**
Training setting:
you described "It should be noted that all the videos produced by DeepFaceLab were follow by the same settings with 5.2."
**Question 2: How to make it "The average training time is restrict within 3 hours" in 5.2? (It might be more clear to me if I know the answer of question 1)**
Result:
**Question 3: Table 1 in your paper shows the quantitative face swapping results on FaceForensics++. Could you tell me how those metrics are calculated: SSIM, perceptual loss, verification, landmarks, pose? Or is it possible to release the code so we can reproduce the experiments in your paper?**
Looking forward to your reply! Thank you!
Joanne | open | 2020-07-15T00:06:02Z | 2023-06-08T23:11:26Z | https://github.com/iperov/DeepFaceLab/issues/834 | [] | JOANNECYSHEN | 2 |
marimo-team/marimo | data-science | 3,492 | WASM exports produce error when trying to serve locally | ### Describe the bug
The wasm is exported without error, but I get the following error when trying to serve locally:
>
> Something went wrong
> Traceback (most recent call last): File "/lib/python312.zip/_pyodide/_base.py", line 523, in eval_code .run(globals, locals) ^^^^^^^^^^^^^^^^^^^^ File "/lib/python312.zip/_pyodide/_base.py", line 357, in run coroutine = eval(self.code, globals, locals) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<exec>", line 5, in <module> File "/lib/python3.12/site-packages/marimo/__init__.py", line 88, in <module> import marimo._islands as islands File "/lib/python3.12/site-packages/marimo/_islands/__init__.py", line 9, in <module> from marimo._islands.island_generator import ( File "/lib/python3.12/site-packages/marimo/_islands/island_generator.py", line 15, in <module> from marimo._output.formatting import as_html, mime_to_html File "/lib/python3.12/site-packages/marimo/_output/formatting.py", line 35, in <module> from marimo._plugins.stateless.json_output import json_output File "/lib/python3.12/site-packages/marimo/_plugins/stateless/json_output.py", line 7, in <module> from marimo._plugins.core.web_component import JSONType, build_stateless_plugin File "/lib/python3.12/site-packages/marimo/_plugins/core/web_component.py", line 26, in <module> from marimo._output.md import _md File "/lib/python3.12/site-packages/marimo/_output/md.py", line 7, in <module> import markdown # type: ignore ^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'markdown' The module 'markdown' is included in the Pyodide distribution, but it is not installed. You can install it by calling: await micropip.install("markdown") in Python, or await pyodide.loadPackage("markdown") in JavaScript See https://pyodide.org/en/stable/usage/loading-packages.html for more details.
### Environment
<details>
```
C:\Users\FelixGeorge>marimo env
{
"marimo": "0.10.13",
"OS": "Windows",
"OS Version": "11",
"Processor": "Intel64 Family 6 Model 186 Stepping 2, GenuineIntel",
"Python Version": "3.12.4",
"Binaries": {
"Browser": "132.0.6834.83",
"Node": "--"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.1",
"markdown": "3.7",
"narwhals": "1.22.0",
"packaging": "24.1",
"psutil": "6.0.0",
"pygments": "2.18.0",
"pymdown-extensions": "10.14",
"pyyaml": "6.0.2",
"ruff": "0.9.1",
"starlette": "0.45.2",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {
"pandas": "2.2.2",
"pyarrow": "18.1.0"
}
}
```
</details>
### Code to reproduce
_No response_ | open | 2025-01-18T07:17:36Z | 2025-03-13T20:37:13Z | https://github.com/marimo-team/marimo/issues/3492 | [
"bug",
"cannot reproduce"
] | fgeorgepar | 7 |
graphql-python/graphene | graphql | 948 | Execute without case conversions | I am trying to execute a query without having the result turned into camel case and enums turned into upper case. I was able to write a utility to convert keys back to snake case, but I can't do the same for enums, since I would not know what is an enum and what is not.
Is there a way to execute a query without those case conversions? (This would be just for some queries, not for all.)
Another option is somehow feeding the query result to some case "restorer", but I looked into the code and could not find a good example of that. | closed | 2019-04-23T12:18:33Z | 2020-05-21T00:26:56Z | https://github.com/graphql-python/graphene/issues/948 | [] | yardensachs | 4
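For the key-conversion half of the problem, a generic "restorer" is straightforward; a stdlib-only sketch (depending on your graphene version, `Schema(..., auto_camelcase=False)` may also disable the conversion at the source, which is worth checking first):

```python
import re

def to_snake(name: str) -> str:
    # firstName -> first_name (simple camelCase; acronyms split per letter)
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def snakeify(obj):
    """Recursively convert dict keys back to snake_case."""
    if isinstance(obj, dict):
        return {to_snake(k): snakeify(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [snakeify(v) for v in obj]
    return obj

print(snakeify({"userId": 1, "orderItems": [{"itemName": "x"}]}))
# {'user_id': 1, 'order_items': [{'item_name': 'x'}]}
```

As the issue notes, enum values cannot be restored this way: after serialization they are indistinguishable from plain strings, so only the schema layer can avoid upper-casing them.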
SciTools/cartopy | matplotlib | 2,413 | [TST] Upcoming dependency test failures | The build with nightly wheels from matplotlib, scipy, shapely and their
dependencies has failed. Check the logs for any updates that need to be
made in cartopy.
https://github.com/SciTools/cartopy/actions/runs/9866018660 | closed | 2024-07-10T00:24:33Z | 2024-07-11T20:09:47Z | https://github.com/SciTools/cartopy/issues/2413 | [] | github-actions[bot] | 3 |
tqdm/tqdm | pandas | 1,049 | Pylint crashes after 4.48.0 | - [ ] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
See also here: https://github.com/PyTorchLightning/pytorch-lightning/issues/4039
The minimal example is mention here: https://github.com/PyTorchLightning/pytorch-lightning/issues/4039#issuecomment-706568078 and below:
```python
from tqdm.auto import tqdm # pylint error only with auto
bar = tqdm()
bar.total = 1
```
With the above example, pylint crashed:
<details>
<pre>
<code>
Traceback (most recent call last):
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/decorators.py", line 32, in cached
return cache[func]
KeyError: <bound method ClassDef.slots of <ClassDef.tqdm l.31 at 0x7f31996d9940>>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/pylint/lint/pylinter.py", line 1031, in get_ast
return MANAGER.ast_from_file(filepath, modname, source=True)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/manager.py", line 98, in ast_from_file
return AstroidBuilder(self).file_build(filepath, modname)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/builder.py", line 138, in file_build
return self._post_build(module, encoding)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/builder.py", line 158, in _post_build
self.delayed_assattr(delayed)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/builder.py", line 234, in delayed_assattr
if not _can_assign_attr(inferred, node.attrname):
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/builder.py", line 59, in _can_assign_attr
slots = node.slots()
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/decorators.py", line 34, in cached
cache[func] = result = func(*args, **kwargs)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/scoped_nodes.py", line 2833, in slots
slots = list(grouped_slots())
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/scoped_nodes.py", line 2818, in grouped_slots
for cls in self.mro()[:-1]:
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/scoped_nodes.py", line 2904, in mro
return self._compute_mro(context=context)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/scoped_nodes.py", line 2894, in _compute_mro
return _c3_merge(unmerged_mro, self, context)
File "/home/pwwang/.cache/pypoetry/virtualenvs/prog-taq1idOW-py3.8/lib/python3.8/site-packages/astroid/scoped_nodes.py", line 83, in _c3_merge
raise exceptions.InconsistentMroError(
astroid.exceptions.InconsistentMroError: Cannot create a consistent method resolution order for MROs (tqdm, Comparable, object), (tqdm_asyncio, tqdm, Comparable, object), (tqdm, tqdm_asyncio) of class <ClassDef.tqdm l.31 at 0x7f31996d9940>.
************* Module tqdm_pylint_error
tqdm_pylint_error.py:1:0: F0002: <class 'astroid.exceptions.InconsistentMroError'>: Cannot create a consistent method resolution order for MROs (tqdm, Comparable, object), (tqdm_asyncio, tqdm, Comparable, object), (tqdm, tqdm_asyncio) of class <ClassDef.tqdm l.31 at 0x7f31996d9940>. (astroid-error)
</code>
</pre>
</details>
Environment:
- tqdm: 4.50.2
- pylint: 2.6.0
- python: 3.8.3 (default, May 19 2020, 18:47:26)
- Platform: [GCC 7.3.0] linux
I have tested version 4.48.0 and earlier, which worked fine.
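The inconsistent-MRO part of the crash can be reproduced in plain Python with stand-in classes: astroid appears to infer `tqdm.auto.tqdm` with a base order that contradicts `tqdm_asyncio`'s own inheritance, which Python itself also rejects (class names below are illustrative, not the real tqdm classes):

```python
class Comparable: ...
class StdTqdm(Comparable): ...
class TqdmAsyncio(StdTqdm): ...

try:
    # A base order of (StdTqdm, TqdmAsyncio) contradicts
    # TqdmAsyncio's own inheritance from StdTqdm:
    class AutoTqdm(StdTqdm, TqdmAsyncio): ...
except TypeError as exc:
    print(type(exc).__name__)  # TypeError: inconsistent method resolution order
```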
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2020-10-12T15:57:44Z | 2020-10-12T17:18:49Z | https://github.com/tqdm/tqdm/issues/1049 | [
"need-feedback 📢",
"submodule ⊂"
] | pwwang | 1 |
MentatInnovations/datastream.io | jupyter | 29 | Not allowed to use more than 1-dimensional data | Does this work with multivariable anomaly detection? Doesn't seem to. Also, how do you choose the time range if you're not doing this in the terminal? | open | 2018-07-02T22:17:31Z | 2018-07-05T17:46:30Z | https://github.com/MentatInnovations/datastream.io/issues/29 | [] | agrover7 | 0 |
slackapi/python-slack-sdk | asyncio | 1,086 | Add interactivity patterns in SocketModeClient document | Currently, https://slack.dev/python-slack-sdk/socket-mode/index.html has only an Events API example. We should add interactivity patterns (e.g., shortcuts, modal submissions, button clicks, etc.) to the page.
---
@tjstum Thanks for your prompt reply here!
>I might suggest adding something to the docs of SocketModeRequest (and the example usages) mentioning that the WebhookClient/AsyncWebhookClient can be used to work with the response_url provided in the payload.
This can be helpful for other developers too! Perhaps, rather than docstrings, updating [this documentation page](https://slack.dev/python-slack-sdk/socket-mode/index.html) to have an interactivity payload pattern with `response_url` usage would be a good way to improve the visibility of functionalities.
>Maybe also exposing the response_url as a property (again, to help promote visibility).
Thanks for the suggestion but we are not planning to have the property. The class represents the whole message structure as-is. No modification and addition are intended. I would recommend transforming `SocketModeRequest` data to your own class with utility methods (like we do in Bolt, which is equivalent to your app framework).
I'm thinking to create a new issue for the document improvement and then close this issue. Is that fine with you? Thanks again for your feedback and inputs here.
_Originally posted by @seratch in https://github.com/slackapi/python-slack-sdk/issues/1075#issuecomment-894284214_ | closed | 2021-08-06T14:06:55Z | 2021-12-09T09:56:28Z | https://github.com/slackapi/python-slack-sdk/issues/1086 | [
"docs",
"Version: 3x",
"socket-mode",
"good first issue"
] | seratch | 1 |
biolab/orange3 | scikit-learn | 6,869 | First class Vector/Tensor Datatype | Hi ! Such a great tool you built !
First of all, I am no data scientist. I am a backend developper and have no clue about what i am doing around data analysis.
I was fiddling to explore ways to extract domain knowledge from images, and I wanted to play with embeddings.
I already have a dataset of embeddings, and found no other way to use that than use 1 column per dimension of my embedding (1408).
This worked well and I did find the answer I was looking for. However, as I tried to see if I could optimize things, I found myself writing lots of Python scripts (I have never worked with Python until now, so lots of dirty code written with Copilot) to do arithmetic over those columns, because the widgets were not designed to apply the same operation to 1408 columns.
I took a look at the codebase and tried to add support for a vector datatype. I did succeed in making some things work, but it required me to add code to every widget, so I know I am working in the wrong direction.
However in half a day of work I did end up with a pretty cool result.

It is buggy as hell, but the concept is here.
I might be completely mistaken about the way I am supposed to use the tool, but if I am not, maybe it will interest someone.
I can free up some time to work on this, but as I said, I never worked in python before although I have been programming in other languages for about 10 years, so I will need some guidance.
| open | 2024-08-10T18:07:56Z | 2024-09-06T07:39:28Z | https://github.com/biolab/orange3/issues/6869 | [] | Aetherall | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,420 | No route to host!! | When I try to follow the steps you gave to train CycleGAN, I run into the following problem. I don't know if it's a server firewall setting or something else. I hope I can get your reply, thank you!

| open | 2022-05-11T14:21:13Z | 2022-06-14T19:56:27Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1420 | [] | huahuabai | 1 |
sigmavirus24/github3.py | rest-api | 576 | Backport fix for commit status: 422 Validation Failed | ``` pytb
File "priorities_lint/web.py", line 51, in github_webhook
description="Running priorities-lint on your PR",
File "/app/.heroku/python/lib/python2.7/site-packages/github3/decorators.py", line 38, in auth_wrapper
return func(self, *args, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/github3/repos/repo.py", line 803, in create_status
json = self._json(self._post(url, data=data), 201)
File "/app/.heroku/python/lib/python2.7/site-packages/github3/models.py", line 100, in _json
if self._boolean(response, status_code, 404) and response.content:
File "/app/.heroku/python/lib/python2.7/site-packages/github3/models.py", line 121, in _boolean
raise GitHubError(response)
github3.models.GitHubError: 422 Validation Failed
```
| closed | 2016-02-15T22:19:34Z | 2016-02-15T23:26:49Z | https://github.com/sigmavirus24/github3.py/issues/576 | [] | alex | 1 |
matplotlib/matplotlib | matplotlib | 29,743 | [Bug]: memory baking figure is not freed when figure is closed | ### Bug summary
It seems that a reference to the figure object is kept in some global state. Executing the code below and closing the figures that pop up results in a linear increase in used memory. This happens with Matplotlib 3.9.1 and later. The latest release that does not exhibit the problem is Matplotlib 3.8.4.
Adding `fig.clf()` after the `plt.show()` in the code below (clearing the figure after it has been displayed and closed by the user) results in the memory leak being significantly reduced, but not eliminated.
### Code for reproduction
```Python
import gc
import psutil
import numpy as np
import matplotlib.pyplot as plt
p = psutil.Process()
d = np.linspace(0, 1, 1_000_000)
for i in range(10):
    fig, ax = plt.subplots()
    ax.plot(d)
    plt.show()
    gc.collect()
    print(p.memory_info().rss)
```
### Actual outcome
185729024
254550016
321282048
389459968
456253440
524554240
592330752
659394560
727457792
794468352
### Expected outcome
183324672
251961344
253427712
253464576
253493248
254521344
254529536
253464576
253468672
254476288
### Additional information
The leak has been introduced sometime between version 3.8.4 (which gives the expected outcome output above) and version 3.9.1 (which gives the actual outcome output above). It reproduces with Matplotlib version 3.9.4 and 3.10.1
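A stand-in sketch of the suspected mechanism: a global registry (like pyplot's figure manager) keeps every figure reachable until it is explicitly removed, so closing the window alone doesn't free it. The names below are illustrative, not Matplotlib internals:

```python
import gc
import weakref

registry = []  # stands in for pyplot's global figure registry

class Figure:
    def __init__(self):
        registry.append(self)  # every new figure gets tracked globally

fig = Figure()
ref = weakref.ref(fig)
del fig
gc.collect()
print(ref() is not None)  # True: the registry still holds the figure

registry.clear()          # roughly what an explicit close/clear achieves
gc.collect()
print(ref() is None)      # True: now it can be collected
```

In the repro above, explicitly calling `plt.close(fig)` (or `plt.close('all')`) after `plt.show()` may therefore be worth trying.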
### Operating system
Windows
### Matplotlib Version
3.9.1
### Matplotlib Backend
qtagg
### Python version
3.12.9
### Jupyter version
_No response_
### Installation
conda | open | 2025-03-12T13:36:08Z | 2025-03-13T23:15:33Z | https://github.com/matplotlib/matplotlib/issues/29743 | [
"status: confirmed bug"
] | dnicolodi | 10 |
assafelovic/gpt-researcher | automation | 874 | Next.JS UI errors | **Describe the bug**
Geting some bugs from the Next.JS Web UI.
Using the latest version from Main branch
**To Reproduce**
Steps to reproduce the behavior:
1. Git clone the repo
2. Follow steps to build with "docker compose"
3. Run containers and go to the Next.JS UI
4. Errors (happen even after inputting API keys)
**Expected behavior**
No errors
(Not sure they are related so posting both)
**Bug 1**
```
(index):1 Uncaught TypeError: Cannot read properties of undefined (reading 'register')
at HTMLDivElement.onreset ((index):1:71)
```
That line is referring to this line of HTML:
`<meta name="viewport" content="width=device-width, initial-scale=1"/>`
ChatGPT thinks something is trying to register, and that the exact line itself doesn't actually mean anything; it's just a JavaScript bug.
**Bug 2**
```
Warning: Extra attributes from the server: data-extension-installed
at body
at html
at RootLayout (Server)
at RedirectErrorBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/redirect-boundary.js:74:9)
at RedirectBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/redirect-boundary.js:82:11)
at NotFoundErrorBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/not-found-boundary.js:76:9)
at NotFoundBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/not-found-boundary.js:84:11)
at DevRootNotFoundBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/dev-root-not-found-boundary.js:33:11)
at ReactDevOverlay (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/react-dev-overlay/app/ReactDevOverlay.js:87:9)
at HotReload (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/react-dev-overlay/app/hot-reloader-client.js:321:11)
at Router (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/app-router.js:207:11)
at ErrorBoundaryHandler (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/error-boundary.js:113:9)
at ErrorBoundary (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/error-boundary.js:160:11)
at AppRouter (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/components/app-router.js:577:13)
at ServerRoot (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/app-index.js:112:27)
at Root (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/client/app-index.js:117:11)
```
Side note: The old UI works fine and returns a report, just the Next.JS UI is not working right now | closed | 2024-09-30T20:01:47Z | 2024-11-20T19:49:20Z | https://github.com/assafelovic/gpt-researcher/issues/874 | [] | arces | 8 |
ydataai/ydata-profiling | pandas | 949 | pandas-profiling giving random error | **Describe the bug**
when putting a data-frame inside the ProfileReport class I am getting random errors on different runs.
**To Reproduce**
Create a data frame from ```seaborn.load_dataset('titanic')``` and try creating a profile report.
```python
import pandas as pd
import seaborn as sns
from pandas_profiling import ProfileReport
titanic = sns.load_dataset('titanic')
profile = ProfileReport(titanic,title="titanic dataset")
try:
    profile.to_file('output.html')
except Exception as e:
    print(e)
```
**Version information:**
pandas-profiling: 3.1.0
**Additional context**
Video has been posted
https://user-images.githubusercontent.com/36355951/161377743-684efb25-6be9-42c2-8667-4b881cd08bc5.mp4
| closed | 2022-04-02T09:51:10Z | 2022-05-07T18:08:48Z | https://github.com/ydataai/ydata-profiling/issues/949 | [
"bug 🐛"
] | kameshkotwani | 2 |
seleniumbase/SeleniumBase | web-scraping | 3,561 | Pytest --help shows a stack trace ('TerminalReporter' object has no attribute '_sessionstarttime') | When executing `pytest --help`, using the latest version of SeleniumBase, a stack trace is reported (as shown below).
The error reported is: **AttributeError: 'TerminalReporter' object has no attribute '_sessionstarttime'**
In: **seleniumbase\plugins\pytest_plugin.py", line 2163, in _perform_pytest_unconfigure_**
```
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Sebastien\GitLab\qa-automations\py\venv\Scripts\pytest.exe\__main__.py", line 7, in <module>
sys.exit(console_main())
^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\_pytest\config\__init__.py", line 201, in console_main
code = main()
^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\_pytest\config\__init__.py", line 175, in main
ret: ExitCode | int = config.hook.pytest_cmdline_main(config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_callers.py", line 139, in _multicall
raise exception.with_traceback(exception.__traceback__)
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\_pytest\helpconfig.py", line 156, in pytest_cmdline_main
config._ensure_unconfigure()
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\_pytest\config\__init__.py", line 1123, in _ensure_unconfigure
self.hook.pytest_unconfigure(config=self)
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_hooks.py", line 513, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_manager.py", line 120, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_callers.py", line 139, in _multicall
raise exception.with_traceback(exception.__traceback__)
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\pluggy\_callers.py", line 103, in _multicall
res = hook_impl.function(*args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\seleniumbase\plugins\pytest_plugin.py", line 2533, in pytest_unconfigure
_perform_pytest_unconfigure_(config)
File "C:\Sebastien\GitLab\qa-automations\py\venv\Lib\site-packages\seleniumbase\plugins\pytest_plugin.py", line 2163, in _perform_pytest_unconfigure_
duration = time.time() - reporter._sessionstarttime
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'TerminalReporter' object has no attribute '_sessionstarttime'
```
**Steps to reproduce**
1. Install SeleniumBase
2. Install Pytest
3. Execute "pytest --help" from the venv
**Expected results**
1. The help is displayed without any error
**Actual behavior**
1. An error is thrown
**Workaround**
1. No workaround was found. | closed | 2025-02-24T15:08:25Z | 2025-02-24T21:33:22Z | https://github.com/seleniumbase/SeleniumBase/issues/3561 | [
"bug"
] | smanenti | 3 |
TencentARC/GFPGAN | pytorch | 28 | Training on custom dataset | Hello, first of all fantastic work and thanks for sharing.
I would like to know how I can train the model on a custom dataset.
I noticed that in the training explanation there are 3 files I need to download, excluding the dataset:
- Pre-trained StyleGAN2 model
- FFHQ component locations
- ArcFace
I know that ArcFace is used for face recognition. I assume the pretrained StyleGAN2 model is for training the GFPGAN model from scratch, so if I wanted to continue training I could just use the [model](https://github.com/TencentARC/GFPGAN/releases/download/v0.1.0/GFPGANv1.pth) you have provided for inference and continue training on my custom dataset. Finally, for the component locations file: will it work with my custom dataset, or is it specific to the FFHQ dataset? If it won't work, how can I create my own so that it works with my dataset?
I hope my issue is clear, thanks.
| closed | 2021-07-28T08:33:32Z | 2021-08-13T12:17:46Z | https://github.com/TencentARC/GFPGAN/issues/28 | [] | 3BBUAE | 3 |
piccolo-orm/piccolo | fastapi | 766 | List aggregate | Let's say I have
```python
class Subject(Table):
name = Varchar()
class Event(Table):
subject = ForeignKey(Subject)
timestamp = Timestamptz()
data = JSONB()
```
How can I fetch all Subject with corresponding Events given a particular timerange?
It should be possible with a 'ReverseLookup' (#599) if we use a 'RIGHT JOIN' against the subquery, I think?
Alternatively, a list-aggregate function could be helpful:
1. Events.select and group_by subject + list aggregate the rest (e.g. with json_agg in postgres)
2. Join Subject data inside that Events.select query
Is there currently a way to do this? Thank you!
| open | 2023-02-21T09:59:52Z | 2023-02-22T17:10:03Z | https://github.com/piccolo-orm/piccolo/issues/766 | [] | powellnorma | 2 |
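Until something like `json_agg` or a reverse lookup is supported, the grouping asked about in the issue above can be done client-side over two plain queries. A dependency-free sketch (dicts stand in for query results; `subject` holds the foreign-key id):

```python
from collections import defaultdict

def group_events_by_subject(subjects, events):
    """subjects: [{'id': ..., 'name': ...}]; events: [{'subject': subject_id, ...}]."""
    by_subject = defaultdict(list)
    for ev in events:
        by_subject[ev["subject"]].append(ev)
    # Attach each subject's events (possibly empty) to a copy of the subject row.
    return [dict(s, events=by_subject.get(s["id"], [])) for s in subjects]

subjects = [{"id": 1, "name": "maths"}, {"id": 2, "name": "physics"}]
events = [{"subject": 1, "data": "a"}, {"subject": 1, "data": "b"}]
rows = group_events_by_subject(subjects, events)
print(len(rows[0]["events"]))  # -> 2
```

With real Piccolo tables this would be two awaited `.select()` calls (the events one filtered by the time range) followed by the same in-memory grouping.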
pallets-eco/flask-sqlalchemy | sqlalchemy | 873 | bulk_update_mappings method does not work with list of dictionaries given as a variable for update while the same list when provided as an expanded form(actual dictionary with key and value) works fine |
---
### Expected Behavior
```python
updated_records=[{'id':1, 'salary':5000},{'id':2, 'salary':8000}]
db.session.bulk_update_mappings(Salary, updated_records)
throws error sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'numpy.int64'
(Background on this error at: http://sqlalche.me/e/13/f405)
while
db.session.bulk_update_mappings(Salary,[{'id':1, 'salary':5000}, {'id':2, 'salary':8000}]) works fine
```
### Actual Behavior
```pytb
throws error sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'numpy.int64'
(Background on this error at: http://sqlalche.me/e/13/f405)
```
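The `can't adapt type 'numpy.int64'` message suggests the mapping values are numpy scalars rather than plain ints. A hedged workaround sketch: convert values to native Python types before calling `bulk_update_mappings` (a stand-in class plays the numpy scalar here so the sketch stays dependency-free):

```python
def to_native(value):
    """Convert numpy-style scalars to plain Python via their .item() method."""
    item = getattr(value, "item", None)
    return item() if callable(item) else value

def sanitize_mappings(mappings):
    return [{k: to_native(v) for k, v in m.items()} for m in mappings]

class FakeInt64:                       # stand-in for numpy.int64
    def __init__(self, v):
        self._v = v
    def item(self):
        return int(self._v)

records = [{"id": FakeInt64(1), "salary": FakeInt64(5000)}]
clean = sanitize_mappings(records)
print(clean)  # -> [{'id': 1, 'salary': 5000}]
# then: db.session.bulk_update_mappings(Salary, clean)
```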
### Environment
* Python version: 3.8
* Flask-SQLAlchemy version: 2.4.3
* SQLAlchemy version:1.3.18
| closed | 2020-08-27T20:29:51Z | 2020-12-05T19:58:20Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/873 | [] | nirum09 | 1 |
dsdanielpark/Bard-API | api | 116 | Multimodal capability | Does your API support multimodal input, such as image input for bard? | closed | 2023-07-17T00:57:01Z | 2024-01-18T15:51:25Z | https://github.com/dsdanielpark/Bard-API/issues/116 | [] | Lucky-Lance | 6 |
HIT-SCIR/ltp | nlp | 528 | Dependency parsing problem | At first I used the small model, and for quite a few sentences the dependency parse trees were malformed.
After switching to the base model, the problem persists. A concrete example (with the base model):
e.g.: 等下去干嘛你
Result:
等 下去 干嘛 你
v v v r
0 1 1 0
HED|CMP|COO|HED
That is, the same sentence has two HED (root) arcs, which never happened in version 3.0. May I ask whether I set some parameter incorrectly, causing the dependency tree to become a dependency graph?
| closed | 2021-07-21T06:07:44Z | 2021-07-23T06:41:01Z | https://github.com/HIT-SCIR/ltp/issues/528 | [] | ghost | 2 |
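The two-HED symptom described above can be detected programmatically. A minimal sketch, assuming the 0-means-root head encoding shown in the issue:

```python
def root_indices(heads):
    """Return indices of tokens whose head is 0 (i.e. roots).

    `heads` follows the convention in the issue above: heads[i] is the
    1-based index of token i's head, with 0 meaning the root.
    """
    return [i for i, h in enumerate(heads) if h == 0]

def is_single_rooted(heads):
    # A well-formed dependency *tree* has exactly one root.
    return len(root_indices(heads)) == 1

# The parse reported in the issue: two tokens point at the root.
heads = [0, 1, 1, 0]
print(root_indices(heads))      # -> [0, 3]
print(is_single_rooted(heads))  # -> False
```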
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,304 | testing image using cycleGAN gets weird result | Input images are 112×112. I used a trained CycleGAN model to test them, and the resulting images are very dark. Then I set the `--preprocess none` parameter and trained another version, getting the second result image below, which is totally not what I expected.
 
 
| open | 2021-08-03T07:33:47Z | 2022-10-25T21:07:59Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1304 | [] | robbie2021 | 4 |
thp/urlwatch | automation | 87 | pip installed package is broken | There is a syntax error (`...` instead of `pass`) in the main file. And it does not seem to install the necessary dependencies.
| closed | 2016-08-16T12:04:11Z | 2016-09-17T17:13:12Z | https://github.com/thp/urlwatch/issues/87 | [] | stefanfoulis | 2 |
miguelgrinberg/flasky | flask | 197 | Running flasky in specific tag version | I tried to run flasky in tag version 3d (which does not contain a requirements.txt file) by doing the following...
- clone flasky: `~/ws_github $ git clone https://github.com/miguelgrinberg/flasky.git`
- checkout tag 3d: `~/ws_github $ git checkout 3d`
- create virtualenv and activate it like described on [book first edition 2014 , p. 4]: `~/ws_github/flasky $ virtualenv venv`
- activate venv: `~/ws_github/flasky $ source venv/bin/activate`
- install flask [book first edition 2014 , p. 6]: `~/ws_github/flasky $ pip install flask`
- install flask-script [book first edition 2014 , p. 17]: `~/ws_github/flasky $ pip install flask-script`
... flask and flask-script seems to be installed in the venv properly...
```
(venv)florian@florian-desktop ~/ws_github/flasky $ pip freeze
Flask==0.11.1
Flask-Script==2.0.5
Jinja2==2.8
MarkupSafe==0.23
Werkzeug==0.11.11
argparse==1.2.1
click==6.6
itsdangerous==0.24
wsgiref==0.1.2
```
...but when I try to run flasky I get the following error response [book first edition 2014 , p. 19]:
```
(venv)florian@florian-desktop ~/ws_github/flasky $ python hello.py runserver --host 0.0.0.0
Traceback (most recent call last):
File "hello.py", line 2, in <module>
from flask_script import Manager
ImportError: No module named 'flask_script'
```
Running flasky with the system installation of Python (with flask and flask-script installed) results in the same error response.
| closed | 2016-10-23T19:35:53Z | 2016-10-24T07:09:31Z | https://github.com/miguelgrinberg/flasky/issues/197 | [
"question"
] | fkromer | 9 |
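A symptom like the one above, where `pip freeze` lists the package yet the import fails, usually means `pip` and `python` resolve to different interpreters (the frozen `wsgiref==0.1.2` may even hint at a Python 2 pip, while the quoted `ImportError` message is Python 3 style). A small diagnostic sketch:

```python
import sys
import sysconfig

def interpreter_info():
    """Show which interpreter runs and where it looks for packages."""
    return {
        "executable": sys.executable,
        "version": sys.version.split()[0],
        "site_packages": sysconfig.get_paths()["purelib"],
    }

info = interpreter_info()
for key, value in info.items():
    print(f"{key}: {value}")
# Compare with the output of `pip -V`; both paths should point into the
# same virtualenv, otherwise pip installed into a different interpreter.
```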
tensorflow/tensor2tensor | deep-learning | 1,246 | Cannot download MRPC data | ### Description
I get `UnicodeDecodeError` when trying to generate the "MSR Paraphrase Corpus" data. It happens when using either `t2t-datagen` or `t2t-trainer`.
### Environment information
```
OS: macOS 10.13.4
$ pip freeze | grep tensor
mesh-tensorflow==0.0.4
tensor2tensor==1.11.0
tensorboard==1.12.0
tensorflow==1.12.0
tensorflow-metadata==0.9.0
tensorflow-probability==0.5.0
$ python -V
Python 3.6.4
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
$ t2t-datagen \
--data_dir=~/t2t_data/msr_paraphrase_corpus \
--tmp_dir=/tmp/t2t_tmp \
--problem=msr_paraphrase_corpus
```
```
# Error logs:
INFO:tensorflow:Generated 8152 Examples
INFO:tensorflow:Found vocab file: /Users/ywkim/t2t_data/msr_paraphrase_corpus/vocab.msr_paraphrase_corpus.8192.subwords
Traceback (most recent call last):
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/bin/t2t-datagen", line 28, in <module>
tf.app.run()
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/bin/t2t-datagen", line 23, in main
t2t_datagen.main(argv)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/bin/t2t_datagen.py", line 198, in main
generate_data_for_registered_problem(problem)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/bin/t2t_datagen.py", line 260, in generate_data_for_registered_problem
problem.generate_data(data_dir, tmp_dir, task_id)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/data_generators/text_problems.py", line 306, in generate_data
self.generate_encoded_samples(data_dir, tmp_dir, split)), paths)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/data_generators/generator_utils.py", line 165, in generate_files
for case in generator:
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/data_generators/text_problems.py", line 542, in generate_encoded_samples
for sample in generator:
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensor2tensor/data_generators/mrpc.py", line 114, in generate_samples
for row in tf.gfile.Open(os.path.join(mrpc_dir, "dev_ids.tsv")):
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 220, in __next__
return self.next()
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 214, in next
retval = self.readline()
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 184, in readline
return self._prepare_value(self._read_buf.ReadLineAsString())
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 100, in _prepare_value
return compat.as_str_any(val)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/util/compat.py", line 107, in as_str_any
return as_str(value)
File "/Users/ywkim/.local/share/virtualenvs/rally-f4OA2-t-/lib/python3.6/site-packages/tensorflow/python/util/compat.py", line 80, in as_text
return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 12: invalid start byte
```
| closed | 2018-11-24T14:52:43Z | 2018-11-28T23:11:08Z | https://github.com/tensorflow/tensor2tensor/issues/1246 | [] | ywkim | 0 |
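The failing byte `0xa2` means the downloaded file is not valid UTF-8 (in Latin-1/CP1252 that byte is '¢'). The decoding behaviour can be reproduced without TensorFlow; this sketch is illustrative only, not the library's code:

```python
raw = b"sentence with a \xa2 sign"

# Strict UTF-8 decoding fails on 0xa2 ("invalid start byte"):
try:
    raw.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
print(strict_ok)  # -> False

# Two tolerant options: replace undecodable bytes, or decode as latin-1,
# where every byte maps to a character (0xa2 -> '¢').
print(raw.decode("utf-8", errors="replace"))
print(raw.decode("latin-1"))
```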
miguelgrinberg/python-socketio | asyncio | 1,390 | The library sets signal handler in a way that disables signals handlers previously set by application with `asyncio.loop.add_signal_handler` | python-socketio library sets its own signal handler in a way that disables signals handlers that were previously set with `asyncio.loop.add_signal_handler`
**To Reproduce**
```python
import asyncio
import signal
import socketio
URL = ... # SocketsIO server URL 'ws://...'
def signal_handler(sig: signal.Signals):
print(f'Received signal {sig}. Stopping...')
raise KeyboardInterrupt()
async def run():
loop = asyncio.get_event_loop()
for sig in (signal.SIGINT, signal.SIGTERM):
loop.add_signal_handler(sig, signal_handler, sig)
async with socketio.AsyncSimpleClient() as ws:
await ws.connect(URL, transports=['websocket'])
print('Waiting...')
await asyncio.sleep(10000000000000)
asyncio.run(run())
```
Steps:
1. On Linux system run the above program
2. Wait until `Waiting...` is printed
3. Press Ctrl+C
4. Application is stopped without printing `Received signal ...`
**Expected behavior**
4. (After Ctrl+C is pressed) `Received signal ...` is printed then application is stopped.
If I comment line `await ws.connect(....)` then the application behavior is as expected.
**Additional context**
Aside from breaking applications, it is simply bad practice for a library to mess with something as global as signal handlers. Signal handlers should be left as the application's concern.
| closed | 2024-09-24T08:28:37Z | 2024-12-14T10:28:18Z | https://github.com/miguelgrinberg/python-socketio/issues/1390 | [
"invalid"
] | oliora | 5 |
pytest-dev/pytest-django | pytest | 411 | --reuse-db and --create-db not working together | I noticed the below issue when using the workflow presented here: https://pytest-django.readthedocs.io/en/latest/database.html#example-work-flow-with-reuse-db-and-create-db. It seems like the behaviour changed in a recent version although it's possible I never noticed this occurred.
When I run the tests passing the `--create-db` flag, the database is recreated and the migrations are run (notice the time). The database, however, is dropped despite the `--reuse-db` flag being passed via `addopts`.
```
$ pytest tests/app --create-db
...............................................................................................................................
127 passed in 8.23 seconds
```
When I next run the tests without the `--create-db` flag the database is again recreated and the migrations are run because it was previously dropped.
```
$ pytest tests/app
...............................................................................................................................
127 passed in 7.01 seconds
```
If I run the tests a third time exactly as above, the database is properly reused:
```
$ pytest tests/app
...............................................................................................................................
127 passed in 1.12 seconds
```
| closed | 2016-10-27T14:32:33Z | 2017-12-25T16:18:09Z | https://github.com/pytest-dev/pytest-django/issues/411 | [] | ryankask | 5 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 91 | Inconsistency between code and paper | In the paper, a VAE is used to transform images into latent codes. However, when I look at the released testing code, it seems there are just ordinary AEs (autoencoders) that transform images into feature maps rather than probability distributions.
To the best of my knowledge, a VAE is used to estimate distributions (specifically, means and variances). What we get from the VAE encoder should be a set of means and variances. Then we sample ε from N(0, 1), and the input of the VAE decoder is z = σ·ε + μ (σ: standard deviation, μ: mean), namely the reparameterization trick from the paper. But I can't find such operations in the code.
I also see the pseudo training code in #81; that VAE performs the sampling operation in the training phase but directly feeds the encoder outputs, without sampling, in the testing phase. In #29, @zhangmozhe said there is no need for the reparameterization trick. However, the aforementioned comments differ from the VAE implementation in [PyTorch-VAE](https://github.com/AntixK/PyTorch-VAE/blob/master/models/vanilla_vae.py), which performs the same operation in training and testing.
So my questions are:
- Why is there no sampling operation in the testing code?
- Is my description of a VAE correct? If not, please rectify it.
- Is a different type of VAE being used, one that diverges from the vanilla VAE? If so, could you give me some related papers?
| closed | 2020-12-16T07:30:13Z | 2020-12-23T05:33:03Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/91 | [] | NK-CS-ZZL | 2 |
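The reparameterization trick discussed above fits in a few lines of framework-free Python; `random.gauss` plays the role of sampling ε ~ N(0, 1), and `log_var` follows the common convention of the encoder outputting log σ²:

```python
import math
import random

def reparameterize(mu, log_var, eps=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Sampling from N(mu, sigma^2) this way keeps the path from
    (mu, sigma) to z differentiable in a real framework.
    """
    sigma = math.exp(0.5 * log_var)
    if eps is None:
        eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps

# With a very negative log-variance the variance is tiny,
# so the sample stays close to the mean:
z = reparameterize(mu=2.0, log_var=-20.0)
print(abs(z - 2.0) < 1e-3)  # -> True
```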
hzwer/ECCV2022-RIFE | computer-vision | 273 | Missing key(s) in state_dict when evaluating RIFE_HDv3 model | I am trying to evaluate RIFE_HDv3 model with Vimeo90k.
I downloaded RIFE_HDv3 model files (https://drive.google.com/file/d/1APIzVeI-4ZZCEuIRE1m6WYfSCaOsi_7_/view?usp=sharing), and I set the model in the benchmark script.
When I ran the script (Vimeo90K.py -> model changed into RIFE_HDv3),
the below errors are printed.
How can I solve this issue?
---------------------------------------------------------------------
Traceback (most recent call last):
File "benchmark/Vimeo90K_HDv3.py", line 15, in <module>
model.load_model('train_log')
File "/root/workspace/sharing/ws-dongsoo/01_project/05_vfi/code/ECCV2022-RIFE/model/RIFE_HDv3.py", line 47, in load_model
self.flownet.load_state_dict(convert(torch.load('{}/flownet.pkl'.format(path))))
File "/root/.conda/envs/vfi_rife/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for IFNet:
Missing key(s) in state_dict: "block0.conv0.0.0.weight", "block0.conv0.0.0.bias", "block0.conv0.0.1.weight", "block0.conv0.1.0.weight", "block0.conv0.1.0.bias", "block0.conv0.1.1.weight", "block0.convblock0.0.0.weight", "block0.convblock0.0.0.bias", "block0.convblock0.0.1.weight", "block0.convblock0.1.0.weight", "block0.convblock0.1.0.bias", "block0.convblock0.1.1.weight", "block0.convblock1.0.0.weight", "block0.convblock1.0.0.bias", "block0.convblock1.0.1.weight", "block0.convblock1.1.0.weight", "block0.convblock1.1.0.bias", "block0.convblock1.1.1.weight", "block0.convblock2.0.0.weight", "block0.convblock2.0.0.bias", "block0.convblock2.0.1.weight", "block0.convblock2.1.0.weight", "block0.convblock2.1.0.bias", "block0.convblock2.1.1.weight", "block0.convblock3.0.0.weight", "block0.convblock3.0.0.bias", "block0.convblock3.0.1.weight", "block0.convblock3.1.0.weight", "block0.convblock3.1.0.bias", "block0.convblock3.1.1.weight", "block0.conv1.0.weight", "block0.conv1.0.bias", "block0.conv1.1.weight", "block0.conv1.2.weight", "block0.conv1.2.bias", "block0.conv2.0.weight", "block0.conv2.0.bias", "block0.conv2.1.weight", "block0.conv2.2.weight", "block0.conv2.2.bias", "block1.conv0.0.0.weight", "block1.conv0.0.0.bias", "block1.conv0.0.1.weight", "block1.conv0.1.0.weight", "block1.conv0.1.0.bias", "block1.conv0.1.1.weight", "block1.convblock0.0.0.weight", "block1.convblock0.0.0.bias", "block1.convblock0.0.1.weight", "block1.convblock0.1.0.weight", "block1.convblock0.1.0.bias", "block1.convblock0.1.1.weight", "block1.convblock1.0.0.weight", "block1.convblock1.0.0.bias", "block1.convblock1.0.1.weight", "block1.convblock1.1.0.weight", "block1.convblock1.1.0.bias", "block1.convblock1.1.1.weight", "block1.convblock2.0.0.weight", "block1.convblock2.0.0.bias", "block1.convblock2.0.1.weight", "block1.convblock2.1.0.weight", "block1.convblock2.1.0.bias", "block1.convblock2.1.1.weight", "block1.convblock3.0.0.weight", "block1.convblock3.0.0.bias", "block1.convblock3.0.1.weight", 
"block1.convblock3.1.0.weight", "block1.convblock3.1.0.bias", "block1.convblock3.1.1.weight", "block1.conv1.0.weight", "block1.conv1.0.bias", "block1.conv1.1.weight", "block1.conv1.2.weight", "block1.conv1.2.bias", "block1.conv2.0.weight", "block1.conv2.0.bias", "block1.conv2.1.weight", "block1.conv2.2.weight", "block1.conv2.2.bias", "block2.conv0.0.0.weight", "block2.conv0.0.0.bias", "block2.conv0.0.1.weight", "block2.conv0.1.0.weight", "block2.conv0.1.0.bias", "block2.conv0.1.1.weight", "block2.convblock0.0.0.weight", "block2.convblock0.0.0.bias", "block2.convblock0.0.1.weight", "block2.convblock0.1.0.weight", "block2.convblock0.1.0.bias", "block2.convblock0.1.1.weight", "block2.convblock1.0.0.weight", "block2.convblock1.0.0.bias", "block2.convblock1.0.1.weight", "block2.convblock1.1.0.weight", "block2.convblock1.1.0.bias", "block2.convblock1.1.1.weight", "block2.convblock2.0.0.weight", "block2.convblock2.0.0.bias", "block2.convblock2.0.1.weight", "block2.convblock2.1.0.weight", "block2.convblock2.1.0.bias", "block2.convblock2.1.1.weight", "block2.convblock3.0.0.weight", "block2.convblock3.0.0.bias", "block2.convblock3.0.1.weight", "block2.convblock3.1.0.weight", "block2.convblock3.1.0.bias", "block2.convblock3.1.1.weight", "block2.conv1.0.weight", "block2.conv1.0.bias", "block2.conv1.1.weight", "block2.conv1.2.weight", "block2.conv1.2.bias", "block2.conv2.0.weight", "block2.conv2.0.bias", "block2.conv2.1.weight", "block2.conv2.2.weight", "block2.conv2.2.bias", "block_tea.conv0.0.0.weight", "block_tea.conv0.0.0.bias", "block_tea.conv0.0.1.weight", "block_tea.conv0.1.0.weight", "block_tea.conv0.1.0.bias", "block_tea.conv0.1.1.weight", "block_tea.convblock0.0.0.weight", "block_tea.convblock0.0.0.bias", "block_tea.convblock0.0.1.weight", "block_tea.convblock0.1.0.weight", "block_tea.convblock0.1.0.bias", "block_tea.convblock0.1.1.weight", "block_tea.convblock1.0.0.weight", "block_tea.convblock1.0.0.bias", "block_tea.convblock1.0.1.weight", 
"block_tea.convblock1.1.0.weight", "block_tea.convblock1.1.0.bias", "block_tea.convblock1.1.1.weight", "block_tea.convblock2.0.0.weight", "block_tea.convblock2.0.0.bias", "block_tea.convblock2.0.1.weight", "block_tea.convblock2.1.0.weight", "block_tea.convblock2.1.0.bias", "block_tea.convblock2.1.1.weight", "block_tea.convblock3.0.0.weight", "block_tea.convblock3.0.0.bias", "block_tea.convblock3.0.1.weight", "block_tea.convblock3.1.0.weight", "block_tea.convblock3.1.0.bias", "block_tea.convblock3.1.1.weight", "block_tea.conv1.0.weight", "block_tea.conv1.0.bias", "block_tea.conv1.1.weight", "block_tea.conv1.2.weight", "block_tea.conv1.2.bias", "block_tea.conv2.0.weight", "block_tea.conv2.0.bias", "block_tea.conv2.1.weight", "block_tea.conv2.2.weight", "block_tea.conv2.2.bias".
Unexpected key(s) in state_dict: "module.block0.conv0.0.0.weight", "module.block0.conv0.0.0.bias", "module.block0.conv0.0.1.weight", "module.block0.conv0.1.0.weight", "module.block0.conv0.1.0.bias", "module.block0.conv0.1.1.weight", "module.block0.convblock0.0.0.weight", "module.block0.convblock0.0.0.bias", "module.block0.convblock0.0.1.weight", "module.block0.convblock0.1.0.weight", "module.block0.convblock0.1.0.bias", "module.block0.convblock0.1.1.weight", "module.block0.convblock1.0.0.weight", "module.block0.convblock1.0.0.bias", "module.block0.convblock1.0.1.weight", "module.block0.convblock1.1.0.weight", "module.block0.convblock1.1.0.bias", "module.block0.convblock1.1.1.weight", "module.block0.convblock2.0.0.weight", "module.block0.convblock2.0.0.bias", "module.block0.convblock2.0.1.weight", "module.block0.convblock2.1.0.weight", "module.block0.convblock2.1.0.bias", "module.block0.convblock2.1.1.weight", "module.block0.convblock3.0.0.weight", "module.block0.convblock3.0.0.bias", "module.block0.convblock3.0.1.weight", "module.block0.convblock3.1.0.weight", "module.block0.convblock3.1.0.bias", "module.block0.convblock3.1.1.weight", "module.block0.conv1.0.weight", "module.block0.conv1.0.bias", "module.block0.conv1.1.weight", "module.block0.conv1.2.weight", "module.block0.conv1.2.bias", "module.block0.conv2.0.weight", "module.block0.conv2.0.bias", "module.block0.conv2.1.weight", "module.block0.conv2.2.weight", "module.block0.conv2.2.bias", "module.block1.conv0.0.0.weight", "module.block1.conv0.0.0.bias", "module.block1.conv0.0.1.weight", "module.block1.conv0.1.0.weight", "module.block1.conv0.1.0.bias", "module.block1.conv0.1.1.weight", "module.block1.convblock0.0.0.weight", "module.block1.convblock0.0.0.bias", "module.block1.convblock0.0.1.weight", "module.block1.convblock0.1.0.weight", "module.block1.convblock0.1.0.bias", "module.block1.convblock0.1.1.weight", "module.block1.convblock1.0.0.weight", "module.block1.convblock1.0.0.bias", 
"module.block1.convblock1.0.1.weight", "module.block1.convblock1.1.0.weight", "module.block1.convblock1.1.0.bias", "module.block1.convblock1.1.1.weight", "module.block1.convblock2.0.0.weight", "module.block1.convblock2.0.0.bias", "module.block1.convblock2.0.1.weight", "module.block1.convblock2.1.0.weight", "module.block1.convblock2.1.0.bias", "module.block1.convblock2.1.1.weight", "module.block1.convblock3.0.0.weight", "module.block1.convblock3.0.0.bias", "module.block1.convblock3.0.1.weight", "module.block1.convblock3.1.0.weight", "module.block1.convblock3.1.0.bias", "module.block1.convblock3.1.1.weight", "module.block1.conv1.0.weight", "module.block1.conv1.0.bias", "module.block1.conv1.1.weight", "module.block1.conv1.2.weight", "module.block1.conv1.2.bias", "module.block1.conv2.0.weight", "module.block1.conv2.0.bias", "module.block1.conv2.1.weight", "module.block1.conv2.2.weight", "module.block1.conv2.2.bias", "module.block2.conv0.0.0.weight", "module.block2.conv0.0.0.bias", "module.block2.conv0.0.1.weight", "module.block2.conv0.1.0.weight", "module.block2.conv0.1.0.bias", "module.block2.conv0.1.1.weight", "module.block2.convblock0.0.0.weight", "module.block2.convblock0.0.0.bias", "module.block2.convblock0.0.1.weight", "module.block2.convblock0.1.0.weight", "module.block2.convblock0.1.0.bias", "module.block2.convblock0.1.1.weight", "module.block2.convblock1.0.0.weight", "module.block2.convblock1.0.0.bias", "module.block2.convblock1.0.1.weight", "module.block2.convblock1.1.0.weight", "module.block2.convblock1.1.0.bias", "module.block2.convblock1.1.1.weight", "module.block2.convblock2.0.0.weight", "module.block2.convblock2.0.0.bias", "module.block2.convblock2.0.1.weight", "module.block2.convblock2.1.0.weight", "module.block2.convblock2.1.0.bias", "module.block2.convblock2.1.1.weight", "module.block2.convblock3.0.0.weight", "module.block2.convblock3.0.0.bias", "module.block2.convblock3.0.1.weight", "module.block2.convblock3.1.0.weight", 
"module.block2.convblock3.1.0.bias", "module.block2.convblock3.1.1.weight", "module.block2.conv1.0.weight", "module.block2.conv1.0.bias", "module.block2.conv1.1.weight", "module.block2.conv1.2.weight", "module.block2.conv1.2.bias", "module.block2.conv2.0.weight", "module.block2.conv2.0.bias", "module.block2.conv2.1.weight", "module.block2.conv2.2.weight", "module.block2.conv2.2.bias", "module.block_tea.conv0.0.0.weight", "module.block_tea.conv0.0.0.bias", "module.block_tea.conv0.0.1.weight", "module.block_tea.conv0.1.0.weight", "module.block_tea.conv0.1.0.bias", "module.block_tea.conv0.1.1.weight", "module.block_tea.convblock0.0.0.weight", "module.block_tea.convblock0.0.0.bias", "module.block_tea.convblock0.0.1.weight", "module.block_tea.convblock0.1.0.weight", "module.block_tea.convblock0.1.0.bias", "module.block_tea.convblock0.1.1.weight", "module.block_tea.convblock1.0.0.weight", "module.block_tea.convblock1.0.0.bias", "module.block_tea.convblock1.0.1.weight", "module.block_tea.convblock1.1.0.weight", "module.block_tea.convblock1.1.0.bias", "module.block_tea.convblock1.1.1.weight", "module.block_tea.convblock2.0.0.weight", "module.block_tea.convblock2.0.0.bias", "module.block_tea.convblock2.0.1.weight", "module.block_tea.convblock2.1.0.weight", "module.block_tea.convblock2.1.0.bias", "module.block_tea.convblock2.1.1.weight", "module.block_tea.convblock3.0.0.weight", "module.block_tea.convblock3.0.0.bias", "module.block_tea.convblock3.0.1.weight", "module.block_tea.convblock3.1.0.weight", "module.block_tea.convblock3.1.0.bias", "module.block_tea.convblock3.1.1.weight", "module.block_tea.conv1.0.weight", "module.block_tea.conv1.0.bias", "module.block_tea.conv1.1.weight", "module.block_tea.conv1.2.weight", "module.block_tea.conv1.2.bias", "module.block_tea.conv2.0.weight", "module.block_tea.conv2.0.bias", "module.block_tea.conv2.1.weight", "module.block_tea.conv2.2.weight", "module.block_tea.conv2.2.bias". 
| closed | 2022-07-27T03:48:08Z | 2022-08-04T03:30:28Z | https://github.com/hzwer/ECCV2022-RIFE/issues/273 | [] | markdchoung | 2 |
explosion/spaCy | machine-learning | 11,975 | Various incorrect type stubs / annotations | Working with spaCy in a type-checked project has uncovered a few wrong type annotations tripping up mypy:
`spacy/tokens/span_group.pyi > SpanGroup`:
- missing `__iter__` method, so `list(span_group)` fails type check but works
- `SpanGroup` should inherit from `Iterable[Span]`, e.g. to allow it to be passed into `filter_spans()`
`spacy/tokens/span.pyi > Span.char_span()`:
- arguments `label` and `kb_id` have type `int` but should be `Union[int, str]` according to documentation
## Your Environment
- **spaCy version:** 3.4.2
- **Platform:** Linux-5.4.204-113.362.amzn2.x86_64-x86_64-with-glibc2.35
- **Python version:** 3.10.6
- **Pipelines:** en_core_web_lg (3.4.0), en_core_sci_md (0.5.1), en_core_sci_sm (0.5.1), en_coreference_web_trf (3.4.0a0), en_core_web_trf (3.4.1), en_core_web_md (3.4.1)
| closed | 2022-12-15T00:52:22Z | 2023-01-21T00:02:05Z | https://github.com/explosion/spaCy/issues/11975 | [
"bug",
"feat / doc",
"types"
] | itssimon | 6 |
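The runtime behaviour the stubs should describe can be illustrated with stand-in classes (names mirror the report; none of this is spaCy's actual code):

```python
from typing import Iterable, Iterator, List

class Span:                     # minimal stand-in for spacy.tokens.Span
    def __init__(self, text: str) -> None:
        self.text = text

class SpanGroup:
    """Sketch of the fix: declaring __iter__ (and marking the stub as
    Iterable[Span]) is what lets type checkers accept list(group) and
    passing a group to filter_spans()."""
    def __init__(self, spans: Iterable[Span]) -> None:
        self._spans = list(spans)

    def __iter__(self) -> Iterator[Span]:
        return iter(self._spans)

def filter_spans(spans: Iterable[Span]) -> List[Span]:
    # stand-in for spacy.util.filter_spans: keep non-empty spans
    return [s for s in spans if s.text]

group = SpanGroup([Span("Berlin"), Span("")])
print([s.text for s in filter_spans(group)])  # -> ['Berlin']
```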
521xueweihan/HelloGitHub | python | 2,771 | [Open-source self-recommendation] swiftui-skia: Rust + Skia bringing pure-software rasterization to SwiftUI | ## Recommended Project
Project name: swiftui-skia
Project URL: [https://github.com/rustq/swiftui-skia](https://github.com/rustq/swiftui-skia)
Project description: The project implements pure-software rasterized rendering in `Rust`, which adapts across platforms better than native rasterization, while the usage layer is built entirely on `SwiftUI` syntax. On the engineering side it reuses the same rendering backend as the earlier [rustq/vue-skia](https://github.com/rustq/vue-skia), giving the two projects API consistency.

Repo: [https://github.com/rustq/swiftui-skia](https://github.com/rustq/swiftui-skia) | open | 2024-06-17T15:03:31Z | 2024-06-17T15:03:31Z | https://github.com/521xueweihan/HelloGitHub/issues/2771 | [] | meloalright | 0 |
BeanieODM/beanie | asyncio | 609 | [BUG]Pydantic2 Error : `pydantic.error_wrappers:ErrorWrapper` has been removed | **Describe the bug**
With pydantic 2.0.2, `from beanie import PydanticObjectId` raises a `PydanticImportError`.
**To Reproduce**
```python
from beanie import PydanticObjectId
.......
```
**Expected behavior**
`PydanticObjectId` can be imported and used normally.
**Additional context**
```
Exception has occurred: PydanticImportError
`pydantic.error_wrappers:ErrorWrapper` has been removed in V2.
pydantic.errors.PydanticImportError: `pydantic.error_wrappers:ErrorWrapper` has been removed in V2.
For further information visit https://errors.pydantic.dev/2.0.2/u/import-error
```
| closed | 2023-07-07T01:43:30Z | 2023-07-09T07:36:20Z | https://github.com/BeanieODM/beanie/issues/609 | [] | zhuxining | 1 |
albumentations-team/albumentations | deep-learning | 2,354 | still install `opencv-python-headless` | Even though `opencv-python` has been installed beforehand, `pip install -U albumentations` or `pip install -U albumentations --no-binary qudida,albumentations` still installs `opencv-python-headless`.

Python: 3.12
OS: windows 10 | open | 2025-02-26T05:03:21Z | 2025-02-26T05:12:01Z | https://github.com/albumentations-team/albumentations/issues/2354 | [
"bug"
] | yantaozhao | 0 |
biolab/orange3 | numpy | 6,328 | ModuleNotFoundError: No module named 'pkg_resources' | Installed on Ubuntu following the conda instructions.
```
python3 -m Orange.canvas
Traceback (most recent call last):
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/site-packages/Orange/__init__.py", line 4, in <module>
    from Orange import data
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/site-packages/Orange/data/__init__.py", line 4, in <module>
    from .variable import *
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/site-packages/Orange/data/variable.py", line 17, in <module>
    from Orange.util import Registry, Reprable, OrangeDeprecationWarning
  File "/scratch/anaconda3/envs/orange3/lib/python3.10/site-packages/Orange/util.py", line 10, in <module>
    import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```
| closed | 2023-02-07T04:26:38Z | 2023-02-10T09:11:02Z | https://github.com/biolab/orange3/issues/6328 | [
"bug report"
] | caonetto | 4 |
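`pkg_resources` is shipped by `setuptools`, so `pip install setuptools` (or `conda install setuptools` into the same environment) is the usual fix. A small diagnostic sketch for missing modules like this:

```python
from importlib import util

def missing_distribution_hint(module_name, provider):
    """Return an install hint when `module_name` cannot be imported."""
    if util.find_spec(module_name) is None:
        return f"missing: install the '{provider}' package"
    return "ok"

# 'os' is always importable, so this path reports "ok":
print(missing_distribution_hint("os", "python"))  # -> ok
print(missing_distribution_hint("pkg_resources", "setuptools"))
```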
tflearn/tflearn | tensorflow | 544 | [siamese network] Help & Error | I'm trying to implement a Siamese network, where I have 2 image patches of Shape=(16,16) as input and the output is whether they are the same patch or not. The network should have 2 identical towers (that's why I'm using "reuse = True" for the second tower), and I merge them in the end. Below is the code that I managed to write, but I cannot make it work. Furthermore, I couldn't find any example where model.fit(..) has two inputs, or any example of a Siamese network.
I'm getting the following error:
```
---------------------------------
Run id: Y4RXLR
Log directory: /tmp/tflearn_logs/
Traceback (most recent call last):
  File "siamese_net.py", line 260, in <module>
    model.fit([X1,X2], Y,n_epoch=5)
  File "/home/mss/anaconda3/lib/python3.5/site-packages/tflearn/models/dnn.py", line 214, in fit
    callbacks=callbacks)
  File "/home/mss/anaconda3/lib/python3.5/site-packages/tflearn/helpers/trainer.py", line 282, in fit
    self.summ_writer, self.coord)
  File "/home/mss/anaconda3/lib/python3.5/site-packages/tflearn/helpers/trainer.py", line 706, in initialize_fit
    self.n_train_samples = len(get_dict_first_element(feed_dict))
TypeError: object of type 'Tensor' has no len()
```
______________________________________________________________
The shape of my data:
```
X1: (205674, 16, 16)
X2: (205674, 16, 16)
Y:  (205674, 2)
```
__________________________________________________________________
```
import numpy as np
import tensorflow as tf
import tflearn


def tower_network(reuse=False):
    network = tflearn.input_data(shape=(None, 16, 16, 1))
    network = tflearn.conv_2d(network, 32, 1, activation='relu', reuse=reuse, scope='conv1')
    ...
    network = tflearn.conv_2d(network, 128, 1, activation='relu', reuse=reuse, scope='conv9')
    network = tflearn.max_pool_2d(network, 2, strides=2)
    network = tflearn.fully_connected(network, 512, activation='relu', reuse=reuse, scope='fc1')
    network = tflearn.dropout(network, 0.5)
    return network


def similarity_network(net1, net2):
    num_classes = 2
    network = tflearn.merge([net1, net2], mode='concat', axis=1, name='Merge')  # merge net1 and net2 networks
    # fully connected layers
    network = tflearn.fully_connected(network, 2048, activation='relu')
    network = tflearn.dropout(network, 0.5)
    network = tflearn.fully_connected(network, 2048, activation='relu')
    network = tflearn.dropout(network, 0.5)
    # softmax layer
    network = tflearn.fully_connected(network, num_classes, activation='softmax')
    return network


if __name__ == "__main__":
    X1, _ = tflearn.data_utils.image_preloader(matchFilePath, image_shape=(16, 16),
                                               mode='file', categorical_labels=True, normalize=False,
                                               grayscale=True, files_extension=None)
    X2, Y = tflearn.data_utils.image_preloader(mismatchFilePath, image_shape=(16, 16),
                                               mode='file', categorical_labels=True, normalize=False,
                                               grayscale=True, files_extension=None)
    X1 = tf.reshape(np.asarray(X1), [-1, 16, 16, 1])
    X2 = tf.reshape(np.asarray(X2), [-1, 16, 16, 1])
    # LABEL
    Y = np.asarray(Y)

    # tower networks
    net1 = tower_network()
    net2 = tower_network(reuse=True)

    # similarity network
    network = similarity_network(net1, net2)

    # output layer
    # network = tflearn.regression(network, optimizer='sgd', loss='hinge_loss', learning_rate=0.02)
    network = tflearn.regression(network, optimizer='sgd', loss='categorical_crossentropy', learning_rate=0.02)

    # training
    model = tflearn.DNN(network)
    model.fit([X1, X2], Y, n_epoch=5)
```
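For what it's worth, the traceback points at the inputs rather than the network: `model.fit` receives symbolic `tf.Tensor` objects (produced by `tf.reshape`), and tflearn's trainer calls `len()` on its feed data, which a `Tensor` does not support. A sketch of doing the reshape on the NumPy side instead (assumption: keeping `X1`/`X2` as `ndarray`s avoids the `len()` failure; dummy zero patches stand in for the `image_preloader` outputs):

```python
import numpy as np

# Dummy stand-ins for the lists returned by image_preloader (16x16 patches).
X1_list = [np.zeros((16, 16)) for _ in range(4)]
X2_list = [np.zeros((16, 16)) for _ in range(4)]

# Reshape with NumPy instead of tf.reshape: ndarrays support len(), so they
# can be handed to model.fit([X1, X2], Y) without the TypeError above.
X1 = np.asarray(X1_list).reshape(-1, 16, 16, 1)
X2 = np.asarray(X2_list).reshape(-1, 16, 16, 1)
print(X1.shape, len(X1))  # (4, 16, 16, 1) 4
```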
Any help would be appreciated | closed | 2017-01-03T08:54:11Z | 2017-06-15T07:56:34Z | https://github.com/tflearn/tflearn/issues/544 | [] | mairasaboia | 3 |
open-mmlab/mmdetection | pytorch | 11,536 | TypeError: 'bool' object is not callable | # EigenCAM method
```
python demo/vis_cam.py demo/demo.jpg configs/retinanet/retinanet_r50_fpn_1x_coco.py retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth --method eigencam
```
I used this command, but got this error:
```
TypeError: 'bool' object is not callable
```
`--target-layers` is set to `backbone.layer3`, and the error happens when the code reaches that layer (backbone.layer3).
| open | 2024-03-09T14:43:54Z | 2024-03-09T14:44:10Z | https://github.com/open-mmlab/mmdetection/issues/11536 | [] | facias914 | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,104 | KS-Statistic Plot Visualizer | As mentioned in issue #1091, it would be nice to have a KS-statistic plot visualizer within the yellowbrick library.
**Example:**
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
cancer = load_breast_cancer()
X, y = cancer['data'][:, :4], cancer['target']
# Using only first four features from breast cancer dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)
model = LogisticRegression()
```
**Currently**, to draw the KS plot, I use the *scikitplot* library, as follows:
```python
import scikitplot as skplt
import matplotlib.pyplot as plt
y_probas = model.fit(X_train, y_train).predict_proba(X_test)
skplt.metrics.plot_ks_statistic(y_test, y_probas)
plt.show()
```
**Expected Yellowbrick code:**
```python
from yellowbrick.classifier import KSPlot
visualizer = KSPlot(model)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
```
**Output**
The graph below is the output of the scikitplot library, but it would be nice to have similar output from the yellowbrick library.

| open | 2020-10-04T17:09:37Z | 2021-04-02T00:19:27Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1104 | [
"type: feature"
] | des137 | 2 |
localstack/localstack | python | 12,197 | feature request: eventbridge pipes input transformers | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Feature description
To maximise the benefit of EventBridge Pipes as a service, we would like to utilise input transformers. The use case we have is streaming data from Kinesis to target destinations with EventBridge Pipes. Input transformers would allow us to save cost by only forwarding the `data` property of a Kinesis message to the downstream target.
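For context, the requested feature maps to the input template of the Pipes API. A sketch of what a pipe definition using it might look like (field names are assumptions based on the AWS EventBridge Pipes `CreatePipe` API, and LocalStack would need to honor `TargetParameters.InputTemplate`; the `<$.data>` placeholder would be substituted with each Kinesis record's `data` payload):

```python
import json

# Hypothetical CreatePipe request body: forward only the Kinesis record's
# `data` field downstream instead of the full event envelope.
create_pipe_kwargs = {
    "Name": "kinesis-to-target",
    "Source": "arn:aws:kinesis:us-east-1:000000000000:stream/source-stream",
    "Target": "arn:aws:sqs:us-east-1:000000000000:target-queue",
    "TargetParameters": {
        # Replaced at runtime with the record's data payload.
        "InputTemplate": "<$.data>",
    },
}
print(json.dumps(create_pipe_kwargs, indent=2))
```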
### 🧑💻 Implementation
_No response_
### Anything else?
The documentation currently marks this as a limitation https://docs.localstack.cloud/user-guide/aws/pipes/#current-limitations
> Lack of input transformers. | open | 2025-01-28T17:16:13Z | 2025-01-29T19:00:47Z | https://github.com/localstack/localstack/issues/12197 | [
"type: feature",
"status: accepted",
"aws:pipes"
] | alexbaileyuk | 1 |
Teemu/pytest-sugar | pytest | 30 | Support for different reporters | We should implement support for different reporting styles. A great example where this is already implemented is [Mocha](https://github.com/visionmedia/mocha/tree/master/lib/reporters).
| closed | 2014-02-08T13:40:43Z | 2020-08-25T18:20:45Z | https://github.com/Teemu/pytest-sugar/issues/30 | [] | Teemu | 0 |
tensorflow/tensor2tensor | deep-learning | 1,113 | AssertionError: desc2code.py | ### Description
Whenever I try to run t2t-datagen with the problem "programming_desc2code_py", it returns an AssertionError and the following message:
tensorflow.python.framework.errors_impl.PermissionDeniedError: /mnt/disks; Permission denied
### Environment information
Google Cloud Platform
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2018-10-04T22:36:28Z | 2018-10-04T22:36:28Z | https://github.com/tensorflow/tensor2tensor/issues/1113 | [] | avkondepudi | 0 |
pytorch/pytorch | machine-learning | 149,335 | torch.matrix_exp gets stuck on GPU | ### 🐛 Describe the bug
Running `torch.matrix_exp` with [a tensor](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) works on CPU but gets stuck on GPU. I am providing a [colab](https://colab.research.google.com/drive/1RLd1q35-xHHANfu7YqLBu69Uv6gONROk?usp=sharing) with a code snippet to reproduce the problem using `concurrent.futures`, but I initially encountered it without this code snippet. This is just to demonstrate with a timeout that it gets stuck, and the code remains stuck even after the thread is attempted to be killed. It looks like the CUDA version encounters some sort of race condition. To run with colab, please upload [this file](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) to the files tab first.
Minimal reproducible code:
```python
import torch, sys
from safetensors import safe_open
import concurrent.futures

tensors = {}
with safe_open("matrix_exp.safetensors", framework="pt", device='cpu') as f:
    for k in f.keys():
        tensors[k] = f.get_tensor(k)

def matrix_exp_with_timeout(tensor, timeout=10):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        future = executor.submit(torch.matrix_exp, tensor)
        try:
            result = future.result(timeout=timeout)
            print("Executed successfully")
            return result
        except concurrent.futures.TimeoutError:
            print("Matrix exponential operation took too long and was terminated.")
            future.cancel()
            sys.exit(1)

timeout = 10  # seconds
print(tensors['tensor'].shape, tensors['tensor'].dtype)  # torch.Size([3, 224, 224]) torch.float32
out_cpu = matrix_exp_with_timeout(tensors['tensor'], timeout=timeout)  # Executed successfully
out_gpu = matrix_exp_with_timeout(tensors['tensor'].cuda(), timeout=timeout)  # Timeout, still stuck
```
To run locally, download the [safetensors file](https://drive.google.com/file/d/1_BP6SZMKbQqMJ1nikaKrhuGneUsrjAE-/view?usp=sharing) and keep alongside the code.
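For reference, `torch.matrix_exp` computes the matrix exponential e^A, and the CPU and CUDA paths should agree. A tiny pure-Python Taylor-series version shows the quantity being computed (illustration only; PyTorch uses a Pade/scaling-and-squaring scheme internally, not a raw Taylor sum):

```python
import math

def matexp2(a, terms=30):
    """Matrix exponential of a 2x2 matrix via truncated Taylor series."""
    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity term, A^0 / 0!
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = matmul(term, a)                      # A^n numerator
        term = [[v / n for v in row] for row in term]  # divide by n => A^n / n!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

m = matexp2([[1.0, 0.0], [0.0, 2.0]])
print(m[0][0], m[1][1])  # approximately e and e**2
```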
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla T4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
Stepping: 3
BogoMIPS: 4000.38
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32 KiB (1 instance)
L1i cache: 32 KiB (1 instance)
L2 cache: 1 MiB (1 instance)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.5.3.2
[pip3] nvidia-cuda-cupti-cu12==12.5.82
[pip3] nvidia-cuda-nvrtc-cu12==12.5.82
[pip3] nvidia-cuda-runtime-cu12==12.5.82
[pip3] nvidia-cudnn-cu12==9.3.0.75
[pip3] nvidia-cufft-cu12==11.2.3.61
[pip3] nvidia-curand-cu12==10.3.6.82
[pip3] nvidia-cusolver-cu12==11.6.3.83
[pip3] nvidia-cusparse-cu12==12.5.1.3
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
cc @ptrblck @msaroufim @eqy @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | open | 2025-03-17T18:41:03Z | 2025-03-20T19:26:40Z | https://github.com/pytorch/pytorch/issues/149335 | [
"needs reproduction",
"module: cuda",
"triaged",
"module: deadlock",
"module: linear algebra"
] | jiren-the-gray | 0 |
hankcs/HanLP | nlp | 1,193 | NER recognition of place names is disturbed by high-frequency words (e.g. 市长, "mayor") | <!--
The notes and version number are required; otherwise there will be no reply. If you hope for a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
 - [Home page documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that the open-source community is a free community formed out of shared interest and bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm the items above.
## Version
<!-- For a release build, state the jar filename without its extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: 1.7.3
The version I am using is: 1.7.3
<!-- The items above are required; the rest is free-form -->
## My question
<!-- Please describe the problem in detail; the more detailed, the more likely it is to be solved -->
When using NER to recognize person, place, and organization names, the result is often disturbed by the high-frequency word 市长 ("mayor"). For example, for 织造有限公司公司位于江阴市长泾镇, the NER extraction result is (looking only at the characters 江阴市长泾镇):
### Actual output
江阴/ns 市长/nnt 泾镇/ns
### Expected output
江阴市/ns 长泾镇/ns
How can I modify the dictionary to get the expected result?
I tried modifying the custom dictionary. For the example sentence from the web, 攻城狮逆袭单身狗,迎娶白富美,走上人生巅峰, defining 攻城狮 in the dictionary does change the segmentation result, but it has no effect on my example above.
#### Remarks
To enable place-name and organization-name extraction, I added the following two switches to the segmenter: enablePlaceRecognize(True).enableOrganizationRecognize(True)
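To illustrate the ambiguity: the span 江阴市长泾镇 can segment either as 江阴市 / 长泾镇 or around the high-frequency word 市长. A toy greedy longest-match segmenter over a small lexicon resolves it as desired (illustration only; HanLP's actual segmenter is statistical, so a plain custom-dictionary entry may not be enough to override it):

```python
def longest_match_segment(text, lexicon):
    """Toy greedy longest-match segmenter (not HanLP's algorithm)."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in lexicon or j == i + 1:  # fall back to one char
                out.append(text[i:j])
                i = j
                break
    return out

lexicon = {"江阴市", "长泾镇", "市长"}
print(longest_match_segment("江阴市长泾镇", lexicon))  # → ['江阴市', '长泾镇']
```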
| closed | 2019-06-05T08:27:16Z | 2020-01-01T10:49:34Z | https://github.com/hankcs/HanLP/issues/1193 | [
"ignored"
] | hgjt8989 | 4 |
microsoft/nni | machine-learning | 5,763 | gpunum | closed | 2024-03-22T04:13:46Z | 2024-03-22T04:14:37Z | https://github.com/microsoft/nni/issues/5763 | [] | fantasy0905 | 0 | |
waditu/tushare | pandas | 1,225 | Futures API cannot retrieve data | The fut_mapping API cannot retrieve data:

The fut_daily API cannot retrieve the latest data; 2019-12-13 is missing:

Tushare's APIs are very good to use. I hope the timeliness of the daily data can be improved, along with the data quality (there are many gaps in the continuous futures data). If a higher reliability level could be reached, I believe many people would be willing to pay for it. | closed | 2019-12-15T00:39:47Z | 2019-12-16T01:31:25Z | https://github.com/waditu/tushare/issues/1225 | [] | esun2 | 2 |
huggingface/datasets | machine-learning | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | ### Describe the bug
Background: digital images are often represented as a (Height, Width, Channel) tensor. The same holds for Hugging Face datasets that contain images. These images are loaded as Pillow images, which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS.
I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM.
### Steps to reproduce the bug
Below is a minimal example using two methods to get the desired output, neither of which works:
```py
import tensorflow as tf
import datasets
import numpy as np
ds = datasets.load_dataset("project-sloth/captcha-images")
to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)}
ds_gray = ds.map(to_gray_pillow)
# Alternatively
ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow")
to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)}
ds_gray = ds.map(to_gray_tf)
```
### Expected behavior
I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape.
### Environment info
datasets 2.21.0
python tested with both 3.11 and 3.12
host os : linux | open | 2024-09-01T13:55:41Z | 2024-09-02T10:34:53Z | https://github.com/huggingface/datasets/issues/7134 | [] | navidmafi | 0 |
onnx/onnx | scikit-learn | 6,028 | When is the next release, 1.16.0? | ### Question
When is the next release going to be?
And are the noted CVEs addressed?
### Notes
I am detecting these CVEs (affected version 1.15.0, no fixed version yet):
- CVE-2024-27318: https://avd.aquasec.com/nvd/cve-2024-27318
- CVE-2024-27319: https://avd.aquasec.com/nvd/cve-2024-27319
I also notice version 1.16.0 is 23 days past due. | closed | 2024-03-20T18:59:20Z | 2024-03-26T15:53:56Z | https://github.com/onnx/onnx/issues/6028 | [
"question"
] | benjamin-kaiser | 3 |
d2l-ai/d2l-en | computer-vision | 2,044 | Pin the dependencies in setup.py | The d2l library has various dependencies which are currently unpinned and install the latest (or collecting the cached) version of that dependency library. See below:
https://github.com/d2l-ai/d2l-en/blob/f742ee4b4d503187e6ced5dcc9ae54b955c7b0e4/setup.py#L4-L11
This leads to non-reproducible and unintentional bugs sometimes.
For example:
1. This issue in [section linear regression](http://preview.d2l.ai/d2l-en/master/chapter_linear-networks/linear-regression.html) was not easy to debug since the code ran perfectly on CI, as can be seen in the [preview plots](http://preview.d2l.ai/d2l-en/master/chapter_linear-networks/linear-regression.html#the-normal-distribution-and-squared-loss) for MXNet. CI apparently has an older version of NumPy and my environment has a newer `NumPy==1.22.2`.
The same notebook with the newer version of NumPy raises the following error about `np.asarray`:
```bash
ValueError: setting an array element with a sequence. The requested array would exceed the maximum number of dimension of 32.
```
`np.asarray` is used in `matplotlib` (see [L1310](https://github.com/matplotlib/matplotlib/blob/190b6bd25a82f893d30adcaac8343e65ed035eec/lib/matplotlib/cbook/__init__.py#L1310)), which then percolates down internally to the `d2l.plot` function.
2. Again we have a similar reason behind PR #1966 which was also not easy to debug. It works well with `numpy<=1.19.5` but starting `numpy>=1.20` it throws a `TypeError` which is actually expected if you try to cast a torch tensor on a GPU device to numpy directly without moving the tensor to CPU first.
Yes, these are easy-to-fix bugs, but finding the reason behind the error becomes extremely tough when we have these inconsistent dependencies. If we pin the dependency versions, this will never be a problem; and if a problem does arise, it will be easy to pinpoint.
## Pitch
I'll send two separate PRs to fix those two bugs, one each in MXNet and PyTorch (#1966 is already up, so it can be merged now that we know the reason), with the latest numpy version.
I'll then send a PR to pin these dependencies, which we can update manually every 6 months or so.
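The proposed change amounts to swapping open-ended requirements for exact pins. A minimal sketch of what the pinned list could look like (package names mirror typical d2l dependencies; the version numbers are illustrative assumptions, not the project's actual choices):

```python
# Unpinned: installs whatever is latest at build time (non-reproducible).
unpinned = ["jupyter", "numpy", "matplotlib", "requests", "pandas"]

# Pinned: every entry carries an exact "==" specifier, so every install
# resolves to the same versions until the pins are deliberately bumped.
requirements = [
    "jupyter==1.0.0",
    "numpy==1.21.5",
    "matplotlib==3.4.0",
    "requests==2.25.1",
    "pandas==1.2.4",
]

assert all("==" in spec for spec in requirements)
print(requirements)
```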
All of this was caught during the current CI overhaul and actually needs fixing for updating the frameworks to their latest versions later.
cc @astonzhang @cheungdaven | closed | 2022-02-14T18:07:36Z | 2022-09-09T21:47:32Z | https://github.com/d2l-ai/d2l-en/issues/2044 | [] | AnirudhDagar | 1 |
tqdm/tqdm | jupyter | 812 | Regression introduced for parallel bars (examples/parallel_bars.py does not run correctly in 4.35.0) | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
# 4.35.0 3.7.3 (default, Mar 27 2019, 09:23:32)
# [Clang 9.0.0 (clang-900.0.39.2)] darwin
```

I believe a regression must have been introduced: while trying to debug #811, I wanted to see if any of the existing parallel examples worked for me (and they don't).
It turns out v4.35.0 has a regression introduced in https://github.com/tqdm/tqdm/commit/32cde6fdd22d7e3e2ca86556f494e33a7f3683be#diff-90bdd6b2186b18eddc5afde5b4fcb369 that causes the problems I'm seeing. If I, instead, run `pip install tqdm==4.23.2` and re-run:

It works.
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2019-09-11T20:36:38Z | 2019-09-11T21:06:45Z | https://github.com/tqdm/tqdm/issues/812 | [] | kratsg | 1 |
sngyai/Sequoia | pandas | 18 | Win10 installation problem | Thanks for sharing the code. I'm not very familiar with this environment; could you tell me what exactly this error points to and how I should fix it? Thanks.
Installation error:
Building wheels for collected packages: pandas
Building wheel for pandas (PEP 517) ... done
Created wheel for pandas: filename=pandas-1.1.0-cp39-cp39-win_amd64.whl size=8350439 sha256=78984d7de1bf452a7cdb5d85e30600176d1fed0e906c5b555e432afbd1aea4cf
Stored in directory: c:\users\jiash\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local\pip\cache\wheels\9f\4b\40\9900a41b285e2e2b08f1e89284b308a6c880347296d9bb0308
Successfully built pandas
Installing collected packages: soupsieve, beautifulsoup4, bs4, certifi, chardet, idna, lxml, numpy, numexpr, pytz, six, python-dateutil, pandas, urllib3, requests, schedule, simplejson, TA-Lib, tables, threadpool, websocket-client, tushare, xlrd
Running setup.py install for bs4 ... done
WARNING: The script chardetect.exe is installed in 'C:\Users\jiash\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script f2py.exe is installed in 'C:\Users\jiash\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Running setup.py install for numexpr ... done
Running setup.py install for simplejson ... done
Running setup.py install for TA-Lib ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\jiash\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jiash\\AppData\\Local\\Temp\\pip-install-au05ctr3\\ta-lib\\setup.py'"'"'; __file__='"'"'C:\\Users\\jiash\\AppData\\Local\\Temp\\pip-install-au05ctr3\\ta-lib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jiash\AppData\Local\Temp\pip-record-4f982w7j\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\jiash\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Include\TA-Lib'
cwd: C:\Users\jiash\AppData\Local\Temp\pip-install-au05ctr3\ta-lib\
Complete output (520 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\talib
copying talib\abstract.py -> build\lib.win-amd64-3.9\talib
copying talib\deprecated.py -> build\lib.win-amd64-3.9\talib
copying talib\stream.py -> build\lib.win-amd64-3.9\talib
copying talib\test_abstract.py -> build\lib.win-amd64-3.9\talib
copying talib\test_data.py -> build\lib.win-amd64-3.9\talib
copying talib\test_func.py -> build\lib.win-amd64-3.9\talib
copying talib\test_pandas.py -> build\lib.win-amd64-3.9\talib
copying talib\test_stream.py -> build\lib.win-amd64-3.9\talib
copying talib\__init__.py -> build\lib.win-amd64-3.9\talib
running build_ext
skipping 'talib\_ta_lib.c' Cython extension (up-to-date)
building 'talib._ta_lib' extension
creating build\temp.win-amd64-3.9
creating build\temp.win-amd64-3.9\Release
creating build\temp.win-amd64-3.9\Release\talib
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\jiash\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\numpy\core\include -Ic:\ta-lib\c\include -IC:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.496.0_x64__qbz5n2kfra8p0\include -IC:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.496.0_x64__qbz5n2kfra8p0\include -IC:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE -IC:\Program Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt -IC:\Program Files (x86)\Windows Kits\8.1\include\shared -IC:\Program Files (x86)\Windows Kits\8.1\include\um -IC:\Program Files (x86)\Windows Kits\8.1\include\winrt /Tctalib\_ta_lib.c /Fobuild\temp.win-amd64-3.9\Release\talib\_ta_lib.obj
_ta_lib.c
c:\users\jiash\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
talib\_ta_lib.c(6775): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(6780): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(6964): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7140): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7316): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7321): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7473): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7645): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(7975): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(8339): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(8526): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(8871): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(20029): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(20176): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(20484): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(20781): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(20935): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(21749): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(21885): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22021): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22157): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22293): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22602): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22765): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22770): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22775): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(22991): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23001): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23011): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23177): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23538): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23543): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23694): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(23830): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24138): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24300): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24447): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24593): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24729): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(24865): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(25016): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(25189): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(25354): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(25500): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(25797): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26112): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26277): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26445): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26450): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26591): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26727): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26863): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(26999): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(27135): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(27875): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28103): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28310): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28315): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28325): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28543): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28548): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28755): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28760): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(28765): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(29060): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(29207): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(29510): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(29804): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(29940): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30076): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30414): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30419): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30424): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30589): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(30910): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(31064): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39049): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39054): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39237): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39409): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39581): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39586): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39728): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(39895): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(40198): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(40557): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(40736): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(41073): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(52156): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(52293): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(52574): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(52854): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(53004): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(53732): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(53858): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(53984): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54110): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54236): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54515): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54668): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54673): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54678): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54886): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54896): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(54906): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55064): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55409): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55414): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55558): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55684): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(55975): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56136): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56273): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56412): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56538): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56664): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56807): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(56972): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(57133): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(57272): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(57552): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(57856): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58017): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58178): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58183): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58314): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58440): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58566): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58692): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(58818): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(59514): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(59722): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(59919): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(59924): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(59934): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60150): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60155): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60360): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60365): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60370): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60650): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(60787): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61060): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61340): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61466): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61592): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61916): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61921): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(61926): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(62087): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(62394): warning C4146: unary minus operator applied to unsigned type, result still unsigned
talib\_ta_lib.c(62544): warning C4146: unary minus operator applied to unsigned type, result still unsigned
C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\ta-lib\c\lib /LIBPATH:C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.496.0_x64__qbz5n2kfra8p0\libs /LIBPATH:C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.496.0_x64__qbz5n2kfra8p0\PCbuild\amd64 /LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64 /LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucrt\x64 /LIBPATH:C:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64 ta_libc_cdr.lib /EXPORT:PyInit__ta_lib build\temp.win-amd64-3.9\Release\talib\_ta_lib.obj /OUT:build\lib.win-amd64-3.9\talib\_ta_lib.cp39-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.9\Release\talib\_ta_lib.cp39-win_amd64.lib
_ta_lib.obj : warning LNK4197: export 'PyInit__ta_lib' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.9\Release\talib\_ta_lib.cp39-win_amd64.lib and object build\temp.win-amd64-3.9\Release\talib\_ta_lib.cp39-win_amd64.exp
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLBREAKAWAY_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLEVENINGSTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSEPARATINGLINES_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDRAGONFLYDOJI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SetOptInputParamReal
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMORNINGDOJISTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_T3_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSHOOTINGSTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLKICKING
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINUS_DM
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3WHITESOLDIERS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTRISTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_SINE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_VAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSPINNINGTOP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLUPSIDEGAP2CROWS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_COSH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MININDEX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLRISEFALL3METHODS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLPIERCING_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BOP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AROON_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRANGE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHARAMI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_EXP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SUB_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SetUnstablePeriod
_ta_lib.obj : error LNK2001: unresolved external symbol TA_FuncTableAlloc
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLONNECK
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLKICKING_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3OUTSIDE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDRAGONFLYDOJI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WCLPRICE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MEDPRICE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLONNECK_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLADDERBOTTOM_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_ANGLE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ACOS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINMAXINDEX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MFI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINMAX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_RSI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GroupTableFree
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLUPSIDEGAP2CROWS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTASUKIGAP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_NATR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDOJISTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_PHASOR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_Shutdown
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSHORTLINE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetFuncHandle
_ta_lib.obj : error LNK2001: unresolved external symbol TA_FuncTableFree
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ParamHolderFree
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAVP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3LINESTRIKE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCONCEALBABYSWALL_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIGHWAVE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMARUBOZU_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADOSC
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMARUBOZU
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CCI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMATHOLD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLEVENINGDOJISTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLBREAKAWAY
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSPINNINGTOP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PPO_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BBANDS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_INTERCEPT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIDPRICE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACDEXT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLADDERBOTTOM
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CCI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MULT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCOUNTERATTACK
_ta_lib.obj : error LNK2001: unresolved external symbol TA_FLOOR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLIDENTICAL3CROWS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_DCPERIOD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3STARSINSOUTH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ATAN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADXR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DEMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHANGINGMAN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SIN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_TRENDMODE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSTALLEDPATTERN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLADVANCEBLOCK_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_OBV
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLIDENTICAL3CROWS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHARAMICROSS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_FLOOR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PPO
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLINVERTEDHAMMER_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WCLPRICE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRIMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACDEXT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLINNECK
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SUB
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TANH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLONGLEGGEDDOJI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDARKCLOUDCOVER_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AROON
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STDDEV
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLUNIQUE3RIVER
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ATR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCONCEALBABYSWALL
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINUS_DI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_EMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ULTOSC_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL2CROWS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LOG10
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTAKURI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCLOSINGMARUBOZU
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SetOptInputParamInteger
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLONGLINE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_ANGLE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROC
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CEIL_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLONGLINE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMATCHINGLOW_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_COSH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLKICKINGBYLENGTH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PLUS_DI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMATHOLD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3OUTSIDE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIGHWAVE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GroupTableAlloc
_ta_lib.obj : error LNK2001: unresolved external symbol TA_EMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADXR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MEDPRICE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLENGULFING_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MOM_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MFI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TANH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetOptInputParameterInfo
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCHF
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_INTERCEPT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLGRAVESTONEDOJI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTHRUSTING_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_APO_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SINH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINMAX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BOP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_SINE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_TRENDLINE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3BLACKCROWS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ParamHolderAlloc
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLABANDONEDBABY
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetFuncInfo
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DEMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ASIN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAVP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRANGE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DIV
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_PHASOR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_COS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_Initialize
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SetCompatibility
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_TRENDMODE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMORNINGDOJISTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_EXP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLLONGLEGGEDDOJI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIDPOINT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BETA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLGAPSIDESIDEWHITE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLINVERTEDHAMMER
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AVGPRICE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMORNINGSTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLBELTHOLD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STDDEV_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAXINDEX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MOM
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TSF_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHAMMER_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TAN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIKKAKEMOD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MULT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROC_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHARAMI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSHORTLINE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ATR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_DCPHASE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AVGPRICE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLGRAVESTONEDOJI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SINH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TYPPRICE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLEVENINGDOJISTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLENGULFING
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACDFIX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIKKAKE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetOutputParameterInfo
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLRICKSHAWMAN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TYPPRICE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LOG10_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ACOS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRIMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHARAMICROSS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CMO_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDARKCLOUDCOVER
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PLUS_DM_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHANGINGMAN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SQRT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AROONOSC
_ta_lib.obj : error LNK2001: unresolved external symbol TA_RestoreCandleDefaultSettings
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MACDFIX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTRISTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CEIL
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetInputParameterInfo
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3BLACKCROWS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSHOOTINGSTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SQRT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SUM
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHOMINGPIGEON
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SIN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCHRSI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLRICKSHAWMAN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMATCHINGLOW
_ta_lib.obj : error LNK2001: unresolved external symbol TA_COS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SAREXT_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLABANDONEDBABY_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_SLOPE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTASUKIGAP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSTALLEDPATTERN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDOJI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLGAPSIDESIDEWHITE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLXSIDEGAP3METHODS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINUS_DI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL2CROWS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LN_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ULTOSC
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DIV_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLBELTHOLD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCP_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCHRSI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_STOCHF_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3INSIDE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetLookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3STARSINSOUTH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSTICKSANDWICH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINUS_DM_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLKICKINGBYLENGTH_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLUNIQUE3RIVER_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TAN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCP
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetUnstablePeriod
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADD_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLADVANCEBLOCK
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHOMINGPIGEON_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTHRUSTING
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLTAKURI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CORREL
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SUM_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIKKAKE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BETA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ATAN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_DCPHASE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TEMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_VAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCR100
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PLUS_DI_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLEVENINGSTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MINMAXINDEX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSTICKSANDWICH
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRIX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLINNECK_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3LINESTRIKE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WILLR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_TRENDLINE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHAMMER
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIDPRICE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_KAMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLHIKKAKEMOD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_LINEARREG_SLOPE_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_PLUS_DM
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MININDEX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_KAMA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3WHITESOLDIERS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAXINDEX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLPIERCING
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetCompatibility
_ta_lib.obj : error LNK2001: unresolved external symbol TA_OBV_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ASIN
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TEMA_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_DX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SAREXT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLXSIDEGAP3METHODS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MA
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TRIX_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CMO
_ta_lib.obj : error LNK2001: unresolved external symbol TA_APO
_ta_lib.obj : error LNK2001: unresolved external symbol TA_WILLR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_TSF
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLMORNINGSTAR
_ta_lib.obj : error LNK2001: unresolved external symbol TA_SetCandleSettings
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCLOSINGMARUBOZU_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MAX
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDOJISTAR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_T3
_ta_lib.obj : error LNK2001: unresolved external symbol TA_AROONOSC_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLSEPARATINGLINES
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDL3INSIDE
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ROCR100_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_RSI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLDOJI
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CORREL_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_MIDPOINT
_ta_lib.obj : error LNK2001: unresolved external symbol TA_ADOSC_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_NATR_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLCOUNTERATTACK_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_HT_DCPERIOD
_ta_lib.obj : error LNK2001: unresolved external symbol TA_BBANDS
_ta_lib.obj : error LNK2001: unresolved external symbol TA_CDLRISEFALL3METHODS_Lookback
_ta_lib.obj : error LNK2001: unresolved external symbol TA_GetVersionString
build\lib.win-amd64-3.9\talib\_ta_lib.cp39-win_amd64.pyd : fatal error LNK1120: 339 unresolved externals
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exit code 1120
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\jiash\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\jiash\\AppData\\Local\\Temp\\pip-install-au05ctr3\\ta-lib\\setup.py'"'"'; __file__='"'"'C:\\Users\\jiash\\AppData\\Local\\Temp\\pip-install-au05ctr3\\ta-lib\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\jiash\AppData\Local\Temp\pip-record-4f982w7j\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\jiash\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Include\TA-Lib' Check the logs for full command output.
WARNING: You are using pip version 20.2.3; however, version 21.0.1 is available.
You should consider upgrading via the 'C:\Users\jiash\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe -m pip install --upgrade pip' command. | closed | 2021-02-18T10:16:48Z | 2021-12-06T09:04:54Z | https://github.com/sngyai/Sequoia/issues/18 | [] | SkyJiashu | 1 |
matplotlib/matplotlib | matplotlib | 29,227 | [Bug]: Introductory example on the pyplot API page does not show - missing plt.show() | ### Bug summary
https://matplotlib.org/3.9.3/api/pyplot_summary.html
The first example of Python code at the above URL does not work after adding the Matplotlib module in PyCharm CE 2024.2.5.
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
unless it is edited to this...
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 5, 0.1)
y = np.sin(x)
fig = plt.figure(figsize=(10, 7))
plt.plot(x, y)
plt.show()
```
### Code for reproduction
```Python
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 5, 0.1)
y = np.sin(x)
plt.plot(x, y)
```
### Actual outcome
No figure was produced in PyCharm CE 2024.2.5.
### Expected outcome
<img width="1007" alt="Screenshot 2024-12-04 at 17 46 27" src="https://github.com/user-attachments/assets/0ed0ecda-a6c0-4e40-a6ec-3605bfc06394">
### Additional information
This is the "fix":
```
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0, 5, 0.1)
y = np.sin(x)
fig = plt.figure(figsize=(10, 7))
plt.plot(x, y)
plt.show()
```
### Operating system
MacOS 12.7.6
### Matplotlib Version
3.9.3
### Matplotlib Backend
Backend macosx is interactive backend. Turning interactive mode on. macosx
### Python version
```python --version``` crashes in PyCharm CE 2024.2.5 console.
### Jupyter version
_No response_
### Installation
None | closed | 2024-12-04T17:53:31Z | 2024-12-06T18:22:01Z | https://github.com/matplotlib/matplotlib/issues/29227 | [
"Documentation"
] | sjlearmonth | 10 |
viewflow/viewflow | django | 427 | asking information about demo | Hi,
Can we have the code of this demo page somewhere ?
https://demo.viewflow.io/intro/vf_stats/
I can't find it. | closed | 2024-03-07T15:32:24Z | 2024-04-11T06:54:00Z | https://github.com/viewflow/viewflow/issues/427 | [
"request/question"
] | D0wn3r | 1 |
microsoft/MMdnn | tensorflow | 676 | ValueError: MXNet to Keras: Layer weight shape (7, 7, 1, 64) not compatible with provided weight shape (7, 7, 64, 1) | Platform (like ubuntu 16.04/win10):
win10
Python version:
3.6
Source framework with version (like Tensorflow 1.4.1 with GPU):
MXNet
Destination framework with version (like CNTK 2.3 with GPU):
Keras 2.2.4
Pre-trained model path (webpath or webdisk path):
N/A, custom model
Running scripts:
I have an MXNet image classification model (B&W images) that was trained using SageMaker, and I'm trying to convert it to Keras for use outside the platform. I am able to convert the model (with the .params file, symbol.json, and model-shapes.json) to IR format and then to a Keras code snippet (.py), but when I try to convert the code snippet and weights to a .h5 file, I receive the following error:
ValueError: Layer weight shape (7, 7, 1, 64) not compatible with provided weight shape (7, 7, 64, 1)
Please help!
| open | 2019-06-10T20:23:31Z | 2019-06-26T13:22:30Z | https://github.com/microsoft/MMdnn/issues/676 | [] | emilyclaps | 8 |
geopandas/geopandas | pandas | 3,276 | BUG: set_precision() doesn't work | set_precision doesn't work
#### Code Sample, a copy-pastable example
```
from shapely import LineString, Point
import geopandas
s = geopandas.GeoSeries(
[
Point(0.9, 0.9),
Point(0.9, 0.9, 0.9),
LineString([(0, 0), (0, 0.1), (0, 1), (1, 1)]),
LineString([(0, 0), (0, 0.1), (0.1, 0.1)])
],
)
s.set_precision(1)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_18012\98427605.py in ?()
----> 1 s.set_precision(1)
~\AppData\Roaming\Python\Python312\site-packages\pandas\core\generic.py in ?(self, name)
6200 and name not in self._accessors
6201 and self._info_axis._can_hold_identifiers_and_holds_name(name)
6202 ):
6203 return self[name]
-> 6204 return object.__getattribute__(self, name)
AttributeError: 'GeoSeries' object has no attribute 'set_precision'
```
#### Problem description
Calling `set_precision` on a GeoSeries raises `AttributeError` (see the traceback above) instead of returning the rounded geometries.
#### Expected Output
```
0 POINT (1 1)
1 POINT Z (1 1 0.9)
2 LINESTRING (0 0, 0 1, 1 1)
3 LINESTRING Z EMPTY
dtype: geometry
```
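As a workaround until a geopandas version that provides `GeoSeries.set_precision` is installed, shapely 2.x's top-level `set_precision` can be applied to the geometries directly. A minimal sketch (assumes shapely ≥ 2.0; mapping it onto a GeoSeries via `apply` is illustrative, not geopandas API):

```python
from shapely import Point, set_precision

# shapely 2.x exposes a vectorized set_precision function; it can be
# mapped over a GeoSeries, e.g. s.apply(lambda g: set_precision(g, grid_size=1)).
p = set_precision(Point(0.9, 0.9), grid_size=1)
print(p)  # POINT (1 1)
```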
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.12.1 (tags/v3.12.1:2305ca5, Dec 7 2023, 22:03:25) [MSC v.1937 64 bit (AMD64)]
executable : [c:\Program](file:///C:/Program) Files\Python312\python.exe
machine : Windows-11-10.0.22631-SP0
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.3
GEOS lib : None
GDAL : 3.6.4
GDAL data dir: [xxx]
PROJ : 9.3.0
PROJ data dir: [xxx]
PYTHON DEPENDENCIES
-------------------
geopandas : 0.14.4
numpy : 1.26.3
pandas : 2.1.4
pyproj : 3.6.1
shapely : 2.0.3
fiona : 1.9.6
geoalchemy2: None
geopy : 2.4.1
matplotlib : 3.8.2
mapclassify: 2.6.1
pygeos : None
pyogrio : None
psycopg2 : None
pyarrow : 14.0.2
rtree : None
</details>
| closed | 2024-05-07T14:45:39Z | 2024-05-07T14:49:22Z | https://github.com/geopandas/geopandas/issues/3276 | [
"bug",
"needs triage"
] | csipapicsa | 1 |
ExpDev07/coronavirus-tracker-api | fastapi | 109 | United Kingdom (GB) isn't updated | https://coronavirus-tracker-api.herokuapp.com/v2/locations?country_code=gb
Returns `"latest":{"confirmed":10,"deaths":0,"recovered":2}`
| closed | 2020-03-20T11:50:14Z | 2020-03-21T13:52:21Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/109 | [
"question"
] | tomikjetu | 2 |
flasgger/flasgger | rest-api | 470 | Replace default drop down name for flask_restful | For different resource types, I want to move endpoints out of the default drop-down and into their own drop-downs based on groups.
I'm using flask_restful's Resource class. | open | 2021-03-22T04:49:02Z | 2021-03-22T04:49:02Z | https://github.com/flasgger/flasgger/issues/470 | [] | jsmwoolf | 0 |
ultralytics/ultralytics | pytorch | 19,744 | [RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable] occurs when training the model in Docker | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
```
from ultralytics import YOLO
pretrained_model_path = r"./official_weights/11/yolo11n-seg.pt"
dataset_yaml_path = r'./custom/cfg/datasets/rubber0109.yaml'
model = YOLO(model = pretrained_model_path)
model.train(data = dataset_yaml_path,
imgsz = 640,
epochs = 500,
batch = 4,
workers = 8,
device = "",
optimizer = 'SGD',
close_mosaic = 10,
resume = False,
exist_ok = False,
            project = 'runs/train',
name = 'indoor134',
single_cls = False,
cache = False,
            pretrained = True,
verbose = True
)
```
But there's an error below:
```
Ultralytics 8.3.58 🚀 Python-3.11.10 torch-2.5.0+cu124 CUDA:0 (NVIDIA RTX A6000, 48677MiB)
engine/trainer: task=segment, mode=train, model=./official_weights/11/yolo11n-seg.pt, data=./custom/cfg/datasets/rubber0109.yaml, epochs=500, time=None, patience=100, batch=4, imgsz=640, save=True, save_period=-1, cache=False, device=, workers=8, project=runs/train, name=rubber01092, exist_ok=False, pretrained=True, optimizer=SGD, verbose=True, seed=0, deterministic=True, single_cls=True, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=runs/train/rubber01092
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1742204807.591387 202 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1742204807.596917 202 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
Overriding model.yaml nc=80 with nc=1
from n params module arguments
0 -1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
1 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
2 -1 1 6640 ultralytics.nn.modules.block.C3k2 [32, 64, 1, False, 0.25]
3 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
4 -1 1 26080 ultralytics.nn.modules.block.C3k2 [64, 128, 1, False, 0.25]
5 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
6 -1 1 87040 ultralytics.nn.modules.block.C3k2 [128, 128, 1, True]
7 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
8 -1 1 346112 ultralytics.nn.modules.block.C3k2 [256, 256, 1, True]
9 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
10 -1 1 249728 ultralytics.nn.modules.block.C2PSA [256, 256, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 ultralytics.nn.modules.conv.Concat [1]
13 -1 1 111296 ultralytics.nn.modules.block.C3k2 [384, 128, 1, False]
14 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
15 [-1, 4] 1 0 ultralytics.nn.modules.conv.Concat [1]
16 -1 1 32096 ultralytics.nn.modules.block.C3k2 [256, 64, 1, False]
17 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
18 [-1, 13] 1 0 ultralytics.nn.modules.conv.Concat [1]
19 -1 1 86720 ultralytics.nn.modules.block.C3k2 [192, 128, 1, False]
20 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
21 [-1, 10] 1 0 ultralytics.nn.modules.conv.Concat [1]
22 -1 1 378880 ultralytics.nn.modules.block.C3k2 [384, 256, 1, True]
23 [16, 19, 22] 1 683635 ultralytics.nn.modules.head.Segment [1, 32, 64, [64, 128, 256]]
YOLO11n-seg summary: 355 layers, 2,842,803 parameters, 2,842,787 gradients, 10.4 GFLOPs
Transferred 510/561 items from pretrained weights
TensorBoard: Start with 'tensorboard --logdir runs/train/indoor1347', view at http://localhost:6006/
Traceback (most recent call last):
File "/home/sa/project/yolo-web/run_train.py", line 13, in <module>
model.train(data = dataset_yaml_path,
File "/home/sa/project/yolo-web/ultralytics/engine/model.py", line 806, in train
self.trainer.train()
File "/home/sa/project/yolo-web/ultralytics/engine/trainer.py", line 207, in train
self._do_train(world_size)
File "/home/sa/project/yolo-web/ultralytics/engine/trainer.py", line 322, in _do_train
self._setup_train(world_size)
File "/home/sa/project/yolo-web/ultralytics/engine/trainer.py", line 235, in _setup_train
self.model = self.model.to(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1340, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "/home/sa/project/yolo-web/ultralytics/nn/tasks.py", line 258, in _apply
self = super()._apply(fn)
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 900, in _apply
module._apply(fn)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 927, in _apply
param_applied = fn(param)
^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1326, in convert
return t.to(
^^^^^
RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
I am using the Docker image obtained by running `docker pull ultralytics/ultralytics:8.3.58`. If I use a trained model for prediction, it seems to work properly. However, when I try to train a model, as in the code above, the error states that the CUDA-capable device(s) is/are busy or unavailable. On the other hand, if I set the device to 'cpu', training runs fine.
I checked NVIDIA GPU usage with `nvidia-smi`; no process was occupying the GPU.

I am new to Docker, so any advice and answer is highly appreciated.
Thanks in advance.
### Additional
_No response_ | open | 2025-03-17T09:24:43Z | 2025-03-23T12:07:45Z | https://github.com/ultralytics/ultralytics/issues/19744 | [
"question",
"dependencies",
"detect"
] | tisu97 | 11 |
tflearn/tflearn | tensorflow | 540 | Support for tensorflow dynamic_rnn | Currently tflearn doesn't use `tensorflow tf.nn.dynamic_rnn` and `tf.nn.bidirectional_dynamic_rnn` for sequences with dynamic lengths.
Although there is support for dynamic lengths by using `dynamic=True` with `tflearn.layers.recurrent.simple_rnn` (or similar rnns) - as seen in #110 - which allows correct usage for varied-length sequences, under the hood it still uses `tensorflow.python.ops.nn.rnn`, which statically creates the RNN graph.
While this works correctly, it doesn't benefit from the _major_ performance improvement of dynamic_rnn, and this could be a great feature for tflearn.
Also, when trying to use tensorflow's dynamic_rnn directly, there is a problem with the control dependency in the `trainer.py` file, which caused the following error:
`tensorflow.python.framework.errors_impl.InvalidArgumentError: The node 'Adam/gradients/RNN/while/LSTMCell/add_grad/BroadcastGradientArgs/StackPush' has inputs from different frames. The input 'Adam/gradients/RNN/while/LSTMCell/add_grad/Shape' is in frame ''. The input 'Adam/gradients/RNN/while/LSTMCell/add_grad/BroadcastGradientArgs/RefEnter' is in frame 'RNN/while/RNN/while/'.`
(I made a temporary fix by loosening the following control dependency:
```python
with tf.control_dependencies([loss_avg_op, acc_avg_op]):
    self.grad = tf.gradients(total_loss, self.train_vars)
```
) | open | 2016-12-28T13:40:31Z | 2017-02-14T16:53:02Z | https://github.com/tflearn/tflearn/issues/540 | [
"enhancement",
"contributions welcome"
] | benbogin | 5 |
pallets/flask | python | 5,249 | SO_REUSEADDR set server | There needs to be a way for users to set the app's listening socket to SO_REUSEADDR, so that an unexpected server failure does not block an immediate restart due to "Address already in use".
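For reference, this is how the option is set on a plain stdlib socket (a generic sketch of the requested behavior, independent of Flask/Werkzeug internals):

```python
import socket

def make_listener(host="127.0.0.1", port=0):
    """Create a TCP listening socket that can rebind right after a crash."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEADDR must be set before bind(); it lets a restarted server
    # bind while the previous socket is still in the TIME_WAIT state.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((host, port))
    s.listen()
    return s

listener = make_listener()
print(listener.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0)  # True
listener.close()
```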
| closed | 2023-09-03T05:12:12Z | 2023-09-18T00:05:28Z | https://github.com/pallets/flask/issues/5249 | [] | ZigC-Lang | 1 |
ray-project/ray | deep-learning | 51,195 | [Core] API Reference: uv | ### Description
uv uses pyproject.toml and uv.lock; however, the doc currently uses `requirements.txt`.

### Link
https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#api-reference | closed | 2025-03-09T10:54:37Z | 2025-03-21T19:15:02Z | https://github.com/ray-project/ray/issues/51195 | [
"triage",
"docs",
"uv"
] | hongbo-miao | 0 |
saulpw/visidata | pandas | 2,508 | Loading AWS S3 URL: External package "s3fs.core" not installed | **Small description**
I'm trying to load S3 buckets with VisiData on macOS. According to [this link](https://github.com/ajkerrigan/visidata-plugins?tab=readme-ov-file#vds3-open-amazon-s3-paths-and-objects), as of VisiData v2.12dev, that functionality is now part of VisiData and doesn't require a plugin.
I'm not finding much documentation for getting my environment set up so that Visidata can load s3 links, so I've been following the instructions in the plugin. I installed s3fs with `pip install s3fs` and have installed and configured my AWS CLI.
However, whenever I run vd on an s3 URL, e.g. `vd 's3://'` it returns:
```
saul.pw/VisiData v3.0.2
External package "s3fs.core" not installed; run: pip install s3fs
```
If I list the packages installed with pip, I see that `s3fs v2024.3.1` is installed.
Note that if I run `pip3 list`, I only get the following output:
```
Package Version
------- -------
pip 24.2
wheel 0.44.0
```
I don't know if this is an issue, but if I run `which pip`, I see that pip is being run from my Anaconda installation but if I run `which pip3`, it appears that pip3 is being run from my Homebrew installation.
In addition, if I try to install anything with pip3, I get the following message:
```
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try brew install
xyz, where xyz is the package you are trying to
install.
If you wish to install a Python library that isn't in Homebrew,
use a virtual environment:
python3 -m venv path/to/venv
source path/to/venv/bin/activate
python3 -m pip install xyz
If you wish to install a Python application that isn't in Homebrew,
it may be easiest to use 'pipx install xyz', which will manage a
virtual environment for you. You can install pipx with
brew install pipx
You may restore the old behavior of pip by passing
the '--break-system-packages' flag to pip, or by adding
'break-system-packages = true' to your pip.conf file. The latter
will permanently disable this error.
If you disable this error, we STRONGLY recommend that you additionally
pass the '--user' flag to pip, or set 'user = true' in your pip.conf
file. Failure to do this can result in a broken Homebrew installation.
Read more about this behavior here: <https://peps.python.org/pep-0668/>
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
```
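The `which pip` (Anaconda) vs `which pip3` (Homebrew) split above suggests s3fs may be installed into a different interpreter than the one VisiData runs under. A generic stdlib diagnostic (not a VisiData command) is to ask the running interpreter directly, and to install via `python -m pip` for that same interpreter:

```python
import subprocess
import sys

# The interpreter that is actually executing this script:
print("interpreter:", sys.executable)

# Run pip *for this interpreter*. A bare "pip" or "pip3" on PATH can belong
# to a different Python installation (Anaconda vs Homebrew), so a package
# like s3fs may be installed in one and invisible to the other.
subprocess.run([sys.executable, "-m", "pip", "--version"], check=False)
```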
**Data to reproduce**
N/A
**Steps to reproduce**
Install Visidata with:
`brew install saulpw/vd/visidata`
Install s3fs with:
`pip install s3fs`
Install and configure the AWS CLI per these instructions:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-quickstart.html
**Expected result**
s3 Bucket would load and be navigable within Visidata.
**Actual result with screenshot**

**Additional context**
- What platform and version are you using (Linux, MacOS, Windows)?
MacOS Sonoma v14.6.1
- Which version of Python?
Python 3.12.2
- Which terminal are you using (for display and input issues)?
iterm2 | closed | 2024-08-30T00:02:06Z | 2024-08-30T14:06:24Z | https://github.com/saulpw/visidata/issues/2508 | [
"environment-help"
] | Charles-Alexandre-Roy | 2 |
allenai/allennlp | pytorch | 4,675 | Have a single `TrainerCallback` that can handle both `BatchCallback` and `EpochCallback`. | Also, add another call at the end of the whole training run.
This should make it easier to hang on to state inside the callback.
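A rough sketch of what a unified callback could look like (hypothetical names, not AllenNLP's actual API), showing how per-run state lives naturally on one object:

```python
class TrainerCallback:
    """One object sees batches, epochs, and the end of training, so state
    (counters, timers, buffers) can live in plain instance attributes."""
    def on_batch(self, trainer, batch_index): pass
    def on_epoch(self, trainer, epoch): pass
    def on_train_end(self, trainer): pass

class StatsCallback(TrainerCallback):
    def __init__(self):
        self.batches_seen = 0
        self.epochs_seen = 0
        self.finished = False
    def on_batch(self, trainer, batch_index):
        self.batches_seen += 1
    def on_epoch(self, trainer, epoch):
        self.epochs_seen += 1
    def on_train_end(self, trainer):
        self.finished = True

# Simulated training loop: 2 epochs x 3 batches.
cb = StatsCallback()
for epoch in range(2):
    for batch_index in range(3):
        cb.on_batch(None, batch_index)
    cb.on_epoch(None, epoch)
cb.on_train_end(None)
print(cb.batches_seen, cb.epochs_seen, cb.finished)  # 6 2 True
```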
See the discussion at the end of this issue: https://github.com/allenai/allennlp/pull/3970 | closed | 2020-09-25T23:00:34Z | 2020-10-26T17:22:41Z | https://github.com/allenai/allennlp/issues/4675 | [
"Contributions welcome",
"Feature request"
] | dirkgr | 7 |
SYSTRAN/faster-whisper | deep-learning | 597 | Cannot pickle faster-whisper model object? | I am trying to append the model object to a Python multiprocessing Manager shared list, but I get the error below
TypeError: cannot pickle 'ctranslate2._ext.Whisper' object
code snippet:
```python
import multiprocessing
from multiprocessing import freeze_support
from faster_whisper import WhisperModel

if __name__ == '__main__':
    freeze_support()
    model = WhisperModel("large-v2", device="cpu", num_workers=1, cpu_threads=8, compute_type='int8')
    shared_manager = multiprocessing.Manager()
    shared_object = shared_manager.list()
    print("append started")
    shared_object.append(model)
    print("append finished")
    print(shared_object)
```
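A `Manager().list()` transports items between processes by pickling them, and ctranslate2's native `Whisper` handle (like most C-extension objects) is not picklable. Below is a stdlib illustration of the same failure mode, using a `threading.Lock` as a stand-in for the native handle; the usual workaround is to construct the model inside each worker process rather than sharing one instance:

```python
import pickle
import threading

# A C-level handle (here a lock, standing in for ctranslate2's Whisper
# object) cannot round-trip through pickle, so it cannot be appended to a
# multiprocessing.Manager() list.
native_handle = threading.Lock()
try:
    pickle.dumps(native_handle)
    picklable = True
except TypeError:
    picklable = False
print("picklable:", picklable)  # picklable: False
```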
| open | 2023-11-30T11:28:09Z | 2024-04-16T07:55:24Z | https://github.com/SYSTRAN/faster-whisper/issues/597 | [] | Rahulvisio | 3 |
s3rius/FastAPI-template | graphql | 67 | Use async url for Ormar ORM | Hello,
While looking at the template [here](https://github.com/s3rius/FastAPI-template/blob/master/fastapi_template/template/%7B%7Bcookiecutter.project_name%7D%7D/%7B%7Bcookiecutter.project_name%7D%7D/settings.py#L52):
I noticed that for `SQLAlchemy` we use the async scheme (`"postgresql+asyncpg"`), but not for Ormar. Is there a reason, or is it just missing from the template?
Thanks ! | closed | 2022-04-07T10:06:42Z | 2022-04-17T11:23:38Z | https://github.com/s3rius/FastAPI-template/issues/67 | [] | sorasful | 4 |
psf/requests | python | 6,719 | ERROR - Cannot set verify_mode to CERT_NONE when check_hostname is enabled | <!-- Summary. -->
I'm working with the exchangelib (v5.4) library.
A couple of days ago I noticed that in the Docker container exchangelib started complaining with an error `ValueError: Cannot set verify_mode to CERT_NONE when check_hostname is enabled`.
Exchangelib tries to connect to a local server with a low security level via a custom adapter with these lines:
```python
from requests.adapters import HTTPAdapter
class CustomHttpAdapter(HTTPAdapter):
"""Transport adapter that allows us to use custom ssl_context."""
def __init__(self, ssl_context=None, **kwargs):
self.ssl_context = ssl_context
super().__init__(**kwargs)
def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
self.ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
self.ssl_context.set_ciphers('DEFAULT:@SECLEVEL=0')
self.ssl_context.check_hostname = False
self.poolmanager = urllib3.poolmanager.PoolManager(
num_pools=connections, maxsize=maxsize,
block=block, ssl_context=self.ssl_context, ssl_version=ssl.PROTOCOL_TLSv1)
```
With this, exchangelib shows the following error:
```
File "/usr/local/lib/python3.11/site-packages/cached_property.py", line 70, in __get__
2024-05-23T11:53:23.558367392Z return obj_dict[name]
2024-05-23T11:53:23.558368832Z ~~~~~~~~^^^^^^
2024-05-23T11:53:23.558370213Z KeyError: \'calendar\'
```
But if I downgrade the requests package from 2.32.0 to 2.31.0, the connection and information exchange work normally.
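The ValueError comes from Python's `ssl` module itself: on a context with `check_hostname` enabled (the default for client contexts), assigning `verify_mode = ssl.CERT_NONE` is rejected, so the two attributes have to be set in the right order. A minimal stdlib demonstration, independent of requests or exchangelib:

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # check_hostname is True by default
try:
    ctx.verify_mode = ssl.CERT_NONE            # wrong order: raises ValueError
except ValueError as exc:
    print(exc)

ctx.check_hostname = False                     # disable hostname checking first...
ctx.verify_mode = ssl.CERT_NONE                # ...then this assignment succeeds
print(ctx.verify_mode == ssl.CERT_NONE)        # True
```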
## System Information
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.7"
},
"implementation": {
"name": "CPython",
"version": "3.11.9"
},
"platform": {
"release": "6.5.0-35-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.32.0"
},
"system_ssl": {
"version": "300000b0"
},
"urllib3": {
"version": "2.2.1"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
<!-- This command is only available on Requests v2.16.4 and greater. Otherwise,
please provide some basic information about your system (Python version,
operating system, &c). -->
| closed | 2024-05-23T12:06:59Z | 2024-05-23T12:54:17Z | https://github.com/psf/requests/issues/6719 | [] | Barsovski | 1 |
huggingface/transformers | nlp | 36,004 | FSDP Torch XLA vs. FSDPv2 (SPMD) Torch XLA checkpoint saving bug | ### System Info
There is a bug in how the trainer (SFTTrainer) saves the checkpoint when we use FSDPv2 (SPMD) on TPU. This behavior does not show up with the old method of running Torch XLA code (xla_spawn.py). This behavior causes the new checkpoint to be almost exactly the same as the base model, throwing this error with PEFT:
`Found missing adapter keys while loading the checkpoint: {missing_keys}`
Even without PEFT, the model weights seem unaffected by the training process.
The problem may be related to how the checkpoint-saving function for FSDPv2 Torch XLA works in the trainer file. The same code works 100% on GPU and also works with the xla_spawn.py FSDP method.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To replicate, save the code below as sft.py and run it with `PJRT_DEVICE=TPU XLA_USE_SPMD=1 python3 sft.py`:
```
import torch
import torch_xla
import peft
import trl
import torch_xla.core.xla_model as xm
from datasets import load_dataset
from peft import LoraConfig,PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
from trl import SFTTrainer, SFTConfig
import wandb
wandb.init(mode="disabled")
device = xm.xla_device() # Set up TPU device.
print(device)
def train():
model_id = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
data = load_dataset("philschmid/dolly-15k-oai-style",split="train")
lora_config = LoraConfig(r=8,target_modules=["k_proj", "v_proj"],task_type="CAUSAL_LM")
fsdp_config = {'fsdp_transformer_layer_cls_to_wrap': ['LlamaDecoderLayer'], 'xla': True, 'xla_fsdp_v2': True, 'xla_fsdp_grad_ckpt': True}
args=SFTConfig(
per_device_train_batch_size=8,
num_train_epochs=1,
max_steps=-1,
output_dir="output",
optim="adafactor",
logging_steps=50,
learning_rate=2e-5,
max_seq_length=2048,
packing=True,
dataset_text_field=None,
save_strategy="no",
dataloader_drop_last = True, # Required for SPMD.
fsdp="full_shard",
fsdp_config=fsdp_config)
trainer = SFTTrainer(
model=model,
train_dataset=data,
tokenizer = tokenizer,
args=args,
peft_config=lora_config)
trainer.train()
final_model=trainer.model
final_model.to("cpu")
final_model.save_pretrained("./LoRa")
if __name__ == "__main__":
train()
```
You will notice in the output folder that the saved model is not in LoRA format (there are not two adapter files, adapter_config.json and adapter_model.safetensors). This is because with FSDPv2 we end up here (you can check by adding a print statement):
https://github.com/huggingface/transformers/blob/62db3e6ed67a74cc1ed1436acd9973915c0a4475/src/transformers/trainer.py#L3821
However, if we use the same code with GPU or with the old xla_spawn (FSDP) method, this issue disappears. To replicate the same code with FSDP, first run
```
wget https://raw.githubusercontent.com/huggingface/transformers/refs/heads/main/examples/pytorch/xla_spawn.py
```
then save the code below and run it with `python3 xla_spawn.py --num_cores x sft.py`:
```
from datasets import load_dataset
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TrainingArguments
from trl import SFTTrainer,SFTConfig
import os
from peft import LoraConfig, get_peft_model, PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, BitsAndBytesConfig
import transformers
import wandb
wandb.init(mode="disabled")
def main():
    data = load_dataset("philschmid/dolly-15k-oai-style", split="train")
    model_id = "meta-llama/Llama-3.2-1B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    # target_modules=["k_proj", "v_proj", "embed_tokens", "lm_head"]
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        target_modules=["q_proj", "k_proj", "v_proj", "embed_tokens", "lm_head"],
        task_type="CAUSAL_LM",
    )
    trainer = SFTTrainer(
        model=model,
        train_dataset=data,
        args=SFTConfig(
            per_device_train_batch_size=1,
            num_train_epochs=3,
            max_steps=-1,
            output_dir="./output",
            logging_steps=50,
            learning_rate=5e-5,
            max_seq_length=2048,
            save_steps=1000000,
            save_only_model=True,
            packing=True,
            dataset_num_proc=40,
        ),
        peft_config=lora_config,
    )
    trainer.train()
    final_model = trainer.model
    final_model.to("cpu")
    final_model.save_pretrained("./LoRa")


def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()


if __name__ == "__main__":
    main()
```
With this code everything works great, because the saving function ends up here:
https://github.com/huggingface/transformers/blob/62db3e6ed67a74cc1ed1436acd9973915c0a4475/src/transformers/trainer.py#L3824
I merged the LoRA adapter with the base model and the generated output is as expected from a fine-tuned model!
Finally, please note that this issue is not related to PEFT, because even if you use SFTTrainer without PEFT, the issue still exists. I believe it has to do with how checkpoints are saved with FSDPv2 when we use TPUs.
### Expected behavior
The model trained with LoRA should save the two adapter files, and when we merge the LoRA adapter with the base model we should not see this message (you should update PEFT to the latest version, 0.14.0, as it adds an additional check to detect problems with LoRA checkpoints):
`Found missing adapter keys while loading the checkpoint: {missing_keys}`
| open | 2025-02-01T20:29:16Z | 2025-03-13T11:26:27Z | https://github.com/huggingface/transformers/issues/36004 | [
"Good First Issue",
"bug"
] | salrowili | 6 |
modin-project/modin | pandas | 7,350 | Possible issue with `dropna(how="all")` not deleting data from partition on ray. | When processing a large dataframe with modin running on ray, after I have dropped invalid rows with `dropna`, accessing data from the new dataframe raises an error.
It looks like the data is not released from ray, or maybe modin's `dropna` operation is not removing it properly.
It works fine if I run an operation where modin defaults to pandas.
# EXAMPLE:
```
import modin.pandas as pd
data = [
    {"record": 1, "data_set": [0,0,0,0], "index": 1},
    {"record": 2, "data_set": [0,0,0,0], "index": 2},
    {"record": 3, "data_set": [0,0,0,0], "index": 3},
    {"record": 4, "data_set": [0,0,0,0], "index": 4},
    {"record": 5, "data_set": [0,0,0,0], "index": 5},
    {"record": 6, "data_set": [0,0,0,0], "index": 6},
    {"record": 7, "data_set": [0,0,0,0], "index": 7},
    {"record": 8, "data_set": [0,0,0,0], "index": 8},
    {"record": 9, "data_set": [0,0,0,0], "index": 9},
    {"record": 10, "data_set": [0,0,0,0], "index": 10},
] * 10000
modin_df = pd.DataFrame(data)
# process and remove unwanted rows
# imagine this as a more complex operation than just filtering by index
modin_df = modin_df.apply(lambda x: x if x["index"] < 5 else None, axis=1).dropna(how="all")
# try to access data_set column
# imagine this as a more complex processing job
modin_df.apply(lambda x: x["data_set"], axis=1)
```
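For comparison, the same logic in plain pandas (a minimal sketch with a smaller row count, mirroring the repro above) runs without error, which is the expected behavior:

```python
import pandas as pd

data = [
    {"record": i, "data_set": [0, 0, 0, 0], "index": i}
    for i in range(1, 11)
] * 100

df = pd.DataFrame(data)
# Rows failing the filter become all-NaN and are dropped, as in the modin repro.
df = df.apply(lambda x: x if x["index"] < 5 else None, axis=1).dropna(how="all")
result = df.apply(lambda x: x["data_set"], axis=1)
print(len(result))  # 400 rows survive (records 1-4 in each group of 10)
```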
# ERROR:
<details>
```python-traceback
{
"name": "RayTaskError(KeyError)",
"message": "ray::_apply_func() (pid=946, ip=10.169.23.29)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RayTaskError: ray::_deploy_ray_func() (pid=942, ip=10.169.23.29)
File \"pandas/_libs/index.pyx\", line 138, in pandas._libs.index.IndexEngine.get_loc
File \"pandas/_libs/index.pyx\", line 165, in pandas._libs.index.IndexEngine.get_loc
File \"pandas/_libs/hashtable_class_helper.pxi\", line 5745, in pandas._libs.hashtable.PyObjectHashTable.get_item
File \"pandas/_libs/hashtable_class_helper.pxi\", line 5753, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'data_set'
The above exception was the direct cause of the following exception:
ray::_deploy_ray_func() (pid=942, ip=10.169.23.29)
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/execution/ray/implementations/pandas_on_ray/partitioning/virtual_partition.py\", line 313, in _deploy_ray_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/partitioning/axis_partition.py\", line 419, in deploy_axis_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/dataframe/dataframe.py\", line 1788, in _tree_reduce_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/storage_formats/pandas/query_compiler.py\", line 3084, in <lambda>
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py\", line 9568, in apply
return op.apply().__finalize__(self, method=\"apply\")
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 764, in apply
return self.apply_standard()
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 891, in apply_standard
results, res_index = self.apply_series_generator()
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 907, in apply_series_generator
results[i] = self.f(v)
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/utils.py\", line 611, in wrapper
File \"/var/folders/lz/4cs_fypj0ld8x6kyk9rbkl400000gn/T/ipykernel_24081/3890645143.py\", line 24, in <lambda>
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/series.py\", line 981, in __getitem__
return self._get_value(key)
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/series.py\", line 1089, in _get_value
loc = self.index.get_loc(label)
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py\", line 3804, in get_loc
raise KeyError(key) from err
KeyError: 'data_set'",
"stack": "---------------------------------------------------------------------------
RayTaskError(KeyError) Traceback (most recent call last)
Cell In[79], line 24
20 modin_df = modin_df.apply(lambda x: x if x[\"index\"] < 5 else None, axis=1).dropna(how=\"all\")
22 # try to access data_set column
23 # imagine this as a more complex processing job
---> 24 modin_df.apply(lambda x: x[\"data_set\"], axis=1)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/pandas/dataframe.py:419, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs)
416 else:
417 output_type = DataFrame
--> 419 return output_type(query_compiler=query_compiler)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/pandas/series.py:144, in Series.__init__(self, data, index, dtype, name, copy, fastpath, query_compiler)
130 name = data.name
132 query_compiler = from_pandas(
133 pandas.DataFrame(
134 pandas.Series(
(...)
142 )
143 )._query_compiler
--> 144 self._query_compiler = query_compiler.columnarize()
145 if name is not None:
146 self.name = name
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/storage_formats/base/query_compiler.py:1236, in BaseQueryCompiler.columnarize(self)
1232 if self._shape_hint == \"column\":
1233 return self
1235 if len(self.columns) != 1 or (
-> 1236 len(self.index) == 1 and self.index[0] == MODIN_UNNAMED_SERIES_LABEL
1237 ):
1238 return self.transpose()
1239 return self
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/storage_formats/pandas/query_compiler.py:87, in _get_axis.<locals>.<lambda>(self)
74 \"\"\"
75 Build index labels getter of the specified axis.
76
(...)
84 callable(PandasQueryCompiler) -> pandas.Index
85 \"\"\"
86 if axis == 0:
---> 87 return lambda self: self._modin_frame.index
88 else:
89 return lambda self: self._modin_frame.columns
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/dataframe/dataframe.py:522, in PandasDataframe._get_index(self)
520 index, row_lengths = self._index_cache.get(return_lengths=True)
521 else:
--> 522 index, row_lengths = self._compute_axis_labels_and_lengths(0)
523 self.set_index_cache(index)
524 if self._row_lengths_cache is None:
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/dataframe/dataframe.py:626, in PandasDataframe._compute_axis_labels_and_lengths(self, axis, partitions)
624 if partitions is None:
625 partitions = self._partitions
--> 626 new_index, internal_idx = self._partition_mgr_cls.get_indices(axis, partitions)
627 return new_index, list(map(len, internal_idx))
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/partitioning/partition_manager.py:933, in PandasDataframePartitionManager.get_indices(cls, axis, partitions, index_func)
931 if len(target):
932 new_idx = [idx.apply(func) for idx in target[0]]
--> 933 new_idx = cls.get_objects_from_partitions(new_idx)
934 else:
935 new_idx = [pandas.Index([])]
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/logging/logger_decorator.py:128, in enable_logging.<locals>.decorator.<locals>.run_and_log(*args, **kwargs)
113 \"\"\"
114 Compute function with logging if Modin logging is enabled.
115
(...)
125 Any
126 \"\"\"
127 if LogMode.get() == \"disable\":
--> 128 return obj(*args, **kwargs)
130 logger = get_logger()
131 logger_level = getattr(logger, log_level)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/partitioning/partition_manager.py:874, in PandasDataframePartitionManager.get_objects_from_partitions(cls, partitions)
870 partitions[idx] = part.force_materialization()
871 assert all(
872 [len(partition.list_of_blocks) == 1 for partition in partitions]
873 ), \"Implementation assumes that each partition contains a single block.\"
--> 874 return cls._execution_wrapper.materialize(
875 [partition.list_of_blocks[0] for partition in partitions]
876 )
877 return [partition.get() for partition in partitions]
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/execution/ray/common/engine_wrapper.py:92, in RayWrapper.materialize(cls, obj_id)
77 @classmethod
78 def materialize(cls, obj_id):
79 \"\"\"
80 Get the value of object from the Plasma store.
81
(...)
90 Whatever was identified by `obj_id`.
91 \"\"\"
---> 92 return ray.get(obj_id)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/ray/_private/auto_init_hook.py:21, in wrap_auto_init.<locals>.auto_init_wrapper(*args, **kwargs)
18 @wraps(fn)
19 def auto_init_wrapper(*args, **kwargs):
20 auto_init_ray()
---> 21 return fn(*args, **kwargs)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/ray/_private/client_mode_hook.py:102, in client_mode_hook.<locals>.wrapper(*args, **kwargs)
98 if client_mode_should_convert():
99 # Legacy code
100 # we only convert init function if RAY_CLIENT_MODE=1
101 if func.__name__ != \"init\" or is_client_mode_enabled_by_default:
--> 102 return getattr(ray, func.__name__)(*args, **kwargs)
103 return func(*args, **kwargs)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/ray/util/client/api.py:42, in _ClientAPI.get(self, vals, timeout)
35 def get(self, vals, *, timeout=None):
36 \"\"\"get is the hook stub passed on to replace `ray.get`
37
38 Args:
39 vals: [Client]ObjectRef or list of these refs to retrieve.
40 timeout: Optional timeout in milliseconds
41 \"\"\"
---> 42 return self.worker.get(vals, timeout=timeout)
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/ray/util/client/worker.py:433, in Worker.get(self, vals, timeout)
431 op_timeout = max_blocking_operation_time
432 try:
--> 433 res = self._get(to_get, op_timeout)
434 break
435 except GetTimeoutError:
File ~/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/ray/util/client/worker.py:461, in Worker._get(self, ref, timeout)
459 logger.exception(\"Failed to deserialize {}\".format(chunk.error))
460 raise
--> 461 raise err
462 if chunk.total_size > OBJECT_TRANSFER_WARNING_SIZE and log_once(
463 \"client_object_transfer_size_warning\"
464 ):
465 size_gb = chunk.total_size / 2**30
RayTaskError(KeyError): ray::_apply_func() (pid=946, ip=10.169.23.29)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RayTaskError: ray::_deploy_ray_func() (pid=942, ip=10.169.23.29)
File \"pandas/_libs/index.pyx\", line 138, in pandas._libs.index.IndexEngine.get_loc
File \"pandas/_libs/index.pyx\", line 165, in pandas._libs.index.IndexEngine.get_loc
File \"pandas/_libs/hashtable_class_helper.pxi\", line 5745, in pandas._libs.hashtable.PyObjectHashTable.get_item
File \"pandas/_libs/hashtable_class_helper.pxi\", line 5753, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'data_set'
The above exception was the direct cause of the following exception:
ray::_deploy_ray_func() (pid=942, ip=10.169.23.29)
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/execution/ray/implementations/pandas_on_ray/partitioning/virtual_partition.py\", line 313, in _deploy_ray_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/partitioning/axis_partition.py\", line 419, in deploy_axis_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/dataframe/pandas/dataframe/dataframe.py\", line 1788, in _tree_reduce_func
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/core/storage_formats/pandas/query_compiler.py\", line 3084, in <lambda>
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/frame.py\", line 9568, in apply
return op.apply().__finalize__(self, method=\"apply\")
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 764, in apply
return self.apply_standard()
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 891, in apply_standard
results, res_index = self.apply_series_generator()
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/apply.py\", line 907, in apply_series_generator
results[i] = self.f(v)
File \"/Users/brunoj/.pyenv/versions/3.9.18/envs/che/lib/python3.9/site-packages/modin/utils.py\", line 611, in wrapper
File \"/var/folders/lz/4cs_fypj0ld8x6kyk9rbkl400000gn/T/ipykernel_24081/3890645143.py\", line 24, in <lambda>
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/series.py\", line 981, in __getitem__
return self._get_value(key)
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/series.py\", line 1089, in _get_value
loc = self.index.get_loc(label)
File \"/home/ray/anaconda3/lib/python3.9/site-packages/pandas/core/indexes/base.py\", line 3804, in get_loc
raise KeyError(key) from err
KeyError: 'data_set'"
}
```
</details>
# INSTALLED VERSION
```
UserWarning: Setuptools is replacing distutils.
INSTALLED VERSIONS
------------------
commit : f5f9ae993ba5ed26461d3c9d26fbefecab88ee69
python : 3.9.18.final.0
python-bits : 64
OS : Darwin
OS-release : 23.5.0
Version : Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
Modin dependencies
------------------
modin : 0.31.0+5.gf5f9ae99
ray : 2.23.0
dask : 2024.7.1
distributed : 2024.7.1
pandas dependencies
-------------------
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.5.1
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.4
IPython : 8.18.1
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.6.1
gcsfs : 2024.6.1
matplotlib : 3.9.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : 0.23.1
pyarrow : 14.0.2
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : 2024.6.1
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
``` | open | 2024-07-23T11:05:03Z | 2024-07-25T21:22:44Z | https://github.com/modin-project/modin/issues/7350 | [
"bug 🦗",
"P0"
] | brunojensen | 1 |
gee-community/geemap | jupyter | 1,118 | bug with package named geospatial | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: newest
- Python version: 3.8
- Operating System: windows 10
### Description
When I use the geospatial package (`mamba install -c conda-forge geospatial`), there is a problem with geopandas reading shapefiles.
### What I Did
```
when using gpd.read_file('xxx.shp') the terminal will show: the 'read_file' function requires the 'fiona' package, but it is not installed or does not import correctly.
Importing fiona resulted in: DLL load failed while importing ogrext: The specified procedure could not be found.
```
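To narrow this down, it can help to try importing fiona directly, independent of geopandas. A small sketch (the helper name is mine):

```python
import importlib


def probe_import(name):
    """Return (ok, detail): whether `name` imports, and its version or the error.
    Useful for telling a missing package apart from a broken DLL load."""
    try:
        mod = importlib.import_module(name)
        return True, getattr(mod, "__version__", "unknown")
    except Exception as e:
        return False, f"{type(e).__name__}: {e}"
```

Running `probe_import("fiona")` in the affected environment should surface the same DLL load error directly.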
| closed | 2022-06-23T09:50:29Z | 2022-06-23T17:04:12Z | https://github.com/gee-community/geemap/issues/1118 | [
"bug"
] | BobNJU | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 922 | CycleGAN inference | Can I get multiple variants from one trained CycleGAN in inference?
For instance:
I have one picture of a horse and I would like to get 4 different(!!!) pictures in the style the CycleGAN was trained on.
Is it possible? | closed | 2020-02-18T10:16:56Z | 2020-02-19T06:39:28Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/922 | [] | Anastasiyabordak | 1 |
xlwings/xlwings | automation | 1,955 | Apple Event timed out when doing range.api.sort() | #### OS (e.g. Windows 10 or macOS Sierra)
MacOS Monterey v12.4
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
Python 3.9.5
xlwings 0.27.10
Microsoft Excel for Mac v16.62
#### Describe your issue (incl. Traceback!)
Hi,
I was working on a sheet with 17000 data rows and 250 columns. I tried to sort the data range using range.api.sort(), and I encountered an Apple Event timed out exception, shown below.
I know there are closed issues about Apple Event timeouts. I was not able to set a ‘timeout’ attribute on sort(), and I tried range.options(chunksize=5000).api.sort(…), but it didn’t work.
Here are my attempts:
1) First, I copied from the raw source, then ran sort(); it failed. The data remained half-finished (I am not sure whether it completed before the exception). I ran sort() again on that data, and it passed.
2) Then I copied from the raw source each time and ran sort(); it failed every time.
I guess sorting might cost less in the first situation, so it could pass on the almost-sorted data?
I hit the same issue when copying/pasting a large number of cells. It didn’t work with range().options(chunksize=5000). As a workaround, I made Excel visible with real-time refresh so that Excel could respond to macOS, and it worked. For copying/pasting even more columns, I changed the code to run in batches, and that worked too.
But I have no idea what to do in this sort() case. It cannot be chunked and cannot run in batches; I have to sort the whole data range at once.
Could anyone help?
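One possible workaround sketch is to pull the block values into Python, sort there, and write the block back; the xlwings calls in the comment are untested here and the key column index is illustrative:

```python
def sort_block(values, key_idx):
    """Sort a 2-D block of cell values (a list of rows) by one column, ascending."""
    return sorted(values, key=lambda row: row[key_idx])


# Applying it to the sheet would look roughly like (names taken from the issue):
#   block = sht.range((data_row_start, data_col_start),
#                     (data_row_end, data_col_end)).value
#   sht.range((data_row_start, data_col_start)).value = sort_block(block, key_idx)
```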
```python
# Your traceback here
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/aeosa/aem/aemsend.py", line 74, in send
replyevent = self._sendproc(self.AEM_event, flags, timeout)
File "/usr/local/lib/python3.9/site-packages/aeosa/aem/aemsend.py", line 23, in sendappleevent
return evt.send(flags, timeout)
aem.ae.MacOSError: -1712
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/aeosa/appscript/reference.py", line 479, in __call__
return self.AS_appdata.target().event(self._code, params, atts, codecs=self.AS_appdata).send(timeout, sendflags)
File "/usr/local/lib/python3.9/site-packages/aeosa/aem/aemsend.py", line 77, in send
raise EventError(err.args[0]) from err
aem.aemsend.EventError: Command failed: Apple event timed out. (-1712)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/alexela/SynologyDrive/Lonch_Home/05. programming/update-contract.py", line 304, in <module>
refresh_formulas(wb)
File "/Users/alexela/SynologyDrive/Lonch_Home/05. programming/update-contract.py", line 158, in refresh_formulas
sht.range((data_row_start, data_col_start), (data_row_end, data_col_end)) \
File "/usr/local/lib/python3.9/site-packages/aeosa/appscript/reference.py", line 515, in __call__
raise CommandError(self, (args, kargs), e, self.AS_appdata) from e
appscript.reference.CommandError: Command failed:
OSERROR: -1712
MESSAGE: Apple event timed out.
COMMAND: app(pid=8587).workbooks[2].worksheets['Data'].cells['$B$5:$IN$17153'].sort(key1=app(pid=8587).workbooks[2].worksheets['Data'].cells['$HV$5:$HV$17153'], order1=1, orientation=1)
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
# Your code here
sort_range = sht.range((data_row_start, sort_col), (data_row_end, sort_col))
sht.range((data_row_start, data_col_start), (data_row_end, data_col_end)).api.sort(key1=sort_range.api, order1=1, orientation=1)
``` | closed | 2022-07-07T19:57:56Z | 2022-07-08T08:47:30Z | https://github.com/xlwings/xlwings/issues/1955 | [] | tsusoft | 2 |
ipython/ipython | data-science | 14,806 | TAB completion does not work for packages that use native namespaces. | TAB completion does not work for packages that use native namespaces:
https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#native-namespace-packages
Original bug report:
https://github.com/kylebarron/arro3/issues/290
> Currently when installing the 3 arrow3 packages, the structure looks like this:
> ```bash
> $ ls /lib/python3.11/site-packages/arro3/
> compute core io
> ```
>
> When importing the modules in ipython, tab completion is not working (to find the functions in each module)
> ```python
> In [4]: import arro3.io
>
> In [5]: import arro3.core
>
> In [6]: import arro3.compute
>
> In [7]: arro3.io.<TAB> ==> no list of functions.
> ```
>
> When importing them with an alias, tab completion works:
> ```python
> In [1]: import arro3.io as arro3_io
>
> In [2]: import arro3.compute as arro3_compute
>
> In [3]: import arro3.core as arro3_core
>
> In [4]: arro3_io.<TAB>
> infer_csv_schema() read_csv() read_ipc_stream() read_parquet() store write_ipc() write_json() write_parquet()
> infer_json_schema() read_ipc() read_json() read_parquet_async() write_csv() write_ipc_stream() write_ndjson()
> ```
>
> When adding just an empty `__init__.py` at `/lib/python3.11/site-packages/arro3/__init__.py`, tab completion works:
> ```
> In [1]: import arro3.io as arro3_io
>
> In [2]: import arro3.compute as arro3_compute
>
> In [3]: import arro3.core as arro3_core
>
> In [4]: arro3.io.<TAB>
> infer_csv_schema() read_csv() read_ipc_stream() read_parquet() store write_ipc() write_json() write_parquet()
> infer_json_schema() read_ipc() read_json() read_parquet_async() write_csv() write_ipc_stream() write_ndjson()
> ```
Reply:
>> When adding just an empty __init__.py at /lib/python3.11/site-packages/arro3/__init__.py, tab completion works:
>
>In theory, that is supposed to break namespace packaging: https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#native-namespace-packages
>
>> All that is required to create a native namespace package is that you just omit __init__.py from the namespace package directory
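For reference, one way to confirm from Python that a package is a native (PEP 420) namespace package; the helper name is mine:

```python
import importlib


def is_namespace_package(name):
    """True if `name` imports as a native (PEP 420) namespace package:
    it has a __path__ but no __file__, because there is no __init__.py."""
    mod = importlib.import_module(name)
    return hasattr(mod, "__path__") and getattr(mod, "__file__", None) is None
```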
| open | 2025-02-28T09:32:23Z | 2025-02-28T10:01:30Z | https://github.com/ipython/ipython/issues/14806 | [
"tab-completion"
] | ghuls | 3 |
dmlc/gluon-nlp | numpy | 592 | [Model] Port OpenAI GPT to gluon-nlp | Port the GPT model to gluon-nlp:
https://github.com/openai/gpt-2
https://github.com/openai/finetune-transformer-lm
A good first step is to port their pre-trained models and be able to perform inference. | open | 2019-02-15T18:20:23Z | 2019-06-18T22:47:20Z | https://github.com/dmlc/gluon-nlp/issues/592 | [
"help wanted"
] | eric-haibin-lin | 2 |
hbldh/bleak | asyncio | 1,015 | macOS: Discovering Bluetooth devices raises BleakError("Bluetooth device is turned off") | * bleak version: 0.17.0
* Python version: 3.10.6
* Operating System: macOS Monterey Version 12.6
### Description
Tried to discover Bluetooth devices that can be connected to.
### What I Did
Tried executing:
```
import asyncio
from bleak import BleakScanner
async def main():
    devices = await BleakScanner.discover()
    for d in devices:
        print(d)

asyncio.run(main())
```
Traceback received:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/homebrew/Cellar/python@3.10/3.10.6_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "<stdin>", line 2, in main
File "/opt/homebrew/lib/python3.10/site-packages/bleak/backends/scanner.py", line 123, in discover
async with cls(**kwargs) as scanner:
File "/opt/homebrew/lib/python3.10/site-packages/bleak/backends/corebluetooth/scanner.py", line 70, in __init__
self._manager = CentralManagerDelegate.alloc().init()
File "/opt/homebrew/lib/python3.10/site-packages/bleak/backends/corebluetooth/CentralManagerDelegate.py", line 88, in init
raise BleakError("Bluetooth device is turned off")
bleak.exc.BleakError: Bluetooth device is turned off
Exception ignored in: <CentralManagerDelegate objective-c instance 0x0>
Traceback (most recent call last):
File "/Users/harshalpatil/.espressif/python_env/idf5.1_py3.10_env/lib/python3.10/site-packages/bleak/backends/corebluetooth/CentralManagerDelegate.py", line 102, in __del__
IndexError: NSRangeException - Cannot remove an observer <CentralManagerDelegate 0x126623430> for the key path "isScanning" from <CBCentralManager 0x60000098c980> because it is not registered as an observer.
Exception ignored in: <function CentralManagerDelegate.__del__ at 0x102a13640>
```
| closed | 2022-09-21T06:48:22Z | 2022-09-21T14:05:24Z | https://github.com/hbldh/bleak/issues/1015 | [] | Harshal5 | 0 |
scrapy/scrapy | web-scraping | 6,600 | Investigate off-by-1 in `scrapy.cmdline._pop_command_name()` | It looks like `del argv[i]` removes the wrong item in `scrapy.cmdline._pop_command_name()`, but since we don't seem to see any problems because of it, it's worth investigating what exactly happens here and either fixing or refactoring the code.
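For illustration, here is how an enumerate-over-a-slice pattern produces that kind of off-by-one (a generic sketch of the suspected pattern, not Scrapy's actual source):

```python
def pop_command_name_buggy(argv):
    for i, arg in enumerate(argv[1:]):
        if not arg.startswith("-"):
            del argv[i]  # i indexes the slice argv[1:], so this deletes argv[0]
            return arg


def pop_command_name_fixed(argv):
    for i, arg in enumerate(argv[1:]):
        if not arg.startswith("-"):
            del argv[i + 1]  # shift back to the full-list index
            return arg


argv = ["scrapy", "crawl", "myspider"]
print(pop_command_name_buggy(argv), argv)  # crawl ['crawl', 'myspider'] -- removed 'scrapy'!
argv = ["scrapy", "crawl", "myspider"]
print(pop_command_name_fixed(argv), argv)  # crawl ['scrapy', 'myspider']
```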
"bug",
"good first issue"
] | wRAR | 4 |
pydantic/logfire | fastapi | 573 | LiteLLM <> Logfire critical error | ### Description
When using logfire with LiteLLM, I get a weird error that crashes the logfire span.
```
import logfire
from litellm import completion

logfire.configure()

with logfire.span("litellm-test") as span:
    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Recommend a fantasy book"}],
    )
    span.set_attribute("response_data", response)
    print(response.choices[0].message.content)
```
and it gives:
```
Internal error in Logfire
Traceback (most recent call last):
File "/home/user/Sources/callisto/backend/nkai/playground/test.py", line 21, in <module>
span.set_attribute("response_data", response)
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/main.py", line 1691, in set_attribute
self._json_schema_properties[key] = create_json_schema(value, set())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 342, in _properties
if (value_schema := create_json_schema(value, seen)) not in PLAIN_SCHEMAS:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 117, in create_json_schema
return _array_schema(obj, seen)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 246, in _array_schema
item_schema = create_json_schema(item, seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 342, in _properties
if (value_schema := create_json_schema(value, seen)) not in PLAIN_SCHEMAS:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 135, in create_json_schema
return schema(obj, seen) if callable(schema) else schema
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 283, in _pydantic_model_schema
return _custom_object_schema(obj, 'PydanticModel', [*fields, *extra], seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/logfire/_internal/json_schema.py", line 356, in _custom_object_schema
**_properties({key: getattr(obj, key) for key in keys}, seen),
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/Niskus/lib/python3.12/site-packages/pydantic/main.py", line 856, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'Message' object has no attribute 'audio'
```
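Until this is fixed upstream, one workaround shape is to flatten the response to plain data before attaching it, so Logfire records a dict instead of introspecting the model object. The helper below is a generic stdlib sketch (the `model_dump`/`dict` fallbacks are common pydantic hooks, not Logfire or LiteLLM API):

```python
import json

def to_plain(obj):
    # Try common serialization hooks (pydantic v2 model_dump, v1-style dict),
    # then keep the object itself if it is already JSON-safe, else repr() it.
    for attr in ("model_dump", "dict"):
        fn = getattr(obj, attr, None)
        if callable(fn):
            try:
                return fn()
            except Exception:
                pass
    try:
        json.dumps(obj)
        return obj
    except TypeError:
        return repr(obj)

# then: span.set_attribute("response_data", to_plain(response))
```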
I tested this using LiteLLM 1.50, 1.51, and 1.52, and Logfire 0.53, 1.x, and 2.x
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="1.3.2"
platform="Linux-6.8.0-45-generic-x86_64-with-glibc2.39"
python="3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0]"
[related_packages]
requests="2.32.3"
pydantic="2.9.2"
fastapi="0.111.1"
openai="1.54.1"
protobuf="4.25.5"
rich="13.9.4"
executing="2.1.0"
opentelemetry-api="1.27.0"
opentelemetry-exporter-otlp-proto-common="1.27.0"
opentelemetry-exporter-otlp-proto-http="1.27.0"
opentelemetry-instrumentation="0.48b0"
opentelemetry-instrumentation-asgi="0.48b0"
opentelemetry-instrumentation-celery="0.48b0"
opentelemetry-instrumentation-fastapi="0.48b0"
opentelemetry-proto="1.27.0"
opentelemetry-sdk="1.27.0"
opentelemetry-semantic-conventions="0.48b0"
opentelemetry-util-http="0.48b0"
```
| closed | 2024-11-06T11:02:26Z | 2024-11-20T09:53:05Z | https://github.com/pydantic/logfire/issues/573 | [
"bug",
"good first issue",
"P1"
] | CharlesOural | 2 |
apache/airflow | data-science | 47,919 | get_uri() is not implemented in task sdk | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
get_uri() is not implemented in connections.py in the task SDK, so it returns None and the DAG fails
[2025-03-18, 16:31:07] INFO - URI: None chan="stdout" source="task"
[2025-03-18, 16:31:07] INFO - An assert is being made below that the uri is of type string chan="stdout" source="task"
[2025-03-18, 16:31:07] ERROR - Task failed with exception source="task" error_detail=[{"exc_type":"AssertionError","exc_value":"","exc_notes":[],"syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task-sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":606,"name":"run"},{"filename":"/opt/airflow/task-sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":730,"name":"_execute_task"},{"filename":"/opt/airflow/task-sdk/src/airflow/sdk/definitions/baseoperator.py","lineno":373,"name":"wrapper"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/python.py","lineno":196,"name":"execute"},{"filename":"/opt/airflow/providers/standard/src/airflow/providers/standard/operators/python.py","lineno":220,"name":"execute_callable"},{"filename":"/opt/airflow/airflow/utils/operator_helpers.py","lineno":261,"name":"run"},{"filename":"/files/dags/connection_tests/test_uri_gen.py","lineno":55,"name":"check_uri_gen"}]}]
### What you think should happen instead?
_No response_
### How to reproduce
Create a connection manually and run the below DAG:
```python
def check_uri_gen():
try:
c = Connection()
conn = c.get_connection_from_secrets(<connection_name>)
uri = conn.get_uri()
print("An assert is being made below that the uri is of type string")
assert isinstance(uri, str)
print(f"The uri is: {uri}")
print("An assert is being made below that the get_uri() function of the Connection class is working correctly correctly")
assert uri == "conn-type-string://username:password@dns_name.dns:33302/database_scheme"
except AirflowRuntimeError:
print("There is no connection to pull data from.")
with DAG(
dag_id=dag_name,
start_date=datetime(2021, 1, 1),
schedule=None,
doc_md=docs,
tags=["core", "connections"],
) as dag:
t1 = PythonOperator(
task_id="check_uri_generation",
python_callable=check_uri_gen,
)
t1
```
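For reference, the URI the assertion expects can be assembled from the connection fields with the stdlib alone — a sketch of the value `get_uri()` is expected to produce, not Airflow's actual implementation (which also handles extras and escaping edge cases):

```python
from urllib.parse import quote

def build_conn_uri(conn_type, login, password, host, port, schema):
    # Airflow-style connection URI: conn-type://login:password@host:port/schema
    return f"{conn_type}://{quote(login)}:{quote(password)}@{host}:{port}/{schema}"

print(build_conn_uri("conn-type-string", "username", "password",
                     "dns_name.dns", 33302, "database_scheme"))
```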
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-18T16:52:02Z | 2025-03-21T15:26:53Z | https://github.com/apache/airflow/issues/47919 | [
"kind:bug",
"priority:medium",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 2 |
ultralytics/yolov5 | pytorch | 13,264 | Syntax and understanding questions about reading tensorflow lite results | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
While researching how to interpret the results of a yolov5s model converted to TensorFlow Lite, I found the following statements and ran across a syntax that I don't understand.
```python
# output only first tensor [1,6300,85] = [xywh, conf, class0, class1, ...]
# x = x[0][0]  # [x(1,6300,85), ...] to x(6300,85)
# xywh = x[..., :4]  # x(6300,4) boxes
# conf = x[..., 4:5]  # x(6300,1) confidences
# cls = tf.reshape(tf.cast(tf.argmax(x[..., 5:], axis=1), tf.float32), (-1, 1))  # x(6300,1) classes
# return tf.concat([conf, cls, xywh], 1)
```
What does the ... in x[..., :4] mean?
For my use case. I'm running a tensorflow lite model in a vendor's SDK. When inspecting the inference results, I get the following shape (1, 25200, 6) from result['StatefulPartitionedCall:0'].shape. Do the results really have 25200 good detections?
The first sample is [ 1 1 1 6 1 127] from result['StatefulPartitionedCall:0'][0][0], where the first 4 elements [ 1 1 1 6 ] are xywh, the 5th element [ 1 ] is the confidence, and the 6th element [127] is the class. Are my assumptions on how to read this correct? I'm finding the class value of 127 hard to believe because I only trained this model using one class.
The vendor's SDK is heavily trimmed down, from the tensorflow library only the tensorflow.core library section is installed. Because of this, tf.reshape, rf.cast, and tf.argmax method are not found. Is there a way to calculate the cls variable by only using numpy?
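For what it's worth, `...` (Python's Ellipsis) in `x[..., :4]` means "all leading axes", so on a 2-D array it is the same as `x[:, :4]`. The TF ops in the commented snippet also have direct numpy equivalents — a sketch assuming the 85-column `[xywh, conf, class0, ...]` layout, with random data standing in for a real model output:

```python
import numpy as np

x = np.random.rand(1, 6300, 85).astype(np.float32)  # stand-in for the model output
x = x[0]           # (6300, 85): drop the batch axis
xywh = x[..., :4]  # (6300, 4) boxes; identical to x[:, :4]
conf = x[..., 4:5] # (6300, 1) confidences
# numpy-only replacement for tf.reshape(tf.cast(tf.argmax(...), tf.float32), (-1, 1))
cls = np.argmax(x[..., 5:], axis=1).astype(np.float32).reshape(-1, 1)
out = np.concatenate([conf, cls, xywh], axis=1)  # (6300, 6)
print(out.shape)
```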
When I look at my model using Netron, I see the following outputs.

Should I dequantize my tensor by applying the given equation to get more understandable results?
Thank you for your time and words of wisdom.
### Additional
_No response_ | closed | 2024-08-17T05:07:53Z | 2024-08-18T15:27:07Z | https://github.com/ultralytics/yolov5/issues/13264 | [
"question"
] | mwickersheim | 1 |
pydantic/pydantic-ai | pydantic | 1,045 | AttributeError: 'Graph' object has no attribute 'iter' | ### Initial Checks
- [x] I confirm that I'm using the latest version of Pydantic AI
### Description
AttributeError: 'Graph' object has no attribute 'iter'
### Example Code
```Python
from __future__ import annotations as _annotations

import asyncio  # needed for asyncio.run(main()) below; missing in the original snippet
from dataclasses import dataclass

from pydantic_graph import Graph, BaseNode, End, GraphRunContext
@dataclass
class CountDownState:
counter: int
@dataclass
class CountDown(BaseNode[CountDownState]):
async def run(self, ctx: GraphRunContext[CountDownState]) -> CountDown | End[int]:
if ctx.state.counter <= 0:
return End(ctx.state.counter)
ctx.state.counter -= 1
return CountDown()
count_down_graph = Graph(nodes=[CountDown])
async def main():
state = CountDownState(counter=3)
async with count_down_graph.iter(CountDown(), state=state) as run:
async for node in run:
print('Node:', node)
#> Node: CountDown()
#> Node: CountDown()
#> Node: CountDown()
#> Node: End(data=0)
print('Final result:', run.result.output)
#> Final result: 0
print('History snapshots:', [step.data_snapshot() for step in run.history])
"""
History snapshots:
[CountDown(), CountDown(), CountDown(), CountDown(), End(data=0)]
"""
if __name__ == "__main__":
asyncio.run(main())
```
### Python, Pydantic AI & LLM client version
```Text
The version of pydantic_ai is 0.0.24 and the version of pydantic_graph is 0.0.24.
``` | closed | 2025-03-04T11:08:25Z | 2025-03-04T11:13:57Z | https://github.com/pydantic/pydantic-ai/issues/1045 | [
"need confirmation"
] | rohithbojja | 4 |
Kaliiiiiiiiii-Vinyzu/patchright-python | web-scraping | 10 | File not found when using PYINSTALLER | Hey there,
when using patchright with PyInstaller on Windows, it shows an error. I am also using Playwright, and that works fine.
The error didn't mention what file was not found. Please see the stacktrace below:
```
File "patchright\sync_api\_context_manager.py", line 60, in start
File "patchright\sync_api\_context_manager.py", line 54, in __enter__
File "patchright\sync_api\_context_manager.py", line 37, in greenlet_main
File "asyncio\base_events.py", line 654, in run_until_complete
File "patchright\_impl\_connection.py", line 240, in run_as_sync
File "patchright\_impl\_connection.py", line 249, in run
File "patchright\_impl\_transport.py", line 108, in connect
File "patchright\_impl\_transport.py", line 95, in connect
File "asyncio\subprocess.py", line 223, in create_subprocess_exec
File "asyncio\base_events.py", line 1708, in subprocess_exec
File "asyncio\windows_events.py", line 399, in _make_subprocess_transport
File "asyncio\base_subprocess.py", line 36, in __init__
File "asyncio\windows_events.py", line 929, in _start
File "asyncio\windows_utils.py", line 153, in __init__
File "subprocess.py", line 1026, in __init__
File "subprocess.py", line 1538, in _execute_child
FileNotFoundError: [WinError 2]
```
```
set PLAYWRIGHT_BROWSERS_PATH=0
playwright install firefox chromium
pyinstaller --onefile -F --icon=myicon.ico -n main main.py
``` | closed | 2025-01-13T13:11:41Z | 2025-02-16T20:58:33Z | https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python/issues/10 | [
"enhancement",
"third-party"
] | brunoamuniz | 5 |
wkentaro/labelme | computer-vision | 726 | Can image zoom in/out be added? Some labels are too dense and hard to delineate [Feature] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2020-07-16T09:32:45Z | 2021-09-30T13:59:34Z | https://github.com/wkentaro/labelme/issues/726 | [] | lx-rookie | 2 |
deepspeedai/DeepSpeed | machine-learning | 6,995 | AttributeError: 'DeepSpeedZeroOptimizer' object has no attribute 'ipg_index' | Hello, I am encountering an issue with deepspeed and would appreciate your help.
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 568, in <module>
[rank2]: main()
[rank2]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 564, in main
[rank2]: mmtrainer.train()
[rank2]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 300, in train
[rank2]: train_loss, train_acc, train_auc = self._train_one_epoch()
[rank2]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 425, in _train_one_epoch
[rank2]: loss.backward()
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
[rank2]: torch.autograd.backward(
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank2]: _engine_run_backward(
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
[rank2]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 809, in reduce_partition_and_remove_grads
[rank2]: self.reduce_ready_partitions_and_remove_grads(param, i)
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1257, in reduce_ready_partitions_and_remove_grads
[rank2]: self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
[rank2]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 852, in reduce_independent_p_g_buckets_and_remove_grads
[rank2]: new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel())
[rank2]: AttributeError: 'DeepSpeedZeroOptimizer' object has no attribute 'ipg_index'
0%| | 0/1 [00:02<?, ?it/s]
[rank3]: Traceback (most recent call last):
[rank3]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 568, in <module>
[rank3]: main()
[rank3]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 564, in main
[rank3]: mmtrainer.train()
[rank3]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 300, in train
[rank3]: train_loss, train_acc, train_auc = self._train_one_epoch()
[rank3]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 425, in _train_one_epoch
[rank3]: loss.backward()
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
[rank3]: torch.autograd.backward(
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank3]: _engine_run_backward(
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
[rank3]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 809, in reduce_partition_and_remove_grads
[rank3]: self.reduce_ready_partitions_and_remove_grads(param, i)
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1257, in reduce_ready_partitions_and_remove_grads
[rank3]: self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
[rank3]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 852, in reduce_independent_p_g_buckets_and_remove_grads
[rank3]: new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel())
[rank3]: AttributeError: 'DeepSpeedZeroOptimizer' object has no attribute 'ipg_index'
/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py:769: UserWarning: c10d::broadcast_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py:769: UserWarning: Error detected in torch::autograd::AccumulateGrad. No forward pass information available. Enable detect anomaly during forward pass for more information. (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:89.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
0%| | 0/1 [00:02<?, ?it/s]
[rank1]: Traceback (most recent call last):
[rank1]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 568, in <module>
[rank1]: main()
[rank1]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 564, in main
[rank1]: mmtrainer.train()
[rank1]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 300, in train
[rank1]: train_loss, train_acc, train_auc = self._train_one_epoch()
[rank1]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 425, in _train_one_epoch
[rank1]: loss.backward()
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
[rank1]: torch.autograd.backward(
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank1]: _engine_run_backward(
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
[rank1]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 809, in reduce_partition_and_remove_grads
[rank1]: self.reduce_ready_partitions_and_remove_grads(param, i)
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1257, in reduce_ready_partitions_and_remove_grads
[rank1]: self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
[rank1]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 852, in reduce_independent_p_g_buckets_and_remove_grads
[rank1]: new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel())
[rank1]: AttributeError: 'DeepSpeedZeroOptimizer' object has no attribute 'ipg_index'
input x dtype:torch.float32
/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py:769: UserWarning: c10d::broadcast_: an autograd kernel was not registered to the Autograd key(s) but we are trying to backprop through it. This may lead to silently incorrect behavior. This behavior is deprecated and will be removed in a future version of PyTorch. If your operator is differentiable, please ensure you have registered an autograd kernel to the correct Autograd key (e.g. DispatchKey::Autograd, DispatchKey::CompositeImplicitAutograd). If your operator is not differentiable, or to squash this warning and use the previous behavior, please register torch::CppFunction::makeFallthrough() to DispatchKey::Autograd. (Triggered internally at ../torch/csrc/autograd/autograd_not_implemented_fallback.cpp:63.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py:769: UserWarning: Error detected in torch::autograd::AccumulateGrad. No forward pass information available. Enable detect anomaly during forward pass for more information. (Triggered internally at ../torch/csrc/autograd/python_anomaly_mode.cpp:89.)
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
0%| | 0/1 [00:02<?, ?it/s]
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 568, in <module>
[rank0]: main()
[rank0]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 564, in main
[rank0]: mmtrainer.train()
[rank0]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 300, in train
[rank0]: train_loss, train_acc, train_auc = self._train_one_epoch()
[rank0]: File "/root/Desktop/code/txf/teng_code_test/zero2_test/trainer.py", line 425, in _train_one_epoch
[rank0]: loss.backward()
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/_tensor.py", line 521, in backward
[rank0]: torch.autograd.backward(
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/__init__.py", line 289, in backward
[rank0]: _engine_run_backward(
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
[rank0]: return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 809, in reduce_partition_and_remove_grads
[rank0]: self.reduce_ready_partitions_and_remove_grads(param, i)
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 1257, in reduce_ready_partitions_and_remove_grads
[rank0]: self.reduce_independent_p_g_buckets_and_remove_grads(param, i)
[rank0]: File "/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed/runtime/zero/stage_1_and_2.py", line 852, in reduce_independent_p_g_buckets_and_remove_grads
[rank0]: new_grad_tensor = self.ipg_buffer[self.ipg_index].narrow(0, self.elements_in_ipg_bucket, param.numel())
[rank0]: AttributeError: 'DeepSpeedZeroOptimizer' object has no attribute 'ipg_index'
```
ds_report:
```
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [YES] ...... [OKAY]
cpu_adam ............... [YES] ...... [OKAY]
fused_adam ............. [YES] ...... [OKAY]
fused_lamb ............. [YES] ...... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/torch']
torch version .................... 2.4.1+cu118
deepspeed install path ........... ['/root/anaconda3/envs/tpcgnn/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.9.5+fc9e1ee0, fc9e1ee0, HEAD
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.4, cuda 11.8
```
Thank you in advance for your help! I look forward to your response.
| closed | 2025-02-03T09:52:38Z | 2025-02-17T16:47:49Z | https://github.com/deepspeedai/DeepSpeed/issues/6995 | [] | Tengxf | 5 |
Yorko/mlcourse.ai | matplotlib | 719 | Typo in the feature naming in Reduction Impurity counting | In the book's [Feature importance page](https://mlcourse.ai/book/topic05/topic5_part3_feature_importance.html), there is a typo in a feature name. One of the chosen features should be "Petal length (cm)".
<img width="767" alt="image" src="https://user-images.githubusercontent.com/17138883/189652317-d999f0a6-43bc-4b74-99c7-a3b0ba1a117d.png">
| closed | 2022-09-12T12:26:14Z | 2022-09-13T23:01:01Z | https://github.com/Yorko/mlcourse.ai/issues/719 | [] | aulasau | 1 |
pydata/pandas-datareader | pandas | 228 | EUROSTAT test is broken | ```
======================================================================
FAIL: test_get_sts_cobp_a (pandas_datareader.tests.test_eurostat.TestEurostat)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/travis/build/femtotrader/pandas-datareader/pandas_datareader/tests/test_eurostat.py", line 74, in test_get_sts_cobp_a
tm.assert_series_equal(result, expected)
File "/home/travis/miniconda/envs/test-environment/lib/python2.7/site-packages/pandas/util/testing.py", line 681, in assert_series_equal
assert_almost_equal(left.values, right.values, check_less_precise)
File "pandas/src/testing.pyx", line 58, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2906)
File "pandas/src/testing.pyx", line 93, in pandas._testing.assert_almost_equal (pandas/src/testing.c:1917)
File "pandas/src/testing.pyx", line 140, in pandas._testing.assert_almost_equal (pandas/src/testing.c:2724)
AssertionError: expected 200.05000 but got 200.00000, with decimal 5
```
https://travis-ci.org/femtotrader/pandas-datareader/jobs/158256932
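The diff (200.05 vs 200.00) looks like upstream Eurostat revised the series rather than a code regression, so the test either needs its expected value refreshed or a comparison tolerance. A tolerance sketch with modern pandas (the `rtol` value is illustrative, not a project convention):

```python
import pandas as pd

left = pd.Series([200.00])
right = pd.Series([200.05])  # revised upstream value
# rtol loosens the element-wise comparison; 1e-3 absorbs a 0.05 revision at ~200
pd.testing.assert_series_equal(left, right, rtol=1e-3)
print("ok")
```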
| closed | 2016-09-07T18:51:22Z | 2016-09-07T19:16:12Z | https://github.com/pydata/pandas-datareader/issues/228 | [] | femtotrader | 1 |