| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 distinct value) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 distinct values) | title (string, length 1–744) | labels (string, length 4–574) | body (string, length 9–211k) | index (string, 10 distinct values) | text_combine (string, length 96–211k) | label (string, 2 distinct values) | text (string, length 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,726
| 17,936,626,750
|
IssuesEvent
|
2021-09-10 16:07:50
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
opened
|
Process functions have inconsistent behavior in validating inputs nested in dictionaries
|
type/bug priority/important topic/engine topic/processes
|
Consider the two following calcfunctions
```python
@engine.calcfunction
def test_kwargs(**kwargs):
results = {}
for key, value in kwargs['namespace'].items():
results[key] = value + 1
return results
test_kwargs(namespace={'a': orm.Int(1)})
@engine.calcfunction
def test_kwargs(namespace):
results = {}
for key, value in namespace.items():
results[key] = value + 1
return results
test_kwargs(namespace={'a': orm.Int(1)})
```
The first one will work just fine, but the second will except, complaining that `namespace` cannot be a normal dictionary. They clearly are identical and the second should also be accepted, as long as all leaf values are storable. The `namespace` should literally just become the namespace in the link labels.
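The asymmetry can be reproduced without AiiDA at all: validation that only inspects the top-level value rejects the plain dictionary, while a recursive check of the leaves accepts both call styles. A minimal stand-in sketch (the `Int` class and `leaves_storable` helper are hypothetical illustrations, not aiida-core API):
```python
class Int:
    """Stand-in for orm.Int: a storable wrapper around a plain integer (not aiida-core code)."""
    def __init__(self, value):
        self.value = value

def leaves_storable(obj):
    """Accept a value if it is storable, or a dict whose leaf values all are."""
    if isinstance(obj, dict):
        return all(leaves_storable(v) for v in obj.values())
    return isinstance(obj, Int)

# A check that only looks at the top level rejects the plain dict...
assert not isinstance({'a': Int(1)}, Int)
# ...while recursive leaf validation accepts it, which is what the issue asks for:
assert leaves_storable({'a': Int(1)})
assert not leaves_storable({'a': 1})
```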
|
1.0
|
process
| 1
|
106,004
| 11,464,454,607
|
IssuesEvent
|
2020-02-07 18:06:46
|
luckygogreen/AccessGateway
|
https://api.github.com/repos/luckygogreen/AccessGateway
|
closed
|
Celery settings configuration
|
documentation help wanted
|
`CELERY_DEFAULT_QUEUE`: the default queue
`BROKER_URL`: the broker (i.e. RabbitMQ) URL
`CELERY_RESULT_BACKEND`: where results are stored
`CELERY_TASK_SERIALIZER`: task serialization format
`CELERY_RESULT_SERIALIZER`: task result serialization format
`CELERY_TASK_RESULT_EXPIRES`: task result expiry time
`CELERY_ACCEPT_CONTENT`: the content (serialization) types accepted for tasks, as a list
# Note: since Celery 4, CELERY_BROKER_URL was renamed to BROKER_URL
BROKER_URL = 'amqp://username:passwd@host:port/vhost_name'
# Where results are delivered
CELERY_RESULT_BACKEND = 'redis://username:passwd@host:port/db'
# Task serialization format
CELERY_TASK_SERIALIZER = 'msgpack'
# Result serialization format
CELERY_RESULT_SERIALIZER = 'msgpack'
# Expiry time for stored celery task results
CELERY_TASK_RESULT_EXPIRES = 60 * 20
# Serialization types accepted for tasks
CELERY_ACCEPT_CONTENT = ["msgpack"]
# Whether acknowledgement happens only after the task completes; this slightly affects performance
CELERY_ACKS_LATE = True
# Compression scheme: zlib or bzip2; by default data is sent uncompressed
CELERY_MESSAGE_COMPRESSION = 'zlib'
# Time limit for completing a task
CELERYD_TASK_TIME_LIMIT = 5  # finish within 5 s, otherwise the worker running the task is killed and the task is handed back to the parent process
# Worker concurrency; defaults to the number of server cores, the same value as the -c command-line option
CELERYD_CONCURRENCY = 4
# How many tasks a worker prefetches from RabbitMQ at a time
CELERYD_PREFETCH_MULTIPLIER = 4
# How many tasks a worker executes before being replaced; unlimited by default
CELERYD_MAX_TASKS_PER_CHILD = 40
# Default queue name: messages that match no other queue land in the default queue,
# and if no routing is configured at all, everything is sent to the default queue
CELERY_DEFAULT_QUEUE = "default"
# Detailed queue definitions
CELERY_QUEUES = {
    "default": {  # the default queue specified above
        "exchange": "default",
        "exchange_type": "direct",
        "routing_key": "default"
    },
    "topicqueue": {  # a topic queue: every routing key starting with "topictest" is routed here
        "routing_key": "topic.#",
        "exchange": "topic_exchange",
        "exchange_type": "topic",
    },
    "task_eeg": {  # a fanout exchange
        "exchange": "tasks",
        "exchange_type": "fanout",
        "binding_key": "tasks",
    },
}
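For the `topicqueue` binding above, topic matching can be sketched with a toy matcher (illustrative only; this is not Celery or RabbitMQ code, and real AMQP `#` also matches zero words):
```python
import re

def amqp_match(binding_key, routing_key):
    """Toy AMQP topic matcher: '*' matches exactly one dot-separated word,
    '#' matches one or more words here (real AMQP '#' also matches zero)."""
    parts = []
    for word in binding_key.split('.'):
        if word == '#':
            parts.append(r'[^.]+(?:\.[^.]+)*')
        elif word == '*':
            parts.append(r'[^.]+')
        else:
            parts.append(re.escape(word))
    return re.fullmatch(r'\.'.join(parts), routing_key) is not None

# Keys routed to "topicqueue" via its "topic.#" binding:
assert amqp_match('topic.#', 'topic.test')
assert amqp_match('topic.#', 'topic.a.b')
# A key with a different first word falls through to another queue:
assert not amqp_match('topic.#', 'tasks.run')
```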
|
1.0
|
non_process
| 0
|
32,402
| 6,052,396,659
|
IssuesEvent
|
2017-06-13 04:40:41
|
mds3dstn71/bryce
|
https://api.github.com/repos/mds3dstn71/bryce
|
opened
|
Versioning problematic when projects are worked simultaneously
|
documentation
|
The current Versioning section contains the following:
> [MAIN].[SITE].[MINOR]
- `MAIN` - a bump would mean a drastic change in content presentation for the
home page
- `SITE` - current web site clone task, e.g. S01 => 1, S32 => 32, etc.
- `MINOR` - closed issues and minor changes introduced after bumping to current
`SITE`; all minor changes within a day are counted as a single bump
`MINOR` bumps reflect the changes made on the current `SITE`. But what if two `SITE`s are given updates? How can the `MINOR` bump reflect that it is an update for a particular `SITE` version?
Possible solutions:
1. Who cares?
2. Use a different version scheme (TODO)
3. Separate `SITE`s to their own repositories
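To make the ambiguity concrete, here is a toy parser of the `[MAIN].[SITE].[MINOR]` scheme (illustrative only):
```python
def parse_version(version):
    """Parse the [MAIN].[SITE].[MINOR] scheme described above."""
    main, site, minor = (int(part) for part in version.split('.'))
    return {'main': main, 'site': site, 'minor': minor}

# Two sites updated on the same day both land on the same MINOR value,
# so nothing in the number says which SITE a given bump belongs to:
assert parse_version('1.1.3')['minor'] == parse_version('1.32.3')['minor']
```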
|
1.0
|
non_process
| 0
|
407,218
| 11,908,171,859
|
IssuesEvent
|
2020-03-31 00:11:00
|
eclipse-ee4j/glassfish
|
https://api.github.com/repos/eclipse-ee4j/glassfish
|
closed
|
"asadmin help create-auth-realm" shows incorrect information
|
Component: command_line_interface ERR: Assignee Priority: Minor Stale Type: Bug
|
When reading the text produced by `asadmin help create-auth-realm`, under "--property", subsection "You can specify the following properties for JDBCRealm:", the property "group-table" has 3 entries (2 should be removed). Property "group-table-user-name-column" was not mentioned (and should be added).
(This corresponds to the web admin console's "Group Table User Name Column" field when creating a new JDBC authentication realm.)
The datestamp at the end of the file shows "Java EE 7 20 Sep 2010 create-auth-realm(1)".
#### Environment
Generic (JRE)
#### Affected Versions
[4.1, 4.1.1]
|
1.0
|
non_process
| 0
|
2,685
| 5,534,910,057
|
IssuesEvent
|
2017-03-21 16:17:58
|
elastic/beats
|
https://api.github.com/repos/elastic/beats
|
opened
|
Support Azure in add_cloud_metadata Processor
|
:Processors enhancement libbeat
|
I had a request for this at ElasticON. I'm not finding much documentation via Google (maybe I should be trying Live). I'll see about getting an Azure account to spin up an instance and see what's available from http://169.254.169.254/metadata/v1/maintenance.
https://azure.microsoft.com/en-us/blog/what-just-happened-to-my-vm-in-vm-metadata-service/
https://azure.microsoft.com/en-us/blog/accessing-and-using-azure-vm-unique-id/
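For reference, the modern Azure Instance Metadata Service requires a `Metadata: true` header and an `api-version` query parameter; a sketch of building such a request (the path and version below are assumptions, not taken from this issue, and the request is constructed but not sent):
```python
import urllib.request

def imds_request(path, api_version="2019-06-01"):
    """Build (but do not send) a request to the Azure Instance Metadata
    Service link-local endpoint, which requires the 'Metadata: true' header.
    The path and api-version are illustrative assumptions."""
    url = f"http://169.254.169.254/metadata/{path}?api-version={api_version}"
    return urllib.request.Request(url, headers={"Metadata": "true"})

req = imds_request("instance")
assert req.get_header("Metadata") == "true"
```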
|
1.0
|
process
| 1
|
14,668
| 17,787,177,689
|
IssuesEvent
|
2021-08-31 12:30:44
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
reopened
|
Dependency Dashboard
|
api: bigquery type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/all -->[chore(deps): update all dependencies](../pull/926) (`google-cloud-bigquery`, `google-cloud-testutils`, `google-crc32c`, `importlib-metadata`, `pytest`, `typing-extensions`)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
process
| 1
|
117,999
| 9,967,800,787
|
IssuesEvent
|
2019-07-08 14:20:12
|
raiden-network/raiden
|
https://api.github.com/repos/raiden-network/raiden
|
closed
|
Increase test coverage on raiden/raiden_service.py
|
testing
|
`raiden/raiden_service.py` currently has 85% test coverage. Increase it to 90%.
Coverage report: https://codecov.io/gh/raiden-network/raiden/tree/master/raiden
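Back-of-the-envelope arithmetic for what the bump from 85% to 90% implies (the statement total below is hypothetical):
```python
import math

def statements_to_cover(total_statements, current_pct, target_pct):
    """How many currently uncovered statements must gain coverage to move
    from current_pct to target_pct. Rough arithmetic with a hypothetical
    statement count, not the module's real size."""
    return math.ceil(total_statements * (target_pct - current_pct) / 100)

# For a hypothetical 1000-statement module, 85% -> 90% means ~50 more statements:
assert statements_to_cover(1000, 85, 90) == 50
```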
|
1.0
|
non_process
| 0
|
7,686
| 10,774,097,721
|
IssuesEvent
|
2019-11-03 02:02:48
|
bisq-network/bisq
|
https://api.github.com/repos/bisq-network/bisq
|
closed
|
Allow for input of custom addresses and amount on payout transaction
|
in:trade-process in:wallet was:dropped
|
At present, a Bisq trade has 4 phases:
1. Make offer:
A) Maker transaction
◦ trade fee
◦ security deposit
◦ payment (if BTC seller)
2. Take offer:
A) Taker transaction
▪ trade fee
▪ security deposit
▪ payment (if BTC seller)
B) Escrow transaction
▪ payment
▪ seller’s security deposit
▪ buyer’s security deposit
3. The BTC buyer confirms Fiat/Altcoin payment sent
A) Payout transaction is created and signed by the buyer
◦ seller’s security deposit
◦ buyer’s security deposit
◦ buyer’s payment
4. The BTC seller confirms that they’ve received the Fiat/Altcoin payment
A) Payout transaction is signed and broadcast by the seller
▪ seller’s security deposit
▪ buyer’s security deposit
▪ buyer’s payment
My suggestion is to allow, on phases 1 and 2, traders to set custom addresses on which to receive their payout amounts.
Bisq is a platform that facilitates trust-less trade. There’s little reason as to why the payout amounts should be deposited in the Bisq internal wallet. Most people will eventually withdraw that amount onto their own external wallets.
So there should be an optional placeholder field for traders to put a specific address of their own to an external wallet. The offer maker could specify theirs on the “Make offer” screen and the offer taker could specify theirs on the “Take offer” screen.
Multiple payout addresses and custom splits
There could also be the option to specify up to 2 outputs (addresses) for the offer taker. This would increase the payout transaction size but since it’s the offer taker who pays for the miner fees, they could make that judgment.
This tool would allow for the enforcement of off-Bisq contracts/deals. The requirement of BTC to conduct trades on Bisq is a detractor for many newcomers to Bitcoin who don’t own any. This would allow for an on-boarding platform for newcomers without having them go to central exchanges. A newcomer would go to the Bisq forum or slack and ask for a loan. Someone would take them on their offer and privately, the bitcoiner would provide the newcomer with a reimbursement address for the loan and they’d agree on values (interest). The newcomer would then present the bitcoiner with the “Take offer” screen, which would show the bitcoiner’s addresses as one of the payout addresses and with the agreed amount set to it. This could be done by a screen-share or a video call. The bitcoiner would then fund the trade by scanning the QR-code on screen (or else).
With this architecture, the lender would be certain that the Bisq software would send them, at the end, their due amount and they wouldn’t have to trust the newcomer.
UI examples


|
1.0
|
process
| 1
|
46,571
| 6,024,470,221
|
IssuesEvent
|
2017-06-08 05:18:22
|
openMF/community-app
|
https://api.github.com/repos/openMF/community-app
|
closed
|
Reskin: Large gap between Field Name and its corresponding radio button
|
design gsoc p1 reskin
|
Check the screenshots for some of the cases:



|
1.0
|
non_process
| 0
|
7,676
| 10,761,671,574
|
IssuesEvent
|
2019-10-31 21:19:44
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
QGIS crashes when generating tiles with Raster tools - Generate XYZ tiles (Directory) at zoom 19
|
Bug Processing
|
QGIS (versions 3.8.3 and 3.4.12) crashes when generating tiles with Raster tools - Generate XYZ tiles (Directory) for zoom 19.
The settings are attached (screenshot1.PNG).
The error, which appears once processing reaches 80%, is attached (screenshot2.PNG).
Stack trace:
NCS::JPC::CTilePartHeaderBase::_GetSequentialPacketHeader :
NCS::JPC::CTilePartHeader::GetSequentialPacketHeader :
NCS::JPC::CPrecinct::ReadPackets :
NCS::JPC::CMTPrecinct::CreateSubBands :
NCS::JPC::CPrecinct::Read :
NCS::JPC::CResolution::ReadSubBandLineMT :
NCS::JPC::CResolution::INTERLEAVE_2D :
NCS::JPC::CResolution::HOR_SR :
NCS::JPC::CResolution::GET_STATE_BUFFER :
NCS::JPC::CResolution::VER_SR_INPUT2 :
NCS::JPC::CResolution::VER_SR :
NCS::JPC::CResolution::SR_2D :
NCS::JPC::CResolution::Read :
NCS::SDK::CNode2D::Read :
NCS::JPC::CComponent::Read :
NCS::SDK::CNode2D::ReadInputs :
NCS::JPC::CMCTNode::ReadInputs :
NCS::JPC::CMCTNode::Read :
NCS::SDK::CNode2D::Read :
NCS::JPC::CDCShiftNode::Read :
NCS::SDK::CNode2D::Read :
NCS::SDK::CNodeTiler2D::Read :
NCS::JPC::CResampler::ReadInternalMT :
NCS::JPC::CResampler::Read :
NCS::JP2::CReader::ReadLine :
NCS::CView::ReadLineBILInternal :
NCS::CView::ReadLineBIL :
ECWDataset::ReadBandsDirectly :
ECWDataset::ReadBands :
ECWDataset::IRasterIO :
ECWDataset::AdviseRead :
GDALRasterBand::RasterIO :
QgsMultiBandColorRenderer::block :
QgsBrightnessContrastFilter::block :
QgsHueSaturationFilter::block :
QgsRasterResampleFilter::block :
QgsRasterProjector::block :
QgsRasterIterator::readNextRasterPart :
QgsRasterDrawer::draw :
QgsRasterLayerRenderer::render :
QgsMapRendererCustomPainterJob::doRender :
QgsMapRendererCustomPainterJob::start :
PyInit__core :
QgsMapRendererCustomPainterJob::renderSynchronously :
PyInit__core :
PyMethodDef_RawFastCallKeywords :
PyMethodDef_RawFastCallKeywords :
PyEval_EvalFrameDefault :
PyMethodDef_RawFastCallKeywords :
PyEval_EvalFrameDefault :
PyFunction_FastCallDict :
PyMethodDef_RawFastCallDict :
PyObject_Call :
PyInit_sip :
CPLStringList::empty :
PyInit__core :
QgsProcessingAlgorithm::runPrepared :
QgsProcessingAlgRunnerTask::run :
PyInit__core :
QgsTask::start :
QThreadPoolPrivate::reset :
QThread::start :
BaseThreadInitThunk :
RtlUserThreadStart :
QGIS Info
QGIS Version: 3.4.12-Madeira
QGIS code revision: 625767347a
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.4.1
Running against GDAL: 2.4.1
System Info
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 6.3.9600


Thank you.
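As a rough sanity check on the scale involved, the standard slippy-map (XYZ) tile math, independent of QGIS's implementation, shows how quickly tile counts explode at zoom 19:
```python
import math

def lon_lat_to_tile(lon, lat, zoom):
    """Standard slippy-map (XYZ) tile indices for a WGS84 point;
    this is the generic scheme, not QGIS internals."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_count(lon_min, lat_min, lon_max, lat_max, zoom):
    """Number of XYZ tiles covering a bounding box at a given zoom."""
    x0, y1 = lon_lat_to_tile(lon_min, lat_min, zoom)
    x1, y0 = lon_lat_to_tile(lon_max, lat_max, zoom)
    return (abs(x1 - x0) + 1) * (abs(y1 - y0) + 1)

# Even a ~0.1 degree square needs tens of thousands of tiles at zoom 19:
assert tile_count(2.0, 48.0, 2.1, 48.1, 19) > 10_000
```
Each zoom level quadruples the tile count, so a large extent at zoom 19 can easily exhaust memory or disk during generation.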
|
1.0
|
process
| 1
|
20,471
| 27,131,399,766
|
IssuesEvent
|
2023-02-16 10:02:34
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Require Python imports to be qualified with repo name
|
P4 type: process team-Rules-Python stale
|
For context see #6886, #7051, and [discussion thread](https://groups.google.com/forum/?nomobile=true#!topic/bazel-sig-python/H26e3VAxsp8).
For all Python modules that are defined within the build (not standard library modules, not extra libraries installed non-hermetically on the system), we should require that they be imported using their fully qualified name, which includes the repo's canonical name. For instance, we should not allow importing Python modules defined in bazel packages `//third_party/...` or `@some_repo//third_party/...` via the import statement `import third_party.[...]`, precisely because that's ambiguous and collides between same-named packages in different repos.
For cases where the package really should be top-level, like a repo that exposes an existing library like `numpy` to the build, the `py_library` can still use the `imports` attribute to put it on `PYTHONPATH`.
We need a design doc to cover this change and also how to handle importing repos from runfiles in the face of repo renaming.
|
1.0
|
Require Python imports to be qualified with repo name - For context see #6886, #7051, and [discussion thread](https://groups.google.com/forum/?nomobile=true#!topic/bazel-sig-python/H26e3VAxsp8).
For all Python modules that are defined within the build (not standard library modules, not extra libraries installed non-hermetically on the system), we should require that they be imported using their fully qualified name, which includes the repo's canonical name. For instance, we should not allow importing Python modules defined in bazel packages `//third_party/...` or `@some_repo//third_party/...` via the import statement `import third_party.[...]`, precisely because that's ambiguous and collides between same-named packages in different repos.
For cases where the package really should be top-level, like a repo that exposes an existing library like `numpy` to the build, the `py_library` can still use the `imports` attribute to put it on `PYTHONPATH`.
We need a design doc to cover this change and also how to handle importing repos from runfiles in the face of repo renaming.
|
process
|
require python imports to be qualified with repo name for context see and for all python modules that are defined within the build not standard library modules not extra libraries installed non hermetically on the system we should require that they be imported using their fully qualified name which includes the repo s canonical name for instance we should not allow importing python modules defined in bazel packages third party or some repo third party via the import statement import third party precisely because that s ambiguous and collides between same named packages in different repos for cases where the package really should be top level like a repo that exposes an existing library like numpy to the build the py library can still use the imports attribute to put it on pythonpath we need a design doc to cover this change and also how to handle importing repos from runfiles in the face of repo renaming
| 1
|
182,543
| 14,917,299,887
|
IssuesEvent
|
2021-01-22 19:37:58
|
wix/sentry-testkit
|
https://api.github.com/repos/wix/sentry-testkit
|
opened
|
update sentry-testkit documentation on docs.sentry.io
|
documentation goodness-squad
|
There's a section of `Sentry-Testkit` in official [Sentry Docs](https://docs.sentry.io/platforms/javascript/configuration/sentry-testkit/) pages.
We need to enrich the very initial readme documentation we have there, as it might be outdated and not informative enough.
The sections we need to add are:
* Integration with Puppeteer
* Network Interception
It may be copied from our docs pages. The purpose of this task is so it will be published there as well.
|
1.0
|
update sentry-testkit documentation on docs.sentry.io - There's a section of `Sentry-Testkit` in official [Sentry Docs](https://docs.sentry.io/platforms/javascript/configuration/sentry-testkit/) pages.
We need to enrich the very initial readme documentation we have there, as it might be outdated and not informative enough.
The sections we need to add are:
* Integration with Puppeteer
* Network Interception
It may be copied from our docs pages. The purpose of this task is so it will be published there as well.
|
non_process
|
update sentry testkit documentation on docs sentry io there s a section of sentry testkit in official pages we need to enrich the the very initial readme documentation we have there as it might be outdated and not informal enough the sections we need to add are integration with puppeteer network interception it may be copied from our docs pages the purpose of this task is so it will be published there as well
| 0
|
18,952
| 24,912,144,528
|
IssuesEvent
|
2022-10-30 00:58:48
|
opensearch-project/data-prepper
|
https://api.github.com/repos/opensearch-project/data-prepper
|
opened
|
OpenSearch Enrichment Processor
|
plugin - processor
|
**Is your feature request related to a problem? Please describe.**
Pipeline authors often want to enrich Events with data from an existing OpenSearch cluster. This allows authors to enrich events with data from other events which were already sent to OpenSearch.
**Describe the solution you'd like**
Provide an OpenSearch enrichment processor. It would take some of the following parameters.
* A query template which can perform queries using parameters from the input Event.
* Document to Event mappings.
* The same connection configuration options as available in the `opensearch` sink.
```
processor:
- opensearch_enrichment:
query: "requestId:${/requestId}"
mappings:
- from_key: "bytes"
to_key: "bytes"
hosts: ["https://localhost:9200"]
cert: path/to/cert
username: YOUR_USERNAME_HERE
password: YOUR_PASSWORD_HERE
```
**Context**
This plugin would probably have some similarities to an OpenSearch plugin for Logstash, as proposed in the following issues.
https://github.com/opensearch-project/OpenSearch/issues/1976
https://github.com/opensearch-project/opensearch-clients/issues/4
|
1.0
|
OpenSearch Enrichment Processor - **Is your feature request related to a problem? Please describe.**
Pipeline authors often want to enrich Events with data from an existing OpenSearch cluster. This allows authors to enrich events with data from other events which were already sent to OpenSearch.
**Describe the solution you'd like**
Provide an OpenSearch enrichment processor. It would take some of the following parameters.
* A query template which can perform queries using parameters from the input Event.
* Document to Event mappings.
* The same connection configuration options as available in the `opensearch` sink.
```
processor:
- opensearch_enrichment:
query: "requestId:${/requestId}"
mappings:
- from_key: "bytes"
to_key: "bytes"
hosts: ["https://localhost:9200"]
cert: path/to/cert
username: YOUR_USERNAME_HERE
password: YOUR_PASSWORD_HERE
```
**Context**
This plugin would probably have some similarities to an OpenSearch plugin for Logstash, as proposed in the following issues.
https://github.com/opensearch-project/OpenSearch/issues/1976
https://github.com/opensearch-project/opensearch-clients/issues/4
|
process
|
opensearch enrichment processor is your feature request related to a problem please describe pipeline authors often want to enrich events with data from an existing opensearch cluster this allows authors to enrich events with data from other events which were already sent to opensearch describe the solution you d like provide an opensearch enrichment processor it would take some of the following parameters a query template which can perform queries using parameters from the input event document to event mappings the same connection configuration options as available in the opensearch sink processor opensearch enrichment query requestid requestid mappings from key bytes to key bytes hosts cert path to cert username your username here password your password here context this plugin would probably have some similarities to an opensearch plugin for logstash as proposed in the following issues
| 1
|
291,074
| 8,919,700,147
|
IssuesEvent
|
2019-01-21 02:17:48
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.roblox.com - Roblox Player Launcher not working
|
browser-firefox priority-important severity-important
|
<!-- @browser: Firefox 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.roblox.com
**Browser / Version**: Firefox 66.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Roblox Player Launcher not working
**Steps to Reproduce**:
After clicking the play button and installing the launcher, it should automatically launch Roblox, but the launcher isn't working; on other browsers it works normally.
[](https://webcompat.com/uploads/2018/12/4a549053-48b8-4f5c-afb1-d199487f6ade.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181217093726</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u"[console.log(\n _______ _________ _____ ______ _\n / _____ \\ |____ ____| / ___ \\ | ____ \\ | |\n / / \\_\\ | | / / \\ \\ | | \\ \\ | |\n | | | | / / \\ \\ | | | | | |\n \\ \\______ | | | | | | | |___/ / | |\n \\______ \\ | | | | | | | ____/ | |\n \\ \\ | | | | | | | | | |\n _ | | | | \\ \\ / / | | |_|\n \\ \\_____/ / | | \\ \\___/ / | | _\n \\_______/ |_| \\_____/ |_| |_|\n\n Keep your account safe! Do not send any information from\n here to anyone or paste any text here.\n\n If someone is asking you to copy or paste text here then\n you're giving someone access to your account, your gear,\n and your Robux.\n\n To learn more about keeping your account safe you can go to\n\n https://en.help.roblox.com/hc/en-us/articles/203313380-Account-Security-Theft-Keeping-your-Account-Safe-) https://js.rbxcdn.com/351896654d3d3fe0de05f95c1cd5f79f.js.gzip:61:1294]", u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 22}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod 
adresemhttps://securepubads.g.doubleclick.net/gampad/ads?gdfp_req=1&pvsid=2828271492004104&correlator=461811085301000&output=json_html&callback=googletag.impl.pubads.callbackProxy1&impl=fifs&adsid=AKTT1HW0vuBIwstMv1WIJhs-TfH4E5ejvmCVGNy3IeZ-rd7KC88lhxdeMg&pucrd=CgwIABABGAMgACgAOAESAhgHeAM&jar=2018-12-18-14&json_a=1&eid=21062454%2C21062844&vrg=285&guci=1.2.0.0.2.2.0.0&plat=1%3A1081352%2C2%3A1081352&sc=1&sfv=1-0-31&iu_parts=1015347%2CRoblox_GameDetail_Top_728x90%2CRoblox_GameDetail_Right_160x600&enc_prev_ius=%2F0%2F1%2C%2F0%2F2&prev_iu_szs=728x90%2C160x600&cust_params=Age%3D23%252C18AndOver%26A%3D23%252C18AndOver%26Genres%3DFPS%26Env%3DProduction%26PlaceID%3D301549746%26Gender%3DMale%26PLVU%3DFalse&cookie=ID%3Dd6170694a7f6b797%3AT%3D1542571848%3AS%3DALNI_MbKrIpPjUNMsPAZE_z5CntE9TEBcA&bc=13&abxe=1&lmt=1545144236&dt=1545144236352&dlt=1545144233445&idt=2870&frm=20&biw=1903&bih=966&oid=3&adxs=588%2C1369&adys=52%2C162&adks=4076218811%2C807308613&ucis=1%7C2&ifi=1&u_tz=60&u_his=3&u_h=1080&u_w=1920&u_ah=1040&u_aw=1920&u_cd=24&u_sd=1&flash=0&url=https%3A%2F%2Fwww.roblox.com%2Fgames%2F301549746%2FCounter-Blox&ref=https%3A%2F%2Fwww.google.com%2F&dssz=74&icsg=2201841827840&std=23&vis=2&scr_x=0&scr_y=0&psz=728x107%7C160x615&msz=728x-1%7C160x-1&ga_vid=83380549.1542571807&ga_sid=1545144236&ga_hid=2017359506&ga_fc=true&fws=4%2C4 zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://securepubads.g.doubleclick.net/pcs/view?xai=AKAOjstGNUi6N_lZwfRTWSOHjwT__d-aiLiGOk8BR5kzatsOA-FXy_-tuzjWgMAuVulRrS4JOpsztbsJGNtdTgOrHgOyk5Vs80F3Ytsw_e_UKnL34QWmIZfxuxLxwZpcE1WXMwCMnvOxO6w8nYNqJu-7ejQZb3ph-5fvAM2hKdrUD-XMasjcdBf_Y8--DiV68gBvuQxZMH0bEnbRYe3ueJ6yzBorYrLRMHn0_mYjW1GyuKLuE3U0AAMBGrUYvFnx_SnvR5IF7ub9SzEBy3g5oQ&sai=AMfl-YS7qnaaYSmmvVNFRWZkgaGmjqfbWhgKDhQQEa_GRJi8RzD0rzk-FF_vfvgmpnWZvZhX-nVTTjg-PBf9DF0LUyWsFCGb3BXB4S6kM_GSew&sig=Cg0ArKJSzMMKTTxMoRCFEAE&adurl= zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[console.info(Powered by AMP HTML Version 1812051624460, https://www.roblox.com/games/301549746/Counter-Blox) https://cdn.ampproject.org/rtv/011812051624460/amp4ads-v0.js:549:49]', u'[console.info(Powered by AMP HTML Version 1812051624460, https://www.roblox.com/games/301549746/Counter-Blox) https://cdn.ampproject.org/rtv/011812051624460/amp4ads-v0.js:549:49]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://securepubads.g.doubleclick.net/pagead/adview?ai=CsO9DrgcZXMjHMsKtY5rMoKAJ7uXeuVSO4v29uQjAmsy23gkQASCX-qMnYOnExoXUGqAB8Zr9vQPIAQLgAgCoAwHIAwiqBNUBT9AusDgXMppb6qEZ8RmMs5_4hZ1Ob2tgNMlG9zpivx0fvVJ4fqjHJjyA9ZBHBlG7wWbuxe37exrDCTHBjKRjzI5clr1uVCY1yW46VFR8ZLi9-An8v1ki7D9Pyn3rzel4A9myLtB40gAqBDkgm4Dn5CsuZUbncPUKYAJkPP0zeAAvELIbViMhFWzwQbTT5U8mgH0gBSPh-iQHP9ojSPtZWLDBTwRFp0_v0QkmQE6l6Znwa5hXbZaxXHOIcjqAybtD6lPAjwUpLK7ahHVcF0CFV7ro5r2AwATs_IXB9AHgBAGSBQQIBBgBkgUECAUYBKAGAoAH9-SCQqgHjs4bqAfVyRuoB6gGqAfZyxuoB8_MG6gHpr4b2AcB8gcEEK6MA9IICQiA4ZBwEAEYAYAKA9gTCg&sigh=9AwdGiTPvgY&tpd=AGWhJmuu3nO4nnJuf2f0FqNCQdjfFdyt7ne3m6teQh5tmpfPzQ zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[console.error(Possibly unhandled rejection: {"data":{"errors":[{"code":0,"message":"TooManyRequests"}]},"status":429,"config":{"method":"POST","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"https://presence.roblox.com/v1/presence/users","data":{"userIds":[19026337,388574944,434871299,524492356,516953360,316328747,387674966,564341829,445610556,18904633,168050440,529935073,497461807,547114316,231069263,284469952,379201681,445563470,373697333,376862816,411192870,436090663,94137183,139796294,373761960,456292703,490819730,434260754,526606636,487212575,511557427,497544518,422954713,482496322,438381900,488728103,79790605,492522789,487004060,485212510,423089789,352634078,462279524,299846332,393639034,365225138]},"withCredentials":true,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json;charset=utf-8","X-CSRF-TOKEN":"ReYqILQL8AHi"}},"statusText":""}) https://js.rbxcdn.com/cbaa19d624645aa7982dd4b3d0bbca77.js.gzip:127:210]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://googleads.g.doubleclick.net/pagead/drt/si zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Error: "Bd przetwarzania XML: nie znaleziono gwnego elementu\nObszar: https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32_Protocol\nNumer wiersza: 1, kolumna 1:" {file: "https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32_Protocol" line: 1}]', u'[JavaScript Error: "Bd przetwarzania XML: nie znaleziono gwnego elementu\nObszar: https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32\nNumer wiersza: 1, kolumna 1:" {file: "https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32" line: 1}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.roblox.com - Roblox Player Launcher not working - <!-- @browser: Firefox 66.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0) Gecko/20100101 Firefox/66.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.roblox.com
**Browser / Version**: Firefox 66.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Roblox Player Launcher not working
**Steps to Reproduce**:
After clicking the play button and installing the launcher, it should automatically launch Roblox, but the launcher isn't working; on other browsers it works normally.
[](https://webcompat.com/uploads/2018/12/4a549053-48b8-4f5c-afb1-d199487f6ade.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20181217093726</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u"[console.log(\n _______ _________ _____ ______ _\n / _____ \\ |____ ____| / ___ \\ | ____ \\ | |\n / / \\_\\ | | / / \\ \\ | | \\ \\ | |\n | | | | / / \\ \\ | | | | | |\n \\ \\______ | | | | | | | |___/ / | |\n \\______ \\ | | | | | | | ____/ | |\n \\ \\ | | | | | | | | | |\n _ | | | | \\ \\ / / | | |_|\n \\ \\_____/ / | | \\ \\___/ / | | _\n \\_______/ |_| \\_____/ |_| |_|\n\n Keep your account safe! Do not send any information from\n here to anyone or paste any text here.\n\n If someone is asking you to copy or paste text here then\n you're giving someone access to your account, your gear,\n and your Robux.\n\n To learn more about keeping your account safe you can go to\n\n https://en.help.roblox.com/hc/en-us/articles/203313380-Account-Security-Theft-Keeping-your-Account-Safe-) https://js.rbxcdn.com/351896654d3d3fe0de05f95c1cd5f79f.js.gzip:61:1294]", u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 22}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod 
adresemhttps://securepubads.g.doubleclick.net/gampad/ads?gdfp_req=1&pvsid=2828271492004104&correlator=461811085301000&output=json_html&callback=googletag.impl.pubads.callbackProxy1&impl=fifs&adsid=AKTT1HW0vuBIwstMv1WIJhs-TfH4E5ejvmCVGNy3IeZ-rd7KC88lhxdeMg&pucrd=CgwIABABGAMgACgAOAESAhgHeAM&jar=2018-12-18-14&json_a=1&eid=21062454%2C21062844&vrg=285&guci=1.2.0.0.2.2.0.0&plat=1%3A1081352%2C2%3A1081352&sc=1&sfv=1-0-31&iu_parts=1015347%2CRoblox_GameDetail_Top_728x90%2CRoblox_GameDetail_Right_160x600&enc_prev_ius=%2F0%2F1%2C%2F0%2F2&prev_iu_szs=728x90%2C160x600&cust_params=Age%3D23%252C18AndOver%26A%3D23%252C18AndOver%26Genres%3DFPS%26Env%3DProduction%26PlaceID%3D301549746%26Gender%3DMale%26PLVU%3DFalse&cookie=ID%3Dd6170694a7f6b797%3AT%3D1542571848%3AS%3DALNI_MbKrIpPjUNMsPAZE_z5CntE9TEBcA&bc=13&abxe=1&lmt=1545144236&dt=1545144236352&dlt=1545144233445&idt=2870&frm=20&biw=1903&bih=966&oid=3&adxs=588%2C1369&adys=52%2C162&adks=4076218811%2C807308613&ucis=1%7C2&ifi=1&u_tz=60&u_his=3&u_h=1080&u_w=1920&u_ah=1040&u_aw=1920&u_cd=24&u_sd=1&flash=0&url=https%3A%2F%2Fwww.roblox.com%2Fgames%2F301549746%2FCounter-Blox&ref=https%3A%2F%2Fwww.google.com%2F&dssz=74&icsg=2201841827840&std=23&vis=2&scr_x=0&scr_y=0&psz=728x107%7C160x615&msz=728x-1%7C160x-1&ga_vid=83380549.1542571807&ga_sid=1545144236&ga_hid=2017359506&ga_fc=true&fws=4%2C4 zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://securepubads.g.doubleclick.net/pcs/view?xai=AKAOjstGNUi6N_lZwfRTWSOHjwT__d-aiLiGOk8BR5kzatsOA-FXy_-tuzjWgMAuVulRrS4JOpsztbsJGNtdTgOrHgOyk5Vs80F3Ytsw_e_UKnL34QWmIZfxuxLxwZpcE1WXMwCMnvOxO6w8nYNqJu-7ejQZb3ph-5fvAM2hKdrUD-XMasjcdBf_Y8--DiV68gBvuQxZMH0bEnbRYe3ueJ6yzBorYrLRMHn0_mYjW1GyuKLuE3U0AAMBGrUYvFnx_SnvR5IF7ub9SzEBy3g5oQ&sai=AMfl-YS7qnaaYSmmvVNFRWZkgaGmjqfbWhgKDhQQEa_GRJi8RzD0rzk-FF_vfvgmpnWZvZhX-nVTTjg-PBf9DF0LUyWsFCGb3BXB4S6kM_GSew&sig=Cg0ArKJSzMMKTTxMoRCFEAE&adurl= zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[console.info(Powered by AMP HTML Version 1812051624460, https://www.roblox.com/games/301549746/Counter-Blox) https://cdn.ampproject.org/rtv/011812051624460/amp4ads-v0.js:549:49]', u'[console.info(Powered by AMP HTML Version 1812051624460, https://www.roblox.com/games/301549746/Counter-Blox) https://cdn.ampproject.org/rtv/011812051624460/amp4ads-v0.js:549:49]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://securepubads.g.doubleclick.net/pagead/adview?ai=CsO9DrgcZXMjHMsKtY5rMoKAJ7uXeuVSO4v29uQjAmsy23gkQASCX-qMnYOnExoXUGqAB8Zr9vQPIAQLgAgCoAwHIAwiqBNUBT9AusDgXMppb6qEZ8RmMs5_4hZ1Ob2tgNMlG9zpivx0fvVJ4fqjHJjyA9ZBHBlG7wWbuxe37exrDCTHBjKRjzI5clr1uVCY1yW46VFR8ZLi9-An8v1ki7D9Pyn3rzel4A9myLtB40gAqBDkgm4Dn5CsuZUbncPUKYAJkPP0zeAAvELIbViMhFWzwQbTT5U8mgH0gBSPh-iQHP9ojSPtZWLDBTwRFp0_v0QkmQE6l6Znwa5hXbZaxXHOIcjqAybtD6lPAjwUpLK7ahHVcF0CFV7ro5r2AwATs_IXB9AHgBAGSBQQIBBgBkgUECAUYBKAGAoAH9-SCQqgHjs4bqAfVyRuoB6gGqAfZyxuoB8_MG6gHpr4b2AcB8gcEEK6MA9IICQiA4ZBwEAEYAYAKA9gTCg&sigh=9AwdGiTPvgY&tpd=AGWhJmuu3nO4nnJuf2f0FqNCQdjfFdyt7ne3m6teQh5tmpfPzQ zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." {file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://cdns.us1.gigya.com/gs/webSdk/Api.aspx?apiKey=3_OsvmtBbTg6S_EUbwTPtbbmoihFY5ON6v6hbVrTbuqpBs7SyF_LQaJwtwKJ60sY1p&version=latest#origin=https://www.roblox.com/games/301549746/Counter-Blox&hasGmid=false&gig_loggerConfig=%7B%22logLevel%22%3A0%2C%22clientMuteLevel%22%3A0%2C%22logTheme%22%3A1%7D" line: 20}]', u'[console.error(Possibly unhandled rejection: {"data":{"errors":[{"code":0,"message":"TooManyRequests"}]},"status":429,"config":{"method":"POST","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"https://presence.roblox.com/v1/presence/users","data":{"userIds":[19026337,388574944,434871299,524492356,516953360,316328747,387674966,564341829,445610556,18904633,168050440,529935073,497461807,547114316,231069263,284469952,379201681,445563470,373697333,376862816,411192870,436090663,94137183,139796294,373761960,456292703,490819730,434260754,526606636,487212575,511557427,497544518,422954713,482496322,438381900,488728103,79790605,492522789,487004060,485212510,423089789,352634078,462279524,299846332,393639034,365225138]},"withCredentials":true,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json;charset=utf-8","X-CSRF-TOKEN":"ReYqILQL8AHi"}},"statusText":""}) https://js.rbxcdn.com/cbaa19d624645aa7982dd4b3d0bbca77.js.gzip:127:210]', u'[JavaScript Warning: "danie dostpu do ciasteczek lub danych stron pod adresemhttps://googleads.g.doubleclick.net/pagead/drt/si zostao zablokowane, poniewa pochodzio od elementu ledzcego, ablokowanie treci jest wczone." 
{file: "https://www.roblox.com/games/301549746/Counter-Blox" line: 0}]', u'[JavaScript Error: "Bd przetwarzania XML: nie znaleziono gwnego elementu\nObszar: https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32_Protocol\nNumer wiersza: 1, kolumna 1:" {file: "https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32_Protocol" line: 1}]', u'[JavaScript Error: "Bd przetwarzania XML: nie znaleziono gwnego elementu\nObszar: https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32\nNumer wiersza: 1, kolumna 1:" {file: "https://assetgame.roblox.com/game/report-event?name=GameLaunchAttempt_Win32" line: 1}]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
roblox player launcher not working url browser version firefox operating system windows tested another browser yes problem type site is not usable description roblox player launcher not working steps to reproduce after clicking play button and installing launcher it shoud automaticaly launch roblox but launcher isnt working on other browsers it work normally browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen false mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages u u u u u u u u u u status config method post transformrequest transformresponse jsonpcallbackparam callback url withcredentials true headers accept application json text plain content type application json charset utf x csrf token statustext u u u from with ❤️
| 0
|
16,705
| 21,843,253,366
|
IssuesEvent
|
2022-05-18 00:15:00
|
lbryio/scribe
|
https://api.github.com/repos/lbryio/scribe
|
closed
|
Precomputed address history statuses take up a lot of space and slow down initial sync
|
area: block processor type: feature request
|
- precomputed statuses should be an optional feature, off by default
- address statuses add ~20gb to the database, which might not be ok for all users
- precomputed statuses should not be calculated until initial sync has finished. After initial sync they should be initialized.
|
1.0
|
Precomputed address history statuses take up a lot of space and slow down initial sync - - precomputed statuses should be an optional feature, off by default
- address statuses add ~20gb to the database, which might not be ok for all users
- precomputed statuses should not be calculated until initial sync has finished. After initial sync they should be initialized.
|
process
|
precomputed address history statuses take up a lot of space and slow down initial sync precomputed statuses should be an optional feature off by default address statuses add to the database which might not be ok for all users precomputed statuses should not be calculated until initial sync has finished after initial sync they should be initialized
| 1
|
4,635
| 7,480,625,671
|
IssuesEvent
|
2018-04-04 18:00:07
|
awslabs/serverless-application-model
|
https://api.github.com/repos/awslabs/serverless-application-model
|
opened
|
Setup semantic release
|
area/release-process type/feature
|
<!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
Set up semantic-release https://github.com/semantic-release/semantic-release/blob/caribou/docs/support/FAQ.md#can-i-use-semantic-release-to-publish-non-javascript-packages to facilitate automated releases based on semantic versioning and commit messages.
This requires us to follow conventional commits. We should incorporate something like https://github.com/marionebl/commitlint.
We should also update `DEVELOPMENT_GUIDE.rst` once this is setup.
|
1.0
|
Setup semantic release - <!--
Before reporting a new issue, make sure we don't have any duplicates already open or closed by
searching the issues list. If there is a duplicate, re-open or add a comment to the
existing issue instead of creating a new one. If you are reporting a bug,
make sure to include relevant information asked below to help with debugging.
## GENERAL HELP QUESTIONS ##
Github Issues is for bug reports and feature requests. If you have general support
questions, the following locations are a good place:
- Post a question in StackOverflow with "aws-sam" tag
-->
**Description:**
Set up semantic-release https://github.com/semantic-release/semantic-release/blob/caribou/docs/support/FAQ.md#can-i-use-semantic-release-to-publish-non-javascript-packages to facilitate automated releases based on semantic versioning and commit messages.
This requires us to follow conventional commits. We should incorporate something like https://github.com/marionebl/commitlint.
We should also update `DEVELOPMENT_GUIDE.rst` once this is setup.
|
process
|
setup semantic release before reporting a new issue make sure we don t have any duplicates already open or closed by searching the issues list if there is a duplicate re open or add a comment to the existing issue instead of creating a new one if you are reporting a bug make sure to include relevant information asked below to help with debugging general help questions github issues is for bug reports and feature requests if you have general support questions the following locations are a good place post a question in stackoverflow with aws sam tag description description set up semantic release to facilitate automated releases based on semantic versioning and commit messages this requires us to follow conventional commits we should incorporate something like we should also update development guide rst once this is setup
| 1
|
5,049
| 7,859,872,663
|
IssuesEvent
|
2018-06-21 18:01:12
|
leg2015/Aagos
|
https://api.github.com/repos/leg2015/Aagos
|
closed
|
Create Python Script to Clean Data
|
data processing
|
- [ ] Python script to clean data
* should loop through each directory for each mutation combo and add to one large csv
* Want two scripts:
1. Grabs data from:
- `representative.csv`
- `gene_stats.csv`
- `fitness.csv`
2. Grabs data from:
- `representative.csv`
- `gene_stats.csv`
|
1.0
|
Create Python Script to Clean Data - - [ ] Python script to clean data
* should loop through each directory for each mutation combo and add to one large csv
* Want two scripts:
1. Grabs data from:
- `representative.csv`
- `gene_stats.csv`
- `fitness.csv`
2. Grabs data from:
- `representative.csv`
- `gene_stats.csv`
|
process
|
create python script to clean data python script to clean data should loop through each directory for each mutation combo and add to one large csv want two scripts grabs data from representative csv gene stats csv fitness csv grabs data from representative csv gene stats csv
| 1
|
324,382
| 23,996,391,233
|
IssuesEvent
|
2022-09-14 07:55:04
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
closed
|
Guideline 2022.02.xx - Plugins migration to use dynamic import
|
Priority: High Accepted Documentation C169-Rennes-Métropole-2021-GeOrchestra3
|
## Description
Migration guidelines for 2022.02.xx should have a chapter related to the migration of plugins in the project's plugins.js file.
*Documentation section involved*
- [ ] User Guide
- [x] Developer Guide
## Other useful information
|
1.0
|
Guideline 2022.02.xx - Plugins migration to use dynamic import - ## Description
Migration guidelines for 2022.02.xx should have a chapter related to the migration of plugins in the project's plugins.js file.
*Documentation section involved*
- [ ] User Guide
- [x] Developer Guide
## Other useful information
|
non_process
|
guideline xx plugins migration to use dynamic import description migration guidelines for xx should have a chapter related to migration of plugins in project s plugins js file documentation section involved user guide developer guide other useful information
| 0
|
18,789
| 24,693,412,643
|
IssuesEvent
|
2022-10-19 10:12:37
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
GO:0033483 gas homeostasis
|
obsoletion parent relationship query cellular processes
|
Hello,
For the BP refactoring, we will split cellular and organism-level processes, as described in the [2022 bp refactoring plan - top level](https://docs.google.com/document/d/1k8yuWTMSkYVTnt2hRbrPVH8Ud3gDwC5IrGu4PqHkKds/edit).
This ticket describes changes to the GO:0033483 gas homeostasis branch:
- [x] obsolete GO:0033483 gas homeostasis: no annotations, unnecessary grouping term
- [x] GO:0033484 nitric oxide homeostasis -> move under GO:0055082 cellular chemical homeostasis
- [x] GO:0032364 oxygen homeostasis -> move under GO:0055082 cellular chemical homeostasis
|
1.0
|
GO:0033483 gas homeostasis - Hello,
For the BP refactoring, we will split cellular and organism-level processes, as described in the [2022 bp refactoring plan - top level](https://docs.google.com/document/d/1k8yuWTMSkYVTnt2hRbrPVH8Ud3gDwC5IrGu4PqHkKds/edit).
This ticket describes changes to the GO:0033483 gas homeostasis branch:
- [x] obsolete GO:0033483 gas homeostasis: no annotations, unnecessary grouping term
- [x] GO:0033484 nitric oxide homeostasis -> move under GO:0055082 cellular chemical homeostasis
- [x] GO:0032364 oxygen homeostasis -> move under GO:0055082 cellular chemical homeostasis
|
process
|
go gas homeostasis hello for the bp refactoring we will split cellular and organism level processes as described in the this ticket describes changes to the go gas homeostasis branch obsolete go gas homeostasis no annotations unnecessary grouping term go nitric oxide homeostasis move under go cellular chemical homeostasis go oxygen homeostasis move under go cellular chemical homeostasis
| 1
|
22,179
| 30,729,262,256
|
IssuesEvent
|
2023-07-27 22:59:01
|
hashgraph/hedera-json-rpc-relay
|
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
|
closed
|
Move the JSON RPC Relay grafana dashboard configs into the JSON RPC Relay repository
|
enhancement P3 process
|
### Problem
As a developer adding metrics to the JSON RPC Relay, I would like to have a fast iterative development experience. All development updates should be done in one place or repository; this should be the JSON RPC Relay. The grafana dashboard is part of the product and should be maintained with the product.
### Solution
- This can be broken down into tasks or steps
- First move or copy the grafana dashboard to the JSON RPC Relay repository. We will now have duplication of the dashboard in both the local node and rpc node repositories. Also add prometheus configuration to the JSON RPC Relay.
- Integrate the local node repository with the JSON RPC Relay repository. Research (spike) needed here. Possible solution: https://stackoverflow.com/questions/15844542/git-symlink-reference-to-a-file-in-an-external-repository
- Define a deployment process for the dashboard to the production grafana server. Perhaps this means the JSON RPC Relay node has its own dedicated grafana server.
### Alternatives
_No response_
|
1.0
|
Move the JSON RPC Relay grafana dashboard configs into the JSON RPC Relay repository - ### Problem
As a developer adding metrics to the JSON RPC Relay, I would like to have a fast iterative development experience. All development updates should be done in one place or repository; this should be the JSON RPC Relay. The grafana dashboard is part of the product and should be maintained with the product.
### Solution
- This can be broken down into tasks or steps
- First move or copy the grafana dashboard to the JSON RPC Relay repository. We will now have duplication of the dashboard in both the local node and rpc node repositories. Also add prometheus configuration to the JSON RPC Relay.
- Integrate the local node repository with the JSON RPC Relay repository. Research (spike) needed here. Possible solution: https://stackoverflow.com/questions/15844542/git-symlink-reference-to-a-file-in-an-external-repository
- Define a deployment process for the dashboard to the production grafana server. Perhaps this means the JSON RPC Relay node has its own dedicated grafana server.
### Alternatives
_No response_
|
process
|
move the json rpc relay grafana dashboard configs into the json rpc relay repository problem as a developers adding metrics to the json rpc relay i would like to have a fast iterative development experience all development updates should be done in one place or repository this should be the json rpc relay the grafana dashboard is part of the product and should be maintained with the product solution this can broken down into tasks or steps first move or copy the grafana dashboard to the json rpc relay repository we will now have duplication of the dashboard in both the local node and rpc node repositories also add prometheus configuration to the json rpc relay integrate the local node repository with the json rpc relay repository research spike needed here possible solution define a deployment process for the dashboard to the production grafana server perhaps this means the json rpc relay node has its own dedicated grafana server alternatives no response
| 1
|
17,101
| 22,622,401,041
|
IssuesEvent
|
2022-06-30 07:44:00
|
didi/mpx
|
https://api.github.com/repos/didi/mpx
|
closed
|
[Bug report] Using mpx.reportEvent throws an error
|
processing
|
**Problem description**
Please describe the bug you encountered in concise language, including at least the following parts; if you provide screenshots, please make them as complete as possible:
1. Conditions that trigger the problem
After adding `mpx.reportEvent` to the code, an error is thrown both in the devtools and on a real device.
``` ts
import mpx, { createPage } from '@mpxjs/core'
createPage({
  onLoad () {
    mpx.reportEvent('test_id', {
      a: 1,
      b: false
    })
  }
})
```
2. Expected behavior
No error.
3. Actual behavior
The error is shown below.

**Environment information**
System type (Mac or Windows)
macOS Monterey 2.3.1
Mpx dependency versions (the specific versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy; these can be checked via package-lock.json or by looking in node_modules)
"@mpxjs/api-proxy": "^2.7.1",
"@mpxjs/core": "^2.7.18",
"@mpxjs/miniprogram-simulate": "^1.4.8",
"@mpxjs/mpx-jest": "0.0.9",
"@mpxjs/webpack-plugin": "^2.7.18",
Mini program devtools information (mini program platform, devtools version, base library version)
Mini program base library: 2.13.2
Devtools: 1.05.220425
Node: 16.14.0
npm: 8.3.1
**Demo**
[duo-build.zip](https://github.com/didi/mpx/files/8986218/duo-build.zip)
|
1.0
|
[Bug report] Using mpx.reportEvent throws an error - **Problem description**
Please describe the bug you encountered in concise language, including at least the following parts; if you provide screenshots, please make them as complete as possible:
1. Conditions that trigger the problem
After adding `mpx.reportEvent` to the code, an error is thrown both in the devtools and on a real device.
``` ts
import mpx, { createPage } from '@mpxjs/core'
createPage({
  onLoad () {
    mpx.reportEvent('test_id', {
      a: 1,
      b: false
    })
  }
})
```
2. Expected behavior
No error.
3. Actual behavior
The error is shown below.

**Environment information**
System type (Mac or Windows)
macOS Monterey 2.3.1
Mpx dependency versions (the specific versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy; these can be checked via package-lock.json or by looking in node_modules)
"@mpxjs/api-proxy": "^2.7.1",
"@mpxjs/core": "^2.7.18",
"@mpxjs/miniprogram-simulate": "^1.4.8",
"@mpxjs/mpx-jest": "0.0.9",
"@mpxjs/webpack-plugin": "^2.7.18",
Mini program devtools information (mini program platform, devtools version, base library version)
Mini program base library: 2.13.2
Devtools: 1.05.220425
Node: 16.14.0
npm: 8.3.1
**Demo**
[duo-build.zip](https://github.com/didi/mpx/files/8986218/duo-build.zip)
|
process
|
使用mpx reportevent报错 问题描述 请用简洁的语言描述你遇到的bug,至少包括以下部分,如提供截图请尽量完整: 问题触发的条件 在代码中加入 mpx reportevengt ,在开发工具和真机上都会报错。 ts import mpx createpage from mpxjs core createpage onload mpx reportevent test id a b false 期望的表现 不报错 实际的表现 报错如下。 环境信息描述 环境信息描述 系统类型 mac或者windows macos montery mpx依赖版本 mpxjs core、 mpxjs webpack plugin和 mpxjs api proxy的具体版本,可以通过package lock json或者实际去node modules当中查看 mpxjs api proxy mpxjs core mpxjs miniprogram simulate mpxjs mpx jest mpxjs webpack plugin 小程序开发者工具信息 小程序平台、开发者工具版本、基础库版本) 小程序基础库: 开发工具: node npm demo
| 1
|
13,374
| 15,835,712,103
|
IssuesEvent
|
2021-04-06 18:22:04
|
EKGF/ekg-mm
|
https://api.github.com/repos/EKGF/ekg-mm
|
closed
|
Remove 'members only' sections from EKG/MM doc
|
ekg-mm-process
|
Since this repo is now public, it doesn't make sense to publish a "members only version" anymore.
|
1.0
|
Remove 'members only' sections from EKG/MM doc - Since this repo is now public, it doesn't make sense to publish a "members only version" anymore.
|
process
|
remove members only sections from ekg mm doc since this repo is now public it doesn t make sense anymore to publish a members only version anymore
| 1
|
19,358
| 25,491,345,658
|
IssuesEvent
|
2022-11-27 04:50:19
|
hsmusic/hsmusic-wiki
|
https://api.github.com/repos/hsmusic/hsmusic-wiki
|
closed
|
Emoji and some symbols aren't being filtered out when normalizing names
|
type: bug (user-facing) scope: data processing
|
See "Tracks - by Name" listing. None of these symbols, emoji, or punctuation should be retained for sorting purposes (e.g. `♉ - Breathtak1ng` should be under B's, `♬ Disc 3 ♬` should be under D's, `!!~~~ the will to figt ~~~!!` should be under W's).

|
1.0
|
Emoji and some symbols aren't being filtered out when normalizing names - See "Tracks - by Name" listing. None of these symbols, emoji, or punctuation should be retained for sorting purposes (e.g. `♉ - Breathtak1ng` should be under B's, `♬ Disc 3 ♬` should be under D's, `!!~~~ the will to figt ~~~!!` should be under W's).

|
process
|
emoji and some symbols aren t being filtered out when normalizing names see tracks by name listing none of these symbols emoji or punctuation should be retained for sorting purposes e g ♉ should be under b s ♬ disc ♬ should be under d s the will to figt should be under w s
| 1
|
4,725
| 7,570,087,786
|
IssuesEvent
|
2018-04-23 07:51:01
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Rename possibly offensive terminology in child_process
|
child_process errors
|
Currently, we use the terminology "Child died" when child processed get terminated.
We discussed the terminology in https://github.com/nodejs/node/pull/14293#discussion_r127606290 , copying my comment from there:
> I'm usually the last one to worry about that sort of stuff - but is it possible this terminology is offensive? If we swapped out other trigger references I think it would be nice to have a code base without dead children.
@jasnell suggested we change the terminology to "Child exited". I agree it's better terminology, in general I think we should avoid irrelevant stuff that might cause people to feel uneasy (like death related terminology in code) where we can and I think it could help keep the project friendly and inclusive.
-----
- [x] Estimate the impact of changing the error message. (Semver Major?)
- [ ] Check for any objections from collaborators to this change.
- [ ] Establish consensus that these changes are something we want to do (vs, collaborators feel they are churn).
|
1.0
|
Rename possibly offensive terminology in child_process - Currently, we use the terminology "Child died" when child processed get terminated.
We discussed the terminology in https://github.com/nodejs/node/pull/14293#discussion_r127606290 , copying my comment from there:
> I'm usually the last one to worry about that sort of stuff - but is it possible this terminology is offensive? If we swapped out other trigger references I think it would be nice to have a code base without dead children.
@jasnell suggested we change the terminology to "Child exited". I agree it's better terminology, in general I think we should avoid irrelevant stuff that might cause people to feel uneasy (like death related terminology in code) where we can and I think it could help keep the project friendly and inclusive.
-----
- [x] Estimate the impact of changing the error message. (Semver Major?)
- [ ] Check for any objections from collaborators to this change.
- [ ] Establish consensus that these changes are something we want to do (vs, collaborators feel they are churn).
|
process
|
rename possibly offensive terminology in child process currently we use the terminology child died when child processed get terminated we discussed the terminology in copying my comment from there i m usually the last one to worry about that sort of stuff but is it possible this terminology is offensive if we swapped out other trigger references i think it would be nice to have a code base without dead children jasnell suggested we change the terminology to child exited i agree it s better terminology in general i think we should avoid irrelevant stuff that might cause people to feel uneasy like death related terminology in code where we can and i think it could help keep the project friendly and inclusive estimate the impact of changing the error message semver major check for any objections from collaborators to this change establish consensus that these changes are something we want to do vs collaborators feel they are churn
| 1
|
136
| 2,574,321,021
|
IssuesEvent
|
2015-02-11 16:17:19
|
mvdm/vandermeerlab
|
https://api.github.com/repos/mvdm/vandermeerlab
|
opened
|
Make conversion to cm in LoadPos() more robust
|
data preprocessing neuralynx
|
LoadPos() has a cfg option to convert pixels to cm, but this is currently done in a way that depends on the actual tracking data:
X_pixelsize = max(pos_tsd.data(1,:)) - min(pos_tsd.data(1,:));
Y_pixelsize = max(pos_tsd.data(2,:)) - min(pos_tsd.data(2,:));
This is problematic because if the rat leans over the track more or less, or some errant point is picked up off the track, the result would be different even though the track size obviously didn't change!
Ideas, @aacarey? One simple way is to get the conversion factor manually once, store it in the expkeys, and give it to LoadPos as a config field.
|
1.0
|
Make conversion to cm in LoadPos() more robust - LoadPos() has a cfg option to convert pixels to cm, but this is currently done in a way that depends on the actual tracking data:
X_pixelsize = max(pos_tsd.data(1,:)) - min(pos_tsd.data(1,:));
Y_pixelsize = max(pos_tsd.data(2,:)) - min(pos_tsd.data(2,:));
This is problematic because if the rat leans over the track more or less, or some errant point is picked up off the track, the result would be different even though the track size obviously didn't change!
Ideas, @aacarey? One simple way is to get the conversion factor manually once, store it in the expkeys, and give it to LoadPos as a config field.
|
process
|
make conversion to cm in loadpos more robust loadpos has a cfg option to convert pixels to cm but this is currently done in a way that depends on the actual tracking data x pixelsize max pos tsd data min pos tsd data y pixelsize max pos tsd data min pos tsd data this is problematic because then if the rat leans over the track more or less or there is some errant point being picked up off the track the result would be different even though obviously the track size didn t change ideas aacarey one simple way is to get the conversion factor manually once store it in the expkeys and give it to loadpos as a config field
| 1
|
675,339
| 23,090,629,525
|
IssuesEvent
|
2022-07-26 14:56:50
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
facebook.com - design is broken
|
status-needsinfo browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
**URL**: https://facebook.com
**Browser / Version**: Firefox 102.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
I have tested this problem on 3 different Firefox browsers. When I want to scroll down the friends list on the left side in Messenger, I cannot; it is not possible no matter what I do.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
facebook.com - design is broken - <!-- @browser: Firefox 102.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0 -->
<!-- @reported_with: unknown -->
**URL**: https://facebook.com
**Browser / Version**: Firefox 102.0
**Operating System**: Windows 10
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items not fully visible
**Steps to Reproduce**:
I have tested this problem on 3 different Firefox browsers. When I want to scroll down the friends list on the left side in Messenger, I cannot; it is not possible no matter what I do.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
facebook com design is broken url browser version firefox operating system windows tested another browser no problem type design is broken description items not fully visible steps to reproduce i have tested this problem on differend firefox browsers when i want to scroll down frineds on left side in messanger i can not it is not possible no matter what i do browser configuration none from with ❤️
| 0
|
153,410
| 19,706,291,762
|
IssuesEvent
|
2022-01-12 22:29:10
|
vascomfnunes/vasco.dev
|
https://api.github.com/repos/vascomfnunes/vasco.dev
|
closed
|
CVE-2021-3749 (High) detected in axios-0.21.1.tgz
|
security vulnerability
|
## CVE-2021-3749 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.21.1.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.1.tgz">https://registry.npmjs.org/axios/-/axios-0.21.1.tgz</a></p>
<p>Path to dependency file: vasco.dev/package.json</p>
<p>Path to vulnerable library: vasco.dev/node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.14.tgz (Root Library)
- localtunnel-2.0.1.tgz
- :x: **axios-0.21.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vascomfnunes/vasco.dev/commit/fea50522e35271097f9438ccc60a6d8f1c81c3cc">fea50522e35271097f9438ccc60a6d8f1c81c3cc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
axios is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/axios/axios/releases/tag/v0.21.2">https://github.com/axios/axios/releases/tag/v0.21.2</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: axios - 0.21.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-3749 (High) detected in axios-0.21.1.tgz - ## CVE-2021-3749 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axios-0.21.1.tgz</b></p></summary>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.1.tgz">https://registry.npmjs.org/axios/-/axios-0.21.1.tgz</a></p>
<p>Path to dependency file: vasco.dev/package.json</p>
<p>Path to vulnerable library: vasco.dev/node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.14.tgz (Root Library)
- localtunnel-2.0.1.tgz
- :x: **axios-0.21.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/vascomfnunes/vasco.dev/commit/fea50522e35271097f9438ccc60a6d8f1c81c3cc">fea50522e35271097f9438ccc60a6d8f1c81c3cc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
axios is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/axios/axios/releases/tag/v0.21.2">https://github.com/axios/axios/releases/tag/v0.21.2</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: axios - 0.21.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in axios tgz cve high severity vulnerability vulnerable library axios tgz promise based http client for the browser and node js library home page a href path to dependency file vasco dev package json path to vulnerable library vasco dev node modules axios package json dependency hierarchy browser sync tgz root library localtunnel tgz x axios tgz vulnerable library found in head commit a href found in base branch master vulnerability details axios is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution axios step up your open source security game with whitesource
| 0
|
214,074
| 24,035,081,832
|
IssuesEvent
|
2022-09-15 18:22:30
|
jtimberlake/react-server
|
https://api.github.com/repos/jtimberlake/react-server
|
opened
|
CVE-2022-2900 (High) detected in parse-url-5.0.1.tgz
|
security vulnerability
|
## CVE-2022-2900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.1.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-3.13.2.tgz (Root Library)
- version-3.13.2.tgz
- github-client-3.13.1.tgz
- git-url-parse-11.1.2.tgz
- git-up-4.0.1.tgz
- :x: **parse-url-5.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/react-server/commit/fc4f63cd9d7cfd34b5d6322a49f9b670bd83cb27">fc4f63cd9d7cfd34b5d6322a49f9b670bd83cb27</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Server-Side Request Forgery (SSRF) in GitHub repository ionicabizau/parse-url prior to 8.1.0.
<p>Publish Date: 2022-09-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2900>CVE-2022-2900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-14</p>
<p>Fix Resolution (parse-url): 8.0.0</p>
<p>Direct dependency fix Resolution (lerna): 5.1.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2022-2900 (High) detected in parse-url-5.0.1.tgz - ## CVE-2022-2900 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-url-5.0.1.tgz</b></p></summary>
<p>An advanced url parser supporting git urls too.</p>
<p>Library home page: <a href="https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz">https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/parse-url/package.json</p>
<p>
Dependency Hierarchy:
- lerna-3.13.2.tgz (Root Library)
- version-3.13.2.tgz
- github-client-3.13.1.tgz
- git-url-parse-11.1.2.tgz
- git-up-4.0.1.tgz
- :x: **parse-url-5.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jtimberlake/react-server/commit/fc4f63cd9d7cfd34b5d6322a49f9b670bd83cb27">fc4f63cd9d7cfd34b5d6322a49f9b670bd83cb27</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Server-Side Request Forgery (SSRF) in GitHub repository ionicabizau/parse-url prior to 8.1.0.
<p>Publish Date: 2022-09-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2900>CVE-2022-2900</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-09-14</p>
<p>Fix Resolution (parse-url): 8.0.0</p>
<p>Direct dependency fix Resolution (lerna): 5.1.8</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_process
|
cve high detected in parse url tgz cve high severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url package json dependency hierarchy lerna tgz root library version tgz github client tgz git url parse tgz git up tgz x parse url tgz vulnerable library found in head commit a href found in base branch master vulnerability details server side request forgery ssrf in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution parse url direct dependency fix resolution lerna rescue worker helmet automatic remediation is available for this issue
| 0
|
439,997
| 30,725,420,587
|
IssuesEvent
|
2023-07-27 19:13:59
|
weaveworks/weave-gitops
|
https://api.github.com/repos/weaveworks/weave-gitops
|
opened
|
How to Configure SSO in Helmchart WeaveGitops (Discussion #4103)
|
documentation
|
<!--
Please fill out as much information as possible. The more info we have, the
faster we can schedule the work :)
-->
**What issue with the docs have you found?**
- [ x ] Missing information
- [ ] Incorrect information
- [ ] Something else
**Describe what you are trying to do**
Hi.
I need to configure SSO with Azure, but I cannot find documentation that explains it step by step.
Could anyone help?
**Which docs version are you using?**
<!--
latest
-->
**Which pages are affected?**
<!--
A bulleted list of links to the page(s) you have been following. (or have tried to
follow.)
-->
**Detail the issues you found and the improvements that you would like to see**
<!--
A bulleted list of all the pages you have been following. (or have tried to
follow.)
-->
**Would you be able or interested to contribute this work to the docs?**
<!--
y/n
-->
|
1.0
|
How to Configure SSO in Helmchart WeaveGitops (Discussion #4103) - <!--
Please fill out as much information as possible. The more info we have, the
faster we can schedule the work :)
-->
**What issue with the docs have you found?**
- [ x ] Missing information
- [ ] Incorrect information
- [ ] Something else
**Describe what you are trying to do**
Hi.
I need to configure SSO with Azure, but I cannot find documentation that explains it step by step.
Could anyone help?
**Which docs version are you using?**
<!--
latest
-->
**Which pages are affected?**
<!--
A bulleted list of links to the page(s) you have been following. (or have tried to
follow.)
-->
**Detail the issues you found and the improvements that you would like to see**
<!--
A bulleted list of all the pages you have been following. (or have tried to
follow.)
-->
**Would you be able or interested to contribute this work to the docs?**
<!--
y/n
-->
|
non_process
|
how to configure sso in helmchart weavegitops discussion please fill out as much information as possible the more info we have the faster we can schedule the work what issue with the docs have you found missing information incorrect information something else describe what you are trying to do hi i am need configure sso with azure but not find the documentation that reference step by step any friend to help which docs version are you using latest which pages are affected a bulleted list of links to the page s you have been following or have tried to follow detail the issues you found and the improvements that you would like to see a bulleted list of all the pages you have been following or have tried to follow would you be able or interested to contribute this work to the docs y n
| 0
|
39,257
| 10,320,335,399
|
IssuesEvent
|
2019-08-30 20:11:53
|
bitcoin-s/bitcoin-s
|
https://api.github.com/repos/bitcoin-s/bitcoin-s
|
closed
|
eclairRpcTest/test does not download bitcoind binaries
|
bug build
|
Introduced in #710
If i run
`eclairRpcTest/test`
I get an error that looks like this for all of my eclair unit tests
```
[info] java.nio.file.NoSuchFileException: /home/chris/dev/bitcoin-s-core/binaries/bitcoind
[info] at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
[info] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
[info] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
[info] at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
[info] at java.nio.file.Files.newDirectoryStream(Files.java:457)
[info] at java.nio.file.Files.list(Files.java:3451)
```
If you run a `clean` and then run _only_ `eclairRpcTest/test` (so that bitcoind's tests aren't being run) you should be able to reproduce
|
1.0
|
eclairRpcTest/test does not download bitcoind binaries - Introduced in #710
If i run
`eclairRpcTest/test`
I get an error that looks like this for all of my eclair unit tests
```
[info] java.nio.file.NoSuchFileException: /home/chris/dev/bitcoin-s-core/binaries/bitcoind
[info] at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
[info] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
[info] at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
[info] at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
[info] at java.nio.file.Files.newDirectoryStream(Files.java:457)
[info] at java.nio.file.Files.list(Files.java:3451)
```
If you run a `clean` and then run _only_ `eclairRpcTest/test` (so that bitcoind's tests aren't being run) you should be able to reproduce
|
non_process
|
eclairrpctest test does not download bitcoind binaries introduced in if i run eclairrpctest test i get an error that looks like this for all of my eclair unit tests java nio file nosuchfileexception home chris dev bitcoin s core binaries bitcoind at sun nio fs unixexception translatetoioexception unixexception java at sun nio fs unixexception rethrowasioexception unixexception java at sun nio fs unixexception rethrowasioexception unixexception java at sun nio fs unixfilesystemprovider newdirectorystream unixfilesystemprovider java at java nio file files newdirectorystream files java at java nio file files list files java if you run a clean and then run only eclairrpctest test so that bitcoind s tests aren t being run you should be able to reproduce
| 0
|
130,418
| 10,608,406,838
|
IssuesEvent
|
2019-10-11 07:27:32
|
viszerale-therapie/simple-courses
|
https://api.github.com/repos/viszerale-therapie/simple-courses
|
closed
|
Fwd: urgent - new course is online with errors
|
bug prio1 ready for test
|
*Sent by @Andreas-Schoenefeldt. Created by [fire](https://fire.fundersclub.com/).*
---
According to the error log: An exception occurred while executing 'INSERT INTO termine (language, taxJurisdiction, datum_von, datum_bis, deaktiviert, assistentID, teacherID, courseTypeID, locationID, max_teilnehmer, termin_de, preis, lastUpdated) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' with params ["de", null, "2019-11-15 00:00:00", "2019-11-15 00:00:00", 0, null, null, 16, null, "8", null, "[{\"v\":280,\"c\":\"EUR\"}]", "2019-06-13 14:08:43"]: SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'teacherID' cannot be null"
This is a concrete bug.
> Dear Andreas,
>
> I have now entered all the data in the backend and wanted to create the actual course dates.
>
> **There are two problems with that: **
>
> 1) I cannot save the course. When I enter the course under Courses for the planned date and then click Save, the spinner keeps spinning forever and nothing else happens. Maybe the long name as course instructor confuses the system?
|
1.0
|
Fwd: urgent - new course is online with errors - *Sent by @Andreas-Schoenefeldt. Created by [fire](https://fire.fundersclub.com/).*
---
According to the error log: An exception occurred while executing 'INSERT INTO termine (language, taxJurisdiction, datum_von, datum_bis, deaktiviert, assistentID, teacherID, courseTypeID, locationID, max_teilnehmer, termin_de, preis, lastUpdated) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)' with params ["de", null, "2019-11-15 00:00:00", "2019-11-15 00:00:00", 0, null, null, 16, null, "8", null, "[{\"v\":280,\"c\":\"EUR\"}]", "2019-06-13 14:08:43"]: SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'teacherID' cannot be null"
This is a concrete bug.
> Dear Andreas,
>
> I have now entered all the data in the backend and wanted to create the actual course dates.
>
> **There are two problems with that: **
>
> 1) I cannot save the course. When I enter the course under Courses for the planned date and then click Save, the spinner keeps spinning forever and nothing else happens. Maybe the long name as course instructor confuses the system?
|
non_process
|
fwd bitte dringend neuer kurs fehlerhaft online sent by andreas schoenefeldt created by laut error log an exception occurred while executing insert into termine language taxjurisdiction datum von datum bis deaktiviert assistentid teacherid coursetypeid locationid max teilnehmer termin de preis lastupdated values with params sqlstate integrity constraint violation column teacherid cannot be null” ist ein handfester bug lieber andreas ich hab jetzt alle daten im backend angelegt und wollte jetzt die eigentlichen kurstermine anlegen dabei gibt’s zwei probleme ich kann den kurs nicht speichern wenn ich unter courses den kurs eingebe zum geplanten datum und dann auf save klicke dann dreht sich das rad ewig und es passiert nichts weiter vielleicht verwirrt der lange name als kursleiter das system
| 0
|
14,375
| 17,398,345,317
|
IssuesEvent
|
2021-08-02 16:02:41
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
closed
|
TransformersSummarizer crashes if given long input
|
Contributions wanted! topic:models topic:pipeline topic:preprocessing type:feature
|
If the TransformersSummarizer is given an input that is longer than the model's max_seq_len, an error will be thrown. Instead, I think a warning message should be printed to console and the input text should be truncated so that the Node can still run.
|
1.0
|
TransformersSummarizer crashes if given long input - If the TransformersSummarizer is given an input that is longer than the model's max_seq_len, an error will be thrown. Instead, I think a warning message should be printed to console and the input text should be truncated so that the Node can still run.
|
process
|
transformerssummarizer crashes if given long input if the transformerssummarizer is given an input that is longer than the model s max seq len an error will be thrown instead i think a warning message should be printed to console and the input text should be truncated so that the node can still run
| 1
|
356,805
| 10,597,707,811
|
IssuesEvent
|
2019-10-10 01:47:42
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Timeline label shows repeated months
|
Misc/Timezones Priority:P1 Type:Bug Visualization/
|
More often than not, I get my timeline charts all messed up because I get repeated periods on the label. This caused all data to be shifted on the visual presentation, making it absolutely misleading.
Does this happen to anyone else and is there a way to fix it?
<img width="786" alt="screen shot 2017-09-07 at 15 32 26" src="https://user-images.githubusercontent.com/4039615/30168940-cd93a224-93e2-11e7-9a37-12bcdc47dc53.png">
|
1.0
|
Timeline label shows repeated months - More often than not, I get my timeline charts all messed up because I get repeated periods on the label. This caused all data to be shifted on the visual presentation, making it absolutely misleading.
Does this happen to anyone else and is there a way to fix it?
<img width="786" alt="screen shot 2017-09-07 at 15 32 26" src="https://user-images.githubusercontent.com/4039615/30168940-cd93a224-93e2-11e7-9a37-12bcdc47dc53.png">
|
non_process
|
timeline label shows repeated months more often than not i get my timeline charts all messed up because i get repeated periods on the label this caused all data to be shifted on the visual presentation making it absolutely misleading does this happen to anyone else and is there a way to fix it img width alt screen shot at src
| 0
|
487,444
| 14,046,725,666
|
IssuesEvent
|
2020-11-02 05:33:25
|
AY2021S1-CS2103T-T13-3/tp
|
https://api.github.com/repos/AY2021S1-CS2103T-T13-3/tp
|
closed
|
As a busy trainer, I can ask if xxx time is available
|
priority.low type.story
|
so that I can add a new student to that time slot
|
1.0
|
As a busy trainer, I can ask if xxx time is available - so that I can add a new student to that time slot
|
non_process
|
as a busy trainer i can ask if xxx time is available so that i can add new student to that time slot
| 0
|
11,550
| 14,433,782,918
|
IssuesEvent
|
2020-12-07 05:43:04
|
cggos/cggos.github.io
|
https://api.github.com/repos/cggos/cggos.github.io
|
opened
|
Frequency-Domain Image Analysis: the Fourier Transform - Gavin Gao's Blog
|
Gitalk image-process-fft2
|
https://cggos.github.io/image-process-fft2.html
[TOC]傅里叶变换基础傅里叶级数法国数学家傅里叶发现,任何周期函数都可以用正弦函数和余弦函数构成的无穷级数来表示(选择正弦函数与余弦函数作为基函数是因为它们是正交的),即 任何周期信号都可以表示成一系列正弦信号的叠加 三角形式\[f(t) = \frac{a_0}{2} +\sum_{k=1}^{+\inft...
|
1.0
|
Frequency-Domain Image Analysis: the Fourier Transform - Gavin Gao's Blog - https://cggos.github.io/image-process-fft2.html
[TOC] Fourier transform basics. Fourier series: the French mathematician Fourier discovered that any periodic function can be represented by an infinite series composed of sine and cosine functions (sine and cosine are chosen as basis functions because they are orthogonal), i.e. any periodic signal can be expressed as a superposition of a series of sinusoidal signals. Trigonometric form \[f(t) = \frac{a_0}{2} +\sum_{k=1}^{+\inft...
|
process
|
图像频率域分析之傅里叶变换 gavin gao s blog 傅里叶变换基础傅里叶级数法国数学家傅里叶发现,任何周期函数都可以用正弦函数和余弦函数构成的无穷级数来表示(选择正弦函数与余弦函数作为基函数是因为它们是正交的),即 任何周期信号都可以表示成一系列正弦信号的叠加 三角形式 f t frac a sum k inft
| 1
|
5,288
| 8,073,214,795
|
IssuesEvent
|
2018-08-06 18:30:25
|
GoogleCloudPlatform/google-cloud-python
|
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
|
closed
|
Spanner: 'test_transaction_read_and_insert_then_rollback' systest flake
|
api: spanner flaky testing type: process
|
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7509
```python
________ TestSessionAPI.test_transaction_read_and_insert_then_rollback _________
args = (<tests.system.test_system.TestSessionAPI testMethod=test_transaction_read_and_insert_then_rollback>,)
kwargs = {}, tries = 0
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
try:
> return to_wrap(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/test_utils/retry.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/system/test_system.py:566: in test_transaction_read_and_insert_then_rollback
rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
google/cloud/spanner_v1/streamed.py:141: in __iter__
self._consume_next()
google/cloud/spanner_v1/streamed.py:114: in _consume_next
response = six.next(self._response_iterator)
google/cloud/spanner_v1/snapshot.py:44: in _restart_on_unavailable
for item in iterator:
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/grpc_helpers.py:83: in next
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = Aborted('Transaction was aborted.',)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.ABORTED
detail..."file_line":1095,"grpc_message":"Transaction was aborted.","grpc_status":10}"
>
def raise_from(value, from_value):
> raise value
E Aborted: 409 Transaction was aborted.
../.nox/sys-2-7/lib/python2.7/site-packages/six.py:737: Aborted
```
|
1.0
|
Spanner: 'test_transaction_read_and_insert_then_rollback' systest flake - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7509
```python
________ TestSessionAPI.test_transaction_read_and_insert_then_rollback _________
args = (<tests.system.test_system.TestSessionAPI testMethod=test_transaction_read_and_insert_then_rollback>,)
kwargs = {}, tries = 0
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
try:
> return to_wrap(*args, **kwargs)
../.nox/sys-2-7/lib/python2.7/site-packages/test_utils/retry.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/system/test_system.py:566: in test_transaction_read_and_insert_then_rollback
rows = list(transaction.read(self.TABLE, self.COLUMNS, self.ALL))
google/cloud/spanner_v1/streamed.py:141: in __iter__
self._consume_next()
google/cloud/spanner_v1/streamed.py:114: in _consume_next
response = six.next(self._response_iterator)
google/cloud/spanner_v1/snapshot.py:44: in _restart_on_unavailable
for item in iterator:
../.nox/sys-2-7/lib/python2.7/site-packages/google/api_core/grpc_helpers.py:83: in next
six.raise_from(exceptions.from_grpc_error(exc), exc)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = Aborted('Transaction was aborted.',)
from_value = <_Rendezvous of RPC that terminated with:
status = StatusCode.ABORTED
detail..."file_line":1095,"grpc_message":"Transaction was aborted.","grpc_status":10}"
>
def raise_from(value, from_value):
> raise value
E Aborted: 409 Transaction was aborted.
../.nox/sys-2-7/lib/python2.7/site-packages/six.py:737: Aborted
```
|
process
|
spanner test transaction read and insert then rollback systest flake see python testsessionapi test transaction read and insert then rollback args kwargs tries wraps to wrap def wrapped function args kwargs tries while tries self max tries try return to wrap args kwargs nox sys lib site packages test utils retry py tests system test system py in test transaction read and insert then rollback rows list transaction read self table self columns self all google cloud spanner streamed py in iter self consume next google cloud spanner streamed py in consume next response six next self response iterator google cloud spanner snapshot py in restart on unavailable for item in iterator nox sys lib site packages google api core grpc helpers py in next six raise from exceptions from grpc error exc exc value aborted transaction was aborted from value rendezvous of rpc that terminated with status statuscode aborted detail file line grpc message transaction was aborted grpc status def raise from value from value raise value e aborted transaction was aborted nox sys lib site packages six py aborted
| 1
|
83,854
| 10,340,938,648
|
IssuesEvent
|
2019-09-03 23:57:09
|
spring-projects/spring-boot
|
https://api.github.com/repos/spring-projects/spring-boot
|
closed
|
Document @SpringBootApplication scanBasePackages restrictions
|
type: documentation
|
If I write an `Application` class in `com.acme.app` and annotate it in such a way as to drive Spring to scan its parent package too:
```java
package com.acme.app;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication(scanBasePackages = {"com.acme"})
public class Application {
// ...
}
```
Then if I have my entity and repository classes defined under sibling packages to `com.acme.app`:
* `com.acme.entities` and
* `com.acme.repos`
respectively, they won't be picked up by Spring Boot and in order to make them recognized I have to explicitly use `EntityScan` and `EnableJpaRepositories`:
```java
package com.acme.app;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
@SpringBootApplication(scanBasePackages = {"com.acme"})
@EnableJpaRepositories(basePackages = {"com.acme.repositories"})
@EntityScan(basePackages = {"com.acme.entities"})
public class Application {
...
```
I was stepping through the Spring Boot code to see why is this happening and it boils down to `@SpringBootApplication(scanBasePackages = {"com.acme"})` not adding `com.acme` to `org.springframework.boot.autoconfigure.AutoConfigurationPackages.BasePackages#packages` (it will only add the package of the `Application` class, `com.acme.app` to `BasePackages#packages`).
What is the thinking behind this logic? Wouldn't it make sense to make `@SpringBootApplication(scanBasePackages = {"com.acme"})` add `com.acme` to `BasePackages#packages` too?
|
1.0
|
Document @SpringBootApplication scanBasePackages restrictions - If I write an `Application` class in `com.acme.app` and annotate it in such a way as to drive Spring to scan its parent package too:
```java
package com.acme.app;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication(scanBasePackages = {"com.acme"})
public class Application {
// ...
}
```
Then if I have my entity and repository classes defined under sibling packages to `com.acme.app`:
* `com.acme.entities` and
* `com.acme.repos`
respectively, they won't be picked up by Spring Boot and in order to make them recognized I have to explicitly use `EntityScan` and `EnableJpaRepositories`:
```java
package com.acme.app;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
@SpringBootApplication(scanBasePackages = {"com.acme"})
@EnableJpaRepositories(basePackages = {"com.acme.repositories"})
@EntityScan(basePackages = {"com.acme.entities"})
public class Application {
...
```
I was stepping through the Spring Boot code to see why is this happening and it boils down to `@SpringBootApplication(scanBasePackages = {"com.acme"})` not adding `com.acme` to `org.springframework.boot.autoconfigure.AutoConfigurationPackages.BasePackages#packages` (it will only add the package of the `Application` class, `com.acme.app` to `BasePackages#packages`).
What is the thinking behind this logic? Wouldn't it make sense to make `@SpringBootApplication(scanBasePackages = {"com.acme"})` add `com.acme` to `BasePackages#packages` too?
|
non_process
|
document springbootapplication scanbasepackages restrictions if i write an application class in com acme app and annotate in such a way to drive spring scan its parent package too java package com acme app import org springframework boot autoconfigure springbootapplication springbootapplication scanbasepackages com acme public class application then if i have my entity and repository classes defined under sibling packages to com acme app com acme entities and com acme repos respectively they won t be picked up by spring boot and in order to make them recognized i have to explicitly use entityscan and enablejparepositories java package com acme app import org springframework boot springapplication import org springframework boot autoconfigure springbootapplication import org springframework boot autoconfigure domain entityscan import org springframework data jpa repository config enablejparepositories springbootapplication scanbasepackages com acme enablejparepositories basepackages com acme repositories entityscan basepackages com acme entities public class application i was stepping through the spring boot code to see why is this happening and it boils down to springbootapplication scanbasepackages com acme not adding com acme to org springframework boot autoconfigure autoconfigurationpackages basepackages packages it will only add the package of the application class com acme app to basepackages packages what is the thinking behind this logic wouldn t it make sense to make springbootapplication scanbasepackages com acme add com acme to basepackages packages too
| 0
|
67
| 2,523,272,543
|
IssuesEvent
|
2015-01-20 09:03:47
|
sysown/proxysql-0.2
|
https://api.github.com/repos/sysown/proxysql-0.2
|
opened
|
Extract modifiers from comment
|
cxx_pa development enhancement GLOBAL MYSQL PROTOCOL QUERY PROCESSOR
|
Application should be able to send instructions and modify the behavior of the proxy using key/value pairs inside a comment.
We need to define the list of variables.
|
1.0
|
Extract modifiers from comment - Application should be able to send instructions and modify the behavior of the proxy using key/value pairs inside a comment.
We need to define the list of variables.
|
process
|
extract modifiers from comment application should be able to send instructions and modify the behavior of the proxy using key value pairs inside a comment we need to define the list of variables
| 1
|
16,597
| 21,651,786,071
|
IssuesEvent
|
2022-05-06 10:06:02
|
2i2c-org/team-compass
|
https://api.github.com/repos/2i2c-org/team-compass
|
opened
|
Commit to holding our team meeting on a set date
|
:label: team-process :label: meeting
|
### Context
At the minute, our team meeting happens every 4 weeks. Which means the meeting creeps earlier into the month as the year passes since not all months have four weeks. When trying to create a standup bot that will more reliably notify the team member who will be the facilitator for the meeting, syncing up these migrating periods is really difficult without complex logic (especially since the period of the standup is not available to be set via the API! The default is weekly and needs manually updating in the dashboard!)
### Proposal
Instead of "every four weeks", we commit to holding our team meeting "every 3rd Tuesday of the month". This will prevent the date it happens creeping through the month and means I don't have to write complicated, calendar-based logic to sync up the standup to provide the facilitator enough warning of their role.
### Updates and actions
- [ ] @2i2c-org/tech-team votes on above proposal
- [ ] If proposal passes vote, change the repeat frequency of our team meeting to "every 3rd Tuesday of the month" in the Team Calendar
|
1.0
|
Commit to holding our team meeting on a set date - ### Context
At the minute, our team meeting happens every 4 weeks. Which means the meeting creeps earlier into the month as the year passes since not all months have four weeks. When trying to create a standup bot that will more reliably notify the team member who will be the facilitator for the meeting, syncing up these migrating periods is really difficult without complex logic (especially since the period of the standup is not available to be set via the API! The default is weekly and needs manually updating in the dashboard!)
### Proposal
Instead of "every four weeks", we commit to holding our team meeting "every 3rd Tuesday of the month". This will prevent the date it happens creeping through the month and means I don't have to write complicated, calendar-based logic to sync up the standup to provide the facilitator enough warning of their role.
### Updates and actions
- [ ] @2i2c-org/tech-team votes on above proposal
- [ ] If proposal passes vote, change the repeat frequency of our team meeting to "every 3rd Tuesday of the month" in the Team Calendar
|
process
|
commit to holding our team meeting on a set date context at the minute our team meeting happens every weeks which means the meeting creeps earlier into the month as the year passes since not all months have four weeks when trying to create a standup bot that will more reliably notify the team member who will be the facilitator for the meeting syncing up these migrating periods is really difficult without complex logic especially since the period of the standup is not available to be set via the api the default is weekly and needs manually updating in the dashboard proposal instead of every four weeks we commit to holding our team meeting every tuesday of the month this will prevent the date it happens creeping through the month and means i don t have to write complicated calendar based logic to sync up the standup to provide the facilitator enough warning of their role updates and actions org tech team votes on above proposal if proposal passes vote change the repeat frequency of our team meeting to every tuesday of the month in the team calendar
| 1
|
41,272
| 5,345,960,150
|
IssuesEvent
|
2017-02-17 18:21:42
|
DerpyFirework/KSP-Structures
|
https://api.github.com/repos/DerpyFirework/KSP-Structures
|
closed
|
Test performance and memory impact
|
needs testing
|
Need to see what sort of performance impact adding STRUCTURES has on the game, and what sort of memory impact the plugin has.
|
1.0
|
Test performance and memory impact - Need to see what sort of performance impact adding STRUCTURES has on the game, and what sort of memory impact the plugin has.
|
non_process
|
test performance and memory impact need to see what sort of performance impact adding structures has on the game and what sort of memory impact the plugin has
| 0
|
10,524
| 13,307,264,452
|
IssuesEvent
|
2020-08-25 21:48:53
|
googleapis/nodejs-dialogflow-cx
|
https://api.github.com/repos/googleapis/nodejs-dialogflow-cx
|
opened
|
promote library to GA
|
type: process
|
Package name: **@google-cloud/dialogflow-cx**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
1.0
|
promote library to GA - Package name: **@google-cloud/dialogflow-cx**
Current release: **beta**
Proposed release: **GA**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] 28 days elapsed since last beta release with new API surface
- [ ] Server API is GA
- [ ] Package API is stable, and we can commit to backward compatibility
- [ ] All dependencies are GA
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
|
process
|
promote library to ga package name google cloud dialogflow cx current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
578,570
| 17,148,068,636
|
IssuesEvent
|
2021-07-13 16:46:09
|
CyanLabs/Syn3Updater
|
https://api.github.com/repos/CyanLabs/Syn3Updater
|
closed
|
2.10.0 - Error: System.NullReferenceException
|
Priority: High Type: Bug
|
Object reference not set to an instance of an object.
DownloadViewModelService.cs:line 25
DownloadViewModel.cs:line 697
DownloadViewModel.cs:line 460
DownloadViewModel.cs:line 245
---
DownloadViewModel.cs: line 647
DownloadViewModel.cs: line 179
---
DownloadViewModel.cs:line 162
Download.xaml.cs:line 18
HomeViewModelService.cs:line 47
|
1.0
|
2.10.0 - Error: System.NullReferenceException - Object reference not set to an instance of an object.
DownloadViewModelService.cs:line 25
DownloadViewModel.cs:line 697
DownloadViewModel.cs:line 460
DownloadViewModel.cs:line 245
---
DownloadViewModel.cs: line 647
DownloadViewModel.cs: line 179
---
DownloadViewModel.cs:line 162
Download.xaml.cs:line 18
HomeViewModelService.cs:line 47
|
non_process
|
error system nullreferenceexception object reference not set to an instance of an object downloadviewmodelservice cs line downloadviewmodel cs line downloadviewmodel cs line downloadviewmodel cs line downloadviewmodel cs 行 downloadviewmodel cs 行 downloadviewmodel cs line download xaml cs line homeviewmodelservice cs line
| 0
|
71,428
| 18,738,023,505
|
IssuesEvent
|
2021-11-04 10:10:43
|
diasurgical/devilutionX
|
https://api.github.com/repos/diasurgical/devilutionX
|
closed
|
CMake: Option to use system SDL_Image
|
help wanted build system
|
Writing a package definition for DevilutionX (for RetroLX) and discovered there's no way to tell it to use the system SDL_image.
|
1.0
|
CMake: Option to use system SDL_Image - Writing a package definition for DevilutionX (for RetroLX) and discovered there's no way to tell it to use the system SDL_image.
|
non_process
|
cmake option to use system sdl image writing a package definition for devilutionx for retrolx and discovered there s no way to tell it to use the system sdl image
| 0
|
2,067
| 4,876,301,766
|
IssuesEvent
|
2016-11-16 12:24:21
|
Jumpscale/jscockpit
|
https://api.github.com/repos/Jumpscale/jscockpit
|
closed
|
"502 Bad Gateway" error when updating Moehaha Cockpit
|
process_duplicate
|
https://moehaha-cockpit.aydo2.com/cockpit/version
<img width="1220" alt="screen shot 2016-09-20 at 12 08 02" src="https://cloud.githubusercontent.com/assets/13795109/18666237/05459cfe-7f2b-11e6-8b18-a1a3e8df1d62.png">
|
1.0
|
"502 Bad Gateway" error when updating Moehaha Cockpit - https://moehaha-cockpit.aydo2.com/cockpit/version
<img width="1220" alt="screen shot 2016-09-20 at 12 08 02" src="https://cloud.githubusercontent.com/assets/13795109/18666237/05459cfe-7f2b-11e6-8b18-a1a3e8df1d62.png">
|
process
|
bad gateway error when updating moehaha cockpit img width alt screen shot at src
| 1
|
5,297
| 5,621,941,542
|
IssuesEvent
|
2017-04-04 11:26:20
|
Cadasta/cadasta-platform
|
https://api.github.com/repos/Cadasta/cadasta-platform
|
opened
|
No password verification when changing account's mail address
|
bug needs discussion security
|
### Steps to reproduce the error
1. Go to Edit Profile
2. Change mail address
### Actual behavior
User is not prompted to verify their password to confirm identity.
All the other fixes described in bug #1140 are properly included though:
1. Confirmation mail to new email address does not include username
2. The mail address is not actually updated until the verification link is clicked
3. Correct notification is sent to the old address
But I wonder if we should ask for password verification as well, as we do when changing the password for instance.
### Expected behavior
Ask for password verification?
@adri, what do you think?
|
True
|
No password verification when changing account's mail address - ### Steps to reproduce the error
1. Go to Edit Profile
2. Change mail address
### Actual behavior
User is not prompted to verify their password to confirm identity.
All the other fixes described in bug #1140 are properly included though:
1. Confirmation mail to new email address does not include username
2. The mail address is not actually updated until the verification link is clicked
3. Correct notification is sent to the old address
But I wonder if we should ask for password verification as well, as we do when changing the password for instance.
### Expected behavior
Ask for password verification?
@adri, what do you think?
|
non_process
|
no password verification when changing account s mail address steps to reproduce the error go to edit profile change mail address actual behavior user is not prompted to verify their password to confirm identity all the other fixes described in bug are properly included though confirmation mail to new email address does not include username the mail address is not actually updated until the verification link is clicked correct notification is sent to the old address but i wonder if we should ask for password verification as well as we do when changing the password for instance expected behavior ask for password verification adri what do you think
| 0
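The flow proposed in the record above — verify the current password first, and only apply the new address once the verification link is clicked — can be sketched as follows. All class and function names here are illustrative assumptions, not Cadasta's actual API:

```python
# Hypothetical sketch of the proposed email-change flow: require the
# current password up front, stage the new address as "pending", and
# apply it only when the verification link is confirmed.
sent_links = []  # stand-in for an outgoing mail queue

class User:
    def __init__(self, password, email):
        self._password = password
        self.email = email
        self.pending_email = None

    def check_password(self, candidate):
        return candidate == self._password

def send_verification_link(address):
    sent_links.append(address)

def request_email_change(user, new_email, password):
    # The step the issue asks for: verify identity before accepting the change.
    if not user.check_password(password):
        raise PermissionError("password verification failed")
    user.pending_email = new_email      # not applied yet
    send_verification_link(new_email)   # confirmation goes to the new address

def confirm_email_change(user):
    # Runs when the verification link is clicked.
    user.email, user.pending_email = user.pending_email, None
```

With this shape, a wrong password rejects the request outright, and the live address never changes until `confirm_email_change` runs — matching fixes 1–3 already described in the issue.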
|
51,626
| 6,537,705,016
|
IssuesEvent
|
2017-09-01 00:17:01
|
JuanVictorBedoya/FisLab
|
https://api.github.com/repos/JuanVictorBedoya/FisLab
|
closed
|
Modules Definition
|
Design
|
Module nomination & scope:
Must define how many modules the system will have, what their names will be, and their scopes
|
1.0
|
Modules Definition - Module nomination & scope:
Must define how many modules will have de system, wath will be their names and their scopes
|
non_process
|
modules definition module nomination scope must define how many modules will have de system wath will be their names and their scopes
| 0
|
327,803
| 28,083,384,075
|
IssuesEvent
|
2023-03-30 08:09:46
|
dotnet/machinelearning-modelbuilder
|
https://api.github.com/repos/dotnet/machinelearning-modelbuilder
|
closed
|
Image classification: Validation data strategy cannot be changed successful on the "Advanced data options->Validation data" dialog.
|
Priority:0 Reported by: Test
|
**System Information (please complete the following information):**
Windows OS: Windows-11-Enterprise-22H2
ML.Net Model Builder 2022: 17.14.4.2316401 (Main Build)
Microsoft Visual Studio Enterprise: 2022(17.4.5)
.Net: 6.0
**Describe the bug**
- On which step of the process did you run into an issue:
Validation data strategy cannot be changed successful on the "Advanced data options->Validation data" dialog for Image classification scenario.
**TestMatrix**
https://testpass.blob.core.windows.net/test-pass-data/weather.zip
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio start window.
2. Choose the C# Console App (.NET Core) project template.
3. Add model builder by right click on the project.
4. Select "Image classification" scenario.
5. Go to the Data page, choose the image folder then click the "Advanced data options..." link.
6. Go to the "Validation data" tab, choose other option and click "Save" button. (The default selected option is "Split".)
7. Open the "Advanced data options->Validation data" dialog again and you will see that the selected option is still the original one.
**Expected behavior**
Validation data strategy should be changed successful.
**Screenshot**

**Additional context**
|
1.0
|
Image classification: Validation data strategy cannot be changed successful on the "Advanced data options->Validation data" dialog. - **System Information (please complete the following information):**
Windows OS: Windows-11-Enterprise-22H2
ML.Net Model Builder 2022: 17.14.4.2316401 (Main Build)
Microsoft Visual Studio Enterprise: 2022(17.4.5)
.Net: 6.0
**Describe the bug**
- On which step of the process did you run into an issue:
Validation data strategy cannot be changed successful on the "Advanced data options->Validation data" dialog for Image classification scenario.
**TestMatrix**
https://testpass.blob.core.windows.net/test-pass-data/weather.zip
**To Reproduce**
Steps to reproduce the behavior:
1. Select Create a new project from the Visual Studio start window.
2. Choose the C# Console App (.NET Core) project template.
3. Add model builder by right click on the project.
4. Select "Image classification" scenario.
5. Go to the Data page, choose the image folder then click the "Advanced data options..." link.
6. Go to the "Validation data" tab, choose other option and click "Save" button. (The default selected option is "Split".)
7. Open the "Advanced data options->Validation data" dialog again and you will see that the selected option is still the original one.
**Expected behavior**
Validation data strategy should be changed successful.
**Screenshot**

**Additional context**
|
non_process
|
image classification validation data strategy cannot be changed successful on the advanced data options validation data dialog system information please complete the following information windows os windows enterprise ml net model builder main build microsoft visual studio enterprise net describe the bug on which step of the process did you run into an issue validation data strategy cannot be changed successful on the advanced data options validation data dialog for image classification scenario testmatrix to reproduce steps to reproduce the behavior select create a new project from the visual studio start window choose the c console app net core project template add model builder by right click on the project select image classification scenario go to the data page choose the image folder then click the advanced data options link go to the validation data tab choose other option and click save button the default selected option is split open the advanced data options validation data dialog again and you will see that the selected option is still the original one expected behavior validation data strategy should be changed successful screenshot additional context
| 0
|
2,948
| 3,973,953,025
|
IssuesEvent
|
2016-05-04 20:26:32
|
dotnet/wcf
|
https://api.github.com/repos/dotnet/wcf
|
closed
|
Make it possible to stop self-hosted exe when not running non-elevated
|
Infrastructure
|
The script to start the self-hosted WCF service is able to run in a non-elevated window. It self-elevates and starts the service successfully. However, the script to stop the service fails with "access denied" when run from a non-elevated window. The net effect is attempting to do an OuterLoop run self-hosted from a non-elevated CMD window will leave the service running, making it impossible to clean the enlistment without manual intervention.
PR #1113 attempts to solve this by using RunElevated.vbs, but the TaskKill.exe still fails to stop the exe. The task here is to investigate and make it succeed. It is currently expected we will run selfhost tests in the lab elevated. This change is protection so we don't render an enlistment uncleanable if we accidentally run non-elevated.
|
1.0
|
Make it possible to stop self-hosted exe when not running non-elevated - The script to start the self-hosted WCF service is able to run in a non-elevated window. It self-elevates and starts the service successfully. However, the script to stop the service fails with "access denied" when run from a non-elevated window. The net effect is attempting to do an OuterLoop run self-hosted from a non-elevated CMD window will leave the service running, making it impossible to clean the enlistment without manual intervention.
PR #1113 attempts to solve this by using RunElevated.vbs, but the TaskKill.exe still fails to stop the exe. The task here is to investigate and make it succeed. It is currently expected we will run selfhost tests in the lab elevated. This change is protection so we don't render an enlistment uncleanable if we accidentally run non-elevated.
|
non_process
|
make it possible to stop self hosted exe when not running non elevated the script to start the self hosted wcf service is able to run in a non elevated window it self elevates and starts the service successfully however the script to stop the service fails with access denied when run from a non elevated window the net effect is attempting to do an outerloop run self hosted from a non elevated cmd window will leave the service running making it impossible to clean the enlistment without manual intervention pr attempts to solve this by using runelevated vbs but the taskkill exe still fails to stop the exe the task here is to investigate and make it succeed it is currently expected we will run selfhost tests in the lab elevated this change is protection so we don t render an enlistment uncleanable if we accidentally run non elevated
| 0
|
56,269
| 6,972,114,843
|
IssuesEvent
|
2017-12-11 16:03:41
|
fgpv-vpgf/fgpv-vpgf
|
https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf
|
closed
|
User may not know how to exit toggle filter settings menu in a data grid
|
experience: design feedback: discussion priority: high
|
A user could find it confusing to exit out of these settings. Can we make this more intuitive?

|
1.0
|
User may not know how to exit toggle filter settings menu in a data grid - A user could find it confusing to exit out of these settings. Can we make this more intuitive?

|
non_process
|
user may not know how to exit toggle filter settings menu in a data grid a user could find it confusing to exit out of these settings can we make this more intuitive
| 0
|
12,633
| 15,016,545,864
|
IssuesEvent
|
2021-02-01 09:43:42
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
failed to add a worker node
|
process_wontfix type_bug
|
failed to add nodes to your cluster. due to error not enough capacity in farm freefarm for 1 kubernetes nodes of flavor K8SNodeFlavor.MEDIUM. Use the refresh button on the upper right to restart Extend Kubernetes Cluster creation
<img width="1098" alt="Screenshot 2021-01-26 at 14 37 10" src="https://user-images.githubusercontent.com/43240801/105851990-200f7c00-5fe4-11eb-879b-26c690f680fa.png">
|
1.0
|
failed to add a worker node - failed to add nodes to your cluster. due to error not enough capacity in farm freefarm for 1 kubernetes nodes of flavor K8SNodeFlavor.MEDIUM. Use the refresh button on the upper right to restart Extend Kubernetes Cluster creation
<img width="1098" alt="Screenshot 2021-01-26 at 14 37 10" src="https://user-images.githubusercontent.com/43240801/105851990-200f7c00-5fe4-11eb-879b-26c690f680fa.png">
|
process
|
failed to add a worker node failed to add nodes to your cluster due to error not enough capacity in farm freefarm for kubernetes nodes of flavor medium use the refresh button on the upper right to restart extend kubernetes cluster creation img width alt screenshot at src
| 1
|
500,896
| 14,516,901,819
|
IssuesEvent
|
2020-12-13 17:35:44
|
ansible/awx
|
https://api.github.com/repos/ansible/awx
|
closed
|
Instance Group Form doclink should be on the right of the form
|
component:ui_next priority:medium qe:regression qe:visual state:needs_devel type:bug
|

This should be right-justified, on the far right side of the form.
|
1.0
|
Instance Group Form doclink should be on the right of the form - 
This should be right-justified, on the far right side of the form.
|
non_process
|
instance group form doclink should be on the right of the form this should be right justified on the far right side of the form
| 0
|
347,246
| 10,427,218,172
|
IssuesEvent
|
2019-09-16 19:24:50
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
Month is displayed on the line chart tooltips instead of week when grouped by week
|
Priority:P2 Type:Bug Visualization/
|
**Describe the bug**
Month is displayed on the line chart tooltips instead of week when grouped by week. Looks like regression in 0.33.0 since it was displaying properly in previous versions.
**To Reproduce**
Steps to reproduce the behavior:
1. Create line visualization with grouping by week
2. Hover on the dot on the line
3. See month instead of the week
**Expected behavior**
See week.
**Screenshots**

**Information about your Metabase Installation:**
- Your browser and the version: Chrome 76.0.3809.132
- Your operating system: Windows 10
- Metabase version: 0.33.2
- Metabase hosting environment: Google App Engine
- Metabase internal database: Postgres
**Severity**
Annoying.
|
1.0
|
Month is displayed on the line chart tooltips instead of week when grouped by week - **Describe the bug**
Month is displayed on the line chart tooltips instead of week when grouped by week. Looks like regression in 0.33.0 since it was displaying properly in previous versions.
**To Reproduce**
Steps to reproduce the behavior:
1. Create line visualization with grouping by week
2. Hover on the dot on the line
3. See month instead of the week
**Expected behavior**
See week.
**Screenshots**

**Information about your Metabase Installation:**
- Your browser and the version: Chrome 76.0.3809.132
- Your operating system: Windows 10
- Metabase version: 0.33.2
- Metabase hosting environment: Google App Engine
- Metabase internal database: Postgres
**Severity**
Annoying.
|
non_process
|
month is displayed on the line chart tooltips instead of week when grouped by week describe the bug month is displayed on the line chart tooltips instead of week when grouped by week looks like regression in since it was displaying properly in previous versions to reproduce steps to reproduce the behavior create line visualization with grouping by week hover on the dot on the line see month instead of the week expected behavior see week screenshots information about your metabase installation your browser and the version chrome your operating system windows metabase version metabase hosting environment google app engine metabase internal database postgres severity annoying
| 0
|
142,116
| 13,016,679,794
|
IssuesEvent
|
2020-07-26 08:05:45
|
titus-ong/chordparser
|
https://api.github.com/repos/titus-ong/chordparser
|
closed
|
Update Colab Notebook
|
bug documentation
|
With readthedocs now, the documentation section of the notebook should be moved to /docs, leaving only the working example. With the changes in classes and methods in v0.3.x, the code has to be updated as well.
|
1.0
|
Update Colab Notebook - With readthedocs now, the documentation section of the notebook should be moved to /docs, leaving only the working example. With the changes in classes and methods in v0.3.x, the code has to be updated as well.
|
non_process
|
update colab notebook with readthedocs now the documentation section of the notebook should be moved to docs leaving only the working example with the changes in classes and methods in x the code has to be updated as well
| 0
|
105
| 2,544,425,692
|
IssuesEvent
|
2015-01-29 09:49:23
|
robotology/yarp
|
https://api.github.com/repos/robotology/yarp
|
opened
|
Standard headers for MSVC
|
Type: Process
|
We have these files for MSVC in YARP:
* ``src/idls/thrift/msvc/inttypes.h``
* ``src/idls/thrift/msvc/stdint.h``
* ``src/libYARP_manager/include/yarp/manager/ymm-dir.h`` (slightly modified version of ``dirent.h``)
* ``src/yarpdataplayer-gtk/include/msvc/dirent.h``
* ``src/yarpdataplayer-qt/msvc/dirent.h``
``inttypes.h`` and ``stdint.h`` are standard C99 headers, ``dirent.h`` is standard posix.
I think we should:
* [ ] Check if these are required for all MSVC version (including newer) and ensure that the compiler version is used when available.
* [ ] Move these files outside from the ``src`` directory (perhaps in ``extern``), so that we can avoid duplicating these files.
|
1.0
|
Standard headers for MSVC - We have these files for MSVC in YARP:
* ``src/idls/thrift/msvc/inttypes.h``
* ``src/idls/thrift/msvc/stdint.h``
* ``src/libYARP_manager/include/yarp/manager/ymm-dir.h`` (slightly modified version of ``dirent.h``)
* ``src/yarpdataplayer-gtk/include/msvc/dirent.h``
* ``src/yarpdataplayer-qt/msvc/dirent.h``
``inttypes.h`` and ``stdint.h`` are standard C99 headers, ``dirent.h`` is standard posix.
I think we should:
* [ ] Check if these are required for all MSVC version (including newer) and ensure that the compiler version is used when available.
* [ ] Move these files outside from the ``src`` directory (perhaps in ``extern``), so that we can avoid duplicating these files.
|
process
|
standard headers for msvc we have these files for msvc in yarp src idls thrift msvc inttypes h src idls thrift msvc stdint h src libyarp manager include yarp manager ymm dir h slightly modified version of dirent h src yarpdataplayer gtk include msvc dirent h src yarpdataplayer qt msvc dirent h inttypes h and stdint h are standard headers dirent h is standard posix i think we should check if these are required for all msvc version including newer and ensure that the compiler version is used when available move these files outside from the src directory perhaps in extern so that we can avoid duplicating these files
| 1
|
17,131
| 22,649,106,633
|
IssuesEvent
|
2022-07-01 11:44:08
|
PyCQA/pylint
|
https://api.github.com/repos/PyCQA/pylint
|
closed
|
No cyclic-import messages with jobs=0
|
Bug :beetle: topic-multiprocessing
|
Running `pylint --jobs=0` does not report cyclic-import. This can be reproduced with a trivial example.
`a.py`:
```
import b
```
`b.py`:
```
import a
```
This issue is not new to pylint 2.7. It is more noticeable only because running jobs in parallel became more useful in this version.
```
pylint --version
pylint 2.7.1
astroid 2.5
Python 3.8.6 (default, Sep 25 2020, 09:36:53)
[GCC 10.2.0]
```
|
1.0
|
No cyclic-import messages with jobs=0 - Running `pylint --jobs=0` does not report cyclic-import. This can be reproduced with a trivial example.
`a.py`:
```
import b
```
`b.py`:
```
import a
```
This issue is not new to pylint 2.7. It is more noticeable only because running jobs in parallel became more useful in this version.
```
pylint --version
pylint 2.7.1
astroid 2.5
Python 3.8.6 (default, Sep 25 2020, 09:36:53)
[GCC 10.2.0]
```
|
process
|
no cyclic import messages with jobs running pylint jobs does not report about cyclic import this can be reproduced with a trivial example a py import b b py import a this issue is not new to pylint it is more noticeable only because running jobs in parallel became more useful in this version pylint version pylint astroid python default sep
| 1
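Conceptually, the cyclic-import check in the record above walks the module import graph looking for cycles — information that is only visible when the whole graph is analysed together, which is why splitting work across parallel jobs can lose it. A minimal, self-contained sketch of the detection step (not pylint's actual implementation; the two-entry graph mirrors the `a.py`/`b.py` example):

```python
# Minimal sketch: detect import cycles with a depth-first walk over an
# import graph. Each cycle is reported once per starting module, so the
# two-module example below surfaces the same loop from both ends.
def find_cycles(graph):
    cycles = []

    def dfs(node, path):
        if node in path:
            # Close the loop from the first occurrence of `node`.
            cycles.append(path[path.index(node):] + [node])
            return
        for dep in graph.get(node, []):
            dfs(dep, path + [node])

    for start in graph:
        dfs(start, [])
    return cycles

# a imports b, b imports a — the trivial reproduction from the issue.
print(find_cycles({"a": ["b"], "b": ["a"]}))
```

Running this prints both traversals of the single cycle, `[['a', 'b', 'a'], ['b', 'a', 'b']]`; a real checker would deduplicate rotations before emitting one `cyclic-import` message.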
|
18,486
| 24,550,866,356
|
IssuesEvent
|
2022-10-12 12:30:51
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Participant Manager] Participant details screen is not working for Closed study
|
Bug P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
Steps:-
1. Login into PM
2. Navigate to any closed study
3. Navigate to Participant details screen from any site and observe
A/R:- Getting error message in Participant details
E/R:- Participant details should be displayed without any error message
Note:- Issue is not observed for open study

|
3.0
|
[Participant Manager] Participant details screen is not working for Closed study - Steps:-
1. Login into PM
2. Navigate to any closed study
3. Navigate to Participant details screen from any site and observe
A/R:- Getting error message in Participant details
E/R:- Participant details should be displayed without any error message
Note:- Issue is not observed for open study

|
process
|
participant details screen is not working for closed study steps login into pm navigate to any closed study navigate to participant details screen from any site and observe a r getting error message in participant details e r participant details should be displayed without any error message note issue is not observed for open study
| 1
|
618,045
| 19,423,358,796
|
IssuesEvent
|
2021-12-21 00:05:40
|
LACMTA/mybus
|
https://api.github.com/repos/LACMTA/mybus
|
closed
|
Hotfix: update Line descriptions
|
0-Priority: High
|
Line 2: change Downtown LA to USC
Line 51: change Wilshire Center to Westlake/MacArthur Park Station
Line 53: add Willowbrook/Rosa Parks Station
Line 230: Change Mission College to Sylmar Station
Line 256 change Commerce to CSULA
Line 260 Change Altadena to Pasadena
|
1.0
|
Hotfix: update Line descriptions - Line 2: change Downtown LA to USC
Line 51: change Wilshire Center to Westlake/MacArthur Park Station
Line 53: add Willowbrook/Rosa Parks Station
Line 230: Change Mission College to Sylmar Station
Line 256 change Commerce to CSULA
Line 260 Change Altadena to Pasadena
|
non_process
|
hotfix update line descriptions line change downtown la to usc line change wilshire center to westlake macarthur park station line add willowbrook rosa parks station line change mission college to sylmar station line change commerce to csula line change altadena to pasadena
| 0
|
18,305
| 24,417,795,780
|
IssuesEvent
|
2022-10-05 17:28:24
|
biocodellc/localcontexts_db
|
https://api.github.com/repos/biocodellc/localcontexts_db
|
closed
|
Content updates for Registration > Choose an account
|
info registration process content
|
Adding clarification to the “Choose an account” page in the registration process through text updates and adding a tooltip.
### Current text
<img width="1354" alt="Screenshot of the Choose an account page in the Hub" src="https://user-images.githubusercontent.com/49764220/193894514-30fe939a-d4f3-41aa-8a4d-4f915d61e9cc.png">
### Updated text
**Choose an account**
Local Contexts has three types of accounts. You can choose to join an existing account or create a new account. You can be a member of multiple accounts.
**Community account**
_[Add tooltip]_ A community entity might be an Indigenous or local community Cultural Department, Tribal and Historical Preservation Officers, Community Center, Preservation Office, Community Library, Archive, Museum, or Land Council
Who? An Indigenous or local community entity or representative
What? Customize and apply Traditional Knowledge and Biocultural Labels, and create projects
**Institution account**
_[Tooltip]_ An institution might be an archive, library, museum, historical society, gallery, data repository, university, or media production company
Who? Cultural or research institution, data repository, and other organizations. If you are an Indigenous institution, choose the community account.
What? Create projects and generate Notices
**Researcher account**
Who? An individual who carries out academic or scientific research independently or in an institution
What? Create projects and generate Notices
|
1.0
|
Content updates for Registration > Choose an account - Adding clarification to the “Choose an account” page in the registration process through text updates and adding a tooltip.
### Current text
<img width="1354" alt="Screenshot of the Choose an account page in the Hub" src="https://user-images.githubusercontent.com/49764220/193894514-30fe939a-d4f3-41aa-8a4d-4f915d61e9cc.png">
### Updated text
**Choose an account**
Local Contexts has three types of accounts. You can choose to join an existing account or create a new account. You can be a member of multiple accounts.
**Community account**
_[Add tooltip]_ A community entity might be an Indigenous or local community Cultural Department, Tribal and Historical Preservation Officers, Community Center, Preservation Office, Community Library, Archive, Museum, or Land Council
Who? An Indigenous or local community entity or representative
What? Customize and apply Traditional Knowledge and Biocultural Labels, and create projects
**Institution account**
_[Tooltip]_ An institution might be an archive, library, museum, historical society, gallery, data repository, university, or media production company
Who? Cultural or research institution, data repository, and other organizations. If you are an Indigenous institution, choose the community account.
What? Create projects and generate Notices
**Researcher account**
Who? An individual who carries out academic or scientific research independently or in an institution
What? Create projects and generate Notices
|
process
|
content updates for registration choose an account adding clarification to the “choose an account” page in the registration process through text updates and adding a tooltip current text img width alt screenshot of the choose an account page in the hub src updated text choose an account local contexts has three types of accounts you can choose to join an existing account or create a new account you can be a member of multiple accounts community account a community entity might be an indigenous or local community cultural department tribal and historical preservation officers community center preservation office community library archive museum or land council who an indigenous or local community entity or representative what customize and apply traditional knowledge and biocultural labels and create projects institution account an institution might be an archive library museum historical society gallery data repository university or media production company who cultural or research institution data repository and other organizations if you are an indigenous institution choose the community account what create projects and generate notices researcher account who an individual who carries out academic or scientific research independently or in an institution what create projects and generate notices
| 1
|
11,049
| 13,879,909,098
|
IssuesEvent
|
2020-10-17 16:27:24
|
candango/myfuses
|
https://api.github.com/repos/candango/myfuses
|
closed
|
Handle context better.
|
core.verbs enhancement process
|
To migrate global variables from a context to another we set a variable as global in a loop, every time, even if the variable is still defined as global.
Let's handle the context better.
|
1.0
|
Handle context better. - To migrate global variables from a context to another we set a variable as global in a loop, every time, even if the variable is still defined as global.
Let's handle the context better.
|
process
|
handle context better to migrate global variables from a context to another we set a variable as global in a loop every time even if the variable is still defined as global let s handle the context better
| 1
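The improvement described in the record above — stop re-declaring variables that are already defined in the destination context on every loop pass — amounts to copying only what is missing when migrating one context into another. A language-agnostic sketch using plain dictionaries as contexts (names are illustrative, not MyFuses' actual structures):

```python
def migrate_globals(source, target):
    # Copy only the variables the target context does not define yet,
    # instead of re-declaring every variable on each iteration.
    for name, value in source.items():
        if name not in target:
            target[name] = value
    return target
```

Values already present in the target survive untouched, so repeated migrations become cheap no-ops rather than redundant redeclarations.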
|
221,040
| 7,373,261,898
|
IssuesEvent
|
2018-03-13 16:47:50
|
SiLeBAT/FSK-Lab
|
https://api.github.com/repos/SiLeBAT/FSK-Lab
|
opened
|
FSK-Writer: write out the calculated values of the model output parameters as dedicated HDF5 files
|
enhancement medium priority
|
It would be helpful if the results of a simulation (i.e. the numerical values of each output parameter) can be accessed directly in the generated FSKX file. For example one could create a folder inside the FSKX file for each simulation run that holds HDF5 files for each output parameter.
|
1.0
|
FSK-Writer: write out the calculated values of the model output parameters as dedicated HDF5 files - It would be helpful if the results of a simulation (i.e. the numerical values of each output parameter) can be accessed directly in the generated FSKX file. For example one could create a folder inside the FSKX file for each simulation run that holds HDF5 files for each output parameter.
|
non_process
|
fsk writer write out the calculated values of the model output parameters as dedicated files it would be helpful if the results of a simulation i e the numerical values of each output parameter can be accessed directly in the generated fskx file for example one could create a folder inside the fskx file for each simulation run that holds files for each output parameter
| 0
|
14,810
| 18,143,492,141
|
IssuesEvent
|
2021-09-25 02:39:23
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
GRASS geoprocessing tools don't work on 3.20.1.
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
GRASS tools run in QGIS 3.20.1 but keep failing. I then reinstalled QGIS 3.18.3 and opened QGIS with this icon "QGIS Desktop 3.18.3 with GRASS 7.8.5" and the grass processing tools work fine now.
Thank you for all the great work you do
### Steps to reproduce the issue
1. I have a point shapefile (EPSG4326) loaded in QGIS

2. I then try to run any Grass algorithm, in the example case, v.buffer:

3. I've tried different datasets and other grass algorithms, but the output stays the same.
4. The error is not in the previous QGIS version
### Versions
3.20.1
### Additional context
My colleague had the same issue and also had to revert back to a previous QGIS version to be able to use GRASS algorithms
|
1.0
|
GRASS geoprocessing tools don't work on 3.20.1. - ### What is the bug or the crash?
GRASS tools run in QGIS 3.20.1 but keep failing. I then reinstalled QGIS 3.18.3 and opened QGIS with this icon "QGIS Desktop 3.18.3 with GRASS 7.8.5" and the grass processing tools work fine now.
Thank you for all the great work you do
### Steps to reproduce the issue
1. I have a point shapefile (EPSG4326) loaded in QGIS

2. I then try to run any Grass algorithm, in the example case, v.buffer:

3. I've tried different datasets and other grass algorithms, but the output stays the same.
4. The error is not in the previous QGIS version
### Versions
3.20.1
### Additional context
My colleague had the same issue and also had to revert back to a previous QGIS version to be able to use GRASS algorithms
|
process
|
grass geoprocessing tools don t work on what is the bug or the crash grass tools run in qgis but keep failing i then reinstalled qgis and opened qgis with this icon qgis desktop with grass and the grass processing tools work fine now thank you for all the great work you do steps to reproduce the issue i have a point shapefile loaded in qgis i then try to run any grass algorithm in the example case v buffer i ve tried different datasets and other grass algorithms but the output stays the same the error is not in the previous qgis version versions additional context my colleague had the same issue and also had to revert back to a previous qgis version to be able to use grass algorithms
| 1
|
4,043
| 6,973,841,393
|
IssuesEvent
|
2017-12-11 21:58:54
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
[System.Diagnostics.Process]::GetProcesses(computer) returns local processes
|
area-System.Diagnostics.Process bug
|
Repro in PowerShell Core 6
```powershell
[System.Diagnostics.Process]::GetProcesses("not existing computer")
```
Expected
```none
error not able to connect to computer
```
Actual
```none
local processes returned
```
|
1.0
|
[System.Diagnostics.Process]::GetProcesses(computer) returns local processes - Repro in PowerShell Core 6
```powershell
[System.Diagnostics.Process]::GetProcesses("not existing computer")
```
Expected
```none
error not able to connect to computer
```
Actual
```none
local processes returned
```
|
process
|
getprocesses computer returns local processes repro in powershell core powershell getprocesses not existing computer expected none error not able to connect to computer actual none local processes returned
| 1
|
286,262
| 21,569,853,957
|
IssuesEvent
|
2022-05-02 06:40:18
|
carbynestack/carbynestack.github.io
|
https://api.github.com/repos/carbynestack/carbynestack.github.io
|
closed
|
Link to dissemination activities
|
documentation
|
The community section of the website should offer links to various dissemination artefacts. Should include at least:
- Press coverage (blog posts, articles, etc.)
- Conference contributions (papers, talks, tutorials, etc.)
|
1.0
|
Link to dissemination activities - The community section of the website should offer links to various dissemination artefacts. Should include at least:
- Press coverage (blog posts, articles, etc.)
- Conference contributions (papers, talks, tutorials, etc.)
|
non_process
|
link to dissemination activities the community section of the website should offer links to various dissemination artefacts should include at least press coverage blog posts articles etc conference contributions papers talks tutorials etc
| 0
|
64,544
| 8,743,487,973
|
IssuesEvent
|
2018-12-12 19:18:50
|
projectacrn/acrn-hypervisor
|
https://api.github.com/repos/projectacrn/acrn-hypervisor
|
closed
|
doc: update HLD Power Management
|
area: documentation
|
Transcode, edit, and upload HLD 0.7 section 8 (Power Management)
|
1.0
|
doc: update HLD Power Management - Transcode, edit, and upload HLD 0.7 section 8 (Power Management)
|
non_process
|
doc update hld power management transcode edit and upload hld section power management
| 0
|
1,434
| 3,996,660,812
|
IssuesEvent
|
2016-05-10 19:32:28
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
disk upload without temporary
|
component:data processing enhancement
|
When the user uploads a disk image / iso, CXF creates a temporary file on the filesystem of the controller. The upload method is invoked only when the file is transferred to the controller. This file will usually be several gigabytes in size, and the controller should not have the requirement to be able to store it, not even temporarily.
Instead, as the file is being uploaded, the controller should transfer it to the the (or any) host that stores the disk.
|
1.0
|
disk upload without temporary - When the user uploads a disk image / iso, CXF creates a temporary file on the filesystem of the controller. The upload method is invoked only when the file is transferred to the controller. This file will usually be several gigabytes in size, and the controller should not have the requirement to be able to store it, not even temporarily.
Instead, as the file is being uploaded, the controller should transfer it to the the (or any) host that stores the disk.
|
process
|
disk upload without temporary when the user uploads a disk image iso cxf creates a temporary file on the filesystem of the controller the upload method is invoked only when the file is transferred to the controller this file will usually be several gigabytes in size and the controller should not have the requirement to be able to store it not even temporarily instead as the file is being uploaded the controller should transfer it to the the or any host that stores the disk
| 1
|
26,670
| 7,857,468,898
|
IssuesEvent
|
2018-06-21 10:53:13
|
ShaikASK/Testing
|
https://api.github.com/repos/ShaikASK/Testing
|
closed
|
Edit New Hire : Application displays incorrect "OT Bill Rate" in edit "New Hire" screen
|
Defect HR Admin Module HR User Module New Hire P2 Release #3 Build 4
|
Steps
1.Launch the URL
2.Sign in as HR Admin user
3.Create New Hire by providing all mandatory field
4.Select OT Rate as 2 and OT Bill Rate as 3 from up and down arrows
5.Click on save
6.Edit the above created New Hire
Experienced Behaviour : Observed that OT Bill Rate is displayed as 2 instead of 3
Expected Behaviour : Ensure that application should display the same OT Bill edit mode
|
1.0
|
Edit New Hire : Application displays incorrect "OT Bill Rate" in edit "New Hire" screen - Steps
1.Launch the URL
2.Sign in as HR Admin user
3.Create New Hire by providing all mandatory field
4.Select OT Rate as 2 and OT Bill Rate as 3 from up and down arrows
5.Click on save
6.Edit the above created New Hire
Experienced Behaviour : Observed that OT Bill Rate is displayed as 2 instead of 3
Expected Behaviour : Ensure that application should display the same OT Bill edit mode
|
non_process
|
edit new hire application displays incorrect ot bill rate in edit new hire screen steps launch the url sign in as hr admin user create new hire by providing all mandatory field select ot rate as and ot bill rate as from up and down arrows click on save edit the above created new hire experienced behaviour observed that ot bill rate is displayed as instead of expected behaviour ensure that application should display the same ot bill edit mode
| 0
|
229,647
| 25,362,303,477
|
IssuesEvent
|
2022-11-21 01:05:27
|
interserver/mailbaby-api-samples
|
https://api.github.com/repos/interserver/mailbaby-api-samples
|
opened
|
CVE-2022-4065 (Medium) detected in testng-6.13.1.jar
|
security vulnerability
|
## CVE-2022-4065 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>testng-6.13.1.jar</b></p></summary>
<p>A testing framework for the JVM</p>
<p>Library home page: <a href="http://testng.org">http://testng.org</a></p>
<p>Path to dependency file: /openapi-client/groovy/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.testng/testng/6.13.1/2495393a0b4b7d7a4b49ea1f8516376f70f482c/testng-6.13.1.jar</p>
<p>
Dependency Hierarchy:
- groovy-all-2.5.14.jar (Root Library)
- groovy-testng-2.5.14.jar
- :x: **testng-6.13.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/interserver/mailbaby-api-samples/commit/0879348474e22463e77dc76ba5e5f7e6300a2b6c">0879348474e22463e77dc76ba5e5f7e6300a2b6c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in cbeust testng. It has been declared as critical. Affected by this vulnerability is the function testngXmlExistsInJar of the file testng-core/src/main/java/org/testng/JarFileUtils.java of the component XML File Parser. The manipulation leads to path traversal. The attack can be launched remotely. The name of the patch is 9150736cd2c123a6a3b60e6193630859f9f0422b. It is recommended to apply a patch to fix this issue. The associated identifier of this vulnerability is VDB-214027.
<p>Publish Date: 2022-11-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4065>CVE-2022-4065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-4065 (Medium) detected in testng-6.13.1.jar - ## CVE-2022-4065 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>testng-6.13.1.jar</b></p></summary>
<p>A testing framework for the JVM</p>
<p>Library home page: <a href="http://testng.org">http://testng.org</a></p>
<p>Path to dependency file: /openapi-client/groovy/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.testng/testng/6.13.1/2495393a0b4b7d7a4b49ea1f8516376f70f482c/testng-6.13.1.jar</p>
<p>
Dependency Hierarchy:
- groovy-all-2.5.14.jar (Root Library)
- groovy-testng-2.5.14.jar
- :x: **testng-6.13.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/interserver/mailbaby-api-samples/commit/0879348474e22463e77dc76ba5e5f7e6300a2b6c">0879348474e22463e77dc76ba5e5f7e6300a2b6c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was found in cbeust testng. It has been declared as critical. Affected by this vulnerability is the function testngXmlExistsInJar of the file testng-core/src/main/java/org/testng/JarFileUtils.java of the component XML File Parser. The manipulation leads to path traversal. The attack can be launched remotely. The name of the patch is 9150736cd2c123a6a3b60e6193630859f9f0422b. It is recommended to apply a patch to fix this issue. The associated identifier of this vulnerability is VDB-214027.
<p>Publish Date: 2022-11-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4065>CVE-2022-4065</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in testng jar cve medium severity vulnerability vulnerable library testng jar a testing framework for the jvm library home page a href path to dependency file openapi client groovy build gradle path to vulnerable library home wss scanner gradle caches modules files org testng testng testng jar dependency hierarchy groovy all jar root library groovy testng jar x testng jar vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was found in cbeust testng it has been declared as critical affected by this vulnerability is the function testngxmlexistsinjar of the file testng core src main java org testng jarfileutils java of the component xml file parser the manipulation leads to path traversal the attack can be launched remotely the name of the patch is it is recommended to apply a patch to fix this issue the associated identifier of this vulnerability is vdb publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href step up your open source security game with mend
| 0
|
11,367
| 14,189,692,346
|
IssuesEvent
|
2020-11-14 01:59:17
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
QuickShop reload event
|
Good First Issue In Process Priority:Major
|
**Describe the Feature**
An event that fires after the plugin finished reloading, so a plugin that uses the API can re register its commands in QuickShop
|
1.0
|
QuickShop reload event - **Describe the Feature**
An event that fires after the plugin finished reloading, so a plugin that uses the API can re register its commands in QuickShop
|
process
|
quickshop reload event describe the feature an event that fires after the plugin finished reloading so a plugin that uses the api can re register its commands in quickshop
| 1
|
271,090
| 8,475,778,653
|
IssuesEvent
|
2018-10-24 19:55:18
|
dojot/dojot
|
https://api.github.com/repos/dojot/dojot
|
opened
|
[GUI] Device filtering: error in query parameters
|
Priority:Medium Team:Frontend Type:Bug
|
**Steps to reproduce the problem:**
1. Create multiple devices
2. Search for a specific device on page 6.

3. The device is not found. Note that **page_num = 6** in the request for BE


4. Click on page 1. The device is found.

**Expected behavior:** search should begin on page 1
**Affected Version:** 0.3.0-nightly_20181010
|
1.0
|
[GUI] Device filtering: error in query parameters - **Steps to reproduce the problem:**
1. Create multiple devices
2. Search for a specific device on page 6.

3. The device is not found. Note that **page_num = 6** in the request for BE


4. Click on page 1. The device is found.

**Expected behavior:** search should begin on page 1
**Affected Version:** 0.3.0-nightly_20181010
|
non_process
|
device filtering error in query parameters steps to reproduce the problem create multiple devices search for a specific device on page the device is not found note that page num in the request for be click on page the device is found expected behavior search should begin on page affected version nightly
| 0
|
18,180
| 24,231,978,004
|
IssuesEvent
|
2022-09-26 19:09:08
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Add tests for mongodb's `findRaw`, `aggregateRaw` and `runCommandRaw` in sequential transactions
|
process/candidate kind/improvement topic: tests tech/typescript team/client
|
Creating this issue as a reminder that we are missing functional tests for `findRaw`, `aggregateRaw` and `runCommandRaw` in sequential transactions, we currently have no guarantees that it's working as expected.
+Reminder: Check if this is tested on the engines
|
1.0
|
Add tests for mongodb's `findRaw`, `aggregateRaw` and `runCommandRaw` in sequential transactions - Creating this issue as a reminder that we are missing functional tests for `findRaw`, `aggregateRaw` and `runCommandRaw` in sequential transactions, we currently have no guarantees that it's working as expected.
+Reminder: Check if this is tested on the engines
|
process
|
add tests for mongodb s findraw aggregateraw and runcommandraw in sequential transactions creating this issue as a reminder that we are missing functional tests for findraw aggregateraw and runcommandraw in sequential transactions we currently have no guarantees that it s working as expected reminder check if this is tested on the engines
| 1
|
17,641
| 23,465,928,452
|
IssuesEvent
|
2022-08-16 16:45:29
|
GoogleCloudPlatform/ruby-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/ruby-docs-samples
|
closed
|
Fix Ruby 3.0 tests
|
type: process samples
|
Currently, the Ruby 3 tests are failing pretty much consistently in this repo. Fixing this, I believe, requires the following:
* Update the toplevel Gemfile to use `google-style ~> 1.25.1`, which brings in a Rubocop version that is compatible with Ruby 3.
* Make any necessary fixes across the repo to conform to the Rubocop change
* Make sure all Gemfile.lock files are updated, to bring in updated versions of client libraries that are Ruby 3 compatible.
* Make any other necessary keyword argument fixes in the samples themselves and their tests to get the tests to pass on Ruby 3.
If this is going to take a long time, we could hack the test scripts to omit Ruby 3 temporarily. If we do this, make sure its effect is limited to this repo. We don't want to disable Ruby 3 tests in google-cloud-ruby itself or any other repos.
|
1.0
|
Fix Ruby 3.0 tests - Currently, the Ruby 3 tests are failing pretty much consistently in this repo. Fixing this, I believe, requires the following:
* Update the toplevel Gemfile to use `google-style ~> 1.25.1`, which brings in a Rubocop version that is compatible with Ruby 3.
* Make any necessary fixes across the repo to conform to the Rubocop change
* Make sure all Gemfile.lock files are updated, to bring in updated versions of client libraries that are Ruby 3 compatible.
* Make any other necessary keyword argument fixes in the samples themselves and their tests to get the tests to pass on Ruby 3.
If this is going to take a long time, we could hack the test scripts to omit Ruby 3 temporarily. If we do this, make sure its effect is limited to this repo. We don't want to disable Ruby 3 tests in google-cloud-ruby itself or any other repos.
|
process
|
fix ruby tests currently the ruby tests are failing pretty much consistently in this repo fixing this i believe requires the following update the toplevel gemfile to use google style which brings in a rubocop version that is compatible with ruby make any necessary fixes across the repo to conform to the rubocop change make sure all gemfile lock files are updated to bring in updated versions of client libraries that are ruby compatible make any other necessary keyword argument fixes in the samples themselves and their tests to get the tests to pass on ruby if this is going to take a long time we could hack the test scripts to omit ruby temporarily if we do this make sure its effect is limited to this repo we don t want to disable ruby tests in google cloud ruby itself or any other repos
| 1
|
20,940
| 27,798,557,654
|
IssuesEvent
|
2023-03-17 14:17:35
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Process functions: Infer `help` of input ports from function argument docstring
|
topic/workflows type/accepted feature priority/nice-to-have topic/processes
|
The `input` method of the `ProcessSpec` allows defining a `help` message for the input port, which is very useful when defining `WorkChain` and `CalcJob` plugins. However, this API is not directly available for process functions since the process spec is inferred dynamically from the function signature. It would be possible to infer it from the docstring. This would be very useful especially when the process function gets exposed in a workchain. The docstring will then be immediately available from the workchain's process specification and the user does not have to search the source code of the process function.
|
1.0
|
Process functions: Infer `help` of input ports from function argument docstring - The `input` method of the `ProcessSpec` allows defining a `help` message for the input port, which is very useful when defining `WorkChain` and `CalcJob` plugins. However, this API is not directly available for process functions since the process spec is inferred dynamically from the function signature. It would be possible to infer it from the docstring. This would be very useful especially when the process function gets exposed in a workchain. The docstring will then be immediately available from the workchain's process specification and the user does not have to search the source code of the process function.
|
process
|
process functions infer help of input ports from function argument docstring the input method of the processspec allows defining a help message for the input port which is very useful when defining workchain and calcjob plugins however this api is not directly available for process functions since the process spec is inferred dynamically from the function signature it would be possible to infer it from the docstring this would be very useful especially when the process function gets exposed in a workchain the docstring will then be immediately available from the workchain s process specification and the user does not have to search the source code of the process function
| 1
|
11,112
| 13,957,681,387
|
IssuesEvent
|
2020-10-24 08:07:24
|
alexanderkotsev/geoportal
|
https://api.github.com/repos/alexanderkotsev/geoportal
|
opened
|
DE: request for a new harvesting
|
DE - Germany Geoportal Harvesting process
|
Dear Geoportal Helpdesk,
As mentioned in Roberts Mail from 2020/03/02 we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the Geoportal harvesting "sandbox", please.
Also we kindly ask you again, if you could provide us two or three original csw-requests (for an internal validation/review on our side), which you are using to get the metadata records from our catalogue instance.
Thanks in advance and best regards,
Anja (on behalf of Coordination Office SDI Germany)
|
1.0
|
DE: request for a new harvesting - Dear Geoportal Helpdesk,
As mentioned in Roberts Mail from 2020/03/02 we would like to initiate a new push of our metadata records to the EU Geoportal. For this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the Geoportal harvesting "sandbox", please.
Also we kindly ask you again, if you could provide us two or three original csw-requests (for an internal validation/review on our side), which you are using to get the metadata records from our catalogue instance.
Thanks in advance and best regards,
Anja (on behalf of Coordination Office SDI Germany)
|
process
|
de request for a new harvesting dear geoportal helpdesk as mentioned in roberts mail from we would like to initiate a new push of our metadata records to the eu geoportal for this reason we kindly ask you to start a new harvesting of our catalogue instance and publish them for us in the geoportal harvesting quot sandbox quot please also we kindly ask you again if you could provide us two or three original csw requests for an internal validation review on our side which you are using to get the metadata records from our catalogue instance thanks in advance and best regards anja on behalf of coordination office sdi germany
| 1
|
3,586
| 6,621,660,753
|
IssuesEvent
|
2017-09-21 20:03:53
|
WikiWatershed/model-my-watershed
|
https://api.github.com/repos/WikiWatershed/model-my-watershed
|
closed
|
Geoprocessing API: Responses should be proper json
|
BigCZ Geoprocessing API
|
## Current Behavior
- We send completed job responses like this:
```json
{
"error": "",
"finished": "2017-08-28T21:34:44.207Z",
"job_uuid": "0f90ff96-92f6-44b8-86bb-076394390912",
"result": "{\"survey\": {\"displayName\": \"Animals\", \"name\": \"animals\", \"categories\": [{\"aeu\": 0, \"typ
e\": \"Sheep\"}, {\"aeu\": 0, \"type\": \"Horses\"}, {\"aeu\": 0, \"type\": \"Turkeys\"}, {\"aeu\": 0, \"type\": \"Ch
ickens, Layers\"}, {\"aeu\": 0, \"type\": \"Cows, Beef\"}, {\"aeu\": 0, \"type\": \"Pigs/Hogs/Swine\"}, {\"aeu\": 0,
\"type\": \"Cows, Dairy\"}, {\"aeu\": 0, \"type\": \"Chickens, Broilers\"}]}}",
"started": "2017-08-28T21:34:44.141Z",
"status": "complete"
}
```
- Errors responses like 404s come back as html
## Desired Behavior
- Result shouldn't be stringified json. It's not what our api users will expect, and it's annoying to work with.
```json
{
"error": "",
"finished": "2017-08-28T21:34:44.207Z",
"job_uuid": "0f90ff96-92f6-44b8-86bb-076394390912",
"result": {"survey": {"displayName": "Animals", "name": "animals", "categories": [{"aeu": 0, "typ
e": "...you get the idea" }]}},
"started": "2017-08-28T21:34:44.141Z",
"status": "complete"
}
```
- Error responses should come back as json
|
1.0
|
Geoprocessing API: Responses should be proper json - ## Current Behavior
- We send completed job responses like this:
```json
{
"error": "",
"finished": "2017-08-28T21:34:44.207Z",
"job_uuid": "0f90ff96-92f6-44b8-86bb-076394390912",
"result": "{\"survey\": {\"displayName\": \"Animals\", \"name\": \"animals\", \"categories\": [{\"aeu\": 0, \"typ
e\": \"Sheep\"}, {\"aeu\": 0, \"type\": \"Horses\"}, {\"aeu\": 0, \"type\": \"Turkeys\"}, {\"aeu\": 0, \"type\": \"Ch
ickens, Layers\"}, {\"aeu\": 0, \"type\": \"Cows, Beef\"}, {\"aeu\": 0, \"type\": \"Pigs/Hogs/Swine\"}, {\"aeu\": 0,
\"type\": \"Cows, Dairy\"}, {\"aeu\": 0, \"type\": \"Chickens, Broilers\"}]}}",
"started": "2017-08-28T21:34:44.141Z",
"status": "complete"
}
```
- Errors responses like 404s come back as html
## Desired Behavior
- Result shouldn't be stringified json. It's not what our api users will expect, and it's annoying to work with.
```json
{
"error": "",
"finished": "2017-08-28T21:34:44.207Z",
"job_uuid": "0f90ff96-92f6-44b8-86bb-076394390912",
"result": {"survey": {"displayName": "Animals", "name": "animals", "categories": [{"aeu": 0, "typ
e": "...you get the idea" }]}},
"started": "2017-08-28T21:34:44.141Z",
"status": "complete"
}
```
- Error responses should come back as json
|
process
|
geoprocessing api responses should be proper json current behavior we send completed job responses like this json error finished job uuid result survey displayname animals name animals categories aeu typ e sheep aeu type horses aeu type turkeys aeu type ch ickens layers aeu type cows beef aeu type pigs hogs swine aeu type cows dairy aeu type chickens broilers started status complete errors responses like come back as html desired behavior result shouldn t be stringified json it s not what our api users will expect and it s annoying to work with json error finished job uuid result survey displayname animals name animals categories aeu typ e you get the idea started status complete error responses should come back as json
| 1
|
200,371
| 15,103,572,881
|
IssuesEvent
|
2021-02-08 10:28:57
|
smapiot/piral
|
https://api.github.com/repos/smapiot/piral
|
closed
|
Parcel + Lazy loading + css does not clean up css import correctly
|
bug in-testing parcel
|
# Bug Report
For more information, see the `CONTRIBUTING` guide.
## Prerequisites
- [x] Can you reproduce the problem in a [MWE](https://en.wikipedia.org/wiki/Minimal_working_example)?
- [x] Are you running the latest version?
- [x] Did you perform a search in the issues?
## Environment Details and Version
piral 0.12.4
## Description
When building a pilet, where a shared component loads css files and two lazy-loaded components use it (but not the not-lazy ones), parcel moves the css content to be bundled with the index.js, but piral still adds a link to the dynamically loaded js files.
This only seems to happen for pilets, not for the shell. And only if the lazy-loaded component does not import any other css files.
## Steps to Reproduce
1. In a pilet create a component `Shared` that imports an (s)css file.
2. Create a component `A`, which uses `Shared`.
2. Lazy load A in your application
3. Observe:
A file `A.hash.css` is created
The file `A.hash.js` adds this file using `d.createElement("link")`
4. Create a component `B` in the same way as `A` and also lazy-load it
5. Observe:
index.css now contains the css from the shared component
`B.hash.css` is not created
`A.hash.css` will not be created anymore (check by deleting it and rebuild)
The files `A/B.hash.js` still add this their imaginary css files using `d.createElement("link")`, even though they don't exist.
## Expected behavior
The js files should no longer reference the css files, which don't exist.
## Actual behavior
css code has been moved to index, but the components still try to load their own css
## Possible Origin / Solution
As a temporary solution, I imported the `Shared` component in the index file, but that is a hacky solution.
|
1.0
|
Parcel + Lazy loading + css does not clean up css import correctly - # Bug Report
For more information, see the `CONTRIBUTING` guide.
## Prerequisites
- [x] Can you reproduce the problem in a [MWE](https://en.wikipedia.org/wiki/Minimal_working_example)?
- [x] Are you running the latest version?
- [x] Did you perform a search in the issues?
## Environment Details and Version
piral 0.12.4
## Description
When building a pilet in which a shared component loads css files and two lazy-loaded components use it (but no eagerly-loaded component does), parcel moves the css content to be bundled with the index.js, but piral still adds a css link in the dynamically loaded js files.
This only seems to happen for pilets, not for the shell. And only if the lazy-loaded component does not import any other css files.
## Steps to Reproduce
1. In a pilet create a component `Shared` that imports an (s)css file.
2. Create a component `A`, which uses `Shared`.
3. Lazy load A in your application
4. Observe:
A file `A.hash.css` is created
The file `A.hash.js` adds this file using `d.createElement("link")`
5. Create a component `B` in the same way as `A` and also lazy-load it
6. Observe:
index.css now contains the css from the shared component
`B.hash.css` is not created
`A.hash.css` will not be created anymore (check by deleting it and rebuild)
The files `A/B.hash.js` still add their now-imaginary css files using `d.createElement("link")`, even though they don't exist.
## Expected behavior
The js files should no longer reference the css files, which don't exist.
## Actual behavior
css code has been moved to index, but the components still try to load their own css
## Possible Origin / Solution
As a temporary solution, I imported the `Shared` component in the index file, but that is a hacky solution.
|
non_process
|
parcel lazy loading css does not clean up css import correctly bug report for more information see the contributing guide prerequisites can you reproduce the problem in a are you running the latest version did you perform a search in the issues environment details and version piral description when building a pilet where a shared component loads css files and two lazy loaded components use it but not the not lazy ones parcel moves the css content to be bundled with the index js but piral still adds a link to the dynamically loaded js files this only seems to happen for pilets not for the shell and only if the lazy loaded component does not import any other css files steps to reproduce in a pilet create a component shared that imports an s css file create a component a which uses shared lazy load a in your application observe a file a hash css is created the file a hash js adds this file using d createelement link create a component b in the same way as a and also lazy load it observe index css now contains the css from the shared component b hash css is not created a hash css will not be created anymore check by deleting it and rebuild the files a b hash js still add this their imaginary css files using d createelement link even though they don t exist expected behavior the js files should no longer reference the css files which don t exist actual behavior css code has been moved to index but the components still try to load their own css possible origin solution as a temporary solution i imported the shared component in the index file but that is a hacky solution
| 0
|
20,653
| 10,864,852,964
|
IssuesEvent
|
2019-11-14 17:42:51
|
orbeon/orbeon-forms
|
https://api.github.com/repos/orbeon/orbeon-forms
|
opened
|
Consider making closed sections load lazily
|
Area: Performance
|
We do this with the wizard view in two places, using `xxf:update="full"` on `xf:switch`:
- for all wizard pages (in `wizard.xbl`)
- for repeated top-level sections (in `repeater.xbl`)
Possibly, just adding `xxf:update="full"` on `xf:switch` in `section.xbl` might do the trick?
- [ ] quick check with `xxf:update="full"`
- [ ] make sure we don't have an issue with nested `xxf:update="full"` (wizard)
(Note that in Form Builder, we do more: we do not even produce the static analysis of the control tree. But we cannot do this at runtime as we need the controls to be live.)
[+1 from customer](https://3.basecamp.com/3600924/buckets/1966716/messages/2195552713#__recording_2202297109)
|
True
|
Consider making closed sections load lazily - We do this with the wizard view in two places, using `xxf:update="full"` on `xf:switch`:
- for all wizard pages (in `wizard.xbl`)
- for repeated top-level sections (in `repeater.xbl`)
Possibly, just adding `xxf:update="full"` on `xf:switch` in `section.xbl` might do the trick?
- [ ] quick check with `xxf:update="full"`
- [ ] make sure we don't have an issue with nested `xxf:update="full"` (wizard)
(Note that in Form Builder, we do more: we do not even produce the static analysis of the control tree. But we cannot do this at runtime as we need the controls to be live.)
[+1 from customer](https://3.basecamp.com/3600924/buckets/1966716/messages/2195552713#__recording_2202297109)
|
non_process
|
consider making closed sections load lazily we do this with the wizard view in two places using xxf update full on xf switch for all wizard pages in wizard xbl for repeated top level sections in repeater xbl possibly just adding xxf update full on xf switch in section xbl might do the trick quick check with xxf update full make sure we don t have an issue with nested xxf update full wizard note that in form builder we do more we do not even produce the static analysis of the control tree but we cannot do this at runtime as we need the controls to be live
| 0
|
3,814
| 6,797,594,275
|
IssuesEvent
|
2017-11-01 23:46:58
|
pwittchen/ReactiveSensors
|
https://api.github.com/repos/pwittchen/ReactiveSensors
|
closed
|
Release 0.2.0
|
release process
|
**Initial release notes**:
- returning error through Rx to make the error handling easier - issue #29
- creating sensor observable with the ability to specify handler - PR #26
- updated project dependencies
- migrated library to RxJava2.x on `RxJava2.x` branch
- kept backward compatibility with RxJava1.x on `RxJava1.x` branch
- removed `master` branch
**Things to do**:
- Branch `RxJava1.x`
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
- Branch `RxJava2.x`
- [x] update artifact name
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
1.0
|
Release 0.2.0 - **Initial release notes**:
- returning error through Rx to make the error handling easier - issue #29
- creating sensor observable with the ability to specify handler - PR #26
- updated project dependencies
- migrated library to RxJava2.x on `RxJava2.x` branch
- kept backward compatibility with RxJava1.x on `RxJava1.x` branch
- removed `master` branch
**Things to do**:
- Branch `RxJava1.x`
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
- Branch `RxJava2.x`
- [x] update artifact name
- [x] update JavaDoc on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] create new GitHub release
|
process
|
release initial release notes returning error through rx to make the error handling easier issue creating sensor observable with the ability to specify handler pr updated project dependencies migrated library to x on x branch kept backward compatibility with x on x branch removed master branch things to do branch x update javadoc on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release branch x update artifact name update javadoc on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md create new github release
| 1
|
9,112
| 12,193,274,975
|
IssuesEvent
|
2020-04-29 14:11:05
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
Calling onnx export hangs using multiprocessing
|
high priority module: multiprocessing module: onnx triaged
|
## 🐛 Bug
Calling `torch.onnx.export` in a parent and a child process using `multiprocessing` hangs on Linux. This behavior occurs both with the nightly and latest stable version of PyTorch.
## To Reproduce
Steps to reproduce the behavior:
`torch_export_bug.py`
```python
import torch
import multiprocessing
def _get_model():
class ExampleNet(torch.nn.Module):
def __init__(self):
super(ExampleNet, self).__init__()
self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=5, padding=0)
self.fc1 = torch.nn.Linear(16 * 12 * 12, 100)
self.fc2 = torch.nn.Linear(
100, 2
) # For binary classification, final layer needs only 2 outputs
def forward(self, x):
out = self.conv1(x)
out = torch.nn.functional.relu(out)
out = torch.nn.functional.max_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.fc1(out)
out = torch.nn.functional.relu(out)
out = self.fc2(out)
return out
dummy_input = torch.empty(1, 1, 28, 28)
example_net = ExampleNet()
torch.onnx.export(
example_net,
dummy_input,
"model.onnx",
do_constant_folding=False,
export_params=True,
input_names=["input"],
output_names=["output"],
enable_onnx_checker=False,
)
return None
def proc():
print("\tGetting model inside proc")
# it blocks here only when we have called crypten.nn.from_pytorch in the parent process
model = _get_model()
print("\tGot model inside proc")
return model
print("[+] Start")
# it doesn't block if we call this multiple times inside the same process
model = _get_model()
print("[+] Got model")
process = multiprocessing.Process(target=proc, args=())
print("[+] Starting process")
process.start()
print("[+] Waiting process")
process.join()
print("[+] End")
```
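A workaround often suggested for fork-related hangs of this kind is to start the child with the `spawn` method, so it begins from a fresh interpreter instead of fork-inheriting parent state (such as locks held by library internals). The sketch below shows only the pattern, with a stand-in function in place of the torch calls, since the exact cause of this hang is unconfirmed:

```python
import multiprocessing as mp

def work(queue):
    # Stand-in for the _get_model() call from the report; in the real
    # code this is where torch.onnx.export would run.
    queue.put("exported")

def export_in_child():
    # "spawn" starts the child from a fresh interpreter instead of
    # fork-inheriting the parent's (possibly lock-holding) state.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    proc = ctx.Process(target=work, args=(queue,))
    proc.start()
    result = queue.get(timeout=30)  # avoid hanging forever if the child dies
    proc.join()
    return result

if __name__ == "__main__":
    print(export_in_child())
```

Note that with `spawn`, everything the child needs must be picklable or re-importable, and an `if __name__ == "__main__":` guard around process creation becomes mandatory.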
## Environment
Dockerfile:
```bash
FROM python:3.7.7
COPY torch_export_bug.py torch_export_bug.py
RUN pip install numpy
RUN pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
```
```
root@db3bb57fd765:/# pip freeze
future==0.18.2
numpy==1.18.2
Pillow==7.1.1
torch==1.6.0.dev20200407+cpu
torchvision==0.6.0.dev20200407+cpu
```
cc @ezyang @gchanan @zou3519 @suo @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
1.0
|
Calling onnx export hangs using multiprocessing - ## 🐛 Bug
Calling `torch.onnx.export` in a parent and a child process using `multiprocessing` hangs on Linux. This behavior occurs both with the nightly and latest stable version of PyTorch.
## To Reproduce
Steps to reproduce the behavior:
`torch_export_bug.py`
```python
import torch
import multiprocessing
def _get_model():
class ExampleNet(torch.nn.Module):
def __init__(self):
super(ExampleNet, self).__init__()
self.conv1 = torch.nn.Conv2d(1, 16, kernel_size=5, padding=0)
self.fc1 = torch.nn.Linear(16 * 12 * 12, 100)
self.fc2 = torch.nn.Linear(
100, 2
) # For binary classification, final layer needs only 2 outputs
def forward(self, x):
out = self.conv1(x)
out = torch.nn.functional.relu(out)
out = torch.nn.functional.max_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.fc1(out)
out = torch.nn.functional.relu(out)
out = self.fc2(out)
return out
dummy_input = torch.empty(1, 1, 28, 28)
example_net = ExampleNet()
torch.onnx.export(
example_net,
dummy_input,
"model.onnx",
do_constant_folding=False,
export_params=True,
input_names=["input"],
output_names=["output"],
enable_onnx_checker=False,
)
return None
def proc():
print("\tGetting model inside proc")
# it blocks here only when we have called crypten.nn.from_pytorch in the parent process
model = _get_model()
print("\tGot model inside proc")
return model
print("[+] Start")
# it doesn't block if we call this multiple times inside the same process
model = _get_model()
print("[+] Got model")
process = multiprocessing.Process(target=proc, args=())
print("[+] Starting process")
process.start()
print("[+] Waiting process")
process.join()
print("[+] End")
```
## Environment
Dockerfile:
```bash
FROM python:3.7.7
COPY torch_export_bug.py torch_export_bug.py
RUN pip install numpy
RUN pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
```
```
root@db3bb57fd765:/# pip freeze
future==0.18.2
numpy==1.18.2
Pillow==7.1.1
torch==1.6.0.dev20200407+cpu
torchvision==0.6.0.dev20200407+cpu
```
cc @ezyang @gchanan @zou3519 @suo @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof
|
process
|
calling onnx export hangs using multiprocessing 🐛 bug calling torch onnx export in a parent and a child process using multiprocessing hangs on linux this behavior occurs both with the nightly and latest stable version of pytorch to reproduce steps to reproduce the behavior torch export bug py python import torch import multiprocessing def get model class examplenet torch nn module def init self super examplenet self init self torch nn kernel size padding self torch nn linear self torch nn linear for binary classification final layer needs only outputs def forward self x out self x out torch nn functional relu out out torch nn functional max out out out view out size out self out out torch nn functional relu out out self out return out dummy input torch empty example net examplenet torch onnx export example net dummy input model onnx do constant folding false export params true input names output names enable onnx checker false return none def proc print tgetting model inside proc it blocks here only when we have called crypten nn from pytorch in the parent process model get model print tgot model inside proc return model print start it doesn t block if we call this multiple times inside the same process model get model print got model process multiprocessing process target proc args print starting process process start print waiting process process join print end environment dockerfile bash from python copy torch export bug py torch export bug py run pip install numpy run pip install pre torch torchvision f root pip freeze future numpy pillow torch cpu torchvision cpu cc ezyang gchanan suo houseroad spandantiwari lara hdr bowenbao neginraoof
| 1
|
438,822
| 12,652,065,422
|
IssuesEvent
|
2020-06-17 02:23:26
|
eBay/ebayui-core
|
https://api.github.com/repos/eBay/ebayui-core
|
closed
|
Dialog: close button issues in IE & Edge
|
component: dialog priority: 3 regression: no resolution: won't fix severity: non-blocker status: backlog type: bug
|
<!-- Delete any sections below that are not relevant. -->
# Bug Report
## eBayUI Version: 2.6.0-2 and older versions and skin 7.3.2
## Description
1) On left and right panel scrollable dialogs,
the close button has to be clicked twice to close the dialog.
The first click removes focus and the second click closes the dialog.
2) On left and right panel dialogs,
the close button first appears on one side, then disappears and re-appears on the other end.
will post the screenshots later
## Workaround
<!-- Is there a known workaround? If so, what is it? -->
## Screenshots
<!-- Upload screenshots if appropriate. -->
**issue 1:**
before close icon click

after close icon first click

it loses focus; only the next click closes the dialog
**issue 2:**
after opening a left panel dialog

immediately the close icon moves to left

|
1.0
|
Dialog: close button issues in IE & Edge - <!-- Delete any sections below that are not relevant. -->
# Bug Report
## eBayUI Version: 2.6.0-2 and older versions and skin 7.3.2
## Description
1) On left and right panel scrollable dialogs,
the close button has to be clicked twice to close the dialog.
The first click removes focus and the second click closes the dialog.
2) On left and right panel dialogs,
the close button first appears on one side, then disappears and re-appears on the other end.
will post the screenshots later
## Workaround
<!-- Is there a known workaround? If so, what is it? -->
## Screenshots
<!-- Upload screenshots if appropriate. -->
**issue 1:**
before close icon click

after close icon first click

it loses focus; only the next click closes the dialog
**issue 2:**
after opening a left panel dialog

immediately the close icon moves to left

|
non_process
|
dialog close button issues in ie edge bug report ebayui version and older versions and skin description on left and right panel scrollable dialog the close button has to be clicked twice to close the dialog click removes focus and second click closed the dialog on left and right panel dialog the close button first appears onone side then disappears and re appears on the other end will post the screenshots later workaround screenshots issue befoe close icon click after close icon first click it looses focus only the next click closes the dialog issue after opening a left panel dialog immediately the close icon moves to left
| 0
|
436,587
| 12,550,981,257
|
IssuesEvent
|
2020-06-06 13:10:02
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
opened
|
Synthesis failed for Language
|
api: language autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate Language. :broken_heart:
Here's the output from running `synth.py`:
```
2020-06-06 06:09:19,284 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-06-06 06:09:20,086 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-06-06 06:09:20,090 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-06-06 06:09:20,093 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-06-06 06:09:20,096 autosynth [DEBUG] > Running: git config push.default simple
2020-06-06 06:09:20,099 autosynth [DEBUG] > Running: git branch -f autosynth-language
2020-06-06 06:09:20,102 autosynth [DEBUG] > Running: git checkout autosynth-language
Switched to branch 'autosynth-language'
2020-06-06 06:09:20,349 autosynth [INFO] > Running synthtool
2020-06-06 06:09:20,349 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/language/synth.metadata', 'synth.py', '--']
2020-06-06 06:09:20,351 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/language/synth.metadata synth.py -- Language
tee: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api: Is a directory
2020-06-06 06:09:20,589 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-language
nothing to commit, working tree clean
2020-06-06 06:09:22,512 synthtool [DEBUG] > Running: docker run --rm -v/tmpfs/tmp/tmpkj2jtcl6/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Language
DEBUG:synthtool:Running: docker run --rm -v/tmpfs/tmp/tmpkj2jtcl6/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Language
/workspace /workspace
The mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.
Resolving Hex dependencies...
Dependency resolution completed:
Unchanged:
certifi 2.5.1
google_api_discovery 0.7.0
google_gax 0.3.2
hackney 1.15.2
idna 6.0.0
jason 1.2.1
metrics 1.0.1
mime 1.3.1
mimerl 1.2.0
oauth2 0.9.4
parse_trans 3.3.0
poison 3.1.0
ssl_verify_fun 1.1.5
temp 0.4.7
tesla 1.3.3
unicode_util_compat 0.4.1
* Getting google_api_discovery (Hex package)
* Getting tesla (Hex package)
* Getting oauth2 (Hex package)
* Getting temp (Hex package)
* Getting jason (Hex package)
* Getting poison (Hex package)
* Getting hackney (Hex package)
* Getting certifi (Hex package)
* Getting idna (Hex package)
* Getting metrics (Hex package)
* Getting mimerl (Hex package)
* Getting ssl_verify_fun (Hex package)
* Getting unicode_util_compat (Hex package)
* Getting parse_trans (Hex package)
* Getting mime (Hex package)
* Getting google_gax (Hex package)
The mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.
==> temp
Compiling 3 files (.ex)
Generated temp app
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> jason
Compiling 8 files (.ex)
Generated jason app
warning: String.strip/1 is deprecated. Use String.trim/1 instead
/workspace/deps/poison/mix.exs:4
==> poison
Compiling 4 files (.ex)
warning: Integer.to_char_list/2 is deprecated. Use Integer.to_charlist/2 instead
lib/poison/encoder.ex:173
Generated poison app
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> oauth2
Compiling 13 files (.ex)
Generated oauth2 app
==> mime
Compiling 2 files (.ex)
Generated mime app
==> tesla
Compiling 26 files (.ex)
Generated tesla app
==> google_gax
Compiling 5 files (.ex)
Generated google_gax app
==> google_api_discovery
Compiling 21 files (.ex)
Generated google_api_discovery app
==> google_apis
Compiling 27 files (.ex)
warning: System.cwd/0 is deprecated. Use File.cwd/0 instead
lib/google_apis/publisher.ex:24
Generated google_apis app
13:09:56.640 [info] FETCHING: https://language.googleapis.com/$discovery/GOOGLE_REST_SIMPLE_URI?version=v1
13:09:56.763 [info] FETCHING: https://language.googleapis.com/$discovery/rest?version=v1
13:09:56.773 [info] FOUND: https://language.googleapis.com/$discovery/rest?version=v1
Revision check: old=20200502, new=20200530, generating=true
Creating leading directories
Writing AnalyzeEntitiesRequest to clients/language/lib/google_api/language/v1/model/analyze_entities_request.ex.
Writing AnalyzeEntitiesResponse to clients/language/lib/google_api/language/v1/model/analyze_entities_response.ex.
Writing AnalyzeEntitySentimentRequest to clients/language/lib/google_api/language/v1/model/analyze_entity_sentiment_request.ex.
Writing AnalyzeEntitySentimentResponse to clients/language/lib/google_api/language/v1/model/analyze_entity_sentiment_response.ex.
Writing AnalyzeSentimentRequest to clients/language/lib/google_api/language/v1/model/analyze_sentiment_request.ex.
Writing AnalyzeSentimentResponse to clients/language/lib/google_api/language/v1/model/analyze_sentiment_response.ex.
Writing AnalyzeSyntaxRequest to clients/language/lib/google_api/language/v1/model/analyze_syntax_request.ex.
Writing AnalyzeSyntaxResponse to clients/language/lib/google_api/language/v1/model/analyze_syntax_response.ex.
Writing AnnotateTextRequest to clients/language/lib/google_api/language/v1/model/annotate_text_request.ex.
Writing AnnotateTextResponse to clients/language/lib/google_api/language/v1/model/annotate_text_response.ex.
Writing ClassificationCategory to clients/language/lib/google_api/language/v1/model/classification_category.ex.
Writing ClassifyTextRequest to clients/language/lib/google_api/language/v1/model/classify_text_request.ex.
Writing ClassifyTextResponse to clients/language/lib/google_api/language/v1/model/classify_text_response.ex.
Writing DependencyEdge to clients/language/lib/google_api/language/v1/model/dependency_edge.ex.
Writing Document to clients/language/lib/google_api/language/v1/model/document.ex.
Writing Entity to clients/language/lib/google_api/language/v1/model/entity.ex.
Writing EntityMention to clients/language/lib/google_api/language/v1/model/entity_mention.ex.
Writing Features to clients/language/lib/google_api/language/v1/model/features.ex.
Writing PartOfSpeech to clients/language/lib/google_api/language/v1/model/part_of_speech.ex.
Writing Sentence to clients/language/lib/google_api/language/v1/model/sentence.ex.
Writing Sentiment to clients/language/lib/google_api/language/v1/model/sentiment.ex.
Writing Status to clients/language/lib/google_api/language/v1/model/status.ex.
Writing TextSpan to clients/language/lib/google_api/language/v1/model/text_span.ex.
Writing Token to clients/language/lib/google_api/language/v1/model/token.ex.
Writing Documents to clients/language/lib/google_api/language/v1/api/documents.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
13:09:57.186 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 06:10:00,468 synthtool [DEBUG] > Wrote metadata to clients/language/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/language/synth.metadata.
2020-06-06 06:10:00,497 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
|
1.0
|
Synthesis failed for Language - Hello! Autosynth couldn't regenerate Language. :broken_heart:
Here's the output from running `synth.py`:
```
2020-06-06 06:09:19,284 autosynth [INFO] > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-06-06 06:09:20,086 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2020-06-06 06:09:20,090 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2020-06-06 06:09:20,093 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2020-06-06 06:09:20,096 autosynth [DEBUG] > Running: git config push.default simple
2020-06-06 06:09:20,099 autosynth [DEBUG] > Running: git branch -f autosynth-language
2020-06-06 06:09:20,102 autosynth [DEBUG] > Running: git checkout autosynth-language
Switched to branch 'autosynth-language'
2020-06-06 06:09:20,349 autosynth [INFO] > Running synthtool
2020-06-06 06:09:20,349 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/language/synth.metadata', 'synth.py', '--']
2020-06-06 06:09:20,351 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata clients/language/synth.metadata synth.py -- Language
tee: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api: Is a directory
2020-06-06 06:09:20,589 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-language
nothing to commit, working tree clean
2020-06-06 06:09:22,512 synthtool [DEBUG] > Running: docker run --rm -v/tmpfs/tmp/tmpkj2jtcl6/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Language
DEBUG:synthtool:Running: docker run --rm -v/tmpfs/tmp/tmpkj2jtcl6/repo:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Language
/workspace /workspace
The mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.
Resolving Hex dependencies...
Dependency resolution completed:
Unchanged:
certifi 2.5.1
google_api_discovery 0.7.0
google_gax 0.3.2
hackney 1.15.2
idna 6.0.0
jason 1.2.1
metrics 1.0.1
mime 1.3.1
mimerl 1.2.0
oauth2 0.9.4
parse_trans 3.3.0
poison 3.1.0
ssl_verify_fun 1.1.5
temp 0.4.7
tesla 1.3.3
unicode_util_compat 0.4.1
* Getting google_api_discovery (Hex package)
* Getting tesla (Hex package)
* Getting oauth2 (Hex package)
* Getting temp (Hex package)
* Getting jason (Hex package)
* Getting poison (Hex package)
* Getting hackney (Hex package)
* Getting certifi (Hex package)
* Getting idna (Hex package)
* Getting metrics (Hex package)
* Getting mimerl (Hex package)
* Getting ssl_verify_fun (Hex package)
* Getting unicode_util_compat (Hex package)
* Getting parse_trans (Hex package)
* Getting mime (Hex package)
* Getting google_gax (Hex package)
The mix.lock file was generated with a newer version of Hex. Update your client by running `mix local.hex` to avoid losing data.
==> temp
Compiling 3 files (.ex)
Generated temp app
===> Compiling parse_trans
===> Compiling mimerl
===> Compiling metrics
===> Compiling unicode_util_compat
===> Compiling idna
==> jason
Compiling 8 files (.ex)
Generated jason app
warning: String.strip/1 is deprecated. Use String.trim/1 instead
/workspace/deps/poison/mix.exs:4
==> poison
Compiling 4 files (.ex)
warning: Integer.to_char_list/2 is deprecated. Use Integer.to_charlist/2 instead
lib/poison/encoder.ex:173
Generated poison app
==> ssl_verify_fun
Compiling 7 files (.erl)
Generated ssl_verify_fun app
===> Compiling certifi
===> Compiling hackney
==> oauth2
Compiling 13 files (.ex)
Generated oauth2 app
==> mime
Compiling 2 files (.ex)
Generated mime app
==> tesla
Compiling 26 files (.ex)
Generated tesla app
==> google_gax
Compiling 5 files (.ex)
Generated google_gax app
==> google_api_discovery
Compiling 21 files (.ex)
Generated google_api_discovery app
==> google_apis
Compiling 27 files (.ex)
warning: System.cwd/0 is deprecated. Use File.cwd/0 instead
lib/google_apis/publisher.ex:24
Generated google_apis app
13:09:56.640 [info] FETCHING: https://language.googleapis.com/$discovery/GOOGLE_REST_SIMPLE_URI?version=v1
13:09:56.763 [info] FETCHING: https://language.googleapis.com/$discovery/rest?version=v1
13:09:56.773 [info] FOUND: https://language.googleapis.com/$discovery/rest?version=v1
Revision check: old=20200502, new=20200530, generating=true
Creating leading directories
Writing AnalyzeEntitiesRequest to clients/language/lib/google_api/language/v1/model/analyze_entities_request.ex.
Writing AnalyzeEntitiesResponse to clients/language/lib/google_api/language/v1/model/analyze_entities_response.ex.
Writing AnalyzeEntitySentimentRequest to clients/language/lib/google_api/language/v1/model/analyze_entity_sentiment_request.ex.
Writing AnalyzeEntitySentimentResponse to clients/language/lib/google_api/language/v1/model/analyze_entity_sentiment_response.ex.
Writing AnalyzeSentimentRequest to clients/language/lib/google_api/language/v1/model/analyze_sentiment_request.ex.
Writing AnalyzeSentimentResponse to clients/language/lib/google_api/language/v1/model/analyze_sentiment_response.ex.
Writing AnalyzeSyntaxRequest to clients/language/lib/google_api/language/v1/model/analyze_syntax_request.ex.
Writing AnalyzeSyntaxResponse to clients/language/lib/google_api/language/v1/model/analyze_syntax_response.ex.
Writing AnnotateTextRequest to clients/language/lib/google_api/language/v1/model/annotate_text_request.ex.
Writing AnnotateTextResponse to clients/language/lib/google_api/language/v1/model/annotate_text_response.ex.
Writing ClassificationCategory to clients/language/lib/google_api/language/v1/model/classification_category.ex.
Writing ClassifyTextRequest to clients/language/lib/google_api/language/v1/model/classify_text_request.ex.
Writing ClassifyTextResponse to clients/language/lib/google_api/language/v1/model/classify_text_response.ex.
Writing DependencyEdge to clients/language/lib/google_api/language/v1/model/dependency_edge.ex.
Writing Document to clients/language/lib/google_api/language/v1/model/document.ex.
Writing Entity to clients/language/lib/google_api/language/v1/model/entity.ex.
Writing EntityMention to clients/language/lib/google_api/language/v1/model/entity_mention.ex.
Writing Features to clients/language/lib/google_api/language/v1/model/features.ex.
Writing PartOfSpeech to clients/language/lib/google_api/language/v1/model/part_of_speech.ex.
Writing Sentence to clients/language/lib/google_api/language/v1/model/sentence.ex.
Writing Sentiment to clients/language/lib/google_api/language/v1/model/sentiment.ex.
Writing Status to clients/language/lib/google_api/language/v1/model/status.ex.
Writing TextSpan to clients/language/lib/google_api/language/v1/model/text_span.ex.
Writing Token to clients/language/lib/google_api/language/v1/model/token.ex.
Writing Documents to clients/language/lib/google_api/language/v1/api/documents.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
13:09:57.186 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
2020-06-06 06:10:00,468 synthtool [DEBUG] > Wrote metadata to clients/language/synth.metadata.
DEBUG:synthtool:Wrote metadata to clients/language/synth.metadata.
2020-06-06 06:10:00,497 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 555, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 121, in synthesize
with open(log_file_path, "rt") as fp:
IsADirectoryError: [Errno 21] Is a directory: '/tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api'
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/567e2ab8-5e4f-4fb0-8bae-9d0cc90aa1af/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
|
non_process
|
synthesis failed for language hello autosynth couldn t regenerate language broken heart here s the output from running synth py autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth running git config global core excludesfile home kbuilder autosynth gitignore autosynth running git config user name yoshi automation autosynth running git config user email yoshi automation google com autosynth running git config push default simple autosynth running git branch f autosynth language autosynth running git checkout autosynth language switched to branch autosynth language autosynth running synthtool autosynth autosynth running tmpfs src github synthtool env bin m synthtool metadata clients language synth metadata synth py language tee tmpfs src github synthtool logs googleapis elixir google api is a directory synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth language nothing to commit working tree clean synthtool running docker run rm v tmpfs tmp repo workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh language debug synthtool running docker run rm v tmpfs tmp repo workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh language workspace workspace mix lock file was generated with a newer version of hex update your client by running mix local hex to avoid losing data resolving hex dependencies dependency resolution completed unchanged certifi google api discovery google gax hackney idna jason metrics mime mimerl parse trans poison ssl verify fun temp tesla unicode util compat getting google api discovery hex package getting tesla hex package getting hex package getting temp hex package getting jason hex package getting poison hex package getting hackney hex package getting certifi hex package getting idna 
hex package getting metrics hex package getting mimerl hex package getting ssl verify fun hex package getting unicode util compat hex package getting parse trans hex package getting mime hex package getting google gax hex package mix lock file was generated with a newer version of hex update your client by running mix local hex to avoid losing data temp compiling files ex generated temp app compiling parse trans compiling mimerl compiling metrics compiling unicode util compat compiling idna jason compiling files ex generated jason app warning string strip is deprecated use string trim instead workspace deps poison mix exs poison compiling files ex warning integer to char list is deprecated use integer to charlist instead lib poison encoder ex generated poison app ssl verify fun compiling files erl generated ssl verify fun app compiling certifi compiling hackney compiling files ex generated app mime compiling files ex generated mime app tesla compiling files ex generated tesla app google gax compiling files ex generated google gax app google api discovery compiling files ex generated google api discovery app google apis compiling files ex warning system cwd is deprecated use file cwd instead lib google apis publisher ex generated google apis app fetching fetching found revision check old new generating true creating leading directories writing analyzeentitiesrequest to clients language lib google api language model analyze entities request ex writing analyzeentitiesresponse to clients language lib google api language model analyze entities response ex writing analyzeentitysentimentrequest to clients language lib google api language model analyze entity sentiment request ex writing analyzeentitysentimentresponse to clients language lib google api language model analyze entity sentiment response ex writing analyzesentimentrequest to clients language lib google api language model analyze sentiment request ex writing analyzesentimentresponse to clients language lib 
google api language model analyze sentiment response ex writing analyzesyntaxrequest to clients language lib google api language model analyze syntax request ex writing analyzesyntaxresponse to clients language lib google api language model analyze syntax response ex writing annotatetextrequest to clients language lib google api language model annotate text request ex writing annotatetextresponse to clients language lib google api language model annotate text response ex writing classificationcategory to clients language lib google api language model classification category ex writing classifytextrequest to clients language lib google api language model classify text request ex writing classifytextresponse to clients language lib google api language model classify text response ex writing dependencyedge to clients language lib google api language model dependency edge ex writing document to clients language lib google api language model document ex writing entity to clients language lib google api language model entity ex writing entitymention to clients language lib google api language model entity mention ex writing features to clients language lib google api language model features ex writing partofspeech to clients language lib google api language model part of speech ex writing sentence to clients language lib google api language model sentence ex writing sentiment to clients language lib google api language model sentiment ex writing status to clients language lib google api language model status ex writing textspan to clients language lib google api language model text span ex writing token to clients language lib google api language model token ex writing documents to clients language lib google api language api documents ex writing connection ex writing metadata ex writing mix exs writing readme md writing license writing gitignore writing config config exs writing test test helper exs found only discovery revision and or formatting changes not significant 
enough for a pr fixing file permissions synthtool wrote metadata to clients language synth metadata debug synthtool wrote metadata to clients language synth metadata autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize with open log file path rt as fp isadirectoryerror is a directory tmpfs src github synthtool logs googleapis elixir google api google internal developers can see the full log
| 0
|
240,909
| 20,100,626,795
|
IssuesEvent
|
2022-02-07 03:20:55
|
edh-git/EL_Display_Hub
|
https://api.github.com/repos/edh-git/EL_Display_Hub
|
closed
|
USB ???? ?? ?????? ??? ? ?? win11
|
QA Test
|
Product:EasyCanvas<br><br>Device OS:iOS<br><br>e-mail:smkim@devguru.co.kr 4.6.2.0<br><br>USB ???? ?? ?????? ??? ? ?? win11<br><br>[EL_Display_919c8672-06ca-48ca-af11-90376a7302b3.zip](https://github.com/edh-git/EL_Display_Hub/blob/main/EL_Display_919c8672-06ca-48ca-af11-90376a7302b3.zip)
|
1.0
|
USB ???? ?? ?????? ??? ? ?? win11 - Product:EasyCanvas<br><br>Device OS:iOS<br><br>e-mail:smkim@devguru.co.kr 4.6.2.0<br><br>USB ???? ?? ?????? ??? ? ?? win11<br><br>[EL_Display_919c8672-06ca-48ca-af11-90376a7302b3.zip](https://github.com/edh-git/EL_Display_Hub/blob/main/EL_Display_919c8672-06ca-48ca-af11-90376a7302b3.zip)
|
non_process
|
usb product easycanvas device os ios e mail smkim devguru co kr usb
| 0
|
11,053
| 13,888,692,097
|
IssuesEvent
|
2020-10-19 06:45:13
|
fluent/fluent-bit
|
https://api.github.com/repos/fluent/fluent-bit
|
closed
|
Add S3 bucket Output plugin
|
work-in-process
|
Feature:
I have always wanted to push my logs to aws S3 bucket directly.
Will it be possible for us to have an output plugin that will push the logs to s3 bucket real time which will create a file based on daily or weekly log file. Similar to pushing logs to elasticsearch.
|
1.0
|
Add S3 bucket Output plugin - Feature:
I have always wanted to push my logs to aws S3 bucket directly.
Will it be possible for us to have an output plugin that will push the logs to s3 bucket real time which will create a file based on daily or weekly log file. Similar to pushing logs to elasticsearch.
|
process
|
add bucket output plugin feature i have always wanted to push my logs to aws bucket directly will it be possible for us to have an output plugin that will push the logs to bucket real time which will create a file based on daily or weekly log file similar to pushing logs to elasticsearch
| 1
|
13,657
| 16,370,581,633
|
IssuesEvent
|
2021-05-15 02:56:12
|
googleapis/sphinx-docfx-yaml
|
https://api.github.com/repos/googleapis/sphinx-docfx-yaml
|
closed
|
Enable tests using GitHub Actions
|
priority: p1 type: process
|
Add a testing infrastructure for existing codebase. Preferably using Kokoro.
|
1.0
|
Enable tests using GitHub Actions - Add a testing infrastructure for existing codebase. Preferably using Kokoro.
|
process
|
enable tests using github actions add a testing infrastructure for existing codebase preferably using kokoro
| 1
|
14,870
| 18,280,250,872
|
IssuesEvent
|
2021-10-05 01:41:49
|
quark-engine/quark-engine
|
https://api.github.com/repos/quark-engine/quark-engine
|
closed
|
Rule does not match when no spaces between arguments in the Java signature of the descriptor
|
work-in-progress issue-processing-state-06
|
I have written a _custom Quark rule_, and was _expecting to have 100% confidence_ for this rule. **I only get 40%**.
_Update: see next comment for important info._
My malicious sample does this:
```java
import com.esotericsoftware.kryonet.Client;
import java.net.InetAddress;
...
Controller.u = new Client();
Controller.u.getKryo().register(String.class);
Controller.u.setTimeout(30000);
Controller.u.start();
Controller.u.connect(30000, InetAddress.getByName(Controller.host), Controller.port);
```
I want to write a rule that catches this combination
1. `InetAddress.getByName`
2. `com.esotericsoftware.kryonet.Client.connect`
This is what I wrote in `00154.json`:
```json
{
"crime": "Connect hostname to TCP or UDP socket using KryoNet",
"x1_permission": [],
"x2n3n4_comb": [
{
"class": "Ljava/net/InetAddress;",
"method": "getByName",
"descriptor": "(Ljava/lang/String;)Ljava/net/InetAddress;"
},
{
"class": "Lcom/esotericsoftware/kryonet/Client;",
"method": "connect",
"descriptor": "(ILjava/net/InetAddress;I)V"
}
],
"yscore": 1,
"label": ["socket"]
}
```
To my understanding, given what the malware does, I should have **100% confidence** as it does both in sequence, and the tainted analysis should also match.
But when I run it (with Quark v21.02.1), **I only get 40%.**

If I ask for details, I see that it matches only up to the first API (`getByName`) but **does not match `connect`.**
```java
public void connect (int timeout, String host, int tcpPort) throws IOException {
connect(timeout, InetAddress.getByName(host), tcpPort, -1);
}
```
[See here the source code of `com.esotericsoftware.kryonet.Client`](https://github.com/EsotericSoftware/kryonet/blob/master/src/com/esotericsoftware/kryonet/Client.java).

**Desktop (please complete the following information):**
- OS: Linux Debian
- Version: v21.02.1
Malicious sample: `f82d6f24af2a4444c696c64060582d8aed6280da578c4dea3bb71bd6a11ebcf8`
You can download it from [Koodous](https://koodous.com/apks?search=f82d6f24af2a4444c696c64060582d8aed6280da578c4dea3bb71bd6a11ebcf8), or VirusTotal etc.
|
1.0
|
Rule does not match when no spaces between arguments in the Java signature of the descriptor - I have written a _custom Quark rule_, and was _expecting to have 100% confidence_ for this rule. **I only get 40%**.
_Update: see next comment for important info._
My malicious sample does this:
```java
import com.esotericsoftware.kryonet.Client;
import java.net.InetAddress;
...
Controller.u = new Client();
Controller.u.getKryo().register(String.class);
Controller.u.setTimeout(30000);
Controller.u.start();
Controller.u.connect(30000, InetAddress.getByName(Controller.host), Controller.port);
```
I want to write a rule that catches this combination
1. `InetAddress.getByName`
2. `com.esotericsoftware.kryonet.Client.connect`
This is what I wrote in `00154.json`:
```json
{
"crime": "Connect hostname to TCP or UDP socket using KryoNet",
"x1_permission": [],
"x2n3n4_comb": [
{
"class": "Ljava/net/InetAddress;",
"method": "getByName",
"descriptor": "(Ljava/lang/String;)Ljava/net/InetAddress;"
},
{
"class": "Lcom/esotericsoftware/kryonet/Client;",
"method": "connect",
"descriptor": "(ILjava/net/InetAddress;I)V"
}
],
"yscore": 1,
"label": ["socket"]
}
```
To my understanding, given what the malware does, I should have **100% confidence** as it does both in sequence, and the tainted analysis should also match.
But when I run it (with Quark v21.02.1), **I only get 40%.**

If I ask for details, I see that it matches only up to the first API (`getByName`) but **does not match `connect`.**
```java
public void connect (int timeout, String host, int tcpPort) throws IOException {
connect(timeout, InetAddress.getByName(host), tcpPort, -1);
}
```
[See here the source code of `com.esotericsoftware.kryonet.Client`](https://github.com/EsotericSoftware/kryonet/blob/master/src/com/esotericsoftware/kryonet/Client.java).

**Desktop (please complete the following information):**
- OS: Linux Debian
- Version: v21.02.1
Malicious sample: `f82d6f24af2a4444c696c64060582d8aed6280da578c4dea3bb71bd6a11ebcf8`
You can download it from [Koodous](https://koodous.com/apks?search=f82d6f24af2a4444c696c64060582d8aed6280da578c4dea3bb71bd6a11ebcf8), or VirusTotal etc.
|
process
|
rule does not match when no spaces between arguments in the java signature of the descriptor i have written a custom quark rule and was expecting to have confidence for this rule i only get update see next comment for important info my malicious sample does this java import com esotericsoftware kryonet client import java net inetaddress controller u new client controller u getkryo register string class controller u settimeout controller u start controller u connect inetaddress getbyname controller host controller port i want to write a rule that catches this combination inetaddress getbyname com esotericsoftware kryonet client connect this is what i wrote in json json crime connect hostname to tcp or udp socket using kryonet permission comb class ljava net inetaddress method getbyname descriptor ljava lang string ljava net inetaddress class lcom esotericsoftware kryonet client method connect descriptor iljava net inetaddress i v yscore label to my understanding given what the malware does i should have confidence as it does both in sequence and the tainted analysis should also match but when i run it with quark i only get if i ask for details i see that it matches only up to the first api getbyname but does not match connect java public void connect int timeout string host int tcpport throws ioexception connect timeout inetaddress getbyname host tcpport desktop please complete the following information os linux debian version malicious sample you can download it from or virustotal etc
| 1
|
315,543
| 9,622,049,655
|
IssuesEvent
|
2019-05-14 12:11:29
|
inverse-inc/packetfence
|
https://api.github.com/repos/inverse-inc/packetfence
|
opened
|
Embedded doc not available
|
Priority: Medium Type: Bug os specific: RHEL / CentOS
|
On CentOS nightly builds (old and new GUI), `doc` is missing from `$PF/html/pfappserver/root/static/` causing embedded doc to be unavailable.
|
1.0
|
Embedded doc not available - On CentOS nightly builds (old and new GUI), `doc` is missing from `$PF/html/pfappserver/root/static/` causing embedded doc to be unavailable.
|
non_process
|
embedded doc not available on centos nightly builds old and new gui doc is missing from pf html pfappserver root static causing embedded doc to be unavailable
| 0
|
283,076
| 30,889,566,331
|
IssuesEvent
|
2023-08-04 02:55:09
|
maddyCode23/linux-4.1.15
|
https://api.github.com/repos/maddyCode23/linux-4.1.15
|
reopened
|
CVE-2018-13099 (Medium) detected in linux-stable-rtv4.1.33
|
Mend: dependency security vulnerability
|
## CVE-2018-13099 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in fs/f2fs/inline.c in the Linux kernel through 4.4. A denial of service (out-of-bounds memory access and BUG) can occur for a modified f2fs filesystem image in which an inline inode contains an invalid reserved blkaddr.
<p>Publish Date: 2018-07-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-13099>CVE-2018-13099</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13099">https://nvd.nist.gov/vuln/detail/CVE-2018-13099</a></p>
<p>Release Date: 2018-07-03</p>
<p>Fix Resolution: linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-13099 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2018-13099 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in fs/f2fs/inline.c in the Linux kernel through 4.4. A denial of service (out-of-bounds memory access and BUG) can occur for a modified f2fs filesystem image in which an inline inode contains an invalid reserved blkaddr.
<p>Publish Date: 2018-07-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-13099>CVE-2018-13099</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-13099">https://nvd.nist.gov/vuln/detail/CVE-2018-13099</a></p>
<p>Release Date: 2018-07-03</p>
<p>Fix Resolution: linux-yocto - 4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68,5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details an issue was discovered in fs inline c in the linux kernel through a denial of service out of bounds memory access and bug can occur for a modified filesystem image in which an inline inode contains an invalid reserved blkaddr publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
5,762
| 8,599,063,547
|
IssuesEvent
|
2018-11-16 00:11:27
|
googleapis/google-cloud-java
|
https://api.github.com/repos/googleapis/google-cloud-java
|
closed
|
Upgrade Guava to the latest version
|
type: process
|
[Spring Cloud GCP](https://github.com/spring-cloud/spring-cloud-gcp) uses `google-cloud-bom` for the common dependencies. We have recently turned on Snyk vulnerability detection, and it found a [deserialization issue](https://snyk.io/vuln/SNYK-JAVA-COMGOOGLEGUAVA-32236) with Guava 20.0.
The recommended remediation step is to upgrade to 24.1.1 or higher.
Would it be possible to upgrade Guava version in [google-cloud-clients POM](https://github.com/googleapis/google-cloud-java/blob/a7bd8a5717e70d7df2ee8e57ed1a37dac0bde94e/google-cloud-clients/pom.xml#L163)?
Spring Cloud GCP tracking issue: spring-cloud/spring-cloud-gcp#1207
|
1.0
|
Upgrade Guava to the latest version - [Spring Cloud GCP](https://github.com/spring-cloud/spring-cloud-gcp) uses `google-cloud-bom` for the common dependencies. We have recently turned on Snyk vulnerability detection, and it found a [deserialization issue](https://snyk.io/vuln/SNYK-JAVA-COMGOOGLEGUAVA-32236) with Guava 20.0.
The recommended remediation step is to upgrade to 24.1.1 or higher.
Would it be possible to upgrade Guava version in [google-cloud-clients POM](https://github.com/googleapis/google-cloud-java/blob/a7bd8a5717e70d7df2ee8e57ed1a37dac0bde94e/google-cloud-clients/pom.xml#L163)?
Spring Cloud GCP tracking issue: spring-cloud/spring-cloud-gcp#1207
|
process
|
upgrade guava to the latest version uses google cloud bom for the common dependencies we have recently turned on snyk vulnerability detection and it found a with guava the recommended remediation step is to upgrade to or higher would it be possible to upgrade guava version in spring cloud gcp tracking issue spring cloud spring cloud gcp
| 1
|
17,408
| 23,224,401,735
|
IssuesEvent
|
2022-08-02 21:46:00
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
reopened
|
CADSR-VS latest submission failed to process
|
ontology processing problem
|
BioPortal shows [CADSR-VS](https://bioportal.bioontology.org/ontologies/CADSR-VS) submission 164 with status "Uploaded", and nothing further.
Parsing log file at `/srv/ncbo/repository/CADSR-VS/164` shows processing started, then halted:
```
# Logfile created on 2022-01-27 21:34:38 -0800 by logger.rb/v1.4.3
I, [2022-01-27T21:34:38.705790 #22160] INFO -- : ["Starting to process http://data.bioontology.org/ontologies/CADSR-VS/submissions/164"]
I, [2022-01-27T21:34:38.731532 #22160] INFO -- : ["Starting to process CADSR-VS/submissions/164"]
```
`ncbo_cron` console session shows latest submission as invalid:
```
> sub = LinkedData::Models::OntologySubmission.find(RDF::URI.new('http://data.bioontology.org/ontologies/CADSR-VS/submissions/164')).first
> sub.bring_remaining
> sub.valid?
=> false
> sub.errors
=> {:submissionId=>{:integer=>"Attribute `submissionId` value `164` must be a `Integer`"}}
```
|
1.0
|
CADSR-VS latest submission failed to process - BioPortal shows [CADSR-VS](https://bioportal.bioontology.org/ontologies/CADSR-VS) submission 164 with status "Uploaded", and nothing further.
Parsing log file at `/srv/ncbo/repository/CADSR-VS/164` shows processing started, then halted:
```
# Logfile created on 2022-01-27 21:34:38 -0800 by logger.rb/v1.4.3
I, [2022-01-27T21:34:38.705790 #22160] INFO -- : ["Starting to process http://data.bioontology.org/ontologies/CADSR-VS/submissions/164"]
I, [2022-01-27T21:34:38.731532 #22160] INFO -- : ["Starting to process CADSR-VS/submissions/164"]
```
`ncbo_cron` console session shows latest submission as invalid:
```
> sub = LinkedData::Models::OntologySubmission.find(RDF::URI.new('http://data.bioontology.org/ontologies/CADSR-VS/submissions/164')).first
> sub.bring_remaining
> sub.valid?
=> false
> sub.errors
=> {:submissionId=>{:integer=>"Attribute `submissionId` value `164` must be a `Integer`"}}
```
|
process
|
cadsr vs latest submission failed to process bioportal shows submission with status uploaded and nothing further parsing log file at srv ncbo repository cadsr vs shows processing started then halted logfile created on by logger rb i info i info ncbo cron console session shows latest submission as invalid sub linkeddata models ontologysubmission find rdf uri new sub bring remaining sub valid false sub errors submissionid integer attribute submissionid value must be a integer
| 1
|
10,973
| 13,776,907,854
|
IssuesEvent
|
2020-10-08 10:07:25
|
prisma/e2e-tests
|
https://api.github.com/repos/prisma/e2e-tests
|
opened
|
check-for-update script fails with merge conflicts
|
bug/2-confirmed kind/bug process/candidate
|
- This usually fixes itself
- Example: https://prisma-company.slack.com/archives/CV0A8N0FL/p1602144614012200
- This issue tracks understanding and fixing that
|
1.0
|
check-for-update script fails with merge conflicts - - This usually fixes itself
- Example: https://prisma-company.slack.com/archives/CV0A8N0FL/p1602144614012200
- This issue tracks understanding and fixing that
|
process
|
check for update script fails with merge conflicts this usually fixes itself example this issue tracks understanding and fixing that
| 1
|
427,071
| 12,392,505,271
|
IssuesEvent
|
2020-05-20 14:06:28
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.programme.tv - design is broken
|
browser-fixme ml-needsdiagnosis-false ml-probability-high priority-normal
|
<!-- @browser: qwant -->
<!-- @ua_header: Mozilla/5.0 (Android; Mobile) Gecko/68.0 Firefox/68.0 QwantMobile/3.5 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53144 -->
**URL**: https://www.programme.tv/tnt/soiree/jeudi.php
**Browser / Version**: qwant
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
Link suggestions cover the TV schedule of the channels in the left column (M6, TMC)
You have to switch to the desktop version to be able to see them
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/9a8ea8d6-cb7e-4ee2-9258-39173dc7512f.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200107173653</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/8a08415e-5ab5-44c5-a024-499074c9d3c2)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.programme.tv - design is broken - <!-- @browser: qwant -->
<!-- @ua_header: Mozilla/5.0 (Android; Mobile) Gecko/68.0 Firefox/68.0 QwantMobile/3.5 -->
<!-- @reported_with: mobile-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/53144 -->
**URL**: https://www.programme.tv/tnt/soiree/jeudi.php
**Browser / Version**: qwant
**Operating System**: Android
**Tested Another Browser**: No
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
Link suggestions cover the TV listings of the channels in the left-hand column (M6, TMC)
You have to switch to the desktop version to be able to see them
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/9a8ea8d6-cb7e-4ee2-9258-39173dc7512f.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200107173653</li><li>channel: default</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/5/8a08415e-5ab5-44c5-a024-499074c9d3c2)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
design is broken url browser version qwant operating system android tested another browser no problem type design is broken description items are overlapped steps to reproduce des suggestions de liens recouvrent le programme télé des chaines situées sur la colonne de gauche tmc il faut passer en version ordinateur pour pouvoir les voir view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel default hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
2,842
| 5,806,477,312
|
IssuesEvent
|
2017-05-04 02:58:07
|
uccser/verto
|
https://api.github.com/repos/uccser/verto
|
closed
|
Glossary term not saved if back reference not given
|
bug processor implementation
|
A glossary term should be saved in `required_glossary_terms`, regardless if a back reference is given.
|
1.0
|
Glossary term not saved if back reference not given - A glossary term should be saved in `required_glossary_terms`, regardless if a back reference is given.
|
process
|
glossary term not saved if back reference not given a glossary term should be saved in required glossary terms regardless if a back reference is given
| 1
|
39,535
| 8,663,159,071
|
IssuesEvent
|
2018-11-28 16:39:30
|
ArctosDB/arctos
|
https://api.github.com/repos/ArctosDB/arctos
|
closed
|
Moving cryovials from one freezer box to another
|
Enhancement Function-ContainerOrBarcode
|
In dealing with a freezer box of spilled cryovials, we were faced with the prospect of trying to rescan vials into an existing container that virtually still contained the cryovials in a different order. I tried to move all contents of this container to a virtual Dummy box in order to rescan them back into the existing box, which did not work, probably because that "dummy" box contained other non-position objects. So the student assigned a new label and rescanned the tubes into the "new" box. However, we tried one last time to move all contents back to the old label, because we had other data referencing that barcode. It worked, sort of - but the old positions were still in the box. So now, we have a freezer box with two sets of positions, one of which is empty and the other which contains collection objects. I'm pretty sure this isn't supposed to happen. I can no longer edit. Dusty, can you remove the empty positions and keep the ones with collection objects?
Box barcode = DGR18043
|
1.0
|
Moving cryovials from one freezer box to another - In dealing with a freezer box of spilled cryovials, we were faced with the prospect of trying to rescan vials into an existing container that virtually still contained the cryovials in a different order. I tried to move all contents of this container to a virtual Dummy box in order to rescan them back into the existing box, which did not work, probably because that "dummy" box contained other non-position objects. So the student assigned a new label and rescanned the tubes into the "new" box. However, we tried one last time to move all contents back to the old label, because we had other data referencing that barcode. It worked, sort of - but the old positions were still in the box. So now, we have a freezer box with two sets of positions, one of which is empty and the other which contains collection objects. I'm pretty sure this isn't supposed to happen. I can no longer edit. Dusty, can you remove the empty positions and keep the ones with collection objects?
Box barcode = DGR18043
|
non_process
|
moving cryovials from one freezer box to another in dealing with a freezer box of spilled cryovials we were faced with the prospect of trying to rescan vials into an existing container that virtually still contained the cryovials in a different order i tried to move all contents of this container to a virtual dummy box in order to rescan them back into the existing box which did not work probably because that dummy box contained other non position objects so the student assigned a new label and rescanned the tubes into the new box however we tried one last time to move all contents back to the old label because we had other data referencing that barcode it worked sort of but the old positions were still in the box so now we have a freezer box with two sets of positions one of which is empty and the other which contains collection objects i m pretty sure this isn t supposed to happen i can no longer edit dusty can you remove the empty positions and keep the ones with collection objects box barcode
| 0
|
6,571
| 9,654,392,169
|
IssuesEvent
|
2019-05-19 13:45:37
|
linnovate/root
|
https://api.github.com/repos/linnovate/root
|
closed
|
in documents, updates field gets stuck after selecting a document from template
|
2.0.7 Documents Process bug critical
|
go to documents
select a folder that has an office with a doc template in it
click on select from template
select the file that appears in the window
write some updates so the updates field will move downwards
try to scroll the updates field to see the add button again
the updates field is stuck and cant move downwards anymore

|
1.0
|
in documents, updates field gets stuck after selecting a document from template - go to documents
select a folder that has an office with a doc template in it
click on select from template
select the file that appears in the window
write some updates so the updates field will move downwards
try to scroll the updates field to see the add button again
the updates field is stuck and cant move downwards anymore

|
process
|
in documents updates field gets stuck after selecting a document from template go to documents select a folder that has an office with a doc template in it click on select from template select the file that appears in the window write some updates so the updates field will move downwards try to scroll the updates field to see the add button again the updates field is stuck and cant move downwards anymore
| 1
|
13,959
| 16,738,409,455
|
IssuesEvent
|
2021-06-11 06:41:58
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Better indent migrate warnings and errors
|
kind/improvement process/candidate team/migrations
|
## Problem
Migrate warnings and errors formatted in lists have no indentation after the first line.

## Suggested solution
Format these messages with indentation after the first line:
```
- Lorem ipsum dolor sit amet consectetur adipiscing elit, urna consequat felis
vehicula class ultricies mollis dictumst, aenean non a in donec nulla.
- Phasellus ante pellentesque erat cum risus consequat imperdiet aliquam,
integer placerat et turpis mi eros nec lobortis taciti, vehicula nisl litora tellus
ligula porttitor metus.
```
The wrap width would need to be based on the terminal width.
## Alternatives
Leave things as they are.
|
1.0
|
Better indent migrate warnings and errors - ## Problem
Migrate warnings and errors formatted in lists have no indentation after the first line.

## Suggested solution
Format these messages with indentation after the first line:
```
- Lorem ipsum dolor sit amet consectetur adipiscing elit, urna consequat felis
vehicula class ultricies mollis dictumst, aenean non a in donec nulla.
- Phasellus ante pellentesque erat cum risus consequat imperdiet aliquam,
integer placerat et turpis mi eros nec lobortis taciti, vehicula nisl litora tellus
ligula porttitor metus.
```
The wrap width would need to be based on the terminal width.
## Alternatives
Leave things as they are.
|
process
|
better indent migrate warnings and errors problem migrate warnings and errors formatted in lists have no indentation after the first line suggested solution format these messages with indentation after the first line lorem ipsum dolor sit amet consectetur adipiscing elit urna consequat felis vehicula class ultricies mollis dictumst aenean non a in donec nulla phasellus ante pellentesque erat cum risus consequat imperdiet aliquam integer placerat et turpis mi eros nec lobortis taciti vehicula nisl litora tellus ligula porttitor metus the wrap width would need to be based on the terminal width alternatives leave things as they are
| 1
|
287,391
| 21,654,346,723
|
IssuesEvent
|
2022-05-06 12:44:55
|
sighupio/fury-kubernetes-service-mesh
|
https://api.github.com/repos/sighupio/fury-kubernetes-service-mesh
|
opened
|
Update docs to current standard
|
documentation
|
We need to update the docs of this addon to follow our current standard layout and content schema
|
1.0
|
Update docs to current standard - We need to update the docs of this addon to follow our current standard layout and content schema
|
non_process
|
update docs to current standard we need to update the docs of this addon to follow our current standard layout and content schema
| 0
|
2,623
| 3,789,323,475
|
IssuesEvent
|
2016-03-21 17:33:56
|
servo/servo
|
https://api.github.com/repos/servo/servo
|
closed
|
Move test timeout handling into contenttest harness
|
A-infrastructure B-interesting-project
|
It would be extremely nice to have test timeouts count as failures and show us filenames. The best way to achieve that is by the following:
* get rid of the timer in `harness.js`
* create a task in `run_test` in `contenttest.rs` which sits in a loop and waits for three messages - OutputReceived, TimerExpired, and TestComplete. The first message prints to the screen and appends to a buffer; the second message triggers test failure via fail!; the third message sends the complete output buffer to the test runner and cancels the timer.
* create a timer in `run_test` (you can use `Window::SetTimeout` for inspiration) and make it communicate with the new controller task
* make the I/O in `run_test` communicate with the new controller task
* ...
* Profit???
|
1.0
|
Move test timeout handling into contenttest harness - It would be extremely nice to have test timeouts count as failures and show us filenames. The best way to achieve that is by the following:
* get rid of the timer in `harness.js`
* create a task in `run_test` in `contenttest.rs` which sits in a loop and waits for three messages - OutputReceived, TimerExpired, and TestComplete. The first message prints to the screen and appends to a buffer; the second message triggers test failure via fail!; the third message sends the complete output buffer to the test runner and cancels the timer.
* create a timer in `run_test` (you can use `Window::SetTimeout` for inspiration) and make it communicate with the new controller task
* make the I/O in `run_test` communicate with the new controller task
* ...
* Profit???
|
non_process
|
move test timeout handling into contenttest harness it would be extremely nice to have test timeouts count as failures and show us filenames the best way to achieve that is by the following get rid of the timer in harness js create a task in run test in contenttest rs which sits in a loop and waits for three messages outputreceived timerexpired and testcomplete the first message prints to the screen and appends to a buffer the second message triggers test failure via fail the third message sends the complete output buffer to the test runner and cancels the timer create a timer in run test you can use window settimeout for inspiration and make it communicate with the new controller task make the i o in run test communicate with the new controller task profit
| 0
|
12,837
| 15,222,772,968
|
IssuesEvent
|
2021-02-18 01:04:28
|
googlemaps/google-maps-services-java
|
https://api.github.com/repos/googlemaps/google-maps-services-java
|
closed
|
Prevent google-maps-services-java artifact from being published
|
triage me type: process
|
When `./gradlew publish` is run, two artifacts are published to sonatype and github packages: `google-maps-services.jar` and `google-maps-services-java.jar`. The latter should not be published since the former is the correct artifact to consume.
|
1.0
|
Prevent google-maps-services-java artifact from being published - When `./gradlew publish` is run, two artifacts are published to sonatype and github packages: `google-maps-services.jar` and `google-maps-services-java.jar`. The latter should not be published since the former is the correct artifact to consume.
|
process
|
prevent google maps services java artifact from being published when gradlew publish is run two artifacts are published to sonatype and github packages google maps services jar and google maps services java jar the latter should not be published since the former is the correct artifact to consume
| 1
|
307,246
| 9,415,024,037
|
IssuesEvent
|
2019-04-10 11:38:13
|
bitshares/bitshares-ui
|
https://api.github.com/repos/bitshares/bitshares-ui
|
closed
|
[0] Exchange header 24hr change indicator broken
|
[2] Verified [3] Bug [4b] Normal Priority [5a] Tiny [6] Core [6] RC Blockage
|
https://wallet.bitshares.org/#/market/NOWCOIN_BTS

I guess it happens on illiquid markets when there is no data available
|
1.0
|
[0] Exchange header 24hr change indicator broken - https://wallet.bitshares.org/#/market/NOWCOIN_BTS

I guess it happens on illiquid markets when there is no data available
|
non_process
|
exchange header change indicator broken i guess it happens on illiquid markets when there is no data available
| 0
|
18,071
| 24,084,416,092
|
IssuesEvent
|
2022-09-19 09:36:59
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[SQL Connector] Failed at fetching schema info for EMPTY when consumer consume from topic using AVRO schema
|
compute/data-processing
|
In selectIntoTableUsingAvroBasedSchema() and using avro schema, the final getValue() call will get a
`Failed at fetching schema info for EMPTY` exception.
Seems the message has a schema version and avro supports schema versioning. But the schema version is an empty string. Not sure why this could happen. Should dig into why we have such behaviour.
|
1.0
|
[SQL Connector] Failed at fetching schema info for EMPTY when consumer consume from topic using AVRO schema - In selectIntoTableUsingAvroBasedSchema() and using avro schema, the final getValue() call will get a
`Failed at fetching schema info for EMPTY` exception.
Seems the message has a schema version and avro supports schema versioning. But the schema version is an empty string. Not sure why this could happen. Should dig into why we have such behaviour.
|
process
|
failed at fetching schema info for empty when consumer consume from topic using avro schema in selectintotableusingavrobasedschema and using avro schema the final getvalue call will get a failed at fetching schema info for empty exception seems the message has a schema version and avro supports schema versioning but the schema version is an empty string not sure why this could happen should dig into why we have such behaviour
| 1
|
3,928
| 6,847,410,015
|
IssuesEvent
|
2017-11-13 15:21:41
|
syndesisio/syndesis-ui
|
https://api.github.com/repos/syndesisio/syndesis-ui
|
closed
|
Standardize the code linting flow for all contributors
|
dev process enhancement in progress Priority - Low
|
With more developers coming on board we should strive to keep consistency in our code in regards of format, style and syntax. Applying a linting phase bound to the transpilation/versioning phases of development seems a good start.
The idea is to ensure that all developers get their code linted regardless the IDE in use and also get tips and hints on malformed code on dev-time. On a side note, a proper linting upon committing code will reduce the time required for peer-reviewing our code.
**Requirements**
- [x] Update lint config manifest with industry standard conventions for Angular/TypeScript projects whereas required
- [x] Ensure a linting pass is applied upon committing code to the forked repo
- [x] Update all install scripts necessary to make these rules available upon bootstrapping the frontend app
- [x] Refactor and format the code whereas necessary to fulfill the new linter rules.
|
1.0
|
Standardize the code linting flow for all contributors - With more developers coming on board we should strive to keep consistency in our code in regards of format, style and syntax. Applying a linting phase bound to the transpilation/versioning phases of development seems a good start.
The idea is to ensure that all developers get their code linted regardless the IDE in use and also get tips and hints on malformed code on dev-time. On a side note, a proper linting upon committing code will reduce the time required for peer-reviewing our code.
**Requirements**
- [x] Update lint config manifest with industry standard conventions for Angular/TypeScript projects whereas required
- [x] Ensure a linting pass is applied upon committing code to the forked repo
- [x] Update all install scripts necessary to make these rules available upon bootstrapping the frontend app
- [x] Refactor and format the code whereas necessary to fulfill the new linter rules.
|
process
|
standardize the code linting flow for all contributors with more developers coming on board we should strive to keep consistency in our code in regards of format style and syntax applying a linting phase bound to the transpilation versioning phases of development seems a good start the idea is to ensure that all developers get their code linted regardless the ide in use and also get tips and hints on malformed code on dev time on a side note a proper linting upon committing code will reduce the time required for peer reviewing our code requirements update lint config manifest with industry standard conventions for angular typescript projects whereas required ensure a linting pass is applied upon committing code to the forked repo update all install scripts necessary to make these rules available upon bootstrapping the frontend app refactor and format the code whereas necessary to fulfill the new linter rules
| 1
|