| Column | Dtype | Stats |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 – 19 |
| repo | string | lengths 7 – 112 |
| repo_url | string | lengths 36 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 744 |
| labels | string | lengths 4 – 574 |
| body | string | lengths 9 – 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 – 211k |
| label | string | 2 classes |
| text | string | lengths 96 – 188k |
| binary_label | int64 | 0 – 1 |
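The per-column length statistics in the schema above are min/max string lengths, and the class counts are distinct-value counts; a minimal sketch of how such stats could be computed (the sample values are illustrative, not drawn from the dataset):

```python
# Per-column stats matching the schema: min/max string length and
# the number of distinct values ("classes").
def column_stats(values):
    lengths = [len(v) for v in values]
    return {"min_len": min(lengths), "max_len": max(lengths), "classes": len(set(values))}

print(column_stats(["IssuesEvent", "IssuesEvent"]))  # {'min_len': 11, 'max_len': 11, 'classes': 1}
```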
267,391
| 20,202,052,788
|
IssuesEvent
|
2022-02-11 16:10:38
|
aws/aws-cdk
|
https://api.github.com/repos/aws/aws-cdk
|
opened
|
(aws-glue): aws-glue table has two different descriptions for the bucket property
|
feature-request needs-triage documentation
|
### link to reference doc page
https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html
### Describe your issue?
In the documentation for an aws-glue table construct, the first occurrence of the bucket property describes the property as:
`S3 bucket in which to store data.`
[LINK](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html#bucketspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan)
The second occurrence describes it as:
`S3 bucket in which the table's data resides.`
[LINK](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html#bucketspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan-1)
Which of these two descriptions is more accurate, and can the documentation be updated to be clearer?
|
1.0
|
(aws-glue): aws-glue table has two different descriptions for the bucket property - ### link to reference doc page
https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html
### Describe your issue?
In the documentation for an aws-glue table construct, the first occurrence of the bucket property describes the property as:
`S3 bucket in which to store data.`
[LINK](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html#bucketspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan)
The second occurrence describes it as:
`S3 bucket in which the table's data resides.`
[LINK](https://docs.aws.amazon.com/cdk/api/v2/docs/@aws-cdk_aws-glue-alpha.Table.html#bucketspan-classapi-icon-api-icon-experimental-titlethis-api-element-is-experimental-it-may-change-without-noticespan-1)
Which of these two descriptions is more accurate, and can the documentation be updated to be clearer?
|
non_process
|
aws glue aws glue table has two different descriptions for the bucket property link to reference doc page describe your issue in the documentation for an aws glue table construct the first occurrence of the bucket property describes the property as bucket in which to store data the second occurrence describes it as bucket in which the table s data resides which of these is the more accurate description of the property from the two options and can the documentation be updated to be more clear
| 0
|
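The lowercased `text` column in each row appears to be derived from the raw text by lowercasing, dropping URLs, digits, and punctuation, and collapsing whitespace; a minimal sketch of such a transform (the exact cleaning pipeline is an assumption):

```python
import re

def clean(s):
    # Assumed pipeline: lowercase, strip URLs, keep only letters
    # (accented letters survive, as in the Portuguese row), and
    # collapse runs of other characters into single spaces.
    s = re.sub(r"https?://\S+", " ", s.lower())
    s = re.sub(r"[^a-zà-ÿ]+", " ", s)
    return s.strip()

print(clean("Hello, World 42! https://example.com/x"))  # hello world
```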
132,579
| 18,752,349,098
|
IssuesEvent
|
2021-11-05 05:00:21
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Standardize display of non-editable content
|
Design Drupal engineering Epic Content forms Needs refining UX writing Design system Platform CMS Team
|
## Background
### User Story or Problem Statement
There are a number of forms where non-editable content appears (or we want to display it), inline in the form, or on proofing pages. For example:
* VAST data on facility forms (migrated content that should not be edited)
* Migrated form data on VA forms (migrated content that should not be edited)
* Nationally controlled content on Top task pages like VAMC Policies (entity referenced content) (see https://balsamiq.cloud/s21kwcm/py46j7t/r1717)
* Default Appointments intro text on Facility health services (see https://balsamiq.cloud/s21kwcm/py46j7t/rCC1D)
* National Health service taxonomy content on VAMC System Health Service and VAMC Facility Health Service nodes (no design or implementation yet)
We need some consistent patterns for displaying this content inline. The Policies page design shows a grey background and some nice tooltips to explain what this content is.
This will require a fuller inventory than the list of examples above, noting various attributes of each situation to represent the diversity of variants.
### Affected users and stakeholders
* All CMS editors
### Hypothesis
We believe that standards for displaying non-editable content in a content form will help editors write better content for Veterans. We'll know that to be true from user feedback on a variety of forms.
## Acceptance Criteria
- [ ] a pattern for non-editable content display is
- [ ] designed
- [ ] implemented on content types that should use it
- [ ] new pattern is documented in the CMS design system
Once merged, will this work cause changes that CMS users will notice?
- [x] Y: Add the announcements label to this issue for the PM and UX writer to review and include a design review in the ACs.
- [ ] N: No further action needed
|
2.0
|
Standardize display of non-editable content - ## Background
### User Story or Problem Statement
There are a number of forms where non-editable content appears (or we want to display it), inline in the form, or on proofing pages. For example:
* VAST data on facility forms (migrated content that should not be edited)
* Migrated form data on VA forms (migrated content that should not be edited)
* Nationally controlled content on Top task pages like VAMC Policies (entity referenced content) (see https://balsamiq.cloud/s21kwcm/py46j7t/r1717)
* Default Appointments intro text on Facility health services (see https://balsamiq.cloud/s21kwcm/py46j7t/rCC1D)
* National Health service taxonomy content on VAMC System Health Service and VAMC Facility Health Service nodes (no design or implementation yet)
We need some consistent patterns for displaying this content inline. The Policies page design shows a grey background and some nice tooltips to explain what this content is.
This will require a fuller inventory than the list of examples above, noting various attributes of each situation to represent the diversity of variants.
### Affected users and stakeholders
* All CMS editors
### Hypothesis
We believe that standards for displaying non-editable content in a content form will help editors write better content for Veterans. We'll know that to be true from user feedback on a variety of forms.
## Acceptance Criteria
- [ ] a pattern for non-editable content display is
- [ ] designed
- [ ] implemented on content types that should use it
- [ ] new pattern is documented in the CMS design system
Once merged, will this work cause changes that CMS users will notice?
- [x] Y: Add the announcements label to this issue for the PM and UX writer to review and include a design review in the ACs.
- [ ] N: No further action needed
|
non_process
|
standardize display of non editable content background user story or problem statement there are a number of forms where non editable content appears or we want to display it inline in the form or on proofing pages for example vast data on facility forms migrated content that should not be edited migrated form data on va forms migrated content that should not be edited nationally controlled content on top task pages like vamc policies entity referenced content see default appointments intro text on facility health services see national health service taxonomy content on vamc system health service and vamc facility health service nodes no design or implementation yet we need some consistent patterns for displaying this content inline the policies page design shows grey background and some nice tooltips to explain what this content is this will require a more full inventory than the list of examples above and noting various attributes of each situation to represent the diversity of variants affected users and stakeholders all cms editors hypothesis we believe that standards for displaying non editable content in a content form will help editors write better content for veterans we ll know that to be true from user feedback on a variety of forms acceptance criteria a pattern for non editable content display is designed implemented on content types that should use it new pattern is documented in the cms design system once merged will this work cause changes that cms users will notice y add the annoucements label to this issue for the pm and ux writer to review and include a design review in the acs n no futher action needed
| 0
|
179,036
| 6,621,040,791
|
IssuesEvent
|
2017-09-21 17:39:10
|
OpenEMS/openems
|
https://api.github.com/repos/OpenEMS/openems
|
closed
|
show device ConfigChannels
|
Component: UI Priority: Medium Type: Enhancement
|
From openems-gui created by [MatthiasRossmann](https://github.com/MatthiasRossmann) : OpenEMS/openems-gui#18
### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [X] feature request
```
### Desired functionality.
The ConfigChannels of devices are not displayed in the UI.
![image](https://user-images.githubusercontent.com/32392603/30716526-d3417cb0-9f1a-11e7-9f21-e352537bc78a.png)
For example, minSoc and chargeSoc of ess.
|
1.0
|
show device ConfigChannels - From openems-gui created by [MatthiasRossmann](https://github.com/MatthiasRossmann) : OpenEMS/openems-gui#18
### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [X] feature request
```
### Desired functionality.
The ConfigChannels of devices are not displayed in the UI.
![image](https://user-images.githubusercontent.com/32392603/30716526-d3417cb0-9f1a-11e7-9f21-e352537bc78a.png)
For example, minSoc and chargeSoc of ess.
|
non_process
|
show device configchannels from openems gui created by openems openems gui bug report or feature request mark with an x bug report please search issues before submitting feature request desired functionality the configchannels of devices are not displayed in the ui for excample minsoc and chargesoc of ess
| 0
|
305,898
| 23,136,479,591
|
IssuesEvent
|
2022-07-28 14:42:06
|
GAM-DIMNT-CPTEC/ECFLOW
|
https://api.github.com/repos/GAM-DIMNT-CPTEC/ECFLOW
|
closed
|
ECFLOW tests on EGEON
|
documentation
|
Steps required to access ECFLOW on EGEON
$ ssh -X roberto.garcia@egeon (-X to enable graphical mode, ecflow_ui)
$ module load ecflow
$ ecflow_server, ecflow_client, ...
- Replicated the Itapemirim tests (changed references to the ECFLOW directory to ecflow, to standardize with the IO)
- Tests OK on 28/Jul/22
|
1.0
|
ECFLOW tests on EGEON - Steps required to access ECFLOW on EGEON
$ ssh -X roberto.garcia@egeon (-X to enable graphical mode, ecflow_ui)
$ module load ecflow
$ ecflow_server, ecflow_client, ...
- Replicated the Itapemirim tests (changed references to the ECFLOW directory to ecflow, to standardize with the IO)
- Tests OK on 28/Jul/22
|
non_process
|
testes do ecflow na egeon passos necessários para acessar o ecflow na egeon ssh x roberto garcia egeon x para ativar modo gráfico ecflow ui module load ecflow ecflow server ecflow client repliquei os testes da itapemirim mudei referências ao diretório ecflow para ecflow para ficar padronizado com o io testes ok em jul
| 0
|
10,385
| 13,195,327,000
|
IssuesEvent
|
2020-08-13 18:27:34
|
Ghost-chu/QuickShop-Reremake
|
https://api.github.com/repos/Ghost-chu/QuickShop-Reremake
|
closed
|
[BUG] Shops are disappearing
|
Bug Help Wanted In Process Priority:Major
|
Hi, I'm using QuickShop in 2 worlds. I don't know why shops are disappearing from 1 world when I restart the server. I'm using MySQL.
CONFIG:
```
#
# whether to use decimal format output.
config-version: 108
language: en-US
game-language: default
dev-mode: true
tax: 0.0
tax-account: tax
show-tax: false
respect-item-flag: true
log-actions: true
whole-number-prices-only: false
force-bukkit-chat-handler: false
disabled-metrics: false
updater: true
auto-report-errors: true
include-offlineplayer-list: false
economy-type: 0
use-decimal-format: false
decimal-format: '#,###.##'
send-display-item-protection-alert: false
send-shop-protection-alert: false
server-platform: 0
use-caching: false
database:
mysql: true
host: localhost
port: 3306
database: [DELETED]
user: [DELETED]
password: [DELETED]
prefix: none
usessl: false
queue: false
queue-commit-interval: 2
auto-fix-encoding-issue-in-database: false
limits:
use: false
default: 10
old-algorithm: false
ranks:
quickshop:
vip: 20
shop-blocks:
- CHEST
- TRAPPED_CHEST
- ENDER_CHEST
shop:
cost: 0
refund: false
disable-creative-mode-trading: false
disable-super-tool: false
allow-owner-break-shop-sign: false
price-change-requires-fee: true
fee-for-price-change: 0
lock: true
disable-quick-create: false
interact:
switch-mode: false
sneak-to-create: false
sneak-to-trade: false
sneak-to-control: false
auto-sign: true
pay-unlimited-shop-owners: false
display-items: true
display-items-check-ticks: 6000
display-type: 2
display-auto-despawn: false
display-despawn-range: 20
display-check-time: 40
display-allow-stacks: false
enchance-display-protect: false
enchance-shop-protect: false
find-distance: 45
alternate-currency-symbol: $
disable-vault-format: false
currency-symbol-on-right: false
ignore-unlimited-shop-messages: false
auto-fetch-shop-messages: true
ignore-cancel-chat-event: false
allow-shop-without-space-for-sign: false
minimum-price: 0.01
maximum-price: -1
price-restriction: []
maximum-digits-in-price: -1
show-owner-uuid-in-controlpanel-if-op: false
sign-material: OAK_WALL_SIGN
use-enchantment-for-enchanted-book: false
blacklist-world:
- disabled_world_name
blacklist-lores:
- SoulBound
protection-checking: true
max-shops-checks-in-once: 100
display-item-use-name: false
update-sign-when-inventory-moving: true
allow-economy-loan: false
word-for-trade-all-items: all
allow-free-shop: false
use-fast-shop-search-algorithm: true
ongoing-fee:
enable: false
ticks: 42000
cost-per-shop: 2
ignore-unlimited: true
force-load-downgrade-items:
enable: false
method: 0
remove-protection-trigger: true
allow-stacks: false
blacklist:
- Bedrock
lockette:
enable: true
private: '[Private]'
more_users: '[More Users]'
plugin:
Multiverse-Core: true
OpenInv: true
PlaceHolderAPI: true
LWC: true
BlockHub:
enable: true
only: false
effect:
sound:
ontabcomplete: true
oncommand: true
onclick: true
matcher:
work-type: 1
item:
damage: true
repaircost: false
displayname: true
lores: true
enchs: true
potions: true
attributes: true
itemflags: true
custommodeldata: true
books: true
banner: true
skull: true
firework: true
map: true
leatherArmor: true
fishBucket: true
suspiciousStew: true
shulkerBox: true
mixedeconomy:
deposit: eco give {0} {1}
withdraw: eco take {0} {1}
integration:
towny:
enable: false
ignore-disabled-worlds: true
create:
- SHOPTYPE
- MODIFY
trade: []
worldguard:
enable: false
whitelist-mode: false
create:
- FLAG
- CHEST_ACCESS
trade: []
plotsquared:
enable: false
whitelist-mode: true
lands:
enable: false
whitelist-mode: false
ignore-disabled-worlds: true
residence:
enable: false
whitelist-mode: true
create:
- FLAG
- interact
- use
trade: []
factions:
enable: false
whitelist-mode: true
create:
require:
open: false
normal: true
wilderness: false
peaceful: true
permanent: false
safezone: false
own: false
warzone: false
flags:
- CONTAINER
- ECONOMY
trade:
require:
open: false
normal: true
wilderness: false
peaceful: false
permanent: false
safezone: false
own: false
warzone: false
flags: []
protect:
explode: true
hopper: true
entity: true
custom-item-stacksize: []
server-uuid: b6e9642d-5d6b-4fdd-90d3-73a43854d5c2
```
|
1.0
|
[BUG] Shops are disappearing - Hi, I'm using QuickShop in 2 worlds. I don't know why shops are disappearing from 1 world when I restart the server. I'm using MySQL.
CONFIG:
```
#
# whether to use decimal format output.
config-version: 108
language: en-US
game-language: default
dev-mode: true
tax: 0.0
tax-account: tax
show-tax: false
respect-item-flag: true
log-actions: true
whole-number-prices-only: false
force-bukkit-chat-handler: false
disabled-metrics: false
updater: true
auto-report-errors: true
include-offlineplayer-list: false
economy-type: 0
use-decimal-format: false
decimal-format: '#,###.##'
send-display-item-protection-alert: false
send-shop-protection-alert: false
server-platform: 0
use-caching: false
database:
mysql: true
host: localhost
port: 3306
database: [DELETED]
user: [DELETED]
password: [DELETED]
prefix: none
usessl: false
queue: false
queue-commit-interval: 2
auto-fix-encoding-issue-in-database: false
limits:
use: false
default: 10
old-algorithm: false
ranks:
quickshop:
vip: 20
shop-blocks:
- CHEST
- TRAPPED_CHEST
- ENDER_CHEST
shop:
cost: 0
refund: false
disable-creative-mode-trading: false
disable-super-tool: false
allow-owner-break-shop-sign: false
price-change-requires-fee: true
fee-for-price-change: 0
lock: true
disable-quick-create: false
interact:
switch-mode: false
sneak-to-create: false
sneak-to-trade: false
sneak-to-control: false
auto-sign: true
pay-unlimited-shop-owners: false
display-items: true
display-items-check-ticks: 6000
display-type: 2
display-auto-despawn: false
display-despawn-range: 20
display-check-time: 40
display-allow-stacks: false
enchance-display-protect: false
enchance-shop-protect: false
find-distance: 45
alternate-currency-symbol: $
disable-vault-format: false
currency-symbol-on-right: false
ignore-unlimited-shop-messages: false
auto-fetch-shop-messages: true
ignore-cancel-chat-event: false
allow-shop-without-space-for-sign: false
minimum-price: 0.01
maximum-price: -1
price-restriction: []
maximum-digits-in-price: -1
show-owner-uuid-in-controlpanel-if-op: false
sign-material: OAK_WALL_SIGN
use-enchantment-for-enchanted-book: false
blacklist-world:
- disabled_world_name
blacklist-lores:
- SoulBound
protection-checking: true
max-shops-checks-in-once: 100
display-item-use-name: false
update-sign-when-inventory-moving: true
allow-economy-loan: false
word-for-trade-all-items: all
allow-free-shop: false
use-fast-shop-search-algorithm: true
ongoing-fee:
enable: false
ticks: 42000
cost-per-shop: 2
ignore-unlimited: true
force-load-downgrade-items:
enable: false
method: 0
remove-protection-trigger: true
allow-stacks: false
blacklist:
- Bedrock
lockette:
enable: true
private: '[Private]'
more_users: '[More Users]'
plugin:
Multiverse-Core: true
OpenInv: true
PlaceHolderAPI: true
LWC: true
BlockHub:
enable: true
only: false
effect:
sound:
ontabcomplete: true
oncommand: true
onclick: true
matcher:
work-type: 1
item:
damage: true
repaircost: false
displayname: true
lores: true
enchs: true
potions: true
attributes: true
itemflags: true
custommodeldata: true
books: true
banner: true
skull: true
firework: true
map: true
leatherArmor: true
fishBucket: true
suspiciousStew: true
shulkerBox: true
mixedeconomy:
deposit: eco give {0} {1}
withdraw: eco take {0} {1}
integration:
towny:
enable: false
ignore-disabled-worlds: true
create:
- SHOPTYPE
- MODIFY
trade: []
worldguard:
enable: false
whitelist-mode: false
create:
- FLAG
- CHEST_ACCESS
trade: []
plotsquared:
enable: false
whitelist-mode: true
lands:
enable: false
whitelist-mode: false
ignore-disabled-worlds: true
residence:
enable: false
whitelist-mode: true
create:
- FLAG
- interact
- use
trade: []
factions:
enable: false
whitelist-mode: true
create:
require:
open: false
normal: true
wilderness: false
peaceful: true
permanent: false
safezone: false
own: false
warzone: false
flags:
- CONTAINER
- ECONOMY
trade:
require:
open: false
normal: true
wilderness: false
peaceful: false
permanent: false
safezone: false
own: false
warzone: false
flags: []
protect:
explode: true
hopper: true
entity: true
custom-item-stacksize: []
server-uuid: b6e9642d-5d6b-4fdd-90d3-73a43854d5c2
```
|
process
|
shops are disappearing hi im using quickshop in worlds i don t know why shops are disappearing from world when i restart the server im using mysql config whether to use decimal format output config version language en us game language default dev mode true tax tax account tax show tax false respect item flag true log actions true whole number prices only false force bukkit chat handler false disabled metrics false updater true auto report errors true include offlineplayer list false economy type use decimal format false decimal format send display item protection alert false send shop protection alert false server platform use caching false database mysql true host localhost port database user password prefix none usessl false queue false queue commit interval auto fix encoding issue in database false limits use false default old algorithm false ranks quickshop vip shop blocks chest trapped chest ender chest shop cost refund false disable creative mode trading false disable super tool false allow owner break shop sign false price change requires fee true fee for price change lock true disable quick create false interact switch mode false sneak to create false sneak to trade false sneak to control false auto sign true pay unlimited shop owners false display items true display items check ticks display type display auto despawn false display despawn range display check time display allow stacks false enchance display protect false enchance shop protect false find distance alternate currency symbol disable vault format false currency symbol on right false ignore unlimited shop messages false auto fetch shop messages true ignore cancel chat event false allow shop without space for sign false minimum price maximum price price restriction maximum digits in price show owner uuid in controlpanel if op false sign material oak wall sign use enchantment for enchanted book false blacklist world disabled world name blacklist lores soulbound protection checking true max shops 
checks in once display item use name false update sign when inventory moving true allow economy loan false word for trade all items all allow free shop false use fast shop search algorithm true ongoing fee enable false ticks cost per shop ignore unlimited true force load downgrade items enable false method remove protection trigger true allow stacks false blacklist bedrock lockette enable true private more users plugin multiverse core true openinv true placeholderapi true lwc true blockhub enable true only false effect sound ontabcomplete true oncommand true onclick true matcher work type item damage true repaircost false displayname true lores true enchs true potions true attributes true itemflags true custommodeldata true books true banner true skull true firework true map true leatherarmor true fishbucket true suspiciousstew true shulkerbox true mixedeconomy deposit eco give withdraw eco take integration towny enable false ignore disabled worlds true create shoptype modify trade worldguard enable false whitelist mode false create flag chest access trade plotsquared enable false whitelist mode true lands enable false whitelist mode false ignore disabled worlds true residence enable false whitelist mode true create flag interact use trade factions enable false whitelist mode true create require open false normal true wilderness false peaceful true permanent false safezone false own false warzone false flags container economy trade require open false normal true wilderness false peaceful false permanent false safezone false own false warzone false flags protect explode true hopper true entity true custom item stacksize server uuid
| 1
|
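The `label` and `binary_label` columns track each other across the rows ("non_process" pairs with 0, "process" with 1); the mapping can be sketched as:

```python
# Label mapping consistent with the rows shown above: non_process
# rows carry binary_label 0, process rows carry 1.
LABEL_TO_BINARY = {"non_process": 0, "process": 1}

print(LABEL_TO_BINARY["process"])  # 1
```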
271,331
| 8,482,773,541
|
IssuesEvent
|
2018-10-25 19:31:52
|
containous/traefik
|
https://api.github.com/repos/containous/traefik
|
closed
|
Multiple basic auth entries with same username
|
area/authentication contributor/waiting-for-feedback kind/enhancement priority/P3
|
<!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://slack.traefik.io
-->
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
Looks like basic auth entries are overwritten when using multiple entries for the same username?
I want multiple passwords for the same user.
I am using `traefik.frontend.auth.basic.users`
<!--
HOW TO WRITE A GOOD BUG REPORT?
- Respect the issue template as much as possible.
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
Use all the provided entries
### What did you see instead?
Only the last entry was used for the username
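The behaviour described above is what you would see if the comma-separated `users` entries were parsed into a map keyed by username; a minimal sketch of that failure mode (the parsing detail is an assumption, not Traefik's actual implementation):

```python
# Two htpasswd-style entries for the same username: a dict keyed by
# username keeps only the last entry, so earlier passwords are lost.
entries = ["alice:$apr1$hash1", "alice:$apr1$hash2"]
users = dict(e.split(":", 1) for e in entries)

print(users)  # {'alice': '$apr1$hash2'}
```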
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
For the alpine Traefik Docker image:
docker run [IMAGE] traefik version
ex: docker run traefik traefik version
-->
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in DEBUG level (`--logLevel=DEBUG` switch)
```
(paste your output here)
```
|
1.0
|
Multiple basic auth entries with same username - <!--
DO NOT FILE ISSUES FOR GENERAL SUPPORT QUESTIONS.
The issue tracker is for reporting bugs and feature requests only.
For end-user related support questions, please refer to one of the following:
- Stack Overflow (using the "traefik" tag): https://stackoverflow.com/questions/tagged/traefik
- the Traefik community Slack channel: https://slack.traefik.io
-->
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
Looks like basic auth entries are overwritten when using multiple entries for the same username?
I want multiple passwords for the same user.
I am using `traefik.frontend.auth.basic.users`
<!--
HOW TO WRITE A GOOD BUG REPORT?
- Respect the issue template as much as possible.
- If possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
- The title should be short and descriptive.
- Explain the conditions which led you to report this issue: the context.
- The context should lead to something, an idea or a problem that you’re facing.
- Remain clear and concise.
- Format your messages to help the reader focus on what matters and understand the structure of your message, use Markdown syntax https://help.github.com/articles/github-flavored-markdown
-->
### What did you expect to see?
Use all the provided entries
### What did you see instead?
Only the last entry was used for the username
### Output of `traefik version`: (_What version of Traefik are you using?_)
<!--
For the Traefik Docker image:
docker run [IMAGE] version
ex: docker run traefik version
For the alpine Traefik Docker image:
docker run [IMAGE] traefik version
ex: docker run traefik traefik version
-->
```
(paste your output here)
```
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
```toml
# (paste your configuration here)
```
<!--
Add more configuration information here.
-->
### If applicable, please paste the log output in DEBUG level (`--logLevel=DEBUG` switch)
```
(paste your output here)
```
|
non_process
|
multiple basic auth entries with same username do not file issues for general support questions the issue tracker is for reporting bugs and feature requests only for end user related support questions please refer to one of the following stack overflow using the traefik tag the traefik community slack channel do you want to request a feature or report a bug bug what did you do looks like basic auth entries are overwritten when using multiple entries for the same username i want multiple passwords for same user i am using traefik frontend auth basic users how to write a good bug report respect the issue template as much as possible if possible use the command traefik bug see the title should be short and descriptive explain the conditions which led you to report this issue the context the context should lead to something an idea or a problem that you’re facing remain clear and concise format your messages to help the reader focus on what matters and understand the structure of your message use markdown syntax what did you expect to see use all the provided entries what did you see instead only last entry was used for the username output of traefik version what version of traefik are you using for the traefik docker image docker run version ex docker run traefik version for the alpine traefik docker image docker run traefik version ex docker run traefik traefik version paste your output here what is your environment configuration arguments toml provider platform toml paste your configuration here add more configuration information here if applicable please paste the log output in debug level loglevel debug switch paste your output here
| 0
|
100,179
| 12,507,627,983
|
IssuesEvent
|
2020-06-02 14:24:12
|
nextcloud/forms
|
https://api.github.com/repos/nextcloud/forms
|
closed
|
Design review of current state
|
2. developing bug design enhancement help wanted high overview priority
|
So @skjnldsv and I just did a design review of the latest state in https://github.com/nextcloud/forms/pull/256, here are items to be done. :)
cc @nextcloud/forms feel free to attach your name to items which you would like to pick up (I will do some design and wording ones). **Bold items are ones we considered more important.**
## Fresh usability test with 3 separate people
Sorted somewhat by importance, duplicates some of the items below, but repeating to emphasize their importance:
- [ ] "Only allow one response per user" is confusing, should only be visible when possible
- [x] Sometimes question title is not saved _-> Seems to be outdated._
- [x] Submission view: Pressing Enter in short text submits whole form (if everything is filled), unexpected #413
- [x] Questions with either empty titles or empty answer options are not shown – we should communicate that to the form creator and highlight those unfinished questions? #415
- [x] Not clear which question type (multiselect or checkboxes) is used, should show checkboxes / radio buttons in edit mode too #409
- [x] Not possible to switch question type #352 -> scheduled for V2.1
- [x] Sometimes last response option is not saved when using Tab-key to advance to next field #396
- [x] Not clear that "New question" is editable, should not be prefilled → pull request https://github.com/nextcloud/forms/pull/367 @jancborchardt
- [x] Not clear that "New form" is editable, should not be prefilled → pull request https://github.com/nextcloud/forms/pull/367 @jancborchardt
- [x] Submission view: "People can enter a short answer" for short/long text makes no sense, needs to be differentiated. In create view: "People can enter a short answer" and in submit view "Enter a short answer" #389 _(we can do a quick-fix to make it "Short answer text" and disable the field?) - @jancborchardt for the quick fix_
- [x] Questions with empty titles are possible and will be shown, should be pointed out #397
- [x] Not clear how to get share link, needs to be in sidebar next to radio option too, and ideally in responses view too when empty
- [x] Autosave sometimes messes with input of questions, try typing in answer options fast and sometimes characters disappear https://github.com/nextcloud/forms/pull/327
- [x] "Add question" button position confusing on top, should be below questions https://github.com/nextcloud/forms/pull/328
- [x] Icon of multiple choice question should use radio icon to reduce confusion about what "Multiple choice" is https://github.com/nextcloud/forms/pull/329
## Create
* [ ] Add bottom padding to compensate the add new answer input (optional if looks weird)
* [x] Icon of "Add a question" button should be white in the dark theme as well, as otherwise there's not enough contrast. (But also honor theming color.) #429
* [x] Show number of responses on responses button like "15 responses", and in the action menu as well #422 ->scheduled for V2.1
* [x] **Focus on title after creation** → @jancborchardt pull request https://github.com/nextcloud/forms/pull/369
* [x] **Replace fake title with empty string and show placeholder, fallback to default title**
* [x] **In edit mode, not enough difference between choices types:**
* [x] **Default question title is not great, put more details on chat question title type "New multiple choice question"** #389
* [x] **Checkbox/Radio should not shift horizontally**
* [x] Align checkboxes with title.
* [x] Still show checkbox when editing #409
- [x] Descenders of form title are cut, improve padding #410
* [x] Input of long option-text gets messy, [see comment](https://github.com/nextcloud/forms/issues/296#issuecomment-623120930) → pull request https://github.com/nextcloud/forms/pull/366
* [x] ~~Remove auto completion~~ actually quite useful when creating similar forms
* [x] Form title needs to be marked up as h2
* [x] Empty content wording @jancborchardt
* [x] **3 blue buttons, too much.**
* [x] No need for emptycontent , add text to new question menu +
* [x] **Show results not primary** @jancborchardt
* [x] "Add new answer" => "Add answer number x"
* [x] Remove apostrophes from loading message
- [x] Multiple choice could use the radio circle as icon to be more clear about what it is @jancborchardt https://github.com/nextcloud/forms/pull/329
* [x] Change short/long answer placeholder to "People can enter a short answer" @jancborchardt
* [x] **Put new question menu on the left**
* **Add question zindex issue**
* [x] Lower opacity of drag handle
* [x] Title in narrow view is too narrow
* [x] **Mandatory in title with star \***
## Submit
* [ ] Long Form-Titles just get cut
* [x] Line-Breaks in description are not shown in Submit-View #424
## Navigation
* [x] Sort by last submission
* [x] New forms should be sorted up top
* [x] Change copy to clipboard to "Copy share link" @jancborchardt
* [x] Put form icons as default for all, and checkmark when expired @jancborchardt https://github.com/nextcloud/forms/pull/326
## Sidebar
* [ ] Invert logic of "Only allow one response per user" → "Allow multiple responses per user" and only show if relevant
* [x] Improve shared user component view #425 -> scheduled for V2.2
* [x] Disable typing on expiration field. When manually changing the text, expires falls back to 1970. #414
* [x] Don’t open sidebar by default on mobile as it overlaps/hides the form info
* [x] **Wording** @jancborchardt
* [x] **Structure** @jancborchardt
## Results
* [x] "Summary" view in results #314 ->scheduled for V2.1
* [x] Statistics => Results and be an H2 @jancborchardt
* [x] Add "Back to form" in the top navbar @jotoeri
* [x] Export to csv should be below Heading @jancborchardt
* [x] Just the title as H2: "Results of formtitle" @jancborchardt
## A11y
* [x] Check everything with WAVE, Axe and/or Lighthouse
* [ ] Explore drag&drop with keyboard, maybe focus the handle and use arrow up/down
## Later
* [x] Decide better icons maybe to illustrate what form type it is
* [x] Add navigation second line status #423 -> scheduled for V2.1
* [x] Add navigation new submissions count #422 -> scheduled for V2.1
|
1.0
|
Design review of current state - So @skjnldsv and I just did a design review of the latest state in https://github.com/nextcloud/forms/pull/256, here are items to be done. :)
cc @nextcloud/forms feel free to attach your name to items which you would like to pick up (I will do some design and wording ones). **Bold items are ones we considered more important.**
## Fresh usability test with 3 separate people
Sorted somewhat by importance, duplicates some of the items below, but repeating to emphasize their importance:
- [ ] "Only allow one response per user" is confusing, should only be visible when possible
- [x] Sometimes question title is not saved _-> Seems to be outdated._
- [x] Submission view: Pressing Enter in short text submits whole form (if everything is filled), unexpected #413
- [x] Questions with either empty titles or empty answer options are not shown – we should communicate that to the form creator and highlight those unfinished questions? #415
- [x] Not clear which question type (multiselect or checkboxes) is used, should show checkboxes / radio buttons in edit mode too #409
- [x] Not possible to switch question type #352 -> scheduled for V2.1
- [x] Sometimes last response option is not saved when using Tab-key to advance to next field #396
- [x] Not clear that "New question" is editable, should not be prefilled → pull request https://github.com/nextcloud/forms/pull/367 @jancborchardt
- [x] Not clear that "New form" is editable, should not be prefilled → pull request https://github.com/nextcloud/forms/pull/367 @jancborchardt
- [x] Submission view: "People can enter a short answer" for short/long text makes no sense, needs to be differentiated. In create view: "People can enter a short answer" and in submit view "Enter a short answer" #389 _(we can do a quick-fix to make it "Short answer text" and disable the field?) - @jancborchardt for the quick fix_
- [x] Questions with empty titles are possible and will be shown, should be pointed out #397
- [x] Not clear how to get share link, needs to be in sidebar next to radio option too, and ideally in responses view too when empty
- [x] Autosave sometimes messes with input of questions, try typing in answer options fast and sometimes characters disappear https://github.com/nextcloud/forms/pull/327
- [x] "Add question" button position confusing on top, should be below questions https://github.com/nextcloud/forms/pull/328
- [x] Icon of multiple choice question should use radio icon to reduce confusion about what "Multiple choice" is https://github.com/nextcloud/forms/pull/329
## Create
* [ ] Add bottom padding to compensate the add new answer input (optional if looks weird)
* [x] Icon of "Add a question" button should be white in the dark theme as well, as otherwise there's not enough contrast. (But also honor theming color.) #429
* [x] Show number of responses on responses button like "15 responses", and in the action menu as well #422 ->scheduled for V2.1
* [x] **Focus on title after creation** → @jancborchardt pull request https://github.com/nextcloud/forms/pull/369
* [x] **Replace fake title with empty string and show placeholder, fallback to default title**
* [x] **In edit mode, not enough difference between choices types:**
* [x] **Default question title is not great, put more details on chat question title type "New multiple choice question"** #389
* [x] **Checkbox/Radio should not shift horizontally**
* [x] Align checkboxes with title.
* [x] Still show checkbox when editing #409
- [x] Descenders of form title are cut, improve padding #410
* [x] Input of long option-text gets messy, [see comment](https://github.com/nextcloud/forms/issues/296#issuecomment-623120930) → pull request https://github.com/nextcloud/forms/pull/366
* [x] ~~Remove auto completion~~ actually quite useful when creating similar forms
* [x] Form title needs to be marked up as h2
* [x] Empty content wording @jancborchardt
* [x] **3 blue buttons, too much.**
* [x] No need for emptycontent , add text to new question menu +
* [x] **Show results not primary** @jancborchardt
* [x] "Add new answer" => "Add answer number x"
* [x] Remove apostrophes from loading message
- [x] Multiple choice could use the radio circle as icon to be more clear about what it is @jancborchardt https://github.com/nextcloud/forms/pull/329
* [x] Change short/long answer placeholder to "People can enter a short answer" @jancborchardt
* [x] **Put new question menu on the left**
* **Add question zindex issue**
* [x] Lower opacity of drag handle
* [x] Title in narrow view is too narrow
* [x] **Mandatory in title with star \***
## Submit
* [ ] Long Form-Titles just get cut
* [x] Line-Breaks in description are not shown in Submit-View #424
## Navigation
* [x] Sort by last submission
* [x] New forms should be sorted up top
* [x] Change copy to clipboard to "Copy share link" @jancborchardt
* [x] Put form icons as default for all, and checkmark when expired @jancborchardt https://github.com/nextcloud/forms/pull/326
## Sidebar
* [ ] Invert logic of "Only allow one response per user" → "Allow multiple responses per user" and only show if relevant
* [x] Improve shared user component view #425 -> scheduled for V2.2
* [x] Disable typing on expiration field. When manually changing the text, expires falls back to 1970. #414
* [x] Don’t open sidebar by default on mobile as it overlaps/hides the form info
* [x] **Wording** @jancborchardt
* [x] **Structure** @jancborchardt
## Results
* [x] "Summary" view in results #314 ->scheduled for V2.1
* [x] Statistics => Results and be an H2 @jancborchardt
* [x] Add "Back to form" in the top navbar @jotoeri
* [x] Export to csv should be below Heading @jancborchardt
* [x] Just the title as H2: "Results of formtitle" @jancborchardt
## A11y
* [x] Check everything with WAVE, Axe and/or Lighthouse
* [ ] Explore drag&drop with keyboard, maybe focus the handle and use arrow up/down
## Later
* [x] Decide better icons maybe to illustrate what form type it is
* [x] Add navigation second line status #423 -> scheduled for V2.1
* [x] Add navigation new submissions count #422 -> scheduled for V2.1
|
non_process
|
design review of current state so skjnldsv and i just did a design review of the latest state in here are items to be done cc nextcloud forms feel free to attach your name to items which you would like to pick up i will do some design and wording ones bold items are ones we considered more important fresh usability test with separate people sorted somewhat by importance duplicates some of the items below but repeating to emphasize their importance only allow one response per user is confusing should only be visible when possible sometimes question title is not saved seems to be outdated submission view pressing enter in short text submits whole form if everything is filled unexpected questions with either empty titles or empty answer options are not shown – we should communicate that to the form creator and highlight those unfinished questions not clear which question type multiselect or checkboxes is used should show checkboxes radio buttons in edit mode too not possible to switch question type scheduled for sometimes last response option is not saved when using tab key to advance to next field not clear that new question is editable should not be prefilled → pull request jancborchardt not clear that new form is editable should not be prefilled → pull request jancborchardt submission view people can enter a short answer for short long text makes no sense needs to be differentiated in create view people can enter a short answer and in submit view enter a short answer we can do a quick fix to make it short answer text and disable the field jancborchardt for the quick fix questions with empty titles are possible and will be shown should be pointed out not clear how to get share link needs to be in sidebar next to radio option too and ideally in responses view too when empty autosave sometimes messes with input of questions try typing in answer options fast and sometimes characters disappear add question button position confusing on top should be below questions icon
of multiple choice question should use radio icon to reduce confusion about what multiple choice is create add bottom padding to compensate the add new answer input optional if looks weird icon of add a question button should be white in the dark theme as well as otherwise it s not enough contrast but also honor theming color show number of responses on responses button like responses and in the action menu as well scheduled for focus on title after creation → jancborchardt pull request replace fake title with empty string and show placeholder fallback to default title in edit mode not enough difference between choices types default question title is not great put more details on chat question title type new multiple choice question checkbox radio should not shift horizontally align checkboxes with title still show checkbox when editing descenders of form title are cut improve padding input of long option text gets messy → pull request remove auto completion actually quite useful when creating similar forms form title needs to be marked up as empty content wording jancborchardt blue buttons too much no need for emptycontent add text to new question menu show results not primary jancborchardt add new answer add answer number x remove apostrophes from loading message multiple choice could use the radio circle as icon to be more clear about what it is jancborchardt change short long answer placeholder to people can enter a short answer jancborchardt put new question menu on the left add question zindex issue lower opacity of drag handle title in narrow view is too narrow mandatory in title with star submit long form titles just get cut line breaks in description are not shown in submit view navigation sort by last submission new forms should be sorted up top change copy to clipboard to copy share link jancborchardt put form icons as default for all and checkmark when expired jancborchardt sidebar invert logic of only allow one response per user → allow multiple
responses per user and only show if relevant improve shared user component view scheduled for disable typing on expiration field when manually changing the text expires falls back to don’t open sidebar by default on mobile as it overlaps hides the form info wording jancborchardt structure jancborchardt results summary view in results scheduled for statistics results and be an jancborchardt add back to form in the top navbar jotoeri export to csv should be bellow heading jancborchardt just the title as results of formtitle jancborchardt check everything with wave axe and or lighthouse explore drag drop with keyboard maybe focus the handle and use arrow up down later decide better icons maybe to illustrate what form type it is add navigation second line status scheduled for add navigation new submissions count scheduled for
| 0
|
77,194
| 7,568,240,419
|
IssuesEvent
|
2018-04-22 18:04:34
|
soundar24/roundSlider
|
https://api.github.com/repos/soundar24/roundSlider
|
closed
|
Issue in keyboard action direction
|
bug testing
|
In roundSlider while keyboard interaction, the left decrements the value and the right key increments the value. But it should be done vice versa.
Reported here:
http://roundsliderui.com/#comment-2906392541
Current workaround:
http://jsfiddle.net/soundar24/LpuLe9tr/660/
|
1.0
|
Issue in keyboard action direction - In roundSlider while keyboard interaction, the left decrements the value and the right key increments the value. But it should be done vice versa.
Reported here:
http://roundsliderui.com/#comment-2906392541
Current workaround:
http://jsfiddle.net/soundar24/LpuLe9tr/660/
|
non_process
|
issue in keyboard action direction in roundslider while keyboard interaction the left decrements the value and the right key increments the value but it should be done vice versa reported here current workaround
| 0
|
11,761
| 14,593,428,330
|
IssuesEvent
|
2020-12-19 22:46:23
|
ewen-lbh/portfolio
|
https://api.github.com/repos/ewen-lbh/portfolio
|
opened
|
Started and finished dates instead of created
|
processing
|
Still allow created as a shortcut for setting both as the same date. Makes `wip:` obsolete, if there's a started date and not a finished date, it's wip.
|
1.0
|
Started and finished dates instead of created - Still allow created as a shortcut for setting both as the same date. Makes `wip:` obsolete, if there's a started date and not a finished date, it's wip.
|
process
|
started and finished dates instead of created still allow created as a shortcut for setting both as the same date makes wip obsolete if there s a started date and not a finished date it s wip
| 1
|
18,320
| 24,438,849,687
|
IssuesEvent
|
2022-10-06 13:22:41
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
[FALSE-POSITIVE?]
|
whitelisting process
|
**Domains or links**
js.pusher.com
**More Information**
How did you discover your web site or domain was listed here?
2. Reported by another user.
**Have you requested removal from other sources?**
No
**Additional context**
Add any other context about the problem here.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
1.0
|
[FALSE-POSITIVE?] - **Domains or links**
js.pusher.com
**More Information**
How did you discover your web site or domain was listed here?
2. Reported by another user.
**Have you requested removal from other sources?**
No
**Additional context**
Add any other context about the problem here.
:exclamation:
We understand being listed on a list like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
|
process
|
domains or links js pusher com more information how did you discover your web site or domain was listed here reported by another user have you requested removal from other sources no additional context add any other context about the problem here exclamation we understand being listed on a list like this can be frustrating and embarrassing for many web site owners the first step is to remain calm the second step is to rest assured one of our maintainers will address your issue as soon as possible please make sure you have provided as much information as possible to help speed up the process
| 1
|
192,724
| 14,628,046,403
|
IssuesEvent
|
2020-12-23 13:27:49
|
LIBCAS/ARCLib
|
https://api.github.com/repos/LIBCAS/ARCLib
|
closed
|
Cannot create a new workflow definition
|
to test
|
I am unable to create a new workflow definition; the following error is always returned: _500 Error java.lang.NullPointerException_
|
1.0
|
Cannot create a new workflow definition - I am unable to create a new workflow definition; the following error is always returned: _500 Error java.lang.NullPointerException_
|
non_process
|
cannot create a new workflow definition i am unable to create a new workflow definition the following error is always returned error java lang nullpointerexception
| 0
|
1,258
| 3,791,349,662
|
IssuesEvent
|
2016-03-22 02:09:55
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
ProcessTests.TestSessionId fails on Win 7
|
2 - In Progress System.Diagnostics.Process
|
```
14:19:57 System.Diagnostics.Tests.ProcessTests.TestSessionId [FAIL]
14:19:57 System.EntryPointNotFoundException : Unable to find an entry point named 'ProcessIdToSessionId' in DLL 'api-ms-win-core-processthreads-l1-1-2.dll'.
14:19:57 Stack Trace:
14:19:57 at System.Diagnostics.Tests.Interop.ProcessIdToSessionId(UInt32 dwProcessId, UInt32& pSessionId)
14:19:57 at System.Diagnostics.Tests.ProcessTests.TestSessionId()
```
|
1.0
|
ProcessTests.TestSessionId fails on Win 7 - ```
14:19:57 System.Diagnostics.Tests.ProcessTests.TestSessionId [FAIL]
14:19:57 System.EntryPointNotFoundException : Unable to find an entry point named 'ProcessIdToSessionId' in DLL 'api-ms-win-core-processthreads-l1-1-2.dll'.
14:19:57 Stack Trace:
14:19:57 at System.Diagnostics.Tests.Interop.ProcessIdToSessionId(UInt32 dwProcessId, UInt32& pSessionId)
14:19:57 at System.Diagnostics.Tests.ProcessTests.TestSessionId()
```
|
process
|
processtests testsessionid fails on win system diagnostics tests processtests testsessionid system entrypointnotfoundexception unable to find an entry point named processidtosessionid in dll api ms win core processthreads dll stack trace at system diagnostics tests interop processidtosessionid dwprocessid psessionid at system diagnostics tests processtests testsessionid
| 1
|
3,587
| 6,621,661,187
|
IssuesEvent
|
2017-09-21 20:03:59
|
WikiWatershed/model-my-watershed
|
https://api.github.com/repos/WikiWatershed/model-my-watershed
|
closed
|
Geoprocessing API: Integrate Token Auth
|
BigCZ Geoprocessing API
|
http://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication
- [ Generate tokens for all existing users](http://www.django-rest-framework.org/api-guide/authentication/#by-using-signals)
- On user creation, generate token
- Secure endpoints
|
1.0
|
Geoprocessing API: Integrate Token Auth - http://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication
- [ Generate tokens for all existing users](http://www.django-rest-framework.org/api-guide/authentication/#by-using-signals)
- On user creation, generate token
- Secure endpoints
|
process
|
geoprocessing api integrate token auth on user creation generate token secure endpoints
| 1
|
616,211
| 19,296,438,147
|
IssuesEvent
|
2021-12-12 17:10:00
|
bounswe/2021SpringGroup3
|
https://api.github.com/repos/bounswe/2021SpringGroup3
|
closed
|
Mobile: Accept-Reject Join Community Requests
|
Type: Feature Status: Completed Priority: Medium Component: Mobile
|
Moderators should be able to accept or reject the pending requests for their private communities.
- A page that is only visible for the moderators to list pending invitations should be implemented.
- Moderators should be able to accept or reject the pending requests.
- This page should be accessible from the community detail page.
- Pages' contents should be updated after the operations.
- A pending icon should appear when a user tries to join button to a private community.
**Deadline**: 12.12.2021
At least one team member from the mobile team should review the implementation. (📢 FYI @halilbaydar @kiymetakdemir)
|
1.0
|
Mobile: Accept-Reject Join Community Requests - Moderators should be able to accept or reject the pending requests for their private communities.
- A page that is only visible for the moderators to list pending invitations should be implemented.
- Moderators should be able to accept or reject the pending requests.
- This page should be accessible from the community detail page.
- Pages' contents should be updated after the operations.
- A pending icon should appear when a user tries to join button to a private community.
**Deadline**: 12.12.2021
At least one team member from the mobile team should review the implementation. (📢 FYI @halilbaydar @kiymetakdemir)
|
non_process
|
mobile accept reject join community requests moderators should be able to accept or reject the pending requests for their private communities a page that is only visible for the moderators to list pending invitations should be implemented moderators should be able to accept or reject the pending requests this page should be accessible from the community detail page pages contents should be updated after the operations a pending icon should appear when a user tries to join button to a private community deadline at least one team member from the mobile team should review the implementation 📢 fyi halilbaydar kiymetakdemir
| 0
|
438,683
| 12,643,027,053
|
IssuesEvent
|
2020-06-16 09:08:18
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.smule.com - desktop site instead of mobile site
|
browser-fenix engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/54171 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.smule.com/s/upload/arr/661238525_2892205/arrangement/edit
**Browser / Version**: Firefox Mobile 77.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
Desktop tool feature error!!
everytime i set to desktop, after i open other app (exp. Notepad) that is back reload and back to mobile feature but desktop feature in cheklist. After reload desktop feature can not ON OFF, I should open new window with same link
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.smule.com - desktop site instead of mobile site - <!-- @browser: Firefox Mobile 77.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:77.0) Gecko/77.0 Firefox/77.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/54171 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.smule.com/s/upload/arr/661238525_2892205/arrangement/edit
**Browser / Version**: Firefox Mobile 77.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Chrome
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
Desktop tool feature error!!
everytime i set to desktop, after i open other app (exp. Notepad) that is back reload and back to mobile feature but desktop feature in cheklist. After reload desktop feature can not ON OFF, I should open new window with same link
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
desktop site instead of mobile site url browser version firefox mobile operating system android tested another browser yes chrome problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce desktop tool feature error everytime i set to desktop after i open other app exp notepad that is back reload and back to mobile feature but desktop feature in cheklist after reload desktop feature can not on off i should open new window with same link browser configuration none from with ❤️
| 0
|
495,511
| 14,283,396,458
|
IssuesEvent
|
2020-11-23 10:57:23
|
MrDaGree/ELS-FiveM
|
https://api.github.com/repos/MrDaGree/ELS-FiveM
|
closed
|
When i have the ELS script on my server the headlights on the cars with ELS just keeps blinking like if the emergency lights was on, but they are not.
|
priority: low status: pending type: question
|
**Describe the bug**
When i have the ELS script on my server the headlights on the cars with ELS just keeps blinking like if the emergency lights was on, but they are not.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
When I turn it off it should not still blink
**Screenshots**
If applicable, add screenshots to help explain your problem.
**ELS Information:**
Version #: 1.75
Server Version #:
Screenshots of Config.lua, vcf.lua, vcf folder:
**Additional context**
Add any other context about the problem here.
|
1.0
|
When i have the ELS script on my server the headlights on the cars with ELS just keeps blinking like if the emergency lights was on, but they are not. - **Describe the bug**
When i have the ELS script on my server the headlights on the cars with ELS just keeps blinking like if the emergency lights was on, but they are not.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
When I turn it off it should not still blink
**Screenshots**
If applicable, add screenshots to help explain your problem.
**ELS Information:**
Version #: 1.75
Server Version #:
Screenshots of Config.lua, vcf.lua, vcf folder:
**Additional context**
Add any other context about the problem here.
|
non_process
|
when i have the els script on my server the headlights on the cars with els just keeps blinking like if the emergency lights was on but they are not describe the bug when i have the els script on my server the headlights on the cars with els just keeps blinking like if the emergency lights was on but they are not to reproduce steps to reproduce the behavior expected behavior when i turn it off it should not still blink screenshots if applicable add screenshots to help explain your problem els information version server version screenshots of config lua vcf lua vcf folder additional context add any other context about the problem here
| 0
|
15,701
| 19,848,299,769
|
IssuesEvent
|
2022-01-21 09:25:08
|
deepset-ai/haystack
|
https://api.github.com/repos/deepset-ai/haystack
|
reopened
|
Using Tika Converter
|
type:question topic:preprocessing
|
**Question**
How can I run haystack docker container with default file converter as tika?
**Additional context**
And also built in tika converter can convert and process ppt files? If no how can I add tika ppt converter to haystack? To process also from ppt files.
|
1.0
|
Using Tika Converter - **Question**
How can I run haystack docker container with default file converter as tika?
**Additional context**
And also built in tika converter can convert and process ppt files? If no how can I add tika ppt converter to haystack? To process also from ppt files.
|
process
|
using tika converter question how can i run haystack docker container with default file converter as tika additional context and also built in tika converter can convert and process ppt files if no how can i add tika ppt converter to haystack to process also from ppt files
| 1
|
6,105
| 8,966,753,277
|
IssuesEvent
|
2019-01-29 00:13:42
|
qgis/QGIS-Documentation
|
https://api.github.com/repos/qgis/QGIS-Documentation
|
closed
|
[processing][needs-docs] remove R provider from Processing core
|
Automatic new feature Easy Processing User Manual
|
Original commit: https://github.com/qgis/QGIS/commit/144492d4ce69b02cce336e4df59ab7e4dd80b8f3 by web-flow
[processing][needs-docs] remove R provider from Processing core
|
1.0
|
[processing][needs-docs] remove R provider from Processing core - Original commit: https://github.com/qgis/QGIS/commit/144492d4ce69b02cce336e4df59ab7e4dd80b8f3 by web-flow
[processing][needs-docs] remove R provider from Processing core
|
process
|
remove r provider from processing core original commit by web flow remove r provider from processing core
| 1
|
6,289
| 8,658,810,202
|
IssuesEvent
|
2018-11-28 02:46:03
|
Snownee/Cuisine
|
https://api.github.com/repos/Snownee/Cuisine
|
closed
|
Immersive Engineering Cloche compatibility
|
Compatibility
|
The Garden Cloche from Immersive Engineering is quite a handy tool to automate farming. It would be great if cuisine crops where compatible with it.
I also think food automation in general could be a great addition. Like supporting the IE Squeezer for juices maybe?
|
True
|
Immersive Engineering Cloche compatibility - The Garden Cloche from Immersive Engineering is quite a handy tool to automate farming. It would be great if cuisine crops where compatible with it.
I also think food automation in general could be a great addition. Like supporting the IE Squeezer for juices maybe?
|
non_process
|
immersive engineering cloche compatibility the garden cloche from immersive engineering is quite a handy tool to automate farming it would be great if cuisine crops where compatible with it i also think food automation in general could be a great addition like supporting the ie squeezer for juices maybe
| 0
|
13,118
| 15,504,768,477
|
IssuesEvent
|
2021-03-11 14:39:13
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
HybridWorker Group created by this script
|
Pri2 assigned-to-author automation/svc doc-enhancement process-automation/subsvc triaged
|
Hi team,
I moved this from here: https://github.com/MicrosoftDocs/azure-docs.fr-fr/issues/607
The user is makign a suggestion on the original documentation
@pajcarpentier commented 6 hours ago — with docs.microsoft.com
Hello all.
you should precise that the Hyrid Worker group is created by the script New-OnPremiseHybridWorker.ps1.
As it is a mandatory field, and there is nowhere it is explain that the Hybrid worker group is created when installing the Hybrid Worker, it makes confusion on the field.
I mean, the following section :
"When the agent has successfully connected to Azure Monitor logs, it's listed on the Connected Sources tab of the log analytics Settings page. You can verify that the agent has correctly downloaded the Automation solution when it has a folder called AzureAutomationFiles in C:\Program Files\Microsoft Monitoring Agent\Agent. To confirm the version of the Hybrid Runbook Worker, you can browse to C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ and note the \version subfolder."
has to be updated as the subfolder "AzureAutomationFiles" is only installed after the "New-OnPremiseHybridWorker.ps1" script has been ran on the On Premise machine.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#feedback)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
1.0
|
HybridWorker Group created by this script - Hi team,
I moved this from here: https://github.com/MicrosoftDocs/azure-docs.fr-fr/issues/607
The user is makign a suggestion on the original documentation
@pajcarpentier commented 6 hours ago — with docs.microsoft.com
Hello all.
you should precise that the Hyrid Worker group is created by the script New-OnPremiseHybridWorker.ps1.
As it is a mandatory field, and there is nowhere it is explain that the Hybrid worker group is created when installing the Hybrid Worker, it makes confusion on the field.
I mean, the following section :
"When the agent has successfully connected to Azure Monitor logs, it's listed on the Connected Sources tab of the log analytics Settings page. You can verify that the agent has correctly downloaded the Automation solution when it has a folder called AzureAutomationFiles in C:\Program Files\Microsoft Monitoring Agent\Agent. To confirm the version of the Hybrid Runbook Worker, you can browse to C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\ and note the \version subfolder."
has to be updated as the subfolder "AzureAutomationFiles" is only installed after the "New-OnPremiseHybridWorker.ps1" script has been ran on the On Premise machine.
Thank you.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 7b29372c-7bd9-7da2-4cff-9afbb432bccf
* Version Independent ID: 66ce101d-d21b-3fdf-be70-7f9cadc1570e
* Content: [Azure Automation Windows Hybrid Runbook Worker](https://docs.microsoft.com/en-us/azure/automation/automation-windows-hrw-install#feedback)
* Content Source: [articles/automation/automation-windows-hrw-install.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-windows-hrw-install.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @bobbytreed
* Microsoft Alias: **robreed**
|
process
|
hybridworker group created by this script hi team i moved this from here the user is makign a suggestion on the original documentation pajcarpentier commented hours ago — with docs microsoft com hello all you should precise that the hyrid worker group is created by the script new onpremisehybridworker as it is a mandatory field and there is nowhere it is explain that the hybrid worker group is created when installing the hybrid worker it makes confusion on the field i mean the following section when the agent has successfully connected to azure monitor logs it s listed on the connected sources tab of the log analytics settings page you can verify that the agent has correctly downloaded the automation solution when it has a folder called azureautomationfiles in c program files microsoft monitoring agent agent to confirm the version of the hybrid runbook worker you can browse to c program files microsoft monitoring agent agent azureautomation and note the version subfolder has to be updated as the subfolder azureautomationfiles is only installed after the new onpremisehybridworker script has been ran on the on premise machine thank you document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login bobbytreed microsoft alias robreed
| 1
|
244,956
| 26,492,815,586
|
IssuesEvent
|
2023-01-18 01:04:06
|
turkdevops/play-with-docker
|
https://api.github.com/repos/turkdevops/play-with-docker
|
closed
|
CVE-2020-8912 (Low) detected in github.com/aws/aws-sdk-go-v1.12.15 - autoclosed
|
security vulnerability
|
## CVE-2020-8912 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/aws/aws-sdk-go-v1.12.15</b></p></summary>
<p>AWS SDK for the Go programming language.</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/aws/aws-sdk-go/@v/v1.12.15.zip">https://proxy.golang.org/github.com/aws/aws-sdk-go/@v/v1.12.15.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **github.com/aws/aws-sdk-go-v1.12.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/play-with-docker/commit/27377d4ea18db54381a8dc972091f3c342337ec9">27377d4ea18db54381a8dc972091f3c342337ec9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability in the in-band key negotiation exists in the AWS S3 Crypto SDK for GoLang versions prior to V2. An attacker with write access to the targeted bucket can change the encryption algorithm of an object in the bucket, which can then allow them to change AES-GCM to AES-CTR. Using this in combination with a decryption oracle can reveal the authentication key used by AES-GCM as decrypting the GMAC tag leaves the authentication key recoverable as an algebraic equation. It is recommended to update your SDK to V2 or later, and re-encrypt your files.
<p>Publish Date: 2020-08-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-8912>CVE-2020-8912</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-7f33-f4f5-xwgw">https://github.com/advisories/GHSA-7f33-f4f5-xwgw</a></p>
<p>Release Date: 2020-08-17</p>
<p>Fix Resolution: v1.34.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8912 (Low) detected in github.com/aws/aws-sdk-go-v1.12.15 - autoclosed - ## CVE-2020-8912 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/aws/aws-sdk-go-v1.12.15</b></p></summary>
<p>AWS SDK for the Go programming language.</p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/aws/aws-sdk-go/@v/v1.12.15.zip">https://proxy.golang.org/github.com/aws/aws-sdk-go/@v/v1.12.15.zip</a></p>
<p>
Dependency Hierarchy:
- :x: **github.com/aws/aws-sdk-go-v1.12.15** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/play-with-docker/commit/27377d4ea18db54381a8dc972091f3c342337ec9">27377d4ea18db54381a8dc972091f3c342337ec9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability in the in-band key negotiation exists in the AWS S3 Crypto SDK for GoLang versions prior to V2. An attacker with write access to the targeted bucket can change the encryption algorithm of an object in the bucket, which can then allow them to change AES-GCM to AES-CTR. Using this in combination with a decryption oracle can reveal the authentication key used by AES-GCM as decrypting the GMAC tag leaves the authentication key recoverable as an algebraic equation. It is recommended to update your SDK to V2 or later, and re-encrypt your files.
<p>Publish Date: 2020-08-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-8912>CVE-2020-8912</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-7f33-f4f5-xwgw">https://github.com/advisories/GHSA-7f33-f4f5-xwgw</a></p>
<p>Release Date: 2020-08-17</p>
<p>Fix Resolution: v1.34.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve low detected in github com aws aws sdk go autoclosed cve low severity vulnerability vulnerable library github com aws aws sdk go aws sdk for the go programming language library home page a href dependency hierarchy x github com aws aws sdk go vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability in the in band key negotiation exists in the aws crypto sdk for golang versions prior to an attacker with write access to the targeted bucket can change the encryption algorithm of an object in the bucket which can then allow them to change aes gcm to aes ctr using this in combination with a decryption oracle can reveal the authentication key used by aes gcm as decrypting the gmac tag leaves the authentication key recoverable as an algebraic equation it is recommended to update your sdk to or later and re encrypt your files publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
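The CVE record above hinges on the difference between authenticated encryption (AES-GCM) and unauthenticated encryption (AES-CTR): once the attacker downgrades the mode, ciphertext edits go undetected. A minimal stdlib sketch of that distinction, using an HMAC tag as a stand-in authenticator — the toy XOR "cipher" and function names here are illustrative only, not the AWS SDK's actual scheme:

```python
import hashlib
import hmac

def seal(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream "cipher": XOR with a keystream derived from the key.
    # Illustrative only -- NOT real encryption.
    stream = hashlib.sha256(key).digest()
    ct = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(plaintext))
    tag = hmac.new(key, ct, hashlib.sha256).digest()  # the "authenticated" part
    return ct + tag

def open_authenticated(key: bytes, sealed: bytes) -> bytes:
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampering detected")
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(ct))

key = b"k" * 32
sealed = seal(key, b"attack at dawn")
# Flip one ciphertext bit, as the CVE's attacker could after switching modes.
tampered = bytes([sealed[0] ^ 1]) + sealed[1:]
try:
    open_authenticated(key, tampered)
    detected = False
except ValueError:
    detected = True
print(detected)  # True: the tag catches the edit; plain CTR would not
```

With the tag check removed (the AES-CTR situation), the same tampered ciphertext would decrypt silently to altered plaintext — which is exactly why the advisory recommends re-encrypting under the V2 SDK.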
|
425,551
| 12,342,100,844
|
IssuesEvent
|
2020-05-14 23:42:52
|
themeetinghouse/web
|
https://api.github.com/repos/themeetinghouse/web
|
closed
|
Replace Horizontal Scroll List with Grid Layout (Watch Page)
|
Priority 1
|
See: https://www.figma.com/file/Ym6BNDP4qWHnwZ8Atfk0B06k/TMH-Website?node-id=289%3A25
In addition to the switch to a grid layout, horizontal lines need to be added above headers too.
|
1.0
|
Replace Horizontal Scroll List with Grid Layout (Watch Page) - See: https://www.figma.com/file/Ym6BNDP4qWHnwZ8Atfk0B06k/TMH-Website?node-id=289%3A25
In addition to the switch to a grid layout, horizontal lines need to be added above headers too.
|
non_process
|
replace horizontal scroll list with grid layout watch page see in addition to the switch to a grid layout horizontal lines need to be added above headers too
| 0
|
21,828
| 30,318,331,766
|
IssuesEvent
|
2023-07-10 17:12:21
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New term - organismStatus
|
Term - add Class - Occurrence Process - needs Task Group
|
### This proposal is under active development in the ['OSR - How Did It Die?' Task Group](https://www.tdwg.org/community/osr/how-did-it-die/).
* Submitter: Sophia Ratcliffe - NBN Trust (https://nbnatlas.org/)
* Proponents (at least two independent parties who need this term): I have had requests from several of our data providers asking for the ability to supply and filter records by alive/dead status
* Justification (why is this term necessary?): We receive many records where the occurrence was of a dead animal and at the moment there is no way store the status of the organism in a DwC term with a controlled vocabulary. It is really important in marine records, which often contain stranding information, and for road kill records.
Proposed definition of the new term:
* Term name (in lowerCamelCase): organismStatus
* Class (e.g. Location, Taxon): organism
* Definition of the term: A description of the status of the organism (alive or dead)
* Comment (examples, recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary, with the terms alive, dead, unknown
* Refines (identifier of the broader term this term refines, if applicable): Currently we've been using occurrenceRemarks, but it is not easily searchable.
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): N/A
* ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): N/A
|
1.0
|
New term - organismStatus - ### This proposal is under active development in the ['OSR - How Did It Die?' Task Group](https://www.tdwg.org/community/osr/how-did-it-die/).
* Submitter: Sophia Ratcliffe - NBN Trust (https://nbnatlas.org/)
* Proponents (at least two independent parties who need this term): I have had requests from several of our data providers asking for the ability to supply and filter records by alive/dead status
* Justification (why is this term necessary?): We receive many records where the occurrence was of a dead animal and at the moment there is no way store the status of the organism in a DwC term with a controlled vocabulary. It is really important in marine records, which often contain stranding information, and for road kill records.
Proposed definition of the new term:
* Term name (in lowerCamelCase): organismStatus
* Class (e.g. Location, Taxon): organism
* Definition of the term: A description of the status of the organism (alive or dead)
* Comment (examples, recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary, with the terms alive, dead, unknown
* Refines (identifier of the broader term this term refines, if applicable): Currently we've been using occurrenceRemarks, but it is not easily searchable.
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): N/A
* ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): N/A
|
process
|
new term organismstatus this proposal is under active development in the submitter sophia ratcliffe nbn trust proponents at least two independent parties who need this term i have had requests from several of our data providers asking for the ability to supply and filter records by alive dead status justification why is this term necessary we receive many records where the occurrence was of a dead animal and at the moment there is no way store the status of the organism in a dwc term with a controlled vocabulary it is really important in marine records which often contain stranding information and for road kill records proposed definition of the new term term name in lowercamelcase organismstatus class e g location taxon organism definition of the term a description of the status of the organism alive or dead comment examples recommendations regarding content etc recommended best practice is to use a controlled vocabulary with the terms alive dead unknown refines identifier of the broader term this term refines if applicable currently we ve been using occurrenceremarks but it is not easily searchable replaces identifier of the existing term that would be deprecated and replaced by this term if applicable n a abcd xpath of the equivalent term in abcd if applicable n a
| 1
|
706
| 3,202,184,655
|
IssuesEvent
|
2015-10-02 12:40:49
|
dexX7/java-libbitcoinconsensus
|
https://api.github.com/repos/dexX7/java-libbitcoinconsensus
|
closed
|
Cleanup and specify the use of labels
|
process
|
Some labels are slightly overlapping or ambiguous, such as `build process`, `release` and `ci`.
The label `architecture` might be too broad.
Ideally it is clarified when and how labels should be used.
|
1.0
|
Cleanup and specify the use of labels - Some labels are slightly overlapping or ambiguous, such as `build process`, `release` and `ci`.
The label `architecture` might be too broad.
Ideally it is clarified when and how labels should be used.
|
process
|
cleanup and specify the use of labels some labels are slightly overlapping or ambiguous such as build process release and ci the label architecture might be too broad ideally it is clarified when and how labels should be used
| 1
|
5,496
| 8,362,921,098
|
IssuesEvent
|
2018-10-03 18:14:20
|
cityofaustin/techstack
|
https://api.github.com/repos/cityofaustin/techstack
|
closed
|
Refine content for DSD process pages
|
Content type: Process Page Department: Development Services Site Content Size: M Team: Content
|
- [ ] Landing page
- [ ] Overview page (will we need that for our process pages?) Share history of that page with Tori
- [ ] Refine content for each step
|
1.0
|
Refine content for DSD process pages - - [ ] Landing page
- [ ] Overview page (will we need that for our process pages?) Share history of that page with Tori
- [ ] Refine content for each step
|
process
|
refine content for dsd process pages landing page overview page will we need that for our process pages share history of that page with tori refine content for each step
| 1
|
5,551
| 8,393,853,558
|
IssuesEvent
|
2018-10-09 21:52:34
|
google/eme_logger
|
https://api.github.com/repos/google/eme_logger
|
closed
|
[meta] Import open issues
|
process
|
There are open issues in a private bug tracker that should be imported into github now that the github issue tracker has been enabled.
|
1.0
|
[meta] Import open issues - There are open issues in a private bug tracker that should be imported into github now that the github issue tracker has been enabled.
|
process
|
import open issues there are open issues in a private bug tracker that should be imported into github now that the github issue tracker has been enabled
| 1
|
81,179
| 7,770,262,239
|
IssuesEvent
|
2018-06-04 08:14:58
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
opened
|
api.test_audit uses incorrect query to filter the audits specific to the test
|
6.3 6.4 Bug Low test-failure
|
`test_positive_update_by_type` sometimes fails (on parralel run) on fetching the appropriate audit records, since it relies on it being a first in the returned list.
This is not true especially on running in parralel. We should use more specific search query, like using the name of the architecture instead of using just a type:
in the example below, we created and updated an arch called `pvohRDPuHO`, however, there was another test dealing with arch called 'EpXnvsLgyc'.
```
created_entity = created_entity.update(['name'])
audit = entities.Audit().search(
query={'search': 'type={0}'.format(
created_entity.__class__.__name__.lower())
}
)[0]
> self.assertEqual(audit.auditable_name, name)
E AssertionError: 'EpXnvsLgyc' != 'pvohRDPuHO'
E - EpXnvsLgyc
E + pvohRDPuHO
```
The results looked like this and hence we blindly accessed `[0]`, it matched the record related to a different test:
```
..
"results": [{"user_id":4,"user_type":null,"user_name":"admin","version":1,"comment":null,"associated_id":null,"associated_type":null,"remote_address":"10.8.212.13","associated_name":null,"created_at":"2018-06-01 16:53:49 UTC","id":68,"auditable_id":5,"auditable_name":"EpXnvsLgyc","auditable_type":"Architecture","action":"create","audited_changes":{"name":"EpXnvsLgyc"}},
...
```
|
1.0
|
api.test_audit uses incorrect query to filter the audits specific to the test - `test_positive_update_by_type` sometimes fails (on parralel run) on fetching the appropriate audit records, since it relies on it being a first in the returned list.
This is not true especially on running in parralel. We should use more specific search query, like using the name of the architecture instead of using just a type:
in the example below, we created and updated an arch called `pvohRDPuHO`, however, there was another test dealing with arch called 'EpXnvsLgyc'.
```
created_entity = created_entity.update(['name'])
audit = entities.Audit().search(
query={'search': 'type={0}'.format(
created_entity.__class__.__name__.lower())
}
)[0]
> self.assertEqual(audit.auditable_name, name)
E AssertionError: 'EpXnvsLgyc' != 'pvohRDPuHO'
E - EpXnvsLgyc
E + pvohRDPuHO
```
The results looked like this and hence we blindly accessed `[0]`, it matched the record related to a different test:
```
..
"results": [{"user_id":4,"user_type":null,"user_name":"admin","version":1,"comment":null,"associated_id":null,"associated_type":null,"remote_address":"10.8.212.13","associated_name":null,"created_at":"2018-06-01 16:53:49 UTC","id":68,"auditable_id":5,"auditable_name":"EpXnvsLgyc","auditable_type":"Architecture","action":"create","audited_changes":{"name":"EpXnvsLgyc"}},
...
```
|
non_process
|
api test audit uses incorrect query to filter the audits specific to the test test positive update by type sometimes fails on parralel run on fetching the appropriate audit records since it relies on it being a first in the returned list this is not true especially on running in parralel we should use more specific search query like using the name of the architecture instead of using just a type in the example below we created and updated an arch called pvohrdpuho however there was another test dealing with arch called epxnvslgyc created entity created entity update audit entities audit search query search type format created entity class name lower self assertequal audit auditable name name e assertionerror epxnvslgyc pvohrdpuho e epxnvslgyc e pvohrdpuho the results looked like this and hence we blindly accessed it matched the record related to a different test results user id user type null user name admin version comment null associated id null associated type null remote address associated name null created at utc id auditable id auditable name epxnvslgyc auditable type architecture action create audited changes name epxnvslgyc
| 0
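The fix the robottelo record asks for — matching the audit by the entity's name instead of trusting that index `[0]` belongs to this test — can be sketched in a few lines. Field names mirror the JSON excerpt in the record; the helper itself is hypothetical, not part of the Robottelo API:

```python
def find_audit(results, entity_type, name):
    """Return the first audit record matching both type and name, not just results[0]."""
    for record in results:
        if (record.get("auditable_type", "").lower() == entity_type.lower()
                and record.get("auditable_name") == name):
            return record
    return None

results = [
    {"auditable_name": "EpXnvsLgyc", "auditable_type": "Architecture"},
    {"auditable_name": "pvohRDPuHO", "auditable_type": "Architecture"},
]
# Order-independent lookup: parallel test runs can no longer shadow the record.
match = find_audit(results, "architecture", "pvohRDPuHO")
print(match["auditable_name"])  # pvohRDPuHO
```

The same effect can be had server-side by tightening the search query (e.g. filtering on the name as well as the type) so the API returns only the relevant audit in the first place.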
|
205,232
| 23,313,810,373
|
IssuesEvent
|
2022-08-08 10:38:41
|
symfony/symfony-docs
|
https://api.github.com/repos/symfony/symfony-docs
|
closed
|
[Security] Add `#[IsGranted()]`
|
Security hasPR
|
| Q | A
| ------------ | ---
| Feature PR | symfony/symfony#46907
| PR author(s) | @nicolas-grekas
| Merged in | 6.2
|
True
|
[Security] Add `#[IsGranted()]` - | Q | A
| ------------ | ---
| Feature PR | symfony/symfony#46907
| PR author(s) | @nicolas-grekas
| Merged in | 6.2
|
non_process
|
add q a feature pr symfony symfony pr author s nicolas grekas merged in
| 0
|
8,937
| 12,054,980,614
|
IssuesEvent
|
2020-04-15 12:11:59
|
TOMP-WG/TOMP-API
|
https://api.github.com/repos/TOMP-WG/TOMP-API
|
closed
|
Use Git Flow for changes to the API
|
process
|
As part of the improved versioning and use of git's features, we should also make better and more consistent use of branches. I suggest we use a variant of the ubiquitous Git Flow style of doing branches. Concretely, this means:
- the `master` branch is **only** used for release commits (so v1.2 -> v.1.2.1 -> v1.3 -> v2.0 etc.)
- `develop` branches (here two, for the next major and minor releases) is used for incremental changes
- when a develop version is ready for release, it briefly goes into a `release-v.x.x.x` branch to make the last few changes and bug fixes before being merged into master with the next version number
- bigger changes that need a lot of work happen in `feature/<x>` branches that branch off from develop and are merged back into it when completed
- if a bug needs a quick fix, a `hotfix` branch can be made branching from a release in master and be merged back into it with a version bump
Git Flow is the most common organisation of branches used among projects that have any such organisation. It is completely compatible and even symbiotic with the other development practices we intend to adopt like semantic versioning #102 and releases using tags #107. Since it is about how we use branches I cannot put this change in a PR, so I'll create the branches already to show how it'd work. This is non-destructive so we can revert to the current practice if we do not approve.
See [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/) for the Git Flow specification.
|
1.0
|
Use Git Flow for changes to the API - As part of the improved versioning and use of git's features, we should also make better and more consistent use of branches. I suggest we use a variant of the ubiquitous Git Flow style of doing branches. Concretely, this means:
- the `master` branch is **only** used for release commits (so v1.2 -> v.1.2.1 -> v1.3 -> v2.0 etc.)
- `develop` branches (here two, for the next major and minor releases) is used for incremental changes
- when a develop version is ready for release, it briefly goes into a `release-v.x.x.x` branch to make the last few changes and bug fixes before being merged into master with the next version number
- bigger changes that need a lot of work happen in `feature/<x>` branches that branch off from develop and are merged back into it when completed
- if a bug needs a quick fix, a `hotfix` branch can be made branching from a release in master and be merged back into it with a version bump
Git Flow is the most common organisation of branches used among projects that have any such organisation. It is completely compatible and even symbiotic with the other development practices we intend to adopt like semantic versioning #102 and releases using tags #107. Since it is about how we use branches I cannot put this change in a PR, so I'll create the branches already to show how it'd work. This is non-destructive so we can revert to the current practice if we do not approve.
See [https://nvie.com/posts/a-successful-git-branching-model/](https://nvie.com/posts/a-successful-git-branching-model/) for the Git Flow specification.
|
process
|
use git flow for changes to the api as part of the improved versioning and use of git s features we should also make better and more consistent use of branches i suggest we use a variant of the ubiquitous git flow style of doing branches concretely this means the master branch is only used for release commits so v etc develop branches here two for the next major and minor releases is used for incremental changes when a develop version is ready for release it briefly goes into a release v x x x branch to make the last few changes and bug fixes before being merged into master with the next version number bigger changes that need a lot of work happen in feature branches that branch off from develop and are merged back into it when completed if a bug needs a quick fix a hotfix branch can be made branching from a release in master and be merged back into it with a version bump git flow is the most common organisation of branches used among projects that have any such organisation it is completely compatible and even symbiotic with the other development practices we intend to adopt like semantic versioning and releases using tags since it is about how we use branches i cannot put this change in a pr so i ll create the branches already to show how it d work this is non destructive so we can revert to the current practice if we do not approve see for the git flow specification
| 1
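The Git Flow record above pairs branch roles with semantic version bumps on `master` (v1.2 -> v1.2.1 -> v1.3 -> v2.0). A small, self-contained sketch of that release numbering — the `bump` function is illustrative, not part of the TOMP tooling:

```python
def bump(version: str, part: str) -> str:
    """Bump a 'vMAJOR.MINOR[.PATCH]' tag the way the release line above does."""
    nums = [int(n) for n in version.lstrip("v").split(".")]
    while len(nums) < 3:          # tolerate short tags like 'v1.2'
        nums.append(0)
    major, minor, patch = nums
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    elif part == "patch":
        patch += 1
    else:
        raise ValueError(part)
    # Render vX.Y for .0 patch levels to match the record's tag style.
    return f"v{major}.{minor}" if patch == 0 else f"v{major}.{minor}.{patch}"

print(bump("v1.2", "patch"))    # v1.2.1  (hotfix branch merged back)
print(bump("v1.2.1", "minor"))  # v1.3    (develop -> release -> master)
print(bump("v1.3", "major"))    # v2.0    (breaking-change release)
```

Each comment maps a bump kind to the branch that produces it in the Git Flow model described in the record: hotfix branches yield patch bumps, the develop/release line yields minor bumps, and breaking changes yield major bumps.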
|
79,607
| 7,720,888,364
|
IssuesEvent
|
2018-05-24 01:53:02
|
fabric8io/fabric8-test
|
https://api.github.com/repos/fabric8io/fabric8-test
|
opened
|
We need to create a job that periodically deletes test accounts github repos
|
E2E-test booster-test
|
If we don't do this - we will have NNNN repos in the github account.
|
2.0
|
We need to create a job that periodically deletes test accounts github repos - If we don't do this - we will have NNNN repos in the github account.
|
non_process
|
we need to create a job that periodically deletes test accounts github repos if we don t do this we will have nnnn repos in the github account
| 0
|
4,001
| 6,927,272,613
|
IssuesEvent
|
2017-11-30 22:12:32
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Provide way to send Control+C to proccess
|
api-needs-work area-System.Diagnostics.Process up-for-grabs
|
Currently .NET doesn't have way to exit nicely from created console process.
It would be nice if we had a such a method.
``` csharp
namespace System.Diagnostics
{
public class Process
{
public void SendCtrlCSignal()
{
...
}
}
}
```
It can be implemented following way on Windows: http://stackoverflow.com/a/15281070/61505 , for unix we can call `kill (pid, SIGINT);` http://stackoverflow.com/a/1761182/61505
It provide a lot of problems for other users too http://stanislavs.org/stopping-command-line-applications-programatically-with-ctrl-c-events-from-net/
|
1.0
|
Provide way to send Control+C to proccess - Currently .NET doesn't have way to exit nicely from created console process.
It would be nice if we had a such a method.
``` csharp
namespace System.Diagnostics
{
public class Process
{
public void SendCtrlCSignal()
{
...
}
}
}
```
It can be implemented following way on Windows: http://stackoverflow.com/a/15281070/61505 , for unix we can call `kill (pid, SIGINT);` http://stackoverflow.com/a/1761182/61505
It provide a lot of problems for other users too http://stanislavs.org/stopping-command-line-applications-programatically-with-ctrl-c-events-from-net/
|
process
|
provide way to send control c to proccess currently net doesn t have way to exit nicely from created console process it would be nice if we had a such a method csharp namespace system diagnostics public class process public void sendctrlcsignal it can be implemented following way on windows for unix we can call kill pid sigint it provide a lot of problems for other users too
| 1
|
18,147
| 24,186,904,073
|
IssuesEvent
|
2022-09-23 14:02:00
|
cloudfoundry/korifi
|
https://api.github.com/repos/cloudfoundry/korifi
|
closed
|
[Feature]: Developer can push apps using the top-level `health-check-type` field in the manifest
|
Top-level process config
|
### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
* **GIVEN** I have the following node app:
```js
var http = require('http');
http.createServer(function (request, response) {
response.writeHead(500, {'Content-Type': 'text/plain'});
response.end('no');
}).listen(process.env.PORT);
```
with the following `manifest.yml`:
```yaml
---
applications:
- name: real-app
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push fails
* **GIVEN** I have the same app with the following manifest:
```yaml
---
applications:
- name: my-app
health-check-type: port
processes:
- type: web
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push fails
|
1.0
|
[Feature]: Developer can push apps using the top-level `health-check-type` field in the manifest - ### Background
**As a** developer
**I want** top-level process configuration in manifests to be supported
**So that** I can use shortcut `cf push` flags like `-c`, `-i`, `-m` etc.
### Acceptance Criteria
* **GIVEN** I have the following node app:
```js
var http = require('http');
http.createServer(function (request, response) {
response.writeHead(500, {'Content-Type': 'text/plain'});
response.end('no');
}).listen(process.env.PORT);
```
with the following `manifest.yml`:
```yaml
---
applications:
- name: real-app
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push fails
* **GIVEN** I have the same app with the following manifest:
```yaml
---
applications:
- name: my-app
health-check-type: port
processes:
- type: web
health-check-type: http
```
**WHEN I** `cf push`
**THEN I** see the push fails
|
process
|
developer can push apps using the top level health check type field in the manifest background as a developer i want top level process configuration in manifests to be supported so that i can use shortcut cf push flags like c i m etc acceptance criteria given i have the following node app js var http require http http createserver function request response response writehead content type text plain response end no listen process env port with the following manifest yml yaml applications name real app health check type http when i cf push then i see the push fails given i have the same app with the following manifest yaml applications name my app health check type port processes type web health check type http when i cf push then i see the push fails
| 1
|
38,402
| 8,470,252,175
|
IssuesEvent
|
2018-10-24 03:14:39
|
MicrosoftDocs/live-share
|
https://api.github.com/repos/MicrosoftDocs/live-share
|
closed
|
[VS Code] The host rejected your request to join the collaboration session.
|
area: share and join investigating needs more info vscode
|
<!--
For Visual Studio problems/feedback, please use the "Report a Problem..." feature built into the tool. See https://aka.ms/vsls-vsproblem.
For VS Code issues, attach verbose logs as follows:
1. Press F1 (or Ctrl-Shift-P), type "export logs" and run the "Live Share: Export Logs" command.
2. Drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue.
For feature requests, please include enough of this same info so we know if the request is tool or language/platform specific.
-->
## Error:
The host rejected your request to join the collaboration session.
## Steps to Reproduce:
1.
2.
||Version Data|
|-:|:-|
|**extensionName**|VSLS|
|**extensionVersion**|0.3.535|
|**protocolVersion**|2.2|
|**applicationName**|VSCode|
|**applicationVersion**|1.26.0|
|**platformName**|MacOS|
|**platformVersion**|17.7.0|
|
1.0
|
[VS Code] The host rejected your request to join the collaboration session. - <!--
For Visual Studio problems/feedback, please use the "Report a Problem..." feature built into the tool. See https://aka.ms/vsls-vsproblem.
For VS Code issues, attach verbose logs as follows:
1. Press F1 (or Ctrl-Shift-P), type "export logs" and run the "Live Share: Export Logs" command.
2. Drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue.
For feature requests, please include enough of this same info so we know if the request is tool or language/platform specific.
-->
## Error:
The host rejected your request to join the collaboration session.
## Steps to Reproduce:
1.
2.
||Version Data|
|-:|:-|
|**extensionName**|VSLS|
|**extensionVersion**|0.3.535|
|**protocolVersion**|2.2|
|**applicationName**|VSCode|
|**applicationVersion**|1.26.0|
|**platformName**|MacOS|
|**platformVersion**|17.7.0|
|
non_process
|
the host rejected your request to join the collaboration session for visual studio problems feedback please use the report a problem feature built into the tool see for vs code issues attach verbose logs as follows press or ctrl shift p type export logs and run the live share export logs command drag and drop the zip to the issue on this screen and wait for it to upload before creating the issue for feature requests please include enough of this same info so we know if the request is tool or language platform specific error the host rejected your request to join the collaboration session steps to reproduce version data extensionname vsls extensionversion protocolversion applicationname vscode applicationversion platformname macos platformversion
| 0
|
25,472
| 4,158,753,972
|
IssuesEvent
|
2016-06-17 05:17:35
|
NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
https://api.github.com/repos/NishantUpadhyay-BTC/BLISS-Issue-Tracking
|
reopened
|
#1343 - Guest UI - "We're Sorry but Something Went Wrong" - also on office ui...
|
bug Deployed to Test
|
I was trying to test various availability scenarios on staging today and got a strange error in both the Office UI and Guest UI. Also noticed that the Availability page did not seem to be displaying correct available lodgings.
First, I brought up our "Check Availability" guest in the Office UI, and tried to start a new reservation starting tomorrow (Wed, May 25, 2016) for cabin B7- which is available to share. Upon selecting that cabin - either via that green dot at the bottom or by opening up the cabin view and choosing 'Reserve Now", I was presented with the error: "Something went wrong. Please try again When you click OK, we will refresh the screen to give you an updated view. (This may take a moment)". I could not reserve the cabin. However, I was able to place the guest into the Dorm- D7. I removed the guest from D7 and deleted that test reservation.
Now I go to the Availability screen. I expect to see one plumbing cabin available - since B7 is available to share - but the Availability screen shows no plumbing and no non-plumbing cabins available. Then proceed to the PR availability page and I do a search for available lodgings on 5/25 for a single female willing to share- and am presented with a Non-Plumbing cabin! This could only be the Dorm space - which should not appear here. But why wasn't it showing B7- a plumbing cabin that was available to share?
Then, I select the non-plumbing cabin by clicking "Reserve Now" and login using the ID for guiguest024.breitenbush.com. Immediately after completing login I am presented with the error:
"We're sorry, but something went wrong. If you are the application owner check the logs for more information."
NOTE: All of this was around 5PM-5:20 PDT.
So, I then manually navigated back to https://guestuitest.breitnet.net (by typing manually, since there was no option for returning on the error page, and no way to use the back button). I found that I was logged in as Gui Guest024, and checked availability for the same date. I got the same result and once again chose to reserve the non-plumbing cabin. This time I was presented with the option to choose my existing reservation or cancel that one and start a new one (which is a very nice feature BTW -good thinking) - however, clicking either of these buttons returns me to the error message above.
So - it looks like there is some fundamental glitch that's preventing these reservations 24 hours out? This is a big deal, as we need to be able to make these- and it's a big deal that the Availability table was displaying incorrect data.
|
1.0
|
#1343 - Guest UI - "We're Sorry but Something Went Wrong" - also on office ui... - I was trying to test various availability scenarios on staging today and got a strange error in both the Office UI and Guest UI. Also noticed that the Availability page did not seem to be displaying correct available lodgings.
First, I brought up our "Check Availability" guest in the Office UI, and tried to start a new reservation starting tomorrow (Wed, May 25, 2016) for cabin B7- which is available to share. Upon selecting that cabin - either via that green dot at the bottom or by opening up the cabin view and choosing 'Reserve Now", I was presented with the error: "Something went wrong. Please try again When you click OK, we will refresh the screen to give you an updated view. (This may take a moment)". I could not reserve the cabin. However, I was able to place the guest into the Dorm- D7. I removed the guest from D7 and deleted that test reservation.
Now I go to the Availability screen. I expect to see one plumbing cabin available - since B7 is available to share - but the Availability screen shows no plumbing and no non-plumbing cabins available. Then proceed to the PR availability page and I do a search for available lodgings on 5/25 for a single female willing to share- and am presented with a Non-Plumbing cabin! This could only be the Dorm space - which should not appear here. But why wasn't it showing B7- a plumbing cabin that was available to share?
Then, I select the non-plumbing cabin by clicking "Reserve Now" and login using the ID for guiguest024.breitenbush.com. Immediately after completing login I am presented with the error:
"We're sorry, but something went wrong. If you are the application owner check the logs for more information."
NOTE: All of this was around 5PM-5:20 PDT.
So, I then manually navigated back to https://guestuitest.breitnet.net (by typing manually, since there was no option for returning on the error page, and no way to use the back button). I found that I was logged in as Gui Guest024, and checked availability for the same date. I got the same result and once again chose to reserve the non-plumbing cabin. This time I was presented with the option to choose my existing reservation or cancel that one and start a new one (which is a very nice feature BTW -good thinking) - however, clicking either of these buttons returns me to the error message above.
So - it looks like there is some fundamental glitch that's preventing these reservations 24 hours out? This is a big deal, as we need to be able to make these- and it's a big deal that the Availability table was displaying incorrect data.
|
non_process
|
guest ui we re sorry but something went wrong also on office ui i was trying to test various availability scenarios on staging today and got a strange error in both the office ui and guest ui also noticed that the availability page did not seem to be displaying correct available lodgings first i brought up our check availability guest in the office ui and tried to start a new reservation starting tomorrow wed may for cabin which is available to share upon selecting that cabin either via that green dot at the bottom or by opening up the cabin view and choosing reserve now i was presented with the error something went wrong please try again when you click ok we will refresh the screen to give you an updated view this may take a moment i could not reserve the cabin however i was able to place the guest into the dorm i removed the guest from and deleted that test reservation now i go to the availability screen i expect to see one plumbing cabin available since is available to share but the availability screen shows no plumbing and no non plumbing cabins available then proceed to the pr availability page and i do a search for available lodgings on for a single female willing to share and am presented with a non plumbing cabin this could only be the dorm space which should not appear here but why wasn t it showing a plumbing cabin that was available to share then i select the non plumbing cabin by clicking reserve now and login using the id for breitenbush com immediately after completing login i am presented with the error we re sorry but something went wrong if you are the application owner check the logs for more information note all of this was around pdt so i then manually navigated back to by typing manually since there was no option for returning on the error page and no way to use the back button i found that i was logged in as gui and checked availability for the same date i got the same result and once again chose to reserve the non plumbing cabin this time i was presented with the option to choose my existing reservation or cancel that one and start a new one which is a very nice feature btw good thinking however clicking either of these buttons returns me to the error message above so it looks like there is some fundamental glitch that s preventing these reservations hours out this is a big deal as we need to be able to make these and it s a big deal that the availability table was displaying incorrect data
| 0
|
185,734
| 6,727,231,839
|
IssuesEvent
|
2017-10-17 12:55:46
|
pravega/pravega
|
https://api.github.com/repos/pravega/pravega
|
closed
|
Automatic checkpoints
|
area/readergroup kind/feature priority/P1
|
**Problem description**
Rather than requiring some user process to call initiateCheckpoint() we should be able to supply them automatically for users that want checkpoints, but don't have such a process.
**Problem location**
ReaderGroup
**Suggestions for an improvement**
The members of the group could keep a timer and use this as the basis for invoking initiateCheckpoint
|
1.0
|
Automatic checkpoints - **Problem description**
Rather than requiring some user process to call initiateCheckpoint() we should be able to supply them automatically for users that want checkpoints, but don't have such a process.
**Problem location**
ReaderGroup
**Suggestions for an improvement**
The members of the group could keep a timer and use this as the basis for invoking initiateCheckpoint
|
non_process
|
automatic checkpoints problem description rather than requiring some user process to call initiatecheckpoint we should be able to supply them automatically for users that want checkpoints but don t have such a process problem location readergroup suggestions for an improvement the members of the group could keep a timer and use this as the basis for invoking initiatecheckpoint
| 0
|
11,360
| 14,175,358,212
|
IssuesEvent
|
2020-11-12 21:27:25
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Deployment Job -- do these support workspace.clean?
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
Do depoyment jobs support workspace.clean? It looks like by default, they will continue to pile up downloaded artifacts, which is different from the classic release jobs.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Deployment Job -- do these support workspace.clean? -
Do depoyment jobs support workspace.clean? It looks like by default, they will continue to pile up downloaded artifacts, which is different from the classic release jobs.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 5aeeaace-1c5b-a51b-e41f-f25b806155b8
* Version Independent ID: fd7ff690-b2e4-41c7-a342-e528b911c6e1
* Content: [Deployment jobs - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops)
* Content Source: [docs/pipelines/process/deployment-jobs.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/deployment-jobs.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
deployment job do these support workspace clean do depoyment jobs support workspace clean it looks like by default they will continue to pile up downloaded artifacts which is different from the classic release jobs document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
257,492
| 22,170,959,654
|
IssuesEvent
|
2022-06-06 00:18:04
|
FuelLabs/fuelup
|
https://api.github.com/repos/FuelLabs/fuelup
|
closed
|
Add CI for building, testing, publishing
|
P: High ci testing
|
Once we've got a bare-bones cargo project, we should prioritise setting up CI.
We'll likely want to publish the `fuelup` binary for each platform similar to how we publish `forc` for each platform too.
|
1.0
|
Add CI for building, testing, publishing - Once we've got a bare-bones cargo project, we should prioritise setting up CI.
We'll likely want to publish the `fuelup` binary for each platform similar to how we publish `forc` for each platform too.
|
non_process
|
add ci for building testing publishing once we ve got a bare bones cargo project we should prioritise setting up ci we ll likely want to publish the fuelup binary for each platform similar to how we publish forc for each platform too
| 0
|
15,856
| 20,033,058,475
|
IssuesEvent
|
2022-02-02 08:56:38
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
MongoDB: The connection string format and parameters are defined by mongo-rust-driver
|
process/candidate team/migrations team/client topic: mongodb
|
On MongoDB, the datasource URLs from user schemas are always passed verbatim to the Rust MongoDB driver (https://github.com/mongodb/mongo-rust-driver).
By contrast, on all SQL connectors, the logic of parsing the connection strings and defining the parameters live in Prisma code.
This state of affairs exposes us to:
- Risk of breaking changes upstream that we may not notice and would be hard to mitigate
- Risk if we ever want to use another driver
- Risk that we may want to define parameters that have meaning for prisma but not the underlying driver, at some point.
- Inconsistency with the rest of our connection strings
:concerned:
|
1.0
|
MongoDB: The connection string format and parameters are defined by mongo-rust-driver - On MongoDB, the datasource URLs from user schemas are always passed verbatim to the Rust MongoDB driver (https://github.com/mongodb/mongo-rust-driver).
By contrast, on all SQL connectors, the logic of parsing the connection strings and defining the parameters live in Prisma code.
This state of affairs exposes us to:
- Risk of breaking changes upstream that we may not notice and would be hard to mitigate
- Risk if we ever want to use another driver
- Risk that we may want to define parameters that have meaning for prisma but not the underlying driver, at some point.
- Inconsistency with the rest of our connection strings
:concerned:
|
process
|
mongodb the connection string format and parameters are defined by mongo rust driver on mongodb the datasource urls from user schemas are always passed verbatim to the rust mongodb driver by contrast on all sql connectors the logic of parsing the connection strings and defining the parameters live in prisma code this state of affairs exposes us to risk of breaking changes upstream that we may not notice and would be hard to mitigate risk if we ever want to use another driver risk that we may want to define parameters that have meaning for prisma but not the underlying driver at some point inconsistency with the rest of our connection strings concerned
| 1
|
8,632
| 11,784,560,386
|
IssuesEvent
|
2020-03-17 08:37:22
|
googleapis/python-bigquery
|
https://api.github.com/repos/googleapis/python-bigquery
|
closed
|
BigQuery: deprecate pandas code paths that do not use pyarrow
|
api: bigquery type: process
|
In the pandas-related BigQuery code, there is a lot of branching on whether `pyarrow` is available or not, and significant portions of business logic dealing with both cases.
The `pyarrow` side is often more efficient, more concise, and less prone to weird edge cases, thus it is preferred that it eventually becomes the only code path.
The goal of this issue is to emit deprecation warnings whenever a code path is hit that deals with `pandas`, but without the `pyarrow` dependency available.
|
1.0
|
BigQuery: deprecate pandas code paths that do not use pyarrow - In the pandas-related BigQuery code, there is a lot of branching on whether `pyarrow` is available or not, and significant portions of business logic dealing with both cases.
The `pyarrow` side is often more efficient, more concise, and less prone to weird edge cases, thus it is preferred that it eventually becomes the only code path.
The goal of this issue is to emit deprecation warnings whenever a code path is hit that deals with `pandas`, but without the `pyarrow` dependency available.
|
process
|
bigquery deprecate pandas code paths that do not use pyarrow in the pandas related bigquery code there is a lot of branching on whether pyarrow is available or not and significant portions of business logic dealing with both cases the pyarrow side is often more efficient more concise and less prone to weird edge cases thus it is preferred that it eventually becomes the only code path the goal of this issue is to emit deprecation warnings whenever a code path is hit that deals with pandas but without the pyarrow dependency available
| 1
|
16,136
| 20,386,881,492
|
IssuesEvent
|
2022-02-22 08:03:20
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Removing xlink:href attribute node from <use> element throws error
|
TYPE: bug AREA: client SYSTEM: URL processing STATE: Need response FREQUENCY: level 2
|
Removing `xlink:href` attribute with `removeAttributeNode` from a `<use>` element throws an error with Hammerhead (_"Uncaught DOMException: Failed to execute 'removeAttributeNode' on 'Element': The node provided is owned by another element."_ in Chrome and _"NotFoundError: Node was not found"_ in Firefox) but works fine without proxy.
Here's a simple example:
```html
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<svg style="display: none" xmlns="http://www.w3.org/2000/svg">
<defs>
<g id="icon">
<rect x="0" y="0" width="100" height="100"></rect>
</g>
</defs>
</svg>
<svg xmlns="http://www.w3.org/2000/svg">
<use xlink:href="#icon"></use>
</svg>
<script>
var use = document.querySelector('use');
use.removeAttributeNode(use.attributes[0]);
</script>
</body>
</html>
```
You can check it here: https://m4w4q7.github.io/remove-attribute-node-bug-example
(an empty page without proxy and a black rectangle with console error when using proxy)
It also works if we use `href` instead of `xlink:href`, but Safari does not support it yet, so we can't change that.
This method of removing the attribute is used in recent versions of hyperHTML, and currently we can't use TestCafe with it.
|
1.0
|
Removing xlink:href attribute node from <use> element throws error - Removing `xlink:href` attribute with `removeAttributeNode` from a `<use>` element throws an error with Hammerhead (_"Uncaught DOMException: Failed to execute 'removeAttributeNode' on 'Element': The node provided is owned by another element."_ in Chrome and _"NotFoundError: Node was not found"_ in Firefox) but works fine without proxy.
Here's a simple example:
```html
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<svg style="display: none" xmlns="http://www.w3.org/2000/svg">
<defs>
<g id="icon">
<rect x="0" y="0" width="100" height="100"></rect>
</g>
</defs>
</svg>
<svg xmlns="http://www.w3.org/2000/svg">
<use xlink:href="#icon"></use>
</svg>
<script>
var use = document.querySelector('use');
use.removeAttributeNode(use.attributes[0]);
</script>
</body>
</html>
```
You can check it here: https://m4w4q7.github.io/remove-attribute-node-bug-example
(an empty page without proxy and a black rectangle with console error when using proxy)
It also works if we use `href` instead of `xlink:href`, but Safari does not support it yet, so we can't change that.
This method of removing the attribute is used in recent versions of hyperHTML, and currently we can't use TestCafe with it.
|
process
|
removing xlink href attribute node from element throws error removing xlink href attribute with removeattributenode from a element throws an error with hammerhead uncaught domexception failed to execute removeattributenode on element the node provided is owned by another element in chrome and notfounderror node was not found in firefox but works fine without proxy here s a simple example html svg style display none xmlns svg xmlns var use document queryselector use use removeattributenode use attributes you can check it here an empty page without proxy and a black rectangle with console error when using proxy it also works if we use href instead of xlink href but safari does not support it yet so we can t change that this method of removing the attribute is used in recent versions of hyperhtml and currently we can t use testcafe with it
| 1
|
16,941
| 22,294,204,310
|
IssuesEvent
|
2022-06-12 20:15:16
|
benthosdev/benthos
|
https://api.github.com/repos/benthosdev/benthos
|
closed
|
Allow caching the result of a series of processors
|
enhancement processors cool idea quality of life
|
This a proposal for a higher-order `cached` processor that can wrap a series of processors and cache their results if they successfully process messages batches. The proposed API looks like this:
```yaml
pipeline:
processors:
cached:
resource: foocache
key: '${! json("message.id") }'
ttl: 5m
processors:
- resource: expensive_processor_1
- resource: expensive_processor_2
- resource: expensive_processor_3
```
|
1.0
|
Allow caching the result of a series of processors - This a proposal for a higher-order `cached` processor that can wrap a series of processors and cache their results if they successfully process messages batches. The proposed API looks like this:
```yaml
pipeline:
processors:
cached:
resource: foocache
key: '${! json("message.id") }'
ttl: 5m
processors:
- resource: expensive_processor_1
- resource: expensive_processor_2
- resource: expensive_processor_3
```
|
process
|
allow caching the result of a series of processors this a proposal for a higher order cached processor that can wrap a series of processors and cache their results if they successfully process messages batches the proposed api looks like this yaml pipeline processors cached resource foocache key json message id ttl processors resource expensive processor resource expensive processor resource expensive processor
| 1
|
9,967
| 13,012,230,491
|
IssuesEvent
|
2020-07-25 04:08:10
|
AzureAD/microsoft-identity-web
|
https://api.github.com/repos/AzureAD/microsoft-identity-web
|
closed
|
Get AccessTokenOnBehalfOfUser if (Current)HttpContext is not available (anymore)
|
bug fixed scenario:long-running-process
|
From @pocki and copied from https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/issues/233
### This issue is for a: (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
```
### The issue was found for the following scenario:
Please add an 'x' for the scenario(s) where you found an issue
1. Web app that signs in users
1. [ ] with a work and school account in your organization: [1-WebApp-OIDC/1-1-MyOrg](../blob/master/1-WebApp-OIDC/1-1-MyOrg)
1. [ ] with any work and school account: [/1-WebApp-OIDC/1-2-AnyOrg](../blob/master/1-WebApp-OIDC/1-2-AnyOrg)
1. [ ] with any work or school account or Microsoft personal account: [1-WebApp-OIDC/1-3-AnyOrgOrPersonal](../blob/master/1-WebApp-OIDC/1-3-AnyOrgOrPersonal)
1. [ ] with users in National or sovereign clouds [1-WebApp-OIDC/1-4-Sovereign](../blob/master/1-WebApp-OIDC/1-4-Sovereign)
1. [ ] with B2C users [1-WebApp-OIDC/1-5-B2C](../blob/master/1-WebApp-OIDC/1-5-B2C)
1. Web app that calls Microsoft Graph
1. [ ] Calling graph with the Microsoft Graph SDK: [2-WebApp-graph-user/2-1-Call-MSGraph](../blob/master/2-WebApp-graph-user/2-1-Call-MSGraph)
1. [ ] With specific token caches: [2-WebApp-graph-user/2-2-TokenCache](../blob/master/2-WebApp-graph-user/2-2-TokenCache)
1. [ ] Calling Microsoft Graph in national clouds: [2-WebApp-graph-user/2-4-Sovereign-Call-MSGraph](../blob/master/2-WebApp-graph-user/2-4-Sovereign-Call-MSGraph)
1. [x] Web app calling several APIs [3-WebApp-multi-APIs](../blob/master/3-WebApp-multi-APIs)
1. [ ] Web app calling your own Web API [4-WebApp-your-API](../blob/master/4-WebApp-your-API)
1. Web app restricting users
1. [ ] by Roles: [5-WebApp-AuthZ/5-1-Roles](../blob/master/5-WebApp-AuthZ/5-1-Roles)
1. [ ] by Groups: [5-WebApp-AuthZ/5-2-Groups](../blob/master/5-WebApp-AuthZ/5-2-Groups)
1. [ ] Deployment to Azure
1. [ ] Other (please describe)
### Repro-ing the issue
**Repro steps**
<!-- the minimal steps to reproduce -->
Is it somehow possible to receive an AccessToken on behalf of User if only (at least) ClaimsPrincipal (like in #159) is available but no full (Current)HttpContext?
Why: I have a long running task moved to a IHostedService. In this service I need an AccessToken at beginning and at the end (for the same scope). The AccessToken for the beginning is no problem, I can request it before the start and/or use the TokenCache. But in the end (>1 hour after begin) of the HostedService the token needs to be refreshed, but I can not call the TokenAcquisition because HttpContext is not available (out of Scope/Disposed) in IHostedService.
Actually I use ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` to get and to "refresh"/get a new token
**Expected behavior**
<!-- A clear and concise description of what you expected to happen (or code).-->
Use of ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` where HttpContext is not available (Disposed or out of Scope)
**Actual behavior**
<!-- A clear and concise description of what happens, e.g. exception is thrown, UI freezes -->
With modifications of #159 I can pass the HttpContext.User as a Parameter to the HostedService: System.NullReferenceException "Object reference not set to an instance of an object."
> var request = CurrentHttpContext.Request; //CurrentHttpContext is null/is already disposed
> at Microsoft.Identity.Web.TokenAcquisition.BuildConfidentialClientApplication() in C:\xxx\Microsoft.Identity.Web\TokenAcquisition.cs:line 345
at Microsoft.Identity.Web.TokenAcquisition.GetOrBuildConfidentialClientApplication() in C:\xxx\Microsoft.Identity.Web\TokenAcquisition.cs:line 333
Line numbers may not match with this Repo
**Possible Solution**
<!--- Only if you have suggestions on a fix for the bug -->
Is it possible to set needed values for TokenAquisition manually?
Is there another method instead of ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` to refresh a token?
### Versions
> ASP.NET Core 3.0
> Microsoft.Identity.Web from this Repo, manually updated to ASP.NET Core 3.0 with
```
<PackageReference Include="Microsoft.AspNetCore.Authentication.AzureAD.UI" Version="3.0.0" />
<PackageReference Include="Microsoft.AspNetCore.Authentication.AzureADB2C.UI" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.0.1" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="3.0.1" />
<PackageReference Include="Microsoft.Identity.Client" Version="4.7.1" />
```
### Mention any other details that might be useful
Is there any other possiblity? Have I missed something? Anyone another suggestion how to solve this?
|
1.0
|
Get AccessTokenOnBehalfOfUser if (Current)HttpContext is not available (anymore) - From @pocki and copied from https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/issues/233
### This issue is for a: (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
```
### The issue was found for the following scenario:
Please add an 'x' for the scenario(s) where you found an issue
1. Web app that signs in users
1. [ ] with a work and school account in your organization: [1-WebApp-OIDC/1-1-MyOrg](../blob/master/1-WebApp-OIDC/1-1-MyOrg)
1. [ ] with any work and school account: [/1-WebApp-OIDC/1-2-AnyOrg](../blob/master/1-WebApp-OIDC/1-2-AnyOrg)
1. [ ] with any work or school account or Microsoft personal account: [1-WebApp-OIDC/1-3-AnyOrgOrPersonal](../blob/master/1-WebApp-OIDC/1-3-AnyOrgOrPersonal)
1. [ ] with users in National or sovereign clouds [1-WebApp-OIDC/1-4-Sovereign](../blob/master/1-WebApp-OIDC/1-4-Sovereign)
1. [ ] with B2C users [1-WebApp-OIDC/1-5-B2C](../blob/master/1-WebApp-OIDC/1-5-B2C)
1. Web app that calls Microsoft Graph
1. [ ] Calling graph with the Microsoft Graph SDK: [2-WebApp-graph-user/2-1-Call-MSGraph](../blob/master/2-WebApp-graph-user/2-1-Call-MSGraph)
1. [ ] With specific token caches: [2-WebApp-graph-user/2-2-TokenCache](../blob/master/2-WebApp-graph-user/2-2-TokenCache)
1. [ ] Calling Microsoft Graph in national clouds: [2-WebApp-graph-user/2-4-Sovereign-Call-MSGraph](../blob/master/2-WebApp-graph-user/2-4-Sovereign-Call-MSGraph)
1. [x] Web app calling several APIs [3-WebApp-multi-APIs](../blob/master/3-WebApp-multi-APIs)
1. [ ] Web app calling your own Web API [4-WebApp-your-API](../blob/master/4-WebApp-your-API)
1. Web app restricting users
1. [ ] by Roles: [5-WebApp-AuthZ/5-1-Roles](../blob/master/5-WebApp-AuthZ/5-1-Roles)
1. [ ] by Groups: [5-WebApp-AuthZ/5-2-Groups](../blob/master/5-WebApp-AuthZ/5-2-Groups)
1. [ ] Deployment to Azure
1. [ ] Other (please describe)
### Repro-ing the issue
**Repro steps**
<!-- the minimal steps to reproduce -->
Is it somehow possible to receive an AccessToken on behalf of User if only (at least) ClaimsPrincipal (like in #159) is available but no full (Current)HttpContext?
Why: I have a long running task moved to a IHostedService. In this service I need an AccessToken at beginning and at the end (for the same scope). The AccessToken for the beginning is no problem, I can request it before the start and/or use the TokenCache. But in the end (>1 hour after begin) of the HostedService the token needs to be refreshed, but I can not call the TokenAcquisition because HttpContext is not available (out of Scope/Disposed) in IHostedService.
Actually I use ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` to get and to "refresh"/get a new token
**Expected behavior**
<!-- A clear and concise description of what you expected to happen (or code).-->
Use of ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` where HttpContext is not available (Disposed or out of Scope)
**Actual behavior**
<!-- A clear and concise description of what happens, e.g. exception is thrown, UI freezes -->
With modifications of #159 I can pass the HttpContext.User as a Parameter to the HostedService: System.NullReferenceException "Object reference not set to an instance of an object."
> var request = CurrentHttpContext.Request; //CurrentHttpContext is null/is already disposed
> at Microsoft.Identity.Web.TokenAcquisition.BuildConfidentialClientApplication() in C:\xxx\Microsoft.Identity.Web\TokenAcquisition.cs:line 345
at Microsoft.Identity.Web.TokenAcquisition.GetOrBuildConfidentialClientApplication() in C:\xxx\Microsoft.Identity.Web\TokenAcquisition.cs:line 333
Line numbers may not match with this Repo
**Possible Solution**
<!--- Only if you have suggestions on a fix for the bug -->
Is it possible to set needed values for TokenAquisition manually?
Is there another method instead of ```TokenAcquisition.GetAccessTokenOnBehalfOfUserAsync``` to refresh a token?
### Versions
> ASP.NET Core 3.0
> Microsoft.Identity.Web from this Repo, manually updated to ASP.NET Core 3.0 with
```
<PackageReference Include="Microsoft.AspNetCore.Authentication.AzureAD.UI" Version="3.0.0" />
<PackageReference Include="Microsoft.AspNetCore.Authentication.AzureADB2C.UI" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.0.1" />
<PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="3.0.1" />
<PackageReference Include="Microsoft.Identity.Client" Version="4.7.1" />
```
### Mention any other details that might be useful
Is there any other possiblity? Have I missed something? Anyone another suggestion how to solve this?
|
process
|
get accesstokenonbehalfofuser if current httpcontext is not available anymore from pocki and copied from this issue is for a mark with an x bug report please search issues before submitting feature request documentation issue or request regression a behavior that used to work and stopped in a new release the issue was found for the following scenario please add an x for the scenario s where you found an issue web app that signs in users with a work and school account in your organization blob master webapp oidc myorg with any work and school account blob master webapp oidc anyorg with any work or school account or microsoft personal account blob master webapp oidc anyorgorpersonal with users in national or sovereign clouds blob master webapp oidc sovereign with users blob master webapp oidc web app that calls microsoft graph calling graph with the microsoft graph sdk blob master webapp graph user call msgraph with specific token caches blob master webapp graph user tokencache calling microsoft graph in national clouds blob master webapp graph user sovereign call msgraph web app calling several apis blob master webapp multi apis web app calling your own web api blob master webapp your api web app restricting users by roles blob master webapp authz roles by groups blob master webapp authz groups deployment to azure other please describe repro ing the issue repro steps is it somehow possible to receive an accesstoken on behalf of user if only at least claimsprincipal like in is available but no full current httpcontext why i have a long running task moved to a ihostedservice in this service i need an accesstoken at beginning and at the end for the same scope the accesstoken for the beginning is no problem i can request it before the start and or use the tokencache but in the end hour after begin of the hostedservice the token needs to be refreshed but i can not call the tokenacquisition because httpcontext is not available out of scope disposed in ihostedservice 
actually i use tokenacquisition getaccesstokenonbehalfofuserasync to get and to refresh get a new token expected behavior use of tokenacquisition getaccesstokenonbehalfofuserasync where httpcontext is not available disposed or out of scope actual behavior with modifications of i can pass the httpcontext user as a parameter to the hostedservice system nullreferenceexception object reference not set to an instance of an object var request currenthttpcontext request currenthttpcontext is null is already disposed at microsoft identity web tokenacquisition buildconfidentialclientapplication in c xxx microsoft identity web tokenacquisition cs line at microsoft identity web tokenacquisition getorbuildconfidentialclientapplication in c xxx microsoft identity web tokenacquisition cs line line numbers may not match with this repo possible solution is it possible to set needed values for tokenaquisition manually is there another method instead of tokenacquisition getaccesstokenonbehalfofuserasync to refresh a token versions asp net core microsoft identity web from this repo manually updated to asp net core with mention any other details that might be useful is there any other possiblity have i missed something anyone another suggestion how to solve this
| 1
|
350,277
| 31,877,017,862
|
IssuesEvent
|
2023-09-16 00:44:53
|
ValveSoftware/steam-for-linux
|
https://api.github.com/repos/ValveSoftware/steam-for-linux
|
closed
|
[Big Picture Mode] Overlay scaling issues on 4k display
|
Big Picture overlay Need Retest
|
#### Your system information
* Steam client version (build number or date): Mar 17 2023, at 18:27:00
* Distribution (e.g. Ubuntu): Arch Linux
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
When i start a game in Big Picture mode, and then open Steam Overlay, it's super big:

It can be fixed partially by disabling "Automatically Scale User Interface", but in this case - text on "Controller settings" is still to big. And text becomes too small in normal steam interface, not overlay UI.

It happens only on Linux, on Windows overlay scaling is fine.
It happens on my display (Dell U2720Q) and TV (LG CX 55 inch Class 4K Smart OLED TV)
#### Steps for reproducing this issue:
1. Open Steam in Big Picture Mode on 4k display
2. Starte any game
3. Open Steam Overlay
|
1.0
|
[Big Picture Mode] Overlay scaling issues on 4k display - #### Your system information
* Steam client version (build number or date): Mar 17 2023, at 18:27:00
* Distribution (e.g. Ubuntu): Arch Linux
* Opted into Steam client beta?: No
* Have you checked for system updates?: Yes
#### Please describe your issue in as much detail as possible:
When i start a game in Big Picture mode, and then open Steam Overlay, it's super big:

It can be fixed partially by disabling "Automatically Scale User Interface", but in this case - text on "Controller settings" is still to big. And text becomes too small in normal steam interface, not overlay UI.

It happens only on Linux, on Windows overlay scaling is fine.
It happens on my display (Dell U2720Q) and TV (LG CX 55 inch Class 4K Smart OLED TV)
#### Steps for reproducing this issue:
1. Open Steam in Big Picture Mode on 4k display
2. Starte any game
3. Open Steam Overlay
|
non_process
|
overlay scaling issues on display your system information steam client version build number or date mar at distribution e g ubuntu arch linux opted into steam client beta no have you checked for system updates yes please describe your issue in as much detail as possible when i start a game in big picture mode and then open steam overlay it s super big it can be fixed partially by disabling automatically scale user interface but in this case text on controller settings is still to big and text becomes too small in normal steam interface not overlay ui it happens only on linux on windows overlay scaling is fine it happens on my display dell and tv lg cx inch class smart oled tv steps for reproducing this issue open steam in big picture mode on display starte any game open steam overlay
| 0
|
683,605
| 23,388,595,979
|
IssuesEvent
|
2022-08-11 15:40:40
|
yalla-coop/chiltern-website
|
https://api.github.com/repos/yalla-coop/chiltern-website
|
closed
|
Ordering items on the website
|
priority-3
|
**Is your feature / client request related to a problem? Please describe.**
A clear and concise description of what the problem is.
It's very difficult to order items on the website. We have the number system on the stories and news page, however when you add a new item you have to go in and change the number on all of the other news/stories which is very time consuming. We also don't have the option to order the research items on the research and outcomes tab: https://www.chilternmusictherapy.co.uk/insights or the supporters at the bottom of this page: https://www.chilternmusictherapy.co.uk/support-us
**Describe the solution you'd like**
An easier way to order items on the:
- Stories and News tab
- Research and outcomes tab
- Supporters
**Describe alternatives you've considered**
The current solution isn't really working.
**Additional context**
Add any other context or screenshots about the feature request here.
**Team - do not edit**
@thejoefriel
@fadeomar
@@Israa91
|
1.0
|
Ordering items on the website - **Is your feature / client request related to a problem? Please describe.**
A clear and concise description of what the problem is.
It's very difficult to order items on the website. We have the number system on the stories and news page, however when you add a new item you have to go in and change the number on all of the other news/stories which is very time consuming. We also don't have the option to order the research items on the research and outcomes tab: https://www.chilternmusictherapy.co.uk/insights or the supporters at the bottom of this page: https://www.chilternmusictherapy.co.uk/support-us
**Describe the solution you'd like**
An easier way to order items on the:
- Stories and News tab
- Research and outcomes tab
- Supporters
**Describe alternatives you've considered**
The current solution isn't really working.
**Additional context**
Add any other context or screenshots about the feature request here.
**Team - do not edit**
@thejoefriel
@fadeomar
@@Israa91
|
non_process
|
ordering items on the website is your feature client request related to a problem please describe a clear and concise description of what the problem is it s very difficult to order items on the website we have the number system on the stories and news page however when you add a new item you have to go in and change the number on all of the other news stories which is very time consuming we also don t have the option to order the research items on the research and outcomes tab or the supporters at the bottom of this page describe the solution you d like an easier way to order items on the stories and news tab research and outcomes tab supporters describe alternatives you ve considered the current solution isn t really working additional context add any other context or screenshots about the feature request here team do not edit thejoefriel fadeomar
| 0
|
102,067
| 12,742,810,312
|
IssuesEvent
|
2020-06-26 09:10:43
|
creme-ml/creme
|
https://api.github.com/repos/creme-ml/creme
|
closed
|
LSH for KNN
|
Status: Design phase Type: Improvement
|
It would be worthy to explore the use of [Locality Sensitive Hashing](https://www.wikiwand.com/en/Locality-sensitive_hashing) for improving the speed of the `neighbors.NearestNeighbours` class. After a quick scan online [this](https://unboxresearch.com/articles/lsh_post1.html) seems to be the best introduction. It seems to me that it is fairly easy to remove elements from an LSH data structure.
|
1.0
|
LSH for KNN - It would be worthy to explore the use of [Locality Sensitive Hashing](https://www.wikiwand.com/en/Locality-sensitive_hashing) for improving the speed of the `neighbors.NearestNeighbours` class. After a quick scan online [this](https://unboxresearch.com/articles/lsh_post1.html) seems to be the best introduction. It seems to me that it is fairly easy to remove elements from an LSH data structure.
|
non_process
|
lsh for knn it would be worthy to explore the use of for improving the speed of the neighbors nearestneighbours class after a quick scan online seems to be the best introduction it seems to me that it is fairly easy to remove elements from an lsh data structure
| 0
|
39,223
| 12,643,923,268
|
IssuesEvent
|
2020-06-16 10:39:14
|
Ndh-31/AAIBHApp
|
https://api.github.com/repos/Ndh-31/AAIBHApp
|
opened
|
CVE-2018-1000632 (High) detected in dom4j-1.6.1.jar
|
security vulnerability
|
## CVE-2018-1000632 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-1.6.1.jar</b></p></summary>
<p>dom4j: the flexible XML framework for Java</p>
<p>Library home page: <a href="http://dom4j.org">http://dom4j.org</a></p>
<p>Path to vulnerable library: /AAIBHApp/AAIBHApp/aaibh-ear/bin/bin/dom4j-1.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **dom4j-1.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ndh-31/AAIBHApp/commit/169eb8259db4f54489525becfdeb2745d697365e">169eb8259db4f54489525becfdeb2745d697365e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. This vulnerability appears to have been fixed in 2.1.1 or later.
<p>Publish Date: 2018-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632>CVE-2018-1000632</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632</a></p>
<p>Release Date: 2018-08-20</p>
<p>Fix Resolution: org.dom4j:dom4j:2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-1000632 (High) detected in dom4j-1.6.1.jar - ## CVE-2018-1000632 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dom4j-1.6.1.jar</b></p></summary>
<p>dom4j: the flexible XML framework for Java</p>
<p>Library home page: <a href="http://dom4j.org">http://dom4j.org</a></p>
<p>Path to vulnerable library: /AAIBHApp/AAIBHApp/aaibh-ear/bin/bin/dom4j-1.6.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **dom4j-1.6.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Ndh-31/AAIBHApp/commit/169eb8259db4f54489525becfdeb2745d697365e">169eb8259db4f54489525becfdeb2745d697365e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
dom4j version prior to version 2.1.1 contains a CWE-91: XML Injection vulnerability in Class: Element. Methods: addElement, addAttribute that can result in an attacker tampering with XML documents through XML injection. This attack appear to be exploitable via an attacker specifying attributes or elements in the XML document. This vulnerability appears to have been fixed in 2.1.1 or later.
<p>Publish Date: 2018-08-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-1000632>CVE-2018-1000632</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-1000632</a></p>
<p>Release Date: 2018-08-20</p>
<p>Fix Resolution: org.dom4j:dom4j:2.0.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in jar cve high severity vulnerability vulnerable library jar the flexible xml framework for java library home page a href path to vulnerable library aaibhapp aaibhapp aaibh ear bin bin jar dependency hierarchy x jar vulnerable library found in head commit a href vulnerability details version prior to version contains a cwe xml injection vulnerability in class element methods addelement addattribute that can result in an attacker tampering with xml documents through xml injection this attack appear to be exploitable via an attacker specifying attributes or elements in the xml document this vulnerability appears to have been fixed in or later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org step up your open source security game with whitesource
| 0
|
17,734
| 23,649,379,954
|
IssuesEvent
|
2022-08-26 04:10:22
|
googleapis/google-cloud-go
|
https://api.github.com/repos/googleapis/google-cloud-go
|
closed
|
storage: re-enable OLM tests once generated library breaking change is released
|
api: storage type: process
|
Storage generated client is releasing a breaking change to address `Age: 0` being sent to and from GCS otherwise we can't determine existence of value https://github.com/googleapis/google-api-go-client/pull/1598
|
1.0
|
storage: re-enable OLM tests once generated library breaking change is released - Storage generated client is releasing a breaking change to address `Age: 0` being sent to and from GCS otherwise we can't determine existence of value https://github.com/googleapis/google-api-go-client/pull/1598
|
process
|
storage re enable olm tests once generated library breaking change is released storage generated client is releasing a breaking change to address age being sent to and from gcs otherwise we can t determine existence of value
| 1
|
5,860
| 8,681,910,568
|
IssuesEvent
|
2018-12-02 01:21:19
|
lightningWhite/weatherLearning
|
https://api.github.com/repos/lightningWhite/weatherLearning
|
closed
|
Create difference columns for each attribute
|
dataProcessing
|
For each attribute, we need to calculate the difference between the current value and what it was x-hrs. in the past and create a column containing that data, where x-hrs is the time difference decided on in issue 3.
We also need another column for the target value. This will need to be the weather condition x-hours in the future.
|
1.0
|
Create difference columns for each attribute - For each attribute, we need to calculate the difference between the current value and what it was x-hrs. in the past and create a column containing that data, where x-hrs is the time difference decided on in issue 3.
We also need another column for the target value. This will need to be the weather condition x-hours in the future.
|
process
|
create difference columns for each attribute for each attribute we need to calculate the difference between the current value and what it was x hrs in the past and create a column containing that data where x hrs is the time difference decided on in issue we also need another column for the target value this will need to be the weather condition x hours in the future
| 1
|
8,385
| 11,545,410,311
|
IssuesEvent
|
2020-02-18 13:23:54
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
[Introspection] Use @map for problematic ENUM values
|
bug/2-confirmed kind/bug process/next-milestone topic: introspection
|
- `Unexpected token. Expected one of: End of block ('}'), enum field declaration.`
---
sakila schema
```
"error: Unexpected token. Expected one of: End of block ('}'), enum field declaration.
--> schema.prisma:180
|
179 | G
180 | NC-17
| ^ Unexpected token.
181 | PG
| "
```
|
1.0
|
[Introspection] Use @map for problematic ENUM values - - `Unexpected token. Expected one of: End of block ('}'), enum field declaration.`
---
sakila schema
```
"error: Unexpected token. Expected one of: End of block ('}'), enum field declaration.
--> schema.prisma:180
|
179 | G
180 | NC-17
| ^ Unexpected token.
181 | PG
| "
```
|
process
|
use map for problematic enum values unexpected token expected one of end of block enum field declaration sakila schema error unexpected token expected one of end of block enum field declaration schema prisma g nc unexpected token pg
| 1
|
272,209
| 8,500,256,300
|
IssuesEvent
|
2018-10-29 19:21:08
|
Sage-Bionetworks/Agora
|
https://api.github.com/repos/Sage-Bionetworks/Agora
|
closed
|
Summary of Evidence tab: update category name
|
Nov 2018 SFN Milestone SfN final clean up moderate priority
|
Change "AD Genetic Association" to "Genetic Association with LOAD" (also updated in Figma)
<img width="786" alt="screen shot 2018-10-25 at 1 47 22 pm" src="https://user-images.githubusercontent.com/40642407/47529455-b5979080-d85c-11e8-8866-583b6e64379e.png">
|
1.0
|
Summary of Evidence tab: update category name - Change "AD Genetic Association" to "Genetic Association with LOAD" (also updated in Figma)
<img width="786" alt="screen shot 2018-10-25 at 1 47 22 pm" src="https://user-images.githubusercontent.com/40642407/47529455-b5979080-d85c-11e8-8866-583b6e64379e.png">
|
non_process
|
summary of evidence tab update category name change ad genetic association to genetic association with load also updated in figma img width alt screen shot at pm src
| 0
|
60,599
| 7,360,580,917
|
IssuesEvent
|
2018-03-10 20:00:46
|
vhirtham/GDL
|
https://api.github.com/repos/vhirtham/GDL
|
closed
|
Uniform arrays
|
design
|
Only the first member of an uniform array is returned by glGetProgramInterfaceiv when looking for uniforms. The name string contains the random access operator "[0]". Additionally it is not guaranteed that the handles of the following array elements are contiguous.
https://stackoverflow.com/questions/32154710/are-glgetuniformlocation-indices-for-arrays-of-uniforms-guaranteed-sequential-ex
So you cant get the location of an array element by simply adding its index to the first elements location. Therefore we have to find every elements location.
Now the question is, how we want to treat/store arrays in general. there are a few options:
1. Simply add an entry for each element to the uniform map. The size parameter stores how many elements are left until the arrays end. Therefore error checking when setting more than one array element via glProgramUniform1fv is quiet easy. Just compare the number of variables to set with the size attribute of the uniform. (maybe rename the attribute to Arraysize). The length of the full array is obtained by just checking the size value of the first element e.g. "myArray[0]"
2. Basically the same as 1 but removing the size attribute and query for the size by counting the number of identical names in the map.
3. Use an own uniform array class with its own map. It stores the locations of each member in a vector and might get a nice interface by overloading the random access operator.
|
1.0
|
Uniform arrays - Only the first member of an uniform array is returned by glGetProgramInterfaceiv when looking for uniforms. The name string contains the random access operator "[0]". Additionally it is not guaranteed that the handles of the following array elements are contiguous.
https://stackoverflow.com/questions/32154710/are-glgetuniformlocation-indices-for-arrays-of-uniforms-guaranteed-sequential-ex
So you cant get the location of an array element by simply adding its index to the first elements location. Therefore we have to find every elements location.
Now the question is, how we want to treat/store arrays in general. there are a few options:
1. Simply add an entry for each element to the uniform map. The size parameter stores how many elements are left until the arrays end. Therefore error checking when setting more than one array element via glProgramUniform1fv is quiet easy. Just compare the number of variables to set with the size attribute of the uniform. (maybe rename the attribute to Arraysize). The length of the full array is obtained by just checking the size value of the first element e.g. "myArray[0]"
2. Basically the same as 1 but removing the size attribute and query for the size by counting the number of identical names in the map.
3. Use an own uniform array class with its own map. It stores the locations of each member in a vector and might get a nice interface by overloading the random access operator.
|
non_process
|
uniform arrays only the first member of an uniform array is returned by glgetprograminterfaceiv when looking for uniforms the name string contains the random access operator additionally it is not guaranteed that the handles of the following array elements are contiguous so you cant get the location of an array element by simply adding its index to the first elements location therefore we have to find every elements location now the question is how we want to treat store arrays in general there are a few options simply add an entry for each element to the uniform map the size parameter stores how many elements are left until the arrays end therefore error checking when setting more than one array element via is quiet easy just compare the number of variables to set with the size attribute of the uniform maybe rename the attribute to arraysize the length of the full array is obtained by just checking the size value of the first element e g myarray basically the same as but removing the size attribute and query for the size by counting the number of identical names in the map use an own uniform array class with its own map it stores the locations of each member in a vector and might get a nice interface by overloading the random access operator
| 0
|
12,092
| 14,740,078,824
|
IssuesEvent
|
2021-01-07 08:28:47
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Portland - SA Billing - Late Fee Account List
|
anc-process anp-important ant-bug has attachment
|
In GitLab by @kdjstudios on Oct 3, 2018, 11:08
[Portland.xlsx](/uploads/c131a00df52ca6fe276a2860454240f2/Portland.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-78551/conversation
|
1.0
|
Portland - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 3, 2018, 11:08
[Portland.xlsx](/uploads/c131a00df52ca6fe276a2860454240f2/Portland.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-78551/conversation
|
process
|
portland sa billing late fee account list in gitlab by kdjstudios on oct uploads portland xlsx hd
| 1
|
772,149
| 27,108,525,993
|
IssuesEvent
|
2023-02-15 13:51:26
|
kubernetes/ingress-nginx
|
https://api.github.com/repos/kubernetes/ingress-nginx
|
closed
|
4.4.3 had random indentation in values.yaml
|
kind/bug needs-triage needs-priority
|
The now rolled back (?) 4.4.3 had indentation changed to 4 spaces instead of 2 spaces like this:
```yaml
controller:
name: controller
image:
## Keep false as default for now!
chroot: false
```
+ also removed / added new lines which makes it super hard to diff my values.yml vs origin.
|
1.0
|
4.4.3 had random indentation in values.yaml - The now rolled back (?) 4.4.3 had indentation changed to 4 spaces instead of 2 spaces like this:
```yaml
controller:
name: controller
image:
## Keep false as default for now!
chroot: false
```
+ also removed / added new lines which makes it super hard to diff my values.yml vs origin.
|
non_process
|
had random indentation in values yaml the now rolled back had indentation changed to spaces instead of spaces like this yaml controller name controller image keep false as default for now chroot false also removed added new lines which makes it super hard to diff my values yml vs origin
| 0
|
21,288
| 28,484,544,752
|
IssuesEvent
|
2023-04-18 06:52:27
|
camunda/issues
|
https://api.github.com/repos/camunda/issues
|
closed
|
BPMN Signal Events(5): Signal name as an expression
|
component:desktopModeler component:operate component:webModeler component:zeebe-process-automation public kind:epic feature-parity potential:8.3
|
### Value Proposition Statement
<!-- Value Proposition Statement. Example: Discoverable and reusable integrations in Web Modeler -->
### User Problem
<!-- User Problem. Describe “why” we should do this by stating out the user problem. Example: Currently, when I implement a custom integration (as a Job Worker), I need to know the properties to use when I want to call it from a process. I need to know the type, as well as the variables it requires as inputs and the variables it produces as outputs. When I implement the job worker myself and/or have access to the code, I know all this and can use it. However, if I want to allow others to use the job worker, I need to produce additional documentation for them. It would be great if I could make the job worker discoverable and reusable by others in the tool. -->
### User Stories
<!-- User Stories. In the discovery phase, they should describe mainly the “_what_“. Later, in the define and implement phases, they also describe the “_how_“. For example:
* As a developer, In the web modeler, I can create a template
* As a developer, when using the web modeler modeling canvas, I can see all the templates that I have created when defining the type of a new task
* As a developer, I can define an icon for a template
* _Added in the define phase:_
* As a developer, when I select a template as task type, the template is applied to the properties panel
* -->
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Validation Criteria
<!-- Criteria indicating success, for example metrics collected using telemetry. To be used during Validation phase to evaluate the success of the overall Epic. -->
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: {date}
* Designer assigned: {Yes, No Design Necessary, or No Designer Available}
* Assignee:
* Design Brief - {link to design brief }
* Research Brief - {link to research brief }
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
<!-- Example: link to relevant support cases -->
|
1.0
|
BPMN Signal Events(5): Signal name as an expression - ### Value Proposition Statement
<!-- Value Proposition Statement. Example: Discoverable and reusable integrations in Web Modeler -->
### User Problem
<!-- User Problem. Describe “why” we should do this by stating out the user problem. Example: Currently, when I implement a custom integration (as a Job Worker), I need to know the properties to use when I want to call it from a process. I need to know the type, as well as the variables it requires as inputs and the variables it produces as outputs. When I implement the job worker myself and/or have access to the code, I know all this and can use it. However, if I want to allow others to use the job worker, I need to produce additional documentation for them. It would be great if I could make the job worker discoverable and reusable by others in the tool. -->
### User Stories
<!-- User Stories. In the discovery phase, they should describe mainly the “_what_“. Later, in the define and implement phases, they also describe the “_how_“. For example:
* As a developer, In the web modeler, I can create a template
* As a developer, when using the web modeler modeling canvas, I can see all the templates that I have created when defining the type of a new task
* As a developer, I can define an icon for a template
* _Added in the define phase:_
* As a developer, when I select a template as task type, the template is applied to the properties panel
* -->
### Implementation Notes
<!-- Notes to consider for implementation, for example:
* In Cawemo we already have the capability to manage templates via the feature that we call “catalog”
* What we would build now is the ability to a) use this feature in the web modeler to create templates and b) when the context pad opens for defining the type of a task, the templates that decorate service tasks are shown
* We should clarify terminology (integrations vs. connectors vs. job workers vs. element templates.) Particularly “element templates” might not be a term that a user intuitively understands.
* See these high level wireframes to capture the idea -->
### Validation Criteria
<!-- Criteria indicating success, for example metrics collected using telemetry. To be used during Validation phase to evaluate the success of the overall Epic. -->
### Breakdown
> This section links to various sub-issues / -tasks contributing to respective epic phase or phase results where appropriate.
#### Discovery phase ##
<!-- Example: link to "Conduct customer interview with xyz" -->
#### Define phase ##
<!-- Consider: UI, UX, technical design, documentation design -->
<!-- Example: link to "Define User-Journey Flow" or "Define target architecture" -->
Design Planning
* Reviewed by design: {date}
* Designer assigned: {Yes, No Design Necessary, or No Designer Available}
* Assignee:
* Design Brief - {link to design brief }
* Research Brief - {link to research brief }
Design Deliverables
* {Deliverable Name} {Link to GH Issue}
Documentation Planning
<!-- Complex changes must be reviewed during the Define phase by the DRI of Documentation or technical writer. -->
<!-- Briefly describe the anticipated impact to documentation. -->
<!-- Example: "Creates structural changes in docs as UX is reworked." _Add docs reviewer to Epic for feedback._ -->
Risk Management <!-- add link to risk management issue -->
* Risk Class: <!-- e.g. very low | low | medium | high | very high -->
* Risk Treatment: <!-- e.g. avoid | mitigate | transfer | accept -->
#### Implement phase ##
<!-- Example: link to "Implement User Story xyz". Should not only include core implementation, but also documentation. -->
#### Validate phase ##
<!-- Example: link to "Evaluate usage data of last quarter" -->
### Links to additional collateral
<!-- Example: link to relevant support cases -->
|
process
|
bpmn signal events signal name as an expression value proposition statement user problem user stories user stories in the discovery phase they should describe mainly the “ what “ later in the define and implement phases they also describe the “ how “ for example as a developer in the web modeler i can create a template as a developer when using the web modeler modeling canvas i can see all the templates that i have created when defining the type of a new task as a developer i can define an icon for a template added in the define phase as a developer when i select a template as task type the template is applied to the properties panel implementation notes notes to consider for implementation for example in cawemo we already have the capability to manage templates via the feature that we call “catalog” what we would build now is the ability to a use this feature in the web modeler to create templates and b when the context pad opens for defining the type of a task the templates that decorate service tasks are shown we should clarify terminology integrations vs connectors vs job workers vs element templates particularly “element templates” might not be a term that a user intuitively understands see these high level wireframes to capture the idea validation criteria breakdown this section links to various sub issues tasks contributing to respective epic phase or phase results where appropriate discovery phase define phase design planning reviewed by design date designer assigned yes no design necessary or no designer available assignee design brief link to design brief research brief link to research brief design deliverables deliverable name link to gh issue documentation planning risk management risk class risk treatment implement phase validate phase links to additional collateral
| 1
|
18,597
| 24,571,278,377
|
IssuesEvent
|
2022-10-13 08:50:44
|
aiidateam/plumpy
|
https://api.github.com/repos/aiidateam/plumpy
|
closed
|
Exceptions raised during state transition interrupt the state transition flow
|
type/bug priority/critical topic/processes
|
When a `Process` is in a state transition, an uncaught exception that is raised will cause the entire process to topple and will end up in an inconsistent state. I would argue that this should simply transition from the current state transition to the excepted state transition. This might require a bit of a change in the allowed transitions between various state. One can imagine an exception being raised in `on_entered` meaning the process has finished transitioning to the new state. If this happens to be `EXCEPTED` and an exception is caught here, technically the process needs to transition from `EXCEPTED` to `EXCEPTED` which I believe according to the current schema is not allowed.
|
1.0
|
Exceptions raised during state transition interrupt the state transition flow - When a `Process` is in a state transition, an uncaught exception that is raised will cause the entire process to topple and will end up in an inconsistent state. I would argue that this should simply transition from the current state transition to the excepted state transition. This might require a bit of a change in the allowed transitions between various state. One can imagine an exception being raised in `on_entered` meaning the process has finished transitioning to the new state. If this happens to be `EXCEPTED` and an exception is caught here, technically the process needs to transition from `EXCEPTED` to `EXCEPTED` which I believe according to the current schema is not allowed.
|
process
|
exceptions raised during state transition interrupt the state transition flow when a process is in a state transition an uncaught exception that is raised will cause the entire process to topple and will end up in an inconsistent state i would argue that this should simply transition from the current state transition to the excepted state transition this might require a bit of a change in the allowed transitions between various state one can imagine an exception being raised in on entered meaning the process has finished transitioning to the new state if this happens to be excepted and an exception is caught here technically the process needs to transition from excepted to excepted which i believe according to the current schema is not allowed
| 1
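The plumpy record above argues that an exception raised mid-transition should route the process to `EXCEPTED`, which requires allowing an `EXCEPTED` → `EXCEPTED` self-transition. A minimal sketch of that schema change (not plumpy's actual implementation; the class and table names here are illustrative):

```python
# Transition table with the self-transition the issue argues for:
# any state may move to EXCEPTED, including EXCEPTED itself.

EXCEPTED = "EXCEPTED"
RUNNING = "RUNNING"
FINISHED = "FINISHED"

ALLOWED = {
    RUNNING: {FINISHED, EXCEPTED},
    FINISHED: {EXCEPTED},
    EXCEPTED: {EXCEPTED},  # EXCEPTED -> EXCEPTED self-transition
}

class Process:
    def __init__(self):
        self.state = RUNNING

    def on_entered(self, state):
        # Simulate the problem case: the entry hook itself raises
        # while the process is entering EXCEPTED.
        if state == EXCEPTED:
            raise RuntimeError("hook failed while entering EXCEPTED")

    def transition_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state
        try:
            self.on_entered(new_state)
        except Exception:
            # Instead of toppling the process into an inconsistent state,
            # the uncaught hook exception becomes an EXCEPTED transition;
            # this is legal only because ALLOWED permits the self-transition.
            self.state = EXCEPTED

p = Process()
p.transition_to(EXCEPTED)   # on_entered raises, process still lands in EXCEPTED
assert p.state == EXCEPTED
```

With the current schema, which the record says forbids `EXCEPTED` → `EXCEPTED`, the `except` branch above would itself be an illegal transition, which is exactly the inconsistency the issue describes.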
|
636,682
| 20,605,744,015
|
IssuesEvent
|
2022-03-06 23:22:54
|
code-ready/crc
|
https://api.github.com/repos/code-ready/crc
|
reopened
|
[Dev] crc console with podman preset fails
|
kind/bug size/S priority/minor status/stale podman
|
Executing `crc console` for podman preset fails with cryptic message.
```
$ crc config
- preset : podman
$ crc start
[...]
$ ./crc console --log-level debug
DEBU CodeReady Containers version: 1.34.0+09227706
DEBU OpenShift version: 4.9.5 (not embedded in executable)
DEBU Running 'crc console'
DEBU Checking file: /home/prkumar/.crc/machines/crc/.crc-exist
DEBU Found binary path at /home/prkumar/.crc/bin/crc-driver-libvirt
DEBU Launching plugin server for driver libvirt
DEBU Plugin server listening at address 127.0.0.1:46871
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Fetching VM..."
DEBU (crc) Calling .GetBundleName
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Closing plugin on server side"
Error loading cluster configuration: read /home/prkumar/.crc/cache/crc_podman_libvirt_3.4.1: is a directory
```
|
1.0
|
[Dev] crc console with podman preset fails - Executing `crc console` for podman preset fails with cryptic message.
```
$ crc config
- preset : podman
$ crc start
[...]
$ ./crc console --log-level debug
DEBU CodeReady Containers version: 1.34.0+09227706
DEBU OpenShift version: 4.9.5 (not embedded in executable)
DEBU Running 'crc console'
DEBU Checking file: /home/prkumar/.crc/machines/crc/.crc-exist
DEBU Found binary path at /home/prkumar/.crc/bin/crc-driver-libvirt
DEBU Launching plugin server for driver libvirt
DEBU Plugin server listening at address 127.0.0.1:46871
DEBU () Calling .GetVersion
DEBU Using API Version 1
DEBU () Calling .SetConfigRaw
DEBU () Calling .GetMachineName
DEBU (crc) Calling .GetState
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Getting current state..."
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Fetching VM..."
DEBU (crc) Calling .GetBundleName
DEBU Making call to close driver server
DEBU (crc) Calling .Close
DEBU Successfully made call to close driver server
DEBU Making call to close connection to plugin binary
DEBU (crc) DBG | time="2021-11-30T17:01:52+05:30" level=debug msg="Closing plugin on server side"
Error loading cluster configuration: read /home/prkumar/.crc/cache/crc_podman_libvirt_3.4.1: is a directory
```
|
non_process
|
crc console with podman preset fails executing crc console for podman preset fails with cryptic message crc config preset podman crc start crc console log level debug debu codeready containers version debu openshift version not embedded in executable debu running crc console debu checking file home prkumar crc machines crc crc exist debu found binary path at home prkumar crc bin crc driver libvirt debu launching plugin server for driver libvirt debu plugin server listening at address debu calling getversion debu using api version debu calling setconfigraw debu calling getmachinename debu crc calling getstate debu crc dbg time level debug msg getting current state debu crc dbg time level debug msg fetching vm debu crc calling getbundlename debu making call to close driver server debu crc calling close debu successfully made call to close driver server debu making call to close connection to plugin binary debu crc dbg time level debug msg closing plugin on server side error loading cluster configuration read home prkumar crc cache crc podman libvirt is a directory
| 0
|
68,279
| 28,311,354,230
|
IssuesEvent
|
2023-04-10 15:40:55
|
amplication/amplication
|
https://api.github.com/repos/amplication/amplication
|
closed
|
🐛 Bug Report: Service folder should be in kebab case
|
type: bug epic: Service Creation
|
### What happened?

Service folder contains spaces
### What you expected to happen
service name should be written in kebab case
### How to reproduce
create service
on repository step see service folder
### Amplication version
_No response_
### Environment
sandbox
### Are you willing to submit PR?
_No response_
|
1.0
|
🐛 Bug Report: Service folder should be in kebab case - ### What happened?

Service folder contains spaces
### What you expected to happen
service name should be written in kebab case
### How to reproduce
create service
on repository step see service folder
### Amplication version
_No response_
### Environment
sandbox
### Are you willing to submit PR?
_No response_
|
non_process
|
🐛 bug report service folder should be in kebab case what happened service folder contains spaces what you expected to happen service name should be written in kebab case how to reproduce create service on repository step see service folder amplication version no response environment sandbox are you willing to submit pr no response
| 0
|
427,211
| 12,393,858,097
|
IssuesEvent
|
2020-05-20 16:02:22
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
closed
|
Synthesis failed for CloudDebugger
|
api: clouddebugger autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate CloudDebugger. :broken_heart:
Here's the output from running `synth.py`:
```
led to remove deps/parse_trans/ebin/parse_trans.app: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_mod.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_codegen.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/ct_expand.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/exprecs.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_pp.beam: Permission denied
warning: failed to remove deps/parse_trans/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/parse_trans/hex_metadata.config: Permission denied
warning: failed to remove deps/parse_trans/README.md: Permission denied
warning: failed to remove deps/parse_trans/rebar.config: Permission denied
warning: failed to remove deps/parse_trans/include/codegen.hrl: Permission denied
warning: failed to remove deps/parse_trans/include/exprecs.hrl: Permission denied
warning: failed to remove deps/parse_trans/.fetch: Permission denied
warning: failed to remove deps/parse_trans/.hex: Permission denied
warning: failed to remove deps/idna/LICENSE: Permission denied
warning: failed to remove deps/idna/rebar.lock: Permission denied
warning: failed to remove deps/idna/src/idna.erl: Permission denied
warning: failed to remove deps/idna/src/idna_logger.hrl: Permission denied
warning: failed to remove deps/idna/src/idna_ucs.erl: Permission denied
warning: failed to remove deps/idna/src/punycode.erl: Permission denied
warning: failed to remove deps/idna/src/idna_table.erl: Permission denied
warning: failed to remove deps/idna/src/idna_context.erl: Permission denied
warning: failed to remove deps/idna/src/idna.app.src: Permission denied
warning: failed to remove deps/idna/src/idna_mapping.erl: Permission denied
warning: failed to remove deps/idna/src/idna_data.erl: Permission denied
warning: failed to remove deps/idna/src/idna_bidi.erl: Permission denied
warning: failed to remove deps/idna/ebin/idna_mapping.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_context.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_bidi.beam: Permission denied
warning: failed to remove deps/idna/ebin/punycode.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_table.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_data.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_ucs.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna.app: Permission denied
warning: failed to remove deps/idna/ebin/idna.beam: Permission denied
warning: failed to remove deps/idna/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/idna/hex_metadata.config: Permission denied
warning: failed to remove deps/idna/README.md: Permission denied
warning: failed to remove deps/idna/rebar.config: Permission denied
warning: failed to remove deps/idna/.fetch: Permission denied
warning: failed to remove deps/idna/rebar.config.script: Permission denied
warning: failed to remove deps/idna/.hex: Permission denied
warning: failed to remove deps/hackney/MAINTAINERS: Permission denied
warning: failed to remove deps/hackney/LICENSE: Permission denied
warning: failed to remove deps/hackney/rebar.lock: Permission denied
warning: failed to remove deps/hackney/src/hackney_ssl.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_response.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_cookie.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_url.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.app.src: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool_handler.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_trace.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_multipart.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers_new.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_util.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_socks5.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_request.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_app.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_internal.hrl: Permission denied
warning: failed to remove deps/hackney/src/hackney_date.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_manager.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_bstr.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_sup.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_local_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_stream.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_metrics.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_methods.hrl: Permission denied
warning: failed to remove deps/hackney/NOTICE: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_trace.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool_handler.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_url.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_manager.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_metrics.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_stream.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_sup.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_multipart.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_socks5.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_app.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_response.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.app: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers_new.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_cookie.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_request.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_util.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_date.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_ssl.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_bstr.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_tcp.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_local_tcp.beam: Permission denied
warning: failed to remove deps/hackney/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/hackney/hex_metadata.config: Permission denied
warning: failed to remove deps/hackney/README.md: Permission denied
warning: failed to remove deps/hackney/rebar.config: Permission denied
warning: failed to remove deps/hackney/include/hackney.hrl: Permission denied
warning: failed to remove deps/hackney/include/hackney_lib.hrl: Permission denied
warning: failed to remove deps/hackney/.fetch: Permission denied
warning: failed to remove deps/hackney/.hex: Permission denied
warning: failed to remove deps/hackney/NEWS.md: Permission denied
Removing __pycache__/
Removing specifications/gdd/CloudDebugger-v2.json
Traceback (most recent call last):
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 559, in _inner_main
sys.exit(EXIT_CODE_SKIPPED)
SystemExit: 28
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 611, in _inner_main
executor.check_call(["git", "clean", "-fdx"], cwd=working_repo_path)
File "/tmpfs/src/github/synthtool/autosynth/executor.py", line 29, in check_call
subprocess.check_call(command, **args)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'clean', '-fdx']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge/c19bc74d-4d42-4305-94c3-323bf90f3ebc).
|
1.0
|
Synthesis failed for CloudDebugger - Hello! Autosynth couldn't regenerate CloudDebugger. :broken_heart:
Here's the output from running `synth.py`:
```
led to remove deps/parse_trans/ebin/parse_trans.app: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_mod.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_codegen.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/ct_expand.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/exprecs.beam: Permission denied
warning: failed to remove deps/parse_trans/ebin/parse_trans_pp.beam: Permission denied
warning: failed to remove deps/parse_trans/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/parse_trans/hex_metadata.config: Permission denied
warning: failed to remove deps/parse_trans/README.md: Permission denied
warning: failed to remove deps/parse_trans/rebar.config: Permission denied
warning: failed to remove deps/parse_trans/include/codegen.hrl: Permission denied
warning: failed to remove deps/parse_trans/include/exprecs.hrl: Permission denied
warning: failed to remove deps/parse_trans/.fetch: Permission denied
warning: failed to remove deps/parse_trans/.hex: Permission denied
warning: failed to remove deps/idna/LICENSE: Permission denied
warning: failed to remove deps/idna/rebar.lock: Permission denied
warning: failed to remove deps/idna/src/idna.erl: Permission denied
warning: failed to remove deps/idna/src/idna_logger.hrl: Permission denied
warning: failed to remove deps/idna/src/idna_ucs.erl: Permission denied
warning: failed to remove deps/idna/src/punycode.erl: Permission denied
warning: failed to remove deps/idna/src/idna_table.erl: Permission denied
warning: failed to remove deps/idna/src/idna_context.erl: Permission denied
warning: failed to remove deps/idna/src/idna.app.src: Permission denied
warning: failed to remove deps/idna/src/idna_mapping.erl: Permission denied
warning: failed to remove deps/idna/src/idna_data.erl: Permission denied
warning: failed to remove deps/idna/src/idna_bidi.erl: Permission denied
warning: failed to remove deps/idna/ebin/idna_mapping.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_context.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_bidi.beam: Permission denied
warning: failed to remove deps/idna/ebin/punycode.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_table.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_data.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna_ucs.beam: Permission denied
warning: failed to remove deps/idna/ebin/idna.app: Permission denied
warning: failed to remove deps/idna/ebin/idna.beam: Permission denied
warning: failed to remove deps/idna/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/idna/hex_metadata.config: Permission denied
warning: failed to remove deps/idna/README.md: Permission denied
warning: failed to remove deps/idna/rebar.config: Permission denied
warning: failed to remove deps/idna/.fetch: Permission denied
warning: failed to remove deps/idna/rebar.config.script: Permission denied
warning: failed to remove deps/idna/.hex: Permission denied
warning: failed to remove deps/hackney/MAINTAINERS: Permission denied
warning: failed to remove deps/hackney/LICENSE: Permission denied
warning: failed to remove deps/hackney/rebar.lock: Permission denied
warning: failed to remove deps/hackney/src/hackney_ssl.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_response.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_cookie.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_url.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.app.src: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool_handler.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_trace.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_multipart.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_headers_new.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_http_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_util.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_socks5.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_request.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_app.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_internal.hrl: Permission denied
warning: failed to remove deps/hackney/src/hackney_date.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_manager.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_connect.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_bstr.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_sup.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_local_tcp.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_stream.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_pool.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_metrics.erl: Permission denied
warning: failed to remove deps/hackney/src/hackney_methods.hrl: Permission denied
warning: failed to remove deps/hackney/NOTICE: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_trace.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_pool_handler.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_url.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_manager.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_metrics.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_stream.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_sup.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_multipart.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_socks5.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_app.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_http_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_response.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney.app: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_headers_new.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_cookie.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_request.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_util.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_connect.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_date.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_ssl.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_bstr.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_tcp.beam: Permission denied
warning: failed to remove deps/hackney/ebin/hackney_local_tcp.beam: Permission denied
warning: failed to remove deps/hackney/.rebar3/erlcinfo: Permission denied
warning: failed to remove deps/hackney/hex_metadata.config: Permission denied
warning: failed to remove deps/hackney/README.md: Permission denied
warning: failed to remove deps/hackney/rebar.config: Permission denied
warning: failed to remove deps/hackney/include/hackney.hrl: Permission denied
warning: failed to remove deps/hackney/include/hackney_lib.hrl: Permission denied
warning: failed to remove deps/hackney/.fetch: Permission denied
warning: failed to remove deps/hackney/.hex: Permission denied
warning: failed to remove deps/hackney/NEWS.md: Permission denied
Removing __pycache__/
Removing specifications/gdd/CloudDebugger-v2.json
Traceback (most recent call last):
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 559, in _inner_main
sys.exit(EXIT_CODE_SKIPPED)
SystemExit: 28
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 615, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 476, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 611, in _inner_main
executor.check_call(["git", "clean", "-fdx"], cwd=working_repo_path)
File "/tmpfs/src/github/synthtool/autosynth/executor.py", line 29, in check_call
subprocess.check_call(command, **args)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['git', 'clean', '-fdx']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge/c19bc74d-4d42-4305-94c3-323bf90f3ebc).
|
non_process
|
synthesis failed for clouddebugger hello autosynth couldn t regenerate clouddebugger broken heart here s the output from running synth py led to remove deps parse trans ebin parse trans app permission denied warning failed to remove deps parse trans ebin parse trans mod beam permission denied warning failed to remove deps parse trans ebin parse trans codegen beam permission denied warning failed to remove deps parse trans ebin ct expand beam permission denied warning failed to remove deps parse trans ebin parse trans beam permission denied warning failed to remove deps parse trans ebin exprecs beam permission denied warning failed to remove deps parse trans ebin parse trans pp beam permission denied warning failed to remove deps parse trans erlcinfo permission denied warning failed to remove deps parse trans hex metadata config permission denied warning failed to remove deps parse trans readme md permission denied warning failed to remove deps parse trans rebar config permission denied warning failed to remove deps parse trans include codegen hrl permission denied warning failed to remove deps parse trans include exprecs hrl permission denied warning failed to remove deps parse trans fetch permission denied warning failed to remove deps parse trans hex permission denied warning failed to remove deps idna license permission denied warning failed to remove deps idna rebar lock permission denied warning failed to remove deps idna src idna erl permission denied warning failed to remove deps idna src idna logger hrl permission denied warning failed to remove deps idna src idna ucs erl permission denied warning failed to remove deps idna src punycode erl permission denied warning failed to remove deps idna src idna table erl permission denied warning failed to remove deps idna src idna context erl permission denied warning failed to remove deps idna src idna app src permission denied warning failed to remove deps idna src idna mapping erl permission denied warning failed 
to remove deps idna src idna data erl permission denied warning failed to remove deps idna src idna bidi erl permission denied warning failed to remove deps idna ebin idna mapping beam permission denied warning failed to remove deps idna ebin idna context beam permission denied warning failed to remove deps idna ebin idna bidi beam permission denied warning failed to remove deps idna ebin punycode beam permission denied warning failed to remove deps idna ebin idna table beam permission denied warning failed to remove deps idna ebin idna data beam permission denied warning failed to remove deps idna ebin idna ucs beam permission denied warning failed to remove deps idna ebin idna app permission denied warning failed to remove deps idna ebin idna beam permission denied warning failed to remove deps idna erlcinfo permission denied warning failed to remove deps idna hex metadata config permission denied warning failed to remove deps idna readme md permission denied warning failed to remove deps idna rebar config permission denied warning failed to remove deps idna fetch permission denied warning failed to remove deps idna rebar config script permission denied warning failed to remove deps idna hex permission denied warning failed to remove deps hackney maintainers permission denied warning failed to remove deps hackney license permission denied warning failed to remove deps hackney rebar lock permission denied warning failed to remove deps hackney src hackney ssl erl permission denied warning failed to remove deps hackney src hackney response erl permission denied warning failed to remove deps hackney src hackney tcp erl permission denied warning failed to remove deps hackney src hackney http erl permission denied warning failed to remove deps hackney src hackney cookie erl permission denied warning failed to remove deps hackney src hackney url erl permission denied warning failed to remove deps hackney src hackney headers erl permission denied warning failed to remove 
deps hackney src hackney app src permission denied warning failed to remove deps hackney src hackney pool handler erl permission denied warning failed to remove deps hackney src hackney trace erl permission denied warning failed to remove deps hackney src hackney multipart erl permission denied warning failed to remove deps hackney src hackney headers new erl permission denied warning failed to remove deps hackney src hackney http connect erl permission denied warning failed to remove deps hackney src hackney util erl permission denied warning failed to remove deps hackney src hackney erl permission denied warning failed to remove deps hackney src hackney request erl permission denied warning failed to remove deps hackney src hackney app erl permission denied warning failed to remove deps hackney src hackney internal hrl permission denied warning failed to remove deps hackney src hackney date erl permission denied warning failed to remove deps hackney src hackney manager erl permission denied warning failed to remove deps hackney src hackney connect erl permission denied warning failed to remove deps hackney src hackney bstr erl permission denied warning failed to remove deps hackney src hackney sup erl permission denied warning failed to remove deps hackney src hackney erl permission denied warning failed to remove deps hackney src hackney local tcp erl permission denied warning failed to remove deps hackney src hackney stream erl permission denied warning failed to remove deps hackney src hackney pool erl permission denied warning failed to remove deps hackney src hackney metrics erl permission denied warning failed to remove deps hackney src hackney methods hrl permission denied warning failed to remove deps hackney notice permission denied warning failed to remove deps hackney ebin hackney pool beam permission denied warning failed to remove deps hackney ebin hackney trace beam permission denied warning failed to remove deps hackney ebin hackney pool handler 
beam permission denied warning failed to remove deps hackney ebin hackney beam permission denied warning failed to remove deps hackney ebin hackney headers beam permission denied warning failed to remove deps hackney ebin hackney url beam permission denied warning failed to remove deps hackney ebin hackney manager beam permission denied warning failed to remove deps hackney ebin hackney metrics beam permission denied warning failed to remove deps hackney ebin hackney stream beam permission denied warning failed to remove deps hackney ebin hackney sup beam permission denied warning failed to remove deps hackney ebin hackney multipart beam permission denied warning failed to remove deps hackney ebin hackney http beam permission denied warning failed to remove deps hackney ebin hackney beam permission denied warning failed to remove deps hackney ebin hackney app beam permission denied warning failed to remove deps hackney ebin hackney http connect beam permission denied warning failed to remove deps hackney ebin hackney response beam permission denied warning failed to remove deps hackney ebin hackney app permission denied warning failed to remove deps hackney ebin hackney headers new beam permission denied warning failed to remove deps hackney ebin hackney cookie beam permission denied warning failed to remove deps hackney ebin hackney request beam permission denied warning failed to remove deps hackney ebin hackney util beam permission denied warning failed to remove deps hackney ebin hackney connect beam permission denied warning failed to remove deps hackney ebin hackney date beam permission denied warning failed to remove deps hackney ebin hackney ssl beam permission denied warning failed to remove deps hackney ebin hackney bstr beam permission denied warning failed to remove deps hackney ebin hackney tcp beam permission denied warning failed to remove deps hackney ebin hackney local tcp beam permission denied warning failed to remove deps hackney erlcinfo 
permission denied warning failed to remove deps hackney hex metadata config permission denied warning failed to remove deps hackney readme md permission denied warning failed to remove deps hackney rebar config permission denied warning failed to remove deps hackney include hackney hrl permission denied warning failed to remove deps hackney include hackney lib hrl permission denied warning failed to remove deps hackney fetch permission denied warning failed to remove deps hackney hex permission denied warning failed to remove deps hackney news md permission denied removing pycache removing specifications gdd clouddebugger json traceback most recent call last file tmpfs src github synthtool autosynth synth py line in inner main sys exit exit code skipped systemexit during handling of the above exception another exception occurred traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main executor check call cwd working repo path file tmpfs src github synthtool autosynth executor py line in check call subprocess check call command args file home kbuilder pyenv versions lib subprocess py line in check call raise calledprocesserror retcode cmd subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
| 0
|
14,086
| 3,223,353,565
|
IssuesEvent
|
2015-10-09 09:25:47
|
dotnet/roslyn
|
https://api.github.com/repos/dotnet/roslyn
|
closed
|
[C# Feature] Support more feature for LINQ query
|
Area-Language Design Feature Request Language-C#
|
In some complex situation, the LINQ query will be loss the readability and hard to maintain.
so it would be better if the LINQ query syntax improved like below.
allow more condition expression in the join clause. the **equals** expression is the only choice nowdays.
```csharp
from l in EntityList
join r in EntityList on l.ParentId == r.ParentId && l.Id > r.Id
select r
```
allow left join clause in LINQ to let the query more simplify .
```csharp
from l in EntityList
left join r in EntityList on l.ParentId == r.ParentId && l.Id > r.Id
select r
```
--
I'm not good at English, but hope you guys understand what I'm saying. :smile:
|
1.0
|
[C# Feature] Support more feature for LINQ query - In some complex situation, the LINQ query will be loss the readability and hard to maintain.
so it would be better if the LINQ query syntax improved like below.
allow more condition expression in the join clause. the **equals** expression is the only choice nowdays.
```csharp
from l in EntityList
join r in EntityList on l.ParentId == r.ParentId && l.Id > r.Id
select r
```
allow left join clause in LINQ to let the query more simplify .
```csharp
from l in EntityList
left join r in EntityList on l.ParentId == r.ParentId && l.Id > r.Id
select r
```
--
I'm not good at English, but hope you guys understand what I'm saying. :smile:
|
non_process
|
support more feature for linq query in some complex situation the linq query will be loss the readability and hard to maintain so it would be better if the linq query syntax improved like below allow more condition expression in the join clause the equals expression is the only choice nowdays csharp from l in entitylist join r in entitylist on l parentid r parentid l id r id select r allow left join clause in linq to let the query more simplify csharp from l in entitylist left join r in entitylist on l parentid r parentid l id r id select r i m not good at english but hope you guys understand what i m saying smile
| 0
|
29,262
| 4,481,947,447
|
IssuesEvent
|
2016-08-29 02:21:47
|
cakephp/cakephp
|
https://api.github.com/repos/cakephp/cakephp
|
closed
|
(question) assertHtml() with nested tags
|
Question testing
|
API for the `assertHtml()` is very thin and I can not very well understand the regex.
I've this code:
```
<blockquote>
<p>First level</p>
<blockquote>
<p>Second level</p>
</blockquote>
<p>First level</p>
</blockquote>
```
How to properly write the `$expected` array, in this case?
I tried as well, but doesn't work:
```
$expected = [
['blockquote' => true],
['p' => true],
'First level',
'/p',
['blockquote' => true],
['p' => true],
'Second level',
'/p',
'/blockquote',
['p' => true],
'First level',
'/p',
'/blockquote',
];
```
Thanks.
|
1.0
|
(question) assertHtml() with nested tags - API for the `assertHtml()` is very thin and I can not very well understand the regex.
I've this code:
```
<blockquote>
<p>First level</p>
<blockquote>
<p>Second level</p>
</blockquote>
<p>First level</p>
</blockquote>
```
How to properly write the `$expected` array, in this case?
I tried as well, but doesn't work:
```
$expected = [
['blockquote' => true],
['p' => true],
'First level',
'/p',
['blockquote' => true],
['p' => true],
'Second level',
'/p',
'/blockquote',
['p' => true],
'First level',
'/p',
'/blockquote',
];
```
Thanks.
|
non_process
|
question asserthtml with nested tags api for the asserthtml is very thin and i can not very well understand the regex i ve this code first level second level first level how to properly write the expected array in this case i tried as well but doesn t work expected first level p second level p blockquote first level p blockquote thanks
| 0
|
117
| 2,549,991,090
|
IssuesEvent
|
2015-02-01 00:49:29
|
sysown/proxysql-0.2
|
https://api.github.com/repos/sysown/proxysql-0.2
|
closed
|
Crash when accessing admin interface when proxysql started witn -n
|
ADMIN bug QUERY PROCESSOR
|
```
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7fa9700 (LWP 29179)]
0x00007ffff38ea1ef in Standard_ProxySQL_Admin::stats___mysql_query_rules (this=0x7ffff5c1c2a0) at Standard_ProxySQL_Admin.cpp:1947
1947 SQLite3_result * resultset=GloQPro->get_stats_query_rules();
(gdb) bt
#0 0x00007ffff38ea1ef in Standard_ProxySQL_Admin::stats___mysql_query_rules (this=0x7ffff5c1c2a0) at Standard_ProxySQL_Admin.cpp:1947
#1 0x00007ffff38e636c in child_mysql (arg=0x7ffff5c21390) at Standard_ProxySQL_Admin.cpp:1093
#2 0x00007ffff7bc4182 in start_thread (arg=0x7ffff7fa9700) at pthread_create.c:312
#3 0x00007ffff699300d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
```
|
1.0
|
Crash when accessing admin interface when proxysql started witn -n - ```
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7fa9700 (LWP 29179)]
0x00007ffff38ea1ef in Standard_ProxySQL_Admin::stats___mysql_query_rules (this=0x7ffff5c1c2a0) at Standard_ProxySQL_Admin.cpp:1947
1947 SQLite3_result * resultset=GloQPro->get_stats_query_rules();
(gdb) bt
#0 0x00007ffff38ea1ef in Standard_ProxySQL_Admin::stats___mysql_query_rules (this=0x7ffff5c1c2a0) at Standard_ProxySQL_Admin.cpp:1947
#1 0x00007ffff38e636c in child_mysql (arg=0x7ffff5c21390) at Standard_ProxySQL_Admin.cpp:1093
#2 0x00007ffff7bc4182 in start_thread (arg=0x7ffff7fa9700) at pthread_create.c:312
#3 0x00007ffff699300d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
```
|
process
|
crash when accessing admin interface when proxysql started witn n program received signal sigsegv segmentation fault in standard proxysql admin stats mysql query rules this at standard proxysql admin cpp result resultset gloqpro get stats query rules gdb bt in standard proxysql admin stats mysql query rules this at standard proxysql admin cpp in child mysql arg at standard proxysql admin cpp in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s
| 1
|
181,174
| 6,656,935,466
|
IssuesEvent
|
2017-09-29 23:19:16
|
hatnote/montage
|
https://api.github.com/repos/hatnote/montage
|
closed
|
Adding a new coordinator to a campaign
|
coordinator view enhancement priority high
|
A coordinator should be able to add another user as a coordinator to a campaign.
|
1.0
|
Adding a new coordinator to a campaign - A coordinator should be able to add another user as a coordinator to a campaign.
|
non_process
|
adding a new coordinator to a campaign a coordinator should be able to add another user as a coordinator to a campaign
| 0
|
18,810
| 24,712,874,009
|
IssuesEvent
|
2022-10-20 03:18:30
|
TabooLib/chemdah
|
https://api.github.com/repos/TabooLib/chemdah
|
closed
|
[Suggestion] When using the FORCE_LOOK tag, allow the blindness effect to be optionally disabled
|
enhancement processed
|
In some cases (NPCs whose skins use dark or dark-toned imagery), applying the blindness effect can look visually poor.

When using the FORCE_LOOK tag, it should be possible to choose whether to disable the blindness effect while keeping the slowness effect.
|
1.0
|
[Suggestion] When using the FORCE_LOOK tag, allow the blindness effect to be optionally disabled - In some cases (NPCs whose skins use dark or dark-toned imagery), applying the blindness effect can look visually poor.

When using the FORCE_LOOK tag, it should be possible to choose whether to disable the blindness effect while keeping the slowness effect.
|
process
|
使用force look标签时 可选择关闭失明效果 在某些情况下(深色或暗色调皮肤形象的npc)应用失明效果可能视觉效果不佳 在使用force look标签时 可选择是否关闭失明效果 保留减速效果
| 1
|
5,615
| 20,205,153,526
|
IssuesEvent
|
2022-02-11 19:25:37
|
org-acme/test8
|
https://api.github.com/repos/org-acme/test8
|
opened
|
:robot: Automatic branch protection - main
|
automation security checks
|
:shield: The following protections were added to the main branch of the repository:
- Require code owner review
- Require approving review count
- Enforce admins
@jlmayorga
For more information, please see: https://github.com/org-acme/repo-manager
|
1.0
|
:robot: Automatic branch protection - main - :shield: The following protections were added to the main branch of the repository:
- Require code owner review
- Require approving review count
- Enforce admins
@jlmayorga
For more information, please see: https://github.com/org-acme/repo-manager
|
non_process
|
robot automatic branch protection main shield the following protections were added to the main branch of the repository require code owner review require approving review count enforce admins jlmayorga for more information please see
| 0
|
4,606
| 7,452,711,340
|
IssuesEvent
|
2018-03-29 09:19:50
|
w3c/webauthn
|
https://api.github.com/repos/w3c/webauthn
|
closed
|
Images not visible in published spec
|
type:editorial type:process
|
on https://www.w3.org/TR/webauthn/
the image URL in the document is (for example) "images/fido-signature-formats-figure1.svg"
The image file at this time exists under https://www.w3.org/TR/webauthn/fido-signature-formats-figure1.svg
|
1.0
|
Images not visible in published spec - on https://www.w3.org/TR/webauthn/
the image URL in the document is (for example) "images/fido-signature-formats-figure1.svg"
The image file at this time exists under https://www.w3.org/TR/webauthn/fido-signature-formats-figure1.svg
|
process
|
images not visible in published spec on the image url in the document is for example images fido signature formats svg the image file at this time exists under
| 1
|
7,680
| 4,039,300,551
|
IssuesEvent
|
2016-05-20 03:42:24
|
servo/servo
|
https://api.github.com/repos/servo/servo
|
closed
|
Can we not use automatic gitignore generation in fontconfig?
|
A-build A-platform/fonts C-is-this-done I-papercut
|
(There is no issue tracker for [mozilla-servo/fontconfig](https://github.com/mozilla-servo/fontconfig), so I'm posting here.
Fontconfig uses [automatic gitignore generation](https://github.com/mozilla-servo/fontconfig/blob/master/git.mk). However, this means that the submodule for fontconfig turns up in `git status`, and a `git diff` shows that it's been made a "dirty" submodule.
Could we go back to a static gitignore? Alternatively, can we put the generated object files in build/ or a similar place?
|
1.0
|
Can we not use automatic gitignore generation in fontconfig? - (There is no issue tracker for [mozilla-servo/fontconfig](https://github.com/mozilla-servo/fontconfig), so I'm posting here.
Fontconfig uses [automatic gitignore generation](https://github.com/mozilla-servo/fontconfig/blob/master/git.mk). However, this means that the submodule for fontconfig turns up in `git status`, and a `git diff` shows that it's been made a "dirty" submodule.
Could we go back to a static gitignore? Alternatively, can we put the generated object files in build/ or a similar place?
|
non_process
|
can we not use automatic gitignore generation in fontconfig there is no issue tracker for so i m posting here fontconfig uses however this means that the submodule for fontconfig turns up in git status and a git diff shows that it s been made a dirty submodule could we go back to a static gitignore alternatively can we put the generated object files in build or a similar place
| 0
|
158,023
| 6,020,558,446
|
IssuesEvent
|
2017-06-07 16:42:31
|
rism-ch/verovio
|
https://api.github.com/repos/rism-ch/verovio
|
closed
|
Out-of-order cross-staff chord notes
|
enhancement low priority
|
Here is a funny case:
The red note is a cross-staff note which belongs to the chord on the staff above.
<img width="354" alt="screen shot 2017-04-28 at 9 42 14 am" src="https://cloud.githubusercontent.com/assets/3487289/25538344/2fdce3ea-2bf7-11e7-85a4-b402e43cd8f0.png">
The current verovio algorithm decides that the stem should go up on the second chord. Then it searches for the lowest pitch in the chord, and assigns that to be the note that the stem is attached to. However in this case the lowest pitch is not the lowest visual note in the chord, which is the G3 since it is on the staff below.
MEI data:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="http://music-encoding.org/schema/3.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="http://music-encoding.org/schema/3.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="3.0.0">
<meiHead>
<fileDesc>
<titleStmt>
<title />
</titleStmt>
<pubStmt />
</fileDesc>
<encodingDesc>
<appInfo>
<application isodate="2017-04-28T09:43:05" version="0.9.14-dev-18df18e">
<name>Verovio</name>
<p>Transcoded from Humdrum</p>
</application>
</appInfo>
</encodingDesc>
<workDesc>
<work>
<titleStmt>
<title />
</titleStmt>
</work>
</workDesc>
</meiHead>
<music>
<body>
<mdiv>
<score>
<scoreDef xml:id="scoredef-0000000224276653">
<staffGrp xml:id="staffgrp-0000001397772765" symbol="brace" barthru="true">
<staffDef xml:id="staffdef-0000001043246822" n="1" clef.shape="G" clef.line="2" key.sig="1f" lines="5" />
<staffDef xml:id="staffdef-0000000635580091" n="2" clef.shape="F" clef.line="4" key.sig="1f" lines="5" />
</staffGrp>
</scoreDef>
<section xml:id="section-0000001863527059">
<measure xml:id="measure-L3" n="1">
<staff xml:id="staff-L3F2N1" n="1">
<layer xml:id="layer-L3F2N1" n="1">
<chord xml:id="chord-L4F2" dur="2">
<note xml:id="note-L4F2S1" oct="4" pname="d">
<accid xml:id="accid-L4F2S1" accid.ges="n" />
</note>
<note xml:id="note-L4F2S2" staff="2" oct="3" pname="b">
<accid xml:id="accid-L4F2S2" accid.ges="f" />
</note>
<note xml:id="note-L4F2S3" staff="2" oct="3" pname="f">
<accid xml:id="accid-L4F2S3" accid.ges="n" />
</note>
</chord>
<chord xml:id="chord-L5F2" dur="2">
<note xml:id="note-L5F2S1" oct="4" pname="c">
<accid xml:id="accid-L5F2S1" accid.ges="n" />
</note>
<note xml:id="note-L5F2S2" staff="2" oct="3" pname="g">
<accid xml:id="accid-L5F2S2" accid.ges="n" />
</note>
<note xml:id="note-L5F2S3" oct="3" pname="e">
<accid xml:id="accid-L5F2S3" accid.ges="n" />
</note>
</chord>
</layer>
</staff>
<staff xml:id="staff-L3F1N1" n="2">
<layer xml:id="layer-L3F1N1" n="1">
<note xml:id="note-L4F1" dur="1" oct="2" pname="c" stem.dir="down">
<accid xml:id="accid-L4F1" accid.ges="n" />
</note>
</layer>
</staff>
</measure>
</section>
</score>
</mdiv>
</body>
</music>
</mei>
```
|
1.0
|
Out-of-order cross-staff chord notes - Here is a funny case:
The red note is a cross-staff note which belongs to the chord on the staff above.
<img width="354" alt="screen shot 2017-04-28 at 9 42 14 am" src="https://cloud.githubusercontent.com/assets/3487289/25538344/2fdce3ea-2bf7-11e7-85a4-b402e43cd8f0.png">
The current verovio algorithm decides that the stem should go up on the second chord. Then it searches for the lowest pitch in the chord, and assigns that to be the note that the stem is attached to. However in this case the lowest pitch is not the lowest visual note in the chord, which is the G3 since it is on the staff below.
MEI data:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-model href="http://music-encoding.org/schema/3.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?>
<?xml-model href="http://music-encoding.org/schema/3.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?>
<mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="3.0.0">
<meiHead>
<fileDesc>
<titleStmt>
<title />
</titleStmt>
<pubStmt />
</fileDesc>
<encodingDesc>
<appInfo>
<application isodate="2017-04-28T09:43:05" version="0.9.14-dev-18df18e">
<name>Verovio</name>
<p>Transcoded from Humdrum</p>
</application>
</appInfo>
</encodingDesc>
<workDesc>
<work>
<titleStmt>
<title />
</titleStmt>
</work>
</workDesc>
</meiHead>
<music>
<body>
<mdiv>
<score>
<scoreDef xml:id="scoredef-0000000224276653">
<staffGrp xml:id="staffgrp-0000001397772765" symbol="brace" barthru="true">
<staffDef xml:id="staffdef-0000001043246822" n="1" clef.shape="G" clef.line="2" key.sig="1f" lines="5" />
<staffDef xml:id="staffdef-0000000635580091" n="2" clef.shape="F" clef.line="4" key.sig="1f" lines="5" />
</staffGrp>
</scoreDef>
<section xml:id="section-0000001863527059">
<measure xml:id="measure-L3" n="1">
<staff xml:id="staff-L3F2N1" n="1">
<layer xml:id="layer-L3F2N1" n="1">
<chord xml:id="chord-L4F2" dur="2">
<note xml:id="note-L4F2S1" oct="4" pname="d">
<accid xml:id="accid-L4F2S1" accid.ges="n" />
</note>
<note xml:id="note-L4F2S2" staff="2" oct="3" pname="b">
<accid xml:id="accid-L4F2S2" accid.ges="f" />
</note>
<note xml:id="note-L4F2S3" staff="2" oct="3" pname="f">
<accid xml:id="accid-L4F2S3" accid.ges="n" />
</note>
</chord>
<chord xml:id="chord-L5F2" dur="2">
<note xml:id="note-L5F2S1" oct="4" pname="c">
<accid xml:id="accid-L5F2S1" accid.ges="n" />
</note>
<note xml:id="note-L5F2S2" staff="2" oct="3" pname="g">
<accid xml:id="accid-L5F2S2" accid.ges="n" />
</note>
<note xml:id="note-L5F2S3" oct="3" pname="e">
<accid xml:id="accid-L5F2S3" accid.ges="n" />
</note>
</chord>
</layer>
</staff>
<staff xml:id="staff-L3F1N1" n="2">
<layer xml:id="layer-L3F1N1" n="1">
<note xml:id="note-L4F1" dur="1" oct="2" pname="c" stem.dir="down">
<accid xml:id="accid-L4F1" accid.ges="n" />
</note>
</layer>
</staff>
</measure>
</section>
</score>
</mdiv>
</body>
</music>
</mei>
```
|
non_process
|
out of order cross staff chord notes here is a funny case the red note is a cross staff note which belongs to the chord on the staff above img width alt screen shot at am src the current verovio algorithm decides that the stem should go up on the second chord then it searches for the lowest pitch in the chord and assigns that to be the note that the stem is attached to however in this case the lowest pitch is not the lowest visual note in the chord which is the since it is on the staff below mei data xml xml model href type application xml schematypens xml model href type application xml schematypens verovio transcoded from humdrum
| 0
|
21,316
| 28,565,695,924
|
IssuesEvent
|
2023-04-21 01:47:19
|
GoogleCloudPlatform/cloud-ops-sandbox
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-ops-sandbox
|
opened
|
Automate release process
|
type: process priority: p3
|
### Description
Evaluate [release-please] bot vs. ad-hoc implementation to automate the release of the Sandbox versions.
### Requirements
The release process should be automatically triggered on event of the tag placement. For the current version X.Y.Z, it should support both, a new version release i.e. X.Y.(Z+1) for fixes only and X.(Y+1).0 for features and fixes and a patch of the previous release i.e. X.(Y-1).(Z'+1) where X.(Y-1).Z' is the last version of the previous release.
The main branch should be synchronized with the latest version of the new release.
Each release has to have a dedicated branch which should be always up to date with the latest version for that release.
Additional operations such as updating `provisioning/version.txt` and version numbers in README.md files that are used for the walkthrough URL should be updated as well.
[release-please]: https://github.com/googleapis/repo-automation-bots/tree/main/packages/release-please
|
1.0
|
Automate release process - ### Description
Evaluate [release-please] bot vs. ad-hoc implementation to automate the release of the Sandbox versions.
### Requirements
The release process should be automatically triggered on event of the tag placement. For the current version X.Y.Z, it should support both, a new version release i.e. X.Y.(Z+1) for fixes only and X.(Y+1).0 for features and fixes and a patch of the previous release i.e. X.(Y-1).(Z'+1) where X.(Y-1).Z' is the last version of the previous release.
The main branch should be synchronized with the latest version of the new release.
Each release has to have a dedicated branch which should be always up to date with the latest version for that release.
Additional operations such as updating `provisioning/version.txt` and version numbers in README.md files that are used for the walkthrough URL should be updated as well.
[release-please]: https://github.com/googleapis/repo-automation-bots/tree/main/packages/release-please
|
process
|
automate release process description evaluate bot vs ad hoc implementation to automate the release of the sandbox versions requirements the release process should be automatically triggered on event of the tag placement for the current version x y z it should support both a new version release i e x y z for fixes only and x y for features and fixes and a patch of the previous release i e x y z where x y z is the last version of the previous release the main branch should be synchronized with the latest version of the new release each release has to have a dedicated branch which should be always up to date with the latest version for that release additional operations such as updating provisioning version txt and version numbers in readme md files that are used for the walkthrough url should be updated as well
| 1
|
18,399
| 24,535,002,517
|
IssuesEvent
|
2022-10-11 19:53:12
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
Release 4.2.3 - October 2022
|
P1 type: process release team-OSS
|
# Status of Bazel 4.2.3
- Expected release date: Next week
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/44)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 4.2, simply send a PR against the `release-4.2.3` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
1.0
|
Release 4.2.3 - October 2022 - # Status of Bazel 4.2.3
- Expected release date: Next week
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/44)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 4.2, simply send a PR against the `release-4.2.3` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
|
process
|
release october status of bazel expected release date next week to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
| 1
|
20,312
| 26,955,857,437
|
IssuesEvent
|
2023-02-08 14:52:53
|
UserOfficeProject/user-office-project-issue-tracker
|
https://api.github.com/repos/UserOfficeProject/user-office-project-issue-tracker
|
closed
|
HPL submitters to have clear descriptions associated to each access route (new continuation, direct etc)
|
origin: project type: process status: clarify area: uop/stfc new scope
|
David Carroll from HPL reported that there have been some issues with users submitting proposals through the wrong route. Feedback from users was that the descriptions are not entirely clear. George in support had to rectify the issue. To help prevent this issue occurring again and improve the user experience we should consider this requirement for HPL release. Going forward the round manager will be able to make changes when required. Review with HPL core group once we are ready to start engaging.
To note, we currently have this requirement for LSF.
@TickleThePanda
|
1.0
|
HPL submitters to have clear descriptions associated to each access route (new continuation, direct etc) - David Carroll from HPL reported that there have been some issues with users submitting proposals through the wrong route. Feedback from users was that the descriptions are not entirely clear. George in support had to rectify the issue. To help prevent this issue occurring again and improve the user experience we should consider this requirement for HPL release. Going forward the round manager will be able to make changes when required. Review with HPL core group once we are ready to start engaging.
To note, we currently have this requirement for LSF.
@TickleThePanda
|
process
|
hpl submitters to have clear descriptions associated to each access route new continuation direct etc david carroll from hpl reported that there have been some issues with users submitting proposals through the wrong route feedback from users was that the descriptions are not entirely clear george in support had to rectify the issue to help prevent this issue occurring again and improve the user experience we should consider this requirement for hpl release going forward the round manager will be able to make changes when required review with hpl core group once we are ready to start engaging to note we currently have this requirement for lsf ticklethepanda
| 1
|
1,658
| 4,288,638,334
|
IssuesEvent
|
2016-07-17 15:58:47
|
kerubistan/kerub
|
https://api.github.com/repos/kerubistan/kerub
|
opened
|
add support for BSD lvm
|
component:data processing component:scheduling
|
lvm distributed with OpenBSD does not have exactly the same commands, vgs, lvs and pvs seem to have the same output format but all needs to be invoked with `lvm`.
|
1.0
|
add support for BSD lvm - lvm distributed with OpenBSD does not have exactly the same commands, vgs, lvs and pvs seem to have the same output format but all needs to be invoked with `lvm`.
|
process
|
add support for bsd lvm lvm distributed with openbsd does not have exactly the same commands vgs lvs and pvs seem to have the same output format but all needs to be invoked with lvm
| 1
|
6,929
| 10,092,479,586
|
IssuesEvent
|
2019-07-26 16:47:47
|
CGAL/cgal
|
https://api.github.com/repos/CGAL/cgal
|
closed
|
Link error with CGAL 4.14 due to some 'inline' missing in write_ply_points.h
|
Pkg::Point_set_processing_3 bug
|
## Issue Details
Link error `multiple definition of void CGAL::internal::PLY::property_header_type<char>(std::ostream&)` when `write_ply_points.h` is included by several compilation unit.
This error is due to some `inline` missing in `write_ply_points.h`.
Suggested fix: [write_ply_points.zip](https://github.com/CGAL/cgal/files/3396344/write_ply_points.zip)
## Environment
* Operating system (Windows/Mac/Linux, 32/64 bits): Linux
* Compiler: gcc version 7.4.0
* Release or debug mode: debug & release
* Specific flags used (if any):
* CGAL version: 4.14 release
* Boost version:
* Other libraries versions if used (Eigen, TBB, etc.):
|
1.0
|
Link error with CGAL 4.14 due to some 'inline' missing in write_ply_points.h - ## Issue Details
Link error `multiple definition of void CGAL::internal::PLY::property_header_type<char>(std::ostream&)` when `write_ply_points.h` is included by several compilation unit.
This error is due to some `inline` missing in `write_ply_points.h`.
Suggested fix: [write_ply_points.zip](https://github.com/CGAL/cgal/files/3396344/write_ply_points.zip)
## Environment
* Operating system (Windows/Mac/Linux, 32/64 bits): Linux
* Compiler: gcc version 7.4.0
* Release or debug mode: debug & release
* Specific flags used (if any):
* CGAL version: 4.14 release
* Boost version:
* Other libraries versions if used (Eigen, TBB, etc.):
|
process
|
link error with cgal due to some inline missing in write ply points h issue details link error multiple definition of void cgal internal ply property header type std ostream when write ply points h is included by several compilation unit this error is due to some inline missing in write ply points h suggested fix environment operating system windows mac linux bits linux compiler gcc version release or debug mode debug release specific flags used if any cgal version release boost version other libraries versions if used eigen tbb etc
| 1
|
46,485
| 13,170,486,183
|
IssuesEvent
|
2020-08-11 15:13:55
|
srubianof/Angular-Spring5
|
https://api.github.com/repos/srubianof/Angular-Spring5
|
closed
|
CVE-2020-13822 (High) detected in elliptic-6.5.2.tgz
|
security vulnerability
|
## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Angular-Spring5/cliente-app/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Angular-Spring5/cliente-app/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.900.5.tgz (Root Library)
- webpack-4.41.2.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srubianof/Angular-Spring5/commit/5e7f2748c1f5f1339936536bdffa3aff74e7f870">5e7f2748c1f5f1339936536bdffa3aff74e7f870</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-13822 (High) detected in elliptic-6.5.2.tgz - ## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.2.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/Angular-Spring5/cliente-app/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/Angular-Spring5/cliente-app/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.900.5.tgz (Root Library)
- webpack-4.41.2.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.5.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/srubianof/Angular-Spring5/commit/5e7f2748c1f5f1339936536bdffa3aff74e7f870">5e7f2748c1f5f1339936536bdffa3aff74e7f870</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in elliptic tgz cve high severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tmp ws scm angular cliente app package json path to vulnerable library tmp ws scm angular cliente app node modules elliptic package json dependency hierarchy build angular tgz root library webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href vulnerability details the elliptic package for node js allows ecdsa signature malleability via variations in encoding leading bytes or integer overflows this could conceivably have a security relevant impact if an application relied on a single canonical signature publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
| 0
|
334,624
| 10,142,906,896
|
IssuesEvent
|
2019-08-04 06:50:28
|
delaford/game
|
https://api.github.com/repos/delaford/game
|
closed
|
Picking up dropped items does not happen
|
backend bug canvas core engine high priority
|
<!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
1. Drop an item.
2. Pick it up.
3. Observe that you went to place of dropped item.
4. Observe that you did NOT pick up the item despite the item disappearing from the floor.
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. Drop an item.
2. Right-click, `Pick up ITEM`
**What is the expected behavior?**
For the item to be slotted into your inventory -- if allowed.
**Additional context**
|
1.0
|
Picking up dropped items does not happen - <!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
1. Drop an item.
2. Pick it up.
3. Observe that you went to place of dropped item.
4. Observe that you did NOT pick up the item despite the item disappearing from the floor.
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. Drop an item.
2. Right-click, `Pick up ITEM`
**What is the expected behavior?**
For the item to be slotted into your inventory -- if allowed.
**Additional context**
|
non_process
|
picking up dropped items does not happen what is the current behavior drop an item pick it up observe that you went to place of dropped item observe that you did not pick up the item despite the item disappearing from the floor if the current behavior is a bug please provide the exact steps to reproduce drop an item right click pick up item what is the expected behavior for the item to be slotted into your inventory if allowed additional context
| 0
|
75,429
| 15,398,211,435
|
IssuesEvent
|
2021-03-03 23:33:30
|
kadirselcuk/vue-cli-plugin-apollo
|
https://api.github.com/repos/kadirselcuk/vue-cli-plugin-apollo
|
opened
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz
|
security vulnerability
|
## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: vue-cli-plugin-apollo/client-addon/package.json</p>
<p>Path to vulnerable library: vue-cli-plugin-apollo/client-addon/node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- cli-ui-3.12.1.tgz (Root Library)
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/vue-cli-plugin-apollo/commit/1914e5bad2511a5ddaffd9cb7218906a52c7548f">1914e5bad2511a5ddaffd9cb7218906a52c7548f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7789 (Medium) detected in node-notifier-5.4.3.tgz - ## CVE-2020-7789 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-notifier-5.4.3.tgz</b></p></summary>
<p>A Node.js module for sending notifications on native Mac, Windows (post and pre 8) and Linux (or Growl as fallback)</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz">https://registry.npmjs.org/node-notifier/-/node-notifier-5.4.3.tgz</a></p>
<p>Path to dependency file: vue-cli-plugin-apollo/client-addon/package.json</p>
<p>Path to vulnerable library: vue-cli-plugin-apollo/client-addon/node_modules/node-notifier/package.json</p>
<p>
Dependency Hierarchy:
- cli-ui-3.12.1.tgz (Root Library)
- :x: **node-notifier-5.4.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/vue-cli-plugin-apollo/commit/1914e5bad2511a5ddaffd9cb7218906a52c7548f">1914e5bad2511a5ddaffd9cb7218906a52c7548f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-notifier before 9.0.0. It allows an attacker to run arbitrary commands on Linux machines due to the options params not being sanitised when being passed an array.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7789>CVE-2020-7789</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7789</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: 9.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in node notifier tgz cve medium severity vulnerability vulnerable library node notifier tgz a node js module for sending notifications on native mac windows post and pre and linux or growl as fallback library home page a href path to dependency file vue cli plugin apollo client addon package json path to vulnerable library vue cli plugin apollo client addon node modules node notifier package json dependency hierarchy cli ui tgz root library x node notifier tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package node notifier before it allows an attacker to run arbitrary commands on linux machines due to the options params not being sanitised when being passed an array publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
698,813
| 23,992,221,925
|
IssuesEvent
|
2022-09-14 02:57:52
|
Gogo1951/Groupie
|
https://api.github.com/repos/Gogo1951/Groupie
|
closed
|
Temple ST vs BT
|
Priority - 3 Average Type - Enhancement
|
https://discord.com/channels/1009139814836207656/1017861992045871194/1019413928415084594
Need to take "Temple" out of the scan. Just use Black and Sunken. I think...
|
1.0
|
Temple ST vs BT - https://discord.com/channels/1009139814836207656/1017861992045871194/1019413928415084594
Need to take "Temple" out of the scan. Just use Black and Sunken. I think...
|
non_process
|
temple st vs bt need to take temple out of the scan just use black and sunken i think
| 0
|
15,056
| 18,763,116,647
|
IssuesEvent
|
2021-11-05 19:04:12
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
opened
|
PH - solid fuels - %O2 / XA
|
Process Heating
|
revisions to backend and desktop interaction mean that the calculation for excess air or %O2 is no longer shown.
can put it back so it looks like gas fuels (at the bottom) but won't be coming as part of the suite results.
Does not need to be in compressed air release.
|
1.0
|
PH - solid fuels - %O2 / XA - revisions to backend and desktop interaction mean that the calculation for excess air or %O2 is no longer shown.
can put it back so it looks like gas fuels (at the bottom) but won't be coming as part of the suite results.
Does not need to be in compressed air release.
|
process
|
ph solid fuels xa revisions to backend and desktop interaction mean that the calculation for excess air or is no longer shown can put it back so it looks like gas fuels at the bottom but won t be coming as part of the suite results does not need to be in compressed air release
| 1
|
12,033
| 14,738,616,927
|
IssuesEvent
|
2021-01-07 05:16:22
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
1837 staged fee was done twice
|
anc-ops anc-process anp-important ant-bug ant-support
|
In GitLab by @kdjstudios on Jun 28, 2018, 10:50
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-28-31793/conversation
**Server:** External (All)
**Client/Site:** Keener (All)
**Account:** Multi (1837)
**Issue:**
I just noticed on account 1837 that the staged fee was generated twice.
I did the stage fee and then generated an invoice between the billing cycles. Invoice 28310.
I just noticed that the same amount was generated on another invoice 28797 when I did the actual billing for that billing cycle.
I think I have seen this happen before that I generate an invoice between billing cycles and then whatever that staged fee was occurs on the regular invoice for that billing cycle.
Another account that this happened on was 6784. An invoice was generated between billing cycles from a staged fee but then when the billing was done, it created another invoice with that staged fee amount that had already been processed / used.
I do quite a few staged fees between billing cycles and generate those invoices so now I am concerned that these staged fees are being used twice.
|
1.0
|
1837 staged fee was done twice - In GitLab by @kdjstudios on Jun 28, 2018, 10:50
**Submitted by:** Gaylan Garrett <gaylan@keenercom.net>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-28-31793/conversation
**Server:** External (All)
**Client/Site:** Keener (All)
**Account:** Multi (1837)
**Issue:**
I just noticed on account 1837 that the staged fee was generated twice.
I did the stage fee and then generated an invoice between the billing cycles. Invoice 28310.
I just noticed that the same amount was generated on another invoice 28797 when I did the actual billing for that billing cycle.
I think I have seen this happen before that I generate an invoice between billing cycles and then whatever that staged fee was occurs on the regular invoice for that billing cycle.
Another account that this happened on was 6784. An invoice was generated between billing cycles from a staged fee but then when the billing was done, it created another invoice with that staged fee amount that had already been processed / used.
I do quite a few staged fees between billing cycles and generate those invoices so now I am concerned that these staged fees are being used twice.
|
process
|
staged fee was done twice in gitlab by kdjstudios on jun submitted by gaylan garrett helpdesk server external all client site keener all account multi issue i just noticed on account that the staged fee was generated twice i did the stage fee and then generated an invoice between the billing cycles invoice i just noticed that the same amount was generated on another invoice when i did the actual billing for that billing cycle i think i have seen this happen before that i generate an invoice between billing cycles and then whatever that staged fee was occurs on the regular invoice for that billing cycle another account that this happened on was an invoice was generated between billing cycles from a staged fee but then when the billing was done it created another invoice with that staged fee amount that had already been processed used i do quite a few staged fees between billing cycles and generate those invoices so now i am concerned that these staged fees are being used twice
| 1
|
18,559
| 24,555,562,128
|
IssuesEvent
|
2022-10-12 15:36:26
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[iOS] Notifications > Participant is not entering into the mobile app in the following scenario
|
Bug P1 iOS Process: Fixed Process: Tested dev
|
Kill the app > Add activities and publish > After getting notification > click on the notification > user will be staying in the Mobile home screen > Not entering into the app
|
2.0
|
[iOS] Notifications > Participant is not entering into the mobile app in the following scenario - Kill the app > Add activities and publish > After getting notification > click on the notification > user will be staying in the Mobile home screen > Not entering into the app
|
process
|
notifications participant is not entering into the mobile app in the following scenario kill the app add activities and publish after getting notification click on the notification user will be staying in the mobile home screen not entering into the app
| 1
|
104,992
| 16,623,608,658
|
IssuesEvent
|
2021-06-03 06:44:02
|
Thanraj/OpenSSL_1.0.1
|
https://api.github.com/repos/Thanraj/OpenSSL_1.0.1
|
opened
|
CVE-2014-3509 (Medium) detected in opensslOpenSSL_1_0_1
|
security vulnerability
|
## CVE-2014-3509 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1</b></p></summary>
<p>
<p>Akamai fork of openssl master.</p>
<p>Library home page: <a href=https://github.com/akamai/openssl.git>https://github.com/akamai/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/OpenSSL_1.0.1/commit/f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc">f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/t1_lib.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/t1_lib.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Race condition in the ssl_parse_serverhello_tlsext function in t1_lib.c in OpenSSL 1.0.0 before 1.0.0n and 1.0.1 before 1.0.1i, when multithreading and session resumption are used, allows remote SSL servers to cause a denial of service (memory overwrite and client application crash) or possibly have unspecified other impact by sending Elliptic Curve (EC) Supported Point Formats Extension data.
<p>Publish Date: 2014-08-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-3509>CVE-2014-3509</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-3509">https://nvd.nist.gov/vuln/detail/CVE-2014-3509</a></p>
<p>Release Date: 2014-08-13</p>
<p>Fix Resolution: 1.0.0n,1.0.1i</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2014-3509 (Medium) detected in opensslOpenSSL_1_0_1 - ## CVE-2014-3509 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opensslOpenSSL_1_0_1</b></p></summary>
<p>
<p>Akamai fork of openssl master.</p>
<p>Library home page: <a href=https://github.com/akamai/openssl.git>https://github.com/akamai/openssl.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Thanraj/OpenSSL_1.0.1/commit/f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc">f1fe40536a9d3c961cc1415e9dd6d4fd002b61dc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/t1_lib.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>OpenSSL_1.0.1/ssl/t1_lib.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Race condition in the ssl_parse_serverhello_tlsext function in t1_lib.c in OpenSSL 1.0.0 before 1.0.0n and 1.0.1 before 1.0.1i, when multithreading and session resumption are used, allows remote SSL servers to cause a denial of service (memory overwrite and client application crash) or possibly have unspecified other impact by sending Elliptic Curve (EC) Supported Point Formats Extension data.
<p>Publish Date: 2014-08-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-3509>CVE-2014-3509</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2014-3509">https://nvd.nist.gov/vuln/detail/CVE-2014-3509</a></p>
<p>Release Date: 2014-08-13</p>
<p>Fix Resolution: 1.0.0n,1.0.1i</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in opensslopenssl cve medium severity vulnerability vulnerable library opensslopenssl akamai fork of openssl master library home page a href found in head commit a href found in base branch master vulnerable source files openssl ssl lib c openssl ssl lib c vulnerability details race condition in the ssl parse serverhello tlsext function in lib c in openssl before and before when multithreading and session resumption are used allows remote ssl servers to cause a denial of service memory overwrite and client application crash or possibly have unspecified other impact by sending elliptic curve ec supported point formats extension data publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
204,075
| 7,080,178,330
|
IssuesEvent
|
2018-01-10 12:32:16
|
opencollective/opencollective
|
https://api.github.com/repos/opencollective/opencollective
|
opened
|
Editing Event data crashes
|
bug priority
|
Reported by user: I tried changing the end time of an event and the whole page turned into this and I lost the event I was creating https://media.aaronpk.com/Screen-Shot-2018-01-09-17-10-21-j8r8uf37fF.jpg. it happens if I put the cursor in the "hour" slot of the time and press a number. I'm guessing it's because the date it's trying to parse is something like "2018-10-02 39:00:00" (the time was "9:00" and I was trying to replace the 9 with a 3.
|
1.0
|
Editing Event data crashes - Reported by user: I tried changing the end time of an event and the whole page turned into this and I lost the event I was creating https://media.aaronpk.com/Screen-Shot-2018-01-09-17-10-21-j8r8uf37fF.jpg. it happens if I put the cursor in the "hour" slot of the time and press a number. I'm guessing it's because the date it's trying to parse is something like "2018-10-02 39:00:00" (the time was "9:00" and I was trying to replace the 9 with a 3.
|
non_process
|
editing event data crashes reported by user i tried changing the end time of an event and the whole page turned into this and i lost the event i was creating it happens if i put the cursor in the hour slot of the time and press a number i m guessing it s because the date it s trying to parse is something like the time was and i was trying to replace the with a
| 0
|
10,105
| 13,044,162,144
|
IssuesEvent
|
2020-07-29 03:47:30
|
tikv/tikv
|
https://api.github.com/repos/tikv/tikv
|
closed
|
UCP: Migrate scalar function `CurrentTime1Arg` from TiDB
|
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
|
## Description
Port the scalar function `CurrentTime1Arg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
2.0
|
UCP: Migrate scalar function `CurrentTime1Arg` from TiDB -
## Description
Port the scalar function `CurrentTime1Arg` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @sticnarf
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
|
process
|
ucp migrate scalar function from tidb description port the scalar function from tidb to coprocessor score mentor s sticnarf recommended skills rust programming learning materials already implemented expressions ported from tidb
| 1
|
1,455
| 4,029,641,198
|
IssuesEvent
|
2016-05-18 11:30:03
|
openvstorage/openvstorage-health-check
|
https://api.github.com/repos/openvstorage/openvstorage-health-check
|
closed
|
failed to retrieve config from etcd during healthcheck
|
priority_critical process_wontfix type_bug
|
```
[INFO] Checking consistency of volumedriver vs. ovsdb for vPool 'beops-vpool-b':
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 85, in check_attended
return HealthCheckController.execute_check()
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 132, in execute_check
HealthCheckController.check_openvstorage()
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 182, in check_openvstorage
ovs.check_model_consistency()
File "/opt/OpenvStorage/ovs/extensions/healthcheck/openvstorage/openvstoragecluster_health_check.py", line 887, in check_model_consistency
voldrv_client = src.LocalStorageRouterClient(config_file)
RuntimeError: Failed to retrieve config from etcd
```
|
1.0
|
failed to retrieve config from etcd during healthcheck - ```
[INFO] Checking consistency of volumedriver vs. ovsdb for vPool 'beops-vpool-b':
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 85, in check_attended
return HealthCheckController.execute_check()
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 132, in execute_check
HealthCheckController.check_openvstorage()
File "/usr/lib/python2.7/dist-packages/celery/local.py", line 167, in <lambda>
__call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
File "/usr/lib/python2.7/dist-packages/celery/app/task.py", line 420, in __call__
return self.run(*args, **kwargs)
File "healthcheck.py", line 182, in check_openvstorage
ovs.check_model_consistency()
File "/opt/OpenvStorage/ovs/extensions/healthcheck/openvstorage/openvstoragecluster_health_check.py", line 887, in check_model_consistency
voldrv_client = src.LocalStorageRouterClient(config_file)
RuntimeError: Failed to retrieve config from etcd
```
|
process
|
failed to retrieve config from etcd during healthcheck checking consistency of volumedriver vs ovsdb for vpool beops vpool b traceback most recent call last file line in file usr lib dist packages celery local py line in call lambda x a kw x get current object a kw file usr lib dist packages celery app task py line in call return self run args kwargs file healthcheck py line in check attended return healthcheckcontroller execute check file usr lib dist packages celery local py line in call lambda x a kw x get current object a kw file usr lib dist packages celery app task py line in call return self run args kwargs file healthcheck py line in execute check healthcheckcontroller check openvstorage file usr lib dist packages celery local py line in call lambda x a kw x get current object a kw file usr lib dist packages celery app task py line in call return self run args kwargs file healthcheck py line in check openvstorage ovs check model consistency file opt openvstorage ovs extensions healthcheck openvstorage openvstoragecluster health check py line in check model consistency voldrv client src localstoragerouterclient config file runtimeerror failed to retrieve config from etcd
| 1
|
586,512
| 17,579,510,794
|
IssuesEvent
|
2021-08-16 04:30:19
|
l4ssc/Polyphony
|
https://api.github.com/repos/l4ssc/Polyphony
|
closed
|
Industrial Warehouses
|
Priority: Major Release: 1.0.0 Type: Enhancement Status: In Progress
|
Will be a custom mod for in-world storage of blocks and items, inspired by Immersive Engineering and Impractical Storage
|
1.0
|
Industrial Warehouses - Will be a custom mod for in-world storage of blocks and items, inspired by Immersive Engineering and Impractical Storage
|
non_process
|
industrial warehouses will be a custom mod for in world storage of blocks and items inspired by immersive engineering and impractical storage
| 0
|
17,021
| 22,391,522,355
|
IssuesEvent
|
2022-06-17 08:11:48
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Outdated list of architectures for `process.arch`?
|
doc process
|
### Affected URL(s)
https://nodejs.org/api/process.html
### Description of the problem
I'm trying to figure out which architectures node supports since I publish binary executables [esbuild](https://esbuild.github.io/) for various platforms. [The documentation](https://nodejs.org/api/process.html) says this:
> ## `process.arch`
>
> The operating system CPU architecture for which the Node.js binary was compiled. Possible values are: `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, `'x32'`, and `'x64'`.
However, [the code](https://github.com/nodejs/node/blob/bd86e5186a33803aa9283b9a4c6946da33b67511/configure.py#L49-L51) says this:
> ```py
> valid_arch = ('arm', 'arm64', 'ia32', 'mips', 'mipsel', 'mips64el', 'ppc',
> 'ppc64', 'x32','x64', 'x86', 'x86_64', 's390x', 'riscv64',
> 'loong64')
> ```
These are the differences:
```patch
arm
arm64
ia32
+loong64
mips
+mips64el
mipsel
ppc
ppc64
+riscv64
-s390
s390x
x32
x64
+x86_64
+x86
```
Is the documentation outdated? Are all architectures in that code officially supported by node, or only some of them?
|
1.0
|
Outdated list of architectures for `process.arch`? - ### Affected URL(s)
https://nodejs.org/api/process.html
### Description of the problem
I'm trying to figure out which architectures node supports since I publish binary executables [esbuild](https://esbuild.github.io/) for various platforms. [The documentation](https://nodejs.org/api/process.html) says this:
> ## `process.arch`
>
> The operating system CPU architecture for which the Node.js binary was compiled. Possible values are: `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, `'x32'`, and `'x64'`.
However, [the code](https://github.com/nodejs/node/blob/bd86e5186a33803aa9283b9a4c6946da33b67511/configure.py#L49-L51) says this:
> ```py
> valid_arch = ('arm', 'arm64', 'ia32', 'mips', 'mipsel', 'mips64el', 'ppc',
> 'ppc64', 'x32','x64', 'x86', 'x86_64', 's390x', 'riscv64',
> 'loong64')
> ```
These are the differences:
```patch
arm
arm64
ia32
+loong64
mips
+mips64el
mipsel
ppc
ppc64
+riscv64
-s390
s390x
x32
x64
+x86_64
+x86
```
Is the documentation outdated? Are all architectures in that code officially supported by node, or only some of them?
|
process
|
outdated list of architectures for process arch affected url s description of the problem i m trying to figure out which architectures node supports since i publish binary executables for various platforms says this process arch the operating system cpu architecture for which the node js binary was compiled possible values are arm mips mipsel ppc and however says this py valid arch arm mips mipsel ppc these are the differences patch arm mips mipsel ppc is the documentation outdated are all architectures in that code officially supported by node or only some of them
| 1
|
21,081
| 28,030,857,502
|
IssuesEvent
|
2023-03-28 12:18:23
|
sparc4-dev/astropop
|
https://api.github.com/repos/sparc4-dev/astropop
|
closed
|
Registration with `cross-correlation` method fails.
|
image-processing waiting
|
Due to https://github.com/scikit-image/scikit-image/issues/6456, registration with `cross-correlation` method fails. Fixed a lower version of `scikit-image` where the registration works.
|
1.0
|
Registration with `cross-correlation` method fails. - Due to https://github.com/scikit-image/scikit-image/issues/6456, registration with `cross-correlation` method fails. Fixed a lower version of `scikit-image` where the registration works.
|
process
|
registration with cross correlation method fails due to registration with cross correlation method fails fixed a lower version of scikit image where the registration works
| 1
|
605,601
| 18,737,645,543
|
IssuesEvent
|
2021-11-04 09:45:36
|
betagouv/service-national-universel
|
https://api.github.com/repos/betagouv/service-national-universel
|
closed
|
fix(inscription): collecte et le traitement des données de moins de 15 ans
|
enhancement priority-HIGH inscription
|
### Fonctionnalité
etape "consentements"
si enfant a moins de 15 ans afficher les consentement pour la collecte de données (il y a le consentnemtn parental ET consenetnemtn du jeune)
etape "Pièces justificatives"
Si authentification via France Connect, alors pas d'accord pour la collecte et le traitement des données
|
1.0
|
fix(inscription): collecte et le traitement des données de moins de 15 ans - ### Fonctionnalité
etape "consentements"
si enfant a moins de 15 ans afficher les consentement pour la collecte de données (il y a le consentnemtn parental ET consenetnemtn du jeune)
etape "Pièces justificatives"
Si authentification via France Connect, alors pas d'accord pour la collecte et le traitement des données
|
non_process
|
fix inscription collecte et le traitement des données de moins de ans fonctionnalité etape consentements si enfant a moins de ans afficher les consentement pour la collecte de données il y a le consentnemtn parental et consenetnemtn du jeune etape pièces justificatives si authentification via france connect alors pas d accord pour la collecte et le traitement des données
| 0
|
2,134
| 4,973,821,375
|
IssuesEvent
|
2016-12-06 02:54:19
|
codefordenver/org
|
https://api.github.com/repos/codefordenver/org
|
closed
|
Figure out what we want to do to improve the onboarding process
|
Process
|
As a new member or CfD, onboarding can be complicated and overwhelming, especially if the brigade is in the middle of a project.
|
1.0
|
Figure out what we want to do to improve the onboarding process - As a new member or CfD, onboarding can be complicated and overwhelming, especially if the brigade is in the middle of a project.
|
process
|
figure out what we want to do to improve the onboarding process as a new member or cfd onboarding can be complicated and overwhelming especially if the brigade is in the middle of a project
| 1
|
256,106
| 8,126,845,851
|
IssuesEvent
|
2018-08-17 05:05:47
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
build_visit: --no-qt doesn't appear to disable qt
|
Bug Likelihood: 3 - Occasional Priority: Normal Severity: 2 - Minor Irritation
|
build_visit --console --no-qt
I see the message:
disabling qt
However the Qt License prompt is shown, it is downloaded & built.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 957
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: build_visit: --no-qt doesn't appear to disable qt
Assigned to: Cyrus Harrison
Category:
Target version: 2.4.2
Author: Cyrus Harrison
Start: 02/06/2012
Due date:
% Done: 0
Estimated time:
Created: 02/06/2012 04:16 pm
Updated: 02/22/2012 03:25 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
build_visit --console --no-qt
I see the message:
disabling qt
However the Qt License prompt is shown, it is downloaded & built.
Comments:
Update from LLNL proj meeting.
Resolved w/ commits:RC: r17393Trunk: r17395
|
1.0
|
build_visit: --no-qt doesn't appear to disable qt - build_visit --console --no-qt
I see the message:
disabling qt
However the Qt License prompt is shown, it is downloaded & built.
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 957
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Normal
Subject: build_visit: --no-qt doesn't appear to disable qt
Assigned to: Cyrus Harrison
Category:
Target version: 2.4.2
Author: Cyrus Harrison
Start: 02/06/2012
Due date:
% Done: 0
Estimated time:
Created: 02/06/2012 04:16 pm
Updated: 02/22/2012 03:25 pm
Likelihood: 3 - Occasional
Severity: 2 - Minor Irritation
Found in version: 2.4.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
build_visit --console --no-qt
I see the message:
disabling qt
However the Qt License prompt is shown, it is downloaded & built.
Comments:
Update from LLNL proj meeting.
Resolved w/ commits:RC: r17393Trunk: r17395
|
non_process
|
build visit no qt doesn t appear to disable qt build visit console no qt i see the message disabling qt however the qt license prompt is shown it is downloaded built redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority normal subject build visit no qt doesn t appear to disable qt assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood occasional severity minor irritation found in version impact expected use os all support group any description build visit console no qt i see the message disabling qt however the qt license prompt is shown it is downloaded built comments update from llnl proj meeting resolved w commits rc
| 0
|
8,767
| 11,884,608,376
|
IssuesEvent
|
2020-03-27 17:58:09
|
tehblasian/aita
|
https://api.github.com/repos/tehblasian/aita
|
closed
|
Process Data
|
data-processing story
|
We need to process the data as to have it translated for the analysis and classification processes.
- [ ] TFIDF #15
- [ ] Vectorize using doc2vec #16
- [ ] Frequencies #17
|
1.0
|
Process Data - We need to process the data as to have it translated for the analysis and classification processes.
- [ ] TFIDF #15
- [ ] Vectorize using doc2vec #16
- [ ] Frequencies #17
|
process
|
process data we need to process the data as to have it translated for the analysis and classification processes tfidf vectorize using frequencies
| 1
|
86,590
| 10,511,631,496
|
IssuesEvent
|
2019-09-27 15:52:05
|
bmwcarit/Emma
|
https://api.github.com/repos/bmwcarit/Emma
|
opened
|
Clean-up `genDocs` scripts
|
bug documentation good first issue low prio question
|
# Description
<!-- What is the bug about? -->
Since we switched to GitHub pages there is no need for html documentation anymore.
To be discussed if we should keep the html generation part for docs. However it might likely fail at some point since it won't be executed regularly anymore.
We should definitively keep the UML and call graph generation part.
@KGergo88 what do you think? Could you work on this?
|
1.0
|
Clean-up `genDocs` scripts - # Description
<!-- What is the bug about? -->
Since we switched to GitHub pages there is no need for html documentation anymore.
To be discussed if we should keep the html generation part for docs. However it might likely fail at some point since it won't be executed regularly anymore.
We should definitively keep the UML and call graph generation part.
@KGergo88 what do you think? Could you work on this?
|
non_process
|
clean up gendocs scripts description since we switched to github pages there is no need for html documentation anymore to be discussed if we should keep the html generation part for docs however it might likely fail at some point since it won t be executed regularly anymore we should definitively keep the uml and call graph generation part what do you think could you work on this
| 0
|
4,962
| 7,804,948,674
|
IssuesEvent
|
2018-06-11 09:13:54
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
VDisk Auto Snapshot Settings
|
process_wontfix
|
VDisk automatic snapshot settings Hello, thank you for your reply
My current virtual disk does not have an automatic snapshot. I hope to have this feature and can customize the interval. what do I need to do? and l ovs config get /ovs/framework/scheduling/celery is null
|
1.0
|
VDisk Auto Snapshot Settings - VDisk automatic snapshot settings Hello, thank you for your reply
My current virtual disk does not have an automatic snapshot. I hope to have this feature and can customize the interval. what do I need to do? and l ovs config get /ovs/framework/scheduling/celery is null
|
process
|
vdisk auto snapshot settings vdisk automatic snapshot settings hello thank you for your reply my current virtual disk does not have an automatic snapshot i hope to have this feature and can customize the interval what do i need to do and l ovs config get ovs framework scheduling celery is null
| 1
|
58,395
| 14,381,568,713
|
IssuesEvent
|
2020-12-02 05:43:20
|
GoogleContainerTools/skaffold
|
https://api.github.com/repos/GoogleContainerTools/skaffold
|
closed
|
Are DockerHub rate limits going to be a problem for the busybox container
|
build/kaniko kind/friction priority/p1
|
Docker announced new rate limits:
See https://docs.docker.com/docker-hub/download-rate-limit/
When using skaffold to fire off a kaniko job I hit this rate limit.
```
Normal Pulling 20s (x2 over 35s) kubelet Pulling image "busybox"
Warning Failed 19s (x2 over 34s) kubelet Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
```
It looks like the Kaniko pod that skaffold is firing off includes a busybox container which is hosted in DockerHub.
Would it make sense to use a container hosted in GCR instead to avoid any potential issue with DockerHub rate limits?
```
Name: kaniko-bztqs
Namespace: emojichat
Priority: 0
Node: ip-172-23-103-252.us-west-2.compute.internal/172.23.103.252
Start Time: Thu, 19 Nov 2020 20:15:27 -0800
Labels: skaffold-kaniko=skaffold-kaniko
Annotations: kubernetes.io/psp: csp-psp
Status: Pending
IP: 172.23.103.3
IPs:
IP: 172.23.103.3
Init Containers:
kaniko-init-container:
Container ID:
Image: busybox
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
while [ ! -f /tmp/complete ]; do sleep 1; done
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 8
memory: 16Gi
Environment: <none>
Mounts:
/kaniko/.docker/ from docker-config (rw)
/kaniko/buildcontext from kaniko-emptydir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p4kvq (ro)
Containers:
kaniko:
Container ID:
Image: gcr.io/kaniko-project/executor:latest
Image ID:
Port: <none>
Host Port: <none>
Args:
--destination
963188529772.dkr.ecr.us-west-2.amazonaws.com/emojichat/chatroom:267b30d-dirty
--dockerfile
Dockerfile
--context
dir:///kaniko/buildcontext
--cache
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 8
memory: 16Gi
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /secret/
UPSTREAM_CLIENT_TYPE: UpstreamClient(skaffold-)
AWS_REGION: us-west-2
IMAGE_REPO: ...
IMAGE_NAME: chatroom
IMAGE_TAG: 267b30d-dirty
Mounts:
/kaniko/.docker/ from docker-config (rw)
/kaniko/buildcontext from kaniko-emptydir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p4kvq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kaniko-emptydir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
docker-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: docker-config
Optional: false
default-token-p4kvq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p4kvq
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
```
|
1.0
|
Are DockerHub rate limits going to be a problem for the busybox container - Docker announced new rate limits:
See https://docs.docker.com/docker-hub/download-rate-limit/
When using skaffold to fire off a kaniko job I hit this rate limit.
```
Normal Pulling 20s (x2 over 35s) kubelet Pulling image "busybox"
Warning Failed 19s (x2 over 34s) kubelet Failed to pull image "busybox": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
```
It looks like the Kaniko pod that skaffold is firing off includes a busybox container which is hosted in DockerHub.
Would it make sense to use a container hosted in GCR instead to avoid any potential issue with DockerHub rate limits?
```
Name: kaniko-bztqs
Namespace: emojichat
Priority: 0
Node: ip-172-23-103-252.us-west-2.compute.internal/172.23.103.252
Start Time: Thu, 19 Nov 2020 20:15:27 -0800
Labels: skaffold-kaniko=skaffold-kaniko
Annotations: kubernetes.io/psp: csp-psp
Status: Pending
IP: 172.23.103.3
IPs:
IP: 172.23.103.3
Init Containers:
kaniko-init-container:
Container ID:
Image: busybox
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
while [ ! -f /tmp/complete ]; do sleep 1; done
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 8
memory: 16Gi
Environment: <none>
Mounts:
/kaniko/.docker/ from docker-config (rw)
/kaniko/buildcontext from kaniko-emptydir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p4kvq (ro)
Containers:
kaniko:
Container ID:
Image: gcr.io/kaniko-project/executor:latest
Image ID:
Port: <none>
Host Port: <none>
Args:
--destination
963188529772.dkr.ecr.us-west-2.amazonaws.com/emojichat/chatroom:267b30d-dirty
--dockerfile
Dockerfile
--context
dir:///kaniko/buildcontext
--cache
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 8
memory: 16Gi
Environment:
GOOGLE_APPLICATION_CREDENTIALS: /secret/
UPSTREAM_CLIENT_TYPE: UpstreamClient(skaffold-)
AWS_REGION: us-west-2
IMAGE_REPO: ...
IMAGE_NAME: chatroom
IMAGE_TAG: 267b30d-dirty
Mounts:
/kaniko/.docker/ from docker-config (rw)
/kaniko/buildcontext from kaniko-emptydir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p4kvq (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
kaniko-emptydir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
docker-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: docker-config
Optional: false
default-token-p4kvq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p4kvq
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
```
|
non_process
|
are dockerhub rate limits going to be a problem for the busybox container docker announced new rate limits see when using skaffold to fire off a kaniko job i hit this rate limit normal pulling over kubelet pulling image busybox warning failed over kubelet failed to pull image busybox rpc error code unknown desc error response from daemon toomanyrequests you have reached your pull rate limit you may increase the limit by authenticating and upgrading it looks like the kaniko pod that skaffold is firing off includes a busybox container which is hosted in dockerhub would it make sense to use a container hosted in gcr instead to avoid any potential issue with dockerhub rate limits name kaniko bztqs namespace emojichat priority node ip us west compute internal start time thu nov labels skaffold kaniko skaffold kaniko annotations kubernetes io psp csp psp status pending ip ips ip init containers kaniko init container container id image busybox image id port host port command sh c while do sleep done state waiting reason errimagepull ready false restart count requests cpu memory environment mounts kaniko docker from docker config rw kaniko buildcontext from kaniko emptydir rw var run secrets kubernetes io serviceaccount from default token ro containers kaniko container id image gcr io kaniko project executor latest image id port host port args destination dkr ecr us west amazonaws com emojichat chatroom dirty dockerfile dockerfile context dir kaniko buildcontext cache state waiting reason podinitializing ready false restart count requests cpu memory environment google application credentials secret upstream client type upstreamclient skaffold aws region us west image repo image name chatroom image tag dirty mounts kaniko docker from docker config rw kaniko buildcontext from kaniko emptydir rw var run secrets kubernetes io serviceaccount from default token ro conditions type status initialized false ready false containersready false podscheduled true volumes kaniko emptydir type emptydir a temporary directory that shares a pod s lifetime medium sizelimit docker config type configmap a volume populated by a configmap name docker config optional false default token type secret a volume populated by a secret secretname default token optional false qos class burstable node selectors tolerations node kubernetes io not ready noexecute op exists for node kubernetes io unreachable noexecute op exists for
| 0
|
7,326
| 10,467,485,892
|
IssuesEvent
|
2019-09-22 05:47:22
|
yodaos-project/ShadowNode
|
https://api.github.com/repos/yodaos-project/ShadowNode
|
closed
|
could not listen SIGCHLD on JS land
|
bug process
|
* **Version**: v0.11.x
* **Platform**: darwin
* **Subsystem**: process
SIGCHLD would not be installed on new listeners.
```js
process.on('SIGCHLD', (signal) => { console.log(signal) })
```
Also SIGCHLD( and others like SIGUSR1 etc.) has different values on linux/darwin, only linux version was supported currently.
Refs: http://man7.org/linux/man-pages/man7/signal.7.html
|
1.0
|
could not listen SIGCHLD on JS land - * **Version**: v0.11.x
* **Platform**: darwin
* **Subsystem**: process
SIGCHLD would not be installed on new listeners.
```js
process.on('SIGCHLD', (signal) => { console.log(signal) })
```
Also SIGCHLD( and others like SIGUSR1 etc.) has different values on linux/darwin, only linux version was supported currently.
Refs: http://man7.org/linux/man-pages/man7/signal.7.html
|
process
|
could not listen sigchld on js land version x platform darwin subsystem process sigchld would not be installed on new listeners js process on sigchld signal console log signal also sigchld and others like etc has different values on linux darwin only linux version was supported currently refs
| 1
|
2,945
| 5,923,300,629
|
IssuesEvent
|
2017-05-23 07:33:25
|
rubberduck-vba/Rubberduck
|
https://api.github.com/repos/rubberduck-vba/Rubberduck
|
closed
|
Some Parser Errors
|
bug critical parse-tree-processing
|
Hi guys.
I found some strange Parser errors:
Im on Branch "next" Commit: e13e9c6836e71218d79faee565d1d45406e5262f
1.

The first error seems to be happening at the end of this "With" block.
Funny thing is that this is at L64C13 but the error reports L64C45
DBSheet is a worksheet variable
2.

The next one at the end of a normal function returning a string.
Notice how it says L12C50 but the cursor is on L12C39 (Which is at the end of the "then"
And the error message seems to be pointing to the content of L11
|
1.0
|
Some Parser Errors - Hi guys.
I found some strange Parser errors:
Im on Branch "next" Commit: e13e9c6836e71218d79faee565d1d45406e5262f
1.

The first error seems to be happening at the end of this "With" block.
Funny thing is that this is at L64C13 but the error reports L64C45
DBSheet is a worksheet variable
2.

The next one at the end of a normal function returning a string.
Notice how it says L12C50 but the cursor is on L12C39 (Which is at the end of the "then"
And the error message seems to be pointing to the content of L11
|
process
|
some parser errors hi guys i found some strange parser errors im on branch next commit the first error seems to be happening at the end of this with block funny thing is that this is at but the error reports dbsheet is a worksheet variable the next one at the end of a normal function returning a string notice how it says but the cursor is on which is at the end of the then and the error message seems to be pointing to the content of
| 1
|
8,685
| 11,816,579,714
|
IssuesEvent
|
2020-03-20 09:20:23
|
cranec-project/Covid-19
|
https://api.github.com/repos/cranec-project/Covid-19
|
opened
|
ward-level command and control
|
At overwhelm stage Critical ICU process Management Need Patient management
|
a combination of fear of infection, overwhelm and lack of training and equipment, as well as ever-changing best-operating practices, means that caregivers will need a central command center giving instructions and tracking activities of the medical personnel.
this system needs to be based on audio (e.g., dictation systems) and able to operate under noisy conditions, such as caused by ventilators and negative pressure systems, as well as the usual myriad of beeps provided by medical monitoring systems.
|
1.0
|
ward-level command and control - a combination of fear of infection, overwhelm and lack of training and equipment, as well as ever-changing best-operating practices, means that caregivers will need a central command center giving instructions and tracking activities of the medical personnel.
this system needs to be based on audio (e.g., dictation systems) and able to operate under noisy conditions, such as caused by ventilators and negative pressure systems, as well as the usual myriad of beeps provided by medical monitoring systems.
|
process
|
ward level command and control a combination of fear of infection overwhelm and lack of training and equipment as well as ever changing best operating practices means that caregivers will need a central command center giving instructions and tracking activities of the medical personnel this system needs to be based on audio e g dictation systems and able to operate under noisy conditions such as caused by ventilators and negative pressure systems as well as the usual myriad of beeps provided by medical monitoring systems
| 1
|
1,224
| 3,756,870,087
|
IssuesEvent
|
2016-03-13 16:48:56
|
y-lohse/cozy-frost
|
https://api.github.com/repos/y-lohse/cozy-frost
|
closed
|
Iframe header
|
postprocessing
|
Some websites (like github) add a meta header telling the browser to forbid embeding the page in an iframe. While this makes sense for the original website, it doesn't for the snapshots. So maybe remove those.
|
1.0
|
Iframe header - Some websites (like github) add a meta header telling the browser to forbid embeding the page in an iframe. While this makes sense for the original website, it doesn't for the snapshots. So maybe remove those.
|
process
|
iframe header some websites like github add a meta header telling the browser to forbid embeding the page in an iframe while this makes sense for the original website it doesn t for the snapshots so maybe remove those
| 1
|
4,979
| 7,809,056,497
|
IssuesEvent
|
2018-06-11 22:27:35
|
nunit/nunit.analyzers
|
https://api.github.com/repos/nunit/nunit.analyzers
|
closed
|
Add myget feed to tooling
|
is:process
|
Appveyor should upload the different builds (master and PRs) to myget. Is waiting on #36 for enabling package of nuget packes.
Probably we also need to represent the version as currently Appveyor is at version 1 (+ the build number).
|
1.0
|
Add myget feed to tooling - Appveyor should upload the different builds (master and PRs) to myget. Is waiting on #36 for enabling package of nuget packes.
Probably we also need to represent the version as currently Appveyor is at version 1 (+ the build number).
|
process
|
add myget feed to tooling appveyor should upload the different builds master and prs to myget is waiting on for enabling package of nuget packes probably we also need to represent the version as currently appveyor is at version the build number
| 1
|
240,129
| 20,013,091,178
|
IssuesEvent
|
2022-02-01 09:14:28
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
opened
|
[CI] BooleanTermsIT testMultiValueField failing
|
:Analytics/Aggregations >test-failure Team:Analytics
|
**Build scan:**
https://gradle-enterprise.elastic.co/s/5ey776foyu7lc/tests/:server:internalClusterTest/org.elasticsearch.search.aggregations.bucket.BooleanTermsIT/testMultiValueField
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.search.aggregations.bucket.BooleanTermsIT.testMultiValueField" -Dtests.seed=4D45D17F3171382 -Dtests.locale=nl-NL -Dtests.timezone=America/Moncton -Druntime.java=17 -Dtests.fips.enabled=true`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.search.aggregations.bucket.BooleanTermsIT&tests.test=testMultiValueField
**Failure excerpt:**
```
java.lang.ClassCastException: class org.elasticsearch.search.aggregations.bucket.terms.UnmappedTerms cannot be cast to class org.elasticsearch.search.aggregations.bucket.terms.LongTerms (org.elasticsearch.search.aggregations.bucket.terms.UnmappedTerms and org.elasticsearch.search.aggregations.bucket.terms.LongTerms are in unnamed module of loader 'app')
at __randomizedtesting.SeedInfo.seed([4D45D17F3171382:303277C68A12E44C]:0)
at org.elasticsearch.search.aggregations.bucket.BooleanTermsIT.testMultiValueField(BooleanTermsIT.java:117)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
```
|
1.0
|
[CI] BooleanTermsIT testMultiValueField failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/5ey776foyu7lc/tests/:server:internalClusterTest/org.elasticsearch.search.aggregations.bucket.BooleanTermsIT/testMultiValueField
**Reproduction line:**
`./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.search.aggregations.bucket.BooleanTermsIT.testMultiValueField" -Dtests.seed=4D45D17F3171382 -Dtests.locale=nl-NL -Dtests.timezone=America/Moncton -Druntime.java=17 -Dtests.fips.enabled=true`
**Applicable branches:**
master
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.search.aggregations.bucket.BooleanTermsIT&tests.test=testMultiValueField
**Failure excerpt:**
```
java.lang.ClassCastException: class org.elasticsearch.search.aggregations.bucket.terms.UnmappedTerms cannot be cast to class org.elasticsearch.search.aggregations.bucket.terms.LongTerms (org.elasticsearch.search.aggregations.bucket.terms.UnmappedTerms and org.elasticsearch.search.aggregations.bucket.terms.LongTerms are in unnamed module of loader 'app')
at __randomizedtesting.SeedInfo.seed([4D45D17F3171382:303277C68A12E44C]:0)
at org.elasticsearch.search.aggregations.bucket.BooleanTermsIT.testMultiValueField(BooleanTermsIT.java:117)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:44)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:824)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:475)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:375)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:831)
at java.lang.Thread.run(Thread.java:833)
```
|
non_process
|
ci booleantermsit testmultivaluefield failing build scan reproduction line gradlew server internalclustertest tests org elasticsearch search aggregations bucket booleantermsit testmultivaluefield dtests seed dtests locale nl nl dtests timezone america moncton druntime java dtests fips enabled true applicable branches master reproduces locally yes failure history failure excerpt java lang classcastexception class org elasticsearch search aggregations bucket terms unmappedterms cannot be cast to class org elasticsearch search aggregations bucket terms longterms org elasticsearch search aggregations bucket terms unmappedterms and org elasticsearch search aggregations bucket terms longterms are in unnamed module of loader app at randomizedtesting seedinfo seed at org elasticsearch search aggregations bucket booleantermsit testmultivaluefield booleantermsit java at jdk internal reflect nativemethodaccessorimpl nativemethodaccessorimpl java at jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java
| 0
|
23,565
| 12,025,997,374
|
IssuesEvent
|
2020-04-12 12:08:16
|
EFForg/https-everywhere
|
https://api.github.com/repos/EFForg/https-everywhere
|
closed
|
High cpu consumption with new update
|
needs more info performance
|
With new update, `2020.3.16` cpu usage jumped about 10x and takes the whole cpu core I believe. Regarding to macOS activity monitor it's just above ~120%.
Version: `2020.3.16`, browser: `Version 83.0.4094.0 (Official Build) canary (64-bit)`, macOS: `10.14.6 (18G3020)`
|
True
|
High cpu consumption with new update - With new update, `2020.3.16` cpu usage jumped about 10x and takes the whole cpu core I believe. Regarding to macOS activity monitor it's just above ~120%.
Version: `2020.3.16`, browser: `Version 83.0.4094.0 (Official Build) canary (64-bit)`, macOS: `10.14.6 (18G3020)`
|
non_process
|
high cpu consumption with new update with new update cpu usage jumped about and takes the whole cpu core i believe regarding to macos activity monitor it s just above version browser version official build canary bit macos
| 0
|