| Column | Type | Stats |
|---|---|---|
| Unnamed: 0 | int64 | min 0, max 832k |
| id | float64 | min 2.49B, max 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | min 19, max 19 |
| repo | stringlengths | min 7, max 112 |
| repo_url | stringlengths | min 36, max 141 |
| action | stringclasses | 3 values |
| title | stringlengths | min 1, max 744 |
| labels | stringlengths | min 4, max 574 |
| body | stringlengths | min 9, max 211k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | min 96, max 211k |
| label | stringclasses | 2 values |
| text | stringlengths | min 96, max 188k |
| binary_label | int64 | min 0, max 1 |
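In this schema the string `label` column ("process" / "non_process") and the integer `binary_label` column (1 / 0) appear to encode the same target. A minimal pandas sketch of that relationship, using toy values copied from the sample rows below (the actual dataset file is not named in this dump, so the frame is built inline):

```python
import pandas as pd

# Toy frame mirroring two of the fifteen columns in the schema above;
# values are taken from the sample rows (the real file is not named here).
df = pd.DataFrame({
    "label": ["process", "non_process", "process", "non_process"],
    "binary_label": [1, 0, 1, 0],
})

# binary_label appears to be a direct encoding of label:
# 1 for "process", 0 for "non_process".
encoded = (df["label"] == "process").astype(int)
assert (df["binary_label"] == encoded).all()

# Selecting only the "process" rows:
process_rows = df[df["binary_label"] == 1]
print(len(process_rows))  # 2
```

The same boolean-indexing pattern would filter the full dataset once loaded, e.g. to inspect only process-labeled issues.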
Unnamed: 0: 6,290
id: 9,300,306,626
type: IssuesEvent
created_at: 2019-03-23 12:38:47
repo: googleapis/google-cloud-java
repo_url: https://api.github.com/repos/googleapis/google-cloud-java
action: closed
title: cloud KMS violates semver
labels: api: cloudkms type: process
body: Note: This client is a work-in-progress, and may occasionally make backwards-incompatible changes. and yet it is 1.1.0
index: 1.0
text_combine: cloud KMS violates semver - Note: This client is a work-in-progress, and may occasionally make backwards-incompatible changes. and yet it is 1.1.0
label: process
text: cloud kms violates semver note this client is a work in progress and may occasionally make backwards incompatible changes and yet it is
binary_label: 1
Unnamed: 0: 11,763
id: 9,477,990,796
type: IssuesEvent
created_at: 2019-04-19 20:45:50
repo: Azure/azure-cli
repo_url: https://api.github.com/repos/Azure/azure-cli
action: closed
title: Unable to update bot facebook channel
labels: BotService bug
body:
**Describe the bug** I can create a facebook channel via `az bot facebook create`, but if I run it again to update the channel it fails. **To Reproduce** 1. create a new bot using `az bot create` 2. create a FB channel using `az bot facebook create` 3. try to update FB channel using `az bot facebook create` again **Expected behavior** Can update channel successfully **Environment summary** Install Method (e.g. pip, interactive script, apt-get, Docker, MSI, edge build) / CLI version (`az --version`) / OS version / Shell Type (e.g. bash, cmd.exe, Bash on Windows) install: brew macOS 10.13.6 azure-cli (2.0.33) shell: zsh **Additional context** Error: ``` Error occurred in request., RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 440, in send timeout=timeout File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 712, in urlopen retries = retries.increment(method, url, response=response, _pool=self) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/util/retry.py", line 388, in 
increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 257, in send **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 508, in request resp = self.send(prep, **send_kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 499, in send raise RetryError(e, request=request) requests.exceptions.RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/knack/cli.py", line 197, in invoke cmd_result = self.invocation.execute(args) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 336, in execute six.reraise(*sys.exc_info()) File 
"/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/six.py", line 693, in reraise raise value File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 310, in execute result = cmd(params) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 171, in __call__ return super(AzCliCommand, self).__call__(*args, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/knack/commands.py", line 109, in __call__ return self.handler(*args, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 420, in default_command_handler result = op(**command_args) File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/custom.py", line 337, in facebook_create return create_channel(client, channel, 'FacebookChannel', resource_group_name, resource_name) File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/custom.py", line 323, in create_channel parameters=botChannel File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/botservice/operations/channels_operations.py", line 96, in create request, header_parameters, body_content, **operation_config) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 290, in send raise_with_traceback(ClientRequestError, msg, err) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/exceptions.py", line 45, in raise_with_traceback raise error.with_traceback(exc_traceback) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 257, in send **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 508, in request resp = self.send(prep, **send_kwargs) File 
"/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 499, in send raise RetryError(e, request=request) msrest.exceptions.ClientRequestError: Error occurred in request., RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) ```
index: 1.0
text_combine:
Unable to update bot facebook channel - **Describe the bug** I can create a facebook channel via `az bot facebook create`, but if I run it again to update the channel it fails. **To Reproduce** 1. create a new bot using `az bot create` 2. create a FB channel using `az bot facebook create` 3. try to update FB channel using `az bot facebook create` again **Expected behavior** Can update channel successfully **Environment summary** Install Method (e.g. pip, interactive script, apt-get, Docker, MSI, edge build) / CLI version (`az --version`) / OS version / Shell Type (e.g. bash, cmd.exe, Bash on Windows) install: brew macOS 10.13.6 azure-cli (2.0.33) shell: zsh **Additional context** Error: ``` Error occurred in request., RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 440, in send timeout=timeout File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 732, in urlopen body_pos=body_pos, **response_kw) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/connectionpool.py", line 712, in urlopen retries = retries.increment(method, url, response=response, _pool=self) File 
"/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/urllib3/util/retry.py", line 388, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 257, in send **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 508, in request resp = self.send(prep, **send_kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 499, in send raise RetryError(e, request=request) requests.exceptions.RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/knack/cli.py", line 197, in invoke cmd_result = self.invocation.execute(args) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 336, 
in execute six.reraise(*sys.exc_info()) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/six.py", line 693, in reraise raise value File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 310, in execute result = cmd(params) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 171, in __call__ return super(AzCliCommand, self).__call__(*args, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/knack/commands.py", line 109, in __call__ return self.handler(*args, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 420, in default_command_handler result = op(**command_args) File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/custom.py", line 337, in facebook_create return create_channel(client, channel, 'FacebookChannel', resource_group_name, resource_name) File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/custom.py", line 323, in create_channel parameters=botChannel File "/Users/josephcheung/.azure/cliextensions/botservice/azext_bot/botservice/operations/channels_operations.py", line 96, in create request, header_parameters, body_content, **operation_config) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 290, in send raise_with_traceback(ClientRequestError, msg, err) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/exceptions.py", line 45, in raise_with_traceback raise error.with_traceback(exc_traceback) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/msrest/service_client.py", line 257, in send **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 508, in request resp = self.send(prep, 
**send_kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/sessions.py", line 618, in send r = adapter.send(request, **kwargs) File "/usr/local/Cellar/azure-cli/2.0.33/libexec/lib/python3.6/site-packages/requests/adapters.py", line 499, in send raise RetryError(e, request=request) msrest.exceptions.ClientRequestError: Error occurred in request., RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/9a0ad265-47aa-4d8c-849f-e314fa4bfdb6/resourceGroups/zeep-chat/providers/Microsoft.BotService/botServices/test-zeep-123/channels/FacebookChannel?api-version=2017-12-01 (Caused by ResponseError('too many 500 error responses',)) ```
label: non_process
text:
unable to update bot facebook channel describe the bug i can create a facebook channel via az bot facebook create but if i run it again to update the channel it fails to reproduce create a new bot using az bot create create a fb channel using az bot facebook create try to update fb channel using az bot facebook create again expected behavior can update channel successfully environment summary install method e g pip interactive script apt get docker msi edge build cli version az version os version shell type e g bash cmd exe bash on windows install brew macos azure cli shell zsh additional context error error occurred in request retryerror httpsconnectionpool host management azure com port max retries exceeded with url subscriptions resourcegroups zeep chat providers microsoft botservice botservices test zeep channels facebookchannel api version caused by responseerror too many error responses traceback most recent call last file usr local cellar azure cli libexec lib site packages requests adapters py line in send timeout timeout file usr local cellar azure cli libexec lib site packages connectionpool py line in urlopen body pos body pos response kw file usr local cellar azure cli libexec lib site packages connectionpool py line in urlopen body pos body pos response kw file usr local cellar azure cli libexec lib site packages connectionpool py line in urlopen body pos body pos response kw file usr local cellar azure cli libexec lib site packages connectionpool py line in urlopen retries retries increment method url response response pool self file usr local cellar azure cli libexec lib site packages util retry py line in increment raise maxretryerror pool url error or responseerror cause exceptions maxretryerror httpsconnectionpool host management azure com port max retries exceeded with url subscriptions resourcegroups zeep chat providers microsoft botservice botservices test zeep channels facebookchannel api version caused by responseerror too many error 
responses during handling of the above exception another exception occurred traceback most recent call last file usr local cellar azure cli libexec lib site packages msrest service client py line in send kwargs file usr local cellar azure cli libexec lib site packages requests sessions py line in request resp self send prep send kwargs file usr local cellar azure cli libexec lib site packages requests sessions py line in send r adapter send request kwargs file usr local cellar azure cli libexec lib site packages requests adapters py line in send raise retryerror e request request requests exceptions retryerror httpsconnectionpool host management azure com port max retries exceeded with url subscriptions resourcegroups zeep chat providers microsoft botservice botservices test zeep channels facebookchannel api version caused by responseerror too many error responses during handling of the above exception another exception occurred traceback most recent call last file usr local cellar azure cli libexec lib site packages knack cli py line in invoke cmd result self invocation execute args file usr local cellar azure cli libexec lib site packages azure cli core commands init py line in execute six reraise sys exc info file usr local cellar azure cli libexec lib site packages six py line in reraise raise value file usr local cellar azure cli libexec lib site packages azure cli core commands init py line in execute result cmd params file usr local cellar azure cli libexec lib site packages azure cli core commands init py line in call return super azclicommand self call args kwargs file usr local cellar azure cli libexec lib site packages knack commands py line in call return self handler args kwargs file usr local cellar azure cli libexec lib site packages azure cli core init py line in default command handler result op command args file users josephcheung azure cliextensions botservice azext bot custom py line in facebook create return create channel client channel 
facebookchannel resource group name resource name file users josephcheung azure cliextensions botservice azext bot custom py line in create channel parameters botchannel file users josephcheung azure cliextensions botservice azext bot botservice operations channels operations py line in create request header parameters body content operation config file usr local cellar azure cli libexec lib site packages msrest service client py line in send raise with traceback clientrequesterror msg err file usr local cellar azure cli libexec lib site packages msrest exceptions py line in raise with traceback raise error with traceback exc traceback file usr local cellar azure cli libexec lib site packages msrest service client py line in send kwargs file usr local cellar azure cli libexec lib site packages requests sessions py line in request resp self send prep send kwargs file usr local cellar azure cli libexec lib site packages requests sessions py line in send r adapter send request kwargs file usr local cellar azure cli libexec lib site packages requests adapters py line in send raise retryerror e request request msrest exceptions clientrequesterror error occurred in request retryerror httpsconnectionpool host management azure com port max retries exceeded with url subscriptions resourcegroups zeep chat providers microsoft botservice botservices test zeep channels facebookchannel api version caused by responseerror too many error responses
binary_label: 0
Unnamed: 0: 11,351
id: 14,171,908,266
type: IssuesEvent
created_at: 2020-11-12 16:16:52
repo: MicrosoftDocs/azure-devops-docs
repo_url: https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
action: closed
title: change variable value to upper case.
labels: Pri2 devops-cicd-process/tech devops/prod support-request
body:
[Enter feedback here] How can i use the upper function to change the variable value to upper case. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
index: 1.0
text_combine:
change variable value to upper case. - [Enter feedback here] How can i use the upper function to change the variable value to upper case. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 77c58a78-a567-e99a-9eb7-62dddd1b90b6 * Version Independent ID: 680a79bc-11de-39fc-43e3-e07dc762db18 * Content: [Expressions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops) * Content Source: [docs/pipelines/process/expressions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/expressions.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
label: process
text:
change variable value to upper case how can i use the upper function to change the variable value to upper case document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
binary_label: 1
Unnamed: 0: 264,869
id: 28,214,072,710
type: IssuesEvent
created_at: 2023-04-05 07:37:46
repo: hshivhare67/platform_device_renesas_kernel_v4.19.72
repo_url: https://api.github.com/repos/hshivhare67/platform_device_renesas_kernel_v4.19.72
action: closed
title: CVE-2022-1734 (High) detected in linuxlinux-4.19.279 - autoclosed
labels: Mend: dependency security vulnerability
body:
## CVE-2022-1734 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw in Linux Kernel found in nfcmrvl_nci_unregister_dev() in drivers/nfc/nfcmrvl/main.c can lead to use after free both read or write when non synchronized between cleanup routine and firmware download routine. <p>Publish Date: 2022-05-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1734>CVE-2022-1734</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1734">https://www.linuxkernelcves.com/cves/CVE-2022-1734</a></p> <p>Release Date: 2022-05-18</p> <p>Fix Resolution: v4.9.313,v4.14.278,v4.19.242,v5.4.193,v5.10.115,v5.15.39,v5.17.7,v5.18-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2022-1734 (High) detected in linuxlinux-4.19.279 - autoclosed - ## CVE-2022-1734 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.279</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A flaw in Linux Kernel found in nfcmrvl_nci_unregister_dev() in drivers/nfc/nfcmrvl/main.c can lead to use after free both read or write when non synchronized between cleanup routine and firmware download routine. <p>Publish Date: 2022-05-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1734>CVE-2022-1734</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-1734">https://www.linuxkernelcves.com/cves/CVE-2022-1734</a></p> <p>Release Date: 2022-05-18</p> <p>Fix Resolution: v4.9.313,v4.14.278,v4.19.242,v5.4.193,v5.10.115,v5.15.39,v5.17.7,v5.18-rc6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch main vulnerable source files vulnerability details a flaw in linux kernel found in nfcmrvl nci unregister dev in drivers nfc nfcmrvl main c can lead to use after free both read or write when non synchronized between cleanup routine and firmware download routine publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
binary_label: 0
Unnamed: 0: 20,411
id: 27,067,759,203
type: IssuesEvent
created_at: 2023-02-14 02:41:19
repo: nephio-project/sig-release
repo_url: https://api.github.com/repos/nephio-project/sig-release
action: opened
title: Define and document Github Project Management process for Nephio
labels: area/process-mgmt
body:
We need to define the GitHub project management process for the Nephio project. As a part of this we will define and document - Issue types - Issue Type templates - labels - Milestones - Status - Project boards - PR life cycle - Branching /Tagging These are discussed more in the following documents. https://docs.google.com/document/d/17x5HNCAlsjNzFHhAlCkC-IlhX2I26kqghYunFTSW_Ls/edit#heading=h.amd9l4e7ztbx https://docs.google.com/document/d/1y-m4VQk67qlwk0JpcbfGroIinOm78S8itFWrTZDIFw4/edit#
index: 1.0
text_combine:
Define and document Github Project Management process for Nephio - We need to define the GitHub project management process for the Nephio project. As a part of this we will define and document - Issue types - Issue Type templates - labels - Milestones - Status - Project boards - PR life cycle - Branching /Tagging These are discussed more in the following documents. https://docs.google.com/document/d/17x5HNCAlsjNzFHhAlCkC-IlhX2I26kqghYunFTSW_Ls/edit#heading=h.amd9l4e7ztbx https://docs.google.com/document/d/1y-m4VQk67qlwk0JpcbfGroIinOm78S8itFWrTZDIFw4/edit#
label: process
text:
define and document github project management process for nephio we need to define the github project management process for the nephio project as a part of this we will define and document issue types issue type templates labels milestones status project boards pr life cycle branching tagging these are discussed more in the following documents
binary_label: 1
Unnamed: 0: 8,506
id: 11,686,286,777
type: IssuesEvent
created_at: 2020-03-05 10:35:56
repo: heim-rs/heim
repo_url: https://api.github.com/repos/heim-rs/heim
action: closed
title: Process::niceness for *nix systems
labels: A-process C-enhancement O-linux O-macos
body: As "niceness" is the POSIX term, this method should be implemented in the extension trait, available for all *nix systems.
index: 1.0
text_combine: Process::niceness for *nix systems - As "niceness" is the POSIX term, this method should be implemented in the extension trait, available for all *nix systems.
label: process
text: process niceness for nix systems as niceness is the posix term this method should be implemented in the extension trait available for all nix systems
binary_label: 1
Unnamed: 0: 11,877
id: 14,676,405,051
type: IssuesEvent
created_at: 2020-12-30 20:06:14
repo: ewen-lbh/portfolio
repo_url: https://api.github.com/repos/ewen-lbh/portfolio
action: opened
title: Interpret `created` as a shortcut for `finished` and `started`
labels: processing
body: Interpret ```yaml created: 2020-08-06 ``` as: ```yaml started: 2020-08-06 finished: 2020-08-06 ``` Conflicts with #27
index: 1.0
text_combine: Interpret `created` as a shortcut for `finished` and `started` - Interpret ```yaml created: 2020-08-06 ``` as: ```yaml started: 2020-08-06 finished: 2020-08-06 ``` Conflicts with #27
label: process
text: interpret created as a shortcut for finished and started interpret yaml created as yaml started finished conflicts with
binary_label: 1
Unnamed: 0: 16,479
id: 21,415,294,334
type: IssuesEvent
created_at: 2022-04-22 10:11:23
repo: PyCQA/pylint
repo_url: https://api.github.com/repos/PyCQA/pylint
action: closed
title: ``cannot serialize '_io.TextIOWrapper' object'`` with ``--jobs 0``
labels: Crash 💥 topic-multiprocessing
body:
### Bug description When using the option --jobs 0, pylint crashes with the following error ### Configuration _No response_ ### Command used ```shell pylint -j 0 blah ``` ### Pylint output ```shell Traceback (most recent call last): File "/blah/.tox/blah/bin/pylint", line 8, in <module> sys.exit(run_pylint()) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/__init__.py", line 22, in run_pylint PylintRun(argv or sys.argv[1:]) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/run.py", line 358, in __init__ linter.check(args) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/pylinter.py", line 1081, in check files_or_modules, File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/parallel.py", line 148, in check_parallel jobs, initializer=initializer, initargs=[dill.dumps(linter)] File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 304, in dumps dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio) File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 276, in dump Pickler(file, protocol, **_kwds).dump(obj) File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 498, in dump StockPickler.dump(self, obj) File "/usr/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/usr/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/usr/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/usr/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python3.7/pickle.py", line 662, in 
save_reduce save(state) File "/usr/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/usr/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: cannot serialize '_io.TextIOWrapper' object ``` ### Expected behavior Pylint should finish its analysis successfully, not causing any exception ### Pylint version ```shell pylint 2.13.7 astroid 2.11.3 Python 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] ``` ### OS / Environment _No response_ ### Additional dependencies _No response_
1.0
``cannot serialize '_io.TextIOWrapper' object'`` with ``--jobs 0`` - ### Bug description When using the option --jobs 0, pylint crashes with the following error ### Configuration _No response_ ### Command used ```shell pylint -j 0 blah ``` ### Pylint output ```shell Traceback (most recent call last): File "/blah/.tox/blah/bin/pylint", line 8, in <module> sys.exit(run_pylint()) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/__init__.py", line 22, in run_pylint PylintRun(argv or sys.argv[1:]) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/run.py", line 358, in __init__ linter.check(args) File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/pylinter.py", line 1081, in check files_or_modules, File "/blah/.tox/blah/lib/python3.7/site-packages/pylint/lint/parallel.py", line 148, in check_parallel jobs, initializer=initializer, initargs=[dill.dumps(linter)] File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 304, in dumps dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio) File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 276, in dump Pickler(file, protocol, **_kwds).dump(obj) File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 498, in dump StockPickler.dump(self, obj) File "/usr/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/usr/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/usr/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/usr/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/usr/lib/python3.7/pickle.py", line 549, in save 
self.save_reduce(obj=obj, *rv) File "/usr/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/usr/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/blah/.tox/blah/lib/python3.7/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/usr/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/usr/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: cannot serialize '_io.TextIOWrapper' object ``` ### Expected behavior Pylint should finish its analysis successfully, not causing any exception ### Pylint version ```shell pylint 2.13.7 astroid 2.11.3 Python 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0] ``` ### OS / Environment _No response_ ### Additional dependencies _No response_
process
cannot serialize io textiowrapper object with jobs bug description when using the option jobs pylint crashes with the following error configuration no response command used shell pylint j blah pylint output shell traceback most recent call last file blah tox blah bin pylint line in sys exit run pylint file blah tox blah lib site packages pylint init py line in run pylint pylintrun argv or sys argv file blah tox blah lib site packages pylint lint run py line in init linter check args file blah tox blah lib site packages pylint lint pylinter py line in check files or modules file blah tox blah lib site packages pylint lint parallel py line in check parallel jobs initializer initializer initargs file blah tox blah lib site packages dill dill py line in dumps dump obj file protocol byref fmode recurse kwds strictio file blah tox blah lib site packages dill dill py line in dump pickler file protocol kwds dump obj file blah tox blah lib site packages dill dill py line in dump stockpickler dump self obj file usr lib pickle py line in dump self save obj file usr lib pickle py line in save self save reduce obj obj rv file usr lib pickle py line in save reduce save state file usr lib pickle py line in save f self obj call unbound method with explicit self file blah tox blah lib site packages dill dill py line in save module dict stockpickler save dict pickler obj file usr lib pickle py line in save dict self batch setitems obj items file usr lib pickle py line in batch setitems save v file usr lib pickle py line in save self save reduce obj obj rv file usr lib pickle py line in save reduce save state file usr lib pickle py line in save f self obj call unbound method with explicit self file blah tox blah lib site packages dill dill py line in save module dict stockpickler save dict pickler obj file usr lib pickle py line in save dict self batch setitems obj items file usr lib pickle py line in batch setitems save v file usr lib pickle py line in save rv reduce self proto 
typeerror cannot serialize io textiowrapper object expected behavior pylint should finish its analysis successfully not causing any exception pylint version shell pylint astroid python default nov os environment no response additional dependencies no response
1
226,574
18,040,856,082
IssuesEvent
2021-09-18 02:50:56
logicmoo/logicmoo_workspace
https://api.github.com/repos/logicmoo/logicmoo_workspace
opened
logicmoo.pfc.test.sanity_base.FILE_01@Test_9999_Line_9999__Exitcode_7
Test_9999 FILE_01 logicmoo.pfc.test.sanity_base
(cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif file_01.pfc) Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01/logicmoo_pfc_test_sanity_base_FILE_01_Test_9999_Line_9999__Exitcode_7/ This: https://jenkins.logicmoo.org/job/logicmoo_workspace/41/testReport/logicmoo.pfc.test.sanity_base/FILE_01/logicmoo_pfc_test_sanity_base_FILE_01_Test_9999_Line_9999__Exitcode_7/ ``` % running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/file_01.pfc'), %~ this_test_might_need( :-( use_module( library(logicmoo_plarkc)))) :- expects_dialect(pfc). header_sane:(must_clause_asserted(G):- cwc, must(clause_asserted_u(G))). :- ain((must_clause_asserted(G):- cwc, must(clause_asserted_u(G)))). must_clause_asserted(G):- cwc, must(clause_asserted_u(G)). :- listing(must_clause_asserted). %~ skipped( listing(must_clause_asserted)) :- sanity(predicate_property(must_clause_asserted(_),number_of_clauses(_))). a. :- listing(a). %~ skipped( listing(a)) :- header_sane:listing(a). % @TODO - fails here bc must_clause_asserted/1 needs love %~ skipped( listing(a)) % @TODO - fails here bc must_clause_asserted/1 needs love :- must_clause_asserted(a). sHOW_MUST_go_on_failed_F__A__I__L_(baseKB:clause_asserted_u(a)) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01.pfc#L33 %~ error( sHOW_MUST_go_on_failed_F__A__I__L_( baseKB : clause_asserted_u(a))) %~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01.pfc#L33 ``` totalTime=10 FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k file_01.pfc (returned 137)
2.0
logicmoo.pfc.test.sanity_base.FILE_01@Test_9999_Line_9999__Exitcode_7 - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; timeout --foreground --preserve-status -s SIGKILL -k 10s 10s lmoo-clif file_01.pfc) Latest: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/FILE_01/logicmoo_pfc_test_sanity_base_FILE_01_Test_9999_Line_9999__Exitcode_7/ This: https://jenkins.logicmoo.org/job/logicmoo_workspace/41/testReport/logicmoo.pfc.test.sanity_base/FILE_01/logicmoo_pfc_test_sanity_base_FILE_01_Test_9999_Line_9999__Exitcode_7/ ``` % running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/file_01.pfc'), %~ this_test_might_need( :-( use_module( library(logicmoo_plarkc)))) :- expects_dialect(pfc). header_sane:(must_clause_asserted(G):- cwc, must(clause_asserted_u(G))). :- ain((must_clause_asserted(G):- cwc, must(clause_asserted_u(G)))). must_clause_asserted(G):- cwc, must(clause_asserted_u(G)). :- listing(must_clause_asserted). %~ skipped( listing(must_clause_asserted)) :- sanity(predicate_property(must_clause_asserted(_),number_of_clauses(_))). a. :- listing(a). %~ skipped( listing(a)) :- header_sane:listing(a). % @TODO - fails here bc must_clause_asserted/1 needs love %~ skipped( listing(a)) % @TODO - fails here bc must_clause_asserted/1 needs love :- must_clause_asserted(a). sHOW_MUST_go_on_failed_F__A__I__L_(baseKB:clause_asserted_u(a)) %~ FIlE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01.pfc#L33 %~ error( sHOW_MUST_go_on_failed_F__A__I__L_( baseKB : clause_asserted_u(a))) %~ FILE: * https://logicmoo.org:2082/gitlab/logicmoo/logicmoo_workspace/-/blob/master/packs_sys/pfc/t/sanity_base/file_01.pfc#L33 ``` totalTime=10 FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k file_01.pfc (returned 137)
non_process
logicmoo pfc test sanity base file test line exitcode cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base timeout foreground preserve status s sigkill k lmoo clif file pfc latest this running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base file pfc this test might need use module library logicmoo plarkc expects dialect pfc header sane must clause asserted g cwc must clause asserted u g ain must clause asserted g cwc must clause asserted u g must clause asserted g cwc must clause asserted u g listing must clause asserted skipped listing must clause asserted sanity predicate property must clause asserted number of clauses a listing a skipped listing a header sane listing a todo fails here bc must clause asserted needs love skipped listing a todo fails here bc must clause asserted needs love must clause asserted a show must go on failed f a i l basekb clause asserted u a file error show must go on failed f a i l basekb clause asserted u a file totaltime failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k file pfc returned
0
28,554
4,422,109,560
IssuesEvent
2016-08-16 00:34:09
TransNexus/NexOSS
https://api.github.com/repos/TransNexus/NexOSS
closed
LCR Product Full View does not display
bug needs testing
#### Page Path #### Description of Issue Drilling down into the LCR full view shows no providers for ##1201200. The legacy view shows 5 providers for breakout ##1201200. ![image](https://cloud.githubusercontent.com/assets/18315382/17635337/c1e16e06-60a4-11e6-92a2-30683b5f6608.png) Legacy View ![image](https://cloud.githubusercontent.com/assets/18315382/17635372/eb98b6a0-60a4-11e6-824f-deb6a277ff7b.png)
1.0
LCR Product Full View does not display - #### Page Path #### Description of Issue Drilling down into the LCR full view shows no providers for ##1201200. The legacy view shows 5 providers for breakout ##1201200. ![image](https://cloud.githubusercontent.com/assets/18315382/17635337/c1e16e06-60a4-11e6-92a2-30683b5f6608.png) Legacy View ![image](https://cloud.githubusercontent.com/assets/18315382/17635372/eb98b6a0-60a4-11e6-824f-deb6a277ff7b.png)
non_process
lcr product full view does not display page path description of issue drilling down into the lcr full view shows no providers for the legacy view shows providers for breakout legacy view
0
522,583
15,162,336,836
IssuesEvent
2021-02-12 10:27:14
AyeCode/ayecode-connect
https://api.github.com/repos/AyeCode/ayecode-connect
closed
Bug Scrub for new release
For: Developer Priority: High Type: Bug
- [x] WPEU not activated on first connect. - [x] Import data page needs to be restricted to need "connection" - [x] Required plugins need better formatting - [x] CPT DB tables not created. - [x] Make sure GD core is installed first to prevent missing table notifications. - [x] Add warning before import that content may be removed.
1.0
Bug Scrub for new release - - [x] WPEU not activated on first connect. - [x] Import data page needs to be restricted to need "connection" - [x] Required plugins need better formatting - [x] CPT DB tables not created. - [x] Make sure GD core is installed first to prevent missing table notifications. - [x] Add warning before import that content may be removed.
non_process
bug scrub for new release wpeu not activated on first connect import data page needs to be restricted to need connection required plugins need better formatting cpt db tables not created make sure gd core is installed first to prevent missing table notifications add warning before import that content may be removed
0
9,056
12,130,463,950
IssuesEvent
2020-04-23 01:39:00
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
remove gcp-devrel-py-tools from logging/cloud-client/requirements-test.txt
priority: p2 remove-gcp-devrel-py-tools type: process
remove gcp-devrel-py-tools from logging/cloud-client/requirements-test.txt
1.0
remove gcp-devrel-py-tools from logging/cloud-client/requirements-test.txt - remove gcp-devrel-py-tools from logging/cloud-client/requirements-test.txt
process
remove gcp devrel py tools from logging cloud client requirements test txt remove gcp devrel py tools from logging cloud client requirements test txt
1
7,402
10,523,141,111
IssuesEvent
2019-09-30 10:17:01
didi/mpx
https://api.github.com/repos/didi/mpx
closed
$event parameter passing bug
processing
```html <view bindtap="tapAction(item)"> </view> ``` ```js createPage({ data: { item: '$event' }, methods: { tapAction (value) { //bug: the event object is logged here // the string $event should be logged instead console.log(value) } } }) ```
1.0
$event parameter passing bug - ```html <view bindtap="tapAction(item)"> </view> ``` ```js createPage({ data: { item: '$event' }, methods: { tapAction (value) { //bug: the event object is logged here // the string $event should be logged instead console.log(value) } } }) ```
process
event parameter passing bug html js createpage data item event methods tapaction value bug the event object is logged here the string event should be logged instead console log value
1
11,518
14,399,729,137
IssuesEvent
2020-12-03 11:20:25
syncfusion/ej2-react-ui-components
https://api.github.com/repos/syncfusion/ej2-react-ui-components
closed
Bugs on document-editor RTL mode
bug word-processor
I test your DEMO : https://ej2.syncfusion.com/react/demos/#/material/document-editor/default and I see some bugs in RTL mode and other right to left langs, for reproduce just set text align right to left and type some words in persian or arabic then select one word and change font or font color or size, then continue typing, now try to delete characters one by one you can see all this row words changed and even CTRL+Z not work correctly, by the other means right to left typing is ok if you don't change any properties like fontName, fontSize, fontColor, fontBackColor, Bold, Italic and ... if you try to change then you can't type normally or use CTRL+Z or even HOME and END key on your row, regards Babak Yaghoobi
1.0
Bugs on document-editor RTL mode - I test your DEMO : https://ej2.syncfusion.com/react/demos/#/material/document-editor/default and I see some bugs in RTL mode and other right to left langs, for reproduce just set text align right to left and type some words in persian or arabic then select one word and change font or font color or size, then continue typing, now try to delete characters one by one you can see all this row words changed and even CTRL+Z not work correctly, by the other means right to left typing is ok if you don't change any properties like fontName, fontSize, fontColor, fontBackColor, Bold, Italic and ... if you try to change then you can't type normally or use CTRL+Z or even HOME and END key on your row, regards Babak Yaghoobi
process
bugs on document editor rtl mode i test your demo and i see some bugs in rtl mode and other right to left langs for reproduce just set text align right to left and type some words in persian or arabic then select one word and change font or font color or size then continue typing now try to delete characters one by one you can see all this row words changed and even ctrl z not work correctly by the other means right to left typing is ok if you don t change any properties like fontname fontsize fontcolor fontbackcolor bold italic and if you try to change then you can t type normally or use ctrl z or even home and end key on your row regards babak yaghoobi
1
241,830
26,256,946,283
IssuesEvent
2023-01-06 02:10:25
Baneeishaque/Firestore_Demo_Ruby
https://api.github.com/repos/Baneeishaque/Firestore_Demo_Ruby
opened
CVE-2021-32740 (High) detected in addressable-2.7.0.gem
security vulnerability
## CVE-2021-32740 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>addressable-2.7.0.gem</b></p></summary> <p>Addressable is an alternative implementation to the URI implementation that is part of Ruby's standard library. It is flexible, offers heuristic parsing, and additionally provides extensive support for IRIs and URI templates. </p> <p>Library home page: <a href="https://rubygems.org/gems/addressable-2.7.0.gem">https://rubygems.org/gems/addressable-2.7.0.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /var/lib/gems/2.5.0/cache/addressable-2.7.0.gem</p> <p> Dependency Hierarchy: - google-cloud-firestore-1.4.1.gem (Root Library) - google-gax-1.8.1.gem - googleauth-0.12.0.gem - signet-0.14.0.gem - :x: **addressable-2.7.0.gem** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Addressable is an alternative implementation to the URI implementation that is part of Ruby's standard library. An uncontrolled resource consumption vulnerability exists after version 2.3.0 through version 2.7.0. Within the URI template implementation in Addressable, a maliciously crafted template may result in uncontrolled resource consumption, leading to denial of service when matched against a URI. In typical usage, templates would not normally be read from untrusted user input, but nonetheless, no previous security advisory for Addressable has cautioned against doing this. Users of the parsing capabilities in Addressable but not the URI template capabilities are unaffected. The vulnerability is patched in version 2.8.0. As a workaround, only create Template objects from trusted sources that have been validated not to produce catastrophic backtracking. 
<p>Publish Date: 2021-07-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-32740>CVE-2021-32740</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sporkmonger/addressable/security/advisories/GHSA-jxhc-q857-3j6g">https://github.com/sporkmonger/addressable/security/advisories/GHSA-jxhc-q857-3j6g</a></p> <p>Release Date: 2021-07-06</p> <p>Fix Resolution: addressable - 2.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-32740 (High) detected in addressable-2.7.0.gem - ## CVE-2021-32740 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>addressable-2.7.0.gem</b></p></summary> <p>Addressable is an alternative implementation to the URI implementation that is part of Ruby's standard library. It is flexible, offers heuristic parsing, and additionally provides extensive support for IRIs and URI templates. </p> <p>Library home page: <a href="https://rubygems.org/gems/addressable-2.7.0.gem">https://rubygems.org/gems/addressable-2.7.0.gem</a></p> <p>Path to dependency file: /Gemfile.lock</p> <p>Path to vulnerable library: /var/lib/gems/2.5.0/cache/addressable-2.7.0.gem</p> <p> Dependency Hierarchy: - google-cloud-firestore-1.4.1.gem (Root Library) - google-gax-1.8.1.gem - googleauth-0.12.0.gem - signet-0.14.0.gem - :x: **addressable-2.7.0.gem** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Addressable is an alternative implementation to the URI implementation that is part of Ruby's standard library. An uncontrolled resource consumption vulnerability exists after version 2.3.0 through version 2.7.0. Within the URI template implementation in Addressable, a maliciously crafted template may result in uncontrolled resource consumption, leading to denial of service when matched against a URI. In typical usage, templates would not normally be read from untrusted user input, but nonetheless, no previous security advisory for Addressable has cautioned against doing this. Users of the parsing capabilities in Addressable but not the URI template capabilities are unaffected. The vulnerability is patched in version 2.8.0. 
As a workaround, only create Template objects from trusted sources that have been validated not to produce catastrophic backtracking. <p>Publish Date: 2021-07-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-32740>CVE-2021-32740</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sporkmonger/addressable/security/advisories/GHSA-jxhc-q857-3j6g">https://github.com/sporkmonger/addressable/security/advisories/GHSA-jxhc-q857-3j6g</a></p> <p>Release Date: 2021-07-06</p> <p>Fix Resolution: addressable - 2.8.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in addressable gem cve high severity vulnerability vulnerable library addressable gem addressable is an alternative implementation to the uri implementation that is part of ruby s standard library it is flexible offers heuristic parsing and additionally provides extensive support for iris and uri templates library home page a href path to dependency file gemfile lock path to vulnerable library var lib gems cache addressable gem dependency hierarchy google cloud firestore gem root library google gax gem googleauth gem signet gem x addressable gem vulnerable library vulnerability details addressable is an alternative implementation to the uri implementation that is part of ruby s standard library an uncontrolled resource consumption vulnerability exists after version through version within the uri template implementation in addressable a maliciously crafted template may result in uncontrolled resource consumption leading to denial of service when matched against a uri in typical usage templates would not normally be read from untrusted user input but nonetheless no previous security advisory for addressable has cautioned against doing this users of the parsing capabilities in addressable but not the uri template capabilities are unaffected the vulnerability is patched in version as a workaround only create template objects from trusted sources that have been validated not to produce catastrophic backtracking publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution addressable step up your open source security game with mend
0
520,416
15,085,929,171
IssuesEvent
2021-02-05 19:28:46
HEPData/hepdata
https://api.github.com/repos/HEPData/hepdata
opened
submission: check behaviour of perform_upload_action
complexity: low priority: medium type: bug
Today there were some problems with the web pods of the Kubernetes deployment. Uploading a small test file (few KB) to the Sandbox gave a timeout error after five minutes with this message displayed: https://github.com/HEPData/hepdata/blob/fcdc4d5635b9f85754b0fcea57428202b3211b4f/hepdata/modules/records/static/js/hepdata_record_js.js#L87-L89 However, regardless of the error, it seems the code proceeded to the `/consume` endpoint, resulting in a record getting stuck in a "processing" state. I don't see how this could arise, since the line: https://github.com/HEPData/hepdata/blob/fcdc4d5635b9f85754b0fcea57428202b3211b4f/hepdata/modules/records/static/js/hepdata_record_js.js#L81 which executes the `/consume` endpoint should only be run for `success` but not in the case of an `error`. This might be some weird problem specific to the production instance, but it could be worth checking the logic of `perform_upload_action` in the case of a timeout error.
1.0
submission: check behaviour of perform_upload_action - Today there were some problems with the web pods of the Kubernetes deployment. Uploading a small test file (few KB) to the Sandbox gave a timeout error after five minutes with this message displayed: https://github.com/HEPData/hepdata/blob/fcdc4d5635b9f85754b0fcea57428202b3211b4f/hepdata/modules/records/static/js/hepdata_record_js.js#L87-L89 However, regardless of the error, it seems the code proceeded to the `/consume` endpoint, resulting in a record getting stuck in a "processing" state. I don't see how this could arise, since the line: https://github.com/HEPData/hepdata/blob/fcdc4d5635b9f85754b0fcea57428202b3211b4f/hepdata/modules/records/static/js/hepdata_record_js.js#L81 which executes the `/consume` endpoint should only be run for `success` but not in the case of an `error`. This might be some weird problem specific to the production instance, but it could be worth checking the logic of `perform_upload_action` in the case of a timeout error.
non_process
submission check behaviour of perform upload action today there were some problems with the web pods of the kubernetes deployment uploading a small test file few kb to the sandbox gave a timeout error after five minutes with this message displayed however regardless of the error it seems the code proceeded to the consume endpoint resulting in a record getting stuck in a processing state i don t see how this could arise since the line which executes the consume endpoint should only be run for success but not in the case of an error this might be some weird problem specific to the production instance but it could be worth checking the logic of perform upload action in the case of a timeout error
0
18,470
24,550,399,309
IssuesEvent
2022-10-12 12:10:23
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Documentation issue: Installing Bazel on Ubuntu
P4 more data needed type: support / not a bug (process) team-OSS
Documentation URL: https://docs.bazel.build/versions/4.0.0/install-ubuntu.html I had to install tensorflow, but it had error. ubuntu version: ubuntu-18.04-desktop-amd64.iso Uncompressing..error [./bazel-2.0.0-installer-linux-x86_64.sh]: missing 1232 bytes in zipfile (attempting to process anyway) error: invalid zip file with overlapped components (possible zip bomb)
1.0
Documentation issue: Installing Bazel on Ubuntu - Documentation URL: https://docs.bazel.build/versions/4.0.0/install-ubuntu.html I had to install tensorflow, but it had error. ubuntu version: ubuntu-18.04-desktop-amd64.iso Uncompressing..error [./bazel-2.0.0-installer-linux-x86_64.sh]: missing 1232 bytes in zipfile (attempting to process anyway) error: invalid zip file with overlapped components (possible zip bomb)
process
documentation issue installing bazel on ubuntu documentation url i had to install tensorflow but it had error ubuntu version ubuntu desktop iso uncompressing error missing bytes in zipfile attempting to process anyway error invalid zip file with overlapped components possible zip bomb
1
66,603
12,805,896,507
IssuesEvent
2020-07-03 08:26:09
Regalis11/Barotrauma
https://api.github.com/repos/Regalis11/Barotrauma
closed
Audio does not work after OpenAL changes
Bug Code High prio
- [x] I have searched the issue tracker to check if the issue has already been reported. **Description** We've been getting some reports about game audio not working at all in the latest build (v0.10.0.0 RC 1) > "ALC device creation failed to many times!" > I have this bug only in last unstable version. > In stable or previous unstable version sound work correctly. **Steps To Reproduce** Unknown **Version** v0.10.0.0 RC 1
1.0
Audio does not work after OpenAL changes - - [x] I have searched the issue tracker to check if the issue has already been reported. **Description** We've been getting some reports about game audio not working at all in the latest build (v0.10.0.0 RC 1) > "ALC device creation failed to many times!" > I have this bug only in last unstable version. > In stable or previous unstable version sound work correctly. **Steps To Reproduce** Unknown **Version** v0.10.0.0 RC 1
non_process
audio does not work after openal changes i have searched the issue tracker to check if the issue has already been reported description we ve been getting some reports about game audio not working at all in the latest build rc alc device creation failed to many times i have this bug only in last unstable version in stable or previous unstable version sound work correctly steps to reproduce unknown version rc
0
5,308
8,125,245,812
IssuesEvent
2018-08-16 20:17:29
MetaMask/metamask-extension
https://api.github.com/repos/MetaMask/metamask-extension
closed
Cleanup state - old blacklist persisted in state
L09-process P3-soon T00-bug
should have had migration to remove the old blacklist as reported by core dev ultrasaur frankie
1.0
Cleanup state - old blacklist persisted in state - should have had migration to remove the old blacklist as reported by core dev ultrasaur frankie
process
cleanup state old blacklist persisted in state should have had migration to remove the old blacklist as reported by core dev ultrasaur frankie
1
20,938
27,796,647,412
IssuesEvent
2023-03-17 13:03:06
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
Status of Bazel 7.0.0-pre.20230306.4
P1 type: process release team-OSS
- Expected release date: 2023-03-17 Task list: - [x] Pick release baseline: [0ce17480](https://github.com/bazelbuild/bazel/commit/0ce17480390cdced2df8d59249561613e81f446f) with cherrypicks [28dc0f93](https://github.com/bazelbuild/bazel/commit/28dc0f93bf725d35124d3c17e8aaa654cfa3b498) [e79de51b](https://github.com/bazelbuild/bazel/commit/e79de51b91263b33ced77f0a749a1856972510d1) [e0cdaced](https://github.com/bazelbuild/bazel/commit/e0cdaced03750823021b8b1f5b82a71170d67642) - [ ] Create release candidate: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230306.4rc1/index.html - [ ] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds?branch=release-7.0.0-pre.20230306.4rc1 - [ ] Push the release: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230306.4/index.html - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Status of Bazel 7.0.0-pre.20230306.4 - - Expected release date: 2023-03-17 Task list: - [x] Pick release baseline: [0ce17480](https://github.com/bazelbuild/bazel/commit/0ce17480390cdced2df8d59249561613e81f446f) with cherrypicks [28dc0f93](https://github.com/bazelbuild/bazel/commit/28dc0f93bf725d35124d3c17e8aaa654cfa3b498) [e79de51b](https://github.com/bazelbuild/bazel/commit/e79de51b91263b33ced77f0a749a1856972510d1) [e0cdaced](https://github.com/bazelbuild/bazel/commit/e0cdaced03750823021b8b1f5b82a71170d67642) - [ ] Create release candidate: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230306.4rc1/index.html - [ ] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds?branch=release-7.0.0-pre.20230306.4rc1 - [ ] Push the release: https://releases.bazel.build/7.0.0/rolling/7.0.0-pre.20230306.4/index.html - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
status of bazel pre expected release date task list pick release baseline with cherrypicks create release candidate post submit push the release update the
1
352,456
25,068,090,618
IssuesEvent
2022-11-07 09:56:49
OpenPerceptionX/OpenLane
https://api.github.com/repos/OpenPerceptionX/OpenLane
closed
Bugs in v1.1
bug documentation
Hi @ChonghaoSima @zihanding819 , I found some bugs in v1.1 1. https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L182 `both_invisible_indices` are counted in `num_match_mat`, but https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L236 the denominator only counts visible points. 2. https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L203 those `-1`s should be removed bofore computing avg. > I got this with your realeased persformer ckpt ===> Evaluation laneline F-measure: 0.78679105 ===> Evaluation laneline Recall: 0.69620109 ===> Evaluation laneline Precision: 0.90448399 ===> Evaluation laneline Category Accuracy: 0.87950062 ===> Evaluation laneline x error (close): 0.18658032 m ===> Evaluation laneline x error (far): -0.22041094 m ===> Evaluation laneline z error (close): -0.02688632 m ===> Evaluation laneline z error (far): -0.34050375 m 3. `pred_lanes` should be converted to ndarray before https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L95 4. #18
1.0
Bugs in v1.1 - Hi @ChonghaoSima @zihanding819 , I found some bugs in v1.1 1. https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L182 `both_invisible_indices` are counted in `num_match_mat`, but https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L236 the denominator only counts visible points. 2. https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L203 those `-1`s should be removed bofore computing avg. > I got this with your realeased persformer ckpt ===> Evaluation laneline F-measure: 0.78679105 ===> Evaluation laneline Recall: 0.69620109 ===> Evaluation laneline Precision: 0.90448399 ===> Evaluation laneline Category Accuracy: 0.87950062 ===> Evaluation laneline x error (close): 0.18658032 m ===> Evaluation laneline x error (far): -0.22041094 m ===> Evaluation laneline z error (close): -0.02688632 m ===> Evaluation laneline z error (far): -0.34050375 m 3. `pred_lanes` should be converted to ndarray before https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L95 4. #18
non_process
bugs in hi chonghaosima i found some bugs in both invisible indices are counted in num match mat but the denominator only counts visible points those s should be removed bofore computing avg i got this with your realeased persformer ckpt evaluation laneline f measure evaluation laneline recall evaluation laneline precision evaluation laneline category accuracy evaluation laneline x error close m evaluation laneline x error far m evaluation laneline z error close m evaluation laneline z error far m pred lanes should be converted to ndarray before
0
20,328
29,820,790,717
IssuesEvent
2023-06-17 02:46:14
MetaMask/metamask-mobile
https://api.github.com/repos/MetaMask/metamask-mobile
closed
Metamask mobile not working on Aave
type-bug dapp-compatibility community stale
<!-- BEFORE SUBMITTING: 1) Please search to make sure this issue has not been opened already 2) If this is a implementation question or trouble with your personal project, please post on StackExchange. This will get your question answered more quickly and make it easier for other devs to find the answer in the future. --> **Bug and repro** _We have deployed a testing branch so you are able to debug properly if you want, at least on my side its giving an error when trying to make the web3 connection to show a button that would trigger a transaction. You can access the testing branch with virtual console to debug at https://dlp-300-vconsole.testing.aave.com/ Its giving “Network error” is it possible that by default Metamask is blocking the proper connection with the required API that its required to work? Im attaching here screenshots:_ ![image002](https://user-images.githubusercontent.com/2995401/80110604-25d0c080-857f-11ea-8eb3-c00b77722f0a.jpg) ![image003](https://user-images.githubusercontent.com/2995401/80110620-29fcde00-857f-11ea-800e-d4ee5bce7fc2.jpg) _For example in trust wallet and coinbase wallet there is no error and it works properly, similar way in other browser wallets and even in metamask desktop works well:_ ![image004](https://user-images.githubusercontent.com/2995401/80110579-1f424900-857f-11ea-8a43-aae4cf283671.jpg) ![image005](https://user-images.githubusercontent.com/2995401/80110568-1baec200-857f-11ea-971a-a67354ec4892.jpg)
True
Metamask mobile not working on Aave - <!-- BEFORE SUBMITTING: 1) Please search to make sure this issue has not been opened already 2) If this is a implementation question or trouble with your personal project, please post on StackExchange. This will get your question answered more quickly and make it easier for other devs to find the answer in the future. --> **Bug and repro** _We have deployed a testing branch so you are able to debug properly if you want, at least on my side its giving an error when trying to make the web3 connection to show a button that would trigger a transaction. You can access the testing branch with virtual console to debug at https://dlp-300-vconsole.testing.aave.com/ Its giving “Network error” is it possible that by default Metamask is blocking the proper connection with the required API that its required to work? Im attaching here screenshots:_ ![image002](https://user-images.githubusercontent.com/2995401/80110604-25d0c080-857f-11ea-8eb3-c00b77722f0a.jpg) ![image003](https://user-images.githubusercontent.com/2995401/80110620-29fcde00-857f-11ea-800e-d4ee5bce7fc2.jpg) _For example in trust wallet and coinbase wallet there is no error and it works properly, similar way in other browser wallets and even in metamask desktop works well:_ ![image004](https://user-images.githubusercontent.com/2995401/80110579-1f424900-857f-11ea-8a43-aae4cf283671.jpg) ![image005](https://user-images.githubusercontent.com/2995401/80110568-1baec200-857f-11ea-971a-a67354ec4892.jpg)
non_process
metamask mobile not working on aave before submitting please search to make sure this issue has not been opened already if this is a implementation question or trouble with your personal project please post on stackexchange this will get your question answered more quickly and make it easier for other devs to find the answer in the future bug and repro we have deployed a testing branch so you are able to debug properly if you want at least on my side its giving an error when trying to make the connection to show a button that would trigger a transaction you can access the testing branch with virtual console to debug at its giving “network error” is it possible that by default metamask is blocking the proper connection with the required api that its required to work im attaching here screenshots for example in trust wallet and coinbase wallet there is no error and it works properly similar way in other browser wallets and even in metamask desktop works well
0
16,181
20,625,925,600
IssuesEvent
2022-03-07 22:32:07
zotero/zotero
https://api.github.com/repos/zotero/zotero
opened
Show parent item title in search results in Add Note window
Word Processor Integration
Along with https://github.com/zotero/zotero/issues/2388, for child notes it would be helpful to show the parent item title.
1.0
Show parent item title in search results in Add Note window - Along with https://github.com/zotero/zotero/issues/2388, for child notes it would be helpful to show the parent item title.
process
show parent item title in search results in add note window along with for child notes it would be helpful to show the parent item title
1
344,163
24,800,220,713
IssuesEvent
2022-10-24 20:58:25
cdklabs/cdk-enterprise-iac
https://api.github.com/repos/cdklabs/cdk-enterprise-iac
closed
doc: Add examples of Aspects and Constructs in README.md
documentation
### Describe your issue? On some enterprise open source mirrors, the `README.md` is available but source code browsing is not. Need to either write out examples in `README.md` or automatically copy `API.md` to the end of `README.md`
1.0
doc: Add examples of Aspects and Constructs in README.md - ### Describe your issue? On some enterprise open source mirrors, the `README.md` is available but source code browsing is not. Need to either write out examples in `README.md` or automatically copy `API.md` to the end of `README.md`
non_process
doc add examples of aspects and constructs in readme md describe your issue on some enterprise open source mirrors the readme md is available but source code browsing is not need to either write out examples in readme md or automatically copy api md to the end of readme md
0
9,438
12,425,090,686
IssuesEvent
2020-05-24 14:48:06
code4romania/expert-consultation-api
https://api.github.com/repos/code4romania/expert-consultation-api
closed
[Documents] Consolidate the comments added for a document breakdown unit
document processing documents java spring
As an admin of the Legal Consultation platform I want to be able to consolidate the comments received for a legal document. Consolidating the comments means approving or rejecting the comments. For approved comments, the comments are reflected in the changes for the text of the document breakdown unit. ![RU - lege - integral - cu comentarii - open](https://user-images.githubusercontent.com/15039873/58744789-3b377c00-83fc-11e9-8ce2-c36ea75c6c9a.png)
1.0
[Documents] Consolidate the comments added for a document breakdown unit - As an admin of the Legal Consultation platform I want to be able to consolidate the comments received for a legal document. Consolidating the comments means approving or rejecting the comments. For approved comments, the comments are reflected in the changes for the text of the document breakdown unit. ![RU - lege - integral - cu comentarii - open](https://user-images.githubusercontent.com/15039873/58744789-3b377c00-83fc-11e9-8ce2-c36ea75c6c9a.png)
process
consolidate the comments added for a document breakdown unit as an admin of the legal consultation platform i want to be able to consolidate the comments received for a legal document consolidating the comments means approving or rejecting the comments for approved comments the comments are reflected in the changes for the text of the document breakdown unit
1
4,270
7,189,509,931
IssuesEvent
2018-02-02 14:17:35
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
getBloom: Report non-zero blooms only when 'verbose' is on
status-inprocess tools-getBloom type-enhancement
Also - there are a number of unimplemented tests in the CMakeLists.txt file plus various hidden commands epecially --check which should be enabled.
1.0
getBloom: Report non-zero blooms only when 'verbose' is on - Also - there are a number of unimplemented tests in the CMakeLists.txt file plus various hidden commands epecially --check which should be enabled.
process
getbloom report non zero blooms only when verbose is on also there are a number of unimplemented tests in the cmakelists txt file plus various hidden commands epecially check which should be enabled
1
10,923
13,724,973,877
IssuesEvent
2020-10-03 16:27:57
Gregory94/LaanLab-SATAY-DataAnalysis
https://api.github.com/repos/Gregory94/LaanLab-SATAY-DataAnalysis
closed
Convert Matlab code provided by the Kornmann lab for SATAY analysis to Python.
data processing
Using Python makes it easier to integrate the code in the rest of the workflow. Also, this makes documentation easier using for example Jupyter Notebooks or Jupyter Books.
1.0
Convert Matlab code provided by the Kornmann lab for SATAY analysis to Python. - Using Python makes it easier to integrate the code in the rest of the workflow. Also, this makes documentation easier using for example Jupyter Notebooks or Jupyter Books.
process
convert matlab code provided by the kornmann lab for satay analysis to python using python makes it easier to integrate the code in the rest of the workflow also this makes documentation easier using for example jupyter notebooks or jupyter books
1
10,077
13,044,161,964
IssuesEvent
2020-07-29 03:47:27
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `UnixTimestampCurrent` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `UnixTimestampCurrent` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `UnixTimestampCurrent` from TiDB - ## Description Port the scalar function `UnixTimestampCurrent` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function unixtimestampcurrent from tidb description port the scalar function unixtimestampcurrent from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
7,936
11,135,194,665
IssuesEvent
2019-12-20 13:49:04
Open-EO/openeo-processes
https://api.github.com/repos/Open-EO/openeo-processes
opened
loop over array (array_each or apply for arrays)
new process
We have some cases where we may want to iterate over an array so I'm thinking about either defining a function array_each or allow arrays as input data for apply. array_each (a bit outdated definition): ```json { "id": "array_each", "summary": "Applies a unary process to each array element", "description": "Applies a **unary** process which takes a single value such as `abs` or `sqrt` to each value in the array.", "categories": [ "arrays" ], "parameter_order": ["data", "process"], "parameters": { "data": { "description": "An array.", "schema": { "type": "array", "items": { "description": "Any data type is allowed." } }, "required": true }, "process": { "description": "A process (callback) to be applied on each value. The specified process must be unary meaning that it must work on a single value.", "schema": { "type": "object", "format": "callback", "parameters": { "x": { "description": "A value of any data type could be passed." } } }, "required": true } }, "returns": { "description": "An array with the newly computed values. The number of elements are the same as for the original array.", "schema": { "type": "array", "items": { "description": "Any data type is allowed." } } } } ```
1.0
loop over array (array_each or apply for arrays) - We have some cases where we may want to iterate over an array so I'm thinking about either defining a function array_each or allow arrays as input data for apply. array_each (a bit outdated definition): ```json { "id": "array_each", "summary": "Applies a unary process to each array element", "description": "Applies a **unary** process which takes a single value such as `abs` or `sqrt` to each value in the array.", "categories": [ "arrays" ], "parameter_order": ["data", "process"], "parameters": { "data": { "description": "An array.", "schema": { "type": "array", "items": { "description": "Any data type is allowed." } }, "required": true }, "process": { "description": "A process (callback) to be applied on each value. The specified process must be unary meaning that it must work on a single value.", "schema": { "type": "object", "format": "callback", "parameters": { "x": { "description": "A value of any data type could be passed." } } }, "required": true } }, "returns": { "description": "An array with the newly computed values. The number of elements are the same as for the original array.", "schema": { "type": "array", "items": { "description": "Any data type is allowed." } } } } ```
process
loop over array array each or apply for arrays we have some cases where we may want to iterate over an array so i m thinking about either defining a function array each or allow arrays as input data for apply array each a bit outdated definition json id array each summary applies a unary process to each array element description applies a unary process which takes a single value such as abs or sqrt to each value in the array categories arrays parameter order parameters data description an array schema type array items description any data type is allowed required true process description a process callback to be applied on each value the specified process must be unary meaning that it must work on a single value schema type object format callback parameters x description a value of any data type could be passed required true returns description an array with the newly computed values the number of elements are the same as for the original array schema type array items description any data type is allowed
1
5,074
7,869,853,857
IssuesEvent
2018-06-24 18:47:36
pwittchen/ReactiveNetwork
https://api.github.com/repos/pwittchen/ReactiveNetwork
closed
Release 1.0.0
RxJava2.x release process
**Initial release notes**: - fixed docs in https://github.com/pwittchen/ReactiveNetwork/commit/76ab2b23210207d83250da5d8fd0cd6e275e3f08 after reporting problem in #276 (returning false-positive connectivity results in one edge-case) - updated project dependencies - PR #269, commit 02449af2f38ac463e1aa8824beee46ea823fd83b - refactored `ReactiveNetwork` class with Builder pattern - PR #279 - removed the following methods from the `ReactiveNetwork` class: ```java Observable<Boolean> observeInternetConnectivity(int interval, String host, int port, int timeout) Observable<Boolean> observeInternetConnectivity(int initialIntervalInMs, int intervalInMs, String host, int port, int timeout) Observable<Boolean> observeInternetConnectivity(final int initialIntervalInMs, final int intervalInMs, final String host, final int port, final int timeoutInMs, final ErrorHandler errorHandler) Observable<Boolean> observeInternetConnectivity(final InternetObservingStrategy strategy) Observable<Boolean> observeInternetConnectivity(final InternetObservingStrategy strategy, final String host) Single<Boolean> checkInternetConnectivity(InternetObservingStrategy strategy) Single<Boolean> checkInternetConnectivity(String host,int port, int timeoutInMs) Single<Boolean> checkInternetConnectivity(String host, int port, int timeoutInMs, ErrorHandler errorHandler) Single<Boolean> checkInternetConnectivity(final InternetObservingStrategy strategy, final String host) ``` - added `InternetObservingSettings` class - added the following methods to the `ReactiveNetwork` class: ```java Observable<Boolean> observeInternetConnectivity(InternetObservingSettings settings) Single<Boolean> checkInternetConnectivity(InternetObservingSettings settings) ``` **Things to do**: - [x] update JavaDoc on `gh-pages` - [x] update documentation on `gh-pages` - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` - [x] update docs on gh-pages after updating `README.md` - [x] create new GitHub release
1.0
Release 1.0.0 - **Initial release notes**: - fixed docs in https://github.com/pwittchen/ReactiveNetwork/commit/76ab2b23210207d83250da5d8fd0cd6e275e3f08 after reporting problem in #276 (returning false-positive connectivity results in one edge-case) - updated project dependencies - PR #269, commit 02449af2f38ac463e1aa8824beee46ea823fd83b - refactored `ReactiveNetwork` class with Builder pattern - PR #279 - removed the following methods from the `ReactiveNetwork` class: ```java Observable<Boolean> observeInternetConnectivity(int interval, String host, int port, int timeout) Observable<Boolean> observeInternetConnectivity(int initialIntervalInMs, int intervalInMs, String host, int port, int timeout) Observable<Boolean> observeInternetConnectivity(final int initialIntervalInMs, final int intervalInMs, final String host, final int port, final int timeoutInMs, final ErrorHandler errorHandler) Observable<Boolean> observeInternetConnectivity(final InternetObservingStrategy strategy) Observable<Boolean> observeInternetConnectivity(final InternetObservingStrategy strategy, final String host) Single<Boolean> checkInternetConnectivity(InternetObservingStrategy strategy) Single<Boolean> checkInternetConnectivity(String host,int port, int timeoutInMs) Single<Boolean> checkInternetConnectivity(String host, int port, int timeoutInMs, ErrorHandler errorHandler) Single<Boolean> checkInternetConnectivity(final InternetObservingStrategy strategy, final String host) ``` - added `InternetObservingSettings` class - added the following methods to the `ReactiveNetwork` class: ```java Observable<Boolean> observeInternetConnectivity(InternetObservingSettings settings) Single<Boolean> checkInternetConnectivity(InternetObservingSettings settings) ``` **Things to do**: - [x] update JavaDoc on `gh-pages` - [x] update documentation on `gh-pages` - [x] bump library version - [x] upload archives to Maven Central - [x] close and release artifact on Maven Central - [x] update `CHANGELOG.md` after Maven Sync - [x] bump library version in `README.md` - [x] update docs on gh-pages after updating `README.md` - [x] create new GitHub release
process
release initial release notes fixed docs in after reporting problem in returning false positive connectivity results in one edge case updated project dependencies pr commit refactored reactivenetwork class with builder pattern pr removed the following methods from the reactivenetwork class java observable observeinternetconnectivity int interval string host int port int timeout observable observeinternetconnectivity int initialintervalinms int intervalinms string host int port int timeout observable observeinternetconnectivity final int initialintervalinms final int intervalinms final string host final int port final int timeoutinms final errorhandler errorhandler observable observeinternetconnectivity final internetobservingstrategy strategy observable observeinternetconnectivity final internetobservingstrategy strategy final string host single checkinternetconnectivity internetobservingstrategy strategy single checkinternetconnectivity string host int port int timeoutinms single checkinternetconnectivity string host int port int timeoutinms errorhandler errorhandler single checkinternetconnectivity final internetobservingstrategy strategy final string host added internetobservingsettings class added the following methods to the reactivenetwork class java observable observeinternetconnectivity internetobservingsettings settings single checkinternetconnectivity internetobservingsettings settings things to do update javadoc on gh pages update documentation on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md update docs on gh pages after updating readme md create new github release
1
94,100
27,113,438,309
IssuesEvent
2023-02-15 16:49:20
opensafely-core/databuilder
https://api.github.com/repos/opensafely-core/databuilder
closed
Improve efficiency of generative tests by making them series-type-aware
databuilder-1
- [x] Agree design - [x] Extend new approach to all node types we currently support - [ ] Experiment to see how long it now takes to generate all nodes types; can we do this as part of the standard build? - [ ] If the answer to the last question is no, can we instead reduce the number of examples in the standard build to speed up the tests?
1.0
Improve efficiency of generative tests by making them series-type-aware - - [x] Agree design - [x] Extend new approach to all node types we currently support - [ ] Experiment to see how long it now takes to generate all nodes types; can we do this as part of the standard build? - [ ] If the answer to the last question is no, can we instead reduce the number of examples in the standard build to speed up the tests?
non_process
improve efficiency of generative tests by making them series type aware agree design extend new approach to all node types we currently support experiment to see how long it now takes to generate all nodes types can we do this as part of the standard build if the answer to the last question is no can we instead reduce the number of examples in the standard build to speed up the tests
0
8,098
11,274,424,185
IssuesEvent
2020-01-14 18:32:15
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
[Processing][Cartography]Set layer style
Feature Feature Request Feedback Processing
I am trying to apply a QML file from a web link (https://raw.githubusercontent.com/anitagraser/QGIS-resources/master/qgis2/osm_spatialite/osm_spatialite_googlemaps_multipolygon.qml) to my model. But **Set layer style** only accepts links from local resources.
1.0
[Processing][Cartography]Set layer style - I am trying to apply a QML file from a web link (https://raw.githubusercontent.com/anitagraser/QGIS-resources/master/qgis2/osm_spatialite/osm_spatialite_googlemaps_multipolygon.qml) to my model. But **Set layer style** only accepts links from local resources.
process
set layer style i am trying to apply a qml file from a web link to my model but set layer style only accepts links from local resources
1
288,308
31,861,262,813
IssuesEvent
2023-09-15 11:05:35
nidhi7598/linux-v4.19.72_CVE-2022-3564
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
opened
CVE-2020-11608 (Medium) detected in linuxlinux-4.19.294
Mend: dependency security vulnerability
## CVE-2020-11608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/gspca/ov519.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/gspca/ov519.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.6.1. drivers/media/usb/gspca/ov519.c allows NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs when there are zero endpoints, aka CID-998912346c0d. <p>Publish Date: 2020-04-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11608>CVE-2020-11608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-11608">https://nvd.nist.gov/vuln/detail/CVE-2020-11608</a></p> <p>Release Date: 2020-06-13</p> <p>Fix Resolution: linux - v5.7-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-11608 (Medium) detected in linuxlinux-4.19.294 - ## CVE-2020-11608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/gspca/ov519.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/media/usb/gspca/ov519.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel before 5.6.1. drivers/media/usb/gspca/ov519.c allows NULL pointer dereferences in ov511_mode_init_regs and ov518_mode_init_regs when there are zero endpoints, aka CID-998912346c0d. <p>Publish Date: 2020-04-07 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-11608>CVE-2020-11608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-11608">https://nvd.nist.gov/vuln/detail/CVE-2020-11608</a></p> <p>Release Date: 2020-06-13</p> <p>Fix Resolution: linux - v5.7-rc1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers media usb gspca c drivers media usb gspca c vulnerability details an issue was discovered in the linux kernel before drivers media usb gspca c allows null pointer dereferences in mode init regs and mode init regs when there are zero endpoints aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux step up your open source security game with mend
0
17,692
23,538,286,522
IssuesEvent
2022-08-20 01:50:06
nghi-huynh/HPA_HuBMAP_Kaggle_competition
https://api.github.com/repos/nghi-huynh/HPA_HuBMAP_Kaggle_competition
closed
Extra data preparation + Pre-trained models
data preprocessing data preparation
- [x] Prepare tiles (256x256) based on augmented dataset - [x] Update + organize pretrained models + dataset
1.0
Extra data preparation + Pre-trained models - - [x] Prepare tiles (256x256) based on augmented dataset - [x] Update + organize pretrained models + dataset
process
extra data preparation pre trained models prepare tiles based on augmented dataset update organize pretrained models dataset
1
458,540
13,176,973,793
IssuesEvent
2020-08-12 06:21:00
DS-13-Dev-Team/DS13
https://api.github.com/repos/DS-13-Dev-Team/DS13
closed
Runtime in hud.dm, line 21: wrong type of value for list
Cannot Reproduce Critical Priority High Priority
This is now known to be caused by null entries finding their way into the players' hud_list https://i.imgur.com/3oGH0bz.png Not entirely clear how. The player it happened to was a unitologist in security, they tried to use antag hud, and they also had a sec rig with its own huds
2.0
Runtime in hud.dm, line 21: wrong type of value for list - This is now known to be caused by null entries finding their way into the players' hud_list https://i.imgur.com/3oGH0bz.png Not entirely clear how. The player it happened to was a unitologist in security, they tried to use antag hud, and they also had a sec rig with its own huds
non_process
runtime in hud dm line wrong type of value for list this is now known to be caused by null entries finding their way into the players hud list not entirely clear how the player it happened to was a unitologist in security they tried to use antag hud and they also had a sec rig with its own huds
0
76,154
14,581,254,764
IssuesEvent
2020-12-18 10:26:25
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] Request: Change of Columns handling in Backend Forms
No Code Attached Yet
based on: https://github.com/joomla/joomla-cms/pull/29706 So fresh wind here: What if: - The columns will been used when a dev groups its fields additionaly by fieldsets: < fieldset name="myTab"> < fieldset name="basics"> < field>< /field> < /fieldset> < fieldset name="borders"> < field>< /field> < /fieldset> < /fieldset> And when no fieldsets are used INSIDE the main fieldset the columns are limited to one column... every additional fieldset (level 3 and after) should be ignored by this changes **This solution could have this benefits:** - the dev can define if she / he likes to use columns - columns count would / could still be possible to define if another PR will be created to define the columns - both sides got what they want to have (multiple columns and single column) - "Jumping issue" for hidden and later visible fields are not solved in general, yes but it will not happen anymore with this structure - you have nothing to change for existing J3 extensions XML to migrate In my test it was necessary to put div's arround the fieldsets to have a nice looking GUI In my test the generated HTML would need to look like this: https://ibb.co/PWhx0V7 https://ibb.co/6Yw0Pjx ![possible_columns_layout](https://user-images.githubusercontent.com/11912135/102586101-cf198780-4109-11eb-888b-6dd1d7d15279.jpg) ![possible_columns_layout_html](https://user-images.githubusercontent.com/11912135/102586108-d2ad0e80-4109-11eb-8f75-e493b2b1e7dd.jpg) **Things ToDo for this:** - Check in Form Renderer for fieldsets in main fieldsets -- Add the column classes as already been done in the actual release & wrap fieldset element inside a div -- Otherwise add column class for 1 column (because there are only fields and all of you are happy) - Small CSS changes (add rule to remove display inline block for sub sub fieldsets -- gets actually broken by [class^="column-"] > div, [class*=" column-"] > div { /* display: inline-flex; ... 
} in template.css on 10062 (Beta 5) (disable in console while live testing and it worked like a charm) And yes a small note inside the J4 docs would also be necessary to understand the logic and as i can see it should be backwards compatible too. I'm sorry but i have no clue how to create build that for you to testing - but the GUI changes can be simulated by changing the HTML directly in the browser. What do you guys think? _Originally posted by @marcorensch in https://github.com/joomla/joomla-cms/issues/29706#issuecomment-747918104_
1.0
[4.0] Request: Change of Columns handling in Backend Forms - based on: https://github.com/joomla/joomla-cms/pull/29706 So fresh wind here: What if: - The columns will been used when a dev groups its fields additionaly by fieldsets: < fieldset name="myTab"> < fieldset name="basics"> < field>< /field> < /fieldset> < fieldset name="borders"> < field>< /field> < /fieldset> < /fieldset> And when no fieldsets are used INSIDE the main fieldset the columns are limited to one column... every additional fieldset (level 3 and after) should be ignored by this changes **This solution could have this benefits:** - the dev can define if she / he likes to use columns - columns count would / could still be possible to define if another PR will be created to define the columns - both sides got what they want to have (multiple columns and single column) - "Jumping issue" for hidden and later visible fields are not solved in general, yes but it will not happen anymore with this structure - you have nothing to change for existing J3 extensions XML to migrate In my test it was necessary to put div's arround the fieldsets to have a nice looking GUI In my test the generated HTML would need to look like this: https://ibb.co/PWhx0V7 https://ibb.co/6Yw0Pjx ![possible_columns_layout](https://user-images.githubusercontent.com/11912135/102586101-cf198780-4109-11eb-888b-6dd1d7d15279.jpg) ![possible_columns_layout_html](https://user-images.githubusercontent.com/11912135/102586108-d2ad0e80-4109-11eb-8f75-e493b2b1e7dd.jpg) **Things ToDo for this:** - Check in Form Renderer for fieldsets in main fieldsets -- Add the column classes as already been done in the actual release & wrap fieldset element inside a div -- Otherwise add column class for 1 column (because there are only fields and all of you are happy) - Small CSS changes (add rule to remove display inline block for sub sub fieldsets -- gets actually broken by [class^="column-"] > div, [class*=" column-"] > div { /* display: inline-flex; ... 
} in template.css on 10062 (Beta 5) (disable in console while live testing and it worked like a charm) And yes a small note inside the J4 docs would also be necessary to understand the logic and as i can see it should be backwards compatible too. I'm sorry but i have no clue how to create build that for you to testing - but the GUI changes can be simulated by changing the HTML directly in the browser. What do you guys think? _Originally posted by @marcorensch in https://github.com/joomla/joomla-cms/issues/29706#issuecomment-747918104_
non_process
request change of columns handling in backend forms based on so fresh wind here what if the columns will been used when a dev groups its fields additionaly by fieldsets and when no fieldsets are used inside the main fieldset the columns are limited to one column every additional fieldset level and after should be ignored by this changes this solution could have this benefits the dev can define if she he likes to use columns columns count would could still be possible to define if another pr will be created to define the columns both sides got what they want to have multiple columns and single column jumping issue for hidden and later visible fields are not solved in general yes but it will not happen anymore with this structure you have nothing to change for existing extensions xml to migrate in my test it was necessary to put div s arround the fieldsets to have a nice looking gui in my test the generated html would need to look like this things todo for this check in form renderer for fieldsets in main fieldsets add the column classes as already been done in the actual release wrap fieldset element inside a div otherwise add column class for column because there are only fields and all of you are happy small css changes add rule to remove display inline block for sub sub fieldsets gets actually broken by div div display inline flex in template css on beta disable in console while live testing and it worked like a charm and yes a small note inside the docs would also be necessary to understand the logic and as i can see it should be backwards compatible too i m sorry but i have no clue how to create build that for you to testing but the gui changes can be simulated by changing the html directly in the browser what do you guys think originally posted by marcorensch in
0
12,396
14,909,855,430
IssuesEvent
2021-01-22 08:42:23
prisma/prisma
https://api.github.com/repos/prisma/prisma
closed
`prisma migrate` fails with MySQL
bug/1-repro-available kind/bug process/candidate team/migrations tech/engines
<!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description <!-- A clear and concise description of what the bug is. --> When trying to create & run a migration using `prisma migrate` with MySQL, it fails with ``Error: P1014 The underlying table for model `_migration` does not exist.`` This doesn't happen when using the exact same schema with PostgreSQL - it seems that for MySQL it's not creating the migrations table, and it also adds a `DROP TABLE `_migration` into the migration's README which doesn't appear under PostgreSQL. ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> 1. Create a new project (`yarn add -D @prisma/cli`, `yarn prisma init`) 2. Change `provider` in `prisma/schema.prisma` to `"mysql"` 3. Change database details in `prisma/.env` to point to an appropriate MySQL db 4. Add a simple model the the Prisma schema, something like this just to test ```prisma model User { id Int @default(autoincrement()) @id } ``` 5. Run `yarn prisma migrate save --experimental` 6. Run `yarn prisma migrate up --experimental` 7. Migration fails with an error code of `P1014` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> Migration is generated properly and runs as expected without failing. ## Prisma information <!-- Your Prisma schema, Prisma Client queries, ... Do not include your database credentials when sharing your Prisma schema! 
--> Base Prisma schema ```prisma datasource db { provider = "mysql" url = env("DATABASE_URL") } generator client { provider = "prisma-client-js" } model User { id Int @default(autoincrement()) @id } ``` ## Environment & setup <!-- In which environment does the problem occur --> - OS: Windows 10 20H2 - Database: MariaDB 10.5.8 - Node.js version: 15.3.0 - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` @prisma/cli : 2.12.1 @prisma/client : 2.12.1 Current platform : windows Query Engine : query-engine cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\query-engine-windows.exe) Migration Engine : migration-engine-cli cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\migration-engine-windows.exe) Introspection Engine : introspection-core cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\introspection-engine-windows.exe) Format Binary : prisma-fmt cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\prisma-fmt-windows.exe) Studio : 0.322.0 ```
1.0
`prisma migrate` fails with MySQL - <!-- Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client. Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports --> ## Bug description <!-- A clear and concise description of what the bug is. --> When trying to create & run a migration using `prisma migrate` with MySQL, it fails with ``Error: P1014 The underlying table for model `_migration` does not exist.`` This doesn't happen when using the exact same schema with PostgreSQL - it seems that for MySQL it's not creating the migrations table, and it also adds a `DROP TABLE `_migration` into the migration's README which doesn't appear under PostgreSQL. ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> 1. Create a new project (`yarn add -D @prisma/cli`, `yarn prisma init`) 2. Change `provider` in `prisma/schema.prisma` to `"mysql"` 3. Change database details in `prisma/.env` to point to an appropriate MySQL db 4. Add a simple model the the Prisma schema, something like this just to test ```prisma model User { id Int @default(autoincrement()) @id } ``` 5. Run `yarn prisma migrate save --experimental` 6. Run `yarn prisma migrate up --experimental` 7. Migration fails with an error code of `P1014` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> Migration is generated properly and runs as expected without failing. ## Prisma information <!-- Your Prisma schema, Prisma Client queries, ... Do not include your database credentials when sharing your Prisma schema! 
--> Base Prisma schema ```prisma datasource db { provider = "mysql" url = env("DATABASE_URL") } generator client { provider = "prisma-client-js" } model User { id Int @default(autoincrement()) @id } ``` ## Environment & setup <!-- In which environment does the problem occur --> - OS: Windows 10 20H2 - Database: MariaDB 10.5.8 - Node.js version: 15.3.0 - Prisma version: <!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]--> ``` @prisma/cli : 2.12.1 @prisma/client : 2.12.1 Current platform : windows Query Engine : query-engine cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\query-engine-windows.exe) Migration Engine : migration-engine-cli cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\migration-engine-windows.exe) Introspection Engine : introspection-core cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\introspection-engine-windows.exe) Format Binary : prisma-fmt cf0680a1bfe8d5e743dc659cc7f08009f9587d58 (at node_modules\@prisma\engines\prisma-fmt-windows.exe) Studio : 0.322.0 ```
process
prisma migrate fails with mysql thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description when trying to create run a migration using prisma migrate with mysql it fails with error the underlying table for model migration does not exist this doesn t happen when using the exact same schema with postgresql it seems that for mysql it s not creating the migrations table and it also adds a drop table migration into the migration s readme which doesn t appear under postgresql how to reproduce steps to reproduce the behavior go to change run see error create a new project yarn add d prisma cli yarn prisma init change provider in prisma schema prisma to mysql change database details in prisma env to point to an appropriate mysql db add a simple model the the prisma schema something like this just to test prisma model user id int default autoincrement id run yarn prisma migrate save experimental run yarn prisma migrate up experimental migration fails with an error code of expected behavior migration is generated properly and runs as expected without failing prisma information your prisma schema prisma client queries do not include your database credentials when sharing your prisma schema base prisma schema prisma datasource db provider mysql url env database url generator client provider prisma client js model user id int default autoincrement id environment setup os windows database mariadb node js version prisma version prisma cli prisma client current platform windows query engine query engine at node modules prisma engines query engine windows exe migration engine migration engine cli at node modules prisma engines migration engine windows exe introspection engine introspection core at node modules prisma engines 
introspection engine windows exe format binary prisma fmt at node modules prisma engines prisma fmt windows exe studio
1
81,404
30,829,673,525
IssuesEvent
2023-08-01 23:56:01
dotCMS/core
https://api.github.com/repos/dotCMS/core
closed
LogViewer connection times out.
Type : Defect QA : Approved Merged QA : Passed Internal LTS: Next Team : Scout Triage Release : 23.06
### Parent Issue _No response_ ### Problem Statement The log viewer disconnects when the load balancer times out. The old implementation used a servlet to feed the front end and used a keep-alive strategy sending a blank space every 20 seconds. That implementation got replaced by a much more modern Server-Sent Events approach. However, we still need to keep the connection alive ### Steps to Reproduce since this is happening behind an LB reproducing the original problem has implications, But apparently, Firefox offers an option to simulate a time-out setting the value of `network.http.connection-timeout` in `about:config` ### Acceptance Criteria The log viewer should remain connected and functional even after long periods of inactivity ### dotCMS Version 23.01 ### Proposed Objective Customer Support ### Proposed Priority Priority 3 - Average ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs _No response_ ### Quality Assurance Notes & Workarounds _No response_ ### Sub-Tasks & Estimates _No response_
1.0
LogViewer connection times out. - ### Parent Issue _No response_ ### Problem Statement The log viewer disconnects when the load balancer times out. The old implementation used a servlet to feed the front end and used a keep-alive strategy sending a blank space every 20 seconds. That implementation got replaced by a much more modern Server-Sent Events approach. However, we still need to keep the connection alive ### Steps to Reproduce since this is happening behind an LB reproducing the original problem has implications, But apparently, Firefox offers an option to simulate a time-out setting the value of `network.http.connection-timeout` in `about:config` ### Acceptance Criteria The log viewer should remain connected and functional even after long periods of inactivity ### dotCMS Version 23.01 ### Proposed Objective Customer Support ### Proposed Priority Priority 3 - Average ### External Links... Slack Conversations, Support Tickets, Figma Designs, etc. _No response_ ### Assumptions & Initiation Needs _No response_ ### Quality Assurance Notes & Workarounds _No response_ ### Sub-Tasks & Estimates _No response_
non_process
logviewer connection times out parent issue no response problem statement the log viewer disconnects when the load balancer times out the old implementation used a servlet to feed the front end and used a keep alive strategy sending a blank space every seconds that implementation got replaced by a much more modern server sent events approach however we still need to keep the connection alive steps to reproduce since this is happening behind an lb reproducing the original problem has implications but apparently firefox offers an option to simulate a time out setting the value of network http connection timeout in about config acceptance criteria the log viewer should remain connected and functional even after long periods of inactivity dotcms version proposed objective customer support proposed priority priority average external links slack conversations support tickets figma designs etc no response assumptions initiation needs no response quality assurance notes workarounds no response sub tasks estimates no response
0
212,138
16,420,021,637
IssuesEvent
2021-05-19 11:26:36
globaldothealth/list
https://api.github.com/repos/globaldothealth/list
closed
Filter - Unable to filter records when there is no(blank) data
Applause Testing Data UI P2: Nice to have
View your issue at Applause Testing Services - https://platform.applause.com/products/27563/testcycles/310043/issues/4999305 ---- ACTION DETAILS ---- Action Performed: 1. Launch https://global.health/ 2. Click on Data 3. Click on "Gender" and enter the value "Male" 4. Try to Filter the records with blank(empty) values in the Gender column Expected Result: The system should allow me to filter the records when there is no data (It allows to filter based on Male, Female) Actual Result: The system does not allow me to filter the records when there is no data (It allows to filter based on Male, Female) Note: This is applicable to all the header where there is blank(empty) values Additional Info: Error Code/Message: ---- Applause Team Lead Recommendation ---- From Katarzyna Wladyszewska Reproducible: Unsure In scope: Yes Not a duplicate: yes Seems valid: yes Suggested value: somewhat valuable Comment: It is probably by design. ---- ENVIRONMENT ---- Firewall:McAfee,Language:English,Operating System:Windows,Web Browser:Firefox,Operating System Version:Windows 10 Home,Operating System Major Version:Windows 10 ---- APPLAUSE PROPERTIES ---- Applause Issue/Bug ID: 4999305 Title: Filter - Unable to filter records when there is no(blank) data Status: Pending Approval Type: Functional Frequency: Every Time Severity: Medium Product (Build): Global Health (March.Global Health URL) Test Cycle: Global Health - Test Cycle - 3/30/2021 ---- APPLAUSE ATTACHMENT(S) ---- Bug4999305_Recording__14.mp4 : https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Recording__14.mp4?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=VUCIM8d5G9DjZ4H1YgYMSxFP4JI%3D Bug4999305_Blank_Filter.png : https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Blank_Filter.png?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=08MgomK%2Fi8ZxrTx9a0bVkeAsPzA%3D 
![Bug4999305_Recording__14.mp4](https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Recording__14.mp4?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=VUCIM8d5G9DjZ4H1YgYMSxFP4JI%3D) ![Bug4999305_Blank_Filter.png](https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Blank_Filter.png?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=08MgomK%2Fi8ZxrTx9a0bVkeAsPzA%3D)
1.0
Filter - Unable to filter records when there is no(blank) data - View your issue at Applause Testing Services - https://platform.applause.com/products/27563/testcycles/310043/issues/4999305 ---- ACTION DETAILS ---- Action Performed: 1. Launch https://global.health/ 2. Click on Data 3. Click on "Gender" and enter the value "Male" 4. Try to Filter the records with blank(empty) values in the Gender column Expected Result: The system should allow me to filter the records when there is no data (It allows to filter based on Male, Female) Actual Result: The system does not allow me to filter the records when there is no data (It allows to filter based on Male, Female) Note: This is applicable to all the header where there is blank(empty) values Additional Info: Error Code/Message: ---- Applause Team Lead Recommendation ---- From Katarzyna Wladyszewska Reproducible: Unsure In scope: Yes Not a duplicate: yes Seems valid: yes Suggested value: somewhat valuable Comment: It is probably by design. ---- ENVIRONMENT ---- Firewall:McAfee,Language:English,Operating System:Windows,Web Browser:Firefox,Operating System Version:Windows 10 Home,Operating System Major Version:Windows 10 ---- APPLAUSE PROPERTIES ---- Applause Issue/Bug ID: 4999305 Title: Filter - Unable to filter records when there is no(blank) data Status: Pending Approval Type: Functional Frequency: Every Time Severity: Medium Product (Build): Global Health (March.Global Health URL) Test Cycle: Global Health - Test Cycle - 3/30/2021 ---- APPLAUSE ATTACHMENT(S) ---- Bug4999305_Recording__14.mp4 : https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Recording__14.mp4?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=VUCIM8d5G9DjZ4H1YgYMSxFP4JI%3D Bug4999305_Blank_Filter.png : https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Blank_Filter.png?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=08MgomK%2Fi8ZxrTx9a0bVkeAsPzA%3D 
![Bug4999305_Recording__14.mp4](https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Recording__14.mp4?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=VUCIM8d5G9DjZ4H1YgYMSxFP4JI%3D) ![Bug4999305_Blank_Filter.png](https://utest-dl.s3.amazonaws.com/6414/27563/310043/null/bugAttachment/Bug4999305_Blank_Filter.png?AWSAccessKeyId=AKIAJ2UIWMJ2OMC3UCQQ&Expires=1932902627&Signature=08MgomK%2Fi8ZxrTx9a0bVkeAsPzA%3D)
non_process
filter unable to filter records when there is no blank data view your issue at applause testing services action details action performed launch click on data click on gender and enter the value male try to filter the records with blank empty values in the gender column expected result the system should allow me to filter the records when there is no data it allows to filter based on male female actual result the system does not allow me to filter the records when there is no data it allows to filter based on male female note this is applicable to all the header where there is blank empty values additional info error code message applause team lead recommendation from katarzyna wladyszewska reproducible unsure in scope yes not a duplicate yes seems valid yes suggested value somewhat valuable comment it is probably by design environment firewall mcafee language english operating system windows web browser firefox operating system version windows home operating system major version windows applause properties applause issue bug id title filter unable to filter records when there is no blank data status pending approval type functional frequency every time severity medium product build global health march global health url test cycle global health test cycle applause attachment s recording blank filter png
0
15,536
19,703,298,805
IssuesEvent
2022-01-12 18:54:27
googleapis/python-source-context
https://api.github.com/repos/googleapis/python-source-context
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'source' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname 'source' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname source invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
7,459
10,562,336,339
IssuesEvent
2019-10-04 18:04:20
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
closed
Spanner: 'test_invalid_type' systest flakes
api: spanner flaky testing type: process
From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7832 ```python _______________________ TestSessionAPI.test_invalid_type _______________________ @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc_gcp._channel._UnaryUnaryMultiCallable object at 0x7facb542c8d0> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def __call__(self, request, timeout=None, metadata=None, credentials=None): > response, _ = self.with_call(request, timeout, metadata, credentials) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc_gcp/_channel.py:224: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc_gcp._channel._UnaryUnaryMultiCallable object at 0x7facb542c8d0> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... 
values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def with_call(self, request, timeout=None, metadata=None, credentials=None): channel_ref, affinity_key = self._preprocess(request) response, rendezvous = channel_ref.channel().unary_unary( self._multi_callable_processor.method(), self._multi_callable_processor.request_serializer(), self._multi_callable_processor.response_deserializer()).with_call( > request, timeout, metadata, credentials) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc_gcp/_channel.py:242: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7facb4bbf9e8> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... 
values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def with_call(self, request, timeout=None, metadata=None, credentials=None): state, call, = self._blocking(request, timeout, metadata, credentials) > return _end_unary_response_blocking(state, call, True, None) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc/_channel.py:536: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7facb4bd0470> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7facb4bc5b08> with_call = True, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _Rendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _Rendezvous(state, None, None, deadline) E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: E status = StatusCode.ABORTED E details = "Transaction was aborted." 
E debug_error_string = "{"created":"@1535565657.701685023","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"Transaction was aborted.","grpc_status":10}" E > ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc/_channel.py:466: _Rendezvous The above exception was the direct cause of the following exception: self = <tests.system.test_system.TestSessionAPI testMethod=test_invalid_type> def test_invalid_type(self): table = 'counters' columns = ('name', 'value') valid_input = (('', 0),) with self._db.batch() as batch: batch.delete(table, self.ALL) batch.insert(table, columns, valid_input) invalid_input = ((0, ''),) with self.assertRaises(exceptions.FailedPrecondition) as exc_info: with self._db.batch() as batch: batch.delete(table, self.ALL) > batch.insert(table, columns, invalid_input) tests/system/test_system.py:1392: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/database.py:396: in __exit__ self._batch.commit() google/cloud/spanner_v1/batch.py:156: in commit metadata=metadata) google/cloud/spanner_v1/gapic/spanner_client.py:1027: in commit request, retry=retry, timeout=timeout, metadata=metadata) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:139: in __call__ return wrapped_func(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/retry.py:260: in retry_wrapped_func on_error=on_error, ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/retry.py:177: in retry_target return target() ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:206: in func_with_timeout return func(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:61: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ > ??? E google.api_core.exceptions.Aborted: 409 Transaction was aborted. <string>:3: Aborted 1 failed, 62 passed, 2 skipped in 72.94 seconds ```
1.0
Spanner: 'test_invalid_type' systest flakes - From: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7832 ```python _______________________ TestSessionAPI.test_invalid_type _______________________ @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:59: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc_gcp._channel._UnaryUnaryMultiCallable object at 0x7facb542c8d0> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def __call__(self, request, timeout=None, metadata=None, credentials=None): > response, _ = self.with_call(request, timeout, metadata, credentials) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc_gcp/_channel.py:224: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc_gcp._channel._UnaryUnaryMultiCallable object at 0x7facb542c8d0> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... 
values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def with_call(self, request, timeout=None, metadata=None, credentials=None): channel_ref, affinity_key = self._preprocess(request) response, rendezvous = channel_ref.channel().unary_unary( self._multi_callable_processor.method(), self._multi_callable_processor.request_serializer(), self._multi_callable_processor.response_deserializer()).with_call( > request, timeout, metadata, credentials) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc_gcp/_channel.py:242: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7facb4bbf9e8> request = session: "projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634/ses... 
values { values { string_value: "0" } values { string_value: "" } } } } timeout = 3599.0 metadata = [('google-cloud-resource-prefix', 'projects/precise-truck-742/instances/google-cloud-python-systest/databases/test_sessions_7832_1535565634'), ('x-goog-api-client', 'gl-python/3.6.0 grpc/1.15.0rc1 gax/1.3.0 gapic/1.4.0 gccl/1.4.0')] credentials = None def with_call(self, request, timeout=None, metadata=None, credentials=None): state, call, = self._blocking(request, timeout, metadata, credentials) > return _end_unary_response_blocking(state, call, True, None) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc/_channel.py:536: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7facb4bd0470> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7facb4bc5b08> with_call = True, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _Rendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _Rendezvous(state, None, None, deadline) E grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: E status = StatusCode.ABORTED E details = "Transaction was aborted." 
E debug_error_string = "{"created":"@1535565657.701685023","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"Transaction was aborted.","grpc_status":10}" E > ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/grpc/_channel.py:466: _Rendezvous The above exception was the direct cause of the following exception: self = <tests.system.test_system.TestSessionAPI testMethod=test_invalid_type> def test_invalid_type(self): table = 'counters' columns = ('name', 'value') valid_input = (('', 0),) with self._db.batch() as batch: batch.delete(table, self.ALL) batch.insert(table, columns, valid_input) invalid_input = ((0, ''),) with self.assertRaises(exceptions.FailedPrecondition) as exc_info: with self._db.batch() as batch: batch.delete(table, self.ALL) > batch.insert(table, columns, invalid_input) tests/system/test_system.py:1392: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ google/cloud/spanner_v1/database.py:396: in __exit__ self._batch.commit() google/cloud/spanner_v1/batch.py:156: in commit metadata=metadata) google/cloud/spanner_v1/gapic/spanner_client.py:1027: in commit request, retry=retry, timeout=timeout, metadata=metadata) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:139: in __call__ return wrapped_func(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/retry.py:260: in retry_wrapped_func on_error=on_error, ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/retry.py:177: in retry_target return target() ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/timeout.py:206: in func_with_timeout return func(*args, **kwargs) ../.nox/sys-grpc-gcp-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:61: in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ > ??? E google.api_core.exceptions.Aborted: 409 Transaction was aborted. <string>:3: Aborted 1 failed, 62 passed, 2 skipped in 72.94 seconds ```
process
spanner test invalid type systest flakes from python testsessionapi test invalid type six wraps callable def error remapped callable args kwargs try return callable args kwargs nox sys grpc gcp lib site packages google api core grpc helpers py self request session projects precise truck instances google cloud python systest databases test sessions ses values values string value values string value timeout metadata credentials none def call self request timeout none metadata none credentials none response self with call request timeout metadata credentials nox sys grpc gcp lib site packages grpc gcp channel py self request session projects precise truck instances google cloud python systest databases test sessions ses values values string value values string value timeout metadata credentials none def with call self request timeout none metadata none credentials none channel ref affinity key self preprocess request response rendezvous channel ref channel unary unary self multi callable processor method self multi callable processor request serializer self multi callable processor response deserializer with call request timeout metadata credentials nox sys grpc gcp lib site packages grpc gcp channel py self request session projects precise truck instances google cloud python systest databases test sessions ses values values string value values string value timeout metadata credentials none def with call self request timeout none metadata none credentials none state call self blocking request timeout metadata credentials return end unary response blocking state call true none nox sys grpc gcp lib site packages grpc channel py state call with call true deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous rendezvous state call none deadline return state response rendezvous else return state response else raise rendezvous state none none deadline e grpc channel rendezvous rendezvous of rpc 
that terminated with e status statuscode aborted e details transaction was aborted e debug error string created description error received from peer file src core lib surface call cc file line grpc message transaction was aborted grpc status e nox sys grpc gcp lib site packages grpc channel py rendezvous the above exception was the direct cause of the following exception self def test invalid type self table counters columns name value valid input with self db batch as batch batch delete table self all batch insert table columns valid input invalid input with self assertraises exceptions failedprecondition as exc info with self db batch as batch batch delete table self all batch insert table columns invalid input tests system test system py google cloud spanner database py in exit self batch commit google cloud spanner batch py in commit metadata metadata google cloud spanner gapic spanner client py in commit request retry retry timeout timeout metadata metadata nox sys grpc gcp lib site packages google api core gapic method py in call return wrapped func args kwargs nox sys grpc gcp lib site packages google api core retry py in retry wrapped func on error on error nox sys grpc gcp lib site packages google api core retry py in retry target return target nox sys grpc gcp lib site packages google api core timeout py in func with timeout return func args kwargs nox sys grpc gcp lib site packages google api core grpc helpers py in error remapped callable six raise from exceptions from grpc error exc exc e google api core exceptions aborted transaction was aborted aborted failed passed skipped in seconds
1
329,393
28,239,643,667
IssuesEvent
2023-04-06 05:48:15
huyentranlt/FeedBackOnline
https://api.github.com/repos/huyentranlt/FeedBackOnline
opened
BugID_102]_Sửa trainer cho gán topic_Khi nhấn [Sửa] button của 1 record thì tất các các [Trainer] combobox đều được enable
Bug Open Sev_Medium Pri_Medium Integration Test Fun_Incomplete Function
Precondition: Admin đứng tại màn hình Quản lý gán topic Step: 1. Chọn 1 item từ combobox [Lớp] 2. Click button [Sửa] của 1 record nào đó Actual output: [Trainer] combobox của tất cả các record được enable Expected output: Chỉ [Trainer] combobox của record được chọn được enable ---------- TestCaseID = 16 ![Untitled](https://user-images.githubusercontent.com/128335270/230282978-4107e220-c422-41ab-8790-141f664d5d5c.png)
1.0
BugID_102]_Sửa trainer cho gán topic_Khi nhấn [Sửa] button của 1 record thì tất các các [Trainer] combobox đều được enable - Precondition: Admin đứng tại màn hình Quản lý gán topic Step: 1. Chọn 1 item từ combobox [Lớp] 2. Click button [Sửa] của 1 record nào đó Actual output: [Trainer] combobox của tất cả các record được enable Expected output: Chỉ [Trainer] combobox của record được chọn được enable ---------- TestCaseID = 16 ![Untitled](https://user-images.githubusercontent.com/128335270/230282978-4107e220-c422-41ab-8790-141f664d5d5c.png)
non_process
bugid sửa trainer cho gán topic khi nhấn button của record thì tất các các combobox đều được enable precondition admin đứng tại màn hình quản lý gán topic step chọn item từ combobox click button của record nào đó actual output combobox của tất cả các record được enable expected output chỉ combobox của record được chọn được enable testcaseid
0
17,063
22,500,934,572
IssuesEvent
2022-06-23 11:46:31
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Update vulnerable versions of Go, Netty and Spring for 1.3.8
kind/toil support area/security team/distributed team/process-automation release/1.3.9
**Description** According to Snyk, Zeebe 1.3.8 uses: - io.netty:netty-common 4.1.72.Final -> CVE-2022-24823, CWE-200, CWE-378 :heavy_check_mark: - org.springframework:spring-beans 5.3.18 -> CVE-2022-22970, CWE-400 :heavy_check_mark: - org.springframework:spring-context 5.3.18 -> CVE-2022-22968, CWE-178 :heavy_check_mark: - io.netty:netty-handler 4.1.72.Final -> CWE-295 :heavy_check_mark: - go 1.15.15 -> CVE-2021-38297, CVE-2022-23806 We should update these versions. The go update is a bit more involved so we should just backport the upgrade to go 1.17 once that's done: https://github.com/camunda/zeebe/issues/9270 Related support case: https://jira.camunda.com/browse/SUPPORT-13714
1.0
Update vulnerable versions of Go, Netty and Spring for 1.3.8 - **Description** According to Snyk, Zeebe 1.3.8 uses: - io.netty:netty-common 4.1.72.Final -> CVE-2022-24823, CWE-200, CWE-378 :heavy_check_mark: - org.springframework:spring-beans 5.3.18 -> CVE-2022-22970, CWE-400 :heavy_check_mark: - org.springframework:spring-context 5.3.18 -> CVE-2022-22968, CWE-178 :heavy_check_mark: - io.netty:netty-handler 4.1.72.Final -> CWE-295 :heavy_check_mark: - go 1.15.15 -> CVE-2021-38297, CVE-2022-23806 We should update these versions. The go update is a bit more involved so we should just backport the upgrade to go 1.17 once that's done: https://github.com/camunda/zeebe/issues/9270 Related support case: https://jira.camunda.com/browse/SUPPORT-13714
process
update vulnerable versions of go netty and spring for description according to snyk zeebe uses io netty netty common final cve cwe cwe heavy check mark org springframework spring beans cve cwe heavy check mark org springframework spring context cve cwe heavy check mark io netty netty handler final cwe heavy check mark go cve cve we should update these versions the go update is a bit more involved so we should just backport the upgrade to go once that s done related support case
1
10,153
7,927,928,852
IssuesEvent
2018-07-06 09:44:43
vector-im/riot-web
https://api.github.com/repos/vector-im/riot-web
closed
Disambiguation algorithm should ignore whitespace
bug help-wanted major p1 security
<!-- This is a bug report template. By following the instructions below and filling out the sections with your information, you will help the us to get all the necessary data to fix your issue. You can also preview your report before submitting it. You may remove sections that aren't relevant to your particular case. Text between <!-- and --​> marks will be invisible in the report. --> ### Description Currently it is possible to impersonate someone by using the same display name as they, but adding e.g. a space or zero-width space to your display name. This causes the disambiguation algorithm to think that there is no need to show the full mxid, even though it is impossible for a human to tell the two names apart. ### Steps to reproduce - Have a user change their display name "testuser" - Have another user change their display name to "testuser " - Observe that messages from both users look identical ### Version information - **Platform**: web - **Browser**: Firefox 57 - **OS**: Linux - **URL**: riot.im/develop
True
Disambiguation algorithm should ignore whitespace - <!-- This is a bug report template. By following the instructions below and filling out the sections with your information, you will help the us to get all the necessary data to fix your issue. You can also preview your report before submitting it. You may remove sections that aren't relevant to your particular case. Text between <!-- and --​> marks will be invisible in the report. --> ### Description Currently it is possible to impersonate someone by using the same display name as they, but adding e.g. a space or zero-width space to your display name. This causes the disambiguation algorithm to think that there is no need to show the full mxid, even though it is impossible for a human to tell the two names apart. ### Steps to reproduce - Have a user change their display name "testuser" - Have another user change their display name to "testuser " - Observe that messages from both users look identical ### Version information - **Platform**: web - **Browser**: Firefox 57 - **OS**: Linux - **URL**: riot.im/develop
non_process
disambiguation algorithm should ignore whitespace this is a bug report template by following the instructions below and filling out the sections with your information you will help the us to get all the necessary data to fix your issue you can also preview your report before submitting it you may remove sections that aren t relevant to your particular case text between marks will be invisible in the report description currently it is possible to impersonate someone by using the same display name as they but adding e g a space or zero width space to your display name this causes the disambiguation algorithm to think that there is no need to show the full mxid even though it is impossible for a human to tell the two names apart steps to reproduce have a user change their display name testuser have another user change their display name to testuser observe that messages from both users look identical version information platform web browser firefox os linux url riot im develop
0
4,723
7,567,854,744
IssuesEvent
2018-04-22 14:28:27
threefoldfoundation/tfchain
https://api.github.com/repos/threefoldfoundation/tfchain
reopened
Published windows binaries should be signed, and have proper installer.
process_wontfix type_feature wontfix
I tried to use the downloadable windows binaries, but did not succeed in using them. Recent version of windows make it very complex to just run a downloaded executable.
1.0
Published windows binaries should be signed, and have proper installer. - I tried to use the downloadable windows binaries, but did not succeed in using them. Recent version of windows make it very complex to just run a downloaded executable.
process
published windows binaries should be signed and have proper installer i tried to use the downloadable windows binaries but did not succeed in using them recent version of windows make it very complex to just run a downloaded executable
1
84,440
10,373,435,183
IssuesEvent
2019-09-09 07:15:30
FCO/Red
https://api.github.com/repos/FCO/Red
closed
Document the depreciation will occur on :api<2>
Documentation
On `:api<2>` this ```perl6 has $!ref is referencing{ Model.id } ``` Will be deprecated in favor of ```perl6 has $!ref is referencing( *.id, :model<Model> ) ``` It will make better joins possible... It is currently not documented. Features should be documented on the [wiki](https://github.com/FCO/Red/wiki). Feel free to create a new wiki page if you cannot find an existing page that is suitable.
1.0
Document the depreciation will occur on :api<2> - On `:api<2>` this ```perl6 has $!ref is referencing{ Model.id } ``` Will be deprecated in favor of ```perl6 has $!ref is referencing( *.id, :model<Model> ) ``` It will make better joins possible... It is currently not documented. Features should be documented on the [wiki](https://github.com/FCO/Red/wiki). Feel free to create a new wiki page if you cannot find an existing page that is suitable.
non_process
document the depreciation will occur on api on api this has ref is referencing model id will be deprecated in favor of has ref is referencing id model it will make better joins possible it is currently not documented features should be documented on the feel free to create a new wiki page if you cannot find an existing page that is suitable
0
21,186
28,153,406,770
IssuesEvent
2023-04-03 04:50:55
ssytnt/papers
https://api.github.com/repos/ssytnt/papers
opened
Zoom to learn, learn to zoom[Zhang+(UC Berkeley),CVPR2019]
ImageProcessing
## 概要 入力データに着目した単一画像超解像。 ## 背景 従来は元画像を縮小し学習データとするためその過程でノイズが落ち、本物のノイズに対応できない。 ## 方法 ・手動でカメラの絞りかえて解像度の異なる複数のデータを取得し学習データとした上で、生データ(ベイヤー配列)を入力とするDNNで超解像を学習。 ・解像度の違いによるボケや位置ずれを吸収するため、Contextual lossに着想を得て、CoBi(Contexutal Bilateral Loss)を考案。マッチング失敗時のアーチファクトを低減するため色空間上の距離も考慮。RGB画像とVGGの特徴マップでそれぞれCoBiを計算し2つの和をロスとした。 ## 結果 ・PSNR/SSIMによる定量評価でDNNベースの最新手法を上回る性能を達成。 ・GAN系はノイズ由来のアーチファクト発生、CNN系はボケるが提案法は高品質。 ・学習データの形式をRGB画像、人工的に生成したベイヤー配列とすると性能劣化(生のセンサデータを入力とする有効性を確認) ![画像19](https://user-images.githubusercontent.com/129141420/229414368-326a5f80-b4ff-43af-b1e9-bbe710ecc5ea.png)
1.0
Zoom to learn, learn to zoom[Zhang+(UC Berkeley),CVPR2019] - ## 概要 入力データに着目した単一画像超解像。 ## 背景 従来は元画像を縮小し学習データとするためその過程でノイズが落ち、本物のノイズに対応できない。 ## 方法 ・手動でカメラの絞りかえて解像度の異なる複数のデータを取得し学習データとした上で、生データ(ベイヤー配列)を入力とするDNNで超解像を学習。 ・解像度の違いによるボケや位置ずれを吸収するため、Contextual lossに着想を得て、CoBi(Contexutal Bilateral Loss)を考案。マッチング失敗時のアーチファクトを低減するため色空間上の距離も考慮。RGB画像とVGGの特徴マップでそれぞれCoBiを計算し2つの和をロスとした。 ## 結果 ・PSNR/SSIMによる定量評価でDNNベースの最新手法を上回る性能を達成。 ・GAN系はノイズ由来のアーチファクト発生、CNN系はボケるが提案法は高品質。 ・学習データの形式をRGB画像、人工的に生成したベイヤー配列とすると性能劣化(生のセンサデータを入力とする有効性を確認) ![画像19](https://user-images.githubusercontent.com/129141420/229414368-326a5f80-b4ff-43af-b1e9-bbe710ecc5ea.png)
process
zoom to learn learn to zoom 概要 入力データに着目した単一画像超解像。 背景 従来は元画像を縮小し学習データとするためその過程でノイズが落ち、本物のノイズに対応できない。 方法 ・手動でカメラの絞りかえて解像度の異なる複数のデータを取得し学習データとした上で、生データ ベイヤー配列 を入力とするdnnで超解像を学習。 ・解像度の違いによるボケや位置ずれを吸収するため、contextual lossに着想を得て、cobi contexutal bilateral loss を考案。マッチング失敗時のアーチファクトを低減するため色空間上の距離も考慮。 。 結果 ・psnr ssimによる定量評価でdnnベースの最新手法を上回る性能を達成。 ・gan系はノイズ由来のアーチファクト発生、cnn系はボケるが提案法は高品質。 ・学習データの形式をrgb画像、人工的に生成したベイヤー配列とすると性能劣化(生のセンサデータを入力とする有効性を確認)
1
79,817
23,047,687,178
IssuesEvent
2022-07-24 06:20:09
vrra/FGAN-Build-a-thon-2022
https://api.github.com/repos/vrra/FGAN-Build-a-thon-2022
opened
[Register for Build-a-thon 2022]:
Register for Build-a-thon 2022
### Team name The Step-Dads ### Total number of team members 3 ### Contact Details of team member 1 am1200673@iitd.ac.in ### Contact Details of team member 2 20cs3074@rgipt.ac.in ### Contact Details of team member 3 ee3200575@iitd.ac.in ### Contact Details of team member 4 _No response_ ### Contact Details of mentor _No response_ ### tell us more about yourself Tanishq Panwar, student, IIT Delhi, India Aditya Mathur, student, IIT Delhi, India Hitarth Agarwal, student, RGIPT, India ### Code of Conduct - [X] I agree to follow the Challenge Code of Conduct
1.0
[Register for Build-a-thon 2022]: - ### Team name The Step-Dads ### Total number of team members 3 ### Contact Details of team member 1 am1200673@iitd.ac.in ### Contact Details of team member 2 20cs3074@rgipt.ac.in ### Contact Details of team member 3 ee3200575@iitd.ac.in ### Contact Details of team member 4 _No response_ ### Contact Details of mentor _No response_ ### tell us more about yourself Tanishq Panwar, student, IIT Delhi, India Aditya Mathur, student, IIT Delhi, India Hitarth Agarwal, student, RGIPT, India ### Code of Conduct - [X] I agree to follow the Challenge Code of Conduct
non_process
team name the step dads total number of team members contact details of team member iitd ac in contact details of team member rgipt ac in contact details of team member iitd ac in contact details of team member no response contact details of mentor no response tell us more about yourself tanishq panwar student iit delhi india aditya mathur student iit delhi india hitarth agarwal student rgipt india code of conduct i agree to follow the challenge code of conduct
0
11,731
14,569,339,619
IssuesEvent
2020-12-17 12:57:30
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
opened
Change deprecated method calls
type: cleanup type: process
Deprecated methods: - blob.download_to_file() - bucket.list_blobs() Remove deprecated method calls and change method calls to `client.download_blob_to_file()` and `client.list_blobs()` ex: blob.download_to_filename() was internally call blob.download_to_file() now direct call to `client.download_blob_to_file()`
1.0
Change deprecated method calls - Deprecated methods: - blob.download_to_file() - bucket.list_blobs() Remove deprecated method calls and change method calls to `client.download_blob_to_file()` and `client.list_blobs()` ex: blob.download_to_filename() was internally call blob.download_to_file() now direct call to `client.download_blob_to_file()`
process
change deprecated method calls deprecated methods blob download to file bucket list blobs remove deprecated method calls and change method calls to client download blob to file and client list blobs ex blob download to filename was internally call blob download to file now direct call to client download blob to file
1
5,104
7,883,387,774
IssuesEvent
2018-06-27 04:43:10
gluster/glusterd2
https://api.github.com/repos/gluster/glusterd2
closed
[process] Require PRs to remain open at least 24 work hours
priority: medium process-improvement
Now that there's a greater involvement in GD2 from people in a wider variety of time zones it might be nice to require that PRs remain open for some time for review, etc. I raised this idea with a few people in person and they seemed to think it was OK and so I figured this place was the best place to float the idea. My proposal is: * Any non-trivial PR must remain open for commentary & feed back at least 24 work hours * A work hour is defined as an hour occurring on Mon - Fri (excluding Sat. and Sun.) - As an example: you post your work on 13:00 UTC on Friday it would not be eligible for merging until 13:00 UTC Monday. * A trivial PR can be reviewed and merged immediately * A trivial PR changes no code or the meaning of any documents. So, a change fixing the spelling of a comment or the grammar in a doc would be considered "trivial" for the purposes of this proposal Feedback very welcome!
1.0
[process] Require PRs to remain open at least 24 work hours - Now that there's a greater involvement in GD2 from people in a wider variety of time zones it might be nice to require that PRs remain open for some time for review, etc. I raised this idea with a few people in person and they seemed to think it was OK and so I figured this place was the best place to float the idea. My proposal is: * Any non-trivial PR must remain open for commentary & feed back at least 24 work hours * A work hour is defined as an hour occurring on Mon - Fri (excluding Sat. and Sun.) - As an example: you post your work on 13:00 UTC on Friday it would not be eligible for merging until 13:00 UTC Monday. * A trivial PR can be reviewed and merged immediately * A trivial PR changes no code or the meaning of any documents. So, a change fixing the spelling of a comment or the grammar in a doc would be considered "trivial" for the purposes of this proposal Feedback very welcome!
process
require prs to remain open at least work hours now that there s a greater involvement in from people in a wider variety of time zones it might be nice to require that prs remain open for some time for review etc i raised this idea with a few people in person and they seemed to think it was ok and so i figured this place was the best place to float the idea my proposal is any non trivial pr must remain open for commentary feed back at least work hours a work hour is defined as an hour occurring on mon fri excluding sat and sun as an example you post your work on utc on friday it would not be eligible for merging until utc monday a trivial pr can be reviewed and merged immediately a trivial pr changes no code or the meaning of any documents so a change fixing the spelling of a comment or the grammar in a doc would be considered trivial for the purposes of this proposal feedback very welcome
1
503,144
14,579,970,070
IssuesEvent
2020-12-18 08:21:53
itggot-TE4/Yabs
https://api.github.com/repos/itggot-TE4/Yabs
closed
Add return book functionality
comp: frontend point: 2 priority: high team: green type:feature✨
## ✨ Feature request ### 📇 User story As a user I want to be able to return a book I have loaned. ### 📜 Acceptance Criteria A list of changes that need to be implemented to consider this feature done. - [ ] Book is removed from students active loans. ### 💡 Additional context Add any other context or screenshots about the feature request here.
1.0
Add return book functionality - ## ✨ Feature request ### 📇 User story As a user I want to be able to return a book I have loaned. ### 📜 Acceptance Criteria A list of changes that need to be implemented to consider this feature done. - [ ] Book is removed from students active loans. ### 💡 Additional context Add any other context or screenshots about the feature request here.
non_process
add return book functionality ✨ feature request 📇 user story as a user i want to be able to return a book i have loaned 📜 acceptance criteria a list of changes that need to be implemented to consider this feature done book is removed from students active loans 💡 additional context add any other context or screenshots about the feature request here
0
626,684
19,830,819,776
IssuesEvent
2022-01-20 11:47:38
kubeapps/kubeapps
https://api.github.com/repos/kubeapps/kubeapps
closed
Step 5 of E2E tests is consistently failing
kind/bug priority/high size/M component/ci
### Description: Step `05-missing-permissions.js` of the E2E tests is failing most of the time. At first, initial investigation by @absoludity pointed to the absent "Missing permissions" sentence in the error message checked by the test. However, it seems that there are two types of errors showing: 1. ![image](https://user-images.githubusercontent.com/67455978/149835667-d5b40203-cf4e-41a8-8fd2-90e50d73a0b1.png) 2. <img width="856" alt="image" src="https://user-images.githubusercontent.com/67455978/149835704-f71b5056-9f18-4892-a238-30909b736d2e.png"> ### Steps to reproduce the issue: 1. Use OIDC user (not operator) 2. Try to install a package from a global repo ### Describe the results you received: Test not passing most of the time ### Describe the results you expected: Test consistently passing successfully
1.0
Step 5 of E2E tests is consistently failing - ### Description: Step `05-missing-permissions.js` of the E2E tests is failing most of the time. At first, initial investigation by @absoludity pointed to the absent "Missing permissions" sentence in the error message checked by the test. However, it seems that there are two types of errors showing: 1. ![image](https://user-images.githubusercontent.com/67455978/149835667-d5b40203-cf4e-41a8-8fd2-90e50d73a0b1.png) 2. <img width="856" alt="image" src="https://user-images.githubusercontent.com/67455978/149835704-f71b5056-9f18-4892-a238-30909b736d2e.png"> ### Steps to reproduce the issue: 1. Use OIDC user (not operator) 2. Try to install a package from a global repo ### Describe the results you received: Test not passing most of the time ### Describe the results you expected: Test consistently passing successfully
non_process
step of tests is consistently failing description step missing permissions js of the tests is failing most of the time at first initial investigation by absoludity pointed to the absent missing permissions sentence in the error message checked by the test however it seems that there are two types of errors showing img width alt image src steps to reproduce the issue use oidc user not operator try to install a package from a global repo describe the results you received test not passing most of the time describe the results you expected test consistently passing successfully
0
571
3,036,745,079
IssuesEvent
2015-08-06 13:45:49
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
На главном портале починить получние контрольной суммы по алгоритму Луна
In process of testing test
Найти строчки: //TODO: Fix Alhoritm Luna //Number 2187501 must give тCRC=3 //Check: http://planetcalc.ru/2464/ и поправить алгоритм. ссылка на онлайн-калькулятор: http://planetcalc.ru/2464/ Пример: для числа "2187501" - калькулятор выдает контрольную сумму 3 так-же и на: https://test.igov.org.ua/order/search если задать 21875013 - напишет как и должно быть - запись не найдена (вместо сообщения что номер не валиден. т.к. тут тоже алгоритм работает. а вот если в консоли обозревателя запустить: var n = parseInt(2187501); var nFactor = 1; var nCRC = 0; var nAddend; while (n !== 0){ nAddend = Math.round(nFactor * (n % 10)); nFactor = (nFactor === 2) ? 1 : 2; nAddend = nAddend > 9 ? nAddend - 9 : nAddend; nCRC += nAddend; n /= 10; } alert(nCRC%10); - вернет 7 (что неверно) П.С.: другие описания алгоритма: https://ru.wikipedia.org/wiki/%D0%90%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC_%D0%9B%D1%83%D0%BD%D0%B0 https://gist.github.com/dimiork/75d8b1288171fba45d71
1.0
На главном портале починить получние контрольной суммы по алгоритму Луна - Найти строчки: //TODO: Fix Alhoritm Luna //Number 2187501 must give тCRC=3 //Check: http://planetcalc.ru/2464/ и поправить алгоритм. ссылка на онлайн-калькулятор: http://planetcalc.ru/2464/ Пример: для числа "2187501" - калькулятор выдает контрольную сумму 3 так-же и на: https://test.igov.org.ua/order/search если задать 21875013 - напишет как и должно быть - запись не найдена (вместо сообщения что номер не валиден. т.к. тут тоже алгоритм работает. а вот если в консоли обозревателя запустить: var n = parseInt(2187501); var nFactor = 1; var nCRC = 0; var nAddend; while (n !== 0){ nAddend = Math.round(nFactor * (n % 10)); nFactor = (nFactor === 2) ? 1 : 2; nAddend = nAddend > 9 ? nAddend - 9 : nAddend; nCRC += nAddend; n /= 10; } alert(nCRC%10); - вернет 7 (что неверно) П.С.: другие описания алгоритма: https://ru.wikipedia.org/wiki/%D0%90%D0%BB%D0%B3%D0%BE%D1%80%D0%B8%D1%82%D0%BC_%D0%9B%D1%83%D0%BD%D0%B0 https://gist.github.com/dimiork/75d8b1288171fba45d71
process
на главном портале починить получние контрольной суммы по алгоритму луна найти строчки todo fix alhoritm luna number must give тcrc check и поправить алгоритм ссылка на онлайн калькулятор пример для числа калькулятор выдает контрольную сумму так же и на если задать напишет как и должно быть запись не найдена вместо сообщения что номер не валиден т к тут тоже алгоритм работает а вот если в консоли обозревателя запустить var n parseint var nfactor var ncrc var naddend while n naddend math round nfactor n nfactor nfactor naddend naddend naddend naddend ncrc naddend n alert ncrc вернет что неверно п с другие описания алгоритма
1
92,672
26,747,709,543
IssuesEvent
2023-01-30 17:07:08
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
Test failure due to inconsistencies related to MKL
stat:awaiting response type:build/install stalled comp:mkl TF 2.9
<details><summary>Click to expand!</summary> ### Issue Type Build/Install ### Have you reproduced the bug with TF nightly? No ### Source source ### Tensorflow Version 2.9.1 ### Custom Code No ### OS Platform and Distribution Linux RHEL 7 ### Python version 3.10 ### Bazel version 5.1.1 ### GCC/Compiler version 11.3 ### Current Behaviour? ```shell I see failing tests while running `bazel test`. The `testGetMemoryInfoCPU` is guarded by a skip `if test_util.IsMklEnabled():` And the other tests has a similar condition. So it looks like `IsMklEnabled` doesn't return true when it should. Further investigation leads to several macros in the build files: - `IsMklEnabled` returns true when `defined(INTEL_MKL) && defined(ENABLE_MKL)` - There is `if_mkl([":mkl_cpu_allocator"])` which is why `testGetMemoryInfoCPU` fails when true - `if_mkl` is true on `linux_x86_64` unconditionally - `if_mkl(["-DINTEL_MKL"])` - `if_enable_mkl(["-DENABLE_MKL"])` I'd suggest to simplify this so either there is MKL or there is not. Having multiple macros with different defaults doesn't make sense to me. Or maybe the default for "ENABLE_MKL" should be the same as for "INTEL_MKL", e.g. True on x86 ``` ### Standalone code to reproduce the issue ```shell bazel test ``` ### Relevant log output ```shell FAIL: testGetMemoryInfoCPU (__main__.ContextTest) ContextTest.testGetMemoryInfoCPU ---------------------------------------------------------------------- Traceback (most recent call last): File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/eager/context_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 2229, in decorated return func(self, *args, **kwargs) File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/eager/context_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context_test.py", line 141, in testGetMemoryInfoCPU with self.assertRaisesRegex(ValueError, 'Allocator stats not available'): AssertionError: ValueError not raised FAIL: test_simple (__main__.NodeFileWriterTest) NodeFileWriterTest.test_simple ---------------------------------------------------------------------- Traceback (most recent call last): File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/framework/node_file_writer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 2157, in decorated return func(self, *args, **kwargs) File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/framework/node_file_writer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/node_file_writer_test.py", line 142, in test_simple self.assertEqual(node_def1.op, 'MatMul') AssertionError: - _MklMatMul ? ---- + MatMul ``` </details>
1.0
Test failure due to inconsistencies related to MKL - <details><summary>Click to expand!</summary> ### Issue Type Build/Install ### Have you reproduced the bug with TF nightly? No ### Source source ### Tensorflow Version 2.9.1 ### Custom Code No ### OS Platform and Distribution Linux RHEL 7 ### Python version 3.10 ### Bazel version 5.1.1 ### GCC/Compiler version 11.3 ### Current Behaviour? ```shell I see failing tests while running `bazel test`. The `testGetMemoryInfoCPU` is guarded by a skip `if test_util.IsMklEnabled():` And the other tests has a similar condition. So it looks like `IsMklEnabled` doesn't return true when it should. Further investigation leads to several macros in the build files: - `IsMklEnabled` returns true when `defined(INTEL_MKL) && defined(ENABLE_MKL)` - There is `if_mkl([":mkl_cpu_allocator"])` which is why `testGetMemoryInfoCPU` fails when true - `if_mkl` is true on `linux_x86_64` unconditionally - `if_mkl(["-DINTEL_MKL"])` - `if_enable_mkl(["-DENABLE_MKL"])` I'd suggest to simplify this so either there is MKL or there is not. Having multiple macros with different defaults doesn't make sense to me. Or maybe the default for "ENABLE_MKL" should be the same as for "INTEL_MKL", e.g. True on x86 ``` ### Standalone code to reproduce the issue ```shell bazel test ``` ### Relevant log output ```shell FAIL: testGetMemoryInfoCPU (__main__.ContextTest) ContextTest.testGetMemoryInfoCPU ---------------------------------------------------------------------- Traceback (most recent call last): File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/eager/context_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 2229, in decorated return func(self, *args, **kwargs) File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/eager/context_test_cpu.runfiles/org_tensorflow/tensorflow/python/eager/context_test.py", line 141, in testGetMemoryInfoCPU with self.assertRaisesRegex(ValueError, 'Allocator stats not available'): AssertionError: ValueError not raised FAIL: test_simple (__main__.NodeFileWriterTest) NodeFileWriterTest.test_simple ---------------------------------------------------------------------- Traceback (most recent call last): File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/framework/node_file_writer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/test_util.py", line 2157, in decorated return func(self, *args, **kwargs) File "/dev/shm/jfg508/TensorFlow/2.9.1/foss-2022a/TensorFlow/bazel-root/bac5198b45911f6921886c0013c301e8/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/framework/node_file_writer_test_cpu.runfiles/org_tensorflow/tensorflow/python/framework/node_file_writer_test.py", line 142, in test_simple self.assertEqual(node_def1.op, 'MatMul') AssertionError: - _MklMatMul ? ---- + MatMul ``` </details>
non_process
test failure due to inconsistencies related to mkl click to expand issue type build install have you reproduced the bug with tf nightly no source source tensorflow version custom code no os platform and distribution linux rhel python version bazel version gcc compiler version current behaviour shell i see failing tests while running bazel test the testgetmemoryinfocpu is guarded by a skip if test util ismklenabled and the other tests has a similar condition so it looks like ismklenabled doesn t return true when it should further investigation leads to several macros in the build files ismklenabled returns true when defined intel mkl defined enable mkl there is if mkl which is why testgetmemoryinfocpu fails when true if mkl is true on linux unconditionally if mkl if enable mkl i d suggest to simplify this so either there is mkl or there is not having multiple macros with different defaults doesn t make sense to me or maybe the default for enable mkl should be the same as for intel mkl e g true on standalone code to reproduce the issue shell bazel test relevant log output shell fail testgetmemoryinfocpu main contexttest contexttest testgetmemoryinfocpu traceback most recent call last file dev shm tensorflow foss tensorflow bazel root execroot org tensorflow bazel out opt bin tensorflow python eager context test cpu runfiles org tensorflow tensorflow python framework test util py line in decorated return func self args kwargs file dev shm tensorflow foss tensorflow bazel root execroot org tensorflow bazel out opt bin tensorflow python eager context test cpu runfiles org tensorflow tensorflow python eager context test py line in testgetmemoryinfocpu with self assertraisesregex valueerror allocator stats not available assertionerror valueerror not raised fail test simple main nodefilewritertest nodefilewritertest test simple traceback most recent call last file dev shm tensorflow foss tensorflow bazel root execroot org tensorflow bazel out opt bin tensorflow python framework node file writer test cpu runfiles org tensorflow tensorflow python framework test util py line in decorated return func self args kwargs file dev shm tensorflow foss tensorflow bazel root execroot org tensorflow bazel out opt bin tensorflow python framework node file writer test cpu runfiles org tensorflow tensorflow python framework node file writer test py line in test simple self assertequal node op matmul assertionerror mklmatmul matmul
0
67,357
8,127,179,595
IssuesEvent
2018-08-17 07:00:08
JohnSegerstedt/Game1
https://api.github.com/repos/JohnSegerstedt/Game1
opened
Redesign canvas to NotDestroy between levels
redesign
Ex: If one would want to change something in the Bindings Menu, then with the current implementation that person would then have to change it on every level scene manually.
1.0
Redesign canvas to NotDestroy between levels - Ex: If one would want to change something in the Bindings Menu, then with the current implementation that person would then have to change it on every level scene manually.
non_process
redesign canvas to notdestroy between levels ex if one would want to change something in the bindings menu then with the current implementation that person would then have to change it on every level scene manually
0
10,112
13,044,162,209
IssuesEvent
2020-07-29 03:47:30
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `SysDateWithFsp` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `SysDateWithFsp` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `SysDateWithFsp` from TiDB - ## Description Port the scalar function `SysDateWithFsp` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @lonng ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function sysdatewithfsp from tidb description port the scalar function sysdatewithfsp from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb
1
11,693
14,544,062,810
IssuesEvent
2020-12-15 17:40:06
paul-buerkner/brms
https://api.github.com/repos/paul-buerkner/brms
closed
Error in Truncation For Distrbutional Parametrization
bug post-processing
based on a prior predictive check using pp_check(brms_fit), it does not look like the shifted lognormal, when using distributional coding over ndt, truncates properly under certain circumstances which I will describe below. First, here is the code used to make the model and run the prior predictive check: fit0 <- brm(formula = bf(formula = reaction_time | trunc(ub = 500) ~ 1, ndt ~ 1 + bigram_ideal_first_surprisal + block_num ), data = experiment_df, family = shifted_lognormal(), prior = c( set_prior("normal(-2,2)", class = "Intercept"), set_prior("normal(5.5,0.01)", class = "Intercept", dpar = "ndt"), set_prior("normal(0,0.01)", class = "b", coef = "bigram_ideal_first_surprisal", dpar = "ndt"), set_prior("normal(-0.05,0.001)", class = "b", coef = "block_num", dpar = "ndt") ), sample_prior = "only" ) pp_check(fit0) Interestingly, when make the priors on the inputs to ndt very small, I get a clear truncation at 500 in the pp_check() output, but when the priors on the inputs to ndt are big, then truncation fails.
1.0
Error in Truncation For Distrbutional Parametrization - based on a prior predictive check using pp_check(brms_fit), it does not look like the shifted lognormal, when using distributional coding over ndt, truncates properly under certain circumstances which I will describe below. First, here is the code used to make the model and run the prior predictive check: fit0 <- brm(formula = bf(formula = reaction_time | trunc(ub = 500) ~ 1, ndt ~ 1 + bigram_ideal_first_surprisal + block_num ), data = experiment_df, family = shifted_lognormal(), prior = c( set_prior("normal(-2,2)", class = "Intercept"), set_prior("normal(5.5,0.01)", class = "Intercept", dpar = "ndt"), set_prior("normal(0,0.01)", class = "b", coef = "bigram_ideal_first_surprisal", dpar = "ndt"), set_prior("normal(-0.05,0.001)", class = "b", coef = "block_num", dpar = "ndt") ), sample_prior = "only" ) pp_check(fit0) Interestingly, when make the priors on the inputs to ndt very small, I get a clear truncation at 500 in the pp_check() output, but when the priors on the inputs to ndt are big, then truncation fails.
process
error in truncation for distrbutional parametrization based on a prior predictive check using pp check brms fit it does not look like the shifted lognormal when using distributional coding over ndt truncates properly under certain circumstances which i will describe below first here is the code used to make the model and run the prior predictive check brm formula bf formula reaction time trunc ub ndt bigram ideal first surprisal block num data experiment df family shifted lognormal prior c set prior normal class intercept set prior normal class intercept dpar ndt set prior normal class b coef bigram ideal first surprisal dpar ndt set prior normal class b coef block num dpar ndt sample prior only pp check interestingly when make the priors on the inputs to ndt very small i get a clear truncation at in the pp check output but when the priors on the inputs to ndt are big then truncation fails
1
8,816
11,935,708,403
IssuesEvent
2020-04-02 09:03:26
BlesseNtumble/GalaxySpace
https://api.github.com/repos/BlesseNtumble/GalaxySpace
closed
Multiplayer Crash
1.12.2 in the process of correcting
1. Minecraft version: 1.12.2 2. Galacticraft version: GalacticraftCore-1.12.2-4.0.2.244 3. GalaxySpace version: GalaxySpace-1.12.2-2.0.12 4. AsmodeusCore version (for 2.0.1 version and above): AsmodeusCore-1.12.2-0.0.13 5. Side (Single player (SSP), Multiplayer (SMP), or SSP opened to LAN (LAN)): OPEN TO LAN ------------------------------------------------------------------------ Description of the issue: it says a fatal error occured when trying to join a local world (open to LAN)
1.0
Multiplayer Crash - 1. Minecraft version: 1.12.2 2. Galacticraft version: GalacticraftCore-1.12.2-4.0.2.244 3. GalaxySpace version: GalaxySpace-1.12.2-2.0.12 4. AsmodeusCore version (for 2.0.1 version and above): AsmodeusCore-1.12.2-0.0.13 5. Side (Single player (SSP), Multiplayer (SMP), or SSP opened to LAN (LAN)): OPEN TO LAN ------------------------------------------------------------------------ Description of the issue: it says a fatal error occured when trying to join a local world (open to LAN)
process
multiplayer crash minecraft version galacticraft version galacticraftcore galaxyspace version galaxyspace asmodeuscore version for version and above asmodeuscore side single player ssp multiplayer smp or ssp opened to lan lan open to lan description of the issue it says a fatal error occured when trying to join a local world open to lan
1
490
7,868,920,385
IssuesEvent
2018-06-24 06:47:53
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
OperationCanceledException during Formatting
Area-IDE Bug Tenet-Reliability
A customer found this in their activity log after editing JS/TS in a session with responsiveness issues. ``` System.OperationCanceledException: The operation was canceled. at System.Threading.CancellationToken.ThrowOperationCanceledException() at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at Roslyn.Utilities.TaskExtensions.WaitAndGetResult_CanCallOnBackground[T](Task`1 task, CancellationToken cancellationToken) at Microsoft.CodeAnalysis.Editor.Implementation.Formatting.FormatCommandHandler.ExecuteReturnOrTypeCommand(EditorCommandArgs args, Action nextHandler, CancellationToken cancellationToken) at Microsoft.CodeAnalysis.Editor.Implementation.Formatting.FormatCommandHandler.ExecuteCommand(ReturnKeyCommandArgs args, Action nextHandler, CommandExecutionContext context) at Microsoft.VisualStudio.Commanding.CommandHandlerExtensions.ExecuteCommand[T](ICommandHandler commandHandler, T args, Action nextCommandHandler, CommandExecutionContext executionContext) at Microsoft.VisualStudio.UI.Text.Commanding.Implementation.EditorCommandHandlerService.&lt;&gt;c__DisplayClass14_1`1.&lt;Execute&gt;b__1() at Microsoft.VisualStudio.Text.Utilities.GuardedOperations.CallExtensionPoint(Object errorSource, Action call) --- End of stack trace from previous location where exception was thrown --- at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject) ``` My guess is that JS/TS formatting was too slow and either the user or a timeout attempted to cancel the operation. Perhaps OperationCanceledException is not supposed to bubble out of a command handler?
True
OperationCanceledException during Formatting - A customer found this in their activity log after editing JS/TS in a session with responsiveness issues. ``` System.OperationCanceledException: The operation was canceled. at System.Threading.CancellationToken.ThrowOperationCanceledException() at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken) at Roslyn.Utilities.TaskExtensions.WaitAndGetResult_CanCallOnBackground[T](Task`1 task, CancellationToken cancellationToken) at Microsoft.CodeAnalysis.Editor.Implementation.Formatting.FormatCommandHandler.ExecuteReturnOrTypeCommand(EditorCommandArgs args, Action nextHandler, CancellationToken cancellationToken) at Microsoft.CodeAnalysis.Editor.Implementation.Formatting.FormatCommandHandler.ExecuteCommand(ReturnKeyCommandArgs args, Action nextHandler, CommandExecutionContext context) at Microsoft.VisualStudio.Commanding.CommandHandlerExtensions.ExecuteCommand[T](ICommandHandler commandHandler, T args, Action nextCommandHandler, CommandExecutionContext executionContext) at Microsoft.VisualStudio.UI.Text.Commanding.Implementation.EditorCommandHandlerService.&lt;&gt;c__DisplayClass14_1`1.&lt;Execute&gt;b__1() at Microsoft.VisualStudio.Text.Utilities.GuardedOperations.CallExtensionPoint(Object errorSource, Action call) --- End of stack trace from previous location where exception was thrown --- at Microsoft.VisualStudio.Telemetry.WindowsErrorReporting.WatsonReport.GetClrWatsonExceptionInfo(Exception exceptionObject) ``` My guess is that JS/TS formatting was too slow and either the user or a timeout attempted to cancel the operation. Perhaps OperationCanceledException is not supposed to bubble out of a command handler?
non_process
operationcanceledexception during formatting a customer found this in their activity log after editing js ts in a session with responsiveness issues system operationcanceledexception the operation was canceled at system threading cancellationtoken throwoperationcanceledexception at system threading tasks task wait millisecondstimeout cancellationtoken cancellationtoken at roslyn utilities taskextensions waitandgetresult cancallonbackground task task cancellationtoken cancellationtoken at microsoft codeanalysis editor implementation formatting formatcommandhandler executereturnortypecommand editorcommandargs args action nexthandler cancellationtoken cancellationtoken at microsoft codeanalysis editor implementation formatting formatcommandhandler executecommand returnkeycommandargs args action nexthandler commandexecutioncontext context at microsoft visualstudio commanding commandhandlerextensions executecommand icommandhandler commandhandler t args action nextcommandhandler commandexecutioncontext executioncontext at microsoft visualstudio ui text commanding implementation editorcommandhandlerservice lt gt c lt execute gt b at microsoft visualstudio text utilities guardedoperations callextensionpoint object errorsource action call end of stack trace from previous location where exception was thrown at microsoft visualstudio telemetry windowserrorreporting watsonreport getclrwatsonexceptioninfo exception exceptionobject my guess is that js ts formatting was too slow and either the user or a timeout attempted to cancel the operation perhaps operationcanceledexception is not supposed to bubble out of a command handler
0
20,476
27,132,451,530
IssuesEvent
2023-02-16 10:44:53
googleapis/python-language
https://api.github.com/repos/googleapis/python-language
closed
.github/.OwlBot.lock.yaml is broken.
type: process api: language
This repo will not receive automatic updates until this issue is fixed. * YAMLException: can not read a block mapping entry; a multiline key may not be an implicit key (19:1) 16 | digest: sha256:f62c53736eccb0 ... 17 | 18 | trigger ci 19 | ------^
1.0
.github/.OwlBot.lock.yaml is broken. - This repo will not receive automatic updates until this issue is fixed. * YAMLException: can not read a block mapping entry; a multiline key may not be an implicit key (19:1) 16 | digest: sha256:f62c53736eccb0 ... 17 | 18 | trigger ci 19 | ------^
process
github owlbot lock yaml is broken this repo will not receive automatic updates until this issue is fixed yamlexception can not read a block mapping entry a multiline key may not be an implicit key digest trigger ci
1
930
3,398,472,077
IssuesEvent
2015-12-02 04:04:58
DarkEnergyScienceCollaboration/SRM_Task_List
https://api.github.com/repos/DarkEnergyScienceCollaboration/SRM_Task_List
opened
T:pd3.1:Reuse
ci DC3 DC3 SW: Implement the DESC-modified L2 reprocessing pipeline. Reprocess DC3 Data and Make Accessible for Analysis SW
DC3 SW: Replicate the Project/DM L2 pipeline technology to reprocess DC3 simulated data.
2.0
T:pd3.1:Reuse - DC3 SW: Replicate the Project/DM L2 pipeline technology to reprocess DC3 simulated data.
process
t reuse sw replicate the project dm pipeline technology to reprocess simulated data
1
10,314
13,157,376,296
IssuesEvent
2020-08-10 12:38:37
utopia-rise/godot-kotlin
https://api.github.com/repos/utopia-rise/godot-kotlin
closed
Drop redundant RPCMode enum
tools:annotation-processor tools:annotations wrapper:godot-library
Atm we have two different enums for RPCMode. This was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname. With this issue we should drop the manually created enum and change the generated one to drop the RPC_ENUM prefix
1.0
Drop redundant RPCMode enum - Atm we have two different enums for RPCMode. This was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname. With this issue we should drop the manually created enum and change the generated one to drop the RPC_ENUM prefix
process
drop redundant rpcmode enum atm we have two different enums for rpcmode this was from a time where we used the annotations as dependencies inside the annotation processor rather than the hardcoded fqname with this issue we should drop the manually created enum and change the generated one to drop the rpc enum prefix
1
28,712
2,710,974,958
IssuesEvent
2015-04-09 00:33:04
golang/go
https://api.github.com/repos/golang/go
closed
cmd/cgo: wrong value of exported variables in a DLL
accepted os-windows priority-later release-none repo-main size-l
by **hwang.dev**: <pre>What steps will reproduce the problem? 1. Create a DLL with an exported variable(which has been properly initialized within the DLL).e.g. (the code is in the attachment) In &quot;adll.h&quot; typedef struct { char *name; void (*bar)(); } T; __declspec(dllexport) const T exported_var; In &quot;adll.c&quot; const T exported_var = {&quot;var&quot;, &amp;bar_impl}; 2. Write a go program to access the variable: fmt.Println(C.exported_var, C.GoString(C.exported_var.name)) C.foo(&amp;C.exported_var) // foo calls exported_var.bar What is the expected output? 1. The correct value of C.exported_var can be obtained. 2. C.foo(&amp;C.exported_var) calls exported_var.bar without a problem. Or An error message that the use case cannot be supported. What do you see instead? 1. It just compiles without an error message. 2. During run, wrong values got. 3. C.foo(&amp;C.exported_var) causes panic with message &quot;runtime error: invalid memory address or nil pointer dereference&quot;. Which compiler are you using (5g, 6g, 8g, gccgo)? 6g Which operating system are you using? Windows 7 64 bit Which version are you using? (run 'go version') go1.0.3 C compiler: Mingw-w64/MSYS/gcc 4.7.1 Please provide any additional information below. I've found a workaround that a wrapper written in C will be able to get the correct variable value. e.g. /* const T* getVar() { return &amp;exported_var; } */ import &quot;C&quot; and then: fmt.Println(C.getVar(), C.GoString(C.getVar().name))</pre> Attachments: 1. <a href="https://storage.googleapis.com/go-attachment/4339/0/dllvar.zip">dllvar.zip</a> (1326 bytes)
1.0
cmd/cgo: wrong value of exported variables in a DLL - by **hwang.dev**: <pre>What steps will reproduce the problem? 1. Create a DLL with an exported variable(which has been properly initialized within the DLL).e.g. (the code is in the attachment) In &quot;adll.h&quot; typedef struct { char *name; void (*bar)(); } T; __declspec(dllexport) const T exported_var; In &quot;adll.c&quot; const T exported_var = {&quot;var&quot;, &amp;bar_impl}; 2. Write a go program to access the variable: fmt.Println(C.exported_var, C.GoString(C.exported_var.name)) C.foo(&amp;C.exported_var) // foo calls exported_var.bar What is the expected output? 1. The correct value of C.exported_var can be obtained. 2. C.foo(&amp;C.exported_var) calls exported_var.bar without a problem. Or An error message that the use case cannot be supported. What do you see instead? 1. It just compiles without an error message. 2. During run, wrong values got. 3. C.foo(&amp;C.exported_var) causes panic with message &quot;runtime error: invalid memory address or nil pointer dereference&quot;. Which compiler are you using (5g, 6g, 8g, gccgo)? 6g Which operating system are you using? Windows 7 64 bit Which version are you using? (run 'go version') go1.0.3 C compiler: Mingw-w64/MSYS/gcc 4.7.1 Please provide any additional information below. I've found a workaround that a wrapper written in C will be able to get the correct variable value. e.g. /* const T* getVar() { return &amp;exported_var; } */ import &quot;C&quot; and then: fmt.Println(C.getVar(), C.GoString(C.getVar().name))</pre> Attachments: 1. <a href="https://storage.googleapis.com/go-attachment/4339/0/dllvar.zip">dllvar.zip</a> (1326 bytes)
non_process
cmd cgo wrong value of exported variables in a dll by hwang dev what steps will reproduce the problem create a dll with an exported variable which has been properly initialized within the dll e g the code is in the attachment in quot adll h quot typedef struct char name void bar t declspec dllexport const t exported var in quot adll c quot const t exported var quot var quot amp bar impl write a go program to access the variable fmt println c exported var c gostring c exported var name c foo amp c exported var foo calls exported var bar what is the expected output the correct value of c exported var can be obtained c foo amp c exported var calls exported var bar without a problem or an error message that the use case cannot be supported what do you see instead it just compiles without an error message during run wrong values got c foo amp c exported var causes panic with message quot runtime error invalid memory address or nil pointer dereference quot which compiler are you using gccgo which operating system are you using windows bit which version are you using run go version c compiler mingw msys gcc please provide any additional information below i ve found a workaround that a wrapper written in c will be able to get the correct variable value e g const t getvar return amp exported var import quot c quot and then fmt println c getvar c gostring c getvar name attachments a href bytes
0
219,696
7,345,063,058
IssuesEvent
2018-03-07 16:21:20
solgenomics/sgn
https://api.github.com/repos/solgenomics/sgn
closed
trial comparison tool fails to draw on cassavabse
low priority
The trial comparison tool mysteriously stopped working on cassavabase/cassava-devel.
1.0
trial comparison tool fails to draw on cassavabse - The trial comparison tool mysteriously stopped working on cassavabase/cassava-devel.
non_process
trial comparison tool fails to draw on cassavabse the trial comparison tool mysteriously stopped working on cassavabase cassava devel
0
6,488
9,558,835,115
IssuesEvent
2019-05-03 15:08:48
mick-warehime/sixth_corp
https://api.github.com/repos/mick-warehime/sixth_corp
closed
Upgrade decision scene viewing
development process ui
Add a layout to better organize text. Implement drawing based on layouts. Figure out how to fit text to a given sized rect.
1.0
Upgrade decision scene viewing - Add a layout to better organize text. Implement drawing based on layouts. Figure out how to fit text to a given sized rect.
process
upgrade decision scene viewing add a layout to better organize text implement drawing based on layouts figure out how to fit text to a given sized rect
1
39,384
9,422,217,111
IssuesEvent
2019-04-11 08:52:09
contao/contao
https://api.github.com/repos/contao/contao
closed
"Save and new" action for form fields only works for admin users
defect
**Affected version(s)** 4.4.36, but probably all versions until now **Description** Creating a form field as a non-administrative user using the "Save and new" feature produces a "Forbidden" error **How to reproduce** -Log in using a non-admin account -Open the form generator -Go to the list of form fields of an existing form or create a new one and edit its fields -Add a new form field of any type -Instead of clicking "Save" or "Save and close", click on the tiny arrow on the right side of the "Save and close" button and select "Save and create" You should now see the following error page: ``` Forbidden What's the problem? Not enough permissions to access form ID . ``` **Additional information** As you can see, the error message is missing the ID of the form. The error is thrown in the core-bundle's `dca/tl_form_field.php` in function `checkPermissions()`. Administrators don't run into trouble here because permissions won't be checked at all. For non-admins, the following code snippet causes the error: ``` case 'create': if (!\strlen(Contao\Input::get('id')) || !\in_array(Contao\Input::get('id'), $root)) { throw new Contao\CoreBundle\Exception\AccessDeniedException('Not enough permissions to access form ID ' . Contao\Input::get('id') . '.'); } break; ``` `Contao\Input::get('id')` returns an empty string because there is no ID set in the URL, which for example looks like this: <domain>/contao?do=form&table=tl_form_field&act=create&mode=1&pid=19&rt=<token>
1.0
"Save and new" action for form fields only works for admin users - **Affected version(s)** 4.4.36, but probably all versions until now **Description** Creating a form field as a non-administrative user using the "Save and new" feature produces a "Forbidden" error **How to reproduce** -Log in using a non-admin account -Open the form generator -Go to the list of form fields of an existing form or create a new one and edit its fields -Add a new form field of any type -Instead of clicking "Save" or "Save and close", click on the tiny arrow on the right side of the "Save and close" button and select "Save and create" You should now see the following error page: ``` Forbidden What's the problem? Not enough permissions to access form ID . ``` **Additional information** As you can see, the error message is missing the ID of the form. The error is thrown in the core-bundle's `dca/tl_form_field.php` in function `checkPermissions()`. Administrators don't run into trouble here because permissions won't be checked at all. For non-admins, the following code snippet causes the error: ``` case 'create': if (!\strlen(Contao\Input::get('id')) || !\in_array(Contao\Input::get('id'), $root)) { throw new Contao\CoreBundle\Exception\AccessDeniedException('Not enough permissions to access form ID ' . Contao\Input::get('id') . '.'); } break; ``` `Contao\Input::get('id')` returns an empty string because there is no ID set in the URL, which for example looks like this: <domain>/contao?do=form&table=tl_form_field&act=create&mode=1&pid=19&rt=<token>
non_process
save and new action for form fields only works for admin users affected version s but probably all versions until now description creating a form field as a non administrative user using the save and new feature produces a forbidden error how to reproduce log in using a non admin account open the form generator go to the list of form fields of an existing form or create a new one and edit its fields add a new form field of any type instead of clicking save or save and close click on the tiny arrow on the right side of the save and close button and select save and create you should now see the following error page forbidden what s the problem not enough permissions to access form id additional information as you can see the error message is missing the id of the form the error is thrown in the core bundle s dca tl form field php in function checkpermissions administrators don t run into trouble here because permissions won t be checked at all for non admins the following code snippet causes the error case create if strlen contao input get id in array contao input get id root throw new contao corebundle exception accessdeniedexception not enough permissions to access form id contao input get id break contao input get id returns an empty string because there is no id set in the url which for example looks like this contao do form table tl form field act create mode pid rt
0
180,915
13,965,236,340
IssuesEvent
2020-10-25 21:32:25
Parabeac/Parabeac-Core
https://api.github.com/repos/Parabeac/Parabeac-Core
opened
Visual Generation [Integration Testing]
Needs a Test
**Input:** Design Node Tree **Output:** Intermediate Tree **Description:** Check that the Interpretation of visual elements is correct
1.0
Visual Generation [Integration Testing] - **Input:** Design Node Tree **Output:** Intermediate Tree **Description:** Check that the Interpretation of visual elements is correct
non_process
visual generation input design node tree output intermediate tree description check that the interpretation of visual elements is correct
0
150,819
19,633,960,352
IssuesEvent
2022-01-08 01:01:07
samqws-marketing/walmartlabs-concord
https://api.github.com/repos/samqws-marketing/walmartlabs-concord
opened
CVE-2021-44878 (Medium) detected in pac4j-oidc-4.0.0-RC3.jar
security vulnerability
## CVE-2021-44878 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pac4j-oidc-4.0.0-RC3.jar</b></p></summary> <p>Profile & Authentication Client for Java</p> <p>Library home page: <a href="https://github.com/pac4j/pac4j">https://github.com/pac4j/pac4j</a></p> <p>Path to dependency file: /server/dist/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/pac4j/pac4j-oidc/4.0.0-RC3/pac4j-oidc-4.0.0-RC3.jar,/y/org/pac4j/pac4j-oidc/4.0.0-RC3/pac4j-oidc-4.0.0-RC3.jar</p> <p> Dependency Hierarchy: - :x: **pac4j-oidc-4.0.0-RC3.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Pac4j v5.1 and earlier allows (by default) clients to accept and successfully validate ID Tokens with "none" algorithm (i.e., tokens with no signature) which is not secure and violates the OpenID Core Specification. The "none" algorithm does not require any signature verification when validating the ID tokens, which allows the attacker to bypass the token validation by injecting a malformed ID token using "none" as the value of "alg" key in the header with an empty signature value. 
<p>Publish Date: 2022-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44878>CVE-2021-44878</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44878">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44878</a></p> <p>Release Date: 2022-01-06</p> <p>Fix Resolution: org.pac4j:pac4j-oidc:5.2.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.pac4j","packageName":"pac4j-oidc","packageVersion":"4.0.0-RC3","packageFilePaths":["/server/dist/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.pac4j:pac4j-oidc:4.0.0-RC3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.pac4j:pac4j-oidc:5.2.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44878","vulnerabilityDetails":"Pac4j v5.1 and earlier allows (by default) clients to accept and successfully validate ID Tokens with \"none\" algorithm (i.e., tokens with no signature) which is not secure and violates the 
OpenID Core Specification. The \"none\" algorithm does not require any signature verification when validating the ID tokens, which allows the attacker to bypass the token validation by injecting a malformed ID token using \"none\" as the value of \"alg\" key in the header with an empty signature value.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44878","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-44878 (Medium) detected in pac4j-oidc-4.0.0-RC3.jar - ## CVE-2021-44878 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pac4j-oidc-4.0.0-RC3.jar</b></p></summary> <p>Profile & Authentication Client for Java</p> <p>Library home page: <a href="https://github.com/pac4j/pac4j">https://github.com/pac4j/pac4j</a></p> <p>Path to dependency file: /server/dist/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/pac4j/pac4j-oidc/4.0.0-RC3/pac4j-oidc-4.0.0-RC3.jar,/y/org/pac4j/pac4j-oidc/4.0.0-RC3/pac4j-oidc-4.0.0-RC3.jar</p> <p> Dependency Hierarchy: - :x: **pac4j-oidc-4.0.0-RC3.jar** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Pac4j v5.1 and earlier allows (by default) clients to accept and successfully validate ID Tokens with "none" algorithm (i.e., tokens with no signature) which is not secure and violates the OpenID Core Specification. The "none" algorithm does not require any signature verification when validating the ID tokens, which allows the attacker to bypass the token validation by injecting a malformed ID token using "none" as the value of "alg" key in the header with an empty signature value. 
<p>Publish Date: 2022-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44878>CVE-2021-44878</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44878">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44878</a></p> <p>Release Date: 2022-01-06</p> <p>Fix Resolution: org.pac4j:pac4j-oidc:5.2.0</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.pac4j","packageName":"pac4j-oidc","packageVersion":"4.0.0-RC3","packageFilePaths":["/server/dist/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.pac4j:pac4j-oidc:4.0.0-RC3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.pac4j:pac4j-oidc:5.2.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44878","vulnerabilityDetails":"Pac4j v5.1 and earlier allows (by default) clients to accept and successfully validate ID Tokens with \"none\" algorithm (i.e., tokens with no signature) which is not secure and violates the 
OpenID Core Specification. The \"none\" algorithm does not require any signature verification when validating the ID tokens, which allows the attacker to bypass the token validation by injecting a malformed ID token using \"none\" as the value of \"alg\" key in the header with an empty signature value.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44878","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in oidc jar cve medium severity vulnerability vulnerable library oidc jar profile authentication client for java library home page a href path to dependency file server dist pom xml path to vulnerable library home wss scanner repository org oidc oidc jar y org oidc oidc jar dependency hierarchy x oidc jar vulnerable library found in base branch master vulnerability details and earlier allows by default clients to accept and successfully validate id tokens with none algorithm i e tokens with no signature which is not secure and violates the openid core specification the none algorithm does not require any signature verification when validating the id tokens which allows the attacker to bypass the token validation by injecting a malformed id token using none as the value of alg key in the header with an empty signature value publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org oidc check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org oidc isminimumfixversionavailable true minimumfixversion org oidc isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails and earlier allows by default clients to accept and successfully validate id tokens with none algorithm i e tokens with no signature which is not secure and violates the openid core specification the none algorithm does not require any signature verification when validating the id tokens which allows the attacker to bypass the token validation by injecting a malformed id token using none as the value of alg key in 
the header with an empty signature value vulnerabilityurl
0
28,625
12,891,974,045
IssuesEvent
2020-07-13 18:43:55
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Include general guidance/approach for limiting traffic in/out of the AKS subnet to meet customer specific security requirements
Pri2 assigned-to-author container-service/svc doc-enhancement triaged
The section https://docs.microsoft.com/en-us/azure/aks/concepts-security#azure-network-security-groups notes that the NSGs managed by AKS allow traffic to flow appropriately by provide access to the API server, and are automatically modified by AKS in response to creation of load balancers etc. _The NSG rules that AKS applies to NICs reflect the default behavior that all traffic is allowed within the VNET._ In cases where traffic needs to be restricted within the VNET to meet compliance requirements, it would be helpful to have specific guidance about how to do that while not interfering with communication needed for the cluster to operate correctly. A specific example would be if a customer has a requirement to limit access to the AKS cluster to specific subnets over specific ports. If the AKS created/managed NSG should not be modified, then one approach could be to add a customer managed NSG and rule on the AKS subnet to limit traffic based on source subnet CIDR and target port, as long as that NSG did not block traffic required for load balancer access, platform communication, and egress as documented here: https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic. That approach may sound reasonable, but without some acknowledgment in the docs that additional restrictions may be needed and what approach to take (with constraints/considerations) customers would be left wondering if they should be thinking about this differently. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e0fcc83c-c911-4d82-98af-ed8880ca6c4a * Version Independent ID: 00462d96-9a30-8cef-5932-c99bab237706 * Content: [Concepts - Security in Azure Kubernetes Services (AKS)](https://docs.microsoft.com/en-us/azure/aks/concepts-security#azure-network-security-groups) * Content Source: [articles/aks/concepts-security.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/concepts-security.md) * Service: **container-service** * GitHub Login: @mlearned * Microsoft Alias: **mlearned**
1.0
Include general guidance/approach for limiting traffic in/out of the AKS subnet to meet customer specific security requirements - The section https://docs.microsoft.com/en-us/azure/aks/concepts-security#azure-network-security-groups notes that the NSGs managed by AKS allow traffic to flow appropriately by provide access to the API server, and are automatically modified by AKS in response to creation of load balancers etc. _The NSG rules that AKS applies to NICs reflect the default behavior that all traffic is allowed within the VNET._ In cases where traffic needs to be restricted within the VNET to meet compliance requirements, it would be helpful to have specific guidance about how to do that while not interfering with communication needed for the cluster to operate correctly. A specific example would be if a customer has a requirement to limit access to the AKS cluster to specific subnets over specific ports. If the AKS created/managed NSG should not be modified, then one approach could be to add a customer managed NSG and rule on the AKS subnet to limit traffic based on source subnet CIDR and target port, as long as that NSG did not block traffic required for load balancer access, platform communication, and egress as documented here: https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic. That approach may sound reasonable, but without some acknowledgment in the docs that additional restrictions may be needed and what approach to take (with constraints/considerations) customers would be left wondering if they should be thinking about this differently. --- #### Document Details ⚠ *Do not edit this section. 
It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e0fcc83c-c911-4d82-98af-ed8880ca6c4a * Version Independent ID: 00462d96-9a30-8cef-5932-c99bab237706 * Content: [Concepts - Security in Azure Kubernetes Services (AKS)](https://docs.microsoft.com/en-us/azure/aks/concepts-security#azure-network-security-groups) * Content Source: [articles/aks/concepts-security.md](https://github.com/Microsoft/azure-docs/blob/master/articles/aks/concepts-security.md) * Service: **container-service** * GitHub Login: @mlearned * Microsoft Alias: **mlearned**
non_process
include general guidance approach for limiting traffic in out of the aks subnet to meet customer specific security requirements the section notes that the nsgs managed by aks allow traffic to flow appropriately by provide access to the api server and are automatically modified by aks in response to creation of load balancers etc the nsg rules that aks applies to nics reflect the default behavior that all traffic is allowed within the vnet in cases where traffic needs to be restricted within the vnet to meet compliance requirements it would be helpful to have specific guidance about how to do that while not interfering with communication needed for the cluster to operate correctly a specific example would be if a customer has a requirement to limit access to the aks cluster to specific subnets over specific ports if the aks created managed nsg should not be modified then one approach could be to add a customer managed nsg and rule on the aks subnet to limit traffic based on source subnet cidr and target port as long as that nsg did not block traffic required for load balancer access platform communication and egress as documented here that approach may sound reasonable but without some acknowledgment in the docs that additional restrictions may be needed and what approach to take with constraints considerations customers would be left wondering if they should be thinking about this differently document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
0
376,940
11,160,373,913
IssuesEvent
2019-12-26 09:28:58
StudioErrilhl/HSS
https://api.github.com/repos/StudioErrilhl/HSS
closed
Bug: infinite loop happening while working on bike
bug priority 1
If I chose to work on the bike by skipping to Monday after intro, (not having talked to Jules or done anything like that), and having gotten jumped to the bathroom after started working on the bike, taken a shower, and then going back, it will for some reason infinite loop and crash the game. It loops on the _repeat_event(6) (which should in theory not be possible to have repeat itself)
1.0
Bug: infinite loop happening while working on bike - If I chose to work on the bike by skipping to Monday after intro, (not having talked to Jules or done anything like that), and having gotten jumped to the bathroom after started working on the bike, taken a shower, and then going back, it will for some reason infinite loop and crash the game. It loops on the _repeat_event(6) (which should in theory not be possible to have repeat itself)
non_process
bug infinite loop happening while working on bike if i chose to work on the bike by skipping to monday after intro not having talked to jules or done anything like that and having gotten jumped to the bathroom after started working on the bike taken a shower and then going back it will for some reason infinite loop and crash the game it loops on the repeat event which should in theory not be possible to have repeat itself
0
11,050
13,883,464,178
IssuesEvent
2020-10-18 12:07:11
lishu/vscode-svg2
https://api.github.com/repos/lishu/vscode-svg2
closed
Implemen Emmet-style autocomplete
In process
When editing a `.svg` file, I can't seem to get autocompletion to trigger. It always says "No suggestions". Presumably it's some setting I have that's interfering with things, but I can't for the life of me figure out which one. How can I get SVG autocomplete to work again?
1.0
Implemen Emmet-style autocomplete - When editing a `.svg` file, I can't seem to get autocompletion to trigger. It always says "No suggestions". Presumably it's some setting I have that's interfering with things, but I can't for the life of me figure out which one. How can I get SVG autocomplete to work again?
process
implemen emmet style autocomplete when editing a svg file i can t seem to get autocompletion to trigger it always says no suggestions presumably it s some setting i have that s interfering with things but i can t for the life of me figure out which one how can i get svg autocomplete to work again
1
6,999
10,145,146,806
IssuesEvent
2019-08-05 02:43:50
processing-r/Processing.R
https://api.github.com/repos/processing-r/Processing.R
closed
Transfer the ownership from gaocegege to The Processing Foundation
community/processing priority/p3 size/small status/WIP type/help-wanted
I have the idea to transfer the ownership from me to The Processing Foundation, and this issue will be blocked until Evaluation 2 ends. When it ends, we will talk about it.
1.0
Transfer the ownership from gaocegege to The Processing Foundation - I have the idea to transfer the ownership from me to The Processing Foundation, and this issue will be blocked until Evaluation 2 ends. When it ends, we will talk about it.
process
transfer the ownership from gaocegege to the processing foundation i have the idea to transfer the ownership from me to the processing foundation and this issue will be blocked until evaluation ends when it ends we will talk about it
1
1,214
2,581,631,545
IssuesEvent
2015-02-14 07:46:58
arduino/Arduino
https://api.github.com/repos/arduino/Arduino
opened
Document other build path properties
Component: Documentation
The "Build Process" section of the hardware spec documents 2 global properties. https://github.com/arduino/Arduino/wiki/Arduino-IDE-1.5---3rd-party-Hardware-specification#build-process It should probably also document these: build.arch build.core.path build.system.path build.variant.path
1.0
Document other build path properties - The "Build Process" section of the hardware spec documents 2 global properties. https://github.com/arduino/Arduino/wiki/Arduino-IDE-1.5---3rd-party-Hardware-specification#build-process It should probably also document these: build.arch build.core.path build.system.path build.variant.path
non_process
document other build path properties the build process section of the hardware spec documents global properties it should probably also document these build arch build core path build system path build variant path
0
53,534
13,839,041,766
IssuesEvent
2020-10-14 07:19:42
FlipFloop/reactchat
https://api.github.com/repos/FlipFloop/reactchat
opened
CVE-2020-7720 (High) detected in node-forge-0.7.6.tgz, node-forge-0.9.2.tgz
security vulnerability
## CVE-2020-7720 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-forge-0.7.6.tgz</b>, <b>node-forge-0.9.2.tgz</b></p></summary> <p> <details><summary><b>node-forge-0.7.6.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.6.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.6.tgz</a></p> <p>Path to dependency file: reactchat/functions/package.json</p> <p>Path to vulnerable library: reactchat/functions/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - firebase-admin-8.13.0.tgz (Root Library) - :x: **node-forge-0.7.6.tgz** (Vulnerable Library) </details> <details><summary><b>node-forge-0.9.2.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.9.2.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.9.2.tgz</a></p> <p>Path to dependency file: reactchat/functions/package.json</p> <p>Path to vulnerable library: reactchat/functions/node_modules/google-p12-pem/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - firebase-admin-8.13.0.tgz (Root Library) - firestore-3.8.6.tgz - google-gax-1.15.3.tgz - google-auth-library-5.10.1.tgz - gtoken-4.1.4.tgz - google-p12-pem-2.0.4.tgz - :x: **node-forge-0.9.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/FlipFloop/reactchat/commit/89f31925cf6d50ed2dd105ff31a72b82dff868b0">89f31925cf6d50ed2dd105ff31a72b82dff868b0</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img 
src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package node-forge before 0.10.0 is vulnerable to Prototype Pollution via the util.setPath function. Note: Version 0.10.0 is a breaking change removing the vulnerable functions. <p>Publish Date: 2020-09-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7720>CVE-2020-7720</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/digitalbazaar/forge/blob/master/CHANGELOG.md">https://github.com/digitalbazaar/forge/blob/master/CHANGELOG.md</a></p> <p>Release Date: 2020-09-13</p> <p>Fix Resolution: node-forge - 0.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7720 (High) detected in node-forge-0.7.6.tgz, node-forge-0.9.2.tgz - ## CVE-2020-7720 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-forge-0.7.6.tgz</b>, <b>node-forge-0.9.2.tgz</b></p></summary> <p> <details><summary><b>node-forge-0.7.6.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.7.6.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.7.6.tgz</a></p> <p>Path to dependency file: reactchat/functions/package.json</p> <p>Path to vulnerable library: reactchat/functions/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - firebase-admin-8.13.0.tgz (Root Library) - :x: **node-forge-0.7.6.tgz** (Vulnerable Library) </details> <details><summary><b>node-forge-0.9.2.tgz</b></p></summary> <p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p> <p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.9.2.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.9.2.tgz</a></p> <p>Path to dependency file: reactchat/functions/package.json</p> <p>Path to vulnerable library: reactchat/functions/node_modules/google-p12-pem/node_modules/node-forge/package.json</p> <p> Dependency Hierarchy: - firebase-admin-8.13.0.tgz (Root Library) - firestore-3.8.6.tgz - google-gax-1.15.3.tgz - google-auth-library-5.10.1.tgz - gtoken-4.1.4.tgz - google-p12-pem-2.0.4.tgz - :x: **node-forge-0.9.2.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/FlipFloop/reactchat/commit/89f31925cf6d50ed2dd105ff31a72b82dff868b0">89f31925cf6d50ed2dd105ff31a72b82dff868b0</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package node-forge before 0.10.0 is vulnerable to Prototype Pollution via the util.setPath function. Note: Version 0.10.0 is a breaking change removing the vulnerable functions. <p>Publish Date: 2020-09-01 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7720>CVE-2020-7720</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/digitalbazaar/forge/blob/master/CHANGELOG.md">https://github.com/digitalbazaar/forge/blob/master/CHANGELOG.md</a></p> <p>Release Date: 2020-09-13</p> <p>Fix Resolution: node-forge - 0.10.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in node forge tgz node forge tgz cve high severity vulnerability vulnerable libraries node forge tgz node forge tgz node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file reactchat functions package json path to vulnerable library reactchat functions node modules node forge package json dependency hierarchy firebase admin tgz root library x node forge tgz vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file reactchat functions package json path to vulnerable library reactchat functions node modules google pem node modules node forge package json dependency hierarchy firebase admin tgz root library firestore tgz google gax tgz google auth library tgz gtoken tgz google pem tgz x node forge tgz vulnerable library found in head commit a href found in base branch main vulnerability details the package node forge before is vulnerable to prototype pollution via the util setpath function note version is a breaking change removing the vulnerable functions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource
0
78,666
15,047,392,218
IssuesEvent
2021-02-03 08:51:40
RayTracing/raytracing.github.io
https://api.github.com/repos/RayTracing/raytracing.github.io
closed
checker_texture is spatially defined, could or should be defined by surface coordinates (U,V)
area: book area: code book 2: Next Week level: major type: enhancement
Maybe this is an intended behavior. The checked texture didn't use uv coordinate at all...it just use the normal (which is parameter p) at the hit point. This get correct result in final scene, but the image of "Checkered spheres" is weird. ```C++ virtual color value(double u, double v, const point3& p) const { auto sines = sin(10*p.x())*sin(10*p.y())*sin(10*p.z()); if (sines < 0) return odd->value(u, v, p); else return even->value(u, v, p); } ``` If we use u,v in place of `p.x() p.y()`, we can get the following image: ![twosphere](https://user-images.githubusercontent.com/42328994/87169421-f8520400-c29d-11ea-8070-477c05b0a983.png) (But this will cause the texture in final scene weird)
1.0
checker_texture is spatially defined, could or should be defined by surface coordinates (U,V) - Maybe this is an intended behavior. The checked texture didn't use uv coordinate at all...it just use the normal (which is parameter p) at the hit point. This get correct result in final scene, but the image of "Checkered spheres" is weird. ```C++ virtual color value(double u, double v, const point3& p) const { auto sines = sin(10*p.x())*sin(10*p.y())*sin(10*p.z()); if (sines < 0) return odd->value(u, v, p); else return even->value(u, v, p); } ``` If we use u,v in place of `p.x() p.y()`, we can get the following image: ![twosphere](https://user-images.githubusercontent.com/42328994/87169421-f8520400-c29d-11ea-8070-477c05b0a983.png) (But this will cause the texture in final scene weird)
non_process
checker texture is spatially defined could or should be defined by surface coordinates u v maybe this is an intended behavior the checked texture didn t use uv coordinate at all it just use the normal which is parameter p at the hit point this get correct result in final scene but the image of checkered spheres is weird c virtual color value double u double v const p const auto sines sin p x sin p y sin p z if sines return odd value u v p else return even value u v p if we use u v in place of p x p y we can get the following image but this will cause the texture in final scene weird
0
12,648
3,284,114,627
IssuesEvent
2015-10-28 15:29:07
coreos/rkt
https://api.github.com/repos/coreos/rkt
opened
TestPodManifest fails
area/testing help wanted kind/bug
The TestPodManifest run that uses the memory isolator fails on my local machine like this: ``` 548608 run --mds-register=false --pod-manifest=/home/steveej/synchronized/github/coreos/rkt/build-coreos/tmp/functional/test-tmp/rkt-test-manifest-845308657 rkt_run_pod_manifest_test.go:544: Expected "Memory Limit: 4194304" but not found: rkt: using image from file /home/steveej/synchronized/github/coreos/rkt/build-coreos/bin/stage1-coreos.aci run: group "rkt" not found, will use default gid when rendering images [ 6429.084854] inspect[4]: Memory Limit: 9223372036854771712 FAIL exit status 1 FAIL github.com/coreos/rkt/tests 64.801s tests/functional.mk:44: recipe for target '/home/steveej/synchronized/github/coreos/rkt/build-coreos/stamps/__tests_functional_mk_functional_tests.stamp' failed make: *** [/home/steveej/synchronized/github/coreos/rkt/build-coreos/stamps/__tests_functional_mk_functional_tests.stamp] Error 1 ``` The memory limit is obviously not respected. Used components on my host: kernel 4.2.5 systemd 217 ``` $ mount | grep cgroup tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/run/current-system/systemd/lib/systemd/systemd-cgroups-agent,name=systemd) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls) ```
1.0
TestPodManifest fails - The TestPodManifest run that uses the memory isolator fails on my local machine like this: ``` 548608 run --mds-register=false --pod-manifest=/home/steveej/synchronized/github/coreos/rkt/build-coreos/tmp/functional/test-tmp/rkt-test-manifest-845308657 rkt_run_pod_manifest_test.go:544: Expected "Memory Limit: 4194304" but not found: rkt: using image from file /home/steveej/synchronized/github/coreos/rkt/build-coreos/bin/stage1-coreos.aci run: group "rkt" not found, will use default gid when rendering images [ 6429.084854] inspect[4]: Memory Limit: 9223372036854771712 FAIL exit status 1 FAIL github.com/coreos/rkt/tests 64.801s tests/functional.mk:44: recipe for target '/home/steveej/synchronized/github/coreos/rkt/build-coreos/stamps/__tests_functional_mk_functional_tests.stamp' failed make: *** [/home/steveej/synchronized/github/coreos/rkt/build-coreos/stamps/__tests_functional_mk_functional_tests.stamp] Error 1 ``` The memory limit is obviously not respected. Used components on my host: kernel 4.2.5 systemd 217 ``` $ mount | grep cgroup tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/run/current-system/systemd/lib/systemd/systemd-cgroups-agent,name=systemd) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls) ```
non_process
testpodmanifest fails the testpodmanifest run that uses the memory isolator fails on my local machine like this run mds register false pod manifest home steveej synchronized github coreos rkt build coreos tmp functional test tmp rkt test manifest rkt run pod manifest test go expected memory limit but not found rkt using image from file home steveej synchronized github coreos rkt build coreos bin coreos aci run group rkt not found will use default gid when rendering images inspect memory limit fail exit status fail github com coreos rkt tests tests functional mk recipe for target home steveej synchronized github coreos rkt build coreos stamps tests functional mk functional tests stamp failed make error the memory limit is obviously not respected used components on my host kernel systemd mount grep cgroup tmpfs on sys fs cgroup type tmpfs ro nosuid nodev noexec mode cgroup on sys fs cgroup systemd type cgroup rw nosuid nodev noexec relatime xattr release agent run current system systemd lib systemd systemd cgroups agent name systemd cgroup on sys fs cgroup cpuset type cgroup rw nosuid nodev noexec relatime cpuset cgroup on sys fs cgroup cpu cpuacct type cgroup rw nosuid nodev noexec relatime cpu cpuacct cgroup on sys fs cgroup blkio type cgroup rw nosuid nodev noexec relatime blkio cgroup on sys fs cgroup memory type cgroup rw nosuid nodev noexec relatime memory cgroup on sys fs cgroup devices type cgroup rw nosuid nodev noexec relatime devices cgroup on sys fs cgroup freezer type cgroup rw nosuid nodev noexec relatime freezer cgroup on sys fs cgroup net cls type cgroup rw nosuid nodev noexec relatime net cls
0
5,894
8,710,249,559
IssuesEvent
2018-12-06 15:58:35
Open-EO/openeo-api
https://api.github.com/repos/Open-EO/openeo-api
closed
Dedicated export processes in the process graph
job management processes vote
The definition of the process graph output format is located at the moment in the preview and job requests. However, IMHO it makes more sense to have export definitions in the process graph itself. If you have complex process graphs you might want to export intermediate results and statistical analysis data. It is more flexible to have dedicated export processes, in which you can set the export format directly. With export processes one can export many different formats in a single process graph. For example, the following process will export all its inputs as GTiff files somewhere within a process graph and pipes the inputs upstream to the next process, so that further processing is possible with the same data: ``` "process_id": "raster_export", "format": "GTiff", "imagery": { "process_id": "get_data", "data_id": "nc_spm_08.landsat.raster.elevation", "imagery": { "process_id": "get_data", "data_id": "nc_spm_08.landsat.raster.slope" } } ```
1.0
Dedicated export processes in the process graph - The definition of the process graph output format is located at the moment in the preview and job requests. However, IMHO it makes more sense to have export definitions in the process graph itself. If you have complex process graphs you might want to export intermediate results and statistical analysis data. It is more flexible to have dedicated export processes, in which you can set the export format directly. With export processes one can export many different formats in a single process graph. For example, the following process will export all its inputs as GTiff files somewhere within a process graph and pipes the inputs upstream to the next process, so that further processing is possible with the same data: ``` "process_id": "raster_export", "format": "GTiff", "imagery": { "process_id": "get_data", "data_id": "nc_spm_08.landsat.raster.elevation", "imagery": { "process_id": "get_data", "data_id": "nc_spm_08.landsat.raster.slope" } } ```
process
dedicated export processes in the process graph the definition of the process graph output format is located at the moment in the preview and job requests however imho it makes more sense to have export definitions in the process graph itself if you have complex process graphs you might want to export intermediate results and statistical analysis data it is more flexible to have dedicated export processes in which you can set the export format directly with export processes one can export many different formats in a single process graph for example the following process will export all its inputs as gtiff files somewhere within a process graph and pipes the inputs upstream to the next process so that further processing is possible with the same data process id raster export format gtiff imagery process id get data data id nc spm landsat raster elevation imagery process id get data data id nc spm landsat raster slope
1
21,476
3,511,977,548
IssuesEvent
2016-01-10 17:58:12
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
dnsdist tarball is missing LICENSE or COPYING file
defect dnsdist
pdns tarball has it, but dnsdist one doesn't
1.0
dnsdist tarball is missing LICENSE or COPYING file - pdns tarball has it, but dnsdist one doesn't
non_process
dnsdist tarball is missing license or copying file pdns tarball has it but dnsdist one doesn t
0
717,630
24,683,390,729
IssuesEvent
2022-10-19 00:21:36
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
reopened
drivers: peci: user space handlers not building correctly
bug priority: medium Stale area: PECI
**Describe the bug** PECI subsystem cannot be enabled with USERSPACE feature User handlers do not build correctly. Please also mention any information which could help others to understand the problem you're facing: - What target platform are you using? - What have you tried to diagnose or workaround this issue? - ... **To Reproduce** Steps to reproduce the behavior: 1) cd samples/drivers/peci 2) west build -b mec172xevb_assy6906 -- -DCONFIG_USERSPACE=y 3) See build error **Expected behavior** PECI handlers build correctly **Impact** High **Logs and console output** /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:9: error: implicit declaration of function 'Z_SYSCALL_DRIVER_PECI'; did you mean 'Z_SYSCALL_DRIVER_GEN'? [-Werror=implicit-function-declaration] 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~~~~~~~~~~~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:36: error: 'config' undeclared (first use in this function); did you mean 'mpu_config'? 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:36: note: each undeclared identifier is reported only once for each function it appears in 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_enable': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:22:36: error: 'enable' undeclared (first use in this function) 22 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, enable)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_disable': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:30:36: error: 'disable' undeclared (first use in this function) 30 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, disable)); | ^~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_transfer': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:41:36: error: 'transfer' undeclared (first use in this function) 41 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, transfer)); | ^~~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ **Environment (please complete the following information):** - OS: (e.g. Linux, MacOS, Windows) - Toolchain (e.g Zephyr SDK, ...) - Commit SHA or Version used **Additional context** Add any other context that could be relevant to your issue, such as pin setting, target configuration, ...
1.0
drivers: peci: user space handlers not building correctly - **Describe the bug** PECI subsystem cannot be enabled with USERSPACE feature User handlers do not build correctly. Please also mention any information which could help others to understand the problem you're facing: - What target platform are you using? - What have you tried to diagnose or workaround this issue? - ... **To Reproduce** Steps to reproduce the behavior: 1) cd samples/drivers/peci 2) west build -b mec172xevb_assy6906 -- -DCONFIG_USERSPACE=y 3) See build error **Expected behavior** PECI handlers build correctly **Impact** High **Logs and console output** /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:9: error: implicit declaration of function 'Z_SYSCALL_DRIVER_PECI'; did you mean 'Z_SYSCALL_DRIVER_GEN'? [-Werror=implicit-function-declaration] 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~~~~~~~~~~~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:36: error: 'config' undeclared (first use in this function); did you mean 'mpu_config'? 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:14:36: note: each undeclared identifier is reported only once for each function it appears in 14 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, config)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_enable': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:22:36: error: 'enable' undeclared (first use in this function) 22 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, enable)); | ^~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_disable': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:30:36: error: 'disable' undeclared (first use in this function) 30 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, disable)); | ^~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ | ^~~~ /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c: In function 'z_vrfy_peci_transfer': /home/jamezaar/ecwork/zephyr_fork/drivers/peci/peci_handlers.c:41:36: error: 'transfer' undeclared (first use in this function) 41 | Z_OOPS(Z_SYSCALL_DRIVER_PECI(dev, transfer)); | ^~~~~~~~ /home/jamezaar/ecwork/zephyr_fork/include/syscall_handler.h:293:7: note: in definition of macro 'Z_OOPS' 293 | if (expr) { \ **Environment (please complete the following information):** - OS: (e.g. Linux, MacOS, Windows) - Toolchain (e.g Zephyr SDK, ...) - Commit SHA or Version used **Additional context** Add any other context that could be relevant to your issue, such as pin setting, target configuration, ...
non_process
drivers peci user space handlers not building correctly describe the bug peci subsystem cannot be enabled with userspace feature user handlers do not build correctly please also mention any information which could help others to understand the problem you re facing what target platform are you using what have you tried to diagnose or workaround this issue to reproduce steps to reproduce the behavior cd samples drivers peci west build b dconfig userspace y see build error expected behavior peci handlers build correctly impact high logs and console output home jamezaar ecwork zephyr fork drivers peci peci handlers c error implicit declaration of function z syscall driver peci did you mean z syscall driver gen z oops z syscall driver peci dev config home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr home jamezaar ecwork zephyr fork drivers peci peci handlers c error config undeclared first use in this function did you mean mpu config z oops z syscall driver peci dev config home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr home jamezaar ecwork zephyr fork drivers peci peci handlers c note each undeclared identifier is reported only once for each function it appears in z oops z syscall driver peci dev config home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr home jamezaar ecwork zephyr fork drivers peci peci handlers c in function z vrfy peci enable home jamezaar ecwork zephyr fork drivers peci peci handlers c error enable undeclared first use in this function z oops z syscall driver peci dev enable home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr home jamezaar ecwork zephyr fork drivers peci peci handlers c in function z vrfy peci disable home jamezaar ecwork zephyr fork drivers peci peci handlers c error disable undeclared first use in this function z oops z syscall driver peci dev disable home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr home jamezaar ecwork zephyr fork drivers peci peci handlers c in function z vrfy peci transfer home jamezaar ecwork zephyr fork drivers peci peci handlers c error transfer undeclared first use in this function z oops z syscall driver peci dev transfer home jamezaar ecwork zephyr fork include syscall handler h note in definition of macro z oops if expr environment please complete the following information os e g linux macos windows toolchain e g zephyr sdk commit sha or version used additional context add any other context that could be relevant to your issue such as pin setting target configuration
0
394,204
27,023,857,538
IssuesEvent
2023-02-11 10:39:06
Lioydiano/Skydda
https://api.github.com/repos/Lioydiano/Skydda
closed
Regolamento
documentation
Il gioco richiede un regolamento sintetico ma completo per essere distribuito. Si suggerisce il formato Markdown (`.md`).
1.0
Regolamento - Il gioco richiede un regolamento sintetico ma completo per essere distribuito. Si suggerisce il formato Markdown (`.md`).
non_process
regolamento il gioco richiede un regolamento sintetico ma completo per essere distribuito si suggerisce il formato markdown md
0
5,562
8,403,529,866
IssuesEvent
2018-10-11 10:01:29
kiwicom/orbit-components
https://api.github.com/repos/kiwicom/orbit-components
closed
Loading_Button loading - The animation is not centered in IE
bug processing
<!--- Provide a general summary of the issue in the Title above --> ## Expected Behavior <!--- Tell us what should happen --> The animation should be in the center of the button ![image](https://user-images.githubusercontent.com/8585215/46681844-67438a00-cbec-11e8-9cfe-2b249e7aeffb.png) ## Current Behavior <!--- Tell us what happens instead of the expected behavior --> The animation is not centered and is not fully visible ![](https://d2mxuefqeaa7sj.cloudfront.net/s_ADF19995DB7DB7AC40745C602BBCB55AE2A9900DBD550D200ED96C7C7777CEAC_1539004288337_button+loading.gif) ## Storybook link https://kiwicom.github.io/orbit-components/?knob-Label=Date%20of%20birth&knob-Flex%5B0%5D=0%200%2060px&knob-Flex%5B1%5D=1%201%20100%25&knob-Flex%5B2%5D=0%200%2090px&selectedKind=Loading&selectedStory=Button%20loading&full=0&addons=1&stories=1&panelRight=0&addonPanel=storybook%2Factions%2Factions-panel
1.0
Loading_Button loading - The animation is not centered in IE - <!--- Provide a general summary of the issue in the Title above --> ## Expected Behavior <!--- Tell us what should happen --> The animation should be in the center of the button ![image](https://user-images.githubusercontent.com/8585215/46681844-67438a00-cbec-11e8-9cfe-2b249e7aeffb.png) ## Current Behavior <!--- Tell us what happens instead of the expected behavior --> The animation is not centered and is not fully visible ![](https://d2mxuefqeaa7sj.cloudfront.net/s_ADF19995DB7DB7AC40745C602BBCB55AE2A9900DBD550D200ED96C7C7777CEAC_1539004288337_button+loading.gif) ## Storybook link https://kiwicom.github.io/orbit-components/?knob-Label=Date%20of%20birth&knob-Flex%5B0%5D=0%200%2060px&knob-Flex%5B1%5D=1%201%20100%25&knob-Flex%5B2%5D=0%200%2090px&selectedKind=Loading&selectedStory=Button%20loading&full=0&addons=1&stories=1&panelRight=0&addonPanel=storybook%2Factions%2Factions-panel
process
loading button loading the animation is not centered in ie expected behavior the animation should be in the center of the button current behavior the animation is not centered and is not fully visible storybook link
1
18,368
24,496,366,918
IssuesEvent
2022-10-10 09:02:01
xcesco/kripton
https://api.github.com/repos/xcesco/kripton
closed
@BindSharedPreferences does not allow to specify shared preferences file name
bug shared-preferences module annotation-processor module
the name was taken directly from class name. There's a paramter to specify the filename but it is not considered
1.0
@BindSharedPreferences does not allow to specify shared preferences file name - the name was taken directly from class name. There's a paramter to specify the filename but it is not considered
process
bindsharedpreferences does not allow to specify shared preferences file name the name was taken directly from class name there s a paramter to specify the filename but it is not considered
1
155,297
12,245,602,838
IssuesEvent
2020-05-05 13:15:07
eclipse/kapua
https://api.github.com/repos/eclipse/kapua
closed
Create integration test for Scheduler Service
Test
**Is your feature request related to a problem? Please describe.** We need to add Unit and Integration test for the Scheduler Services. **Describe the solution you'd like** Integrating with Job feature, create test that starts Job in order to check: - CRUD - Simple interval triggering - Cron interval triggering - Processing on events **Describe alternatives you've considered** _None_ **Additional context** _None_
1.0
Create integration test for Scheduler Service - **Is your feature request related to a problem? Please describe.** We need to add Unit and Integration test for the Scheduler Services. **Describe the solution you'd like** Integrating with Job feature, create test that starts Job in order to check: - CRUD - Simple interval triggering - Cron interval triggering - Processing on events **Describe alternatives you've considered** _None_ **Additional context** _None_
non_process
create integration test for scheduler service is your feature request related to a problem please describe we need to add unit and integration test for the scheduler services describe the solution you d like integrating with job feature create test that starts job in order to check crud simple interval triggering cron interval triggering processing on events describe alternatives you ve considered none additional context none
0
2,611
5,367,776,235
IssuesEvent
2017-02-22 06:03:51
jlm2017/jlm-video-subtitles
https://api.github.com/repos/jlm2017/jlm-video-subtitles
closed
[subtitles] [en] Brexit : l'Europe actuelle, c'est la violence sociale, politique et guerrière
Language: English Process: [6] Approved
# Video title Brexit : l'Europe actuelle, c'est la violence sociale, politique et guerrière # URL https://www.youtube.com/watch?v=0bMoXv8Ru_A&t=2s # Youtube subtitle language Anglais # Duration 1:33 # URL subtitles https://www.youtube.com/timedtext_editor?bl=vmp&tab=captions&v=0bMoXv8Ru_A&ui=hd&action_mde_edit_form=1&lang=en&ref=player
1.0
[subtitles] [en] Brexit : l'Europe actuelle, c'est la violence sociale, politique et guerrière - # Video title Brexit : l'Europe actuelle, c'est la violence sociale, politique et guerrière # URL https://www.youtube.com/watch?v=0bMoXv8Ru_A&t=2s # Youtube subtitle language Anglais # Duration 1:33 # URL subtitles https://www.youtube.com/timedtext_editor?bl=vmp&tab=captions&v=0bMoXv8Ru_A&ui=hd&action_mde_edit_form=1&lang=en&ref=player
process
brexit l europe actuelle c est la violence sociale politique et guerrière video title brexit l europe actuelle c est la violence sociale politique et guerrière url youtube subtitle language anglais duration url subtitles
1
3,059
6,047,212,614
IssuesEvent
2017-06-12 13:59:40
eranhd/Anti-Drug-Jerusalem
https://api.github.com/repos/eranhd/Anti-Drug-Jerusalem
closed
עבור כל נ.צ במפה - לאפשר הוספת שם למיקום
in process Iteration 2
כיוון שיש להם מקומות מוגדרים נרצה להוסיף מקומות ספציפים לפפי המיקום שנבחר, לכן צריך רדיוס מסוים מאותו המיקום ואם נקודת הדיווח בתוך הרדיוס אז ניתן לה את שם הרדיוס
1.0
עבור כל נ.צ במפה - לאפשר הוספת שם למיקום - כיוון שיש להם מקומות מוגדרים נרצה להוסיף מקומות ספציפים לפפי המיקום שנבחר, לכן צריך רדיוס מסוים מאותו המיקום ואם נקודת הדיווח בתוך הרדיוס אז ניתן לה את שם הרדיוס
process
עבור כל נ צ במפה לאפשר הוספת שם למיקום כיוון שיש להם מקומות מוגדרים נרצה להוסיף מקומות ספציפים לפפי המיקום שנבחר לכן צריך רדיוס מסוים מאותו המיקום ואם נקודת הדיווח בתוך הרדיוס אז ניתן לה את שם הרדיוס
1
160,844
13,798,206,977
IssuesEvent
2020-10-10 00:26:21
ember-learn/super-rentals-tutorial
https://api.github.com/repos/ember-learn/super-rentals-tutorial
opened
[WIP] Add links for learning more (Part 3 of 4)
Hacktoberfest documentation good first issue help wanted
## Background When the Super Rentals tutorial was revamped for Octane edition, many pages in the [Ember Guides](https://guides.emberjs.com/) were in flux. Possibly as a result, the tutorial (this repo) contains many link placeholders, marked with the keyword `TODO`. The links are meant to provide additional learning resources to the reader. ## Problem Let's fill in links for the following pages in the tutorial: - `07-reusable-components` - `08-working-with-data` - `09-route-params` You can find the Markdown files in the `src/markdown/tutorial` directory. ## TODOs To be announced.
1.0
[WIP] Add links for learning more (Part 3 of 4) - ## Background When the Super Rentals tutorial was revamped for Octane edition, many pages in the [Ember Guides](https://guides.emberjs.com/) were in flux. Possibly as a result, the tutorial (this repo) contains many link placeholders, marked with the keyword `TODO`. The links are meant to provide additional learning resources to the reader. ## Problem Let's fill in links for the following pages in the tutorial: - `07-reusable-components` - `08-working-with-data` - `09-route-params` You can find the Markdown files in the `src/markdown/tutorial` directory. ## TODOs To be announced.
non_process
add links for learning more part of background when the super rentals tutorial was revamped for octane edition many pages in the were in flux possibly as a result the tutorial this repo contains many link placeholders marked with the keyword todo the links are meant to provide additional learning resources to the reader problem let s fill in links for the following pages in the tutorial reusable components working with data route params you can find the markdown files in the src markdown tutorial directory todos to be announced
0
124,471
26,463,701,864
IssuesEvent
2023-01-16 20:31:36
github/roadmap
https://api.github.com/repos/github/roadmap
closed
Commenting on unchanged lines in a pull request (GHES)
github enterprise code server pull requests
### Summary Review comments are the primary way reviewers provide feedback to the author and other reviewers during the review phase of a pull request, but currently comments can only be added to the 3 lines before or after a changed line. We intend to allow users to add comments and suggest changes on any line in any file changed in the pull request. ![image](https://user-images.githubusercontent.com/2503052/144141747-af9a93b1-13c0-4a90-aa8a-97cca814ac08.png) ### Intended Outcome This will improve the review experience and increase review quality by allowing users to comment on lines that were not changed (but maybe should have been). Users will no longer need to add a comment to a random changed line and reference a line number when providing feedback about missing a change. ### How will it work? A users can add a comment, reply to a comment, or suggest changes to any line of a changed file, not just the 3 lines surrounding a change.
1.0
Commenting on unchanged lines in a pull request (GHES) - ### Summary Review comments are the primary way reviewers provide feedback to the author and other reviewers during the review phase of a pull request, but currently comments can only be added to the 3 lines before or after a changed line. We intend to allow users to add comments and suggest changes on any line in any file changed in the pull request. ![image](https://user-images.githubusercontent.com/2503052/144141747-af9a93b1-13c0-4a90-aa8a-97cca814ac08.png) ### Intended Outcome This will improve the review experience and increase review quality by allowing users to comment on lines that were not changed (but maybe should have been). Users will no longer need to add a comment to a random changed line and reference a line number when providing feedback about missing a change. ### How will it work? A users can add a comment, reply to a comment, or suggest changes to any line of a changed file, not just the 3 lines surrounding a change.
non_process
commenting on unchanged lines in a pull request ghes summary review comments are the primary way reviewers provide feedback to the author and other reviewers during the review phase of a pull request but currently comments can only be added to the lines before or after a changed line we intend to allow users to add comments and suggest changes on any line in any file changed in the pull request intended outcome this will improve the review experience and increase review quality by allowing users to comment on lines that were not changed but maybe should have been users will no longer need to add a comment to a random changed line and reference a line number when providing feedback about missing a change how will it work a users can add a comment reply to a comment or suggest changes to any line of a changed file not just the lines surrounding a change
0
13,224
15,691,311,318
IssuesEvent
2021-03-25 17:42:55
unicode-org/icu4x
https://api.github.com/repos/unicode-org/icu4x
opened
Add conventional comment reminders to PRs
C-process T-task
We should consider adding a link to https://conventionalcomments.org/ from the PR template to remind reviewers to use precise language.
1.0
Add conventional comment reminders to PRs - We should consider adding a link to https://conventionalcomments.org/ from the PR template to remind reviewers to use precise language.
process
add conventional comment reminders to prs we should consider adding a link to from the pr template to remind reviewers to use precise language
1
22,464
31,241,306,435
IssuesEvent
2023-08-20 22:24:15
Warzone2100/map-submission
https://api.github.com/repos/Warzone2100/map-submission
opened
[MAP]: RB-Usefull-P2
map unprocessed
### Upload Map [4c-RB-Usefull-P2-b978774ef086f7300bacf0bfa3211dad94ace97efbc2a9bbfdbab77d0430c902.zip](https://github.com/Warzone2100/map-submission/files/12389317/4c-RB-Usefull-P2-b978774ef086f7300bacf0bfa3211dad94ace97efbc2a9bbfdbab77d0430c902.zip) ### Authorship Other: This map was made by someone else ### Map Description (optional) _No response_ ### Notes for Reviewers (optional) _No response_
1.0
[MAP]: RB-Usefull-P2 - ### Upload Map [4c-RB-Usefull-P2-b978774ef086f7300bacf0bfa3211dad94ace97efbc2a9bbfdbab77d0430c902.zip](https://github.com/Warzone2100/map-submission/files/12389317/4c-RB-Usefull-P2-b978774ef086f7300bacf0bfa3211dad94ace97efbc2a9bbfdbab77d0430c902.zip) ### Authorship Other: This map was made by someone else ### Map Description (optional) _No response_ ### Notes for Reviewers (optional) _No response_
process
rb usefull upload map authorship other this map was made by someone else map description optional no response notes for reviewers optional no response
1
4,438
7,311,832,166
IssuesEvent
2018-02-28 19:01:47
kvakulo/Switcheroo
https://api.github.com/repos/kvakulo/Switcheroo
closed
[feature request] Make Alt key work as Return
enhancement in process
I would like to use switcheroo with alt-tab with one hand: alt-tab to pop-pup switcheroo, then alt to switch to the selected window. Being forced to press return to select a window forces me to use two hands. Seems so far tab key is not used in switcheroo, so it would'nt do any harm.
1.0
[feature request] Make Alt key work as Return - I would like to use switcheroo with alt-tab with one hand: alt-tab to pop-pup switcheroo, then alt to switch to the selected window. Being forced to press return to select a window forces me to use two hands. Seems so far tab key is not used in switcheroo, so it would'nt do any harm.
process
make alt key work as return i would like to use switcheroo with alt tab with one hand alt tab to pop pup switcheroo then alt to switch to the selected window being forced to press return to select a window forces me to use two hands seems so far tab key is not used in switcheroo so it would nt do any harm
1
12,352
14,885,083,529
IssuesEvent
2021-01-20 15:19:41
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Site participant registry > Page is not getting refreshed automatically when user decommissioned/activated the site for a study
Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
AR : Site participant registry > Page is not getting refreshed automatically when user decommissioned/activated the site for a study ER : Page should be refreshed automatically https://user-images.githubusercontent.com/71445210/104170683-8c588000-5427-11eb-846f-807ba539a8ba.mp4
3.0
Site participant registry > Page is not getting refreshed automatically when user decommissioned/activated the site for a study - AR : Site participant registry > Page is not getting refreshed automatically when user decommissioned/activated the site for a study ER : Page should be refreshed automatically https://user-images.githubusercontent.com/71445210/104170683-8c588000-5427-11eb-846f-807ba539a8ba.mp4
process
site participant registry page is not getting refreshed automatically when user decommissioned activated the site for a study ar site participant registry page is not getting refreshed automatically when user decommissioned activated the site for a study er page should be refreshed automatically
1
19,963
26,442,249,949
IssuesEvent
2023-01-16 02:14:48
sophgo/tpu-mlir
https://api.github.com/repos/sophgo/tpu-mlir
closed
Codegen should be generated after final milr
bug processing
Currently, final mlir file generated after codegen, like this: ``` shell tpuc-opt Conv2d_int8_sym.mlir --strip-io-quant="quant_input=False quant_output=False" --weight-reorder --subnet-divide --layer-group --address-assign --save-weight --codegen="model_file=Conv2d_int8_sym.bmodel" --mlir-print-debuginfo -o Conv2d_int8_sym_final.mlir ``` It's not correct. codegen should follow final mlir. Test By: ``` shell # after build,then execute: test_onnx.py Conv2d ```
1.0
Codegen should be generated after final milr - Currently, final mlir file generated after codegen, like this: ``` shell tpuc-opt Conv2d_int8_sym.mlir --strip-io-quant="quant_input=False quant_output=False" --weight-reorder --subnet-divide --layer-group --address-assign --save-weight --codegen="model_file=Conv2d_int8_sym.bmodel" --mlir-print-debuginfo -o Conv2d_int8_sym_final.mlir ``` It's not correct. codegen should follow final mlir. Test By: ``` shell # after build,then execute: test_onnx.py Conv2d ```
process
codegen should be generated after final milr currently final mlir file generated after codegen like this shell tpuc opt sym mlir strip io quant quant input false quant output false weight reorder subnet divide layer group address assign save weight codegen model file sym bmodel mlir print debuginfo o sym final mlir it s not correct codegen should follow final mlir test by shell after build,then execute test onnx py
1
147,242
11,779,745,607
IssuesEvent
2020-03-16 18:37:08
ansible/awx
https://api.github.com/repos/ansible/awx
closed
Templates List - Hookup Sort Functionality
component:ui_next state:needs_test
## Templates List Mockup: https://tower-mockups.testing.ansible.com/patternfly/templates/templates/ * Hookup sort dropdown options * Name (Ascending/Descending) * Modified (Ascending/Descending) * Last Job Run (Ascending/Descending) * Inventory (Ascending/Descending) * Project (Ascending/Descending) * Mark strings for translation <img width="1409" alt="Screen Shot 2019-05-22 at 3 26 25 PM" src="https://user-images.githubusercontent.com/15881645/58202667-22b1be00-7ca6-11e9-8c8a-db4cb05f0de2.png">
1.0
Templates List - Hookup Sort Functionality - ## Templates List Mockup: https://tower-mockups.testing.ansible.com/patternfly/templates/templates/ * Hookup sort dropdown options * Name (Ascending/Descending) * Modified (Ascending/Descending) * Last Job Run (Ascending/Descending) * Inventory (Ascending/Descending) * Project (Ascending/Descending) * Mark strings for translation <img width="1409" alt="Screen Shot 2019-05-22 at 3 26 25 PM" src="https://user-images.githubusercontent.com/15881645/58202667-22b1be00-7ca6-11e9-8c8a-db4cb05f0de2.png">
non_process
templates list hookup sort functionality templates list mockup hookup sort dropdown options name ascending descending modified ascending descending last job run ascending descending inventory ascending descending project ascending descending mark strings for translation img width alt screen shot at pm src
0
8,742
11,870,326,022
IssuesEvent
2020-03-26 12:36:24
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
`prisma2 reset` support
kind/feature process/candidate
## Problem When testing, it is necessary for some scenarios to reset database data. With Prisma 1 we had the CLI option `prisma reset`. ## Solution Provide a function programmatically to reset database data. ## Alternatives If the solution requires work, a CLI option works for now. ## Additional context I know Prisma 2 is not production-ready, but this feature is a must since we as a developer can automate tests and catch even more bugs not only for our apps but also for Prisma 2 and all its ecosystem.
1.0
`prisma2 reset` support - ## Problem When testing, it is necessary for some scenarios to reset database data. With Prisma 1 we had the CLI option `prisma reset`. ## Solution Provide a function programmatically to reset database data. ## Alternatives If the solution requires work, a CLI option works for now. ## Additional context I know Prisma 2 is not production-ready, but this feature is a must since we as a developer can automate tests and catch even more bugs not only for our apps but also for Prisma 2 and all its ecosystem.
process
reset support problem when testing it is necessary for some scenarios to reset database data with prisma we had the cli option prisma reset solution provide a function programmatically to reset database data alternatives if the solution requires work a cli option works for now additional context i know prisma is not production ready but this feature is a must since we as a developer can automate tests and catch even more bugs not only for our apps but also for prisma and all its ecosystem
1
9,774
12,794,011,148
IssuesEvent
2020-07-02 05:50:43
aiidateam/aiida-core
https://api.github.com/repos/aiidateam/aiida-core
closed
Add the flag `--paused` to `verdi process list`
priority/nice-to-have topic/processes topic/verdi type/accepted feature
This should filter processes that are paused which can be useful to find those that have been paused due to the exponential backoff mechanism
1.0
Add the flag `--paused` to `verdi process list` - This should filter processes that are paused which can be useful to find those that have been paused due to the exponential backoff mechanism
process
add the flag paused to verdi process list this should filter processes that are paused which can be useful to find those that have been paused due to the exponential backoff mechanism
1
136,155
12,701,932,174
IssuesEvent
2020-06-22 19:08:05
GabrielBFelix/EBook
https://api.github.com/repos/GabrielBFelix/EBook
opened
Checar / Criar o Documento de Modelos
documentation
Preenche-lo com o Modelo Conceitual, Modelo de Dados e o Dicionário de Dados, no formato Markdown, coloque no diretório "docs" do repositório;
1.0
Checar / Criar o Documento de Modelos - Preenche-lo com o Modelo Conceitual, Modelo de Dados e o Dicionário de Dados, no formato Markdown, coloque no diretório "docs" do repositório;
non_process
checar criar o documento de modelos preenche lo com o modelo conceitual modelo de dados e o dicionário de dados no formato markdown coloque no diretório docs do repositório
0
9,168
12,221,713,271
IssuesEvent
2020-05-02 09:29:03
naoki-shigehisa/paper
https://api.github.com/repos/naoki-shigehisa/paper
closed
Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data
2003 Gaussian Process あとで調べる
## 0. 論文 タイトル:[Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data](https://papers.nips.cc/paper/2540-gaussian-process-latent-variable-models-for-visualisation-of-high-dimensional-data.pdf) 著者: ![スクリーンショット 2020-04-29 21 37 59](https://user-images.githubusercontent.com/43877096/80596681-b61c7300-8a61-11ea-9314-16b1b420f355.png) arXiv投稿日: 学会/ジャーナル:NIPS 2003 ## 1. どんなもの? PCAをガウス過程として解釈したGPLVMを提案 ## 2. 先行研究と比べてどこがすごい? 確率論的解釈を利用することで、真に生成的なアルゴリズムになっている ## 3. 技術や手法のキモはどこ? - Sparsificatio データをサブセットで表してスパース化することで計算を高速化 - Latent Variable Optimisatio ガウス分布としてデータをデータスペースに投影 ![スクリーンショット 2020-05-02 18 26 33](https://user-images.githubusercontent.com/43877096/80860416-785a5d00-8ca2-11ea-8c10-a0d6ea3b2f0a.png) - Kernel Optimisatio 以下に関して最適化 ![スクリーンショット 2020-05-02 18 26 39](https://user-images.githubusercontent.com/43877096/80860425-7ee8d480-8ca2-11ea-8bcd-173c949abea7.png) ↓全体象 ![スクリーンショット 2020-05-02 18 28 09](https://user-images.githubusercontent.com/43877096/80860472-b6578100-8ca2-11ea-8ee6-d39e4cc8bfcd.png) ## 4. どうやって有効だと検証した? 3つのデータセットで検証 - Oil data ![スクリーンショット 2020-05-02 18 13 58](https://user-images.githubusercontent.com/43877096/80860130-b5bdeb00-8ca0-11ea-8297-6eb8e561200e.png) ![スクリーンショット 2020-05-02 18 14 59](https://user-images.githubusercontent.com/43877096/80860146-da19c780-8ca0-11ea-8be4-bb78f41f9102.png) - digit images ![スクリーンショット 2020-05-02 18 16 48](https://user-images.githubusercontent.com/43877096/80860195-1b11dc00-8ca1-11ea-898d-016e5f45bb83.png) - face data ![スクリーンショット 2020-05-02 18 18 14](https://user-images.githubusercontent.com/43877096/80860225-4e546b00-8ca1-11ea-9f43-51a8bac5df94.png) ## 5. 議論はある? 難しかった 別途調べる ## 6. 次に読むべき論文は? GTM
1.0
Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data - ## 0. 論文 タイトル:[Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data](https://papers.nips.cc/paper/2540-gaussian-process-latent-variable-models-for-visualisation-of-high-dimensional-data.pdf) 著者: ![スクリーンショット 2020-04-29 21 37 59](https://user-images.githubusercontent.com/43877096/80596681-b61c7300-8a61-11ea-9314-16b1b420f355.png) arXiv投稿日: 学会/ジャーナル:NIPS 2003 ## 1. どんなもの? PCAをガウス過程として解釈したGPLVMを提案 ## 2. 先行研究と比べてどこがすごい? 確率論的解釈を利用することで、真に生成的なアルゴリズムになっている ## 3. 技術や手法のキモはどこ? - Sparsificatio データをサブセットで表してスパース化することで計算を高速化 - Latent Variable Optimisatio ガウス分布としてデータをデータスペースに投影 ![スクリーンショット 2020-05-02 18 26 33](https://user-images.githubusercontent.com/43877096/80860416-785a5d00-8ca2-11ea-8c10-a0d6ea3b2f0a.png) - Kernel Optimisatio 以下に関して最適化 ![スクリーンショット 2020-05-02 18 26 39](https://user-images.githubusercontent.com/43877096/80860425-7ee8d480-8ca2-11ea-8bcd-173c949abea7.png) ↓全体象 ![スクリーンショット 2020-05-02 18 28 09](https://user-images.githubusercontent.com/43877096/80860472-b6578100-8ca2-11ea-8ee6-d39e4cc8bfcd.png) ## 4. どうやって有効だと検証した? 3つのデータセットで検証 - Oil data ![スクリーンショット 2020-05-02 18 13 58](https://user-images.githubusercontent.com/43877096/80860130-b5bdeb00-8ca0-11ea-8297-6eb8e561200e.png) ![スクリーンショット 2020-05-02 18 14 59](https://user-images.githubusercontent.com/43877096/80860146-da19c780-8ca0-11ea-8be4-bb78f41f9102.png) - digit images ![スクリーンショット 2020-05-02 18 16 48](https://user-images.githubusercontent.com/43877096/80860195-1b11dc00-8ca1-11ea-898d-016e5f45bb83.png) - face data ![スクリーンショット 2020-05-02 18 18 14](https://user-images.githubusercontent.com/43877096/80860225-4e546b00-8ca1-11ea-9f43-51a8bac5df94.png) ## 5. 議論はある? 難しかった 別途調べる ## 6. 次に読むべき論文は? GTM
process
gaussian process latent variable models for visualisation of high dimensional data 論文 タイトル: gaussian process latent variable models for visualisation of high dimensional data 著者: arxiv投稿日: 学会 ジャーナル:nips どんなもの? pcaをガウス過程として解釈したgplvmを提案 先行研究と比べてどこがすごい? 確率論的解釈を利用することで、真に生成的なアルゴリズムになっている 技術や手法のキモはどこ? sparsificatio データをサブセットで表してスパース化することで計算を高速化 latent variable optimisatio ガウス分布としてデータをデータスペースに投影 kernel optimisatio 以下に関して最適化 ↓全体象 どうやって有効だと検証した? oil data digit images face data 議論はある? 難しかった 別途調べる 次に読むべき論文は? gtm
1
146,880
19,471,203,277
IssuesEvent
2021-12-24 01:38:36
artsking/linux-5.12.9
https://api.github.com/repos/artsking/linux-5.12.9
opened
WS-2021-0472 (Medium) detected in linuxv5.12.17
security vulnerability
## WS-2021-0472 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.12.17</b></p></summary> <p> <p>Linux kernel stable tree mirror</p> <p>Library home page: <a href=https://github.com/gregkh/linux.git>https://github.com/gregkh/linux.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/batman-adv/bridge_loop_avoidance.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Linux/Kernel is vulnerable to error handling in net/batman-adv/bridge_loop_avoidance.c <p>Publish Date: 2021-11-29 <p>URL: <a href=https://github.com/gregkh/linux/commit/a8f7359259dd5923adc6129284fdad12fc5db347>WS-2021-0472</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002137">https://osv.dev/vulnerability/UVI-2021-1002137</a></p> <p>Release Date: 2021-11-29</p> <p>Fix Resolution: Linux/Kernel - v5.4.157, v5.10.77, v5.14.16, v5.15, 5.16-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2021-0472 (Medium) detected in linuxv5.12.17 - ## WS-2021-0472 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.12.17</b></p></summary> <p> <p>Linux kernel stable tree mirror</p> <p>Library home page: <a href=https://github.com/gregkh/linux.git>https://github.com/gregkh/linux.git</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/batman-adv/bridge_loop_avoidance.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Linux/Kernel is vulnerable to error handling in net/batman-adv/bridge_loop_avoidance.c <p>Publish Date: 2021-11-29 <p>URL: <a href=https://github.com/gregkh/linux/commit/a8f7359259dd5923adc6129284fdad12fc5db347>WS-2021-0472</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002137">https://osv.dev/vulnerability/UVI-2021-1002137</a></p> <p>Release Date: 2021-11-29</p> <p>Fix Resolution: Linux/Kernel - v5.4.157, v5.10.77, v5.14.16, v5.15, 5.16-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in ws medium severity vulnerability vulnerable library linux kernel stable tree mirror library home page a href found in base branch master vulnerable source files net batman adv bridge loop avoidance c vulnerability details in linux kernel is vulnerable to error handling in net batman adv bridge loop avoidance c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with whitesource
0
252,796
8,042,135,099
IssuesEvent
2018-07-31 07:02:28
sunwukonga/BBB
https://api.github.com/repos/sunwukonga/BBB
opened
[HomeScreen] FlatLists on home screen are hard coded
medium priority
FlatLists on [HomeScreen] should populate using the countryCode stored in AsyncStorage, not a hard-coded value.
1.0
[HomeScreen] FlatLists on home screen are hard coded - FlatLists on [HomeScreen] should populate using the countryCode stored in AsyncStorage, not a hard-coded value.
non_process
flatlists on home screen are hard coded flatlists on should populate using the countrycode stored in asyncstorage not a hard coded value
0
9,484
2,615,153,235
IssuesEvent
2015-03-01 06:30:54
chrsmith/reaver-wps
https://api.github.com/repos/chrsmith/reaver-wps
opened
This is a query not an issue: regarding saving position.
auto-migrated Priority-Triage Type-Defect
``` Good morning and please pardon my ignorance. My query is regarding the saving of the session. Does Reaver automatically save position as it running or only when you stop the app and it says it has saved the session? My main reason of question is that the router I am testing at the moment (Provided by Sky broadband UK) allows between 20 and 24 (unsure why it changes) attempts and then will always lock out for 1 hour. So my estimated maximum calculation going based on 20 keys per hour to reach 11000 keys is 500 hours which is about 21 days to crack this password. Now that is a very long time! and if my computer was to crash and it hadnt saved automatically that could be a devastating loss of time. I appreciate you response greatly since I know this isnt an issue of the application. Thanks, Dean. ``` Original issue reported on code.google.com by `sa...@phoenx.co.uk` on 4 Feb 2012 at 7:14
1.0
This is a query not an issue: regarding saving position. - ``` Good morning and please pardon my ignorance. My query is regarding the saving of the session. Does Reaver automatically save position as it running or only when you stop the app and it says it has saved the session? My main reason of question is that the router I am testing at the moment (Provided by Sky broadband UK) allows between 20 and 24 (unsure why it changes) attempts and then will always lock out for 1 hour. So my estimated maximum calculation going based on 20 keys per hour to reach 11000 keys is 500 hours which is about 21 days to crack this password. Now that is a very long time! and if my computer was to crash and it hadnt saved automatically that could be a devastating loss of time. I appreciate you response greatly since I know this isnt an issue of the application. Thanks, Dean. ``` Original issue reported on code.google.com by `sa...@phoenx.co.uk` on 4 Feb 2012 at 7:14
non_process
this is a query not an issue regarding saving position good morning and please pardon my ignorance my query is regarding the saving of the session does reaver automatically save position as it running or only when you stop the app and it says it has saved the session my main reason of question is that the router i am testing at the moment provided by sky broadband uk allows between and unsure why it changes attempts and then will always lock out for hour so my estimated maximum calculation going based on keys per hour to reach keys is hours which is about days to crack this password now that is a very long time and if my computer was to crash and it hadnt saved automatically that could be a devastating loss of time i appreciate you response greatly since i know this isnt an issue of the application thanks dean original issue reported on code google com by sa phoenx co uk on feb at
0