Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
7
112
repo_url
stringlengths
36
141
action
stringclasses
3 values
title
stringlengths
1
744
labels
stringlengths
4
574
body
stringlengths
9
211k
index
stringclasses
10 values
text_combine
stringlengths
96
211k
label
stringclasses
2 values
text
stringlengths
96
188k
binary_label
int64
0
1
666,089
22,342,078,341
IssuesEvent
2022-06-15 02:35:39
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Words missing in viewport help message.
type: bug E2E-core stage: fire watch epic:ui-ux-improvements priority: highest
### Current behavior When clicking the viewport size message, it seems some words are missing. ![image](https://user-images.githubusercontent.com/8130013/173511455-e825bf90-22bd-43b1-8789-d4cbc7e4c8c5.png) What is `your`? I guess it's `config file`, right? ### Desired behavior Correct help message should be shown. ### Test code to reproduce Click viewport button. ### Cypress Version 10.1.0 ### Other _No response_
1.0
Words missing in viewport help message. - ### Current behavior When clicking the viewport size message, it seems some words are missing. ![image](https://user-images.githubusercontent.com/8130013/173511455-e825bf90-22bd-43b1-8789-d4cbc7e4c8c5.png) What is `your`? I guess it's `config file`, right? ### Desired behavior Correct help message should be shown. ### Test code to reproduce Click viewport button. ### Cypress Version 10.1.0 ### Other _No response_
non_process
words missing in viewport help message current behavior when clicking the viewport size message it seems some words are missing what is your i guess it s config file right desired behavior correct help message should be shown test code to reproduce click viewport button cypress version other no response
0
102,247
16,550,508,701
IssuesEvent
2021-05-28 08:02:29
AlexRogalskiy/typescript-tools
https://api.github.com/repos/AlexRogalskiy/typescript-tools
closed
CVE-2021-32640 (Medium) detected in ws-5.2.2.tgz
Status: Invalid security vulnerability
## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-5.2.2.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-5.2.2.tgz">https://registry.npmjs.org/ws/-/ws-5.2.2.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: typescript-tools/node_modules/turndown/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - typedoc-plugin-markdown-1.2.1.tgz (Root Library) - turndown-5.0.3.tgz - jsdom-11.12.0.tgz - :x: **ws-5.2.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/83f7ab279f5fd89309ff7ff41be80bbcdb23330e">83f7ab279f5fd89309ff7ff41be80bbcdb23330e</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: ws - 7.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-32640 (Medium) detected in ws-5.2.2.tgz - ## CVE-2021-32640 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ws-5.2.2.tgz</b></p></summary> <p>Simple to use, blazing fast and thoroughly tested websocket client and server for Node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/ws/-/ws-5.2.2.tgz">https://registry.npmjs.org/ws/-/ws-5.2.2.tgz</a></p> <p>Path to dependency file: typescript-tools/package.json</p> <p>Path to vulnerable library: typescript-tools/node_modules/turndown/node_modules/ws/package.json</p> <p> Dependency Hierarchy: - typedoc-plugin-markdown-1.2.1.tgz (Root Library) - turndown-5.0.3.tgz - jsdom-11.12.0.tgz - :x: **ws-5.2.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/typescript-tools/commit/83f7ab279f5fd89309ff7ff41be80bbcdb23330e">83f7ab279f5fd89309ff7ff41be80bbcdb23330e</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options. <p>Publish Date: 2021-05-25 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p> <p>Release Date: 2021-05-25</p> <p>Fix Resolution: ws - 7.4.6</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in ws tgz cve medium severity vulnerability vulnerable library ws tgz simple to use blazing fast and thoroughly tested websocket client and server for node js library home page a href path to dependency file typescript tools package json path to vulnerable library typescript tools node modules turndown node modules ws package json dependency hierarchy typedoc plugin markdown tgz root library turndown tgz jsdom tgz x ws tgz vulnerable library found in head commit a href vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ws step up your open source security game with whitesource
0
16,846
22,098,242,144
IssuesEvent
2022-06-01 11:53:34
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Export all records to ES by default
kind/feature scope/broker area/observability team/process-automation
**Is your feature request related to a problem? Please describe.** Currently, we use the Zeebe Elasticsearch export in Camunda Cloud to export some records to ES that are consumed by Operate, Tasklist, or Optimize. But we don't export all records by default. In addition to these applications, we could use the records in ES also for debugging behavior in Zeebe. Sometimes the records can't be read from the log stream because the log is compacted already (default: every 5 minutes). But some of the records are not exported by default. Most relevant of these records may be: * `message` * `message_subscription` * `process_message_subscription` * `message_start_event_subscription` ¹ * `timer` ¹ * `process_event` ¹ ¹ - this record is not available in the exporter yet #8337 **Describe the solution you'd like** Export all records to ES by default. Or, at least the most relevant records. For example, we could exclude `job_batch` records because there is usually a high number of these records but they may be not very valuable for debugging. Before changing the configuration, we should check the impact on the disk usage and performance. We should contact the Operate/Tasklist/Optimize teams (especially @sdorokhova) about the performance impact for ES and the applications. Additionally, we should write a short guide on how to read the records from Camunda Cloud (for the Zeebe medic or a Zeebe dev working on an incident/support case). Maybe, we could use a local Kibana. @deepthidevaki may help with this part. **Describe alternatives you've considered** The records are not exported by default. We need to change the configuration manually to export the records. This option has the disadvantage that the behavior you want to debug happened already. If it doesn't happen regularly or is not reproducible then we are not able to export the records. **Additional context** This request was raised as an action item of the following incident: https://docs.google.com/document/d/1xQPZbGUG57VOi-u4MI8CZF_I6FbBh0nc_ukSFiZwsNk
1.0
Export all records to ES by default - **Is your feature request related to a problem? Please describe.** Currently, we use the Zeebe Elasticsearch export in Camunda Cloud to export some records to ES that are consumed by Operate, Tasklist, or Optimize. But we don't export all records by default. In addition to these applications, we could use the records in ES also for debugging behavior in Zeebe. Sometimes the records can't be read from the log stream because the log is compacted already (default: every 5 minutes). But some of the records are not exported by default. Most relevant of these records may be: * `message` * `message_subscription` * `process_message_subscription` * `message_start_event_subscription` ¹ * `timer` ¹ * `process_event` ¹ ¹ - this record is not available in the exporter yet #8337 **Describe the solution you'd like** Export all records to ES by default. Or, at least the most relevant records. For example, we could exclude `job_batch` records because there is usually a high number of these records but they may be not very valuable for debugging. Before changing the configuration, we should check the impact on the disk usage and performance. We should contact the Operate/Tasklist/Optimize teams (especially @sdorokhova) about the performance impact for ES and the applications. Additionally, we should write a short guide on how to read the records from Camunda Cloud (for the Zeebe medic or a Zeebe dev working on an incident/support case). Maybe, we could use a local Kibana. @deepthidevaki may help with this part. **Describe alternatives you've considered** The records are not exported by default. We need to change the configuration manually to export the records. This option has the disadvantage that the behavior you want to debug happened already. If it doesn't happen regularly or is not reproducible then we are not able to export the records. **Additional context** This request was raised as an action item of the following incident: https://docs.google.com/document/d/1xQPZbGUG57VOi-u4MI8CZF_I6FbBh0nc_ukSFiZwsNk
process
export all records to es by default is your feature request related to a problem please describe currently we use the zeebe elasticsearch export in camunda cloud to export some records to es that are consumed by operate tasklist or optimize but we don t export all records by default in addition to these applications we could use the records in es also for debugging behavior in zeebe sometimes the records can t be read from the log stream because the log is compacted already default every minutes but some of the records are not exported by default most relevant of these records may be message message subscription process message subscription message start event subscription ¹ timer ¹ process event ¹ ¹ this record is not available in the exporter yet describe the solution you d like export all records to es by default or at least the most relevant records for example we could exclude job batch records because there is usually a high number of these records but they may be not very valuable for debugging before changing the configuration we should check the impact on the disk usage and performance we should contact the operate tasklist optimize teams especially sdorokhova about the performance impact for es and the applications additionally we should write a short guide on how to read the records from camunda cloud for the zeebe medic or a zeebe dev working on an incident support case maybe we could use a local kibana deepthidevaki may help with this part describe alternatives you ve considered the records are not exported by default we need to change the configuration manually to export the records this option has the disadvantage that the behavior you want to debug happened already if it doesn t happen regularly or is not reproducible then we are not able to export the records additional context this request was raised as an action item of the following incident
1
1,109
3,588,369,427
IssuesEvent
2016-01-30 23:45:54
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Create global variable mysql-connpool_default_max
ADMIN CONNECTION POOL cxx_pa development MYSQL PROTOCOL QUERY PROCESSOR ROUTING
This variable will define the total number of connections per hostgroup. In a future version will be possible to fine tune the number of connections based on username/schema/hostgroup
1.0
Create global variable mysql-connpool_default_max - This variable will define the total number of connections per hostgroup. In a future version will be possible to fine tune the number of connections based on username/schema/hostgroup
process
create global variable mysql connpool default max this variable will define the total number of connections per hostgroup in a future version will be possible to fine tune the number of connections based on username schema hostgroup
1
2,911
5,902,715,419
IssuesEvent
2017-05-19 02:49:11
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
shell_exec does not work correctly when using "awk" before generating report
html report log-processing question
Hi Support, I'm facing the problem with "awk" and "goaccess" **My PHP script**: ``` $cmd = "awk '$7 ~ /schedule/' /var/logs/httpd/access_log* | goaccess -a -o /var/www/html/log/generate_report.html" $output = shell_exec($cmd); ``` **Error Log**: > AH01215: awk: cmd. line:1: (FILENAME=/var/logs/httpd/access_log-20170508 FNR=126) fatal: print to "standard output" failed (Broken pipe) This issue only happens when output of "awk" is input of "goaccess", because when I use another "awk" instead of "goaccess", it still works well. Ex: $cmd = "awk '$7 ~ /schedule/' /var/logs/httpd/access_log* | awk '{gsub(/[\/]+/,"/",$7); print }'" So, there is any way to fix this issue? Thank you so much. Looking forward to knowing your response.
1.0
shell_exec does not work correctly when using "awk" before generating report - Hi Support, I'm facing the problem with "awk" and "goaccess" **My PHP script**: ``` $cmd = "awk '$7 ~ /schedule/' /var/logs/httpd/access_log* | goaccess -a -o /var/www/html/log/generate_report.html" $output = shell_exec($cmd); ``` **Error Log**: > AH01215: awk: cmd. line:1: (FILENAME=/var/logs/httpd/access_log-20170508 FNR=126) fatal: print to "standard output" failed (Broken pipe) This issue only happens when output of "awk" is input of "goaccess", because when I use another "awk" instead of "goaccess", it still works well. Ex: $cmd = "awk '$7 ~ /schedule/' /var/logs/httpd/access_log* | awk '{gsub(/[\/]+/,"/",$7); print }'" So, there is any way to fix this issue? Thank you so much. Looking forward to knowing your response.
process
shell exec does not work correctly when using awk before generating report hi support i m facing the problem with awk and goaccess my php script cmd awk schedule var logs httpd access log goaccess a o var www html log generate report html output shell exec cmd error log awk cmd line filename var logs httpd access log fnr fatal print to standard output failed broken pipe this issue only happens when output of awk is input of goaccess because when i use another awk instead of goaccess it still works well ex cmd awk schedule var logs httpd access log awk gsub print so there is any way to fix this issue thank you so much looking forward to knowing your response
1
283,674
21,327,293,906
IssuesEvent
2022-04-18 01:41:20
aws/aws-iot-device-sdk-js-v2
https://api.github.com/repos/aws/aws-iot-device-sdk-js-v2
opened
Very confusing key
documentation needs-triage
### Describe the issue Every time I try IoT, I struggle with certificates. Most examples and documents use aws console generated certificates. Which works but still confusing. for example : On AWS GUI : there is two options to create certificate. ![image](https://user-images.githubusercontent.com/17027219/163740804-7a7c8999-27f9-4a82-9350-7df1e254d17c.png) 1. Auto-generate. which gives me five file ![image](https://user-images.githubusercontent.com/17027219/163740748-f8ed3e4e-f17f-4e97-9218-46c2fb3e4963.png) mostly works. But when first time tried IoT, I struggled a lot with wich key is for which props. ```ts const AwsIotDeviceSetting = { host: "************-ats.iot.ap-northeast-1.amazonaws.com", keyPath: 'certs/key_name.key', <--- which file path is it certPath: 'certs/private.pem', <--- which file path is it caPath: 'certs/root-CA.crt', <--- which file path is it clientId: "raspi", region: "ap-northeast-1" } const device = new iot.device(AwsIotDeviceSetting); ``` ended up trying all possible combinations. 2. use CSR. `openssl req -new -key my-device-private.pem.key > my-device.csr` command gives 3 files Again confused with which files are which. But worst is according to [115ts page of this pdf](https://pages.awscloud.com/rs/112-TZM-766/images/EV_iot-deepdive-aws2_Sep-2020.pdf) and [this article](https://dev.classmethod.jp/articles/try-fleet-provisioning-with-js-device-sdk/), there are 7 different ways to generate certificates, some works with some conditions. ( Sorry it is japanese. could not find english source. ) My seggesstion is : Add documentation or step be step example with AWS GUI, openssl and other possible certificate generation methods. Example : [in this docs](https://docs.amplify.aws/lib/datastore/getting-started/q/platform/js/) reader can select use `npm` or `curl` and different steps are presented. ### Links https://github.com/aws/aws-iot-device-sdk-js-v2/blob/main/README.md https://github.com/aws/aws-iot-device-sdk-js-v2/blob/main/samples/node/basic_connect/index.ts
1.0
Very confusing key - ### Describe the issue Every time I try IoT, I struggle with certificates. Most examples and documents use aws console generated certificates. Which works but still confusing. for example : On AWS GUI : there is two options to create certificate. ![image](https://user-images.githubusercontent.com/17027219/163740804-7a7c8999-27f9-4a82-9350-7df1e254d17c.png) 1. Auto-generate. which gives me five file ![image](https://user-images.githubusercontent.com/17027219/163740748-f8ed3e4e-f17f-4e97-9218-46c2fb3e4963.png) mostly works. But when first time tried IoT, I struggled a lot with wich key is for which props. ```ts const AwsIotDeviceSetting = { host: "************-ats.iot.ap-northeast-1.amazonaws.com", keyPath: 'certs/key_name.key', <--- which file path is it certPath: 'certs/private.pem', <--- which file path is it caPath: 'certs/root-CA.crt', <--- which file path is it clientId: "raspi", region: "ap-northeast-1" } const device = new iot.device(AwsIotDeviceSetting); ``` ended up trying all possible combinations. 2. use CSR. `openssl req -new -key my-device-private.pem.key > my-device.csr` command gives 3 files Again confused with which files are which. But worst is according to [115ts page of this pdf](https://pages.awscloud.com/rs/112-TZM-766/images/EV_iot-deepdive-aws2_Sep-2020.pdf) and [this article](https://dev.classmethod.jp/articles/try-fleet-provisioning-with-js-device-sdk/), there are 7 different ways to generate certificates, some works with some conditions. ( Sorry it is japanese. could not find english source. ) My seggesstion is : Add documentation or step be step example with AWS GUI, openssl and other possible certificate generation methods. Example : [in this docs](https://docs.amplify.aws/lib/datastore/getting-started/q/platform/js/) reader can select use `npm` or `curl` and different steps are presented. ### Links https://github.com/aws/aws-iot-device-sdk-js-v2/blob/main/README.md https://github.com/aws/aws-iot-device-sdk-js-v2/blob/main/samples/node/basic_connect/index.ts
non_process
very confusing key describe the issue every time i try iot i struggle with certificates most examples and documents use aws console generated certificates which works but still confusing for example on aws gui there is two options to create certificate auto generate which gives me five file mostly works but when first time tried iot i struggled a lot with wich key is for which props ts const awsiotdevicesetting host ats iot ap northeast amazonaws com keypath certs key name key which file path is it certpath certs private pem which file path is it capath certs root ca crt which file path is it clientid raspi region ap northeast const device new iot device awsiotdevicesetting ended up trying all possible combinations use csr openssl req new key my device private pem key my device csr command gives files again confused with which files are which but worst is according to and there are different ways to generate certificates some works with some conditions sorry it is japanese could not find english source my seggesstion is add documentation or step be step example with aws gui openssl and other possible certificate generation methods example reader can select use npm or curl and different steps are presented links
0
207,740
7,132,981,797
IssuesEvent
2018-01-22 16:08:39
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.canadiantire.ca - site is not usable
browser-firefox-mobile priority-normal
<!-- @browser: Firefox Mobile 59.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 --> <!-- @reported_with: mobile-reporter --> **URL**: http://www.canadiantire.ca/en/store-locator.html **Browser / Version**: Firefox Mobile 59.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: Site is blank on mobile mode **Steps to Reproduce**: None just tried to load page _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.canadiantire.ca - site is not usable - <!-- @browser: Firefox Mobile 59.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 --> <!-- @reported_with: mobile-reporter --> **URL**: http://www.canadiantire.ca/en/store-locator.html **Browser / Version**: Firefox Mobile 59.0 **Operating System**: Android 7.0 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: Site is blank on mobile mode **Steps to Reproduce**: None just tried to load page _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description site is blank on mobile mode steps to reproduce none just tried to load page from with ❤️
0
20,460
27,128,505,760
IssuesEvent
2023-02-16 07:58:47
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Get rid of cmd_helper
P4 type: process untriaged stale team-Rules-API
It has been marked as deprecated for a long time and it's not really useful: https://docs.bazel.build/versions/master/skylark/lib/cmd_helper.html cc @c-parsons
1.0
Get rid of cmd_helper - It has been marked as deprecated for a long time and it's not really useful: https://docs.bazel.build/versions/master/skylark/lib/cmd_helper.html cc @c-parsons
process
get rid of cmd helper it has been marked as deprecated for a long time and it s not really useful cc c parsons
1
104,315
8,970,567,023
IssuesEvent
2019-01-29 13:58:32
raiden-network/raiden
https://api.github.com/repos/raiden-network/raiden
opened
Make development setup documentation executable.
test review item
## Problem Definition [Suggested](https://github.com/raiden-network/raiden/tree/test-review/ob-review#suggestion) to us during the review of our tests. Essentially instead of documenting the steps needed to setup a raiden development environment like we already do [here](https://github.com/raiden-network/raiden/blob/2e659fc4ee24b13b081de36abb2d05d7311896b3/docs/overview_and_guide.rst#installation) the suggestion is to maintain a `tox` file that setups and describes the environment. Then the installation would sum up to run `tox -e dev`
1.0
Make development setup documentation executable. - ## Problem Definition [Suggested](https://github.com/raiden-network/raiden/tree/test-review/ob-review#suggestion) to us during the review of our tests. Essentially instead of documenting the steps needed to setup a raiden development environment like we already do [here](https://github.com/raiden-network/raiden/blob/2e659fc4ee24b13b081de36abb2d05d7311896b3/docs/overview_and_guide.rst#installation) the suggestion is to maintain a `tox` file that setups and describes the environment. Then the installation would sum up to run `tox -e dev`
non_process
make development setup documentation executable problem definition to us during the review of our tests essentially instead of documenting the steps needed to setup a raiden development environment like we already do the suggestion is to maintain a tox file that setups and describes the environment then the installation would sum up to run tox e dev
0
243,466
7,858,061,750
IssuesEvent
2018-06-21 12:53:55
robotology/icub-main
https://api.github.com/repos/robotology/icub-main
closed
ICUB devel is not configuring with latest YARP devel
Complexity: Low Priority: High Severity: Normal Type: Cleanup
Configuring latest ICUB devel with latest YARP devel, there are a lot of CMake errors like: ~~~ CMake Error at CMakeLists.txt:98 (include): include could not find load file: YarpInstallationHelpers -- Setting up installation of iCub.ini to /home/straversaro/src/robotology-superbuild/build/install/share/yarp/config/path.d folder. CMake Error at src/libraries/icubmod/CMakeLists.txt:8 (include): include could not find load file: YarpPlugin ~~~ Reverting this commit: https://github.com/robotology/icub-main/commit/fe10cefe4d19958d8d7618d7a3867205fce2980d seems to fix the problem. @pattacini @drdanz
1.0
ICUB devel is not configuring with latest YARP devel - Configuring latest ICUB devel with latest YARP devel, there are a lot of CMake errors like: ~~~ CMake Error at CMakeLists.txt:98 (include): include could not find load file: YarpInstallationHelpers -- Setting up installation of iCub.ini to /home/straversaro/src/robotology-superbuild/build/install/share/yarp/config/path.d folder. CMake Error at src/libraries/icubmod/CMakeLists.txt:8 (include): include could not find load file: YarpPlugin ~~~ Reverting this commit: https://github.com/robotology/icub-main/commit/fe10cefe4d19958d8d7618d7a3867205fce2980d seems to fix the problem. @pattacini @drdanz
non_process
icub devel is not configuring with latest yarp devel configuring latest icub devel with latest yarp devel there are a lot of cmake errors like cmake error at cmakelists txt include include could not find load file yarpinstallationhelpers setting up installation of icub ini to home straversaro src robotology superbuild build install share yarp config path d folder cmake error at src libraries icubmod cmakelists txt include include could not find load file yarpplugin reverting this commit seems to fix the problem pattacini drdanz
0
80,106
15,355,072,047
IssuesEvent
2021-03-01 10:36:46
gitpod-io/gitpod
https://api.github.com/repos/gitpod-io/gitpod
closed
Support content store backed user storage in Code
editor: code
In Code extensions can write files which are expected live beyond a single workspace (e.g. settings or caches). With the advent of the content store (#2001), we should make use of the content store for storing this data.
1.0
Support content store backed user storage in Code - In Code extensions can write files which are expected live beyond a single workspace (e.g. settings or caches). With the advent of the content store (#2001), we should make use of the content store for storing this data.
non_process
support content store backed user storage in code in code extensions can write files which are expected live beyond a single workspace e g settings or caches with the advent of the content store we should make use of the content store for storing this data
0
19,471
25,769,811,641
IssuesEvent
2022-12-09 06:47:54
Agility-Internship/Tai-nguyen-html-css
https://api.github.com/repos/Agility-Internship/Tai-nguyen-html-css
closed
Setup structure project
in-process
- [x] Create images - [x] Create fonts - [x] Create videos - [x] Edit README.md - [x] Install parcel - [x] Update gitignore
1.0
Setup structure project - - [x] Create images - [x] Create fonts - [x] Create videos - [x] Edit README.md - [x] Install parcel - [x] Update gitignore
process
setup structure project create images create fonts create videos edit readme md install parcel update gitignore
1
265,522
20,101,113,248
IssuesEvent
2022-02-07 04:21:55
faunists/deal-go
https://api.github.com/repos/faunists/deal-go
closed
Update `README.md` with examples of how to use `deal`
documentation good first issue hacktoberfest
We need to put some usage examples of deal: - [ ] Normal client - [ ] Stub client - [ ] Server test As inspiration you can see our example project [faunists/deal-go-example](https://github.com/faunists/deal-go-example)
1.0
Update `README.md` with examples of how to use `deal` - We need to put some usage examples of deal: - [ ] Normal client - [ ] Stub client - [ ] Server test As inspiration you can see our example project [faunists/deal-go-example](https://github.com/faunists/deal-go-example)
non_process
update readme md with examples of how to use deal we need to put some usage examples of deal normal client stub client server test as inspiration you can see our example project
0
17,254
23,038,439,000
IssuesEvent
2022-07-22 22:08:56
retaildevcrews/pib-cli
https://api.github.com/repos/retaildevcrews/pib-cli
closed
azure capacity issues
Process Demo
- the new subscription has azure capacity issues - VM SKUs aren't available in any region I've found - can someone check with Azure capacity and find a 4 core SKU with SSD that's available in a region? - we'll need 8 core machines for voe-fleet anne -> Assuming I'm looking in the right place, I see Standard_D8s_v5 and Standard_D4s_v5 available in eastus, westcentralus, and westus3. I ran: `az vm list-skus --output table | grep -e Standard_D8s_v5 -e Standard_D4s_v5`
1.0
azure capacity issues - - the new subscription has azure capacity issues - VM SKUs aren't available in any region I've found - can someone check with Azure capacity and find a 4 core SKU with SSD that's available in a region? - we'll need 8 core machines for voe-fleet anne -> Assuming I'm looking in the right place, I see Standard_D8s_v5 and Standard_D4s_v5 available in eastus, westcentralus, and westus3. I ran: `az vm list-skus --output table | grep -e Standard_D8s_v5 -e Standard_D4s_v5`
process
azure capacity issues the new subscription has azure capacity issues vm skus aren t available in any region i ve found can someone check with azure capacity and find a core sku with ssd that s available in a region we ll need core machines for voe fleet anne assuming i m looking in the right place i see standard and standard available in eastus westcentralus and i ran az vm list skus output table grep e standard e standard
1
22,741
32,056,196,867
IssuesEvent
2023-09-24 05:19:53
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/logstransform] Issue with replace_pattern transform function
Stale processor/logstransform closed as inactive
### Component(s) processor/logstransform ### Describe the issue you're reporting We are using below transform processor rule to replace phone number. It is working fine with regex101 but some issues with splunk client v0.61.0 or v0.63.0 process (AKS). Here are the details. Please check and advice rule: - replace_pattern(body.message, "([\\\"> ])?([0-9]{3})[- ]*([0-9]{3})[- ]*([0-9]{4})([\\\"< ])?", "$1**$4$5") Message: Address { addressType: Permanent address1: 1017 THRUSH LN address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: 2745888692 } citizenshipStatus: class Regex101 output: address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: **8692 } citizenshipStatus: class Splunk Log Observer output: addresses: [class Address { addressType: Permanent address1: 1017 THRUSH LN address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: **2 } citizenshipStatus: class CitizenshipStatus {
1.0
[processor/logstransform] Issue with replace_pattern transform function - ### Component(s) processor/logstransform ### Describe the issue you're reporting We are using below transform processor rule to replace phone number. It is working fine with regex101 but some issues with splunk client v0.61.0 or v0.63.0 process (AKS). Here are the details. Please check and advice rule: - replace_pattern(body.message, "([\\\"> ])?([0-9]{3})[- ]*([0-9]{3})[- ]*([0-9]{4})([\\\"< ])?", "$1**$4$5") Message: Address { addressType: Permanent address1: 1017 THRUSH LN address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: 2745888692 } citizenshipStatus: class Regex101 output: address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: **8692 } citizenshipStatus: class Splunk Log Observer output: addresses: [class Address { addressType: Permanent address1: 1017 THRUSH LN address2: null city: AUDUBON state: PA zip: 19403 }] contact: class Contact { eMail: [t-rex@example.com](mailto:t-rex@example.com) phoneNumber: **2 } citizenshipStatus: class CitizenshipStatus {
process
issue with replace pattern transform function component s processor logstransform describe the issue you re reporting we are using below transform processor rule to replace phone number it is working fine with but some issues with splunk client or process aks here are the details please check and advice rule replace pattern body message message address addresstype permanent thrush ln null city audubon state pa zip contact class contact email mailto t rex example com phonenumber citizenshipstatus class output null city audubon state pa zip contact class contact email mailto t rex example com phonenumber citizenshipstatus class splunk log observer output addresses contact class contact email mailto t rex example com phonenumber citizenshipstatus class citizenshipstatus
1
9,083
12,151,622,803
IssuesEvent
2020-04-24 20:20:57
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Regions not listed as supported.
Pri2 automation/svc process-automation/subsvc
Why is it that you don't mention locations such as "southcentralus" as being supported, however it's listed in allowed values for the "location" property of the ARM template which addresses deployment of a workspace and linking of an automation account on this page: https://docs.microsoft.com/en-us/azure/automation/automation-create-account-template. If the location is not supported, what does that mean? a) It wont work, or b) it may work but if I encounter a problem you will not provide assistance? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: f8f86bd0-7555-9be2-1015-76e3ab88062f * Version Independent ID: 1d310402-fadf-d602-048a-b2e16bd86a7e * Content: [Azure Automation and Log Analytics workspace mappings](https://docs.microsoft.com/en-us/azure/automation/how-to/region-mappings) * Content Source: [articles/automation/how-to/region-mappings.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/how-to/region-mappings.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
1.0
Regions not listed as supported. - Why is it that you don't mention locations such as "southcentralus" as being supported, however it's listed in allowed values for the "location" property of the ARM template which addresses deployment of a workspace and linking of an automation account on this page: https://docs.microsoft.com/en-us/azure/automation/automation-create-account-template. If the location is not supported, what does that mean? a) It wont work, or b) it may work but if I encounter a problem you will not provide assistance? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: f8f86bd0-7555-9be2-1015-76e3ab88062f * Version Independent ID: 1d310402-fadf-d602-048a-b2e16bd86a7e * Content: [Azure Automation and Log Analytics workspace mappings](https://docs.microsoft.com/en-us/azure/automation/how-to/region-mappings) * Content Source: [articles/automation/how-to/region-mappings.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/how-to/region-mappings.md) * Service: **automation** * Sub-service: **process-automation** * GitHub Login: @MGoedtel * Microsoft Alias: **magoedte**
process
regions not listed as supported why is it that you don t mention locations such as southcentralus as being supported however it s listed in allowed values for the location property of the arm template which addresses deployment of a workspace and linking of an automation account on this page if the location is not supported what does that mean a it wont work or b it may work but if i encounter a problem you will not provide assistance document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id fadf content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
1
16,049
20,192,886,550
IssuesEvent
2022-02-11 07:51:02
soederpop/active-mdx-software-project-test-repo
https://api.github.com/repos/soederpop/active-mdx-software-project-test-repo
closed
A customer should be able to pay with a gift card
story-created epic-payment-processing
# A customer should be able to pay with a gift card As a customer I want to be able to pay with a gift card so I can complete my order
1.0
A customer should be able to pay with a gift card - # A customer should be able to pay with a gift card As a customer I want to be able to pay with a gift card so I can complete my order
process
a customer should be able to pay with a gift card a customer should be able to pay with a gift card as a customer i want to be able to pay with a gift card so i can complete my order
1
269,075
23,417,181,251
IssuesEvent
2022-08-13 05:42:37
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
ccl/sqlproxyccl/tenant: TestDeleteTenant failed
C-test-failure O-robot branch-master
ccl/sqlproxyccl/tenant.TestDeleteTenant [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6079885?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6079885?buildTab=artifacts#/) on master @ [0dd438d3dc0b42543890455945a7a6b42811def1](https://github.com/cockroachdb/cockroach/commits/0dd438d3dc0b42543890455945a7a6b42811def1): ``` === RUN TestDeleteTenant test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/a6458abb05e6ab358f897a749cce5d50/logTestDeleteTenant1748159223 directory_cache_test.go:506: starting tenant 50 directory_cache_test.go:511: tenant 50 started directory_cache_test.go:320: Error Trace: directory_cache_test.go:320 Error: Should be empty, but was [tenant_id:50 addr:"127.0.0.1:46875" state:RUNNING stateTimestamp:<seconds:1660369353 nanos:353356988 > ] Test: TestDeleteTenant panic.go:500: -- test log scope end -- --- FAIL: TestDeleteTenant (1.66s) ``` <p>Parameters: <code>TAGS=bazel,gss</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/sql-experience @cockroachdb/server <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDeleteTenant.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
1.0
ccl/sqlproxyccl/tenant: TestDeleteTenant failed - ccl/sqlproxyccl/tenant.TestDeleteTenant [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6079885?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/6079885?buildTab=artifacts#/) on master @ [0dd438d3dc0b42543890455945a7a6b42811def1](https://github.com/cockroachdb/cockroach/commits/0dd438d3dc0b42543890455945a7a6b42811def1): ``` === RUN TestDeleteTenant test_log_scope.go:162: test logs captured to: /artifacts/tmp/_tmp/a6458abb05e6ab358f897a749cce5d50/logTestDeleteTenant1748159223 directory_cache_test.go:506: starting tenant 50 directory_cache_test.go:511: tenant 50 started directory_cache_test.go:320: Error Trace: directory_cache_test.go:320 Error: Should be empty, but was [tenant_id:50 addr:"127.0.0.1:46875" state:RUNNING stateTimestamp:<seconds:1660369353 nanos:353356988 > ] Test: TestDeleteTenant panic.go:500: -- test log scope end -- --- FAIL: TestDeleteTenant (1.66s) ``` <p>Parameters: <code>TAGS=bazel,gss</code> </p> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/sql-experience @cockroachdb/server <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDeleteTenant.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub>
non_process
ccl sqlproxyccl tenant testdeletetenant failed ccl sqlproxyccl tenant testdeletetenant with on master run testdeletetenant test log scope go test logs captured to artifacts tmp tmp directory cache test go starting tenant directory cache test go tenant started directory cache test go error trace directory cache test go error should be empty but was test testdeletetenant panic go test log scope end fail testdeletetenant parameters tags bazel gss help see also cc cockroachdb sql experience cockroachdb server
0
7,234
10,384,005,399
IssuesEvent
2019-09-10 10:57:33
RIOT-OS/RIOT
https://api.github.com/repos/RIOT-OS/RIOT
closed
RFC: merge candev API into netdev API
Area: drivers Discussion: RFC Process: API change State: stale
Looking at the newly merged candev API I still don't see, why this has to be its own API apart from netdev: * The receive mechanism is nearly the same (except that there is no receive function, I'm guessing the frame is passed via the event callback). * The only difference in the two `send()` is that `can` gives the explicit frame, instead of a vector array with the data * `init()`, `isr()`, `set()` and `get()` are the same (the latter to just get a new option set) * `abort()`, `set_filter()` and `remove_filter()` can be implemented using `set()` and corresponding `NETOPT`s I understand that CAN's communication paradigm is different from how other packet-switched radios work, so there might never be an IP over CAN ([but people tried](https://tools.ietf.org/html/draft-cafi-can-ip-00)), but at least for a programmer it might be nice to not get used to a same but different API.
1.0
RFC: merge candev API into netdev API - Looking at the newly merged candev API I still don't see, why this has to be its own API apart from netdev: * The receive mechanism is nearly the same (except that there is no receive function, I'm guessing the frame is passed via the event callback). * The only difference in the two `send()` is that `can` gives the explicit frame, instead of a vector array with the data * `init()`, `isr()`, `set()` and `get()` are the same (the latter to just get a new option set) * `abort()`, `set_filter()` and `remove_filter()` can be implemented using `set()` and corresponding `NETOPT`s I understand that CAN's communication paradigm is different from how other packet-switched radios work, so there might never be an IP over CAN ([but people tried](https://tools.ietf.org/html/draft-cafi-can-ip-00)), but at least for a programmer it might be nice to not get used to a same but different API.
process
rfc merge candev api into netdev api looking at the newly merged candev api i still don t see why this has to be its own api apart from netdev the receive mechanism is nearly the same except that there is no receive function i m guessing the frame is passed via the event callback the only difference in the two send is that can gives the explicit frame instead of a vector array with the data init isr set and get are the same the latter to just get a new option set abort set filter and remove filter can be implemented using set and corresponding netopt s i understand that can s communication paradigm is different from how other packet switched radios work so there might never be an ip over can but at least for a programmer it might be nice to not get used to a same but different api
1
13,522
16,058,127,930
IssuesEvent
2021-04-23 08:39:26
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field clubs_clubsTousers on model User.
bug/1-repro-available kind/bug process/candidate team/migrations
<!-- If required, please update the title to be clear and descriptive --> Command: `prisma introspect` Version: `2.21.2` Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d` Report: https://prisma-errors.netlify.app/report/13240 OS: `x64 linux 5.8.0-50-generic` JS Stacktrace: ``` Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field clubs_clubsTousers on model User. at ChildProcess.<anonymous> (/home/yanpro/_SparkRead/deploy/node_modules/prisma/build/index.js:39953:28) at ChildProcess.emit (events.js:315:20) at ChildProcess.EventEmitter.emit (domain.js:467:12) at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12) ``` Rust Stacktrace: ``` 0: user_facing_errors::Error::new_in_panic_hook 1: user_facing_errors::panic_hook::set_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:595:17 3: std::panicking::begin_panic_handler::{{closure}} at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:497:13 4: std::sys_common::backtrace::__rust_end_short_backtrace at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/sys_common/backtrace.rs:141:18 5: rust_begin_unwind at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:493:5 6: core::panicking::panic_fmt at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/panicking.rs:92:14 7: core::option::expect_failed at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/option.rs:1292:5 8: dml::model::Model::find_relation_field_mut 9: sql_introspection_connector::re_introspection::enrich 10: sql_introspection_connector::calculate_datamodel::calculate_datamodel 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 13: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 14: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll 15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 16: introspection_engine::main::{{closure}} 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 18: introspection_engine::main 19: std::sys_common::backtrace::__rust_begin_short_backtrace 20: std::rt::lang_start::{{closure}} 21: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/ops/function.rs:259:13 std::panicking::try::do_call at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:379:40 std::panicking::try at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:343:19 std::panic::catch_unwind at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panic.rs:431:14 std::rt::lang_start_internal at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/rt.rs:51:25 22: std::rt::lang_start 23: __libc_start_main 24: <unknown> ```
1.0
Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field clubs_clubsTousers on model User. - <!-- If required, please update the title to be clear and descriptive --> Command: `prisma introspect` Version: `2.21.2` Binary Version: `e421996c87d5f3c8f7eeadd502d4ad402c89464d` Report: https://prisma-errors.netlify.app/report/13240 OS: `x64 linux 5.8.0-50-generic` JS Stacktrace: ``` Error: [libs/datamodel/connectors/dml/src/model.rs:161:64] Could not find relation field clubs_clubsTousers on model User. at ChildProcess.<anonymous> (/home/yanpro/_SparkRead/deploy/node_modules/prisma/build/index.js:39953:28) at ChildProcess.emit (events.js:315:20) at ChildProcess.EventEmitter.emit (domain.js:467:12) at Process.ChildProcess._handle.onexit (internal/child_process.js:277:12) ``` Rust Stacktrace: ``` 0: user_facing_errors::Error::new_in_panic_hook 1: user_facing_errors::panic_hook::set_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:595:17 3: std::panicking::begin_panic_handler::{{closure}} at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:497:13 4: std::sys_common::backtrace::__rust_end_short_backtrace at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/sys_common/backtrace.rs:141:18 5: rust_begin_unwind at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:493:5 6: core::panicking::panic_fmt at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/panicking.rs:92:14 7: core::option::expect_failed at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/option.rs:1292:5 8: dml::model::Model::find_relation_field_mut 9: sql_introspection_connector::re_introspection::enrich 10: sql_introspection_connector::calculate_datamodel::calculate_datamodel 11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 13: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 14: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll 15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 16: introspection_engine::main::{{closure}} 17: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 18: introspection_engine::main 19: std::sys_common::backtrace::__rust_begin_short_backtrace 20: std::rt::lang_start::{{closure}} 21: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/ops/function.rs:259:13 std::panicking::try::do_call at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:379:40 std::panicking::try at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:343:19 std::panic::catch_unwind at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panic.rs:431:14 std::rt::lang_start_internal at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/rt.rs:51:25 22: std::rt::lang_start 23: __libc_start_main 24: <unknown> ```
process
error could not find relation field clubs clubstousers on model user command prisma introspect version binary version report os linux generic js stacktrace error could not find relation field clubs clubstousers on model user at childprocess home yanpro sparkread deploy node modules prisma build index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook at rustc library std src panicking rs std panicking begin panic handler closure at rustc library std src panicking rs std sys common backtrace rust end short backtrace at rustc library std src sys common backtrace rs rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core option expect failed at rustc library core src option rs dml model model find relation field mut sql introspection connector re introspection enrich sql introspection connector calculate datamodel calculate datamodel as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure core ops function impls for f call once at rustc library core src ops function rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal at rustc library std src rt rs std rt lang start libc start main
1
6,597
9,669,645,176
IssuesEvent
2019-05-21 17:53:36
bazelbuild/rules_swift
https://api.github.com/repos/bazelbuild/rules_swift
closed
Determine if protoc build artifacts can be cached by Travis
P2 type: process
A significant amount of the time spent running checks on Travis is from compiling `protoc` from source for use by the `swift_proto_library` rule. We don't need to build this every time we run our tests—we should only be doing it if it changes (which we have control over, because we lock in a version in our repository rule). Determine if Travis's caching capabilities would work for us here, and use them if so.
1.0
Determine if protoc build artifacts can be cached by Travis - A significant amount of the time spent running checks on Travis is from compiling `protoc` from source for use by the `swift_proto_library` rule. We don't need to build this every time we run our tests—we should only be doing it if it changes (which we have control over, because we lock in a version in our repository rule). Determine if Travis's caching capabilities would work for us here, and use them if so.
process
determine if protoc build artifacts can be cached by travis a significant amount of the time spent running checks on travis is from compiling protoc from source for use by the swift proto library rule we don t need to build this every time we run our tests—we should only be doing it if it changes which we have control over because we lock in a version in our repository rule determine if travis s caching capabilities would work for us here and use them if so
1
11,916
14,701,870,398
IssuesEvent
2021-01-04 12:39:20
prisma/prisma-client-js
https://api.github.com/repos/prisma/prisma-client-js
closed
Generated types (index.d.ts) contains invalid expressions
bug/2-confirmed kind/bug process/candidate team/client tech/engines
## Bug description The generated types file (`node_modules/.prisma/client/index.d.ts`) generates type definitions with invalid signatures. These signatures seem to come from schema definitions that have multiple unique fields. Here's an example of an invalid signature from my code: ``` node_modules/.prisma/client/index.d.ts:4703:25 - error TS1005: '=' expected. 4703 export type MyModel.myFieldId_myOtherFieldId_uniqueCompoundUniqueInput = { ~ ``` ## How to reproduce I'm working to find minimal/exact steps to reproduce this. Thus far, it appears to happen with types that have multiple `@unique` fields or multiple unique together with `@@unique`. ## Expected behavior The types files that are being generated have syntax errors with the following pattern: ``` export type Model.field1Id_field2Id_uniqueCompoundUniqueInput = Prisma.Model.field1Id_field2Id_uniqueCompoundUniqueInput ``` Typescript won't compile with a dot in the name on the left hand side. ## Prisma information Will update this as soon as I can with a minimal schema, but once again multiple `@unique`s seem to be the culprit. ## Environment & setup - OS: MacOS Big Sur - Database: PostgreSQL 13 - Node.js version: v15.0.1 - Prisma version: v2.13.0
1.0
Generated types (index.d.ts) contains invalid expressions - ## Bug description The generated types file (`node_modules/.prisma/client/index.d.ts`) generates type definitions with invalid signatures. These signatures seem to come from schema definitions that have multiple unique fields. Here's an example of an invalid signature from my code: ``` node_modules/.prisma/client/index.d.ts:4703:25 - error TS1005: '=' expected. 4703 export type MyModel.myFieldId_myOtherFieldId_uniqueCompoundUniqueInput = { ~ ``` ## How to reproduce I'm working to find minimal/exact steps to reproduce this. Thus far, it appears to happen with types that have multiple `@unique` fields or multiple unique together with `@@unique`. ## Expected behavior The types files that are being generated have syntax errors with the following pattern: ``` export type Model.field1Id_field2Id_uniqueCompoundUniqueInput = Prisma.Model.field1Id_field2Id_uniqueCompoundUniqueInput ``` Typescript won't compile with a dot in the name on the left hand side. ## Prisma information Will update this as soon as I can with a minimal schema, but once again multiple `@unique`s seem to be the culprit. ## Environment & setup - OS: MacOS Big Sur - Database: PostgreSQL 13 - Node.js version: v15.0.1 - Prisma version: v2.13.0
process
generated types index d ts contains invalid expressions bug description the generated types file node modules prisma client index d ts generates type definitions with invalid signatures these signatures seem to come from schema definitions that have multiple unique fields here s an example of an invalid signature from my code node modules prisma client index d ts error expected export type mymodel myfieldid myotherfieldid uniquecompounduniqueinput how to reproduce i m working to find minimal exact steps to reproduce this thus far it appears to happen with types that have multiple unique fields or multiple unique together with unique expected behavior the types files that are being generated have syntax errors with the following pattern export type model uniquecompounduniqueinput prisma model uniquecompounduniqueinput typescript won t compile with a dot in the name on the left hand side prisma information will update this as soon as i can with a minimal schema but once again multiple unique s seem to be the culprit environment setup os macos big sur database postgresql node js version prisma version
1
10,102
13,044,162,124
IssuesEvent
2020-07-29 03:47:29
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Migrate scalar function `UTCTimestampWithArg` from TiDB
challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
## Description Port the scalar function `UTCTimestampWithArg` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
2.0
UCP: Migrate scalar function `UTCTimestampWithArg` from TiDB - ## Description Port the scalar function `UTCTimestampWithArg` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @andylokandy ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
process
ucp migrate scalar function utctimestampwitharg from tidb description port the scalar function utctimestampwitharg from tidb to coprocessor score mentor s andylokandy recommended skills rust programming learning materials already implemented expressions ported from tidb
1
20,779
27,516,250,994
IssuesEvent
2023-03-06 12:11:18
zotero/zotero
https://api.github.com/repos/zotero/zotero
closed
Make "Add Note" button small and move elsewhere in toolbar
Papercuts Word Processor Integration
Too many people are clicking "Add Note" while trying to insert a citation (and "Note" is confusing for some citation styles).
1.0
Make "Add Note" button small and move elsewhere in toolbar - Too many people are clicking "Add Note" while trying to insert a citation (and "Note" is confusing for some citation styles).
process
make add note button small and move elsewhere in toolbar too many people are clicking add note while trying to insert a citation and note is confusing for some citation styles
1
10,484
13,252,919,533
IssuesEvent
2020-08-20 06:34:01
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
Re-design coprocessor busy detection logic
sig/coprocessor status/discussion type/enhancement
We met a case that there are not many coprocessor requests, but with each one took a long time. In such cases, our current endpoint busy logic is not useful anymore, which presumed that requests are handled in a short time. We need to implement a more accurate busy detection logic. For example, maybe we should use "the waiting time of the head item in the queue" to measure whether or not server is busy, instead of simply "the queue length".
1.0
Re-design coprocessor busy detection logic - We met a case that there are not many coprocessor requests, but with each one took a long time. In such cases, our current endpoint busy logic is not useful anymore, which presumed that requests are handled in a short time. We need to implement a more accurate busy detection logic. For example, maybe we should use "the waiting time of the head item in the queue" to measure whether or not server is busy, instead of simply "the queue length".
process
re design coprocessor busy detection logic we met a case that there are not many coprocessor requests but with each one took a long time in such cases our current endpoint busy logic is not useful anymore which presumed that requests are handled in a short time we need to implement a more accurate busy detection logic for example maybe we should use the waiting time of the head item in the queue to measure whether or not server is busy instead of simply the queue length
1
17,949
23,940,914,060
IssuesEvent
2022-09-11 22:05:06
GregTechCEu/gt-ideas
https://api.github.com/repos/GregTechCEu/gt-ideas
opened
More Masochistic Yttrium Barium Cuprate Chain
processing chain
## Details Currently, YBC is produced simply by mixing the dusts together and putting it in a blast furnace. This processing chain suggestion is shamelessly copied directly from GCYL. Because I think GCYL has a lot of cool thingies in it that could be salvaged. ## Products Main Product: Side Product(s): ## Steps 1 Tiny Potassium Dichromate Dust + 1 b Hypochlorous Acid + 2 b Hydrochloric Acid + 1 b Glycerol + 3 b Water + 3 b Hydrogen Cyanide ---> 1 b Citric Acid + 3 b Ammonium Chloride 1 Copper Dust + 2 b Nitric Acid -> 9 Copper Nitrate + 1 b Nitrogen Dioxide + 2 b Hydrogen 2 Barium Sulfide + 2 b Nitric Acid -> 9 Barium Nitrate + 1 b Hydrogen Sulfide 5 Yttrium Oxide + 6 b Nitric Acid -> 26 Yttrium Nitrate + 3 b Water 27 Copper Nitrate + 18 Barium Nitrate + 13 Yttrium Nitrate + 1 b Citric Acid + 2 b Ammonia -> 12 Well Mixed YBC Oxides Dust + 15 b Nitrogen Dioxide + 6 b Carbon Monoxide + 4 b Water + 6 b Hydrogen 12 Well Mixed YBC Oxides Dust + 1 Oxygen -> Blast Furnace -> 13 Hot Yttrium Barium Cuprate Ingots (Which are then put in a vacuum freezer) ## Sources Gregicality Legacy Mod
1.0
More Masochistic Yttrium Barium Cuprate Chain - ## Details Currently, YBC is produced simply by mixing the dusts together and putting it in a blast furnace. This processing chain suggestion is shamelessly copied directly from GCYL. Because I think GCYL has a lot of cool thingies in it that could be salvaged. ## Products Main Product: Side Product(s): ## Steps 1 Tiny Potassium Dichromate Dust + 1 b Hypochlorous Acid + 2 b Hydrochloric Acid + 1 b Glycerol + 3 b Water + 3 b Hydrogen Cyanide ---> 1 b Citric Acid + 3 b Ammonium Chloride 1 Copper Dust + 2 b Nitric Acid -> 9 Copper Nitrate + 1 b Nitrogen Dioxide + 2 b Hydrogen 2 Barium Sulfide + 2 b Nitric Acid -> 9 Barium Nitrate + 1 b Hydrogen Sulfide 5 Yttrium Oxide + 6 b Nitric Acid -> 26 Yttrium Nitrate + 3 b Water 27 Copper Nitrate + 18 Barium Nitrate + 13 Yttrium Nitrate + 1 b Citric Acid + 2 b Ammonia -> 12 Well Mixed YBC Oxides Dust + 15 b Nitrogen Dioxide + 6 b Carbon Monoxide + 4 b Water + 6 b Hydrogen 12 Well Mixed YBC Oxides Dust + 1 Oxygen -> Blast Furnace -> 13 Hot Yttrium Barium Cuprate Ingots (Which are then put in a vacuum freezer) ## Sources Gregicality Legacy Mod
process
more masochistic yttrium barium cuprate chain details currently ybc is produced simply by mixing the dusts together and putting it in a blast furnace this processing chain suggestion is shamelessly copied directly from gcyl because i think gcyl has a lot of cool thingies in it that could be salvaged products main product side product s steps tiny potassium dichromate dust b hypochlorous acid b hydrochloric acid b glycerol b water b hydrogen cyanide b citric acid b ammonium chloride copper dust b nitric acid copper nitrate b nitrogen dioxide b hydrogen barium sulfide b nitric acid barium nitrate b hydrogen sulfide yttrium oxide b nitric acid yttrium nitrate b water copper nitrate barium nitrate yttrium nitrate b citric acid b ammonia well mixed ybc oxides dust b nitrogen dioxide b carbon monoxide b water b hydrogen well mixed ybc oxides dust oxygen blast furnace hot yttrium barium cuprate ingots which are then put in a vacuum freezer sources gregicality legacy mod
1
138,735
11,215,041,851
IssuesEvent
2020-01-07 00:37:19
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
"Learn more" under "Congratulations" screen pointing to MM Zenhub
QA/Test-Plan-Specified QA/Yes branding feature/crypto-wallets priority/P2 regression support
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> Looks like this was originally supposed to be addressed via https://github.com/brave/brave-browser/issues/4772 but it was closed without it ever being resolved. I looked around and couldn't find an issue under : ```is:open label:feature/crypto-wallets https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips``` ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. launch brave and open `brave://wallet` 2. click on `Restore` and restore a wallet (MM or CW) 3. click on `Learn more.` and you'll be taken to https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips ## Actual result: <!--Please add screenshots if needed--> Shouldn't be using https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips ## Expected result: <img width="1030" alt="Screen Shot 2019-12-18 at 12 44 58 AM" src="https://user-images.githubusercontent.com/2602313/71059191-3506b400-2130-11ea-8c61-f48274be2529.png"> Should open a help page that's located on https://support.brave.com and not https://metamask.zendesk.com. ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% reproducible using the above STR. ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.1.22 Chromium: 79.0.3945.79 (Official Build) (64-bit) -- | -- Revision | 29f75ce3f42b007bd80361b0dfcfee3a13ff90b8-refs/branch-heads/3945@{#916} OS | macOS Version 10.15.1 (Build 19B88) ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? `Yes` - Can you reproduce this issue with the beta channel? `Yes` - Can you reproduce this issue with the dev channel? `Yes` - Can you reproduce this issue with the nightly channel? `Yes` ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? `N/A` - Does the issue resolve itself when disabling Brave Rewards? `N/A` - Is the issue reproducible on the latest version of Chrome? `N/A` ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> CCing @ryanml @srirambv
1.0
"Learn more" under "Congratulations" screen pointing to MM Zenhub - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue. PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE. INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED--> ## Description <!--Provide a brief description of the issue--> Looks like this was originally supposed to be addressed via https://github.com/brave/brave-browser/issues/4772 but it was closed without it ever being resolved. I looked around and couldn't find an issue under : ```is:open label:feature/crypto-wallets https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips``` ## Steps to Reproduce <!--Please add a series of steps to reproduce the issue--> 1. launch brave and open `brave://wallet` 2. click on `Restore` and restore a wallet (MM or CW) 3. click on `Learn more.` and you'll be taken to https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips ## Actual result: <!--Please add screenshots if needed--> Shouldn't be using https://metamask.zendesk.com/hc/en-us/articles/360015489591-Basic-Safety-Tips ## Expected result: <img width="1030" alt="Screen Shot 2019-12-18 at 12 44 58 AM" src="https://user-images.githubusercontent.com/2602313/71059191-3506b400-2130-11ea-8c61-f48274be2529.png"> Should open a help page that's located on https://support.brave.com and not https://metamask.zendesk.com. ## Reproduces how often: <!--[Easily reproduced/Intermittent issue/No steps to reproduce]--> 100% reproducible using the above STR. ## Brave version (brave://version info) <!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details--> Brave | 1.1.22 Chromium: 79.0.3945.79 (Official Build) (64-bit) -- | -- Revision | 29f75ce3f42b007bd80361b0dfcfee3a13ff90b8-refs/branch-heads/3945@{#916} OS | macOS Version 10.15.1 (Build 19B88) ## Version/Channel Information: <!--Does this issue happen on any other channels? Or is it specific to a certain channel?--> - Can you reproduce this issue with the current release? `Yes` - Can you reproduce this issue with the beta channel? `Yes` - Can you reproduce this issue with the dev channel? `Yes` - Can you reproduce this issue with the nightly channel? `Yes` ## Other Additional Information: - Does the issue resolve itself when disabling Brave Shields? `N/A` - Does the issue resolve itself when disabling Brave Rewards? `N/A` - Is the issue reproducible on the latest version of Chrome? `N/A` ## Miscellaneous Information: <!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue--> CCing @ryanml @srirambv
non_process
learn more under congratulations screen pointing to mm zenhub have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description looks like this was originally supposed to be addressed via but it was closed without it ever being resolved i looked around and couldn t find an issue under is open label feature crypto wallets steps to reproduce launch brave and open brave wallet click on restore and restore a wallet mm or cw click on learn more and you ll be taken to actual result shouldn t be using expected result img width alt screen shot at am src should open a help page that s located on and not reproduces how often reproducible using the above str brave version brave version info brave chromium   official build   bit revision refs branch heads os macos version build version channel information can you reproduce this issue with the current release yes can you reproduce this issue with the beta channel yes can you reproduce this issue with the dev channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields n a does the issue resolve itself when disabling brave rewards n a is the issue reproducible on the latest version of chrome n a miscellaneous information ccing ryanml srirambv
0
743,477
25,900,919,198
IssuesEvent
2022-12-15 05:32:14
wso2/api-manager
https://api.github.com/repos/wso2/api-manager
closed
Releasing MI and MI-Dashboard 4.2.0 alpha
Type/Task Priority/Normal Component/MI Component/MIDashboard 4.2.0-alpha
### Description Perform MI and MI-Dashboard 4.2.0 alpha release ### Affected Component MI ### Version 4.2.0-alpha ### Related Issues _No response_ ### Suggested Labels _No response_
1.0
Releasing MI and MI-Dashboard 4.2.0 alpha - ### Description Perform MI and MI-Dashboard 4.2.0 alpha release ### Affected Component MI ### Version 4.2.0-alpha ### Related Issues _No response_ ### Suggested Labels _No response_
non_process
releasing mi and mi dashboard alpha description perform mi and mi dashboard alpha release affected component mi version alpha related issues no response suggested labels no response
0
15,254
11,429,242,891
IssuesEvent
2020-02-04 07:24:53
clarity-h2020/csis
https://api.github.com/repos/clarity-h2020/csis
reopened
myclimateservice.eu DNS Entries
BB: Infrastructure
Register (sub) domain names for myclimateservice.eu and point DNS entries to IP of csis.clarity-h2020.eu - myclimateservice.eu - www.myclimateservice.eu - csis.myclimateservice.eu.eu - geoserver.myclimateservice.eu - erdapp.myclimateservice.eu - tiles.myclimateservice.eu - api.myclimateservice.eu - geonode.myclimateservice.eu
1.0
myclimateservice.eu DNS Entries - Register (sub) domain names for myclimateservice.eu and point DNS entries to IP of csis.clarity-h2020.eu - myclimateservice.eu - www.myclimateservice.eu - csis.myclimateservice.eu.eu - geoserver.myclimateservice.eu - erdapp.myclimateservice.eu - tiles.myclimateservice.eu - api.myclimateservice.eu - geonode.myclimateservice.eu
non_process
myclimateservice eu dns entries register sub domain names for myclimateservice eu and point dns entries to ip of csis clarity eu myclimateservice eu csis myclimateservice eu eu geoserver myclimateservice eu erdapp myclimateservice eu tiles myclimateservice eu api myclimateservice eu geonode myclimateservice eu
0
3,657
6,694,215,680
IssuesEvent
2017-10-10 00:23:06
ncbo/bioportal-project
https://api.github.com/repos/ncbo/bioportal-project
closed
MP: uploaded but failed to process
ontology processing problem
The latest submission of [MP](http://bioportal.bioontology.org/ontologies/MP) shows a status of "uploaded", but nothing else. The UI also shows a blank release date in the submissions table. The parsing log indicates that processing started, then halted with no error message: ``` # Logfile created on 2017-08-28 18:59:08 -0700 by logger.rb/1.2.8 I, [2017-08-28T18:59:08.325998 #16730] INFO -- : ["Starting to process http://data.bioontology.org/ontologies/MP/submissions/324"] I, [2017-08-28T18:59:08.382663 #16730] INFO -- : ["Starting to process MP/submissions/324"] ```
1.0
MP: uploaded but failed to process - The latest submission of [MP](http://bioportal.bioontology.org/ontologies/MP) shows a status of "uploaded", but nothing else. The UI also shows a blank release date in the submissions table. The parsing log indicates that processing started, then halted with no error message: ``` # Logfile created on 2017-08-28 18:59:08 -0700 by logger.rb/1.2.8 I, [2017-08-28T18:59:08.325998 #16730] INFO -- : ["Starting to process http://data.bioontology.org/ontologies/MP/submissions/324"] I, [2017-08-28T18:59:08.382663 #16730] INFO -- : ["Starting to process MP/submissions/324"] ```
process
mp uploaded but failed to process the latest submission of shows a status of uploaded but nothing else the ui also shows a blank release date in the submissions table the parsing log indicates that processing started then halted with no error message logfile created on by logger rb i info i info
1
34,762
9,460,294,103
IssuesEvent
2019-04-17 10:34:51
Scirra/Construct-3-bugs
https://api.github.com/repos/Scirra/Construct-3-bugs
opened
Invalid project name causes build failure
Build Service
## Problem description Via: https://www.construct.net/en/forum/construct-3/general-discussion-7/build-failed-143270 Using an invalid project name will cause an Android debug APK build to fail. ## Attach a .c3p Use built-in Ghost Shooter example ## Steps to reproduce 1. Rename project to `1` 2. Build as debug APK ## Observed result Build failed ## Expected result The build should either succeed, or show an error before exporting about the invalid project name before attempting a build. The "build failed" dialog in this case also does not appear to show any useful diagnostics. Further: - Construct should never just dump an error message in a dialog - it's a poor user experience and leads to confusion, especially since in this case, there is nothing in the log that is helpful. It also will not be translated. The dialog should have a translated header e.g. "Sorry, the build failed. The build log is included below in case it includes any information about the problem. You may need to adjust project settings or included plugins." and then include the build log beneath that. - The build log text isn't selectable - it should be so bug reports can easily copy-paste.
1.0
Invalid project name causes build failure - ## Problem description Via: https://www.construct.net/en/forum/construct-3/general-discussion-7/build-failed-143270 Using an invalid project name will cause an Android debug APK build to fail. ## Attach a .c3p Use built-in Ghost Shooter example ## Steps to reproduce 1. Rename project to `1` 2. Build as debug APK ## Observed result Build failed ## Expected result The build should either succeed, or show an error before exporting about the invalid project name before attempting a build. The "build failed" dialog in this case also does not appear to show any useful diagnostics. Further: - Construct should never just dump an error message in a dialog - it's a poor user experience and leads to confusion, especially since in this case, there is nothing in the log that is helpful. It also will not be translated. The dialog should have a translated header e.g. "Sorry, the build failed. The build log is included below in case it includes any information about the problem. You may need to adjust project settings or included plugins." and then include the build log beneath that. - The build log text isn't selectable - it should be so bug reports can easily copy-paste.
non_process
invalid project name causes build failure problem description via using an invalid project name will cause an android debug apk build to fail attach a use built in ghost shooter example steps to reproduce rename project to build as debug apk observed result build failed expected result the build should either succeed or show an error before exporting about the invalid project name before attempting a build the build failed dialog in this case also does not appear to show any useful diagnostics further construct should never just dump an error message in a dialog it s a poor user experience and leads to confusion especially since in this case there is nothing in the log that is helpful it also will not be translated the dialog should have a translated header e g sorry the build failed the build log is included below in case it includes any information about the problem you may need to adjust project settings or included plugins and then include the build log beneath that the build log text isn t selectable it should be so bug reports can easily copy paste
0
190,582
14,562,862,212
IssuesEvent
2020-12-17 01:04:14
microsoft/STL
https://api.github.com/repos/microsoft/STL
closed
tests/std: Harness CUDA/NVCC
help wanted test
We need to add `nvcc` to the set of configurations we test. - We need to determine which version we need to support. - CUDA 10.1 unblocked VS2019 users, however 10.1 Update 2 added `if constexpr` support. Which we want. - We need to actually add tests which use `nvcc`. - We have some internally but they aren't fit for external consumption. Microsoft-internal: Tracked by VSO-750842 / [AB#750842](https://devdiv.visualstudio.com/0bdbc590-a062-4c3f-b0f6-9383f67865ee/_workitems/edit/750842). After we add CUDA 10.1 Update 2 coverage to GitHub, we should remove our completely-hacked-up CUDA 9.2 coverage from the MSVC-internal `vcr` test suite.
1.0
tests/std: Harness CUDA/NVCC - We need to add `nvcc` to the set of configurations we test. - We need to determine which version we need to support. - CUDA 10.1 unblocked VS2019 users, however 10.1 Update 2 added `if constexpr` support. Which we want. - We need to actually add tests which use `nvcc`. - We have some internally but they aren't fit for external consumption. Microsoft-internal: Tracked by VSO-750842 / [AB#750842](https://devdiv.visualstudio.com/0bdbc590-a062-4c3f-b0f6-9383f67865ee/_workitems/edit/750842). After we add CUDA 10.1 Update 2 coverage to GitHub, we should remove our completely-hacked-up CUDA 9.2 coverage from the MSVC-internal `vcr` test suite.
non_process
tests std harness cuda nvcc we need to add nvcc to the set of configurations we test we need to determine which version we need to support cuda unblocked users however update added if constexpr support which we want we need to actually add tests which use nvcc we have some internally but they aren t fit for external consumption microsoft internal tracked by vso after we add cuda update coverage to github we should remove our completely hacked up cuda coverage from the msvc internal vcr test suite
0
9,133
12,202,801,439
IssuesEvent
2020-04-30 09:32:17
w3c/aria-at
https://api.github.com/repos/w3c/aria-at
closed
Use linear git history
process
See https://www.bitsnbites.eu/a-tidy-linear-git-history/ -- I think there are advantages to a linear git history. In particular, when reviewing what has happened in the past, a clean linear history is much easier to work with. Right now there are a few merge commits in master. It GitHub, this can be enforced by: * Uncheck "Allow merge commits" under [Options, Merge button](https://github.com/w3c/aria-at/settings#merge-button-settings). * Check "Require linear history" for `master` under [Branches](https://github.com/w3c/aria-at/settings/branches). In PRs, the merge button would only allow "squash and merge" (a single commit) or "rebase and merge" (if you want to preserve multiple commits). @mfairchild365 thoughts? @michael-n-cooper can you change this in the settings?
1.0
Use linear git history - See https://www.bitsnbites.eu/a-tidy-linear-git-history/ -- I think there are advantages to a linear git history. In particular, when reviewing what has happened in the past, a clean linear history is much easier to work with. Right now there are a few merge commits in master. It GitHub, this can be enforced by: * Uncheck "Allow merge commits" under [Options, Merge button](https://github.com/w3c/aria-at/settings#merge-button-settings). * Check "Require linear history" for `master` under [Branches](https://github.com/w3c/aria-at/settings/branches). In PRs, the merge button would only allow "squash and merge" (a single commit) or "rebase and merge" (if you want to preserve multiple commits). @mfairchild365 thoughts? @michael-n-cooper can you change this in the settings?
process
use linear git history see i think there are advantages to a linear git history in particular when reviewing what has happened in the past a clean linear history is much easier to work with right now there are a few merge commits in master it github this can be enforced by uncheck allow merge commits under check require linear history for master under in prs the merge button would only allow squash and merge a single commit or rebase and merge if you want to preserve multiple commits thoughts michael n cooper can you change this in the settings
1
16,920
22,266,562,202
IssuesEvent
2022-06-10 08:03:28
apache/arrow-rs
https://api.github.com/repos/apache/arrow-rs
opened
Larger CI Runners to Prevent MIRI OOMing and Improve CI Times
question enhancement development-process
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.** Since updating MIRI in #1828 it is periodically OOMing - https://github.com/apache/arrow-rs/actions/workflows/miri.yaml ![image](https://user-images.githubusercontent.com/1781103/173018511-f61ee955-f90a-467e-8666-f9e1f017d088.png) https://github.com/apache/arrow-rs/actions/runs/2473012537 It has also been a long-time annoyance of mine that **Describe the solution you'd like** I'm not entirely sure what the best course of action here is, rolling back to a 6 month old MIRI is not ideal and would require backing out changes in #1822, but then neither is having it randomly fail. It has been a long-time annoyance of mine that the the CI currently takes ~40 minutes to chug through, despite significant caching. This is largely because the runners are rather piddly - https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources. This also precludes automatically running any meaningful benchmarks (#1274). Perhaps we should invest some time into a more powerful CI system such as [buildkite](https://buildkite.com/apache-arrow) which is used by other arrow projects... **Describe alternatives you've considered** None
1.0
Larger CI Runners to Prevent MIRI OOMing and Improve CI Times - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.** Since updating MIRI in #1828 it is periodically OOMing - https://github.com/apache/arrow-rs/actions/workflows/miri.yaml ![image](https://user-images.githubusercontent.com/1781103/173018511-f61ee955-f90a-467e-8666-f9e1f017d088.png) https://github.com/apache/arrow-rs/actions/runs/2473012537 It has also been a long-time annoyance of mine that **Describe the solution you'd like** I'm not entirely sure what the best course of action here is, rolling back to a 6 month old MIRI is not ideal and would require backing out changes in #1822, but then neither is having it randomly fail. It has been a long-time annoyance of mine that the the CI currently takes ~40 minutes to chug through, despite significant caching. This is largely because the runners are rather piddly - https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources. This also precludes automatically running any meaningful benchmarks (#1274). Perhaps we should invest some time into a more powerful CI system such as [buildkite](https://buildkite.com/apache-arrow) which is used by other arrow projects... **Describe alternatives you've considered** None
process
larger ci runners to prevent miri ooming and improve ci times is your feature request related to a problem or challenge please describe what you are trying to do since updating miri in it is periodically ooming it has also been a long time annoyance of mine that describe the solution you d like i m not entirely sure what the best course of action here is rolling back to a month old miri is not ideal and would require backing out changes in but then neither is having it randomly fail it has been a long time annoyance of mine that the the ci currently takes minutes to chug through despite significant caching this is largely because the runners are rather piddly this also precludes automatically running any meaningful benchmarks perhaps we should invest some time into a more powerful ci system such as which is used by other arrow projects describe alternatives you ve considered none
1
7,225
10,352,117,277
IssuesEvent
2019-09-05 08:34:05
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
Post processor `vsphere` is unnecessary strict on which builders it accepts artifacts from
enhancement post-processor/vsphere
Version: `Packer v1.4.3` Bit of a bug/feature mashup: Post processor `vsphere` as a hardcoded dependency on artifacts generated by the vmware builders. While the only thing used is the first ovf/ova/vmx file found in the files array https://github.com/hashicorp/packer/blob/613d8ef6ab6f8182039e2d430497f5f6457d6a42/post-processor/vsphere/post-processor.go#L119 https://github.com/hashicorp/packer/blob/613d8ef6ab6f8182039e2d430497f5f6457d6a42/post-processor/vsphere/post-processor.go#L22-L25 Usecase: I'm want to build a vsphere ovf image and upload this the vsphere, but without vmware workstation or remote esxi build. workflow: - build image with qemu - use qemu-img and ovftool to convert the qemu image to a .ovf - upload to vsphere with vsphere post processor This fails on the last step, the vsphere upload, because the artifacts are not accepted. Potential sources of ovf's different then now whitelisted - some self build generator - virtualbox Suggested solutions: - remove the check altogether, - add `artifice` as a allowed artifact source a testbuild of packer without the check on builder id's works like expected for my usecase A stripped version of my configuration, it is incomplete https://gist.github.com/mjrider/d390e470a60de7f4afdae6851aca165e
1.0
Post processor `vsphere` is unnecessary strict on which builders it accepts artifacts from - Version: `Packer v1.4.3` Bit of a bug/feature mashup: Post processor `vsphere` as a hardcoded dependency on artifacts generated by the vmware builders. While the only thing used is the first ovf/ova/vmx file found in the files array https://github.com/hashicorp/packer/blob/613d8ef6ab6f8182039e2d430497f5f6457d6a42/post-processor/vsphere/post-processor.go#L119 https://github.com/hashicorp/packer/blob/613d8ef6ab6f8182039e2d430497f5f6457d6a42/post-processor/vsphere/post-processor.go#L22-L25 Usecase: I'm want to build a vsphere ovf image and upload this the vsphere, but without vmware workstation or remote esxi build. workflow: - build image with qemu - use qemu-img and ovftool to convert the qemu image to a .ovf - upload to vsphere with vsphere post processor This fails on the last step, the vsphere upload, because the artifacts are not accepted. Potential sources of ovf's different then now whitelisted - some self build generator - virtualbox Suggested solutions: - remove the check altogether, - add `artifice` as a allowed artifact source a testbuild of packer without the check on builder id's works like expected for my usecase A stripped version of my configuration, it is incomplete https://gist.github.com/mjrider/d390e470a60de7f4afdae6851aca165e
process
post processor vsphere is unnecessary strict on which builders it accepts artifacts from version packer bit of a bug feature mashup post processor vsphere as a hardcoded dependency on artifacts generated by the vmware builders while the only thing used is the first ovf ova vmx file found in the files array usecase i m want to build a vsphere ovf image and upload this the vsphere but without vmware workstation or remote esxi build workflow build image with qemu use qemu img and ovftool to convert the qemu image to a ovf upload to vsphere with vsphere post processor this fails on the last step the vsphere upload because the artifacts are not accepted potential sources of ovf s different then now whitelisted some self build generator virtualbox suggested solutions remove the check altogether add artifice as a allowed artifact source a testbuild of packer without the check on builder id s works like expected for my usecase a stripped version of my configuration it is incomplete
1
7,877
11,046,606,921
IssuesEvent
2019-12-09 17:10:58
raxod502/straight.el
https://api.github.com/repos/raxod502/straight.el
closed
changes to straight-format-timestamp lead to failed find command on start
bug eager modification detection external command find modification detection process buffer regression
## What's wrong Recent commit breaks on OSX. See https://github.com/raxod502/straight.el/commit/dce8208f5d9d95127f0a47ff39d5a98eac1cfc7b Contents of `*straight-process*`: ``` $ cd / $ find /dev/null -newermt 2018-01-01\ 12\:00\:00 /dev/null [Return code: 0] $ cd /Users/cwhatley/.emacs.d/straight/repos/melpa/ $ find . -name .git -prune -o -newermt 2019-12-05\ 18\:54\:12.656686 -print find: Can't parse date/time: 2019-12-05 18:54:12.656686 [Return code: 1] ``` ## Directions to reproduce Run with straight.el off develop branch. ### Version information * Emacs version: 27.0.50 built from head on 12/3/2019 * Operating system: MacOS 10.15.2 beta
1.0
changes to straight-format-timestamp lead to failed find command on start - ## What's wrong Recent commit breaks on OSX. See https://github.com/raxod502/straight.el/commit/dce8208f5d9d95127f0a47ff39d5a98eac1cfc7b Contents of `*straight-process*`: ``` $ cd / $ find /dev/null -newermt 2018-01-01\ 12\:00\:00 /dev/null [Return code: 0] $ cd /Users/cwhatley/.emacs.d/straight/repos/melpa/ $ find . -name .git -prune -o -newermt 2019-12-05\ 18\:54\:12.656686 -print find: Can't parse date/time: 2019-12-05 18:54:12.656686 [Return code: 1] ``` ## Directions to reproduce Run with straight.el off develop branch. ### Version information * Emacs version: 27.0.50 built from head on 12/3/2019 * Operating system: MacOS 10.15.2 beta
process
changes to straight format timestamp lead to failed find command on start what s wrong recent commit breaks on osx see contents of straight process cd find dev null newermt dev null cd users cwhatley emacs d straight repos melpa find name git prune o newermt print find can t parse date time directions to reproduce run with straight el off develop branch version information emacs version built from head on operating system macos beta
1
301,971
22,782,628,894
IssuesEvent
2022-07-08 22:02:42
croixbleueqc/python-noops
https://api.github.com/repos/croixbleueqc/python-noops
closed
Documentation for experimental features
documentation
Improve the documentation to use experimental features.
1.0
Documentation for experimental features - Improve the documentation to use experimental features.
non_process
documentation for experimental features improve the documentation to use experimental features
0
198,847
6,978,386,650
IssuesEvent
2017-12-12 17:21:14
dotnet-architecture/eShopOnContainers
https://api.github.com/repos/dotnet-architecture/eShopOnContainers
closed
Need to implement Validations on every Domain Entity constructor and update method (Ordering Domain Model)
Feature Post-VS2017-Migration Priority 2
The current implementation in the Ordering Domain Model has almost no Validations in place. This is a very important area in a Domain Model as Domain Entities and Aggregates are responsible for their "always valid" state. Invariant enforcement is the responsibility of the domain entity itself (especially of the Aggregate-Root) and therefore an entity shouldn't even be able to exist without being valid. Take into account: http://gorodinski.com/blog/2012/05/19/validation-in-domain-driven-design-ddd/ http://enterprisecraftsmanship.com/2015/03/07/functional-c-primitive-obsession/ Consider to use the SPECIFICATION VALIDATION pattern for this. To be confirmed, might not be the best approach..: https://www.codeproject.com/Tips/790758/Specification-and-Notification-Patterns http://web.archive.org/web/20130117134221/http://codeinsanity.com/archive/2008/12/02/a-framework-for-validation-and-business-rules.aspx Specifications: https://github.com/riteshrao/ncommon/tree/v1.2/NCommon/src/Specifications Validations: https://github.com/riteshrao/ncommon/tree/v1.2/NCommon/src/Rules (especially the classes for ValidationResult and ValidationError) https://lostechies.com/jimmybogard/2007/10/25/specifications-versus-validators/ **[OPTION 1: Single validation per attribute/field with Data Annotations]** For simple validations, we could use DataAnnotations with optional regular expressions: like explained here: https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation But I don't like how intrusive are those data annotations in my domain model... as it takes a dependency on ModelState.IsValid and MVC controllers, I believe. The model validation occurs prior to each controller action being invoked, and it is the action method’s responsibility to inspect ModelState.IsValid and react appropriately. So, use it or not depending on how coupled you’d like your model to be with that infrastructure In any case, it would be important to be able not to create an entity object if those data annotations are not satisfied, so we follow the "always valid entity" principle. I don't like the "ModelState.IsValid()" used in ASP.NET. Entities should always be valid or not created if the validation failed. ...Instead of data annotations attributes we could also have model validations by inheriting the entity class from IValidateObject... See code below at the end of this issue for sample code with IValidateObject. **OPTION 2: [Validation rules are specific to your business]** In addition to that, there are other cases which are not specific to a single data attribute but involves multiple fields with more complex invariants. In that case we need to implement that validation logic in the constructor or update methods or we could also use "custom validation attributes" as explained here: https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation Exceptions or notifications (Notification pattern) should be thrown when the invariant rules are violated. We need to implement a consistent/unified way to propagate validation notification to the multiple client apps, Web MVC, Web SPA and Xamarin Mobile Apps. ------------------------------------------------------------------------------------------------------------------- SAMPLE CODE TO USE IValidatableObject instead of Data Annotations attributes public abstract class EntityBase : IValidatableObject { #region Implementation of IValidatableObject public abstract System.Collections.Generic.IEnumerable<ValidationResult> Validate(ValidationContext validationContext); #endregion } public class Employee:EntityBase { public string FullName { get; set; } public virtual Address Address { get; set; } public ICollection<Benefit> Benefits {get;set;} public override IEnumerable<ValidationResult> Validate(ValidationContext validationContext) { if (Address == null) { yield return new EnhancedMappedValidationResult<Employee>(d => d.Address, "Address is mandatory"); } if (FullName == null) { yield return new EnhancedMappedValidationResult<Employee>(d => d.FullName, "FullName is mandatory"); } } } public class EnhancedMappedValidationResult<TEntity> : ValidationResult { /// <summary> /// Property in error on which the validation result error is assigned. /// Getter only because we want the exception to be set when the validation result /// is created. /// </summary> /// <value> /// The property. /// </value> public Expression<Func<TEntity, object>> Property { get; private set; } /// <summary> /// Initializes by setting the propertu and the error. /// </summary> /// <param name="property">The property.</param> /// <param name="errorMessage">The error message.</param> public EnhancedMappedValidationResult(Expression<Func<TEntity, object>> property, string errorMessage) : base(errorMessage, new List<string>()) { Property = property; ((List<string>)base.MemberNames).Add(LambdaUtilities.GetExpressionText(Property)); } }
1.0
Need to implement Validations on every Domain Entity constructor and update method (Ordering Domain Model) - The current implementation in the Ordering Domain Model has almost no Validations in place. This is a very important area in a Domain Model as Domain Entities and Aggregates are responsible for their "always valid" state. Invariant enforcement is the responsibility of the domain entity itself (especially of the Aggregate-Root) and therefore an entity shouldn't even be able to exist without being valid. Take into account: http://gorodinski.com/blog/2012/05/19/validation-in-domain-driven-design-ddd/ http://enterprisecraftsmanship.com/2015/03/07/functional-c-primitive-obsession/ Consider to use the SPECIFICATION VALIDATION pattern for this. To be confirmed, might not be the best approach..: https://www.codeproject.com/Tips/790758/Specification-and-Notification-Patterns http://web.archive.org/web/20130117134221/http://codeinsanity.com/archive/2008/12/02/a-framework-for-validation-and-business-rules.aspx Specifications: https://github.com/riteshrao/ncommon/tree/v1.2/NCommon/src/Specifications Validations: https://github.com/riteshrao/ncommon/tree/v1.2/NCommon/src/Rules (especially the classes for ValidationResult and ValidationError) https://lostechies.com/jimmybogard/2007/10/25/specifications-versus-validators/ **[OPTION 1: Single validation per attribute/field with Data Annotations]** For simple validations, we could use DataAnnotations with optional regular expressions: like explained here: https://docs.microsoft.com/en-us/aspnet/core/tutorials/first-mvc-app/validation But I don't like how intrusive are those data annotations in my domain model... as it takes a dependency on ModelState.IsValid and MVC controllers, I believe. The model validation occurs prior to each controller action being invoked, and it is the action method’s responsibility to inspect ModelState.IsValid and react appropriately. So, use it or not depending on how coupled you’d like your model to be with that infrastructure In any case, it would be important to be able not to create an entity object if those data annotations are not satisfied, so we follow the "always valid entity" principle. I don't like the "ModelState.IsValid()" used in ASP.NET. Entities should always be valid or not created if the validation failed. ...Instead of data annotations attributes we could also have model validations by inheriting the entity class from IValidateObject... See code below at the end of this issue for sample code with IValidateObject. **OPTION 2: [Validation rules are specific to your business]** In addition to that, there are other cases which are not specific to a single data attribute but involves multiple fields with more complex invariants. In that case we need to implement that validation logic in the constructor or update methods or we could also use "custom validation attributes" as explained here: https://docs.microsoft.com/en-us/aspnet/core/mvc/models/validation Exceptions or notifications (Notification pattern) should be thrown when the invariant rules are violated. We need to implement a consistent/unified way to propagate validation notification to the multiple client apps, Web MVC, Web SPA and Xamarin Mobile Apps. ------------------------------------------------------------------------------------------------------------------- SAMPLE CODE TO USE IValidatableObject instead of Data Annotations attributes public abstract class EntityBase : IValidatableObject { #region Implementation of IValidatableObject public abstract System.Collections.Generic.IEnumerable<ValidationResult> Validate(ValidationContext validationContext); #endregion } public class Employee:EntityBase { public string FullName { get; set; } public virtual Address Address { get; set; } public ICollection<Benefit> Benefits {get;set;} public override IEnumerable<ValidationResult> Validate(ValidationContext validationContext) { if (Address == null) { yield return new EnhancedMappedValidationResult<Employee>(d => d.Address, "Address is mandatory"); } if (FullName == null) { yield return new EnhancedMappedValidationResult<Employee>(d => d.FullName, "FullName is mandatory"); } } } public class EnhancedMappedValidationResult<TEntity> : ValidationResult { /// <summary> /// Property in error on which the validation result error is assigned. /// Getter only because we want the exception to be set when the validation result /// is created. /// </summary> /// <value> /// The property. /// </value> public Expression<Func<TEntity, object>> Property { get; private set; } /// <summary> /// Initializes by setting the propertu and the error. /// </summary> /// <param name="property">The property.</param> /// <param name="errorMessage">The error message.</param> public EnhancedMappedValidationResult(Expression<Func<TEntity, object>> property, string errorMessage) : base(errorMessage, new List<string>()) { Property = property; ((List<string>)base.MemberNames).Add(LambdaUtilities.GetExpressionText(Property)); } }
non_process
need to implement validations on every domain entity constructor and update method ordering domain model the current implementation in the ordering domain model has almost no validations in place this is a very important area in a domain model as domain entities and aggregates are responsible for their always valid state invariant enforcement is the responsibility of the domain entity itself especially of the aggregate root and therefore an entity shouldn t even be able to exist without being valid take into account consider to use the specification validation pattern for this to be confirmed might not be the best approach specifications validations especially the classes for validationresult and validationerror for simple validations we could use dataannotations with optional regular expressions like explained here but i don t like how intrusive are those data annotations in my domain model as it takes a dependency on modelstate isvalid and mvc controllers i believe the model validation occurs prior to each controller action being invoked and it is the action method’s responsibility to inspect modelstate isvalid and react appropriately so use it or not depending on how coupled you’d like your model to be with that infrastructure in any case it would be important to be able not to create an entity object if those data annotations are not satisfied so we follow the always valid entity principle i don t like the modelstate isvalid used in asp net entities should always be valid or not created if the validation failed instead of data annotations attributes we could also have model validations by inheriting the entity class from ivalidateobject see code below at the end of this issue for sample code with ivalidateobject option in addition to that there are other cases which are not specific to a single data attribute but involves multiple fields with more complex invariants in that case we need to implement that validation logic in the constructor or update methods or we could also use custom validation attributes as explained here exceptions or notifications notification pattern should be thrown when the invariant rules are violated we need to implement a consistent unified way to propagate validation notification to the multiple client apps web mvc web spa and xamarin mobile apps sample code to use ivalidatableobject instead of data annotations attributes public abstract class entitybase ivalidatableobject region implementation of ivalidatableobject public abstract system collections generic ienumerable validate validationcontext validationcontext endregion public class employee entitybase public string fullname get set public virtual address address get set public icollection benefits get set public override ienumerable validate validationcontext validationcontext if address null yield return new enhancedmappedvalidationresult d d address address is mandatory if fullname null yield return new enhancedmappedvalidationresult d d fullname fullname is mandatory public class enhancedmappedvalidationresult validationresult property in error on which the validation result error is assigned getter only because we want the exception to be set when the validation result is created the property public expression property get private set initializes by setting the propertu and the error the property the error message public enhancedmappedvalidationresult expression property string errormessage base errormessage new list property property list base membernames add lambdautilities getexpressiontext property
0
13,090
15,440,018,495
IssuesEvent
2021-03-08 02:08:53
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Use the Public Suffix List for domain validation
AREA: client AREA: server STATE: Stale SYSTEM: URL processing SYSTEM: cookie TYPE: enhancement
* cookie validation * same-origin check https://publicsuffix.org/ https://publicsuffix.org/list/public_suffix_list.dat https://github.com/wrangr/psl
1.0
Use the Public Suffix List for domain validation - * cookie validation * same-origin check https://publicsuffix.org/ https://publicsuffix.org/list/public_suffix_list.dat https://github.com/wrangr/psl
process
use the public suffix list for domain validation cookie validation same origin check
1
6,836
9,979,036,985
IssuesEvent
2019-07-09 21:30:10
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
The force-unique parameter does not work when the duplicated topicref is chunked
bug preprocess/chunking priority/medium stale
- I believe this to be a bug, not a question about using DITA-OT. - I read the [CONTRIBUTING][] file. ```xml <map> <title>Map</title> <topicref href="topic.dita"/> <topicref href="topic.dita" chunk="to-content"> <topicref href="subtopic.dita"/> </topicref> </map> ``` When transforming the above DITA Map to XHTML, with the `force-unique` parameter set to `true`, a single HTML file is generated for both `topicref`s that reference the `"topic.dita"` topic. [force-unique.zip](https://github.com/dita-ot/dita-ot/files/1091307/force-unique.zip) ## Expected Behavior - Two HTML files should be generated for `topic.dita`: `topic.html` and `topic_2.html` - The content of the `index.html` file should be: ```xml <div> <ul class="map"> <li class="topicref"><a href="topic.html">Topic</a></li> <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul> <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li> </ul></li> </ul> </div> ``` ## Actual Behavior - Only one HTML file is generated, namely, `topic_2.html` (the one with the chunked content). - `index.html` contains: ```xml <div> <ul class="map"> <li class="topicref"><a href="topic_2.html#topic">Topic</a></li> <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul> <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li> </ul></li> </ul> </div> ``` ## Steps to Reproduce 1. Transform the attached DITA map to XHTML setting the `force-unique` parameter to `true`. ## Copy of the error message, log file or stack trace ## Environment * DITA-OT version: 2.4.4 * Operating system and version _(Linux, macOS, Windows)_: Windows 10 Pro * How did you run DITA-OT? oXygen 19 * Transformation type _(HTML5, PDF, custom, etc.)_: XHTML [CONTRIBUTING]: https://github.com/dita-ot/dita-ot/blob/develop/.github/CONTRIBUTING.md
1.0
The force-unique parameter does not work when the duplicated topicref is chunked - - I believe this to be a bug, not a question about using DITA-OT. - I read the [CONTRIBUTING][] file. ```xml <map> <title>Map</title> <topicref href="topic.dita"/> <topicref href="topic.dita" chunk="to-content"> <topicref href="subtopic.dita"/> </topicref> </map> ``` When transforming the above DITA Map to XHTML, with the `force-unique` parameter set to `true`, a single HTML file is generated for both `topicref`s that reference the `"topic.dita"` topic. [force-unique.zip](https://github.com/dita-ot/dita-ot/files/1091307/force-unique.zip) ## Expected Behavior - Two HTML files should be generated for `topic.dita`: `topic.html` and `topic_2.html` - The content of the `index.html` file should be: ```xml <div> <ul class="map"> <li class="topicref"><a href="topic.html">Topic</a></li> <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul> <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li> </ul></li> </ul> </div> ``` ## Actual Behavior - Only one HTML file is generated, namely, `topic_2.html` (the one with the chunked content). - `index.html` contains: ```xml <div> <ul class="map"> <li class="topicref"><a href="topic_2.html#topic">Topic</a></li> <li class="topicref"><a href="topic_2.html#topic">Topic</a><ul> <li class="topicref"><a href="topic_2.html#subtopic">Subtopic</a></li> </ul></li> </ul> </div> ``` ## Steps to Reproduce 1. Transform the attached DITA map to XHTML setting the `force-unique` parameter to `true`. ## Copy of the error message, log file or stack trace ## Environment * DITA-OT version: 2.4.4 * Operating system and version _(Linux, macOS, Windows)_: Windows 10 Pro * How did you run DITA-OT? oXygen 19 * Transformation type _(HTML5, PDF, custom, etc.)_: XHTML [CONTRIBUTING]: https://github.com/dita-ot/dita-ot/blob/develop/.github/CONTRIBUTING.md
process
the force unique parameter does not work when the duplicated topicref is chunked i believe this to be a bug not a question about using dita ot i read the file xml map when transforming the above dita map to xhtml with the force unique parameter set to true a single html file is generated for both topicref s that reference the topic dita topic expected behavior two html files should be generated for topic dita topic html and topic html the content of the index html file should be xml topic topic subtopic actual behavior only one html file is generated namely topic html the one with the chunked content index html contains xml topic topic subtopic steps to reproduce transform the attached dita map to xhtml setting the force unique parameter to true copy of the error message log file or stack trace environment dita ot version operating system and version linux macos windows windows pro how did you run dita ot oxygen transformation type pdf custom etc xhtml
1
90,316
15,856,106,199
IssuesEvent
2021-04-08 01:32:05
KingdomB/liri-node-app
https://api.github.com/repos/KingdomB/liri-node-app
opened
CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz
security vulnerability
## CVE-2020-7754 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-user-validate-1.0.0.tgz</b></p></summary> <p>User validations for npm</p> <p>Library home page: <a href="https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz">https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz</a></p> <p>Path to dependency file: /liri-node-app/package.json</p> <p>Path to vulnerable library: liri-node-app/node_modules/npm/node_modules/npm-user-validate/package.json</p> <p> Dependency Hierarchy: - npm-6.9.0.tgz (Root Library) - :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters. <p>Publish Date: 2020-10-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7754>CVE-2020-7754</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 1.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7754 (High) detected in npm-user-validate-1.0.0.tgz - ## CVE-2020-7754 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-user-validate-1.0.0.tgz</b></p></summary> <p>User validations for npm</p> <p>Library home page: <a href="https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz">https://registry.npmjs.org/npm-user-validate/-/npm-user-validate-1.0.0.tgz</a></p> <p>Path to dependency file: /liri-node-app/package.json</p> <p>Path to vulnerable library: liri-node-app/node_modules/npm/node_modules/npm-user-validate/package.json</p> <p> Dependency Hierarchy: - npm-6.9.0.tgz (Root Library) - :x: **npm-user-validate-1.0.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects the package npm-user-validate before 1.0.1. The regex that validates user emails took exponentially longer to process long input strings beginning with @ characters. <p>Publish Date: 2020-10-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7754>CVE-2020-7754</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7754</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 1.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in npm user validate tgz cve high severity vulnerability vulnerable library npm user validate tgz user validations for npm library home page a href path to dependency file liri node app package json path to vulnerable library liri node app node modules npm node modules npm user validate package json dependency hierarchy npm tgz root library x npm user validate tgz vulnerable library vulnerability details this affects the package npm user validate before the regex that validates user emails took exponentially longer to process long input strings beginning with characters publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
5,229
8,029,977,014
IssuesEvent
2018-07-27 17:55:40
GoogleCloudPlatform/google-cloud-python
https://api.github.com/repos/GoogleCloudPlatform/google-cloud-python
closed
Bigtable: 'list_instances' systest flakes w/ simultaneous runs
api: bigtable flaky testing type: process
See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7441 The fix from #5476 didn't account for the fact that the "other" test run might be deleting instances, too.
1.0
Bigtable: 'list_instances' systest flakes w/ simultaneous runs - See: https://circleci.com/gh/GoogleCloudPlatform/google-cloud-python/7441 The fix from #5476 didn't account for the fact that the "other" test run might be deleting instances, too.
process
bigtable list instances systest flakes w simultaneous runs see the fix from didn t account for the fact that the other test run might be deleting instances too
1
650,906
21,435,889,225
IssuesEvent
2022-04-24 01:38:07
ChrisNZL/Tallowmere2
https://api.github.com/repos/ChrisNZL/Tallowmere2
closed
Tilted: Gold and souls sometimes unable to be collected?
🦟 bug ⚠ priority ✨ alterants ⚛ physics
Manual report. 0.2.3. Feedback ID: 20210202-DJVRP Claiming Tilted room modifier sometimes has mob's coins, hearts, and souls being stuck in the walls, unable to be retrieved. I _think_ only Keys have the robust world-position checking happening; not so much for coins, hearts, and souls – need to investigate.
1.0
Tilted: Gold and souls sometimes unable to be collected? - Manual report. 0.2.3. Feedback ID: 20210202-DJVRP Claiming Tilted room modifier sometimes has mob's coins, hearts, and souls being stuck in the walls, unable to be retrieved. I _think_ only Keys have the robust world-position checking happening; not so much for coins, hearts, and souls – need to investigate.
non_process
tilted gold and souls sometimes unable to be collected manual report feedback id djvrp claiming tilted room modifier sometimes has mob s coins hearts and souls being stuck in the walls unable to be retrieved i think only keys have the robust world position checking happening not so much for coins hearts and souls – need to investigate
0
150,542
11,965,324,715
IssuesEvent
2020-04-05 22:55:18
reactive-firewall/Pocket-PiAP
https://api.github.com/repos/reactive-firewall/Pocket-PiAP
closed
ACL issue on older upgrades causes lockout by 406 error
Blocker Bug (Regression) Testing Wanted
# Basic Info >> When reporting an issue, please list the version of Pocket you are using and any relevant information about your software environment: Version: Pocket v0.3.5 env: multiple - OS type >> `uname -a` - [x] linux - [x] Mac OS X (darwin) - `piaplib` version: >> `sudo -u pocket-admin python3 -m piaplib.book.version --verbose --all` output ```plain piaplib: 0.3.5 Python: 3.4.2 (default, Oct 19 2014, 13:31:11) [GCC 4.9.1] sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0) Copyright (c) 2001-2014 Python Software Foundation. All Rights Reserved. Copyright (c) 2000 BeOpen.com. All Rights Reserved. Copyright (c) 1995-2001 Corporation for National Research Initiatives. All Rights Reserved. Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All Rights Reserved. Backend Python Library: /usr/bin/python3 Platform: linux ``` ## Steps to Reproduce Issue: >> Avoid vague language like "it doesn't work." Please describe as specifically as you can what behavior you are actually seeing (eg: an error message? a nil return value?). 1. run upgrade script as per https://github.com/reactive-firewall/Pocket-PiAP/wiki/Upgrading-Manually-while-in-Beta 2. reboot via `sudo shutdown -hr now` 3. wait for reboot and network to come back up 4. attempt to login to web UI via `https://pocket.piaplib.local` with valid user (previously working user) 5. get 406 error ## Logs (If Available): >> Please attach any logs from Pocket-PiAP relevant to the bug. **REDACTED** ## Additional Information: >> Anything else relevant to this issue. cause of this issue is a permissions issue on the backend workaround: ```bash sudo find /usr/local/lib/python3.4/dist-packages/ -type d -print0 2>/dev/null | xargs -0 -L 1 -I{} sudo chmod -v 755 {} sudo find /usr/local/lib/python*/dist-packages/ -type f -iname "*.py" -print0 2>/dev/null | xargs -0 -L 1 -I{} sudo chmod -v 755 {} # test sudo -u pocket python3 -m piaplib.pocket keyring saltify --msg test -S test | grep -F "9ba1f63365a6caf66e46348f43cdef956015bea997adeb06e69007ee3ff517df10fc5eb860da3d43b82c2a040c931119d2dfc6d08e253742293a868cc2d82015" || echo "FAILED" ```
1.0
ACL issue on older upgrades causes lockout by 406 error - # Basic Info >> When reporting an issue, please list the version of Pocket you are using and any relevant information about your software environment: Version: Pocket v0.3.5 env: multiple - OS type >> `uname -a` - [x] linux - [x] Mac OS X (darwin) - `piaplib` version: >> `sudo -u pocket-admin python3 -m piaplib.book.version --verbose --all` output ```plain piaplib: 0.3.5 Python: 3.4.2 (default, Oct 19 2014, 13:31:11) [GCC 4.9.1] sys.flags(debug=0, inspect=0, interactive=0, optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0, ignore_environment=0, verbose=0, bytes_warning=0, quiet=0, hash_randomization=1, isolated=0) Copyright (c) 2001-2014 Python Software Foundation. All Rights Reserved. Copyright (c) 2000 BeOpen.com. All Rights Reserved. Copyright (c) 1995-2001 Corporation for National Research Initiatives. All Rights Reserved. Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam. All Rights Reserved. Backend Python Library: /usr/bin/python3 Platform: linux ``` ## Steps to Reproduce Issue: >> Avoid vague language like "it doesn't work." Please describe as specifically as you can what behavior you are actually seeing (eg: an error message? a nil return value?). 1. run upgrade script as per https://github.com/reactive-firewall/Pocket-PiAP/wiki/Upgrading-Manually-while-in-Beta 2. reboot via `sudo shutdown -hr now` 3. wait for reboot and network to come back up 4. attempt to login to web UI via `https://pocket.piaplib.local` with valid user (previously working user) 5. get 406 error ## Logs (If Available): >> Please attach any logs from Pocket-PiAP relevant to the bug. **REDACTED** ## Additional Information: >> Anything else relevant to this issue. cause of this issue is a permissions issue on the backend workaround: ```bash sudo find /usr/local/lib/python3.4/dist-packages/ -type d -print0 2>/dev/null | xargs -0 -L 1 -I{} sudo chmod -v 755 {} sudo find /usr/local/lib/python*/dist-packages/ -type f -iname "*.py" -print0 2>/dev/null | xargs -0 -L 1 -I{} sudo chmod -v 755 {} # test sudo -u pocket python3 -m piaplib.pocket keyring saltify --msg test -S test | grep -F "9ba1f63365a6caf66e46348f43cdef956015bea997adeb06e69007ee3ff517df10fc5eb860da3d43b82c2a040c931119d2dfc6d08e253742293a868cc2d82015" || echo "FAILED" ```
non_process
acl issue on older upgrades causes lockout by error basic info when reporting an issue please list the version of pocket you are using and any relevant information about your software environment version pocket env multiple os type uname a linux mac os x darwin piaplib version sudo u pocket admin m piaplib book version verbose all output plain piaplib python default oct sys flags debug inspect interactive optimize dont write bytecode no user site no site ignore environment verbose bytes warning quiet hash randomization isolated copyright c python software foundation all rights reserved copyright c beopen com all rights reserved copyright c corporation for national research initiatives all rights reserved copyright c stichting mathematisch centrum amsterdam all rights reserved backend python library usr bin platform linux steps to reproduce issue avoid vague language like it doesn t work please describe as specifically as you can what behavior you are actually seeing eg an error message a nil return value run upgrade script as per reboot via sudo shutdown hr now wait for reboot and network to come back up attempt to login to web ui via with valid user previously working user get error logs if available please attach any logs from pocket piap relevant to the bug redacted additional information anything else relevant to this issue cause of this issue is a permissions issue on the backend workaround bash sudo find usr local lib dist packages type d dev null xargs l i sudo chmod v sudo find usr local lib python dist packages type f iname py dev null xargs l i sudo chmod v test sudo u pocket m piaplib pocket keyring saltify msg test s test grep f echo failed
0
13,751
16,503,145,138
IssuesEvent
2021-05-25 16:12:17
apache/iotdb
https://api.github.com/repos/apache/iotdb
closed
Do not return null in group by time query
Easy-Fixed Improvement Module - Query Processing Module - SQL
If all points in one row are null, we'd better give users the choice not to return this row. 在 group by time 查询中,让用户可选择,不返回空的行 ![image](https://user-images.githubusercontent.com/7240743/101747809-b6f0a980-3b06-11eb-8b9a-34ec4c5f539f.png)
1.0
Do not return null in group by time query - If all points in one row are null, we'd better give users the choice not to return this row. 在 group by time 查询中,让用户可选择,不返回空的行 ![image](https://user-images.githubusercontent.com/7240743/101747809-b6f0a980-3b06-11eb-8b9a-34ec4c5f539f.png)
process
do not return null in group by time query if all points in one row are null we d better give users the choice not to return this row 在 group by time 查询中,让用户可选择,不返回空的行
1
72,526
9,596,556,756
IssuesEvent
2019-05-09 18:54:50
backdrop-ops/backdropcms.org
https://api.github.com/repos/backdrop-ops/backdropcms.org
closed
Improve "Upgrade from Drupal 7" documentation
content - documentation
On the page https://backdropcms.org/upgrade-from-drupal we have detailed step-by-step instructions about upgrading a Drupal 7 site to Backdrop. I've recently upgraded a D7 site to Backdrop. I had some problems to get the upgrade process running, but at the end I managed to do the upgrade. The instructions were generally helpful. I had however several questions that are not covered by the documentation. Other subjects are covered but not clear enough, in my opinion. I'll make some improvement suggestions below. Will also post some questions to better understand the upgrade process, and its prerequisites.
1.0
Improve "Upgrade from Drupal 7" documentation - On the page https://backdropcms.org/upgrade-from-drupal we have detailed step-by-step instructions about upgrading a Drupal 7 site to Backdrop. I've recently upgraded a D7 site to Backdrop. I had some problems to get the upgrade process running, but at the end I managed to do the upgrade. The instructions were generally helpful. I had however several questions that are not covered by the documentation. Other subjects are covered but not clear enough, in my opinion. I'll make some improvement suggestions below. Will also post some questions to better understand the upgrade process, and its prerequisites.
non_process
improve upgrade from drupal documentation on the page we have detailed step by step instructions about upgrading a drupal site to backdrop i ve recently upgraded a site to backdrop i had some problems to get the upgrade process running but at the end i managed to do the upgrade the instructions were generally helpful i had however several questions that are not covered by the documentation other subjects are covered but not clear enough in my opinion i ll make some improvement suggestions below will also post some questions to better understand the upgrade process and its prerequisites
0
99,396
30,444,681,160
IssuesEvent
2023-07-15 14:07:51
Krypton-Suite/Standard-Toolkit
https://api.github.com/repos/Krypton-Suite/Standard-Toolkit
closed
[Bug]: Nightly "Build" option causes erros in build sequences
bug build system
Using latest VS preview: and alpha branch: ![image](https://github.com/Krypton-Suite/Standard-Toolkit/assets/2418812/1218df78-b4f9-4eef-8ff9-dd6ca178d499)
1.0
[Bug]: Nightly "Build" option causes erros in build sequences - Using latest VS preview: and alpha branch: ![image](https://github.com/Krypton-Suite/Standard-Toolkit/assets/2418812/1218df78-b4f9-4eef-8ff9-dd6ca178d499)
non_process
nightly build option causes erros in build sequences using latest vs preview and alpha branch
0
20,361
27,020,691,849
IssuesEvent
2023-02-11 01:26:36
MikaylaFischler/cc-mek-scada
https://api.github.com/repos/MikaylaFischler/cc-mek-scada
closed
Unit Ready Condition Should Include PLC Data Received
bug supervisor stability process control
Process start before struct received crashes the supervisor since struct is an empty array
1.0
Unit Ready Condition Should Include PLC Data Received - Process start before struct received crashes the supervisor since struct is an empty array
process
unit ready condition should include plc data received process start before struct received crashes the supervisor since struct is an empty array
1
2,809
5,738,519,486
IssuesEvent
2017-04-23 05:07:27
SIMEXP/niak
https://api.github.com/repos/SIMEXP/niak
closed
fade of the overlay in the QC of the fMRI preprocessing
enhancement preprocessing quality control
@illdopejake wrote: I find myself modifying the "fade" of the overlay every time. When the program opens, the overlay is set to be 50% each image. I always find myself bring it toward the left, so its about 75% anatomical(pane 2) and 25% from pane 1. This is because the image in Pane 1 default color scale is a heat map, which is much brighter, and thus much more visible in the overlay.
1.0
fade of the overlay in the QC of the fMRI preprocessing - @illdopejake wrote: I find myself modifying the "fade" of the overlay every time. When the program opens, the overlay is set to be 50% each image. I always find myself bring it toward the left, so its about 75% anatomical(pane 2) and 25% from pane 1. This is because the image in Pane 1 default color scale is a heat map, which is much brighter, and thus much more visible in the overlay.
process
fade of the overlay in the qc of the fmri preprocessing illdopejake wrote i find myself modifying the fade of the overlay every time when the program opens the overlay is set to be each image i always find myself bring it toward the left so its about anatomical pane and from pane this is because the image in pane default color scale is a heat map which is much brighter and thus much more visible in the overlay
1
468,036
13,460,232,635
IssuesEvent
2020-09-09 13:21:47
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
vm-mpassard.rezopole.net - SVG mouse event not happening with element
browser-firefox engine-gecko form-v2-experiment os-linux priority-normal severity-minor
<!-- @browser: Firefox 72.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0 --> <!-- @reported_with: --> <!-- @extra_labels: form-v2-experiment --> **URL**: http://vm-mpassard.rezopole.net/index.php?id=1111 **Browser / Version**: Firefox 72.0 **Operating System**: Ubuntu **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: SVG mouse event not happening with <use> element **Steps to Reproduce**: I have an issue using <use> on svg. I'm using d3js library. On this graph, the horizontal colored graph is using <use> element. The mouse event linked to the element in <def> isn't firing while it should, as we specified pointer-events to get them to that level. The comportement is inconsistant with different browsers : Opera & Chrome are working great, Firefox & Edge are not. Plus, second issue, there is a problem when zomming on svg using scale and translate. Here, you can zoom on a specific area using the graph reduction under. The problem here is that the svg isn't reloading it's draw unless a mouse event happen somewhere, and we get a blur image. [Screenshot](https://webcompat.com/uploads/2020/1/ef5ae42d-9729-42da-af49-b6b22c27db33.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
vm-mpassard.rezopole.net - SVG mouse event not happening with element - <!-- @browser: Firefox 72.0 --> <!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0 --> <!-- @reported_with: --> <!-- @extra_labels: form-v2-experiment --> **URL**: http://vm-mpassard.rezopole.net/index.php?id=1111 **Browser / Version**: Firefox 72.0 **Operating System**: Ubuntu **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: SVG mouse event not happening with <use> element **Steps to Reproduce**: I have an issue using <use> on svg. I'm using d3js library. On this graph, the horizontal colored graph is using <use> element. The mouse event linked to the element in <def> isn't firing while it should, as we specified pointer-events to get them to that level. The comportement is inconsistant with different browsers : Opera & Chrome are working great, Firefox & Edge are not. Plus, second issue, there is a problem when zomming on svg using scale and translate. Here, you can zoom on a specific area using the graph reduction under. The problem here is that the svg isn't reloading it's draw unless a mouse event happen somewhere, and we get a blur image. [Screenshot](https://webcompat.com/uploads/2020/1/ef5ae42d-9729-42da-af49-b6b22c27db33.jpg) <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
vm mpassard rezopole net svg mouse event not happening with element url browser version firefox operating system ubuntu tested another browser yes chrome problem type something else description svg mouse event not happening with element steps to reproduce i have an issue using on svg i m using library on this graph the horizontal colored graph is using element the mouse event linked to the element in isn t firing while it should as we specified pointer events to get them to that level the comportement is inconsistant with different browsers opera chrome are working great firefox edge are not plus second issue there is a problem when zomming on svg using scale and translate here you can zoom on a specific area using the graph reduction under the problem here is that the svg isn t reloading it s draw unless a mouse event happen somewhere and we get a blur image browser configuration none from with ❤️
0
65,935
14,761,967,302
IssuesEvent
2021-01-09 01:11:44
TIBCOSoftware/bw-sample-for-amazon-sns
https://api.github.com/repos/TIBCOSoftware/bw-sample-for-amazon-sns
opened
CVE-2020-36188 (Medium) detected in jackson-databind-2.6.6.jar
security vulnerability
## CVE-2020-36188 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: bw-sample-for-amazon-sns/SNSTCI/SNSImpl/lib/jackson-databind-2.6.6.jar,bw-sample-for-amazon-sns/SNSTestSuite/SNSImpl/lib/jackson-databind-2.6.6.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.6.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36188>CVE-2020-36188</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2996">https://github.com/FasterXML/jackson-databind/issues/2996</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.6","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36188","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36188","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-36188 (Medium) detected in jackson-databind-2.6.6.jar - ## CVE-2020-36188 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.6.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to vulnerable library: bw-sample-for-amazon-sns/SNSTCI/SNSImpl/lib/jackson-databind-2.6.6.jar,bw-sample-for-amazon-sns/SNSTestSuite/SNSImpl/lib/jackson-databind-2.6.6.jar</p> <p> Dependency Hierarchy: - :x: **jackson-databind-2.6.6.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource. <p>Publish Date: 2021-01-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36188>CVE-2020-36188</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2996">https://github.com/FasterXML/jackson-databind/issues/2996</a></p> <p>Release Date: 2021-01-06</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.6","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.6.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.8"}],"vulnerabilityIdentifier":"CVE-2020-36188","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36188","cvss3Severity":"medium","cvss3Score":"4.2","cvss3Metrics":{"A":"Low","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library bw sample for amazon sns snstci snsimpl lib jackson databind jar bw sample for amazon sns snstestsuite snsimpl lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db jndiconnectionsource publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db jndiconnectionsource vulnerabilityurl
0
35,130
2,789,809,746
IssuesEvent
2015-05-08 21:37:59
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
opened
Allow user to change the size of a column in a table
Priority-Low Type-Enhancement
Original [issue 171](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=171) created by orwant on 2010-01-23T00:29:59.000Z: <b>What would you like to see us add to this API?</b> A lot of data tables have very long column header but very short column value, therefore it is very inconvenient for user to see the real data. It will be nice if user can drag and change the width of a column in a table. <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> DataTable <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
Allow user to change the size of a column in a table - Original [issue 171](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=171) created by orwant on 2010-01-23T00:29:59.000Z: <b>What would you like to see us add to this API?</b> A lot of data tables have very long column header but very short column value, therefore it is very inconvenient for user to see the real data. It will be nice if user can drag and change the width of a column in a table. <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> DataTable <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
non_process
allow user to change the size of a column in a table original created by orwant on what would you like to see us add to this api a lot of data tables have very long column header but very short column value therefore it is very inconvenient for user to see the real data it will be nice if user can drag and change the width of a column in a table what component is this issue related to piechart linechart datatable query etc datatable for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
0
382,316
26,492,504,117
IssuesEvent
2023-01-18 00:37:42
beercss/beercss
https://api.github.com/repos/beercss/beercss
closed
discussions
documentation
what about an official discord server so we can talk all things _beercss_, help each other, etc ? ;-)
1.0
discussions - what about an official discord server so we can talk all things _beercss_, help each other, etc ? ;-)
non_process
discussions what about an official discord server so we can talk all things beercss help each other etc
0
9,427
12,418,368,037
IssuesEvent
2020-05-22 23:59:39
nion-software/nionswift
https://api.github.com/repos/nion-software/nionswift
opened
Add ability to map 1d and 2d operations to collections/sequences
f - computations f - processing feature stage - planning type - enhancement
e.g. take FFT of every item in collection.
1.0
Add ability to map 1d and 2d operations to collections/sequences - e.g. take FFT of every item in collection.
process
add ability to map and operations to collections sequences e g take fft of every item in collection
1
6,772
9,912,973,026
IssuesEvent
2019-06-28 10:22:46
wso2/docs-ei
https://api.github.com/repos/wso2/docs-ei
closed
Adding initial infrastructure for GitHub docs
Priority/Highest Severity/Blocker ballerina micro-integrator stream-processor
This involves getting the set of configurations from the template repo and adding it into the docs-ei repo.
1.0
Adding initial infrastructure for GitHub docs - This involves getting the set of configurations from the template repo and adding it into the docs-ei repo.
process
adding initial infrastructure for github docs this involves getting the set of configurations from the template repo and adding it into the docs ei repo
1
4,640
2,610,135,931
IssuesEvent
2015-02-26 18:42:51
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
You can not let go of the Rope while pressing Up & Left at the same time
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Shoot a Rope 2. Press up & left at the same time 3. Try to release the Rope (it works when you press up & right) What is the expected output? What do you see instead? That you can let go of it of course :) What version of the product are you using? On what operating system? 0.9.13, Win7 ``` ----- Original issue reported on code.google.com by `joship...@gmail.com` on 16 Sep 2010 at 6:03 * Merged into: #58
1.0
You can not let go of the Rope while pressing Up & Left at the same time - ``` What steps will reproduce the problem? 1. Shoot a Rope 2. Press up & left at the same time 3. Try to release the Rope (it works when you press up & right) What is the expected output? What do you see instead? That you can let go of it of course :) What version of the product are you using? On what operating system? 0.9.13, Win7 ``` ----- Original issue reported on code.google.com by `joship...@gmail.com` on 16 Sep 2010 at 6:03 * Merged into: #58
non_process
you can not let go of the rope while pressing up left at the same time what steps will reproduce the problem shoot a rope press up left at the same time try to release the rope it works when you press up right what is the expected output what do you see instead that you can let go of it of course what version of the product are you using on what operating system original issue reported on code google com by joship gmail com on sep at merged into
0
21,613
30,016,340,848
IssuesEvent
2023-06-26 19:02:10
Home-modules/webapp-android
https://api.github.com/repos/Home-modules/webapp-android
closed
Remembering hub IP
enhancement In Process
Currently the app will ask every time about the hub's IP address and port. It has to remember the IP and launch the web app directly on subsequent starts.
1.0
Remembering hub IP - Currently the app will ask every time about the hub's IP address and port. It has to remember the IP and launch the web app directly on subsequent starts.
process
remembering hub ip currently the app will ask every time about the hub s ip address and port it has to remember the ip and launch the web app directly on subsequent starts
1
16,315
20,969,961,725
IssuesEvent
2022-03-28 10:24:02
huutho77/CNPMNC_ThayAi
https://api.github.com/repos/huutho77/CNPMNC_ThayAi
opened
Coding UI for Home page, Login page and Register page
dev/thnguyen processing
The User Interface is built on the design
1.0
Coding UI for Home page, Login page and Register page - The User Interface is built on the design
process
coding ui for home page login page and register page the user interface is built on the design
1
19,592
25,934,542,919
IssuesEvent
2022-12-16 13:01:07
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Release 5.4.0 - December 2022
P1 type: process release team-OSS
# Status of Bazel 5.4.0 - Expected release date: 2022-12-15 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/45) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.4, simply send a PR against the `release-5.4.0` branch. Task list: - [x] [Create draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit#heading=h.1ldj0zgvx6cd) - [x] Send for review the release announcement PR - [x] Push the release, notify package maintainers - [x] Update the documentation - [x] ~Push the blog post~ - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 5.4.0 - December 2022 - # Status of Bazel 5.4.0 - Expected release date: 2022-12-15 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/45) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 5.4, simply send a PR against the `release-5.4.0` branch. Task list: - [x] [Create draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit#heading=h.1ldj0zgvx6cd) - [x] Send for review the release announcement PR - [x] Push the release, notify package maintainers - [x] Update the documentation - [x] ~Push the blog post~ - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release december status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
1
105,581
13,196,955,998
IssuesEvent
2020-08-13 21:47:23
phetsims/natural-selection
https://api.github.com/repos/phetsims/natural-selection
closed
Are overlapping plots in the Population graph a significant usability problem?
design:interviews
In #101, we identified overlapping plots as a potential usability problem: > * If plots overlap, one or more sets of data will be hidden. For example, if there are equal numbers of bunnies with white fur and brown fur, only one data set will be visible. In https://github.com/phetsims/natural-selection/issues/101#issuecomment-648349693, @amanda-phet concluded: > I am not sure there is much we can do about overlapping graphs, and the current graph seems superior to the Java version because of the handy data probe, and the fact that you can choose which graphs to view at any given time. We should watch for this as a potential problem in interviews.
1.0
Are overlapping plots in the Population graph a significant usability problem? - In #101, we identified overlapping plots as a potential usability problem: > * If plots overlap, one or more sets of data will be hidden. For example, if there are equal numbers of bunnies with white fur and brown fur, only one data set will be visible. In https://github.com/phetsims/natural-selection/issues/101#issuecomment-648349693, @amanda-phet concluded: > I am not sure there is much we can do about overlapping graphs, and the current graph seems superior to the Java version because of the handy data probe, and the fact that you can choose which graphs to view at any given time. We should watch for this as a potential problem in interviews.
non_process
are overlapping plots in the population graph a significant usability problem in we identified overlapping plots as a potential usability problem if plots overlap one or more sets of data will be hidden for example if there are equal numbers of bunnies with white fur and brown fur only one data set will be visible in amanda phet concluded i am not sure there is much we can do about overlapping graphs and the current graph seems superior to the java version because of the handy data probe and the fact that you can choose which graphs to view at any given time we should watch for this as a potential problem in interviews
0
31,575
7,395,910,237
IssuesEvent
2018-03-18 04:53:34
bcgov/api-specs
https://api.github.com/repos/bcgov/api-specs
closed
Users prefer a single text box to enter occupant names or addresses
GEOCODER api
Currently geocoder/occupants/addresses assumes an occupant name and geocoder/addresses assumes an address. Is there a way of looking for both occupants and addresses at the same time? Maybe there could be a new maxOccupantResults where maxResults <= maxResults. For example, if maxOccupantResults=5 and maxResults=10 then 5 results come from occupants and 5 come from addresses. Or is this unholy combining of results best left to a widget?
1.0
Users prefer a single text box to enter occupant names or addresses - Currently geocoder/occupants/addresses assumes an occupant name and geocoder/addresses assumes an address. Is there a way of looking for both occupants and addresses at the same time? Maybe there could be a new maxOccupantResults where maxResults <= maxResults. For example, if maxOccupantResults=5 and maxResults=10 then 5 results come from occupants and 5 come from addresses. Or is this unholy combining of results best left to a widget?
non_process
users prefer a single text box to enter occupant names or addresses currently geocoder occupants addresses assumes an occupant name and geocoder addresses assumes an address is there a way of looking for both occupants and addresses at the same time maybe there could be a new maxoccupantresults where maxresults maxresults for example if maxoccupantresults and maxresults then results come from occupants and come from addresses or is this unholy combining of results best left to a widget
0
9,150
12,203,235,337
IssuesEvent
2020-04-30 10:15:10
MHRA/products
https://api.github.com/repos/MHRA/products
reopened
PARs form - Upload / amend
EPIC - PARs process HIGH PRIORITY :arrow_double_up: PARKED 🚗 STORY :book:
### User want As a Medical Writer I would like to upload and amend PARs to the Products website, in an easy format so that I can ensure the latest versions of the PARs are always available on the site. (Linked to #397 and #398) **Customer acceptance criteria** 1. Medical writers can upload a file to the website easily 2. Medical Writers can find a document easily to make amends to files 3. The PAR can be linked to one or multiple PL numbers and products 4. The PAR is related to its relevant SPC's and PIL's via the PL numbers 5. The data is stored to be able to be resurfaced onto the website at some point. 6. The relevant fields to add data that will match existing PILs and SPCs metadata: High Priority - Product name (Must have - several different product names) - Active substance (multiple answers) - TMS List (+60000 terms) - PL / PLPI / THR number (can we do a lookup?) **Technical acceptance criteria** **Data acceptance criteria** **Testing acceptance criteria** **Size** XL **Value** **Effort** ### Exit Criteria met - [x] Backlog - [x] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
PARs form - Upload / amend - ### User want As a Medical Writer I would like to upload and amend PARs to the Products website, in an easy format so that I can ensure the latest versions of the PARs are always available on the site. (Linked to #397 and #398) **Customer acceptance criteria** 1. Medical writers can upload a file to the website easily 2. Medical Writers can find a document easily to make amends to files 3. The PAR can be linked to one or multiple PL numbers and products 4. The PAR is related to its relevant SPC's and PIL's via the PL numbers 5. The data is stored to be able to be resurfaced onto the website at some point. 6. The relevant fields to add data that will match existing PILs and SPCs metadata: High Priority - Product name (Must have - several different product names) - Active substance (multiple answers) - TMS List (+60000 terms) - PL / PLPI / THR number (can we do a lookup?) **Technical acceptance criteria** **Data acceptance criteria** **Testing acceptance criteria** **Size** XL **Value** **Effort** ### Exit Criteria met - [x] Backlog - [x] Discovery - [ ] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
pars form upload amend user want as a medical writer i would like to upload and amend pars to the products website in an easy format so that i can ensure the latest versions of the pars are always available on the site linked to and customer acceptance criteria medical writers can upload a file to the website easily medical writers can find a document easily to make amends to files the par can be linked to one or multiple pl numbers and products the par is related to its relevant spc s and pil s via the pl numbers the data is stored to be able to be resurfaced onto the website at some point the relevant fields to add data that will match existing pils and spcs metadata high priority product name must have several different product names active substance multiple answers tms list terms pl plpi thr number can we do a lookup technical acceptance criteria data acceptance criteria testing acceptance criteria size xl value effort exit criteria met backlog discovery duxd development quality assurance release and validate
1
299,689
25,918,381,614
IssuesEvent
2022-12-15 19:26:45
w3c/mathml-core
https://api.github.com/repos/w3c/mathml-core
opened
Remove mathvariant from core (except 'normal')
need polyfill need specification update needs-tests
I'm starting a new issue based on a side topic mentioned in #181: > Regarding mathvariant, this is controversial because really only the automatic italicization of the single-char <mi> can be characterized as "stylistic". We failed to convince the CSS WG to extend text-transform for other attribute values, so we didn't implement them in chromium and a large amount of tests are failing. Googlers even suggested to remove this mathvariant thing to resolve that problem. My preference would be to just drop mathvariant from MathML Core (only keeping mathvariant="normal" on mi) and tell people to use proper Unicode code points instead. The Math WG (full) discussed this today and the reluctant resolution is that now is a good time has come to get rid of `mathvariant` from core, especially since it sounds like it can't be supported in core. `mathvariant="normal"` does need to be kept. It is a little strange to have an attribute with that name, but it, like many things in other languages, is has that name because of history.
1.0
Remove mathvariant from core (except 'normal') - I'm starting a new issue based on a side topic mentioned in #181: > Regarding mathvariant, this is controversial because really only the automatic italicization of the single-char <mi> can be characterized as "stylistic". We failed to convince the CSS WG to extend text-transform for other attribute values, so we didn't implement them in chromium and a large amount of tests are failing. Googlers even suggested to remove this mathvariant thing to resolve that problem. My preference would be to just drop mathvariant from MathML Core (only keeping mathvariant="normal" on mi) and tell people to use proper Unicode code points instead. The Math WG (full) discussed this today and the reluctant resolution is that now is a good time has come to get rid of `mathvariant` from core, especially since it sounds like it can't be supported in core. `mathvariant="normal"` does need to be kept. It is a little strange to have an attribute with that name, but it, like many things in other languages, is has that name because of history.
non_process
remove mathvariant from core except normal i m starting a new issue based on a side topic mentioned in regarding mathvariant this is controversial because really only the automatic italicization of the single char can be characterized as stylistic we failed to convince the css wg to extend text transform for other attribute values so we didn t implement them in chromium and a large amount of tests are failing googlers even suggested to remove this mathvariant thing to resolve that problem my preference would be to just drop mathvariant from mathml core only keeping mathvariant normal on mi and tell people to use proper unicode code points instead the math wg full discussed this today and the reluctant resolution is that now is a good time has come to get rid of mathvariant from core especially since it sounds like it can t be supported in core mathvariant normal does need to be kept it is a little strange to have an attribute with that name but it like many things in other languages is has that name because of history
0
32,084
6,711,928,911
IssuesEvent
2017-10-13 07:14:33
RIOT-OS/RIOT
https://api.github.com/repos/RIOT-OS/RIOT
closed
core: thread_flags: THREAD_FLAG_MUTEX_UNLOCKED, THREAD_FLAG_TIMEOUT are not used
core quality defect timer
Follow up to #7533. I found that there are two other predefined constants which are not used in the tree. - `THREAD_FLAG_MUTEX_UNLOCKED` - `THREAD_FLAG_TIMEOUT` I think the constants may be intended for `xtimer_msg_timeout`, which mentions core_thread_flags in the documentation, but does not in fact use thread flags for the implementation.
1.0
core: thread_flags: THREAD_FLAG_MUTEX_UNLOCKED, THREAD_FLAG_TIMEOUT are not used - Follow up to #7533. I found that there are two other predefined constants which are not used in the tree. - `THREAD_FLAG_MUTEX_UNLOCKED` - `THREAD_FLAG_TIMEOUT` I think the constants may be intended for `xtimer_msg_timeout`, which mentions core_thread_flags in the documentation, but does not in fact use thread flags for the implementation.
non_process
core thread flags thread flag mutex unlocked thread flag timeout are not used follow up to i found that there are two other predefined constants which are not used in the tree thread flag mutex unlocked thread flag timeout i think the constants may be intended for xtimer msg timeout which mentions core thread flags in the documentation but does not in fact use thread flags for the implementation
0
5,015
7,844,925,286
IssuesEvent
2018-06-19 11:16:45
lbassin/okty
https://api.github.com/repos/lbassin/okty
closed
Add projects templates
enhancement in process
**Describe the solution you'd like** Add some templates to avoid creating standards containers
1.0
Add projects templates - **Describe the solution you'd like** Add some templates to avoid creating standards containers
process
add projects templates describe the solution you d like add some templates to avoid creating standards containers
1
410,436
11,991,624,244
IssuesEvent
2020-04-08 08:41:46
muccg/rdrf
https://api.github.com/repos/muccg/rdrf
closed
Another issue with CICLungProms login
area/cic area/lung cancer area/proms priority/p0
``` Request Method: | GET -- | -- https://rdrf.ccgapps.com.au/ciclungproms/admin/ 2.1.15 NoReverseMatch Reverse for 'custom_action' not found. 'custom_action' is not a valid view function or pattern name. /env/lib/python3.7/site-packages/django/urls/resolvers.py in _reverse_with_prefix, line 622 /env/bin/uwsgi 3.7.7 ['.', '/app', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7', '/usr/local/lib/python3.7/lib-dynload', '/env/lib/python3.7/site-packages'] Tue, 31 Mar 2020 12:12:45 +0800 ```
1.0
Another issue with CICLungProms login - ``` Request Method: | GET -- | -- https://rdrf.ccgapps.com.au/ciclungproms/admin/ 2.1.15 NoReverseMatch Reverse for 'custom_action' not found. 'custom_action' is not a valid view function or pattern name. /env/lib/python3.7/site-packages/django/urls/resolvers.py in _reverse_with_prefix, line 622 /env/bin/uwsgi 3.7.7 ['.', '/app', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7', '/usr/local/lib/python3.7/lib-dynload', '/env/lib/python3.7/site-packages'] Tue, 31 Mar 2020 12:12:45 +0800 ```
non_process
another issue with ciclungproms login request method get noreversematch reverse for custom action not found custom action is not a valid view function or pattern name env lib site packages django urls resolvers py in reverse with prefix line env bin uwsgi tue mar
0
85,968
10,699,930,094
IssuesEvent
2019-10-23 22:13:02
lyft/envoy-mobile
https://api.github.com/repos/lyft/envoy-mobile
closed
roadmap: v0.2 release deliverable breakdown
design proposal no stalebot
**The deliverables for v0.2 and their respective owners are available in [this Google Doc](https://docs.google.com/document/d/1eLbJEXog2Rn7wTBEbDIjpDV7dHecV_DV_dw8P5ZgbL4/edit).** This doc represents a subset of the functionality detailed in our [ongoing roadmap](https://docs.google.com/document/d/1N0ZFJktK8m01uqqgfDRVB9mpC1iEn9dqkQaa_yMn_kE/edit) (issue #4).
1.0
roadmap: v0.2 release deliverable breakdown - **The deliverables for v0.2 and their respective owners are available in [this Google Doc](https://docs.google.com/document/d/1eLbJEXog2Rn7wTBEbDIjpDV7dHecV_DV_dw8P5ZgbL4/edit).** This doc represents a subset of the functionality detailed in our [ongoing roadmap](https://docs.google.com/document/d/1N0ZFJktK8m01uqqgfDRVB9mpC1iEn9dqkQaa_yMn_kE/edit) (issue #4).
non_process
roadmap release deliverable breakdown the deliverables for and their respective owners are available in this doc represents a subset of the functionality detailed in our issue
0
7,190
10,330,382,316
IssuesEvent
2019-09-02 14:32:33
ESMValGroup/ESMValCore
https://api.github.com/repos/ESMValGroup/ESMValCore
closed
Extract odd-shaped (irregular) regions
preprocessor
I would like to extract regions in the preprocessor of the ESMValTool which are not rectangular but as they are defined in the IPCC AR5 or AR6. For example to create the panels for a figure like this: ![image](https://user-images.githubusercontent.com/25476740/53959556-38bad400-40e4-11e9-9ccd-501bff7c4295.png) Is there a possibility to include this in the preprocessor functions? At the moment I do not have the specific region definitions but I could share them when I received them.
1.0
Extract odd-shaped (irregular) regions - I would like to extract regions in the preprocessor of the ESMValTool which are not rectangular but as they are defined in the IPCC AR5 or AR6. For example to create the panels for a figure like this: ![image](https://user-images.githubusercontent.com/25476740/53959556-38bad400-40e4-11e9-9ccd-501bff7c4295.png) Is there a possibility to include this in the preprocessor functions? At the moment I do not have the specific region definitions but I could share them when I received them.
process
extract odd shaped irregular regions i would like to extract regions in the preprocessor of the esmvaltool which are not rectangular but as they are defined in the ipcc or for example to create the panels for a figure like this is there a possibility to include this in the preprocessor functions at the moment i do not have the specific region definitions but i could share them when i received them
1
134,036
10,879,033,461
IssuesEvent
2019-11-16 22:00:34
pandas-dev/pandas
https://api.github.com/repos/pandas-dev/pandas
closed
BUG: inserting a Categorical with the wrong length into a DataFrame is allowed
Needs Tests good first issue
This leaves the DataFrame in a very weird/buggy state: ``` In [33]: cat = pd.Categorical.from_codes([0, 1, 1, 0, 1, 2], ['a', 'b', 'c']) In [34]: df = pd.DataFrame() In [35]: df['bar'] = range(10) In [36]: df['foo'] = cat In [37]: df Out[37]: bar foo 0 0 a 1 1 b 2 2 b 3 3 a 4 4 b 5 5 c 6 6 7 7 8 8 9 9 In [38]: df.foo.shape Out[38]: (6,) ``` I was expecting something like: ``` In [49]: df['foo'] = np.array(cat) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-49-2c5dea325c23> in <module>() ----> 1 df['foo'] = np.array(cat) /Users/shoyer/dev/pandas/pandas/core/frame.py in __setitem__(self, key, value) 2108 else: 2109 # set column -> 2110 self._set_item(key, value) 2111 2112 def _setitem_slice(self, key, value): /Users/shoyer/dev/pandas/pandas/core/frame.py in _set_item(self, key, value) 2185 2186 self._ensure_valid_index(value) -> 2187 value = self._sanitize_column(key, value) 2188 NDFrame._set_item(self, key, value) 2189 /Users/shoyer/dev/pandas/pandas/core/frame.py in _sanitize_column(self, key, value) 2258 elif (isinstance(value, Index) or is_sequence(value)): 2259 from pandas.core.series import _sanitize_index -> 2260 value = _sanitize_index(value, self.index, copy=False) 2261 if not isinstance(value, (np.ndarray, Index)): 2262 if isinstance(value, list) and len(value) > 0: /Users/shoyer/dev/pandas/pandas/core/series.py in _sanitize_index(data, index, copy) 2562 2563 if len(data) != len(index): -> 2564 raise ValueError('Length of values does not match length of ' 2565 'index') 2566 ValueError: Length of values does not match length of index ``` Tested on master.
1.0
BUG: inserting a Categorical with the wrong length into a DataFrame is allowed - This leaves the DataFrame in a very weird/buggy state: ``` In [33]: cat = pd.Categorical.from_codes([0, 1, 1, 0, 1, 2], ['a', 'b', 'c']) In [34]: df = pd.DataFrame() In [35]: df['bar'] = range(10) In [36]: df['foo'] = cat In [37]: df Out[37]: bar foo 0 0 a 1 1 b 2 2 b 3 3 a 4 4 b 5 5 c 6 6 7 7 8 8 9 9 In [38]: df.foo.shape Out[38]: (6,) ``` I was expecting something like: ``` In [49]: df['foo'] = np.array(cat) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-49-2c5dea325c23> in <module>() ----> 1 df['foo'] = np.array(cat) /Users/shoyer/dev/pandas/pandas/core/frame.py in __setitem__(self, key, value) 2108 else: 2109 # set column -> 2110 self._set_item(key, value) 2111 2112 def _setitem_slice(self, key, value): /Users/shoyer/dev/pandas/pandas/core/frame.py in _set_item(self, key, value) 2185 2186 self._ensure_valid_index(value) -> 2187 value = self._sanitize_column(key, value) 2188 NDFrame._set_item(self, key, value) 2189 /Users/shoyer/dev/pandas/pandas/core/frame.py in _sanitize_column(self, key, value) 2258 elif (isinstance(value, Index) or is_sequence(value)): 2259 from pandas.core.series import _sanitize_index -> 2260 value = _sanitize_index(value, self.index, copy=False) 2261 if not isinstance(value, (np.ndarray, Index)): 2262 if isinstance(value, list) and len(value) > 0: /Users/shoyer/dev/pandas/pandas/core/series.py in _sanitize_index(data, index, copy) 2562 2563 if len(data) != len(index): -> 2564 raise ValueError('Length of values does not match length of ' 2565 'index') 2566 ValueError: Length of values does not match length of index ``` Tested on master.
non_process
bug inserting a categorical with the wrong length into a dataframe is allowed this leaves the dataframe in a very weird buggy state in cat pd categorical from codes in df pd dataframe in df range in df cat in df out bar foo a b b a b c in df foo shape out i was expecting something like in df np array cat valueerror traceback most recent call last in df np array cat users shoyer dev pandas pandas core frame py in setitem self key value else set column self set item key value def setitem slice self key value users shoyer dev pandas pandas core frame py in set item self key value self ensure valid index value value self sanitize column key value ndframe set item self key value users shoyer dev pandas pandas core frame py in sanitize column self key value elif isinstance value index or is sequence value from pandas core series import sanitize index value sanitize index value self index copy false if not isinstance value np ndarray index if isinstance value list and len value users shoyer dev pandas pandas core series py in sanitize index data index copy if len data len index raise valueerror length of values does not match length of index valueerror length of values does not match length of index tested on master
0
11,207
3,193,179,500
IssuesEvent
2015-09-30 02:29:05
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
TestProcWithExceededActionQueueDepth is flaky
area/platform/mesos kind/flake priority/P0 team/test-infra
@jdef @karlkfi @davidopp Can we get this fixed ASAP? Thanks! ``` proc_test.go:288: starting test case nested at 2015-09-25 00:21:13.875176228 +0000 UTC proc_test.go:304: delegate chain invoked for nested at 2015-09-25 00:21:13.930546473 +0000 UTC proc_test.go:323: executing deferred action: nested at 2015-09-25 00:21:13.933936289 +0000 UTC proc_test.go:335: runDelegationTest received executed signal at 2015-09-25 00:21:13.949041708 +0000 UTC proc_test.go:290: runDelegationTest finished at 2015-09-25 00:21:14.035916116 +0000 UTC proc_test.go:390: unexpected error: cannot execute action because process has terminated ``` https://app.shippable.com/builds/56048e527291610b002dea79
1.0
TestProcWithExceededActionQueueDepth is flaky - @jdef @karlkfi @davidopp Can we get this fixed ASAP? Thanks! ``` proc_test.go:288: starting test case nested at 2015-09-25 00:21:13.875176228 +0000 UTC proc_test.go:304: delegate chain invoked for nested at 2015-09-25 00:21:13.930546473 +0000 UTC proc_test.go:323: executing deferred action: nested at 2015-09-25 00:21:13.933936289 +0000 UTC proc_test.go:335: runDelegationTest received executed signal at 2015-09-25 00:21:13.949041708 +0000 UTC proc_test.go:290: runDelegationTest finished at 2015-09-25 00:21:14.035916116 +0000 UTC proc_test.go:390: unexpected error: cannot execute action because process has terminated ``` https://app.shippable.com/builds/56048e527291610b002dea79
non_process
testprocwithexceededactionqueuedepth is flaky jdef karlkfi davidopp can we get this fixed asap thanks proc test go starting test case nested at utc proc test go delegate chain invoked for nested at utc proc test go executing deferred action nested at utc proc test go rundelegationtest received executed signal at utc proc test go rundelegationtest finished at utc proc test go unexpected error cannot execute action because process has terminated
0
107,044
9,200,738,996
IssuesEvent
2019-03-07 17:47:28
kcigeospatial/Fred_Co_Land-Management
https://api.github.com/repos/kcigeospatial/Fred_Co_Land-Management
closed
Building - Electrical Permit - Building Mounted Sign
Ready For Retest
In the Inspection milestone, the inspection that should populate is Final. AP# 236109 Logged by K. Vanaman on 2/7 ![electrical_permit-signs](https://user-images.githubusercontent.com/4129185/52869247-367fde00-3113-11e9-8df1-1457ff05c6ef.png)
1.0
Building - Electrical Permit - Building Mounted Sign - In the Inspection milestone, the inspection that should populate is Final. AP# 236109 Logged by K. Vanaman on 2/7 ![electrical_permit-signs](https://user-images.githubusercontent.com/4129185/52869247-367fde00-3113-11e9-8df1-1457ff05c6ef.png)
non_process
building electrical permit building mounted sign in the inspection milestone the inspection that should populate is final ap logged by k vanaman on
0
100,802
11,205,063,221
IssuesEvent
2020-01-05 11:34:24
mribrgr/StuRa-Mitgliederdatenbank
https://api.github.com/repos/mribrgr/StuRa-Mitgliederdatenbank
opened
Soll die Änderungs-Historie filterbar sein?
Stakeholder documentation
Soll die Historie über alle Änderungen, die an der Datenbank vorgenommen wurden, filterbar sein (z.B. nach User, Zeitpunkt, Art der Änderung, ...)?
1.0
Soll die Änderungs-Historie filterbar sein? - Soll die Historie über alle Änderungen, die an der Datenbank vorgenommen wurden, filterbar sein (z.B. nach User, Zeitpunkt, Art der Änderung, ...)?
non_process
soll die änderungs historie filterbar sein soll die historie über alle änderungen die an der datenbank vorgenommen wurden filterbar sein z b nach user zeitpunkt art der änderung
0
76,657
14,660,146,367
IssuesEvent
2020-12-28 22:36:59
opentibiabr/otservbr-global
https://api.github.com/repos/opentibiabr/otservbr-global
closed
Guildwar Emblems and Npc's Speechbubbble
Type: Bug Where: Code
1) the guildwar emblems only appear after re-login, also when the guildwar ends the emblems only disappear after re-login **Expected behavior** emblems must appear and disappear without the need of re-login 2) the speechbubble doesn't appear when the server starts, but it does when you use the /n command, and after you leave the viewing range and return the speechbubble disappears **Screenshots** If applicable, add screenshots to help explain your problem. ![image](https://user-images.githubusercontent.com/65622633/85245601-1b636380-b416-11ea-819c-338ea2dfb403.png) ![image](https://user-images.githubusercontent.com/65622633/85245595-16061900-b416-11ea-941d-068ce3c3cc11.png)
1.0
Guildwar Emblems and Npc's Speechbubbble - 1) the guildwar emblems only appear after re-login, also when the guildwar ends the emblems only disappear after re-login **Expected behavior** emblems must appear and disappear without the need of re-login 2) the speechbubble doesn't appear when the server starts, but it does when you use the /n command, and after you leave the viewing range and return the speechbubble disappears **Screenshots** If applicable, add screenshots to help explain your problem. ![image](https://user-images.githubusercontent.com/65622633/85245601-1b636380-b416-11ea-819c-338ea2dfb403.png) ![image](https://user-images.githubusercontent.com/65622633/85245595-16061900-b416-11ea-941d-068ce3c3cc11.png)
non_process
guildwar emblems and npc s speechbubbble the guildwar emblems only appear after re login also when the guildwar ends the emblems only disappear after re login expected behavior emblems must appear and disappear without the need of re login the speechbubble doesn t appear when the server starts but it does when you use the n command and after you leave the viewing range and return the speechbubble disappears screenshots if applicable add screenshots to help explain your problem
0
21,457
29,497,046,132
IssuesEvent
2023-06-02 17:56:10
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Release 6.2.1 - June 2023
P1 type: process release team-OSS
# Status of Bazel 6.2.1 - Expected first release candidate date: 2023-05-26 - Expected release date: 2023-06-02 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/54) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.2.1, simply send a PR against the `release-6.2.1` branch. **Task list:** - [x] Create release candidate - [x] Check downstream projects - [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) - [ ] Push the release and notify package maintainers - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 6.2.1 - June 2023 - # Status of Bazel 6.2.1 - Expected first release candidate date: 2023-05-26 - Expected release date: 2023-06-02 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/54) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.2.1, simply send a PR against the `release-6.2.1` branch. **Task list:** - [x] Create release candidate - [x] Check downstream projects - [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) - [ ] Push the release and notify package maintainers - [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release june status of bazel expected first release candidate date expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create release candidate check downstream projects create push the release and notify package maintainers update the
1
165,882
12,882,288,581
IssuesEvent
2020-07-12 16:12:43
osquery/osquery
https://api.github.com/repos/osquery/osquery
closed
Fix FSEventsTests.test_fsevents_fire_event which crashes on macOS
bug flaky test macOS test
FSEventsTests.test_fsevents_fire_event sometimes crashes on the macOS agent of the CI, in a Debug build: ``` 70: Test command: /Users/runner/runners/2.160.0/work/1/b/build/osquery/events/tests/osquery_events_tests_fseventstests-test 70: Test timeout computed to be: 10000000 70: Running main() from /Users/runner/runners/2.160.0/work/1/b/build/libs/fb/googletest/googletest-release-1.8.1/googletest/src/gtest_main.cc 70: [==========] Running 8 tests from 1 test case. 70: [----------] Global test environment set-up. 70: [----------] 8 tests from FSEventsTests 70: [ RUN ] FSEventsTests.test_register_event_pub 70: [ OK ] FSEventsTests.test_register_event_pub (4 ms) 70: [ RUN ] FSEventsTests.test_fsevents_add_subscription_missing_path 70: [ OK ] FSEventsTests.test_fsevents_add_subscription_missing_path (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_add_subscription_success 70: [ OK ] FSEventsTests.test_fsevents_add_subscription_success (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_match_subscription 70: [ OK ] FSEventsTests.test_fsevents_match_subscription (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_run 70: [ OK ] FSEventsTests.test_fsevents_run (230 ms) 70: [ RUN ] FSEventsTests.test_fsevents_fire_event 70: WARNING: Logging before InitGoogleLogging() is written to STDERR 70: E1109 00:51:32.897339 267681792 events.cpp:991] Requested unknown/failed event publisher: fsevents 70/88 Test #70: osquery_events_tests_fseventstests-test ...............................***Exception: SegFault 0.27 sec ```
2.0
Fix FSEventsTests.test_fsevents_fire_event which crashes on macOS - FSEventsTests.test_fsevents_fire_event sometimes crashes on the macOS agent of the CI, in a Debug build: ``` 70: Test command: /Users/runner/runners/2.160.0/work/1/b/build/osquery/events/tests/osquery_events_tests_fseventstests-test 70: Test timeout computed to be: 10000000 70: Running main() from /Users/runner/runners/2.160.0/work/1/b/build/libs/fb/googletest/googletest-release-1.8.1/googletest/src/gtest_main.cc 70: [==========] Running 8 tests from 1 test case. 70: [----------] Global test environment set-up. 70: [----------] 8 tests from FSEventsTests 70: [ RUN ] FSEventsTests.test_register_event_pub 70: [ OK ] FSEventsTests.test_register_event_pub (4 ms) 70: [ RUN ] FSEventsTests.test_fsevents_add_subscription_missing_path 70: [ OK ] FSEventsTests.test_fsevents_add_subscription_missing_path (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_add_subscription_success 70: [ OK ] FSEventsTests.test_fsevents_add_subscription_success (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_match_subscription 70: [ OK ] FSEventsTests.test_fsevents_match_subscription (1 ms) 70: [ RUN ] FSEventsTests.test_fsevents_run 70: [ OK ] FSEventsTests.test_fsevents_run (230 ms) 70: [ RUN ] FSEventsTests.test_fsevents_fire_event 70: WARNING: Logging before InitGoogleLogging() is written to STDERR 70: E1109 00:51:32.897339 267681792 events.cpp:991] Requested unknown/failed event publisher: fsevents 70/88 Test #70: osquery_events_tests_fseventstests-test ...............................***Exception: SegFault 0.27 sec ```
non_process
fix fseventstests test fsevents fire event which crashes on macos fseventstests test fsevents fire event sometimes crashes on the macos agent of the ci in a debug build test command users runner runners work b build osquery events tests osquery events tests fseventstests test test timeout computed to be running main from users runner runners work b build libs fb googletest googletest release googletest src gtest main cc running tests from test case global test environment set up tests from fseventstests fseventstests test register event pub fseventstests test register event pub ms fseventstests test fsevents add subscription missing path fseventstests test fsevents add subscription missing path ms fseventstests test fsevents add subscription success fseventstests test fsevents add subscription success ms fseventstests test fsevents match subscription fseventstests test fsevents match subscription ms fseventstests test fsevents run fseventstests test fsevents run ms fseventstests test fsevents fire event warning logging before initgooglelogging is written to stderr events cpp requested unknown failed event publisher fsevents test osquery events tests fseventstests test exception segfault sec
0
172,980
27,365,299,277
IssuesEvent
2023-02-27 18:40:50
webstudio-is/webstudio-builder
https://api.github.com/repos/webstudio-is/webstudio-builder
closed
new CSS Value List Item component
type:enhancement complexity:medium area:design system prio:1
This component is needed for the Backgrounds section #877 and other future sections. The thumbnail is a nested **Thumbnail** component. It's nothing special, I just organized it this way so the thumbnail could be changed on the list item without breaking the instance in figma. Let me know if I need to make a separate issue for the thumbnail. https://www.figma.com/file/sfCE7iLS0k25qCxiifQNLE/%F0%9F%93%9A-Webstudio-Library?node-id=0%3A1&t=GFXJS5fMcAiXXhiQ-1 <img width="346" alt="Screenshot 2023-02-23 at 4 19 05 PM" src="https://user-images.githubusercontent.com/86495896/221061391-540f8336-2ea9-4735-9592-3a29a5c915c4.png">
1.0
new CSS Value List Item component - This component is needed for the Backgrounds section #877 and other future sections. The thumbnail is a nested **Thumbnail** component. It's nothing special, I just organized it this way so the thumbnail could be changed on the list item without breaking the instance in figma. Let me know if I need to make a separate issue for the thumbnail. https://www.figma.com/file/sfCE7iLS0k25qCxiifQNLE/%F0%9F%93%9A-Webstudio-Library?node-id=0%3A1&t=GFXJS5fMcAiXXhiQ-1 <img width="346" alt="Screenshot 2023-02-23 at 4 19 05 PM" src="https://user-images.githubusercontent.com/86495896/221061391-540f8336-2ea9-4735-9592-3a29a5c915c4.png">
non_process
new css value list item component this component is needed for the backgrounds section and other future sections the thumbnail is a nested thumbnail component it s nothing special i just organized it this way so the thumbnail could be changed on the list item without breaking the instance in figma let me know if i need to make a separate issue for the thumbnail img width alt screenshot at pm src
0
10,780
13,608,969,813
IssuesEvent
2020-09-23 03:53:43
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
Dependency Dashboard
type: process
This issue contains a list of Renovate updates and their statuses. ## Repository problems These problems occurred while renovating this repository. - WARN: Failed to read setup.py file ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/apache-airflow-1.x -->chore(deps): update dependency apache-airflow to v1.10.12 - [ ] <!-- rebase-branch=renovate/boto3-1.x -->chore(deps): update dependency boto3 to v1.15.2 - [ ] <!-- rebase-branch=renovate/cryptography-3.x -->chore(deps): update dependency cryptography to v3.1.1 - [ ] <!-- rebase-branch=renovate/flask-1.x -->chore(deps): update dependency flask to v1.1.2 (`Flask`, `flask`) - [ ] <!-- rebase-branch=renovate/google-api-python-client-1.x -->chore(deps): update dependency google-api-python-client to v1.12.1 - [ ] <!-- rebase-branch=renovate/google-cloud-storage-1.x -->chore(deps): update dependency google-cloud-storage to v1.31.1 - [ ] <!-- rebase-branch=renovate/pytest-6.x -->chore(deps): update dependency pytest to v6.0.2 - [ ] <!-- rebase-branch=renovate/twilio-6.x -->chore(deps): update dependency twilio to v6.45.3 - [ ] <!-- rebase-branch=renovate/docker-ubuntu-18.x -->chore(deps): update ubuntu docker tag to v18.10 - [ ] <!-- rebase-branch=renovate/google-cloud-automl-2.x -->chore(deps): update dependency google-cloud-automl to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-iot-2.x -->chore(deps): update dependency google-cloud-iot to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-pubsub-2.x -->chore(deps): update dependency google-cloud-pubsub to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-trace-1.x -->chore(deps): update dependency google-cloud-trace to v1 - [ ] <!-- rebase-branch=renovate/google-cloud-translate-3.x -->chore(deps): update dependency google-cloud-translate to v3 - [ ] <!-- rebase-branch=renovate/mock-4.x -->chore(deps): update dependency mock to v4 - [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once** --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
1.0
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses. ## Repository problems These problems occurred while renovating this repository. - WARN: Failed to read setup.py file ## Open These updates have all been created already. Click a checkbox below to force a retry/rebase of any. - [ ] <!-- rebase-branch=renovate/apache-airflow-1.x -->chore(deps): update dependency apache-airflow to v1.10.12 - [ ] <!-- rebase-branch=renovate/boto3-1.x -->chore(deps): update dependency boto3 to v1.15.2 - [ ] <!-- rebase-branch=renovate/cryptography-3.x -->chore(deps): update dependency cryptography to v3.1.1 - [ ] <!-- rebase-branch=renovate/flask-1.x -->chore(deps): update dependency flask to v1.1.2 (`Flask`, `flask`) - [ ] <!-- rebase-branch=renovate/google-api-python-client-1.x -->chore(deps): update dependency google-api-python-client to v1.12.1 - [ ] <!-- rebase-branch=renovate/google-cloud-storage-1.x -->chore(deps): update dependency google-cloud-storage to v1.31.1 - [ ] <!-- rebase-branch=renovate/pytest-6.x -->chore(deps): update dependency pytest to v6.0.2 - [ ] <!-- rebase-branch=renovate/twilio-6.x -->chore(deps): update dependency twilio to v6.45.3 - [ ] <!-- rebase-branch=renovate/docker-ubuntu-18.x -->chore(deps): update ubuntu docker tag to v18.10 - [ ] <!-- rebase-branch=renovate/google-cloud-automl-2.x -->chore(deps): update dependency google-cloud-automl to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-iot-2.x -->chore(deps): update dependency google-cloud-iot to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-pubsub-2.x -->chore(deps): update dependency google-cloud-pubsub to v2 - [ ] <!-- rebase-branch=renovate/google-cloud-trace-1.x -->chore(deps): update dependency google-cloud-trace to v1 - [ ] <!-- rebase-branch=renovate/google-cloud-translate-3.x -->chore(deps): update dependency google-cloud-translate to v3 - [ ] <!-- rebase-branch=renovate/mock-4.x -->chore(deps): update dependency mock to v4 - [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once** --- - [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
process
dependency dashboard this issue contains a list of renovate updates and their statuses repository problems these problems occurred while renovating this repository warn failed to read setup py file open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency apache airflow to chore deps update dependency to chore deps update dependency cryptography to chore deps update dependency flask to flask flask chore deps update dependency google api python client to chore deps update dependency google cloud storage to chore deps update dependency pytest to chore deps update dependency twilio to chore deps update ubuntu docker tag to chore deps update dependency google cloud automl to chore deps update dependency google cloud iot to chore deps update dependency google cloud pubsub to chore deps update dependency google cloud trace to chore deps update dependency google cloud translate to chore deps update dependency mock to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository
1
6,352
9,412,095,117
IssuesEvent
2019-04-10 02:28:35
plazi/arcadia-project
https://api.github.com/repos/plazi/arcadia-project
closed
problem with running QCTool: interefernence with batch
Article processing GoldenGate
@gsautter after installing the QCTool, there is an interference with the batch tool. That means I can't run the batch too anymore Microsoft Windows [Version 10.0.17134.648] (c) 2018 Microsoft Corporation. All rights reserved. D:\GoldenGateImagine20170823>java -jar -Xmx10240m GgImagineBatch.jar "E:\diglib\europeanJournalOfTaxonomy\temp17" CACHE=./BatchCache FM=U GoldenGATE Imagine Batch can take the following parameters: PATH: the folder to run GoldenGATE Imagine Batch in (defaults to the installation folder) CONF: the (path and) name of the configuration file to run GoldenGATE Imagine Batch with (defaults to 'GgImagineBatch.cnfg' in the folder GoldenGATE Imagine Batch is running in) CACHE: the root folder for all data caching folders (defaults to the path folder, useful for directing caching to a RAM disc, etc.) DATA: the PDF files to process: - set to PDF file path and name to process that file - set to folder path and name to process all PDF files in that folder - set to TXT file to process all PDF files listed in that file DT: the type of the PDF files to process (defaults to 'G' for 'generic'): - set to 'D' or 'BD' to indicate born-digital PDF files - set to 'S' to indicate scanned PDF files - set to 'G' or omit to indicate generic PDF files (expects both born-digital and scanned, determining type on a per-file basis) FM: the way of handling embedded fonts (relevant only for 'DT=D' and 'DT=G'): - set to D to completely decode embedded fonts - set to V to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping, and verify existing Unicode mappings - set to U to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping (the default) - set to R to only render embedded fonts, but do not decode glyphs - set to Q for quick mode, using Unicode mapping only CS: the char set for decoding embedded fonts (relevant only for 'FM=D' and 'FM=U'): - set to U to use all of Unicode - set to S to use Latin characters and scientific symbols only (the default) - set to M to use Latin characters and mathematical symbols only - set to F to use Full Latin and derived characters only - set to L to use Extended Latin characters only - set to B to use Basic Latin characters only - set to C for custom, using 'CP' parameter to specify path (file or URL) to load from, or name of a named charset to load from a provider CP: the file or URL to load the charset for embedded font decoding from (relevant only for 'CS=C', and required then; implies 'CS=C' if 'CS' parameter omitted); can also be the name of a named charset to resolve via some provider (prefix with '@' to indicate such a name) OUT: the folder to store the produced IMF files in (defaults to the folder each individual source PDF file was loaded from) OT: the way of storing the produced IMF files (defaults to 'F' for 'file'): - set to 'F' or omit to indicate (zipped) single file storage - set to 'D' to indicate indicate (non-zipped) folder storage LOG: the name for the log files to write respective information to (file names are suffixed with '.out.log' and '.err.log', set to 'IDE' or 'NO' to log directly to the console, or to DOC to create one log file per document, located next to the IMF) ST: no value, just add this token to the command to make the batch run on a single core (e.g. if resources required for other simultaneous tasks) VC: no value, just add this token to the command to make the batch produce verbose console output HELP: print this help text The file configuration file ('GgImagineBatch.cnfg' by default) specifies how to process PDF documents after decoding, and can also provide environmental and PDF decoding parameters: - imageMarkupTools: a space separated list of the Image Markup Tools to run - documentExporters: a space separated list of the Document Exporters to run after processing is finished (defaults to all available) - configName: the name of the GoldenGATE Imagine configuration to load the Image Markup Tools and Document Exporters from - cacheRootFolder: configurable default for 'CACHE' parameter - fonts.decoding.mode: configurable default for 'FM' parameter - fonts.decoding.charset: configurable default for 'CS' parameter - fonts.decoding.charsetPath: configurable default for 'FP' parameter D:\GoldenGateImagine20170823>java -jar -Xmx10240m GgImagineBatch.jar "E:\diglib\europeanJournalOfTaxonomy\temp17" CACHE=./BatchCache FM=U GoldenGATE Imagine Batch can take the following parameters: PATH: the folder to run GoldenGATE Imagine Batch in (defaults to the installation folder) CONF: the (path and) name of the configuration file to run GoldenGATE Imagine Batch with (defaults to 'GgImagineBatch.cnfg' in the folder GoldenGATE Imagine Batch is running in) CACHE: the root folder for all data caching folders (defaults to the path folder, useful for directing caching to a RAM disc, etc.) DATA: the PDF files to process: - set to PDF file path and name to process that file - set to folder path and name to process all PDF files in that folder - set to TXT file to process all PDF files listed in that file DT: the type of the PDF files to process (defaults to 'G' for 'generic'): - set to 'D' or 'BD' to indicate born-digital PDF files - set to 'S' to indicate scanned PDF files - set to 'G' or omit to indicate generic PDF files (expects both born-digital and scanned, determining type on a per-file basis) FM: the way of handling embedded fonts (relevant only for 'DT=D' and 'DT=G'): - set to D to completely decode embedded fonts - set to V to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping, and verify existing Unicode mappings - set to U to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping (the default) - set to R to only render embedded fonts, but do not decode glyphs - set to Q for quick mode, using Unicode mapping only CS: the char set for decoding embedded fonts (relevant only for 'FM=D' and 'FM=U'): - set to U to use all of Unicode - set to S to use Latin characters and scientific symbols only (the default) - set to M to use Latin characters and mathematical symbols only - set to F to use Full Latin and derived characters only - set to L to use Extended Latin characters only - set to B to use Basic Latin characters only - set to C for custom, using 'CP' parameter to specify path (file or URL) to load from, or name of a named charset to load from a provider CP: the file or URL to load the charset for embedded font decoding from (relevant only for 'CS=C', and required then; implies 'CS=C' if 'CS' parameter omitted); can also be the name of a named charset to resolve via some provider (prefix with '@' to indicate such a name) OUT: the folder to store the produced IMF files in (defaults to the folder each individual source PDF file was loaded from) OT: the way of storing the produced IMF files (defaults to 'F' for 'file'): - set to 'F' or omit to indicate (zipped) single file storage - set to 'D' to indicate indicate (non-zipped) folder storage LOG: the name for the log files to write respective information to (file names are suffixed with '.out.log' and '.err.log', set to 'IDE' or 'NO' to log directly to the console, or to DOC to create one log file per document, located next to the IMF) ST: no value, just add this token to the command to make the batch run on a single core (e.g. if resources required for other simultaneous tasks) VC: no value, just add this token to the command to make the batch produce verbose console output HELP: print this help text The file configuration file ('GgImagineBatch.cnfg' by default) specifies how to process PDF documents after decoding, and can also provide environmental and PDF decoding parameters: - imageMarkupTools: a space separated list of the Image Markup Tools to run - documentExporters: a space separated list of the Document Exporters to run after processing is finished (defaults to all available) - configName: the name of the GoldenGATE Imagine configuration to load the Image Markup Tools and Document Exporters from - cacheRootFolder: configurable default for 'CACHE' parameter - fonts.decoding.mode: configurable default for 'FM' parameter - fonts.decoding.charset: configurable default for 'CS' parameter - fonts.decoding.charsetPath: configurable default for 'FP' parameter D:\GoldenGateImagine20170823>
1.0
problem with running QCTool: interefernence with batch - @gsautter after installing the QCTool, there is an interference with the batch tool. That means I can't run the batch too anymore Microsoft Windows [Version 10.0.17134.648] (c) 2018 Microsoft Corporation. All rights reserved. D:\GoldenGateImagine20170823>java -jar -Xmx10240m GgImagineBatch.jar "E:\diglib\europeanJournalOfTaxonomy\temp17" CACHE=./BatchCache FM=U GoldenGATE Imagine Batch can take the following parameters: PATH: the folder to run GoldenGATE Imagine Batch in (defaults to the installation folder) CONF: the (path and) name of the configuration file to run GoldenGATE Imagine Batch with (defaults to 'GgImagineBatch.cnfg' in the folder GoldenGATE Imagine Batch is running in) CACHE: the root folder for all data caching folders (defaults to the path folder, useful for directing caching to a RAM disc, etc.) DATA: the PDF files to process: - set to PDF file path and name to process that file - set to folder path and name to process all PDF files in that folder - set to TXT file to process all PDF files listed in that file DT: the type of the PDF files to process (defaults to 'G' for 'generic'): - set to 'D' or 'BD' to indicate born-digital PDF files - set to 'S' to indicate scanned PDF files - set to 'G' or omit to indicate generic PDF files (expects both born-digital and scanned, determining type on a per-file basis) FM: the way of handling embedded fonts (relevant only for 'DT=D' and 'DT=G'): - set to D to completely decode embedded fonts - set to V to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping, and verify existing Unicode mappings - set to U to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping (the default) - set to R to only render embedded fonts, but do not decode glyphs - set to Q for quick mode, using Unicode mapping only CS: the char set for decoding embedded fonts (relevant only for 'FM=D' and 'FM=U'): - set to U to use all of Unicode - set to S to use Latin characters and scientific symbols only (the default) - set to M to use Latin characters and mathematical symbols only - set to F to use Full Latin and derived characters only - set to L to use Extended Latin characters only - set to B to use Basic Latin characters only - set to C for custom, using 'CP' parameter to specify path (file or URL) to load from, or name of a named charset to load from a provider CP: the file or URL to load the charset for embedded font decoding from (relevant only for 'CS=C', and required then; implies 'CS=C' if 'CS' parameter omitted); can also be the name of a named charset to resolve via some provider (prefix with '@' to indicate such a name) OUT: the folder to store the produced IMF files in (defaults to the folder each individual source PDF file was loaded from) OT: the way of storing the produced IMF files (defaults to 'F' for 'file'): - set to 'F' or omit to indicate (zipped) single file storage - set to 'D' to indicate indicate (non-zipped) folder storage LOG: the name for the log files to write respective information to (file names are suffixed with '.out.log' and '.err.log', set to 'IDE' or 'NO' to log directly to the console, or to DOC to create one log file per document, located next to the IMF) ST: no value, just add this token to the command to make the batch run on a single core (e.g. if resources required for other simultaneous tasks) VC: no value, just add this token to the command to make the batch produce verbose console output HELP: print this help text The file configuration file ('GgImagineBatch.cnfg' by default) specifies how to process PDF documents after decoding, and can also provide environmental and PDF decoding parameters: - imageMarkupTools: a space separated list of the Image Markup Tools to run - documentExporters: a space separated list of the Document Exporters to run after processing is finished (defaults to all available) - configName: the name of the GoldenGATE Imagine configuration to load the Image Markup Tools and Document Exporters from - cacheRootFolder: configurable default for 'CACHE' parameter - fonts.decoding.mode: configurable default for 'FM' parameter - fonts.decoding.charset: configurable default for 'CS' parameter - fonts.decoding.charsetPath: configurable default for 'FP' parameter D:\GoldenGateImagine20170823>java -jar -Xmx10240m GgImagineBatch.jar "E:\diglib\europeanJournalOfTaxonomy\temp17" CACHE=./BatchCache FM=U GoldenGATE Imagine Batch can take the following parameters: PATH: the folder to run GoldenGATE Imagine Batch in (defaults to the installation folder) CONF: the (path and) name of the configuration file to run GoldenGATE Imagine Batch with (defaults to 'GgImagineBatch.cnfg' in the folder GoldenGATE Imagine Batch is running in) CACHE: the root folder for all data caching folders (defaults to the path folder, useful for directing caching to a RAM disc, etc.) DATA: the PDF files to process: - set to PDF file path and name to process that file - set to folder path and name to process all PDF files in that folder - set to TXT file to process all PDF files listed in that file DT: the type of the PDF files to process (defaults to 'G' for 'generic'): - set to 'D' or 'BD' to indicate born-digital PDF files - set to 'S' to indicate scanned PDF files - set to 'G' or omit to indicate generic PDF files (expects both born-digital and scanned, determining type on a per-file basis) FM: the way of handling embedded fonts (relevant only for 'DT=D' and 'DT=G'): - set to D to completely decode embedded fonts - set to V to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping, and verify existing Unicode mappings - set to U to decode un-mapped characters from embedded fonts, i.e., ones without a Unicode mapping (the default) - set to R to only render embedded fonts, but do not decode glyphs - set to Q for quick mode, using Unicode mapping only CS: the char set for decoding embedded fonts (relevant only for 'FM=D' and 'FM=U'): - set to U to use all of Unicode - set to S to use Latin characters and scientific symbols only (the default) - set to M to use Latin characters and mathematical symbols only - set to F to use Full Latin and derived characters only - set to L to use Extended Latin characters only - set to B to use Basic Latin characters only - set to C for custom, using 'CP' parameter to specify path (file or URL) to load from, or name of a named charset to load from a provider CP: the file or URL to load the charset for embedded font decoding from (relevant only for 'CS=C', and required then; implies 'CS=C' if 'CS' parameter omitted); can also be the name of a named charset to resolve via some provider (prefix with '@' to indicate such a name) OUT: the folder to store the produced IMF files in (defaults to the folder each individual source PDF file was loaded from) OT: the way of storing the produced IMF files (defaults to 'F' for 'file'): - set to 'F' or omit to indicate (zipped) single file storage - set to 'D' to indicate indicate (non-zipped) folder storage LOG: the name for the log files to write respective information to (file names are suffixed with '.out.log' and '.err.log', set to 'IDE' or 'NO' to log directly to the console, or to DOC to create one log file per document, located next to the IMF) ST: no value, just add this token to the command to make the batch run on a single core (e.g. if resources required for other simultaneous tasks) VC: no value, just add this token to the command to make the batch produce verbose console output HELP: print this help text The file configuration file ('GgImagineBatch.cnfg' by default) specifies how to process PDF documents after decoding, and can also provide environmental and PDF decoding parameters: - imageMarkupTools: a space separated list of the Image Markup Tools to run - documentExporters: a space separated list of the Document Exporters to run after processing is finished (defaults to all available) - configName: the name of the GoldenGATE Imagine configuration to load the Image Markup Tools and Document Exporters from - cacheRootFolder: configurable default for 'CACHE' parameter - fonts.decoding.mode: configurable default for 'FM' parameter - fonts.decoding.charset: configurable default for 'CS' parameter - fonts.decoding.charsetPath: configurable default for 'FP' parameter D:\GoldenGateImagine20170823>
process
problem with running qctool interefernence with batch gsautter after installing the qctool there is an interference with the batch tool that means i can t run the batch too anymore microsoft windows c microsoft corporation all rights reserved d java jar ggimaginebatch jar e diglib europeanjournaloftaxonomy cache batchcache fm u goldengate imagine batch can take the following parameters path the folder to run goldengate imagine batch in defaults to the installation folder conf the path and name of the configuration file to run goldengate imagine batch with defaults to ggimaginebatch cnfg in the folder goldengate imagine batch is running in cache the root folder for all data caching folders defaults to the path folder useful for directing caching to a ram disc etc data the pdf files to process set to pdf file path and name to process that file set to folder path and name to process all pdf files in that folder set to txt file to process all pdf files listed in that file dt the type of the pdf files to process defaults to g for generic set to d or bd to indicate born digital pdf files set to s to indicate scanned pdf files set to g or omit to indicate generic pdf files expects both born digital and scanned determining type on a per file basis fm the way of handling embedded fonts relevant only for dt d and dt g set to d to completely decode embedded fonts set to v to decode un mapped characters from embedded fonts i e ones without a unicode mapping and verify existing unicode mappings set to u to decode un mapped characters from embedded fonts i e ones without a unicode mapping the default set to r to only render embedded fonts but do not decode glyphs set to q for quick mode using unicode mapping only cs the char set for decoding embedded fonts relevant only for fm d and fm u set to u to use all of unicode set to s to use latin characters and scientific symbols only the default set to m to use latin characters and mathematical symbols only set to f to use full latin and derived characters only set to l to use extended latin characters only set to b to use basic latin characters only set to c for custom using cp parameter to specify path file or url to load from or name of a named charset to load from a provider cp the file or url to load the charset for embedded font decoding from relevant only for cs c and required then implies cs c if cs parameter omitted can also be the name of a named charset to resolve via some provider prefix with to indicate such a name out the folder to store the produced imf files in defaults to the folder each individual source pdf file was loaded from ot the way of storing the produced imf files defaults to f for file set to f or omit to indicate zipped single file storage set to d to indicate indicate non zipped folder storage log the name for the log files to write respective information to file names are suffixed with out log and err log set to ide or no to log directly to the console or to doc to create one log file per document located next to the imf st no value just add this token to the command to make the batch run on a single core e g if resources required for other simultaneous tasks vc no value just add this token to the command to make the batch produce verbose console output help print this help text the file configuration file ggimaginebatch cnfg by default specifies how to process pdf documents after decoding and can also provide environmental and pdf decoding parameters imagemarkuptools a space separated list of the image markup tools to run documentexporters a space separated list of the document exporters to run after processing is finished defaults to all available configname the name of the goldengate imagine configuration to load the image markup tools and document exporters from cacherootfolder configurable default for cache parameter fonts decoding mode configurable default for fm parameter fonts decoding charset configurable default for cs parameter fonts decoding charsetpath configurable default for fp parameter d java jar ggimaginebatch jar e diglib europeanjournaloftaxonomy cache batchcache fm u goldengate imagine batch can take the following parameters path the folder to run goldengate imagine batch in defaults to the installation folder conf the path and name of the configuration file to run goldengate imagine batch with defaults to ggimaginebatch cnfg in the folder goldengate imagine batch is running in cache the root folder for all data caching folders defaults to the path folder useful for directing caching to a ram disc etc data the pdf files to process set to pdf file path and name to process that file set to folder path and name to process all pdf files in that folder set to txt file to process all pdf files listed in that file dt the type of the pdf files to process defaults to g for generic set to d or bd to indicate born digital pdf files set to s to indicate scanned pdf files set to g or omit to indicate generic pdf files expects both born digital and scanned determining type on a per file basis fm the way of handling embedded fonts relevant only for dt d and dt g set to d to completely decode embedded fonts set to v to decode un mapped characters from embedded fonts i e ones without a unicode mapping and verify existing unicode mappings set to u to decode un mapped characters from embedded fonts i e ones without a unicode mapping the default set to r to only render embedded fonts but do not decode glyphs set to q for quick mode using unicode mapping only cs the char set for decoding embedded fonts relevant only for fm d and fm u set to u to use all of unicode set to s to use latin characters and scientific symbols only the default set to m to use latin characters and mathematical symbols only set to f to use full latin and derived characters only set to l to use extended latin characters only set to b to use basic latin characters only set to c for custom using cp parameter to specify path file or url to load from or name of a named charset to load from a provider cp the file or url to load the charset for embedded font decoding from relevant only for cs c and required then implies cs c if cs parameter omitted can also be the name of a named charset to resolve via some provider prefix with to indicate such a name out the folder to store the produced imf files in defaults to the folder each individual source pdf file was loaded from ot the way of storing the produced imf files defaults to f for file set to f or omit to indicate zipped single file storage set to d to indicate indicate non zipped folder storage log the name for the log files to write respective information to file names are suffixed with out log and err log set to ide or no to log directly to the console or to doc to create one log file per document located next to the imf st no value just add this token to the command to make the batch run on a single core e g if resources required for other simultaneous tasks vc no value just add this token to the command to make the batch produce verbose console output help print this help text the file configuration file ggimaginebatch cnfg by default specifies how to process pdf documents after decoding and can also provide environmental and pdf decoding parameters imagemarkuptools a space separated list of the image markup tools to run documentexporters a space separated list of the document exporters to run after processing is finished defaults to all available configname the name of the goldengate imagine configuration to load the image markup tools and document exporters from cacherootfolder configurable default for cache parameter fonts decoding mode configurable default for fm parameter fonts decoding charset configurable default for cs parameter fonts decoding charsetpath configurable default for fp parameter d
1
17,125
22,646,329,345
IssuesEvent
2022-07-01 09:03:57
pycaret/pycaret
https://api.github.com/repos/pycaret/pycaret
closed
[ENH]: Implementation of more imputation methods
enhancement preprocessing impute
### Describe the feature you want to add to this project Implementation of more imputation methods ### Describe your proposed solution I found an article with interesting imputation methods (https://towardsdatascience.com/implementation-and-limitations-of-imputation-methods-b6576bf31a6c) ### Describe alternatives you've considered, if relevant _No response_ ### Additional context _No response_
1.0
[ENH]: Implementation of more imputation methods - ### Describe the feature you want to add to this project Implementation of more imputation methods ### Describe your proposed solution I found an article with interesting imputation methods (https://towardsdatascience.com/implementation-and-limitations-of-imputation-methods-b6576bf31a6c) ### Describe alternatives you've considered, if relevant _No response_ ### Additional context _No response_
process
implementation of more imputation methods describe the feature you want to add to this project implementation of more imputation methods describe your proposed solution i found an article with interesting imputation methods describe alternatives you ve considered if relevant no response additional context no response
1
2,687
5,537,506,862
IssuesEvent
2017-03-21 22:17:04
powertac/powertac-server
https://api.github.com/repos/powertac/powertac-server
closed
factored-customer.jar 1.4.1 not being resolved in maven central
Bug (blocker) Process
1.4.1 release works if run with `mvn -Pweb2`. However, it fails to find factored-customer.jar when run with `mvn -Pcli`. The 1.4.1 version of factored-customer appears to be installed correctly on [maven central](http://repo1.maven.org/maven2/org/powertac/factored-customer/1.4.1/), along with the other modules. Here is the error message: ``` [INFO] ------------------------------------------------------------------------ [INFO] Building Power TAC distribution 1.4.1 [INFO] ------------------------------------------------------------------------ Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom Downloading: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom Downloaded: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom (2 KB at 4.7 KB/sec) [WARNING] The POM for org.powertac:customer-models:jar:1.4.1 is missing, no dependency information available Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.jar Downloading: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.jar [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.131 s [INFO] Finished at: 2017-03-21T11:46:58-07:00 [INFO] Final Memory: 14M/163M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal on project server-distribution: Could not resolve dependencies for project org.powertac:server-distribution:pom:1.4.1: Could not find artifact org.powertac:factored-customer:jar:1.4.1 in sonatype (https://oss.sonatype.org/content/repositories/snapshots/) -> [Help 1] ``` Note that there is also a complaint about customer-models, and that it did download the pom.xml from maven central.
1.0
factored-customer.jar 1.4.1 not being resolved in maven central - 1.4.1 release works if run with `mvn -Pweb2`. However, it fails to find factored-customer.jar when run with `mvn -Pcli`. The 1.4.1 version of factored-customer appears to be installed correctly on [maven central](http://repo1.maven.org/maven2/org/powertac/factored-customer/1.4.1/), along with the other modules. Here is the error message: ``` [INFO] ------------------------------------------------------------------------ [INFO] Building Power TAC distribution 1.4.1 [INFO] ------------------------------------------------------------------------ Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom Downloading: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom Downloaded: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.pom (2 KB at 4.7 KB/sec) [WARNING] The POM for org.powertac:customer-models:jar:1.4.1 is missing, no dependency information available Downloading: https://oss.sonatype.org/content/repositories/snapshots/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.jar Downloading: https://repo.maven.apache.org/maven2/org/powertac/factored-customer/1.4.1/factored-customer-1.4.1.jar [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.131 s [INFO] Finished at: 2017-03-21T11:46:58-07:00 [INFO] Final Memory: 14M/163M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal on project server-distribution: Could not resolve dependencies for project org.powertac:server-distribution:pom:1.4.1: Could not find artifact org.powertac:factored-customer:jar:1.4.1 in sonatype (https://oss.sonatype.org/content/repositories/snapshots/) -> [Help 1] ``` Note that there is also a complaint about customer-models, and that it did download the pom.xml from maven central.
process
factored customer jar not being resolved in maven central release works if run with mvn however it fails to find factored customer jar when run with mvn pcli the version of factored customer appears to be installed correctly on along with the other modules here is the error message building power tac distribution downloading downloading downloaded kb at kb sec the pom for org powertac customer models jar is missing no dependency information available downloading downloading build failure total time s finished at final memory failed to execute goal on project server distribution could not resolve dependencies for project org powertac server distribution pom could not find artifact org powertac factored customer jar in sonatype note that there is also a complaint about customer models and that it did download the pom xml from maven central
1
18,457
24,549,032,728
IssuesEvent
2022-10-12 11:05:24
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Admins are navigated to dashboard by using temporary password in the below scenario
Bug P0 Participant manager Process: Fixed Process: Tested dev
**Steps:** 1. In Sign in screen, Click on forgot password link 2. Enter the registered admin's email and Click on Submit button 3. Now, sign in with Superadmin's account and navigate to Admins tab 4. Deactivate the admin account which was entered in forgot password in step 2 5. Again reactivate the same account 6. And then Sign out as Superadmin 7. Again Sign in with admin account email which was entered in forgot password in step 2 and with temporary password and then Verify **AR:** Admins are navigated to dashboard by using temporary password in the above scenario **ER:** Admins should be navigated to set up password screen in the above scenario
2.0
[PM] Admins are navigated to dashboard by using temporary password in the below scenario - **Steps:** 1. In Sign in screen, Click on forgot password link 2. Enter the registered admin's email and Click on Submit button 3. Now, sign in with Superadmin's account and navigate to Admins tab 4. Deactivate the admin account which was entered in forgot password in step 2 5. Again reactivate the same account 6. And then Sign out as Superadmin 7. Again Sign in with admin account email which was entered in forgot password in step 2 and with temporary password and then Verify **AR:** Admins are navigated to dashboard by using temporary password in the above scenario **ER:** Admins should be navigated to set up password screen in the above scenario
process
admins are navigated to dashboard by using temporary password in the below scenario steps in sign in screen click on forgot password link enter the registered admin s email and click on submit button now sign in with superadmin s account and navigate to admins tab deactivate the admin account which was entered in forgot password in step again reactivate the same account and then sign out as superadmin again sign in with admin account email which was entered in forgot password in step and with temporary password and then verify ar admins are navigated to dashboard by using temporary password in the above scenario er admins should be navigated to set up password screen in the above scenario
1
17,475
23,298,435,452
IssuesEvent
2022-08-07 00:15:56
mdsreq-fga-unb/2022.1-GDS
https://api.github.com/repos/mdsreq-fga-unb/2022.1-GDS
closed
Processo de Desenvolvimento - Construção/Teste
Processo de Desenvolvimento
**Descrição** entre as atividades de construção e teste tem-se: - construção: Código pronto para testagem - teste: Testes unitários automatizados e containerização Não será realizado nenhum teste manual? da equipe, do usuário,...?
1.0
Processo de Desenvolvimento - Construção/Teste - **Descrição** entre as atividades de construção e teste tem-se: - construção: Código pronto para testagem - teste: Testes unitários automatizados e containerização Não será realizado nenhum teste manual? da equipe, do usuário,...?
process
processo de desenvolvimento construção teste descrição entre as atividades de construção e teste tem se construção código pronto para testagem teste testes unitários automatizados e containerização não será realizado nenhum teste manual da equipe do usuário
1
349,437
10,469,326,521
IssuesEvent
2019-09-22 19:54:24
Radarr/Radarr
https://api.github.com/repos/Radarr/Radarr
closed
[Feature Request] Unselect All in Movie Editor
done aphrodite enhancement feature request priority:medium ui
**Description:** When using the Movie Editor to manage a movies matching a particular filter; if they are updated that they no longer match the filter they remain selected and subsequent save/organize apply to those non-visible entries as well. This caught me by surprise when updating Monitored and Profile status of existing movies that I imported. An Unselect All button next to the count would allow easily resetting of selected entries without having to exit the Movie Editor screen and going back in (or manually finding and unselecting previous changed entries) **Radarr Version:** 0.2.0.452 **Logs:**
1.0
[Feature Request] Unselect All in Movie Editor - **Description:** When using the Movie Editor to manage a movies matching a particular filter; if they are updated that they no longer match the filter they remain selected and subsequent save/organize apply to those non-visible entries as well. This caught me by surprise when updating Monitored and Profile status of existing movies that I imported. An Unselect All button next to the count would allow easily resetting of selected entries without having to exit the Movie Editor screen and going back in (or manually finding and unselecting previous changed entries) **Radarr Version:** 0.2.0.452 **Logs:**
non_process
unselect all in movie editor description when using the movie editor to manage a movies matching a particular filter if they are updated that they no longer match the filter they remain selected and subsequent save organize apply to those non visible entries as well this caught me by surprise when updating monitored and profile status of existing movies that i imported an unselect all button next to the count would allow easily resetting of selected entries without having to exit the movie editor screen and going back in or manually finding and unselecting previous changed entries radarr version logs
0
343,086
30,651,850,992
IssuesEvent
2023-07-25 09:25:35
Rothamsted/knetminer
https://api.github.com/repos/Rothamsted/knetminer
closed
[BasicLocal UI Test] Copy gene accessions does not work anymore
bug project:client UI-testing
*This is a manual UI test based on [this template][TPLREF]. Other instances of this test can be found [here][TPLINST]. Tests from the template that aren't mentioned hereby are intended as passed.* [TPLREF]: https://github.com/Rothamsted/knetminer-testing/blob/v2.1/manual-ui-testing/ui-test-templates/basic-local/README.md [TPLINST]: https://github.com/Rothamsted/knetminer/issues?q=BasicLocal ## Misc UI tests ### In evidence view -> Genes, click on a row, the copy and donwload buttons work fine. Paste some list in the search box In the console, I'm getting the error: ``` evidence-table.js:297 Uncaught ReferenceError: geneTable is not defined at HTMLAnchorElement.<anonymous> (evidence-table.js:297:57) at HTMLAnchorElement.dispatch (jquery-bstrap.min.js:32:39900) at v.handle (jquery-bstrap.min.js:32:36635) ```
1.0
[BasicLocal UI Test] Copy gene accessions does not work anymore - *This is a manual UI test based on [this template][TPLREF]. Other instances of this test can be found [here][TPLINST]. Tests from the template that aren't mentioned hereby are intended as passed.* [TPLREF]: https://github.com/Rothamsted/knetminer-testing/blob/v2.1/manual-ui-testing/ui-test-templates/basic-local/README.md [TPLINST]: https://github.com/Rothamsted/knetminer/issues?q=BasicLocal ## Misc UI tests ### In evidence view -> Genes, click on a row, the copy and donwload buttons work fine. Paste some list in the search box In the console, I'm getting the error: ``` evidence-table.js:297 Uncaught ReferenceError: geneTable is not defined at HTMLAnchorElement.<anonymous> (evidence-table.js:297:57) at HTMLAnchorElement.dispatch (jquery-bstrap.min.js:32:39900) at v.handle (jquery-bstrap.min.js:32:36635) ```
non_process
copy gene accessions does not work anymore this is a manual ui test based on other instances of this test can be found tests from the template that aren t mentioned hereby are intended as passed misc ui tests in evidence view genes click on a row the copy and donwload buttons work fine paste some list in the search box in the console i m getting the error evidence table js uncaught referenceerror genetable is not defined at htmlanchorelement evidence table js at htmlanchorelement dispatch jquery bstrap min js at v handle jquery bstrap min js
0
3,963
6,895,494,504
IssuesEvent
2017-11-23 14:05:00
2017-d13/gestion-des-absences
https://api.github.com/repos/2017-d13/gestion-des-absences
reopened
USGDA003 - menu manager
in process
En tant que manager d'un département j'ai accès aux fonctionnalités de menu suivantes:\Accueil, Gestion des absences, Planning des absences, Validation des demandes, Vues synthétiques, Jours feriés ![](https://github.com/DiginamicFormation/ressources-atelier/raw/master/gestion-des-absences/Accueil.manager.png)
1.0
USGDA003 - menu manager - En tant que manager d'un département j'ai accès aux fonctionnalités de menu suivantes:\Accueil, Gestion des absences, Planning des absences, Validation des demandes, Vues synthétiques, Jours feriés ![](https://github.com/DiginamicFormation/ressources-atelier/raw/master/gestion-des-absences/Accueil.manager.png)
process
menu manager en tant que manager d un département j ai accès aux fonctionnalités de menu suivantes accueil gestion des absences planning des absences validation des demandes vues synthétiques jours feriés
1
22,591
31,815,048,489
IssuesEvent
2023-09-13 19:46:56
0xPolygonMiden/miden-vm
https://api.github.com/repos/0xPolygonMiden/miden-vm
opened
Replace AdviceProvider with Host interface
processor
As mentioned in https://github.com/0xPolygonMiden/miden-vm/pull/1069#issuecomment-1717088131, it could be a good idea to replace the current [AdviceProvider](https://github.com/0xPolygonMiden/miden-vm/blob/next/processor/src/advice/mod.rs#L42) with a more general `Host` interface. This host interface would assume the responsibilities of the current `AdviceProvider` but would also have additional responsibilities like capturing events, debug info, and probably other things. One simple way to do this is to rename the current `AdviceProvider` into `Host` and add a few more methods to it. This will work, but might make the interface a bit unwieldy. Another option is to use it as a wrapper around more specialized traits. For example: ```Rust pub trait Host { type AdviceProvider: AdviceProvider; type DebugHandler: DebugHandler; type SignalHandler: SignalHandler; fn adv(&self) -> &Self::AdviceProvider; fn adv_mut(&mut self) -> &mut Self::AdviceProvider; fn debug(&mut self) -> &Self::DebugHandler; fn signal(&mut self) -> &Self::SignalHandler; } ``` This should also work, but also may be "too generalized" and could be a bit awkward to work with. So, I'm wondering if there is a better way to define the `Host` trait.
1.0
Replace AdviceProvider with Host interface - As mentioned in https://github.com/0xPolygonMiden/miden-vm/pull/1069#issuecomment-1717088131, it could be a good idea to replace the current [AdviceProvider](https://github.com/0xPolygonMiden/miden-vm/blob/next/processor/src/advice/mod.rs#L42) with a more general `Host` interface. This host interface would assume the responsibilities of the current `AdviceProvider` but would also have additional responsibilities like capturing events, debug info, and probably other things. One simple way to do this is to rename the current `AdviceProvider` into `Host` and add a few more methods to it. This will work, but might make the interface a bit unwieldy. Another option is to use it as a wrapper around more specialized traits. For example: ```Rust pub trait Host { type AdviceProvider: AdviceProvider; type DebugHandler: DebugHandler; type SignalHandler: SignalHandler; fn adv(&self) -> &Self::AdviceProvider; fn adv_mut(&mut self) -> &mut Self::AdviceProvider; fn debug(&mut self) -> &Self::DebugHandler; fn signal(&mut self) -> &Self::SignalHandler; } ``` This should also work, but also may be "too generalized" and could be a bit awkward to work with. So, I'm wondering if there is a better way to define the `Host` trait.
process
replace adviceprovider with host interface as mentioned in it could be a good idea to replace the current with a more general host interface this host interface would assume the responsibilities of the current adviceprovider but would also have additional responsibilities like capturing events debug info and probably other things one simple way to do this is to rename the current adviceprovider into host and add a few more methods to it this will work but might make the interface a bit unwieldy another option is to use it as a wrapper around more specialized traits for example rust pub trait host type adviceprovider adviceprovider type debughandler debughandler type signalhandler signalhandler fn adv self self adviceprovider fn adv mut mut self mut self adviceprovider fn debug mut self self debughandler fn signal mut self self signalhandler this should also work but also may be too generalized and could be a bit awkward to work with so i m wondering if there is a better way to define the host trait
1
18,652
24,581,182,251
IssuesEvent
2022-10-13 15:43:39
maticnetwork/miden
https://api.github.com/repos/maticnetwork/miden
closed
Improvements in memory lookup layout
processor air
Currently, the way we compute memory lookups in the stack and in the memory chiplet lead to minor inconsistencies, and it would be good to fix these. Specifically, in the memory chiplet, words are laid out with the first element listed first. However, on the stack side, the words on the stack have the last element listed first (because they are in the stack order) while helper registers get populated with the fist element listed first. Since we can't change the order of word elements on the stack, a potential way to get rid of this inconsistency would be to always list words with the last element first (both in the memory chiplet and in the stack helper registers). Another thing that can be improved is how we handle `MLOAD` operation. Currently, this operation puts only 3 out of the 4 elements of the word into the helper registers. While this works, it creates some extra work when computing memory request values. So, a better approach could be to put the entire word (4 elements) into the helper registers and then to add an extra constraint like this: $$ s_0' - h_0 = 0 $$
1.0
Improvements in memory lookup layout - Currently, the way we compute memory lookups in the stack and in the memory chiplet lead to minor inconsistencies, and it would be good to fix these. Specifically, in the memory chiplet, words are laid out with the first element listed first. However, on the stack side, the words on the stack have the last element listed first (because they are in the stack order) while helper registers get populated with the fist element listed first. Since we can't change the order of word elements on the stack, a potential way to get rid of this inconsistency would be to always list words with the last element first (both in the memory chiplet and in the stack helper registers). Another thing that can be improved is how we handle `MLOAD` operation. Currently, this operation puts only 3 out of the 4 elements of the word into the helper registers. While this works, it creates some extra work when computing memory request values. So, a better approach could be to put the entire word (4 elements) into the helper registers and then to add an extra constraint like this: $$ s_0' - h_0 = 0 $$
process
improvements in memory lookup layout currently the way we compute memory lookups in the stack and in the memory chiplet lead to minor inconsistencies and it would be good to fix these specifically in the memory chiplet words are laid out with the first element listed first however on the stack side the words on the stack have the last element listed first because they are in the stack order while helper registers get populated with the fist element listed first since we can t change the order of word elements on the stack a potential way to get rid of this inconsistency would be to always list words with the last element first both in the memory chiplet and in the stack helper registers another thing that can be improved is how we handle mload operation currently this operation puts only out of the elements of the word into the helper registers while this works it creates some extra work when computing memory request values so a better approach could be to put the entire word elements into the helper registers and then to add an extra constraint like this s h
1
287,713
31,856,311,727
IssuesEvent
2023-09-15 07:43:21
Trinadh465/linux-4.1.15_CVE-2023-26607
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-26607
opened
CVE-2016-6187 (High) detected in linuxlinux-4.6
Mend: dependency security vulnerability
## CVE-2016-6187 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/apparmor/lsm.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/apparmor/lsm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The apparmor_setprocattr function in security/apparmor/lsm.c in the Linux kernel before 4.6.5 does not validate the buffer size, which allows local users to gain privileges by triggering an AppArmor setprocattr hook. <p>Publish Date: 2016-08-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6187>CVE-2016-6187</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6187">https://nvd.nist.gov/vuln/detail/CVE-2016-6187</a></p> <p>Release Date: 2016-08-06</p> <p>Fix Resolution: 4.6.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-6187 (High) detected in linuxlinux-4.6 - ## CVE-2016-6187 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-26607/commit/6fca0e3f2f14e1e851258fd815766531370084b0">6fca0e3f2f14e1e851258fd815766531370084b0</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/apparmor/lsm.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/security/apparmor/lsm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The apparmor_setprocattr function in security/apparmor/lsm.c in the Linux kernel before 4.6.5 does not validate the buffer size, which allows local users to gain privileges by triggering an AppArmor setprocattr hook. <p>Publish Date: 2016-08-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6187>CVE-2016-6187</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6187">https://nvd.nist.gov/vuln/detail/CVE-2016-6187</a></p> <p>Release Date: 2016-08-06</p> <p>Fix Resolution: 4.6.5</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files security apparmor lsm c security apparmor lsm c vulnerability details the apparmor setprocattr function in security apparmor lsm c in the linux kernel before does not validate the buffer size which allows local users to gain privileges by triggering an apparmor setprocattr hook publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
3,299
6,395,595,392
IssuesEvent
2017-08-04 13:39:25
pelias/pelias
https://api.github.com/repos/pelias/pelias
closed
Unify Private and Vagrant Chef Cookbooks
enhancement nyresolution-2016 processed
Right now there are two sets of cookbooks for building Pelias: the full Chef cookbooks (which are in a private repository), and the Chef cookbooks in the Pelias [Vagrant](https://github.com/pelias/vagrant) image. While this makes it easy to ensure our private credentials for AWS, etc don't accidentally leak out, it has the following disadvantages: - any functionality in the vagrant image is duplicated in the full cookbooks - the functionality that exists in the vagrant image is usually a bit more limited than in the full cookbooks - Most of us (I think) use the Vagrant image fairly infrequently, so it's possible we could break it inadvertently and not know until someone else reports it - the vagrant image often falls behind the full cookbooks, for example it was missing Openaddresses support until [recently](https://github.com/pelias/vagrant/pull/13) - Pull requests and contributions to the Vagrant cookbooks can't easily be incorporated into the full cookbooks Even if it's not a full unification, it would be great if more of the cookbook code was shared. _This comment written by @orangejulius_
1.0
Unify Private and Vagrant Chef Cookbooks - Right now there are two sets of cookbooks for building Pelias: the full Chef cookbooks (which are in a private repository), and the Chef cookbooks in the Pelias [Vagrant](https://github.com/pelias/vagrant) image. While this makes it easy to ensure our private credentials for AWS, etc don't accidentally leak out, it has the following disadvantages: - any functionality in the vagrant image is duplicated in the full cookbooks - the functionality that exists in the vagrant image is usually a bit more limited than in the full cookbooks - Most of us (I think) use the Vagrant image fairly infrequently, so it's possible we could break it inadvertently and not know until someone else reports it - the vagrant image often falls behind the full cookbooks, for example it was missing Openaddresses support until [recently](https://github.com/pelias/vagrant/pull/13) - Pull requests and contributions to the Vagrant cookbooks can't easily be incorporated into the full cookbooks Even if it's not a full unification, it would be great if more of the cookbook code was shared. _This comment written by @orangejulius_
process
unify private and vagrant chef cookbooks right now there are two sets of cookbooks for building pelias the full chef cookbooks which are in a private repository and the chef cookbooks in the pelias image while this makes it easy to ensure our private credentials for aws etc don t accidentally leak out it has the following disadvantages any functionality in the vagrant image is duplicated in the full cookbooks the functionality that exists in the vagrant image is usually a bit more limited than in the full cookbooks most of us i think use the vagrant image fairly infrequently so it s possible we could break it inadvertently and not know until someone else reports it the vagrant image often falls behind the full cookbooks for example it was missing openaddresses support until pull requests and contributions to the vagrant cookbooks can t easily be incorporated into the full cookbooks even if it s not a full unification it would be great if more of the cookbook code was shared this comment written by orangejulius
1
20,054
26,541,651,312
IssuesEvent
2023-01-19 19:47:48
GoogleCloudPlatform/spring-cloud-gcp
https://api.github.com/repos/GoogleCloudPlatform/spring-cloud-gcp
closed
Switch r2dbc-postgresql from groupId io.r2dbc to org.postgresql
priority: p2 cloud-sql type: process spring-boot-3.0
The [Spring Boot 2.7 release notes say](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.7-Release-Notes#r2dbc-driver-changes): > In the [Borca release](https://r2dbc.io/2022/02/04/r2dbc-borca-released), the group ID of `r2dbc-postgresql`, the driver for PostgreSQL, has changed from `io.r2dbc` to `org.postgresql`. `r2dbc-mysql`, the driver for MySQL, has been removed. Consider using `r2dbc-mariadb` as a replacement. `com.google.cloud:spring-cloud-gcp-starter-sql-postgres-r2dbc:3.3.0` depends on `io.r2dbc:r2dbc-postgresql:0.8.12.RELEASE`. This needs to be changed to `org.postgresql:r2dbc-postgresql` in a future version to keep up with developments.
1.0
Switch r2dbc-postgresql from groupId io.r2dbc to org.postgresql - The [Spring Boot 2.7 release notes say](https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.7-Release-Notes#r2dbc-driver-changes): > In the [Borca release](https://r2dbc.io/2022/02/04/r2dbc-borca-released), the group ID of `r2dbc-postgresql`, the driver for PostgreSQL, has changed from `io.r2dbc` to `org.postgresql`. `r2dbc-mysql`, the driver for MySQL, has been removed. Consider using `r2dbc-mariadb` as a replacement. `com.google.cloud:spring-cloud-gcp-starter-sql-postgres-r2dbc:3.3.0` depends on `io.r2dbc:r2dbc-postgresql:0.8.12.RELEASE`. This needs to be changed to `org.postgresql:r2dbc-postgresql` in a future version to keep up with developments.
process
switch postgresql from groupid io to org postgresql the in the the group id of postgresql the driver for postgresql has changed from io to org postgresql mysql the driver for mysql has been removed consider using mariadb as a replacement com google cloud spring cloud gcp starter sql postgres depends on io postgresql release this needs to be changed to org postgresql postgresql in a future version to keep up with developments
1
9,475
12,468,170,624
IssuesEvent
2020-05-28 18:22:20
nanoframework/Home
https://api.github.com/repos/nanoframework/Home
closed
Compile problem with enums in class
Area: Metadata Processor Status: FIXED
### Details about Problem **nanoFramework area:** Visual Studio extension **VS version<!--(if relevant)-->:** 2017 **VS extension version<!--(if relevant)-->:** 2017.2.0.6 **Target<!--(if relevant)-->:** **Firmware image version<!--(if relevant)-->:** **Device capabilities output<!--(if relevant)-->:** ### Description With version 2017.2.0.6 its problem with enums ### Detailed repro steps so we can see the same problem Cannot compile this code: ```cs public class LinkHelper { public class LinkModel { public enum LinkEvent { Created, Removed } public readonly LinkEvent eventType; } } ``` ![image](https://user-images.githubusercontent.com/47118568/82200346-27cc3c00-98ff-11ea-9511-70ad6019d9c5.png) ... ### Other suggested things <!-- if applicable/relevant -->
1.0
Compile problem with enums in class - ### Details about Problem **nanoFramework area:** Visual Studio extension **VS version<!--(if relevant)-->:** 2017 **VS extension version<!--(if relevant)-->:** 2017.2.0.6 **Target<!--(if relevant)-->:** **Firmware image version<!--(if relevant)-->:** **Device capabilities output<!--(if relevant)-->:** ### Description With version 2017.2.0.6 its problem with enums ### Detailed repro steps so we can see the same problem Cannot compile this code: ```cs public class LinkHelper { public class LinkModel { public enum LinkEvent { Created, Removed } public readonly LinkEvent eventType; } } ``` ![image](https://user-images.githubusercontent.com/47118568/82200346-27cc3c00-98ff-11ea-9511-70ad6019d9c5.png) ... ### Other suggested things <!-- if applicable/relevant -->
process
compile problem with enums in class details about problem nanoframework area visual studio extension vs version vs extension version target firmware image version device capabilities output description with version its problem with enums detailed repro steps so we can see the same problem cannot compile this code cs public class linkhelper public class linkmodel public enum linkevent created removed public readonly linkevent eventtype other suggested things
1
331,053
10,059,345,151
IssuesEvent
2019-07-22 16:14:41
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.airbnb.com - site is not usable
browser-firefox-tablet engine-gecko priority-important type-tracking-protection-basic
<!-- @browser: Firefox Mobile (Tablet) 68.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Tablet; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: mobile-reporter --> <!-- @extra_labels: type-tracking-protection-basic --> **URL**: https://www.airbnb.com/login **Browser / Version**: Firefox Mobile (Tablet) 68.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: cannot login **Steps to Reproduce**: Cannot login in mobile mode <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190716192356</li><li>tracking content blocked: true (basic)</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://www.airbnb.com/login" line: 0}]', u'[console.log(JQMIGRATE: Logging is active) https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:480]', u'[console.warn(JQMIGRATE: jQuery.browser is deprecated) https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:128]', u'[console.trace() https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:177]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.airbnb.com - site is not usable - <!-- @browser: Firefox Mobile (Tablet) 68.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Tablet; rv:68.0) Gecko/68.0 Firefox/68.0 --> <!-- @reported_with: mobile-reporter --> <!-- @extra_labels: type-tracking-protection-basic --> **URL**: https://www.airbnb.com/login **Browser / Version**: Firefox Mobile (Tablet) 68.0 **Operating System**: Android **Tested Another Browser**: No **Problem type**: Site is not usable **Description**: cannot login **Steps to Reproduce**: Cannot login in mobile mode <details> <summary>Browser Configuration</summary> <ul> <li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190716192356</li><li>tracking content blocked: true (basic)</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li> </ul> <p>Console Messages:</p> <pre> [u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "Content Security Policy: Directive child-src has been deprecated. Please use directive worker-src to control workers, or directive frame-src to control frames respectively."]', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src or style-src: nonce-source or hash-source specified"]', u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://www.airbnb.com/login" line: 0}]', u'[console.log(JQMIGRATE: Logging is active) https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:480]', u'[console.warn(JQMIGRATE: jQuery.browser is deprecated) https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:128]', u'[console.trace() https://a0.muscache.com/airbnb/static/client/packages/libs_jquery.bundle-5cda6300.js:15:177]'] </pre> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox mobile tablet operating system android tested another browser no problem type site is not usable description cannot login steps to reproduce cannot login in mobile mode browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked true basic gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages u u u u u u u from with ❤️
0
13,334
15,790,880,242
IssuesEvent
2021-04-02 02:47:41
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Question about Dashboard and search
log-processing question
First i have to say, thank for the program i really enjoying study it :smile: Now the question, based on this image: ![novo_titulo](https://user-images.githubusercontent.com/65990907/108209380-fe984080-7108-11eb-815c-a9065142392d.png) For exemplo this PW00125, is possible to have another search to look all the 1751 Hits and make the same analiys? in another words, istead of put all PW00125 in one group and create a line, put the single hit in a line, create a serch of 1751 line for each hits of PW00125? I hope you undertands what i'm try to say :sweat_smile:
1.0
Question about Dashboard and search - First i have to say, thank for the program i really enjoying study it :smile: Now the question, based on this image: ![novo_titulo](https://user-images.githubusercontent.com/65990907/108209380-fe984080-7108-11eb-815c-a9065142392d.png) For exemplo this PW00125, is possible to have another search to look all the 1751 Hits and make the same analiys? in another words, istead of put all PW00125 in one group and create a line, put the single hit in a line, create a serch of 1751 line for each hits of PW00125? I hope you undertands what i'm try to say :sweat_smile:
process
question about dashboard and search first i have to say thank for the program i really enjoying study it smile now the question based on this image for exemplo this is possible to have another search to look all the hits and make the same analiys in another words istead of put all in one group and create a line put the single hit in a line create a serch of line for each hits of i hope you undertands what i m try to say sweat smile
1
344,988
10,351,217,617
IssuesEvent
2019-09-05 06:11:52
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.paypal.com - site is not usable
browser-firefox-mobile engine-gecko priority-critical
<!-- @browser: Firefox Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> **URL**: https://www.paypal.com/webapps/hermes?country.x=GB **Browser / Version**: Firefox Mobile 69.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: can't login **Steps to Reproduce**: It keeps looping back to the login screen after entering my 2 factor code <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.paypal.com - site is not usable - <!-- @browser: Firefox Mobile 69.0 --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:69.0) Gecko/69.0 Firefox/69.0 --> <!-- @reported_with: --> **URL**: https://www.paypal.com/webapps/hermes?country.x=GB **Browser / Version**: Firefox Mobile 69.0 **Operating System**: Android **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: can't login **Steps to Reproduce**: It keeps looping back to the login screen after entering my 2 factor code <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description can t login steps to reproduce it keeps looping back to the login screen after entering my factor code browser configuration none from with ❤️
0
686,547
23,495,328,340
IssuesEvent
2022-08-18 00:13:58
zot4plan/Zot4Plan
https://api.github.com/repos/zot4plan/Zot4Plan
opened
Google reCAPTCHA
Priority: high Type: feature request
**Story** Need to prevent spamming and abusing save and load schedules **Requirement** reCAPTCHA before saving or loading reCAPTCHA should only be enable on the first save and load.
1.0
Google reCAPTCHA - **Story** Need to prevent spamming and abusing save and load schedules **Requirement** reCAPTCHA before saving or loading reCAPTCHA should only be enable on the first save and load.
non_process
google recaptcha story need to prevent spamming and abusing save and load schedules requirement recaptcha before saving or loading recaptcha should only be enable on the first save and load
0
112,655
17,095,378,599
IssuesEvent
2021-07-09 01:08:59
RG4421/openedr
https://api.github.com/repos/RG4421/openedr
closed
CVE-2018-16890 (High) detected in curlcurl-7_63_0 - autoclosed
security vulnerability
## CVE-2018-16890 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary> <p> <p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p> <p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vauth/ntlm.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vauth/ntlm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> libcurl versions from 7.36.0 to before 7.64.0 is vulnerable to a heap buffer out-of-bounds read. The function handling incoming NTLM type-2 messages (`lib/vauth/ntlm.c:ntlm_decode_type2_target`) does not validate incoming data correctly and is subject to an integer overflow vulnerability. Using that overflow, a malicious or broken NTLM server could trick libcurl to accept a bad length + offset combination that would lead to a buffer read out-of-bounds. <p>Publish Date: 2019-02-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16890>CVE-2018-16890</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16890">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16890</a></p> <p>Release Date: 2019-02-06</p> <p>Fix Resolution: curl-7_64_0</p> </p> </details> <p></p>
True
CVE-2018-16890 (High) detected in curlcurl-7_63_0 - autoclosed - ## CVE-2018-16890 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>curlcurl-7_63_0</b></p></summary> <p> <p>A command line tool and library for transferring data with URL syntax, supporting HTTP, HTTPS, FTP, FTPS, GOPHER, TFTP, SCP, SFTP, SMB, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, SMTP, POP3, RTSP and RTMP. libcurl offers a myriad of powerful features</p> <p>Library home page: <a href=https://github.com/curl/curl.git>https://github.com/curl/curl.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/RG4421/openedr/commit/f991dbd97bf34917a1d61c43ef4b41832708779c">f991dbd97bf34917a1d61c43ef4b41832708779c</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vauth/ntlm.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>openedr/edrav2/eprj/curl/lib/vauth/ntlm.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> libcurl versions from 7.36.0 to before 7.64.0 is vulnerable to a heap buffer out-of-bounds read. The function handling incoming NTLM type-2 messages (`lib/vauth/ntlm.c:ntlm_decode_type2_target`) does not validate incoming data correctly and is subject to an integer overflow vulnerability. Using that overflow, a malicious or broken NTLM server could trick libcurl to accept a bad length + offset combination that would lead to a buffer read out-of-bounds. <p>Publish Date: 2019-02-06 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-16890>CVE-2018-16890</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16890">https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2018-16890</a></p> <p>Release Date: 2019-02-06</p> <p>Fix Resolution: curl-7_64_0</p> </p> </details> <p></p>
non_process
cve high detected in curlcurl autoclosed cve high severity vulnerability vulnerable library curlcurl a command line tool and library for transferring data with url syntax supporting http https ftp ftps gopher tftp scp sftp smb telnet dict ldap ldaps file imap smtp rtsp and rtmp libcurl offers a myriad of powerful features library home page a href found in head commit a href found in base branch main vulnerable source files openedr eprj curl lib vauth ntlm c openedr eprj curl lib vauth ntlm c vulnerability details libcurl versions from to before is vulnerable to a heap buffer out of bounds read the function handling incoming ntlm type messages lib vauth ntlm c ntlm decode target does not validate incoming data correctly and is subject to an integer overflow vulnerability using that overflow a malicious or broken ntlm server could trick libcurl to accept a bad length offset combination that would lead to a buffer read out of bounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution curl
0
150,764
23,711,055,281
IssuesEvent
2022-08-30 07:54:20
Budibase/budibase
https://api.github.com/repos/Budibase/budibase
opened
Handle preview in the builder better when there is no URL variable present for a filter (all data being returned)
enhancement design
### Discussed in https://github.com/Budibase/budibase/discussions/7443 <div type='discussions-op-text'> <sup>Originally posted by **R2bEEaton** August 24, 2022</sup> I have some complicated pages on my app, and a couple of them require data sources with high limits. In reality though, a normal user would enter the page and the source would be filtered down to a manageable limit. However, in the builder, the filters don't work because the URL variable isn't set, for example. In this case, the data provider returns all of the data which freezes the client and makes editing nigh on impossible sometimes. I found four solutions, none of which are ideal: 1. Making the change within 2-3 seconds before the builder loads the preview. 2. Change the data source to be limited (which you have to do within 2-3 seconds before the builder loads the preview). 3. Wait minutes for the page to fully load and become responsive. 4. Not make the edit you want because it's a pain to deal with. My proposed solution is a global toggle somewhere in the builder where you can disable the site preview. _Another option would be to be able to set URL variables within the builder, for example, or a default value for data source filters. This might be preferable for UX purposes._</div>
1.0
Handle preview in the builder better when there is no URL variable present for a filter (all data being returned) - ### Discussed in https://github.com/Budibase/budibase/discussions/7443 <div type='discussions-op-text'> <sup>Originally posted by **R2bEEaton** August 24, 2022</sup> I have some complicated pages on my app, and a couple of them require data sources with high limits. In reality though, a normal user would enter the page and the source would be filtered down to a manageable limit. However, in the builder, the filters don't work because the URL variable isn't set, for example. In this case, the data provider returns all of the data which freezes the client and makes editing nigh on impossible sometimes. I found four solutions, none of which are ideal: 1. Making the change within 2-3 seconds before the builder loads the preview. 2. Change the data source to be limited (which you have to do within 2-3 seconds before the builder loads the preview). 3. Wait minutes for the page to fully load and become responsive. 4. Not make the edit you want because it's a pain to deal with. My proposed solution is a global toggle somewhere in the builder where you can disable the site preview. _Another option would be to be able to set URL variables within the builder, for example, or a default value for data source filters. This might be preferable for UX purposes._</div>
non_process
handle preview in the builder better when there is no url variable present for a filter all data being returned discussed in originally posted by august i have some complicated pages on my app and a couple of them require data sources with high limits in reality though a normal user would enter the page and the source would be filtered down to a manageable limit however in the builder the filters don t work because the url variable isn t set for example in this case the data provider returns all of the data which freezes the client and makes editing nigh on impossible sometimes i found four solutions none of which are ideal making the change within seconds before the builder loads the preview change the data source to be limited which you have to do within seconds before the builder loads the preview wait minutes for the page to fully load and become responsive not make the edit you want because it s a pain to deal with my proposed solution is a global toggle somewhere in the builder where you can disable the site preview another option would be to be able to set url variables within the builder for example or a default value for data source filters this might be preferable for ux purposes
0
224,824
17,201,835,221
IssuesEvent
2021-07-17 11:58:54
microsoft/MixedRealityToolkit-Unity
https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity
opened
How to make a Hologram appear directly in fornt of the user's face?
Documentation
## Describe the issue From the documentation and solver overview it is not clear to me how I can make a Hologram to face always the user's face? E.g. no matter if the user is looking up to the ceiling or looking down to the floor, changing a Hologram's state suddenly to activate should make it appear always in front of the user so that the user does not have to search for the hologram. Which solver do I need to use and what settings? ## Feature area Documentation does not contain examples. ## Existing doc link https://docs.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/features/ux-building-blocks/solvers/solver?view=mrtkunity-2021-05
1.0
How to make a Hologram appear directly in fornt of the user's face? - ## Describe the issue From the documentation and solver overview it is not clear to me how I can make a Hologram to face always the user's face? E.g. no matter if the user is looking up to the ceiling or looking down to the floor, changing a Hologram's state suddenly to activate should make it appear always in front of the user so that the user does not have to search for the hologram. Which solver do I need to use and what settings? ## Feature area Documentation does not contain examples. ## Existing doc link https://docs.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/features/ux-building-blocks/solvers/solver?view=mrtkunity-2021-05
non_process
how to make a hologram appear directly in fornt of the user s face describe the issue from the documentation and solver overview it is not clear to me how i can make a hologram to face always the user s face e g no matter if the user is looking up to the ceiling or looking down to the floor changing a hologram s state suddenly to activate should make it appear always in front of the user so that the user does not have to search for the hologram which solver do i need to use and what settings feature area documentation does not contain examples existing doc link
0
584,647
17,460,944,608
IssuesEvent
2021-08-06 10:20:37
inverse-inc/packetfence
https://api.github.com/repos/inverse-inc/packetfence
reopened
Unreg date is not completely honored when doing registration on portal
Type: Bug Priority: Medium
**Describe the bug** When you register a node using a user which has an unregistration date or a source which defined an unregistration date, I noticed that time saved in DB doesn't take seconds into account. It means that if you want to register a node until `2020-06-22 11:00:00`, it will not works because unregistration date saved in DB will be `2020-06-22 00:00:00`. Consequently, your device will be deregistered at the beginning of the day. **To Reproduce** Steps to reproduce the behavior: 1. Create a local user with following actions: * Role: `gaming` * Unregistration date: `2020-06-22 11:00:00` 2. Try to register a device on captive portal using Username/Password login with local user => You got "Role gaming has been assigned to your device with unregistration date: 2020-06-22" Node saved in DB: ```sql select unregdate from node where mac LIKE "00:%"\G; *************************** 1. row *************************** unregdate: 2020-06-22 00:00:00 ``` **Expected behavior** Unregistration date saved in DB should be the one defined on user or in authentication rule. **Additional context** After I put `httpd.portal` in debug, I got following logs in `packetfence.log`: ``` Jun 18 09:36:01 pfcen7stable packetfence_httpd.portal: httpd.portal(23401) DEBUG: [mac:00:11:22:33:44:55] [null] Returning '2020-06-22 11:00:00' for action 'set_unreg_date' for username default (pf::authentication::match) [..] Jun 18 09:36:01 pfcen7stable packetfence_httpd.portal: httpd.portal(23401) DEBUG: [mac:00:11:22:33:44:55] new_node_info after auth module actions : $VAR1 = { 'pid' => 'default', 'time_balance' => undef, 'category' => 'guest', 'bandwidth_balance' => undef, 'unregdate' => '2020-06-22' }; (captiveportal::PacketFence::DynamicRouting::Module::Authentication::execute_actions) ``` It seems that last part of `unregdate` is truncated in the captive portal code.
1.0
Unreg date is not completely honored when doing registration on portal - **Describe the bug** When you register a node using a user which has an unregistration date or a source which defined an unregistration date, I noticed that time saved in DB doesn't take seconds into account. It means that if you want to register a node until `2020-06-22 11:00:00`, it will not works because unregistration date saved in DB will be `2020-06-22 00:00:00`. Consequently, your device will be deregistered at the beginning of the day. **To Reproduce** Steps to reproduce the behavior: 1. Create a local user with following actions: * Role: `gaming` * Unregistration date: `2020-06-22 11:00:00` 2. Try to register a device on captive portal using Username/Password login with local user => You got "Role gaming has been assigned to your device with unregistration date: 2020-06-22" Node saved in DB: ```sql select unregdate from node where mac LIKE "00:%"\G; *************************** 1. row *************************** unregdate: 2020-06-22 00:00:00 ``` **Expected behavior** Unregistration date saved in DB should be the one defined on user or in authentication rule. **Additional context** After I put `httpd.portal` in debug, I got following logs in `packetfence.log`: ``` Jun 18 09:36:01 pfcen7stable packetfence_httpd.portal: httpd.portal(23401) DEBUG: [mac:00:11:22:33:44:55] [null] Returning '2020-06-22 11:00:00' for action 'set_unreg_date' for username default (pf::authentication::match) [..] Jun 18 09:36:01 pfcen7stable packetfence_httpd.portal: httpd.portal(23401) DEBUG: [mac:00:11:22:33:44:55] new_node_info after auth module actions : $VAR1 = { 'pid' => 'default', 'time_balance' => undef, 'category' => 'guest', 'bandwidth_balance' => undef, 'unregdate' => '2020-06-22' }; (captiveportal::PacketFence::DynamicRouting::Module::Authentication::execute_actions) ``` It seems that last part of `unregdate` is truncated in the captive portal code.
non_process
unreg date is not completely honored when doing registration on portal describe the bug when you register a node using a user which has an unregistration date or a source which defined an unregistration date i noticed that time saved in db doesn t take seconds into account it means that if you want to register a node until it will not works because unregistration date saved in db will be consequently your device will be deregistered at the beginning of the day to reproduce steps to reproduce the behavior create a local user with following actions role gaming unregistration date try to register a device on captive portal using username password login with local user you got role gaming has been assigned to your device with unregistration date node saved in db sql select unregdate from node where mac like g row unregdate expected behavior unregistration date saved in db should be the one defined on user or in authentication rule additional context after i put httpd portal in debug i got following logs in packetfence log jun packetfence httpd portal httpd portal debug returning for action set unreg date for username default pf authentication match jun packetfence httpd portal httpd portal debug new node info after auth module actions pid default time balance undef category guest bandwidth balance undef unregdate captiveportal packetfence dynamicrouting module authentication execute actions it seems that last part of unregdate is truncated in the captive portal code
0
400
2,847,696,595
IssuesEvent
2015-05-29 18:26:50
mitchellh/packer
https://api.github.com/repos/mitchellh/packer
closed
'No common prefix for achiving files' fails on windows while a common prefix is available
bug post-processor/atlas windows
Hi, my build fails with: * Post-processor failed: No common prefix for achiving files: [output-virtualbox-iso\packer-virtualbox-iso-1422310773-disk1.vmdk output-virtualbox-iso\packer-virtualbox-iso-1422310773.ovf] I suspect this is happening because at packer/post-processor/atlas/util.go, line 29 only checks a for a forward slash '/', while on windows the paths are with backslashes '\'.
1.0
'No common prefix for achiving files' fails on windows while a common prefix is available - Hi, my build fails with: * Post-processor failed: No common prefix for achiving files: [output-virtualbox-iso\packer-virtualbox-iso-1422310773-disk1.vmdk output-virtualbox-iso\packer-virtualbox-iso-1422310773.ovf] I suspect this is happening because at packer/post-processor/atlas/util.go, line 29 only checks a for a forward slash '/', while on windows the paths are with backslashes '\'.
process
no common prefix for achiving files fails on windows while a common prefix is available hi my build fails with post processor failed no common prefix for achiving files i suspect this is happening because at packer post processor atlas util go line only checks a for a forward slash while on windows the paths are with backslashes
1