Column schema (name, dtype, range or class count):

Unnamed: 0      int64     0 to 832k
id              float64   2.49B to 32.1B
type            string    1 distinct value
created_at      string    length 19 to 19
repo            string    length 7 to 112
repo_url        string    length 36 to 141
action          string    3 distinct values
title           string    length 1 to 744
labels          string    length 4 to 574
body            string    length 9 to 211k
index           string    10 distinct values
text_combine    string    length 96 to 211k
label           string    2 distinct values
text            string    length 96 to 188k
binary_label    int64     0 to 1
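The schema above maps directly onto a pandas DataFrame. As a minimal sketch (toy rows stand in for the real file, whose name is not given here), the sample rows suggest `binary_label` is the integer encoding of the two `label` classes, with process mapped to 1 and non_process to 0:

```python
import pandas as pd

# Toy rows standing in for the real dataset (not bundled here);
# repo/label values are taken from the sample records below.
df = pd.DataFrame(
    {
        "repo": ["tikv/tikv", "elastic/kibana"],
        "label": ["process", "non_process"],
        "binary_label": [1, 0],
    }
)

# In every sample row, binary_label == 1 exactly when label == "process".
encoded = (df["label"] == "process").astype(int)
assert list(encoded) == list(df["binary_label"])
```

This consistency holds for all complete records shown below, though the mapping is inferred from the samples rather than stated by the dataset.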

Unnamed: 0: 13,354
id: 3,329,864,070
type: IssuesEvent
created_at: 2015-11-11 06:08:57
repo: connolly/desc
repo_url: https://api.github.com/repos/connolly/desc
action: opened
title: WL3-DC2-DP0:T2
labels: DC2 DC2 DP: Precursor survey data on which to test the shear pipeline DP Images to shear catalog I wl
body:
If necessary pre-process the data to make it usable with the DM and WL pipelines. (Note: This may not require any work, as LSSTDM intends their code to work with a variety of data formats.)
index: 1.0
text_combine:
WL3-DC2-DP0:T2 - If necessary pre-process the data to make it usable with the DM and WL pipelines. (Note: This may not require any work, as LSSTDM intends their code to work with a variety of data formats.)
label: non_process
text:
if necessary pre process the data to make it usable with the dm and wl pipelines note this may not require any work as lsstdm intends their code to work with a variety of data formats
binary_label: 0

Unnamed: 0: 4
id: 2,491,245,002
type: IssuesEvent
created_at: 2015-01-03 05:45:41
repo: AAndharia/ZIMS-School-Mgmt
repo_url: https://api.github.com/repos/AAndharia/ZIMS-School-Mgmt
action: opened
title: Student Insurance Cover
labels: Initial Requirement New
body:
Following information should be captured for Students who are insured  Year  Charter ID  Student ID  Policy Reference  Date Insured  Insurance Broker  Premium  Description  Period  Summary Insured Details
index: 1.0
text_combine:
Student Insurance Cover - Following information should be captured for Students who are insured  Year  Charter ID  Student ID  Policy Reference  Date Insured  Insurance Broker  Premium  Description  Period  Summary Insured Details
label: non_process
text:
student insurance cover following information should be captured for students who are insured  year  charter id  student id  policy reference  date insured  insurance broker  premium  description  period  summary insured details
binary_label: 0

Unnamed: 0: 295,335
id: 25,468,814,563
type: IssuesEvent
created_at: 2022-11-25 08:15:57
repo: atoptima/Coluna.jl
repo_url: https://api.github.com/repos/atoptima/Coluna.jl
action: closed
title: Model parser for tests
labels: tests
body:
We need to build a simple formulation or a reformulation for several algorithms tests. However, even if the formulation and the reformulation are simple, they are difficult to write and read. See for instance https://github.com/atoptima/Coluna.jl/blob/2e24949f459b00d47eddebb81ae3cdabbad0c1aa/test/unit/Algorithm/colgen.jl#L1-L40 In MathOptInterface.jl, they created a parser: https://github.com/jump-dev/MathOptInterface.jl/blob/b34036eabd91affcfd2b9eb26df115d59da800e2/src/Utilities/parser.jl It would be nice to create something on top of their parser to create a reformulation. For instance (syntax should be adapted to MOI parser if we decide to build on top of it): ``` """ master: min x + y + z1 + z2 st x + y + z1 + 2*z2 >= 1 y >= 2 dw_sp: x + z1 <= 3 dw_sp: x + 3z2 <= 2 representatives: x """ ``` where `y` is a master variable, `x` a dw sp representative, `z1` a variable of the first subproblem, and `z2` a variable of the second subproblem. Advantage: test easier to understand Disadvantage: it seems to be quite a lot of work & we have to make sure there is no bug in the parser.
index: 1.0
text_combine:
Model parser for tests - We need to build a simple formulation or a reformulation for several algorithms tests. However, even if the formulation and the reformulation are simple, they are difficult to write and read. See for instance https://github.com/atoptima/Coluna.jl/blob/2e24949f459b00d47eddebb81ae3cdabbad0c1aa/test/unit/Algorithm/colgen.jl#L1-L40 In MathOptInterface.jl, they created a parser: https://github.com/jump-dev/MathOptInterface.jl/blob/b34036eabd91affcfd2b9eb26df115d59da800e2/src/Utilities/parser.jl It would be nice to create something on top of their parser to create a reformulation. For instance (syntax should be adapted to MOI parser if we decide to build on top of it): ``` """ master: min x + y + z1 + z2 st x + y + z1 + 2*z2 >= 1 y >= 2 dw_sp: x + z1 <= 3 dw_sp: x + 3z2 <= 2 representatives: x """ ``` where `y` is a master variable, `x` a dw sp representative, `z1` a variable of the first subproblem, and `z2` a variable of the second subproblem. Advantage: test easier to understand Disadvantage: it seems to be quite a lot of work & we have to make sure there is no bug in the parser.
label: non_process
text:
model parser for tests we need to build a simple formulation or a reformulation for several algorithms tests however even if the formulation and the reformulation are simple they are difficult to write and read see for instance in mathoptinterface jl they created a parser it would be nice to create something on top of their parser to create a reformulation for instance syntax should be adapted to moi parser if we decide to build on top of it master min x y st x y y dw sp x dw sp x representatives x where y is a master variable x a dw sp representative a variable of the first subproblem and a variable of the second subproblem advantage test easier to understand disadvantage it seems to be quite a lot of work we have to make sure there is no bug in the parser
binary_label: 0

Unnamed: 0: 10,133
id: 13,044,162,397
type: IssuesEvent
created_at: 2020-07-29 03:47:32
repo: tikv/tikv
repo_url: https://api.github.com/repos/tikv/tikv
action: closed
title: UCP: Migrate scalar function `JsonValidOthersSig` from TiDB
labels: challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor
body:
## Description Port the scalar function `JsonValidOthersSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
index: 2.0
text_combine:
UCP: Migrate scalar function `JsonValidOthersSig` from TiDB - ## Description Port the scalar function `JsonValidOthersSig` from TiDB to coprocessor. ## Score * 50 ## Mentor(s) * @iosmanthus ## Recommended Skills * Rust programming ## Learning Materials Already implemented expressions ported from TiDB - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr) - https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
label: process
text:
ucp migrate scalar function jsonvalidotherssig from tidb description port the scalar function jsonvalidotherssig from tidb to coprocessor score mentor s iosmanthus recommended skills rust programming learning materials already implemented expressions ported from tidb
binary_label: 1

Unnamed: 0: 17,957
id: 23,960,847,059
type: IssuesEvent
created_at: 2022-09-12 18:58:00
repo: JeroenMathon/NeosVR-Research-Initiative
repo_url: https://api.github.com/repos/JeroenMathon/NeosVR-Research-Initiative
action: opened
title: Process information from Neos-Archive
labels: help wanted processing
body:
Process and archive information from the Neos-Archive, we are looking for anything that is relevant to the goal of this research, filtering out the messages that are not related and pasting the messages that are into its own folder and text file for further analyzing, The first step is obtaining information that is related from this archive Just put the information in chronological order with a reference to the date of the message and channel it was posted in in a text file. This text file will later be used as a source reference for other information
index: 1.0
text_combine:
Process information from Neos-Archive - Process and archive information from the Neos-Archive, we are looking for anything that is relevant to the goal of this research, filtering out the messages that are not related and pasting the messages that are into its own folder and text file for further analyzing, The first step is obtaining information that is related from this archive Just put the information in chronological order with a reference to the date of the message and channel it was posted in in a text file. This text file will later be used as a source reference for other information
label: process
text:
process information from neos archive process and archive information from the neos archive we are looking for anything that is relevant to the goal of this research filtering out the messages that are not related and pasting the messages that are into its own folder and text file for further analyzing the first step is obtaining information that is related from this archive just put the information in chronological order with a reference to the date of the message and channel it was posted in in a text file this text file will later be used as a source reference for other information
binary_label: 1

Unnamed: 0: 16,257
id: 20,817,692,178
type: IssuesEvent
created_at: 2022-03-18 12:15:26
repo: tushushu/ulist
repo_url: https://api.github.com/repos/tushushu/ulist
action: closed
title: Implement `endswith` method for `StringList`
labels: data processing string
body:
Example: ```Python >>> import ulist as ul >>> arr = ul.from_seq(["abc", "abcd", "bcd"], dtype='string') >>> arr.ends_with('bc') [True, False, False] ```
index: 1.0
text_combine:
Implement `endswith` method for `StringList` - Example: ```Python >>> import ulist as ul >>> arr = ul.from_seq(["abc", "abcd", "bcd"], dtype='string') >>> arr.ends_with('bc') [True, False, False] ```
label: process
text:
implement endswith method for stringlist example python import ulist as ul arr ul from seq dtype string arr ends with bc
binary_label: 1

Unnamed: 0: 222,206
id: 24,691,484,185
type: IssuesEvent
created_at: 2022-10-19 08:52:45
repo: elastic/kibana
repo_url: https://api.github.com/repos/elastic/kibana
action: closed
title: [Security Solution] Entity Analytics - anomalies throws errors and doesn't work on an empty cluster
labels: bug Team:Threat Hunting Team: SecuritySolution Team:Threat Hunting:Explore
body:
**How to reproduce it:** * Create a local ES cluster. * Add some events with packetbeat. * Open Entity analytics page It throws errors and doesn't work
index: True
text_combine:
[Security Solution] Entity Analytics - anomalies throws errors and doesn't work on an empty cluster - **How to reproduce it:** * Create a local ES cluster. * Add some events with packetbeat. * Open Entity analytics page It throws errors and doesn't work
label: non_process
text:
entity analytics anomalies throws errors and doesn t work on an empty cluster how to reproduce it create a local es cluster add some events with packetbeat open entity analytics page it throws errors and doesn t work
binary_label: 0

Unnamed: 0: 8,289
id: 11,454,793,560
type: IssuesEvent
created_at: 2020-02-06 17:46:19
repo: googleapis/google-cloud-cpp-common
repo_url: https://api.github.com/repos/googleapis/google-cloud-cpp-common
action: closed
title: release google-cloud-cpp-common when #129 is resolved
labels: type: process
body:
https://github.com/googleapis/google-cloud-cpp-spanner/issues/1171 (and 1172) will (*) require a version of the `CompletionQueue` with #129 fixed, so we'll need to re-release `common` (a point release) to be able to use it in `spanner`. (*) is because I realized I _may_ be able to just work around the current `CompletionQueue` behavior in `SessionPool` without requiring a new release.
index: 1.0
text_combine:
release google-cloud-cpp-common when #129 is resolved - https://github.com/googleapis/google-cloud-cpp-spanner/issues/1171 (and 1172) will (*) require a version of the `CompletionQueue` with #129 fixed, so we'll need to re-release `common` (a point release) to be able to use it in `spanner`. (*) is because I realized I _may_ be able to just work around the current `CompletionQueue` behavior in `SessionPool` without requiring a new release.
label: process
text:
release google cloud cpp common when is resolved and will require a version of the completionqueue with fixed so we ll need to re release common a point release to be able to use it in spanner is because i realized i may be able to just work around the current completionqueue behavior in sessionpool without requiring a new release
binary_label: 1

Unnamed: 0: 6,144
id: 9,014,004,966
type: IssuesEvent
created_at: 2019-02-05 21:07:02
repo: Jeffail/benthos
repo_url: https://api.github.com/repos/Jeffail/benthos
action: closed
title: Case statement processor
labels: enhancement processors
body:
I have a use case where I want to normalise some data from a JSON blob that looks something like this: ```json "field": { "type": "value", "a": "b" }, ``` The contents of "field" depends on the "type" value. We want to normalise the field to have a single, consistent value based on the "type" and some other information. Currently we do this in python with a dictionary of lambdas ```python def normalise_field(record): return { 'TypeA': lambda: "somevalue", 'TypeB': lambda: "someothervalue", }[record["field"]["type"]]() ``` Alternatively this could be done in a case statement. Trying to implement this in Benthos I created a rather complex nested conditional processor. This isn't very succinct and is hard to read. Is there a better way of doing this? Should there be a "switch" processor that enables you to execute arbitrary processors depending on the value of a field?
index: 1.0
text_combine:
Case statement processor - I have a use case where I want to normalise some data from a JSON blob that looks something like this: ```json "field": { "type": "value", "a": "b" }, ``` The contents of "field" depends on the "type" value. We want to normalise the field to have a single, consistent value based on the "type" and some other information. Currently we do this in python with a dictionary of lambdas ```python def normalise_field(record): return { 'TypeA': lambda: "somevalue", 'TypeB': lambda: "someothervalue", }[record["field"]["type"]]() ``` Alternatively this could be done in a case statement. Trying to implement this in Benthos I created a rather complex nested conditional processor. This isn't very succinct and is hard to read. Is there a better way of doing this? Should there be a "switch" processor that enables you to execute arbitrary processors depending on the value of a field?
label: process
text:
case statement processor i have a use case where i want to normalise some data from a json blob that looks something like this json field type value a b the contents of field depends on the type value we want to normalise the field to have a single consistent value based on the type and some other information currently we do this in python with a dictionary of lambdas python def normalise field record return typea lambda somevalue typeb lambda someothervalue alternatively this could be done in a case statement trying to implement this in benthos i created a rather complex nested conditional processor this isn t very succinct and is hard to read is there a better way of doing this should there be a switch processor that enables you to execute arbitrary processors depending on the value of a field
binary_label: 1

Unnamed: 0: 198,031
id: 6,968,866,947
type: IssuesEvent
created_at: 2017-12-11 00:54:08
repo: Backdash/MonikaModDev
repo_url: https://api.github.com/repos/Backdash/MonikaModDev
action: closed
title: [Suggestion] Have Monika surrender instead of draw in Chess
labels: low priority suggestion
body:
Maybe if Monika saw no way of winning (e.g. only her King remaining) she'd surrender; or at least on the lower levels. As her difficulty level increases, she might not surrender simply to force a draw, but otherwise it'd make for a change to see her surrender in an unwinnable scenario where both players will just go on endlessly.
index: 1.0
text_combine:
[Suggestion] Have Monika surrender instead of draw in Chess - Maybe if Monika saw no way of winning (e.g. only her King remaining) she'd surrender; or at least on the lower levels. As her difficulty level increases, she might not surrender simply to force a draw, but otherwise it'd make for a change to see her surrender in an unwinnable scenario where both players will just go on endlessly.
label: non_process
text:
have monika surrender instead of draw in chess maybe if monika saw no way of winning e g only her king remaining she d surrender or at least on the lower levels as her difficulty level increases she might not surrender simply to force a draw but otherwise it d make for a change to see her surrender in an unwinnable scenario where both players will just go on endlessly
binary_label: 0

Unnamed: 0: 47,789
id: 2,985,240,226
type: IssuesEvent
created_at: 2015-07-18 20:49:52
repo: openshift/origin
repo_url: https://api.github.com/repos/openshift/origin
action: closed
title: [beta4] creating an instance of a database template doesn't result in any deployments
labels: area/usability kind/question priority/P2
body:
template JSON: ```JSON { "kind": "Template", "apiVersion": "v1beta3", "metadata": { "name": "mysql-ephemeral", "creationTimestamp": null, "annotations": { "description": "MySQL database service, without persistent storage. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing", "iconClass": "icon-mysql-database", "tags": "database,mysql" } }, "objects": [ { "kind": "Service", "apiVersion": "v1beta3", "metadata": { "name": "mysql", "creationTimestamp": null }, "spec": { "ports": [ { "name": "mysql", "protocol": "TCP", "port": 3306, "targetPort": 3306, "nodePort": 0 } ], "selector": { "name": "mysql" }, "portalIP": "", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } }, { "kind": "DeploymentConfig", "apiVersion": "v1beta3", "metadata": { "name": "mysql", "creationTimestamp": null }, "spec": { "strategy": { "type": "Recreate", "resources": {} }, "triggers": [ { "type": "ImageChange", "imageChangeParams": { "automatic": true, "containerNames": [ "mysql" ], "from": { "kind": "ImageStreamTag", "name": "mysql:latest" }, "lastTriggeredImage": "" } }, { "type": "ConfigChange" } ], "replicas": 1, "selector": { "name": "mysql" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "name": "mysql" } }, "spec": { "containers": [ { "name": "mysql", "image": "registry.access.redhat.com/openshift3_beta/mysql-55-rhel7", "ports": [ { "containerPort": 3306, "protocol": "TCP" } ], "env": [ { "name": "MYSQL_USER", "value": "${MYSQL_USER}" }, { "name": "MYSQL_PASSWORD", "value": "${MYSQL_PASSWORD}" }, { "name": "MYSQL_DATABASE", "value": "${MYSQL_DATABASE}" } ], "resources": {}, "terminationMessagePath": "/dev/termination-log", "imagePullPolicy": "IfNotPresent", "capabilities": {}, "securityContext": { "capabilities": {}, "privileged": false } } ], "restartPolicy": "Always", "dnsPolicy": "ClusterFirst" } } }, "status": {} } ], "parameters": [ { "name": "MYSQL_USER", "description": "Username for 
MySQL user that will be used for accessing the database", "generate": "expression", "from": "user[A-Z0-9]{3}" }, { "name": "MYSQL_PASSWORD", "description": "Password for the MySQL user", "generate": "expression", "from": "[a-zA-Z0-9]{16}" }, { "name": "MYSQL_DATABASE", "description": "Database name", "value": "sampledb" } ], "labels": { "template": "mysql-ephemeral-template" } } ``` ``` osc v0.5.2.2-26-g701be15 kubernetes v0.17.1-804-g496be63 ``` resulting DC: ```YAML apiVersion: v1beta3 kind: DeploymentConfig metadata: creationTimestamp: 2015-06-08T17:48:52Z labels: template: mysql-ephemeral-template name: mysql namespace: wiring resourceVersion: "3467" selfLink: /osapi/v1beta3/namespaces/wiring/deploymentconfigs/mysql uid: a0285801-0e06-11e5-b85c-525400b33d1d spec: replicas: 1 selector: name: mysql strategy: resources: {} type: Recreate template: metadata: creationTimestamp: null labels: name: mysql spec: containers: - capabilities: {} env: - name: MYSQL_USER value: user5OS - name: MYSQL_PASSWORD value: RJWN0yPj - name: MYSQL_DATABASE value: root image: registry.access.redhat.com/openshift3_beta/mysql-55-rhel7 imagePullPolicy: IfNotPresent name: mysql ports: - containerPort: 3306 protocol: TCP resources: {} securityContext: capabilities: {} privileged: false terminationMessagePath: /dev/termination-log dnsPolicy: ClusterFirst restartPolicy: Always serviceAccount: "" triggers: - imageChangeParams: automatic: true containerNames: - mysql from: kind: ImageStreamTag name: mysql:latest lastTriggeredImage: "" type: ImageChange - type: ConfigChange status: {} ``` no rcs: ``` [alice@ose3-master beta4]$ osc get rc CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS frontend-1 ruby-helloworld 172.30.118.110:5000/wiring/origin-ruby-sample@sha256:2708c8e3aa0e8c2e76ba3fd72ccf8133c0aa9ca4db2fba61b2e6c38d7078937b deployment=frontend-1,deploymentconfig=frontend,name=frontend 1 ```
index: 1.0
text_combine:
[beta4] creating an instance of a database template doesn't result in any deployments - template JSON: ```JSON { "kind": "Template", "apiVersion": "v1beta3", "metadata": { "name": "mysql-ephemeral", "creationTimestamp": null, "annotations": { "description": "MySQL database service, without persistent storage. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing", "iconClass": "icon-mysql-database", "tags": "database,mysql" } }, "objects": [ { "kind": "Service", "apiVersion": "v1beta3", "metadata": { "name": "mysql", "creationTimestamp": null }, "spec": { "ports": [ { "name": "mysql", "protocol": "TCP", "port": 3306, "targetPort": 3306, "nodePort": 0 } ], "selector": { "name": "mysql" }, "portalIP": "", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } }, { "kind": "DeploymentConfig", "apiVersion": "v1beta3", "metadata": { "name": "mysql", "creationTimestamp": null }, "spec": { "strategy": { "type": "Recreate", "resources": {} }, "triggers": [ { "type": "ImageChange", "imageChangeParams": { "automatic": true, "containerNames": [ "mysql" ], "from": { "kind": "ImageStreamTag", "name": "mysql:latest" }, "lastTriggeredImage": "" } }, { "type": "ConfigChange" } ], "replicas": 1, "selector": { "name": "mysql" }, "template": { "metadata": { "creationTimestamp": null, "labels": { "name": "mysql" } }, "spec": { "containers": [ { "name": "mysql", "image": "registry.access.redhat.com/openshift3_beta/mysql-55-rhel7", "ports": [ { "containerPort": 3306, "protocol": "TCP" } ], "env": [ { "name": "MYSQL_USER", "value": "${MYSQL_USER}" }, { "name": "MYSQL_PASSWORD", "value": "${MYSQL_PASSWORD}" }, { "name": "MYSQL_DATABASE", "value": "${MYSQL_DATABASE}" } ], "resources": {}, "terminationMessagePath": "/dev/termination-log", "imagePullPolicy": "IfNotPresent", "capabilities": {}, "securityContext": { "capabilities": {}, "privileged": false } } ], "restartPolicy": "Always", "dnsPolicy": "ClusterFirst" } } }, 
"status": {} } ], "parameters": [ { "name": "MYSQL_USER", "description": "Username for MySQL user that will be used for accessing the database", "generate": "expression", "from": "user[A-Z0-9]{3}" }, { "name": "MYSQL_PASSWORD", "description": "Password for the MySQL user", "generate": "expression", "from": "[a-zA-Z0-9]{16}" }, { "name": "MYSQL_DATABASE", "description": "Database name", "value": "sampledb" } ], "labels": { "template": "mysql-ephemeral-template" } } ``` ``` osc v0.5.2.2-26-g701be15 kubernetes v0.17.1-804-g496be63 ``` resulting DC: ```YAML apiVersion: v1beta3 kind: DeploymentConfig metadata: creationTimestamp: 2015-06-08T17:48:52Z labels: template: mysql-ephemeral-template name: mysql namespace: wiring resourceVersion: "3467" selfLink: /osapi/v1beta3/namespaces/wiring/deploymentconfigs/mysql uid: a0285801-0e06-11e5-b85c-525400b33d1d spec: replicas: 1 selector: name: mysql strategy: resources: {} type: Recreate template: metadata: creationTimestamp: null labels: name: mysql spec: containers: - capabilities: {} env: - name: MYSQL_USER value: user5OS - name: MYSQL_PASSWORD value: RJWN0yPj - name: MYSQL_DATABASE value: root image: registry.access.redhat.com/openshift3_beta/mysql-55-rhel7 imagePullPolicy: IfNotPresent name: mysql ports: - containerPort: 3306 protocol: TCP resources: {} securityContext: capabilities: {} privileged: false terminationMessagePath: /dev/termination-log dnsPolicy: ClusterFirst restartPolicy: Always serviceAccount: "" triggers: - imageChangeParams: automatic: true containerNames: - mysql from: kind: ImageStreamTag name: mysql:latest lastTriggeredImage: "" type: ImageChange - type: ConfigChange status: {} ``` no rcs: ``` [alice@ose3-master beta4]$ osc get rc CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS frontend-1 ruby-helloworld 172.30.118.110:5000/wiring/origin-ruby-sample@sha256:2708c8e3aa0e8c2e76ba3fd72ccf8133c0aa9ca4db2fba61b2e6c38d7078937b deployment=frontend-1,deploymentconfig=frontend,name=frontend 1 ```
label: non_process
text:
creating an instance of a database template doesn t result in any deployments template json json kind template apiversion metadata name mysql ephemeral creationtimestamp null annotations description mysql database service without persistent storage warning any data stored will be lost upon pod destruction only use this template for testing iconclass icon mysql database tags database mysql objects kind service apiversion metadata name mysql creationtimestamp null spec ports name mysql protocol tcp port targetport nodeport selector name mysql portalip type clusterip sessionaffinity none status loadbalancer kind deploymentconfig apiversion metadata name mysql creationtimestamp null spec strategy type recreate resources triggers type imagechange imagechangeparams automatic true containernames mysql from kind imagestreamtag name mysql latest lasttriggeredimage type configchange replicas selector name mysql template metadata creationtimestamp null labels name mysql spec containers name mysql image registry access redhat com beta mysql ports containerport protocol tcp env name mysql user value mysql user name mysql password value mysql password name mysql database value mysql database resources terminationmessagepath dev termination log imagepullpolicy ifnotpresent capabilities securitycontext capabilities privileged false restartpolicy always dnspolicy clusterfirst status parameters name mysql user description username for mysql user that will be used for accessing the database generate expression from user name mysql password description password for the mysql user generate expression from name mysql database description database name value sampledb labels template mysql ephemeral template osc kubernetes resulting dc yaml apiversion kind deploymentconfig metadata creationtimestamp labels template mysql ephemeral template name mysql namespace wiring resourceversion selflink osapi namespaces wiring deploymentconfigs mysql uid spec replicas selector name mysql strategy 
resources type recreate template metadata creationtimestamp null labels name mysql spec containers capabilities env name mysql user value name mysql password value name mysql database value root image registry access redhat com beta mysql imagepullpolicy ifnotpresent name mysql ports containerport protocol tcp resources securitycontext capabilities privileged false terminationmessagepath dev termination log dnspolicy clusterfirst restartpolicy always serviceaccount triggers imagechangeparams automatic true containernames mysql from kind imagestreamtag name mysql latest lasttriggeredimage type imagechange type configchange status no rcs osc get rc controller container s image s selector replicas frontend ruby helloworld wiring origin ruby sample deployment frontend deploymentconfig frontend name frontend
binary_label: 0

Unnamed: 0: 15,454
id: 19,667,614,637
type: IssuesEvent
created_at: 2022-01-11 01:15:29
repo: Project-Reclass/toynet-flask
repo_url: https://api.github.com/repos/Project-Reclass/toynet-flask
action: closed
title: Separate out CI checks into different steps
labels: help wanted process improvement
body:
Figuring out why CI is failing can be challenging for new software engineers. By splitting up successful builds, lint errors, and pytests into separate steps, we help bring visibility to the issue before investigation ![image](https://user-images.githubusercontent.com/4872808/140189574-9584c0ec-1caa-4f90-aa55-e86c77151287.png)
index: 1.0
text_combine:
Separate out CI checks into different steps - Figuring out why CI is failing can be challenging for new software engineers. By splitting up successful builds, lint errors, and pytests into separate steps, we help bring visibility to the issue before investigation ![image](https://user-images.githubusercontent.com/4872808/140189574-9584c0ec-1caa-4f90-aa55-e86c77151287.png)
label: process
text:
separate out ci checks into different steps figuring out why ci is failing can be challenging for new software engineers by splitting up successful builds lint errors and pytests into separate steps we help bring visibility to the issue before investigation
binary_label: 1

Unnamed: 0: 1,302
id: 3,845,808,212
type: IssuesEvent
created_at: 2016-04-05 00:06:53
repo: moxie-leean/ng-cms
repo_url: https://api.github.com/repos/moxie-leean/ng-cms
action: closed
title: Add a default route to home
labels: process
body:
When the browser points to base_url/ (not base_url/#/), it will display not-found instead of home view. This (or similar) should be added to the router to fix it: ```$urlRouterProvider.when('', '/');```
index: 1.0
text_combine:
Add a default route to home - When the browser points to base_url/ (not base_url/#/), it will display not-found instead of home view. This (or similar) should be added to the router to fix it: ```$urlRouterProvider.when('', '/');```
label: process
text:
add a default route to home when the browser points to base url not base url it will display not found instead of home view this or similar should be added to the router to fix it urlrouterprovider when
binary_label: 1

Unnamed: 0: 163,636
id: 25,850,415,733
type: IssuesEvent
created_at: 2022-12-13 10:00:50
repo: sourcegraph/sourcegraph
repo_url: https://api.github.com/repos/sourcegraph/sourcegraph
action: closed
title: webhooks: Basic CRUD UI
labels: design Epic webhooks team/repo-management
body:
This is a supertask for webhooks site admin page implementation. Pages should be hidden by default so that we can iterate on them without impacting customers. ### Note [Figma mockups](https://www.figma.com/file/ngPEwLqRVFAfXQC7lE0WLz/Webhooks?node-id=0%3A1) are not final and subject to change. ### Subtasks (new tasks can be added) - [x] #43486 - [x] #43487 - [x] #43488 - [x] #43490 - [x] #44000 - [x] #44135 - [x] #44696 - [x] #44720 - [x] #44730 - [x] #44758 - [x] #45150 - [x] #45206 - [x] #45237 - [x] #45325 - [x] #45401 /cc @jplahn @ryphil
index: 1.0
text_combine:
webhooks: Basic CRUD UI - This is a supertask for webhooks site admin page implementation. Pages should be hidden by default so that we can iterate on them without impacting customers. ### Note [Figma mockups](https://www.figma.com/file/ngPEwLqRVFAfXQC7lE0WLz/Webhooks?node-id=0%3A1) are not final and subject to change. ### Subtasks (new tasks can be added) - [x] #43486 - [x] #43487 - [x] #43488 - [x] #43490 - [x] #44000 - [x] #44135 - [x] #44696 - [x] #44720 - [x] #44730 - [x] #44758 - [x] #45150 - [x] #45206 - [x] #45237 - [x] #45325 - [x] #45401 /cc @jplahn @ryphil
label: non_process
text:
webhooks basic crud ui this is a supertask for webhooks site admin page implementation pages should be hidden by default so that we can iterate on them without impacting customers note are not final and subject to change subtasks new tasks can be added cc jplahn ryphil
binary_label: 0

Unnamed: 0: 3,512
id: 6,561,318,008
type: IssuesEvent
created_at: 2017-09-07 12:54:08
repo: rubberduck-vba/Rubberduck
repo_url: https://api.github.com/repos/rubberduck-vba/Rubberduck
action: closed
title: RD's Rename doesn't capture classes' references to external Enums
labels: bug critical parse-tree-processing
body:
I have a number of public Enums. And some classes that reference them. When using RD's rename feature on said Enums, it doesn't rename any of the references within class modules.
index: 1.0
text_combine:
RD's Rename doesn't capture classes' references to external Enums - I have a number of public Enums. And some classes that reference them. When using RD's rename feature on said Enums, it doesn't rename any of the references within class modules.
label: process
text:
rd s rename doesn t capture classes references to external enums i have a number of public enums and some classes that reference them when using rd s rename feature on said enums it doesn t rename any of the references within class modules
binary_label: 1

Unnamed: 0: 190,491
id: 14,547,892,858
type: IssuesEvent
created_at: 2020-12-15 23:57:54
repo: mozilla/foundation.mozilla.org
repo_url: https://api.github.com/repos/mozilla/foundation.mozilla.org
action: opened
title: PNI follow-up: make PNI test data product images use our "fake product" images
labels: Buyer's Guide 🛍 backend engineering stretch testing unplanned
body:
Follow-up to https://github.com/mozilla/foundation.mozilla.org/issues/5874: it would be a nice if we could use the old product images, which means: - [ ] figure out how to get the "product images" in our static asset dir into the CMS as wagtail images - [ ] switch from `ImageFactory()` to picking a random (using `choice`) wagtail `Image` as target
index: 1.0
text_combine:
PNI follow-up: make PNI test data product images use our "fake product" images - Follow-up to https://github.com/mozilla/foundation.mozilla.org/issues/5874: it would be a nice if we could use the old product images, which means: - [ ] figure out how to get the "product images" in our static asset dir into the CMS as wagtail images - [ ] switch from `ImageFactory()` to picking a random (using `choice`) wagtail `Image` as target
non_process
pni follow up make pni test data product images use our fake product images follow up to it would be a nice if we could use the old product images which means figure out how to get the product images in our static asset dir into the cms as wagtail images switch from imagefactory to picking a random using choice wagtail image as target
0
41,719
6,928,131,664
IssuesEvent
2017-12-01 02:44:47
vmware/docker-volume-vsphere
https://api.github.com/repos/vmware/docker-volume-vsphere
closed
Validate vDVS against latest stable Docker release - 17.9
invalid kind/documentation P0 wontfix
We need update the documentation after we successfully validate this.
1.0
Validate vDVS against latest stable Docker release - 17.9 - We need update the documentation after we successfully validate this.
non_process
validate vdvs against latest stable docker release we need update the documentation after we successfully validate this
0
8,277
2,611,486,015
IssuesEvent
2015-02-27 05:27:20
chrsmith/switchlist
https://api.github.com/repos/chrsmith/switchlist
closed
"Print All" doesn't work with user-defined HTML switchlist templates.
auto-migrated Priority-Medium Type-Defect
``` Choosing "Print" from the menu when the main switchlist window is open causes all switchlists for all trains to be printed at once. This is handy before an operating session. This functionality doesn't work for switchlists defined as HTML template - which for now is only user defined ones. It'll currently print using the "Handwritten" built-in template. ``` Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 11 Sep 2011 at 4:32
1.0
"Print All" doesn't work with user-defined HTML switchlist templates. - ``` Choosing "Print" from the menu when the main switchlist window is open causes all switchlists for all trains to be printed at once. This is handy before an operating session. This functionality doesn't work for switchlists defined as HTML template - which for now is only user defined ones. It'll currently print using the "Handwritten" built-in template. ``` Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 11 Sep 2011 at 4:32
non_process
print all doesn t work with user defined html switchlist templates choosing print from the menu when the main switchlist window is open causes all switchlists for all trains to be printed at once this is handy before an operating session this functionality doesn t work for switchlists defined as html template which for now is only user defined ones it ll currently print using the handwritten built in template original issue reported on code google com by rwbowdi gmail com on sep at
0
12,022
14,738,507,993
IssuesEvent
2021-01-07 04:58:11
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Residences @ Daniel Webster 123-E0679
anc-ops anc-process anp-important ant-support
In GitLab by @kdjstudios on Jun 6, 2018, 13:33 **Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-06-12351 **Server:** Internal **Client/Site:** Billerica **Account:** 123-E0679 **Issue:** I had noticed the account patching activity was not active so I have attempted to reactivate it and it will not activate. So, I went in and added a new field for patching and when I went to check that one is also not active. Can you tell me why it will not allow me to add this billing code or activity in this account? Please try and let me know what happens when you try.
1.0
Residences @ Daniel Webster 123-E0679 - In GitLab by @kdjstudios on Jun 6, 2018, 13:33 **Submitted by:** "Kimberly Gagner" <kimberly.gagner@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-06-06-12351 **Server:** Internal **Client/Site:** Billerica **Account:** 123-E0679 **Issue:** I had noticed the account patching activity was not active so I have attempted to reactivate it and it will not activate. So, I went in and added a new field for patching and when I went to check that one is also not active. Can you tell me why it will not allow me to add this billing code or activity in this account? Please try and let me know what happens when you try.
process
residences daniel webster in gitlab by kdjstudios on jun submitted by kimberly gagner helpdesk server internal client site billerica account issue i had noticed the account patching activity was not active so i have attempted to reactivate it and it will not activate so i went in and added a new field for patching and when i went to check that one is also not active can you tell me why it will not allow me to add this billing code or activity in this account please try and let me know what happens when you try
1
101,877
16,530,159,071
IssuesEvent
2021-05-27 04:11:27
alpersonalwebsite/react-state-fetch
https://api.github.com/repos/alpersonalwebsite/react-state-fetch
opened
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz
security vulnerability
## CVE-2021-23343 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary> <p>Node.js path.parse() ponyfill</p> <p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p> <p>Path to dependency file: react-state-fetch/package.json</p> <p>Path to vulnerable library: react-state-fetch/node_modules/path-parse/package.json</p> <p> Dependency Hierarchy: - babel-eslint-10.0.3.tgz (Root Library) - resolve-1.12.2.tgz - :x: **path-parse-1.0.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-state-fetch/commit/62509be090040495f597b966648d37cda4bed1dd">62509be090040495f597b966648d37cda4bed1dd</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity. 
<p>Publish Date: 2021-05-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23343 (High) detected in path-parse-1.0.6.tgz - ## CVE-2021-23343 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>path-parse-1.0.6.tgz</b></p></summary> <p>Node.js path.parse() ponyfill</p> <p>Library home page: <a href="https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz">https://registry.npmjs.org/path-parse/-/path-parse-1.0.6.tgz</a></p> <p>Path to dependency file: react-state-fetch/package.json</p> <p>Path to vulnerable library: react-state-fetch/node_modules/path-parse/package.json</p> <p> Dependency Hierarchy: - babel-eslint-10.0.3.tgz (Root Library) - resolve-1.12.2.tgz - :x: **path-parse-1.0.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/alpersonalwebsite/react-state-fetch/commit/62509be090040495f597b966648d37cda4bed1dd">62509be090040495f597b966648d37cda4bed1dd</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> All versions of package path-parse are vulnerable to Regular Expression Denial of Service (ReDoS) via splitDeviceRe, splitTailRe, and splitPathRe regular expressions. ReDoS exhibits polynomial worst-case time complexity. 
<p>Publish Date: 2021-05-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23343>CVE-2021-23343</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in path parse tgz cve high severity vulnerability vulnerable library path parse tgz node js path parse ponyfill library home page a href path to dependency file react state fetch package json path to vulnerable library react state fetch node modules path parse package json dependency hierarchy babel eslint tgz root library resolve tgz x path parse tgz vulnerable library found in head commit a href vulnerability details all versions of package path parse are vulnerable to regular expression denial of service redos via splitdevicere splittailre and splitpathre regular expressions redos exhibits polynomial worst case time complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
0
21,162
28,136,025,217
IssuesEvent
2023-04-01 11:48:59
firebase/firebase-cpp-sdk
https://api.github.com/repos/firebase/firebase-cpp-sdk
closed
[C++] Nightly Integration Testing Report for Firestore
type: process nightly-testing
<hidden value="integration-test-status-comment"></hidden> ### [build against repo] Integration test with FLAKINESS (succeeded after retry) Requested by @sunmou99 on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Fri Mar 31 09:49 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4573877315)** | Failures | Configs | |----------|---------| | firestore | [TEST] [FLAKINESS] [Android] [2/3 os: windows ubuntu] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;CRASH/TIMEOUT</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Fri Mar 31 09:55 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4574922030)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Sat Apr 1 04:39 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4582793169)**
1.0
[C++] Nightly Integration Testing Report for Firestore - <hidden value="integration-test-status-comment"></hidden> ### [build against repo] Integration test with FLAKINESS (succeeded after retry) Requested by @sunmou99 on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Fri Mar 31 09:49 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4573877315)** | Failures | Configs | |----------|---------| | firestore | [TEST] [FLAKINESS] [Android] [2/3 os: windows ubuntu] [1/4 android_device: android_target]<details><summary>(1 failed tests)</summary>&nbsp;&nbsp;CRASH/TIMEOUT</details> | Add flaky tests to **[go/fpl-cpp-flake-tracker](http://go/fpl-cpp-flake-tracker)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against SDK] Integration test succeeded! Requested by @firebase-workflow-trigger[bot] on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Fri Mar 31 09:55 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4574922030)** <hidden value="integration-test-status-comment"></hidden> *** ### ✅&nbsp; [build against tip] Integration test succeeded! Requested by @sunmou99 on commit 73ce6feb70d3e830676aafa1d0ded64a57f07fb8 Last updated: Sat Apr 1 04:39 PDT 2023 **[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4582793169)**
process
nightly integration testing report for firestore integration test with flakiness succeeded after retry requested by on commit last updated fri mar pdt failures configs firestore failed tests nbsp nbsp crash timeout add flaky tests to ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated fri mar pdt ✅ nbsp integration test succeeded requested by on commit last updated sat apr pdt
1
13,238
22,351,571,710
IssuesEvent
2022-06-15 12:32:09
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Bumping from alpha to another alpha with different minor
type:bug status:requirements priority-5-triage
### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I have a dependency that was using the version 0.1.0-alpha.10, after a new release to 0.2.0-alpha.1, Renovate doesn't create a PR for it. public repo: https://github.com/javiersc/gradle-plugins/ ### Relevant debug logs <details><summary>Logs</summary> > There was an error creating your Issue: body is too long (maximum is 65536 characters). As I am unable to create the issue with the logs, I have pasted them in a pastebin. https://pastebin.com/V7ttK9bQ </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
1.0
Bumping from alpha to another alpha with different minor - ### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I have a dependency that was using the version 0.1.0-alpha.10, after a new release to 0.2.0-alpha.1, Renovate doesn't create a PR for it. public repo: https://github.com/javiersc/gradle-plugins/ ### Relevant debug logs <details><summary>Logs</summary> > There was an error creating your Issue: body is too long (maximum is 65536 characters). As I am unable to create the issue with the logs, I have pasted them in a pastebin. https://pastebin.com/V7ttK9bQ </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
non_process
bumping from alpha to another alpha with different minor how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug i have a dependency that was using the version alpha after a new release to alpha renovate doesn t create a pr for it public repo relevant debug logs logs there was an error creating your issue body is too long maximum is characters as i am unable to create the issue with the logs i have pasted them in a pastebin have you created a minimal reproduction repository no reproduction but i have linked to a public repo where it occurs
0
823
4,445,331,355
IssuesEvent
2016-08-20 01:07:03
OpenLightingProject/ola
https://api.github.com/repos/OpenLightingProject/ola
closed
gcc6 build issues
bug Maintainability OpSys-Linux
Hi, `std::auto_ptr` is deprecated in the latest C++ standard. As of version 6, GCC emits a warning for this. Due to ola's default of enabling `-Werror` for builds, this causes builds with gcc6 to fail. By adding `-Wno-error=deprecated-declarations`, this can be worked around without having to disable *all* warning->error conversions, but perhaps ola should consider migrating away from deprecated classes.
True
gcc6 build issues - Hi, `std::auto_ptr` is deprecated in the latest C++ standard. As of version 6, GCC emits a warning for this. Due to ola's default of enabling `-Werror` for builds, this causes builds with gcc6 to fail. By adding `-Wno-error=deprecated-declarations`, this can be worked around without having to disable *all* warning->error conversions, but perhaps ola should consider migrating away from deprecated classes.
non_process
build issues hi std auto ptr is deprecated in the latest c standard as of version gcc emits a warning for this due to ola s default of enabling werror for builds this causes builds with to fail by adding wno error deprecated declarations this can be worked around without having to disable all warning error conversions but perhaps ola should consider migrating away from deprecated classes
0
27,178
12,520,793,004
IssuesEvent
2020-06-03 16:25:16
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
closed
TDM requesting MS Teams/SharePoint page for Teleworking Policy creation guidance
Service: Apps Type: IT Support Workgroup: SDD
<!-- Email --> <!-- julie.anderson@austintexas.gov --> > What application are you using? Other / Not Sure > Describe the problem. City Council approved a resolution to enhance the City's telework policy. While HRD will be providing general guidelines to department heads, the ATD Transportation Demand Management will providing more in-depth, detailed resources for managers needing to create their own telework policies. We would like to create a Teams page that houses resources and information for employees and managers; we believe creating a Teams page for that is the best way to do that. > How soon do you need this? Soon — This week > Is there anything else we should know? We are wanting to be responsive to the request from Council. Requester: Julie A. - SDD/TDM Request ID: DTS20-100720
1.0
TDM requesting MS Teams/SharePoint page for Teleworking Policy creation guidance - <!-- Email --> <!-- julie.anderson@austintexas.gov --> > What application are you using? Other / Not Sure > Describe the problem. City Council approved a resolution to enhance the City's telework policy. While HRD will be providing general guidelines to department heads, the ATD Transportation Demand Management will providing more in-depth, detailed resources for managers needing to create their own telework policies. We would like to create a Teams page that houses resources and information for employees and managers; we believe creating a Teams page for that is the best way to do that. > How soon do you need this? Soon — This week > Is there anything else we should know? We are wanting to be responsive to the request from Council. Requester: Julie A. - SDD/TDM Request ID: DTS20-100720
non_process
tdm requesting ms teams sharepoint page for teleworking policy creation guidance what application are you using other not sure describe the problem city council approved a resolution to enhance the city s telework policy while hrd will be providing general guidelines to department heads the atd transportation demand management will providing more in depth detailed resources for managers needing to create their own telework policies we would like to create a teams page that houses resources and information for employees and managers we believe creating a teams page for that is the best way to do that how soon do you need this soon — this week is there anything else we should know we are wanting to be responsive to the request from council requester julie a sdd tdm request id
0
502,705
14,565,328,129
IssuesEvent
2020-12-17 07:03:28
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
onlyfans.com - design is broken
browser-fenix engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal
<!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63773 --> <!-- @extra_labels: browser-fenix --> **URL**: https://onlyfans.com/ **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Design is broken **Description**: Images not loaded **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201206192040</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/12/88f82e88-cc90-412f-ad40-512b2c316e8f) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
onlyfans.com - design is broken - <!-- @browser: Firefox Mobile 84.0 --> <!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:84.0) Gecko/84.0 Firefox/84.0 --> <!-- @reported_with: android-components-reporter --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/63773 --> <!-- @extra_labels: browser-fenix --> **URL**: https://onlyfans.com/ **Browser / Version**: Firefox Mobile 84.0 **Operating System**: Android 8.0.0 **Tested Another Browser**: No **Problem type**: Design is broken **Description**: Images not loaded **Steps to Reproduce**: <details> <summary>Browser Configuration</summary> <ul> <li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20201206192040</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li> </ul> </details> [View console log messages](https://webcompat.com/console_logs/2020/12/88f82e88-cc90-412f-ad40-512b2c316e8f) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
onlyfans com design is broken url browser version firefox mobile operating system android tested another browser no problem type design is broken description images not loaded steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
0
149,279
23,453,966,890
IssuesEvent
2022-08-16 07:08:26
ta-mu-aa/workout-share
https://api.github.com/repos/ta-mu-aa/workout-share
closed
ユーザー新規作成のリクエストに対してエラーレスポンスを返す時の処理を実装
feature Front design-layout
## 概要 ユーザー新規登録の際に登録情報をサーバーに送信した際にAPI側でエラーが出た場合そのエラーレスポンスをフロントにレスポンスし、そのレスポンスを受けとった際の処理と描画を実装する ## やること - エラーコード400が帰ってきた場合入力情報に誤りがあることを知らせる - エラーコード403が帰ってきた場合既に登録済みのメールアドレスであることを知らせる - エラーコード500の場合サーバー側に何か問題があったことを知らせる
1.0
ユーザー新規作成のリクエストに対してエラーレスポンスを返す時の処理を実装 - ## 概要 ユーザー新規登録の際に登録情報をサーバーに送信した際にAPI側でエラーが出た場合そのエラーレスポンスをフロントにレスポンスし、そのレスポンスを受けとった際の処理と描画を実装する ## やること - エラーコード400が帰ってきた場合入力情報に誤りがあることを知らせる - エラーコード403が帰ってきた場合既に登録済みのメールアドレスであることを知らせる - エラーコード500の場合サーバー側に何か問題があったことを知らせる
non_process
ユーザー新規作成のリクエストに対してエラーレスポンスを返す時の処理を実装 概要 ユーザー新規登録の際に登録情報をサーバーに送信した際にapi側でエラーが出た場合そのエラーレスポンスをフロントにレスポンスし、そのレスポンスを受けとった際の処理と描画を実装する やること
0
66,666
14,792,353,118
IssuesEvent
2021-01-12 14:39:26
criticalstack/ui
https://api.github.com/repos/criticalstack/ui
closed
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz
security vulnerability
## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: ui/client/package.json</p> <p>Path to vulnerable library: ui/client/node_modules/extract-zip/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cypress-3.8.3.tgz (Root Library) - extract-zip-1.6.7.tgz - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: ui/client/package.json</p> <p>Path to vulnerable library: ui/client/node_modules/cypress/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cypress-3.8.3.tgz (Root Library) - :x: **minimist-1.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/criticalstack/ui/commit/b6d49f3fca5e6e61ebb78c82e1c3265019669456">b6d49f3fca5e6e61ebb78c82e1c3265019669456</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. 
<p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7598 (Medium) detected in minimist-0.0.8.tgz, minimist-1.2.0.tgz - ## CVE-2020-7598 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary> <p> <details><summary><b>minimist-0.0.8.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p> <p>Path to dependency file: ui/client/package.json</p> <p>Path to vulnerable library: ui/client/node_modules/extract-zip/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cypress-3.8.3.tgz (Root Library) - extract-zip-1.6.7.tgz - mkdirp-0.5.1.tgz - :x: **minimist-0.0.8.tgz** (Vulnerable Library) </details> <details><summary><b>minimist-1.2.0.tgz</b></p></summary> <p>parse argument options</p> <p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p> <p>Path to dependency file: ui/client/package.json</p> <p>Path to vulnerable library: ui/client/node_modules/cypress/node_modules/minimist/package.json</p> <p> Dependency Hierarchy: - cypress-3.8.3.tgz (Root Library) - :x: **minimist-1.2.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/criticalstack/ui/commit/b6d49f3fca5e6e61ebb78c82e1c3265019669456">b6d49f3fca5e6e61ebb78c82e1c3265019669456</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> minimist before 1.2.2 could be tricked into adding or modifying properties of Object.prototype using a "constructor" or "__proto__" payload. 
<p>Publish Date: 2020-03-11 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7598>CVE-2020-7598</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94">https://github.com/substack/minimist/commit/63e7ed05aa4b1889ec2f3b196426db4500cbda94</a></p> <p>Release Date: 2020-03-11</p> <p>Fix Resolution: minimist - 0.2.1,1.2.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in minimist tgz minimist tgz cve medium severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file ui client package json path to vulnerable library ui client node modules extract zip node modules minimist package json dependency hierarchy cypress tgz root library extract zip tgz mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file ui client package json path to vulnerable library ui client node modules cypress node modules minimist package json dependency hierarchy cypress tgz root library x minimist tgz vulnerable library found in head commit a href found in base branch main vulnerability details minimist before could be tricked into adding or modifying properties of object prototype using a constructor or proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource
0
58,041
14,268,013,651
IssuesEvent
2020-11-20 21:34:57
hashicorp/packer
https://api.github.com/repos/hashicorp/packer
closed
vsphere-clone Multiple Disk Support
builder/vsphere enhancement track-internal
#### Description In #8749, the ability to add multiple disks was added to the `vsphere-iso` builder. Can this ability be added to the `vsphere-clone` builder as well? #### Use Case(s) I have a large number of templates I would like to build with Packer that all use the same OS, but varying number of drives. In order to save time and compute resources, I use the `vsphere-iso` builder to create a template that only installs the OS and then make several templates using the `vsphere-clone` template that make any tweaks needed for each use case. But because `vsphere-clone` doesn't support configuring multiple drives, I have to make a `vsphere-iso` template for every number of drives that are added to the template. If the ability of adding additional disks was added to `vsphere-clone`, then only 1 `vsphere-iso` template would be needed and each `vsphere-clone` template would add any additional disks that are needed. #### Potential configuration The same as `vsphere-iso` builder, specifically the [disk_controller_type](https://www.packer.io/docs/builders/vmware/vsphere-iso#disk_controller_type) and [storage](https://www.packer.io/docs/builders/vmware/vsphere-iso#storage) attributes.
1.0
vsphere-clone Multiple Disk Support - #### Description In #8749, the ability to add multiple disks was added to the `vsphere-iso` builder. Can this ability be added to the `vsphere-clone` builder as well? #### Use Case(s) I have a large number of templates I would like to build with Packer that all use the same OS, but varying number of drives. In order to save time and compute resources, I use the `vsphere-iso` builder to create a template that only installs the OS and then make several templates using the `vsphere-clone` template that make any tweaks needed for each use case. But because `vsphere-clone` doesn't support configuring multiple drives, I have to make a `vsphere-iso` template for every number of drives that are added to the template. If the ability of adding additional disks was added to `vsphere-clone`, then only 1 `vsphere-iso` template would be needed and each `vsphere-clone` template would add any additional disks that are needed. #### Potential configuration The same as `vsphere-iso` builder, specifically the [disk_controller_type](https://www.packer.io/docs/builders/vmware/vsphere-iso#disk_controller_type) and [storage](https://www.packer.io/docs/builders/vmware/vsphere-iso#storage) attributes.
non_process
vsphere clone multiple disk support description in the ability to add multiple disks was added to the vsphere iso builder can this ability be added to the vsphere clone builder as well use case s i have a large number of templates i would like to build with packer that all use the same os but varying number of drives in order to save time and compute resources i use the vsphere iso builder to create a template that only installs the os and then make several templates using the vsphere clone template that make any tweaks needed for each use case but because vsphere clone doesn t support configuring multiple drives i have to make a vsphere iso template for every number of drives that are added to the template if the ability of adding additional disks was added to vsphere clone then only vsphere iso template would be needed and each vsphere clone template would add any additional disks that are needed potential configuration the same as vsphere iso builder specifically the and attributes
0
13,895
16,656,533,302
IssuesEvent
2021-06-05 16:24:27
codeanit/til
https://api.github.com/repos/codeanit/til
opened
The Mikado Method
inprogress process refactor
# The Mikado Method The Mikado method is a technique used to explore and understand how a task could be performed, identifying the key actions to complete it. It is what an experienced developer would normally do subconsciously, plus the discipline and courage to undo changes. ## Usage ### Focus When working on a task that stretches over several weeks or months, a developer may lose focus of the end goal or get lost in a task as more and more complexity is added. This can lead to scope creep and ultimately inefficient working. The Mikado graph allows work to be tracked and a clear working path to be visualised. ### Teams A piece of work completed by a pair or team, either as a mob or splitting up the work and coding asynchronously, must be well understood by all members of the team so that the result is what everyone expects. Using a graph to illustrate this allows the team to agree on the abstract steps that need to be taken, an end goal, and the direction to get there. This is particularly important where a team may be working remotely and quick discussions about the approach may be more challenging to coordinate. ### Code Review Using the Mikado method allows an individual or team’s thought processes to be written down in a clear way that can be understood by someone who is not well versed in the problem. This means that during a refactoring code review, where a reviewer may be faced with a 150+ file change pull request, the thought process and reasoning behind why a piece of code has been changed can be understood much more easily. This means that code reviews can be more effective and faster. ## The Method How many times have you wanted to fix something and, while doing it, not break the code? Or think of all the times that development work hasn't started from an empty codebase and you've inherited the constraints of the previous team. What you'd normally do is look over the documentation left behind; if there are automatic tests, you'd see if they pass.
Well, what happens if there aren't any tests left behind, and all that's left is the source code? How do you understand what's going on without breaking the code? In this article, based on chapter 1 of The Mikado Method, the authors talk about how the Mikado Method is a structured way to make significant changes to complex code. The Mikado Method is a structured way to make major changes to complex code. When a codebase gets larger and more complicated, as they often do, there usually comes a time when you want to improve portions of it to meet new functional requirements, new legal requirements, or a new business model. You may also just want to change it to make it more comprehensible. For small changes you can keep things in your head, but for larger ones the chances of getting lost in a jungle of dependencies or in a sea of broken code increase dramatically. The Mikado Method helps you visualize, plan, and perform business value-focused improvements over several iterations and increments of work, without ever having a broken codebase during the process. The framework that the Method provides can help individuals and whole teams to morph a system into a new desired shape. The Method itself is straightforward and simple and can be used by anyone at any time. In this article, we are going to look at the core concepts, the benefits, and when you can use it. ## Basic Concepts There are four basic and well-known concepts that summarize the "process" of the Mikado Method: Set a goal, Experiments, Visualization, and Undo. These concepts, when used together in the Mikado context, create the core of the method itself. Without these key pieces, the method wouldn't be able to help you make changes without breaking the codebase. By no means are these concepts new, but put together they become very powerful. In the context of the method they may serve a different purpose than how you might know them. ### Set a goal To set a goal, think about what you want for the future.
For instance, if you have a package with several web services that needs to grow but is already responsible for too much, perhaps what you'd want the goal to be is "to have the admin services be in a separate package that can be deployed without the customer web services." After you clearly state the goal, write it down. The goal serves two purposes: 1) it represents a starting point for the change, and 2) it determines whether the method has achieved success or not. ### Experiments An experiment is a procedure that makes a discovery or establishes the validity of a hypothesis. In the context of the Mikado Method you use experiments to change the code so that you can see what parts of the system break. Whatever is breaking gives you feedback on what types of prerequisites are needed before you can actually make that change. A typical experiment could be to move a method from one class to another, extract a class, or reduce the scope of a variable. Those prerequisites are what you visualize. ### Visualization Visualization is when we write down the goal and the prerequisites to that goal. ### Mikado Methods for Agile Software Development The picture shows a small map, and the content of such maps normally comes after we've experimented and are ready to create what we call a Mikado Graph. Besides the changes to your codebase, the map is the only artifact of the Mikado Method. A refactoring map is the goal plus all the prerequisites of that goal, and it tells us what our next step is. ### Undo When an experiment for implementing a goal or a prerequisite has broken your system, and you have visualized what you need to change in the system to avoid it breaking, you want to undo your changes to restore a previously working state.
In the Mikado method, you'll always visualize your prerequisites, and then undo your breaking changes. This process of experiments, visualization, and undo is iterated for each of the prerequisites. In order for the experiments to be meaningful, the code needs to be in a known working state when the experiments start. If this isn't making sense to you now, it will when we get to the recipe where we take you step by step through the method. Of these four concepts, the undo part is what people struggle with the most, because at first, undoing feels very unintuitive and wasteful. It's not waste; it is an important part of the learning. ## When to use the Mikado Method If we want to be successful software developers, we need to learn how to morph an existing system into a desired new shape. Maybe you have tried to implement a new feature, but the system is constantly working against you? Maybe you've thought once or twice that it's time to stop developing new features for a while and clean up a bit? Maybe you've done a refactoring project, or you have tried to make a bigger improvement to your system, but weren't able to pull it off, so you just threw it all away? We bet that you've been in at least one of the situations described above, and we know that the Mikado Method and this book could have helped. It doesn't really matter if the code was yours or someone else's; it doesn't matter if the code was old or new. Sooner or later that shiny new greenfield project where everything can fit in your head and changes are easy to perform will become more and more complex. As time passes the code fades, just like grass does when it's heavily used and visited. The green grass field turns into a brown field, and sooner or later you, or your successors, become afraid of changing code. So, let's face it: we're stuck with brownfield development, and we need to be able to morph code we're afraid of touching, in mid-flight. Let's look at a few common scenarios where the Mikado Method can help. ### Improve a system architecture on the fly When you've hit a wall and a design doesn't lend itself easily to change, developers get frustrated.
It could be an API that is hard to understand while your customers are complaining, or nightly batch jobs that barely make it because the data that needs to be processed has increased by 10 times. It is at times like that, when the code seems so complex, that the only way to solve your problems appears to be stopping development and focusing solely on improving the codebase for a while, or maybe running an improvement effort as a side project. Improvement projects make stakeholders nervous, and rightfully so, because they see nothing of value coming out. The Mikado Method helps in changing the architecture in small steps, allowing improvements and continuous delivery of new features to co-exist, in the same branch. ### Brownfield development Brownfield development is probably the most common situation developers are in, and in order to continue business, upgrading and improving an existing application infrastructure is necessary. Whether it's adding a new feature or altering functionality, the Mikado Method helps in these situations because it works with what you've got and improves upon it. Just like any other codebase, these also need to change for several reasons, but often you don't know the whole code base inside out, so changes become inefficient, or downright scary. The Mikado Method provides a way to take on a reasonable amount of improvement for each feature. ### Refactoring projects Imagine that you want to extract a reusable module in a heavily entangled system, or to replace an external API that has leaked deep into your codebase. Ideas and improvements like that are really big, and they usually take several weeks, or even months, to complete. Making improvements for large tasks requires a nondestructive way forward. The Mikado Method helps you uncover that nondestructive way and keeps you on track even if the effort takes months. This way refactoring projects can be avoided entirely.
### Benefits of the Method We now know that the Mikado Method is a way to improve code without breaking it, and what situations we'd want to use it in; now let's look at the benefits before we dive into how it works. ### Stability to the code base Stakeholders will love the Mikado Method because it provides stability to the codebase while changing it. No more, "We can't release now, we're ironing out a few wrinkles." The path to a change becomes a nondestructive one made of lots of small changes, instead of a big integration at the end. Due to its visual nature, interested parties can also follow along easily, watch the map evolve, and then see how the changes are being performed and checked off on the map. ### Increases communication and collaboration From a team's perspective it works really well too. By communicating through the change map, collaboration becomes easier and a change effort can be spread across the team. This way the whole team's competencies, abilities, and existing knowledge can be leveraged, and the workload can also be distributed throughout the team. ### Lightweight and goal focused Last, but not least, the Mikado Method supports an individual by being quick to learn and easy to use. The Method has very little ceremony and consists of a lightweight process that requires almost no additional tools, just pen and paper or a whiteboard. In its simplicity, it still helps you keep your eye on the prize. As a bonus, you can use the refactoring map you get from the process to assist you when you reflect on the work done, and this improves learning. # Resource - [x] https://medium.com/ingeniouslysimple/how-the-honeybees-use-the-mikado-method-d2b9fa34184f - [x] https://danielbrolund.wordpress.com/2009/03/28/start-paying-your-technical-debt-the-mikado-method - [x] https://www.methodsandtools.com/archive/mikado.php
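The goal/experiment/visualize/undo loop described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the method's formal definition: the graph, the node names, and the `attempt` callback (which stands in for "try the change, run the build, and undo it if it breaks") are assumptions made for the example.

```python
# A minimal sketch of the Mikado loop: try a change; if it breaks the build,
# record the prerequisites it revealed, undo, and tackle them first.
# The `attempt` callback and node names are illustrative assumptions.

def mikado(goal, attempt, graph=None):
    """Return the order in which changes were successfully completed.

    `attempt(node)` tries a change and returns [] on success, or the list
    of prerequisite changes it revealed on failure (the broken change is
    assumed to have been undone before `attempt` returns).
    """
    graph = {} if graph is None else graph   # visualization: node -> prereqs
    done = []

    def visit(node):
        if node in done:
            return
        prereqs = attempt(node)              # experiment
        if prereqs:                          # it broke; the undo already happened
            graph[node] = prereqs            # visualize what we learned
            for p in prereqs:
                visit(p)                     # satisfy prerequisites first
            attempt(node)                    # retry, now expected to succeed
        done.append(node)                    # check the node off on the map

    visit(goal)
    return done

# Toy example: splitting out an admin package first requires moving a shared
# class, which in turn requires breaking a static dependency.
revealed = {
    "split admin package": ["move shared class"],
    "move shared class": ["break static dependency"],
}

def attempt(node):
    # First attempt at a node "breaks" and reveals prerequisites;
    # once they are done, the retry succeeds.
    return revealed.pop(node, [])

print(mikado("split admin package", attempt))
# -> ['break static dependency', 'move shared class', 'split admin package']
```

Note that the leaves of the graph are completed before the goal, which mirrors how the method works in practice: only changes with no remaining prerequisites are safe to keep.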
1.0
The Mikado Method - # The Mikado Method The Mikado method is a technique used to explore and understand how a task could be performed, identifying the key actions to complete it. It is what an experienced developer would normally do subconsciously, plus the discipline and courage to undo changes. ## Usage ### Focus When working on a task that stretches over several weeks or months, a developer may lose focus of the end goal or get lost in a task as more and more complexity is added. This can lead to scope creep and ultimately inefficient working. The Mikado graph allows work to be tracked and a clear working path to be visualised. ### Teams A piece of work completed by a pair or team, either as a mob or splitting up the work and coding asynchronously, must be well understood by all members of the team so that the result is what everyone expects. Using a graph to illustrate this allows the team to agree on the abstract steps that need to be taken, an end goal, and the direction to get there. This is particularly important where a team may be working remotely and quick discussions about the approach may be more challenging to coordinate. ### Code Review Using the Mikado method allows an individual or team’s thought processes to be written down in a clear way that can be understood by someone who is not well versed in the problem. This means that during a refactoring code review, where a reviewer may be faced with a 150+ file change pull request, the thought process and reasoning behind why a piece of code has been changed can be understood much more easily. This means that code reviews can be more effective and faster. ## The Method How many times have you wanted to fix something and, while doing it, not break the code? Or think of all the times that development work hasn't started from an empty codebase and you've inherited the constraints of the previous team.
What you'd normally do is look over the documentation left behind; if there are automatic tests, you'd see if they pass. Well, what happens if there aren't any tests left behind, and all that's left is the source code? How do you understand what's going on without breaking the code? In this article, based on chapter 1 of The Mikado Method, the authors talk about how the Mikado Method is a structured way to make significant changes to complex code. The Mikado Method is a structured way to make major changes to complex code. When a codebase gets larger and more complicated, as they often do, there usually comes a time when you want to improve portions of it to meet new functional requirements, new legal requirements, or a new business model. You may also just want to change it to make it more comprehensible. For small changes you can keep things in your head, but for larger ones the chances of getting lost in a jungle of dependencies or in a sea of broken code increase dramatically. The Mikado Method helps you visualize, plan, and perform business value-focused improvements over several iterations and increments of work, without ever having a broken codebase during the process. The framework that the Method provides can help individuals and whole teams to morph a system into a new desired shape. The Method itself is straightforward and simple and can be used by anyone at any time. In this article, we are going to look at the core concepts, the benefits, and when you can use it. ## Basic Concepts There are four basic and well-known concepts that summarize the "process" of the Mikado Method: Set a goal, Experiments, Visualization, and Undo. These concepts, when used together in the Mikado context, create the core of the method itself. Without these key pieces, the method wouldn't be able to help you make changes without breaking the codebase. By no means are these concepts new, but put together they become very powerful.
In the context of the method they may serve a different purpose than how you might know them. ### Set a goal To set a goal, think about what you want for the future. For instance, if you have a package with several web services that needs to grow but is already responsible for too much, perhaps what you'd want the goal to be is "to have the admin services be in a separate package that can be deployed without the customer web services." After you clearly state the goal, write it down. The goal serves two purposes: 1) it represents a starting point for the change, and 2) it determines whether the method has achieved success or not. ### Experiments An experiment is a procedure that makes a discovery or establishes the validity of a hypothesis. In the context of the Mikado Method you use experiments to change the code so that you can see what parts of the system break. Whatever is breaking gives you feedback on what types of prerequisites are needed before you can actually make that change. A typical experiment could be to move a method from one class to another, extract a class, or reduce the scope of a variable. Those prerequisites are what you visualize. ### Visualization Visualization is when we write down the goal and the prerequisites to that goal. ### Mikado Methods for Agile Software Development The picture shows a small map, and the content of such maps normally comes after we've experimented and are ready to create what we call a Mikado Graph. Besides the changes to your codebase, the map is the only artifact of the Mikado Method. A refactoring map is the goal plus all the prerequisites of that goal, and it tells us what our next step is. ### Undo When an experiment for implementing a goal or a prerequisite has broken your system, and you have visualized what you need to change in the system to avoid it breaking, you want to undo your changes to restore a previously working state.
In the Mikado method, you'll always visualize your prerequisites, and then undo your breaking changes. This process of experiments, visualization, and undo is iterated for each of the prerequisites. In order for the experiments to be meaningful, the code needs to be in a known working state when the experiments start. If this isn't making sense to you now, it will when we get to the recipe where we take you step by step through the method. Of these four concepts, the undo part is what people struggle with the most, because at first, undoing feels very unintuitive and wasteful. It's not waste; it is an important part of the learning. ## When to use the Mikado Method If we want to be successful software developers, we need to learn how to morph an existing system into a desired new shape. Maybe you have tried to implement a new feature, but the system is constantly working against you? Maybe you've thought once or twice that it's time to stop developing new features for a while and clean up a bit? Maybe you've done a refactoring project, or you have tried to make a bigger improvement to your system, but weren't able to pull it off, so you just threw it all away? We bet that you've been in at least one of the situations described above, and we know that the Mikado Method and this book could have helped. It doesn't really matter if the code was yours or someone else's; it doesn't matter if the code was old or new. Sooner or later that shiny new greenfield project where everything can fit in your head and changes are easy to perform will become more and more complex. As time passes the code fades, just like grass does when it's heavily used and visited. The green grass field turns into a brown field, and sooner or later you, or your successors, become afraid of changing code. So, let's face it: we're stuck with brownfield development, and we need to be able to morph code we're afraid of touching, in mid-flight.
Let's look at a few common scenarios where the Mikado Method can help. ### Improve a system architecture on the fly When you've hit a wall and a design doesn't lend itself easily to change, developers get frustrated. It could be an API that is hard to understand while your customers are complaining, or nightly batch jobs that barely make it because the data that needs to be processed has increased by 10 times. It is at times like that, when the code seems so complex, that the only way to solve your problems appears to be stopping development and focusing solely on improving the codebase for a while, or maybe running an improvement effort as a side project. Improvement projects make stakeholders nervous, and rightfully so, because they see nothing of value coming out. The Mikado Method helps in changing the architecture in small steps, allowing improvements and continuous delivery of new features to co-exist, in the same branch. ### Brownfield development Brownfield development is probably the most common situation developers are in, and in order to continue business, upgrading and improving an existing application infrastructure is necessary. Whether it's adding a new feature or altering functionality, the Mikado Method helps in these situations because it works with what you've got and improves upon it. Just like any other codebase, these also need to change for several reasons, but often you don't know the whole code base inside out, so changes become inefficient, or downright scary. The Mikado Method provides a way to take on a reasonable amount of improvement for each feature. ### Refactoring projects Imagine that you want to extract a reusable module in a heavily entangled system, or to replace an external API that has leaked deep into your codebase. Ideas and improvements like that are really big, and they usually take several weeks, or even months, to complete. Making improvements for large tasks requires a nondestructive way forward.
The Mikado Method helps you uncover that nondestructive way and keeps you on track even if the effort takes months. This way refactoring projects can be avoided entirely. ### Benefits of the Method We now know that the Mikado Method is a way to improve code without breaking it, and what situations we'd want to use it in; now let's look at the benefits before we dive into how it works. ### Stability to the code base Stakeholders will love the Mikado Method because it provides stability to the codebase while changing it. No more, "We can't release now, we're ironing out a few wrinkles." The path to a change becomes a nondestructive one made of lots of small changes, instead of a big integration at the end. Due to its visual nature, interested parties can also follow along easily, watch the map evolve, and then see how the changes are being performed and checked off on the map. ### Increases communication and collaboration From a team's perspective it works really well too. By communicating through the change map, collaboration becomes easier and a change effort can be spread across the team. This way the whole team's competencies, abilities, and existing knowledge can be leveraged, and the workload can also be distributed throughout the team. ### Lightweight and goal focused Last, but not least, the Mikado Method supports an individual by being quick to learn and easy to use. The Method has very little ceremony and consists of a lightweight process that requires almost no additional tools, just pen and paper or a whiteboard. In its simplicity, it still helps you keep your eye on the prize. As a bonus, you can use the refactoring map you get from the process to assist you when you reflect on the work done, and this improves learning.
# Resource - [x] https://medium.com/ingeniouslysimple/how-the-honeybees-use-the-mikado-method-d2b9fa34184f - [x] https://danielbrolund.wordpress.com/2009/03/28/start-paying-your-technical-debt-the-mikado-method - [x] https://www.methodsandtools.com/archive/mikado.php
process
the mikado method the mikado method the mikado method is a technique used to explore and understand how a task could be performed identifying the key actions to complete it it is what an experienced developer would normally do subconsciously plus the discipline and courage to undo changes” usage focus when working on a task that stretches over several weeks or months a developer may lose focus of the end goal or get lost in a task as more and more complexity is added this can lead to scope creep and ultimately inefficient working the mikado graph allows work to be tracked and a clear working path to be visualised teams a piece of work completed by a pair or team either as a mob or splitting up the work and coding asynchronously must be well understood by all members of the team so that the result is what everyone expects using a graph to illustrate this allows the team to agree on the abstract steps that need to be taken an end goal and the direction to get there this is particularly important where a team may be working remotely and quick discussions about the approach may be more challenging to coordinate code review using the mikado method allows an individual or team’s thought processes to be written down in a clear way that can be understood by someone who is not well versed with the problem this means that during a refactoring code review where a reviewer may be faced with a file change pull request the thought process and reasoning behind why a piece of code has been changed can be understood much more easily this means that code reviews can be more effective and faster the method how many times have you wanted to fix something and while doing it not break the code or think of all the times that development work hasn t started from an empty codebase and you ve inherited the constraints of the previous team what you d normally do is look over the documentation left behind if there are automatic tests you d see if they pass well what happens if there aren t 
any tests left behind and all that s left is the source code how do you understand what s going on without breaking the code in this article based on chapter of the mikado method the authors talk about how the mikado method is a structured way to make significant changes to complex code the mikado method is a structured way to make major changes to complex code when a codebase gets larger and complicated as they often do there usually comes a time when you want to improve portions of it to meet new functional requirements new legal requirements or a new business model you may also just want to change it to make it more comprehensible for small changes you can keep things in your head but for larger ones the chances of getting lost in a jungle of dependencies or in a sea of broken code increases dramatically the mikado method helps you visualize plan and perform business value focused improvements over several iterations and increments of work without ever having a broken codebase during the process the framework that the method provides can help individuals and whole teams to morph a system into a new desired shape the method itself is straightforward and simple and can be used by anyone at any time in this article we are going to look at the core concepts benefits and when you can use it basic concepts there are four basic and well known concepts that summarize the process of the mikado method set a goal experiments visualization and undo these concepts when used together in the mikado context create the core of the method itself without these key pieces the method wouldn t be able to help you make changes without breaking the codebase by no means are these concepts new but put together they become very powerful in the context of the method they may serve a different purpose than how you might know them set a goal to set a goal think about what you want for the future for instance if you have a package with several web services that need to grow but it is already 
responsible for too much perhaps what you d want for the goal to be is to have the admin services be in a separate package that can be deployed without the customer web services after you clearly state the goal write it down the goal serves two purposes represents a starting point for the change determines if the method has achieved success or not experiments an experiment is a procedure that makes a discovery or establishes the validity of a hypothesis in the context of the mikado method you use experiments to change the code so that you can see what parts of the system that breaks whatever is breaking gives you feedback on what types of prerequisites are needed before you can actually do that change a typical experiment could be to move a method from one class to another extract a class or reduce the scope of a variable those prerequisites are what you visualize visualization visualization is when we write down the goal and the prerequisites to that goal mikado methods for agile software development the picture shows a small map and the content of such maps normally comes after we ve experimented and are ready to create what we call a mikado graph beside the changes to your codebase the map is the only artifact of the mikado method a refactoring map is the goal plus all the prerequisites of that goal and it tells us what our next step is undo when an experiment for implementing a goal or a prerequisite has broken your system and you have visualized what you need to change in the system to avoid it breaking you want to undo your changes to restore a previously working state in the mikado method you ll always visualize your prerequisites and then undo your breaking changes this process experiments visualization and undo is iterated for each of the prerequisites in order for the experiments to be meaningful the code needs to be in a known working state when the experiments start if this isn t making sense to you now it will when we get to the recipe where we take
you step by step through the method of these four concepts the undo part is what people struggle with the most because at first undoing feels very unintuitive and wasteful it s not waste it is an important part of the learning when to use the mikado method if we want to be successful software developers we need to learn how to morph an existing system into a desired new shape maybe you have tried to implement a new feature but the system is constantly working against you maybe you ve thought once or twice that it s time to stop developing new features for a while and clean up a bit maybe you ve done a refactoring project or you have tried to do a bigger improvement to your system but wasn t able to pull it off so you just threw it all away we bet that you ve been in at least one of the situations described above and we know that the mikado method and this book could have helped it doesn t really matter if the code was yours or someone else s it doesn t matter if the code was old or new sooner or later that shiny new greenfield project where everything can fit in your head and changes are easy to perform will become more and more complex as time passes the code fades just like grass does when it s heavily used and visited the green grass field turns into a brown field and sooner or later you or your successors become afraid of changing code so let s face it we re stuck with brownfield development and we need to be able to morph code we re afraid of touching in mid flight let s look at a few common scenarios where the mikado method can help improve a system architecture on the fly when you ve hit a wall and a design doesn t lend itself easily to change developers get frustrated it could be an api that is hard to understand and your customers are complaining or your nightly batch jobs barely make it because the data that needs to be processed has increased by times it can be times like that when the code seems so complex and the only way to solve your problems is by 
stopping development and focus solely on improving the codebase for awhile or maybe run an improvement effort as a side project improvement projects make stakeholders nervous and rightfully so because they see nothing of value coming out the mikado method helps in changing the architecture in small steps allowing improvements and continuous delivery of new features to co exist in the same branch brownfield development brownfield development is probably the most common situation developers are in and in order to continue business upgrading and improving an existing application infrastructure is necessary whether it s adding a new feature or altering functionality the mikado method helps in these situations because it works with what you ve got and improves upon it just like any other codebase these also needs to change for several reasons but often you don t know the whole code base inside out so changes become inefficient or down right scary the mikado method provides a way to take on a reasonable amount of improvements for each feature refactoring projects imagine that you want to extract a reusable module in a heavily entangled system or to replace an external api that has leaked deep into your codebase ideas and improvements like that are really big and they usually take several weeks or even months to complete making improvements for large tasks require a nondestructive way forward the mikado method helps you uncover that nondestructive way and keeps you on track even if the effort takes months this way refactoring projects can be avoided entirely benefits of the method we now know that the mikado method is a way to improve code more without breaking it what situations we d be in when we d want to use it now let s look at the benefits before we dive into how it works stability to the code base stakeholders will love the mikado method because it provides stability to the codebase while changing it no more we can t release now we re ironing out a few wrinkles the 
path to a change becomes a nondestructive one from lots of small changes instead of a big integration in the end due to its visual nature interested parties can also follow along easily and watch the map evolve and then see how the changes are being performed and checked off on the map increases communication and collaboration from a teams perspective it works really well too by communicating the change map collaboration becomes easier and a change effort can be spread across the team this way the whole teams competencies abilities and existing knowledge can be leveraged and the workload can also be distributed throughout the team lightweight and goal focused last but not least the mikado method supports an individual by being quick to learn and easy to use the method has very little ceremony and consists of a lightweight process that requires almost no additional tools just pen and paper or a whiteboard in its simplicity it still helps you keep your eye on the prize as a bonus you can use the refactoring map you get from the process to assist you when you reflect over the work done and this improves learning resource
1
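The goal-plus-prerequisites map described above can be sketched as a small graph structure. This is an illustrative sketch only (the names and API are invented here, not taken from the book or any Mikado tooling): prerequisites hang off the goal, and the work order visits leaves first, so every prerequisite is completed before the change that depends on it.

```python
from collections import defaultdict

class MikadoGraph:
    """Minimal sketch of a Mikado map: a goal plus prerequisite edges.

    Illustrative names only -- not an implementation from the book.
    """

    def __init__(self, goal):
        self.goal = goal
        self.prereqs = defaultdict(list)  # node -> list of its prerequisites

    def add_prerequisite(self, node, prereq):
        self.prereqs[node].append(prereq)

    def work_order(self):
        """Leaves first: a prerequisite is done before the node needing it."""
        order, seen = [], set()

        def visit(node):
            if node in seen:
                return
            seen.add(node)
            for p in self.prereqs[node]:
                visit(p)
            order.append(node)

        visit(self.goal)
        return order

# Hypothetical example matching the article's web-services scenario.
graph = MikadoGraph("split admin services into their own package")
graph.add_prerequisite("split admin services into their own package",
                       "move AdminService out of the web module")
graph.add_prerequisite("move AdminService out of the web module",
                       "break dependency on SessionHelper")

print(graph.work_order())  # deepest prerequisite first, goal last
```

Undoing a broken experiment corresponds to recording the discovered prerequisite on the graph and reverting the code, then re-running the experiment from the leaves of this order.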
52,258
3,022,458,956
IssuesEvent
2015-07-31 20:29:07
information-artifact-ontology/IAO
https://api.github.com/repos/information-artifact-ontology/IAO
closed
label and symbol as subclasses of data item
bug imported Priority-Medium
_From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 23, 2009 07:24:39_ By our definition of data item (a data item is an information content entity that is intended to be a truthful statement about something (modulo, e.g., measurement precision or other systematic errors) and is constructed/acquired by a method which reliably tends to produce (approximately) truthful statements.), label and symbol shouldn't be subclasses. They have been moved under ICE for now. Potentially true for data about an ontology part as well. _Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=29_
1.0
label and symbol as subclasses of data item - _From [mcour...@gmail.com](https://code.google.com/u/116795168307825520406/) on July 23, 2009 07:24:39_ By our definition of data item (a data item is an information content entity that is intended to be a truthful statement about something (modulo, e.g., measurement precision or other systematic errors) and is constructed/acquired by a method which reliably tends to produce (approximately) truthful statements.), label and symbol shouldn't be subclasses. They have been moved under ICE for now. Potentially true for data about an ontology part as well. _Original issue: http://code.google.com/p/information-artifact-ontology/issues/detail?id=29_
non_process
label and symbol as subclasses of data item from on july by our definition of data item a data item is an information content entity that is intended to be a truthful statement about something modulo e g measurement precision or other systematic errors and is constructed acquired by a method which reliably tends to produce approximately truthful statements label and symbol shouldn t be subclasses they have been moved under ice for now potentially true for data about an ontology part as well original issue
0
9,806
12,819,911,988
IssuesEvent
2020-07-06 03:59:23
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Unexpected result with Snap Geometries alg using QgsProcessingFeatureSourceDefinition
Bug Processing Regression
On QGIS v3.14, the Snap Geometries algorithm returns different results depending on whether its input layers are defined using a `QgsProcessingFeatureSourceDefinition` or not. However, that's not the case on QGIS v3.10.6, where results do match, independently of whether `QgsProcessingFeatureSourceDefinition` is used or not. **Screencast in QGIS v3.14** ![screencast](https://user-images.githubusercontent.com/652785/85647216-9077a880-b663-11ea-8df0-1c90707d20b0.gif) **How to Reproduce** Here you can find [sample_data](https://github.com/qgis/QGIS/files/4828884/sample_data.gpkg.zip) (1 GPKG polygon layer). Scenario 1: 1. Load the sample polygon layer into QGIS. 2. Run 'Snap Geometries' algorithm with tolerance of 10.1m. and choose the behavior 2: `Prefer aligning nodes, don't insert new vertices`. Scenario 2: 1. Load the sample polygon layer into QGIS. 2. Select both polygons. 3. Run 'Snap Geometries' algorithm activating the `Selected features only` checkboxes, with tolerance of 10.1m. and choose the behavior 2: `Prefer aligning nodes, don't insert new vertices`. Both scenarios return different results, as it can be seen in the screencast above. (QGIS v3.14) Both scenarios return the same results using QGIS v3.10.6. **Additional info** On QGIS v3.14, running the algorithm from Python console, using the `QgsProcessingFeatureSourceDefinition` with `selectedFeaturesOnly=False` returns the same result as the scenario 2 mentioned above. 
**QGIS and OS versions** QGIS version | 3.14.0-Pi | QGIS code revision | 9f7028fd23 -- | -- | -- | -- Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.2 (Ubuntu 12.2-4) | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020 OS Version | Ubuntu 20.04 LTS | This copy of QGIS writes debugging output.
1.0
Unexpected result with Snap Geometries alg using QgsProcessingFeatureSourceDefinition - On QGIS v3.14, the Snap Geometries algorithm returns different results depending on whether its input layers are defined using a `QgsProcessingFeatureSourceDefinition` or not. However, that's not the case on QGIS v3.10.6, where results do match, independently of whether `QgsProcessingFeatureSourceDefinition` is used or not. **Screencast in QGIS v3.14** ![screencast](https://user-images.githubusercontent.com/652785/85647216-9077a880-b663-11ea-8df0-1c90707d20b0.gif) **How to Reproduce** Here you can find [sample_data](https://github.com/qgis/QGIS/files/4828884/sample_data.gpkg.zip) (1 GPKG polygon layer). Scenario 1: 1. Load the sample polygon layer into QGIS. 2. Run 'Snap Geometries' algorithm with tolerance of 10.1m. and choose the behavior 2: `Prefer aligning nodes, don't insert new vertices`. Scenario 2: 1. Load the sample polygon layer into QGIS. 2. Select both polygons. 3. Run 'Snap Geometries' algorithm activating the `Selected features only` checkboxes, with tolerance of 10.1m. and choose the behavior 2: `Prefer aligning nodes, don't insert new vertices`. Both schenarios return different results, as it can be seen in the screencast above. (QGIS v3.14) Both scenarios return the same results using QGIS v3.10.6. **Additional info** On QGIS v3.14, running the algorithm from Python console, using the `QgsProcessingFeatureSourceDefinition` with `selectedFeaturesOnly=False` returns the same result as the scenario 2 mentioned above. 
**QGIS and OS versions** QGIS version | 3.14.0-Pi | QGIS code revision | 9f7028fd23 -- | -- | -- | -- Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1 Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1 PostgreSQL Client Version | 12.2 (Ubuntu 12.2-4) | SpatiaLite Version | 4.3.0a QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2 Compiled against PROJ | 6.3.1 | Running against PROJ | Rel. 6.3.1, February 10th, 2020 OS Version | Ubuntu 20.04 LTS | This copy of QGIS writes debugging output.
process
unexpected result with snap geometries alg using qgsprocessingfeaturesourcedefinition on qgis the snap geometries algorithm returns different results depending on whether its input layers are defined using a qgsprocessingfeaturesourcedefinition or not however that s not the case on qgis where results do match independently of whether qgsprocessingfeaturesourcedefinition is used or not screencast in qgis how to reproduce here you can find gpkg polygon layer scenario load the sample polygon layer into qgis run snap geometries algorithm with tolerance of and choose the behavior prefer aligning nodes don t insert new vertices scenario load the sample polygon layer into qgis select both polygons run snap geometries algorithm activating the selected features only checkboxes with tolerance of and choose the behavior prefer aligning nodes don t insert new vertices both schenarios return different results as it can be seen in the screencast above qgis both scenarios return the same results using qgis additional info on qgis running the algorithm from python console using the qgsprocessingfeaturesourcedefinition with selectedfeaturesonly false returns the same result as the scenario mentioned above qgis and os versions qgis version pi qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel february os version ubuntu lts this copy of qgis writes debugging output
1
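The two scenarios in the report above can be sketched as parameter sets for the `native:snapgeometries` algorithm. This is only a sketch: actually running it requires a QGIS environment, the layer name is hypothetical, and the QGIS-specific calls are shown as comments, so the runnable part below just builds plain parameter dictionaries.

```python
# BEHAVIOR 2 = "Prefer aligning nodes, don't insert new vertices",
# as used in both scenarios of the report.

def snap_params(source, tolerance=10.1, behavior=2):
    """Parameter dict for the snapgeometries algorithm (sketch)."""
    return {
        "INPUT": source,
        "REFERENCE_LAYER": source,  # snapping the layer to itself
        "TOLERANCE": tolerance,
        "BEHAVIOR": behavior,
        "OUTPUT": "memory:",
    }

# Scenario 1: plain layer reference ("polygons" is a hypothetical layer name).
plain = snap_params("sample_data.gpkg|layername=polygons")

# Scenario 2: selected features only. In QGIS this source would be
#   QgsProcessingFeatureSourceDefinition(layer.id(), selectedFeaturesOnly=True)
# and both runs would go through:
#   processing.run("native:snapgeometries", params)
selected = snap_params(("sample_data.gpkg|layername=polygons", True))

# Per the report, with the same features selected both runs should produce
# matching geometries, as they did in QGIS 3.10.6.
```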
662,001
22,100,251,613
IssuesEvent
2022-06-01 13:16:50
stackabletech/documentation
https://api.github.com/repos/stackabletech/documentation
closed
Documentation Style Guide
priority/high
For consistent documentation, we need a style guide. Typically this would cover formatting and capitalization rules (things that are immediately visible) as well as tone/writing style rules such as "Are we using 'we' or 'you' for instructional texts?" This also covers the formatting of code blocks. In this ticket, the content and its structure are explicitly out of scope. This guide will then apply to every documentation document. Reference documentation, tutorials, guides etc. It is also a *guide*, writers and reviewers should use their own best judgement when writing/reviewing documentation. The main driver behind this was that I noticed a wide range of superficial formatting styles (when to capitalize, starting command line snippets with a prompt or not, active or passive voice, contractions, ...) which makes the documentation inconsistent on a surface level, making me trust the documents less. Some resources on this I have gathered so far: - [Write The Docs overview on style guides](https://www.writethedocs.org/guide/writing/style-guides/) - [Google developer documentation style guide](https://developers.google.com/style) - [Gitlab style guide](https://docs.gitlab.com/ee/development/documentation/styleguide) - [Kubernetes: Documentation style overview](https://kubernetes.io/docs/contribute/style/) Acceptance criteria - [ ] have decided on a style guide to use (or written our own) - [ ] A page in the Contributor's Guide is added to show how to write documentation
1.0
Documentation Style Guide - For consistent documentation, we need a style guide. Typically this would cover formatting and capitalization rules (things that are immediately visible) as well as tone/writing style rules such as "Are we using 'we' or 'you' for instructional texts?" This also covers the formatting of code blocks. In this ticket, the content and its structure are explicitly out of scope. This guide will then apply to every documentation document. Reference documentation, tutorials, guides etc. It is also a *guide*, writers and reviewers should use their own best judgement when writing/reviewing documentation. The main driver behind this was that I noticed a wide range of superficial formatting styles (when to capitalize, starting command line snippets with a prompt or not, active or passive voice, contractions, ...) which makes the documentation inconsistent on a surface level, making me trust the documents less. Some resources on this I have gathered so far: - [Write The Docs overview on style guides](https://www.writethedocs.org/guide/writing/style-guides/) - [Google developer documentation style guide](https://developers.google.com/style) - [Gitlab style guide](https://docs.gitlab.com/ee/development/documentation/styleguide) - [Kubernetes: Documentation style overview](https://kubernetes.io/docs/contribute/style/) Acceptance criteria - [ ] have decided on a style guide to use (or written our own) - [ ] A page in the Contributor's Guide is added to show how to write documentation
non_process
documentation style guide for consistent documentation we need a style guide typically this would cover formatting and capitalization rules things that are immediately visible as well as tone writing style rules such as are we using we or you for instructional texts this also covers the formatting of code blocks in this ticket the content and its structure are explicitly out of scope this guide will then apply to every documentation document reference documentation tutorials guides etc it is also a guide writers and reviewers should use their own best judgement when writing reviewing documentation the main driver behind this was that i noticed a wide range of superficial formatting styles when to capitalize starting command line snippets with a prompt or not active or passive voice contractions which makes the documentation inconsistent on a surface level making me trust the documents less some resources on this i have gathered so far acceptance criteria have decided on a style guide to use or written our own a page in the contributor s guide is added to show how to write documentation
0
288,938
8,853,332,959
IssuesEvent
2019-01-08 21:02:30
Airblader/ngqp
https://api.github.com/repos/Airblader/ngqp
closed
Provide a proper showcase application
Comp: Docs Priority: Critical Status: Accepted Type: Feature
The ngqp-demo should document all features, be deployed (Github pages?) and linked in the Github header.
1.0
Provide a proper showcase application - The ngqp-demo should document all features, be deployed (Github pages?) and linked in the Github header.
non_process
provide a proper showcase application the ngqp demo should document all features be deployed github pages and linked in the github header
0
48,184
13,067,500,363
IssuesEvent
2020-07-31 00:39:38
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
steamshovel - python bits still explicitly import PyQt4 (Trac #1919)
Migrated from Trac combo core defect
fix that Migrated from https://code.icecube.wisc.edu/ticket/1919 ```json { "status": "closed", "changetime": "2017-10-03T13:07:30", "description": "fix that", "reporter": "nega", "cc": "david.schultz", "resolution": "fixed", "_ts": "1507036050170209", "component": "combo core", "summary": "steamshovel - python bits still explicitly import PyQt4", "priority": "normal", "keywords": "python pyqt qt5", "time": "2016-12-05T21:58:34", "milestone": "Long-Term Future", "owner": "nega", "type": "defect" } ```
1.0
steamshovel - python bits still explicitly import PyQt4 (Trac #1919) - fix that Migrated from https://code.icecube.wisc.edu/ticket/1919 ```json { "status": "closed", "changetime": "2017-10-03T13:07:30", "description": "fix that", "reporter": "nega", "cc": "david.schultz", "resolution": "fixed", "_ts": "1507036050170209", "component": "combo core", "summary": "steamshovel - python bits still explicitly import PyQt4", "priority": "normal", "keywords": "python pyqt qt5", "time": "2016-12-05T21:58:34", "milestone": "Long-Term Future", "owner": "nega", "type": "defect" } ```
non_process
steamshovel python bits still explicitly import trac fix that migrated from json status closed changetime description fix that reporter nega cc david schultz resolution fixed ts component combo core summary steamshovel python bits still explicitly import priority normal keywords python pyqt time milestone long term future owner nega type defect
0
270,772
20,609,216,023
IssuesEvent
2022-03-07 06:22:07
Attendence-Web-Application/web-app-server
https://api.github.com/repos/Attendence-Web-Application/web-app-server
opened
Suggestion on improving collaboration procedures
documentation good first issue
I suggest using some sort of commit message formatting so when we want to use the commit messages as a reference in the future, it is easier to know what changes we have made in each commit. Resource: 1. [Semantic Commit Messages Example](https://gist.github.com/joshbuchea/6f47e86d2510bce28f8e7f42ae84c716) 2. [A more detailed doc](https://github.com/joelparkerhenderson/git_commit_message#begin-with-a-short-summary-line)
1.0
Suggestion on improving collaboration procedures - I suggest using some sort of commit message formatting so when we want to use the commit messages as a reference in the future, it is easier to know what changes we have made in each commit. Resource: 1. [Semantic Commit Messages Example](https://gist.github.com/joshbuchea/6f47e86d2510bce28f8e7f42ae84c716) 2. [A more detailed doc](https://github.com/joelparkerhenderson/git_commit_message#begin-with-a-short-summary-line)
non_process
suggestion on improving collaboration procedures i suggest using some sort of commit message formatting so when we want to use the commit messages as a reference in the future it is easier to know what changes we have made in each commit resource
0
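The semantic style linked above can be enforced mechanically, e.g. in a commit-msg hook. The sketch below is a hypothetical checker (the type list and regex are assumptions based on the common `<type>(<scope>): <summary>` convention, not on any rule the issue itself defines):

```python
import re

# Accepts e.g. "feat: add login endpoint" or "fix(api): handle null token".
SEMANTIC = re.compile(
    r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+"
)

def is_semantic(message: str) -> bool:
    """Check only the first line (the summary) of a commit message."""
    return bool(SEMANTIC.match(message.splitlines()[0]))

print(is_semantic("feat: add attendance export"))  # True
print(is_semantic("fixed stuff"))                  # False
```

A git hook could call this and reject the commit with a pointer to the style doc when the check fails.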
22,170
30,720,362,528
IssuesEvent
2023-07-27 15:31:11
esmero/ami
https://api.github.com/repos/esmero/ami
closed
CSV exporter might fail if the CID used by the temporary storage surpasses the max DB length for the name
bug Find and Replace VBO Actions CSV Processing
# What? Unheard of before. But I should have known better bc I saw something (and fixed) similar while building the LoD reconciliation service. During a CSV export, to keep the order of children/parents in place we generate a Batch that uses temporary storage. Temporary storage requires a unique ID per item, and that one (to avoid overlaps while multiple users export at the same time or a single user does the same) is generated using a combination of the Views, the Display ID, etc See: https://github.com/esmero/ami/blob/9283bf06670296c29ed3bec43edbaf9769f23947/src/Plugin/Action/AmiStrawberryfieldCSVexport.php#L553-L555 This name, when the Views Machine name + the Display name are very long (happened to me, I promise) will fail badly at the DB level! (gosh drupal) giving you a truly scary exception 👻 Solution is to reduce the whole thing to an md5() and done.
1.0
CSV exporter might fail if the CID used by the temporary storage surpasses the max DB length for the name - # What? Unheard of before. But I should have known better bc I saw something (and fixed) similar while building the LoD reconciliation service. During an CSV export, to keep the order of children/parents in place we generate a Batch that uses temporary storage. Temporary storage requires a unique ID per item, and that one (to avoid overlaps while multiple users export at the same time or a single user does the same) is generated using a combination of the Views, the Display ID, etc See: https://github.com/esmero/ami/blob/9283bf06670296c29ed3bec43edbaf9769f23947/src/Plugin/Action/AmiStrawberryfieldCSVexport.php#L553-L555 This name, when the Views Machine name + the Display name are very long (happened to me, I promise) will fail badly at the DB level! (gosh drupal) giving you a truly scary exception 👻 Solution is to reduce the whole thing to an md5() and done.
process
csv exporter might fail if the cid used by the temporary storage surpasses the max db length for the name what unheard of before but i should have known better bc i saw something and fixed similar while building the lod reconciliation service during an csv export to keep the order of children parents in place we generate a batch that uses temporary storage temporary storage requires a unique id per item and that one to avoid overlaps while multiple users export at the same time or a single user does the same is generated using a combination of the views the display id etc see this name when the views machine name the display name are very long happened to me i promise will fail badly at the db level gosh drupal giving you a truly scary exception 👻 solution is to reduce the whole thing to an and done
1
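The fix described above, hashing the assembled key so arbitrarily long view/display machine names can never exceed the DB column limit, can be sketched like this (function and prefix names are illustrative, not the actual module code):

```python
import hashlib

def temp_store_key(view_id: str, display_id: str, user_id: int) -> str:
    """Bounded-length temporary-storage key: same inputs -> same key,
    but the stored name is always prefix + 32 hex chars, regardless of
    how long the view or display machine names are."""
    raw = f"{view_id}:{display_id}:{user_id}"
    return "ami_csv_export_" + hashlib.md5(raw.encode("utf-8")).hexdigest()

key = temp_store_key(
    "a_very_long_views_machine_name_that_admins_love_to_create",
    "an_equally_long_display_id",
    42,
)
print(key)
print(len(key))  # 15-char prefix + 32-char hex digest = 47
```

Uniqueness per user and per view/display is preserved because the hash input still combines all three parts; only the stored representation is shortened.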
19,710
26,053,590,301
IssuesEvent
2022-12-22 21:36:59
MPMG-DCC-UFMG/C01
https://api.github.com/repos/MPMG-DCC-UFMG/C01
opened
Interface de passos com Vue.js - Profundidade do passo durante adição
[2] Baixa Prioridade [0] Desenvolvimento [1] Aprimoramento [3] Processamento Dinâmico
## Expected Behavior When a new step is added, it should be inserted at the same depth (the "depth" parameter) as the previous step, the same way it was done in the previous version of the step interface. ## Current Behavior A new step is always inserted at the minimum depth. ## System Branch `issue-882`.
1.0
Interface de passos com Vue.js - Profundidade do passo durante adição - ## Comportamento Esperado Quando um novo passo é adicionado, ele deve ser inserido na mesma profundidade (parâmetro "depth") que o passo anterior, da mesma forma que era feito na versão anterior interface de passos. ## Comportamento Atual Um novo passo é sempre inserido na profundidade mínima. ## Sistema Branch `issue-882`.
process
interface de passos com vue js profundidade do passo durante adição comportamento esperado quando um novo passo é adicionado ele deve ser inserido na mesma profundidade parâmetro depth que o passo anterior da mesma forma que era feito na versão anterior interface de passos comportamento atual um novo passo é sempre inserido na profundidade mínima sistema branch issue
1
5,032
7,851,550,075
IssuesEvent
2018-06-20 12:07:56
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Include OS and Browsers version
log-processing log/date/time format question
By default goaccess is not showing OS and Browser versions (the FAQ page told me it does). How can I configure log-format to get this info? Example of my log string: `172.31.16.82 - - [12/Jun/2018:20:01:07 +0000] "GET /portal_static/static/media/icon_walk.ffa02836.svg HTTP/1.1" 200 1537 "https://path" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36"`
1.0
Include OS and Browsers version - By default goaccess not showing OS and Browsers version, (FAQ page told me it does). How I can configure log-format to get this info? Example of my log string: `172.31.16.82 - - [12/Jun/2018:20:01:07 +0000] "GET /portal_static/static/media/icon_walk.ffa02836.svg HTTP/1.1" 200 1537 "https://path" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36"`
process
include os and browsers version by default goaccess not showing os and browsers version faq page told me it does how i can configure log format to get this info example of my log string get portal static static media icon walk svg http mozilla windows nt applewebkit khtml like gecko chrome safari
1
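The log line in the issue above is NCSA combined format, and the browser/OS versions live in its final quoted user-agent field, so GoAccess should be able to extract them when the log-format includes that field (e.g. the predefined `--log-format=COMBINED`; this is a hedged suggestion, not the maintainer's answer). The sketch below just parses the example line with a combined-format regex to confirm the version data is there to be read:

```python
import re

# Regex mirroring NCSA combined: host, identd, user, time, request,
# status, size, referrer, user-agent.
COMBINED = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('172.31.16.82 - - [12/Jun/2018:20:01:07 +0000] '
        '"GET /portal_static/static/media/icon_walk.ffa02836.svg HTTP/1.1" '
        '200 1537 "https://path" '
        '"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/67.0.3396.79 Safari/537.36"')

m = COMBINED.match(line)
print(m.group("agent"))  # full user-agent, including browser/OS versions
```

If the user-agent field parses here, a GoAccess log-format whose last token captures that quoted field should let it report browser and OS details for the same log.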
155,243
19,768,364,741
IssuesEvent
2022-01-17 07:06:26
panasalap/linux-4.19.72
https://api.github.com/repos/panasalap/linux-4.19.72
opened
CVE-2021-28713 (Medium) detected in multiple libraries
security vulnerability
## CVE-2021-28713 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Rogue backends can cause DoS of guests via high frequency events T[his CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE.] Xen offers the ability to run PV backends in regular unprivileged guests, typically referred to as "driver domains". Running PV backends in driver domains has one primary security advantage: if a driver domain gets compromised, it doesn't have the privileges to take over the system. However, a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a Denial of Service in the guest due to trying to service interrupts for elongated amounts of time. 
There are three affected backends: * blkfront patch 1, CVE-2021-28711 * netfront patch 2, CVE-2021-28712 * hvc_xen (console) patch 3, CVE-2021-28713 <p>Publish Date: 2022-01-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28713>CVE-2021-28713</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28713">https://www.linuxkernelcves.com/cves/CVE-2021-28713</a></p> <p>Release Date: 2022-01-05</p> <p>Fix Resolution: v4.4.296,v4.9.294,v4.14.259,v4.19.222,v5.4.168,v5.10.88,v5.15.11,v5.16-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-28713 (Medium) detected in multiple libraries - ## CVE-2021-28713 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b>, <b>linux-yoctov5.4.51</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Rogue backends can cause DoS of guests via high frequency events This CNA information record relates to multiple CVEs; the text explains which aspects/vulnerabilities correspond to which CVE. Xen offers the ability to run PV backends in regular unprivileged guests, typically referred to as "driver domains". Running PV backends in driver domains has one primary security advantage: if a driver domain gets compromised, it doesn't have the privileges to take over the system. However, a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a Denial of Service in the guest due to trying to service interrupts for elongated amounts of time. 
There are three affected backends: * blkfront patch 1, CVE-2021-28711 * netfront patch 2, CVE-2021-28712 * hvc_xen (console) patch 3, CVE-2021-28713 <p>Publish Date: 2022-01-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-28713>CVE-2021-28713</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-28713">https://www.linuxkernelcves.com/cves/CVE-2021-28713</a></p> <p>Release Date: 2022-01-05</p> <p>Fix Resolution: v4.4.296,v4.9.294,v4.14.259,v4.19.222,v5.4.168,v5.10.88,v5.15.11,v5.16-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linux linux linux vulnerability details rogue backends can cause dos of guests via high frequency events t xen offers the ability to run pv backends in regular unprivileged guests typically referred to as driver domains running pv backends in driver domains has one primary security advantage if a driver domain gets compromised it doesn t have the privileges to take over the system however a malicious driver domain could try to attack other guests via sending events at a high frequency leading to a denial of service in the guest due to trying to service interrupts for elongated amounts of time there are three affected backends blkfront patch cve netfront patch cve hvc xen console patch cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
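The record above describes guests being overwhelmed by events delivered at high frequency. The actual mitigation lives in the listed kernel patches, but the underlying idea is ordinary rate limiting. As a rough, hypothetical illustration (not the Xen/Linux fix itself), a token-bucket limiter that services at most a bounded number of events per second:

```python
import time

class TokenBucket:
    """Allow at most `rate` events per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added back per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # over budget: defer instead of servicing


# A burst of 100 back-to-back "events": only about `capacity` are serviced.
bucket = TokenBucket(rate=5.0, capacity=10.0)
serviced = sum(bucket.allow() for _ in range(100))
```

Here a flood of 100 back-to-back events yields only about ten serviced ones; the rest would be deferred rather than allowed to monopolise the CPU, which is the behaviour the patches aim for.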
10,554
13,340,660,877
IssuesEvent
2020-08-28 14:45:49
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Code sample need more clarity
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
It was unclear to me if the "name" variable needs to be set for the job, stage, or task level. It would be useful if the sample scripts are expanded to show a complete setup of a simple pipeline. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93 * Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7 * Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Code sample need more clarity - It was unclear to me if the "name" variable needs to be set for the job, stage, or task level. It would be useful if the sample scripts are expanded to show a complete setup of a simple pipeline. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a57f8545-bb15-3a71-1876-3a9ec1a59b93 * Version Independent ID: 28c87c8d-c28d-7493-0c7c-8c38b04fbcd7 * Content: [Run (build) number - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/run-number?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/run-number.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/run-number.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
code sample need more clarity it was unclear to me if the name variable needs to be set for the job stage or task level it would be useful if the sample scripts are expanded to show a complete setup of a simple pipeline document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
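The doc-feedback record above asks at which level the `name` property belongs in an Azure Pipelines YAML file. It is a root-level (pipeline-level) property that sets the run-number format, not a job-, stage-, or task-level setting. A minimal complete pipeline showing it in context (the trigger, pool, and step contents are illustrative):

```yaml
# Root-level `name` sets the run (build) number format for the whole pipeline.
name: $(Date:yyyyMMdd)$(Rev:.r)

trigger:
- main

pool:
  vmImage: ubuntu-latest

steps:
- script: echo "Run number is $(Build.BuildNumber)"
  displayName: Show run number
```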
236,395
7,749,009,089
IssuesEvent
2018-05-30 10:01:49
Gloirin/m2gTest
https://api.github.com/repos/Gloirin/m2gTest
closed
0003150: Broken alignment in compose mail
Felamimail low priority
**Reported by robert.lischke on 20 Oct 2010 11:43** **Version:** git master The alignment of the recipient field in the compose mail dialogue is broken (right hand side) The drop-down box should align with the "From:" (above) and "Subject:" (below) fields.
1.0
0003150: Broken alignment in compose mail - **Reported by robert.lischke on 20 Oct 2010 11:43** **Version:** git master The alignment of the recipient field in the compose mail dialogue is broken (right hand side) The drop-down box should align with the "From:" (above) and "Subject:" (below) fields.
non_process
broken alignment in compose mail reported by robert lischke on oct version git master the alignment of the recipient field in the compose mail dialogue is broken right hand side the drop down box should align with the quot from quot above and quot subject quot below fields
0
127,693
17,353,993,864
IssuesEvent
2021-07-29 12:22:25
Joystream/atlas
https://api.github.com/repos/Joystream/atlas
closed
Figure out the empty search results view
design ⚙ component:searchbar 📄 page:search
Currently, the design provides a screen to be displayed when there are no search results for a specified phrase. However, the search view contains the tabs to filter the results - "All result", "Channels" and "Videos". It's possible for the search result to contain only channels or only videos, making one of the tabs empty. We should figure out what should be displayed in that case - should one of the tabs get hidden, etc.
1.0
Figure out the empty search results view - Currently, the design provides a screen to be displayed when there are no search results for a specified phrase. However, the search view contains the tabs to filter the results - "All result", "Channels" and "Videos". It's possible for the search result to contain only channels or only videos, making one of the tabs empty. We should figure out what should be displayed in that case - should one of the tabs get hidden, etc.
non_process
figure out the empty search results view currently the design provides a screen to be displayed when there are no search results for a specified phrase however the search view contains the tabs to filter the results all result channels and videos it s possible for the search result to contain only channels or only videos making one of the tabs empty we should figure out what should be displayed in that case should one of the tabs get hidden etc
0
179,906
13,910,596,417
IssuesEvent
2020-10-20 16:16:08
SvetlanaSurzhan/recipes-site
https://api.github.com/repos/SvetlanaSurzhan/recipes-site
opened
User story #1: Add button "Home" to the header.
testing
As a user of the Recipe web application, I want to add a button "Home" to the header so that will help me to navigate to the home page of the website.
1.0
User story #1: Add button "Home" to the header. - As a user of the Recipe web application, I want to add a button "Home" to the header so that will help me to navigate to the home page of the website.
non_process
user story add button home to the header as a user of the recipe web application i want to add a button home to the header so that will help me to navigate to the home page of the website
0
6,728
9,830,319,175
IssuesEvent
2019-06-16 07:51:42
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
Process gets SIGPIPE signal
Bug Process Status: Needs Review Status: Waiting feedback
**Symfony version(s) affected**: Process v 2.5.0 **Description** I've come here from https://github.com/klaussilveira/gitlist/issues/839, which is a description of the same problem with Gitlist, a software that depends on Gitter which in turn relies on some Symfony components, among others Process 2.5.0. I think the problem might lie with the Symfony Process component in general, hence I'm posting here. The issue is that when the process -- in this case `git ls-tree` -- produces over ~8000 bytes of output, something happens which causes a SIGPIPE signal to be sent (13), which puts the exit code at 141 and therefore causes `isSuccessful()` to return false. Note that I was only able to test this on HP UX on Itanium64, which admittedly is not a very common system. Also note that HP UX' version of PHP is not compiled with sigchild support, meaning they cannot do anything with process control. **How to reproduce** * In this case, install the latest version of Gitlist on a server which hosts git repositories with a few hundred files in a directory (so that `git ls-tree -l master` will output >8000 bytes of data) * Use the web interface to browse to that directory * See the Runtime Error being thrown (which is due to the exit code of the `git ls-tree` command not being 0) **Possible Solution** I was able to work around this problem (read: botch into submission) by replacing ``` $process->run(); if (!$process->isSuccessful()) { throw new ProcessFailedException($process); } return $process->getOutput(); ``` by ``` $oldCwd = getCwd(); chdir($repository->getPath()); $output= exec($this->getPath().' '.$command, $data); chdir($oldCwd); return implode("\n", $data); ``` But clearly this is not an acceptable solution :)
1.0
Process gets SIGPIPE signal - **Symfony version(s) affected**: Process v 2.5.0 **Description** I've come here from https://github.com/klaussilveira/gitlist/issues/839, which is a description of the same problem with Gitlist, a software that depends on Gitter which in turn relies on some Symfony components, among others Process 2.5.0. I think the problem might lie with the Symfony Process component in general, hence I'm posting here. The issue is that when the process -- in this case `git ls-tree` -- produces over ~8000 bytes of output, something happens which causes a SIGPIPE signal to be sent (13), which puts the exit code at 141 and therefore causes `isSuccessful()` to return false. Note that I was only able to test this on HP UX on Itanium64, which admittedly is not a very common system. Also note that HP UX' version of PHP is not compiled with sigchild support, meaning they cannot do anything with process control. **How to reproduce** * In this case, install the latest version of Gitlist on a server which hosts git repositories with a few hundred files in a directory (so that `git ls-tree -l master` will output >8000 bytes of data) * Use the web interface to browse to that directory * See the Runtime Error being thrown (which is due to the exit code of the `git ls-tree` command not being 0) **Possible Solution** I was able to work around this problem (read: botch into submission) by replacing ``` $process->run(); if (!$process->isSuccessful()) { throw new ProcessFailedException($process); } return $process->getOutput(); ``` by ``` $oldCwd = getCwd(); chdir($repository->getPath()); $output= exec($this->getPath().' '.$command, $data); chdir($oldCwd); return implode("\n", $data); ``` But clearly this is not an acceptable solution :)
process
process gets sigpipe signal symfony version s affected process v description i ve come here from which is a description of the same problem with gitlist a software that depends on gitter which in turn relies on some symfony components among others process i think the problem might lie with the symfony process component in general hence i m posting here the issue is that when the process in this case git ls tree produces over bytes of output something happens which causes a sigpipe signal to be sent which puts the exit code at and therefore causes issuccessful to return false note that i was only able to test this on hp ux on which admittedly is not a very common system also note that hp ux version of php is not compiled with sigchild support meaning they cannot do anything with process control how to reproduce in this case install the latest version of gitlist on a server which hosts git repositories with a few hundred files in a directory so that git ls tree l master will output bytes of data use the web interface to browse to that directory see the runtime error being thrown which is due to the exit code of the git ls tree command not being possible solution i was able to work around this problem read botch into submission by replacing process run if process issuccessful throw new processfailedexception process return process getoutput by oldcwd getcwd chdir repository getpath output exec this getpath command data chdir oldcwd return implode n data but clearly this is not an acceptable solution
1
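The Symfony report above pins the failure at roughly 8000 bytes of child output, which is about the size of a POSIX pipe buffer: once the buffer fills and nobody drains it, the child blocks, and if the reading end disappears it takes SIGPIPE (signal 13, hence exit status 141). As a language-neutral sketch of the safe pattern (Python here, not the Symfony Process component), capturing the output lets the child write everything and exit cleanly:

```python
import subprocess
import sys

# The child prints well past the ~8 KB pipe-buffer threshold from the report.
child_code = "print('x' * 20000)"

# capture_output=True makes the parent drain stdout continuously, so the
# child can write everything and exit 0 instead of blocking or dying
# with SIGPIPE (which would surface as a negative returncode on POSIX).
result = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
    check=False,
)
```

With this pattern `result.returncode` is 0 and the full output is available in `result.stdout`; a negative return code (e.g. -13 on POSIX) would indicate the child was killed by SIGPIPE.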
2,349
5,157,359,371
IssuesEvent
2017-01-16 06:17:03
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
closed
Use labels in same style as Bazel
type: process
I ran across some interesting github issue label usage in the Bazel project: https://github.com/bazelbuild/bazel/labels I like how they have their labels set up, any objects if we migrate to roughly the same style of labels? I think they have something of a similar system as us, but self-documented a bit more in the label names. I think we can also remove some of the label usage documentation we have migrating to the bazel project labels.
1.0
Use labels in same style as Bazel - I ran across some interesting github issue label usage in the Bazel project: https://github.com/bazelbuild/bazel/labels I like how they have their labels set up, any objects if we migrate to roughly the same style of labels? I think they have something of a similar system as us, but self-documented a bit more in the label names. I think we can also remove some of the label usage documentation we have migrating to the bazel project labels.
process
use labels in same style as bazel i ran across some interesting github issue label usage in the bazel project i like how they have their labels set up any objects if we migrate to roughly the same style of labels i think they have something of a similar system as us but self documented a bit more in the label names i think we can also remove some of the label usage documentation we have migrating to the bazel project labels
1
32,856
6,130,797,074
IssuesEvent
2017-06-24 09:09:54
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Create cookbooks / playbooks / cheatsheets for common use cases and scenarios
kind/documentation priority/important-soon sig/docs
Deployment, troubleshooting, etc.
1.0
Create cookbooks / playbooks / cheatsheets for common use cases and scenarios - Deployment, troubleshooting, etc.
non_process
create cookbooks playbooks cheatsheets for common use cases and scenarios deployment troubleshooting etc
0
20,889
27,714,590,662
IssuesEvent
2023-03-14 16:09:34
OliverKillane/Imperial-Computing-Notes
https://api.github.com/repos/OliverKillane/Imperial-Computing-Notes
opened
Complete "Algorithms and Indicies" Chapter
60029 - Data Processing Systems Content Missing
Specifically: - Sorting algorithm implementations - Database normalisation - Partitioning - B* Trees
1.0
Complete "Algorithms and Indicies" Chapter - Specifically: - Sorting algorithm implementations - Database normalisation - Partitioning - B* Trees
process
complete algorithms and indicies chapter specifically sorting algorithm implementations database normalisation partitioning b trees
1
131,734
18,249,187,653
IssuesEvent
2021-10-02 00:12:37
ghc-dev/Brenda-Ruiz
https://api.github.com/repos/ghc-dev/Brenda-Ruiz
closed
CVE-2020-9488 (Low) detected in log4j-core-2.8.2.jar - autoclosed
security vulnerability
## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to dependency file: Brenda-Ruiz/pom.xml</p> <p>Path to vulnerable library: epository/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Brenda-Ruiz/commit/a063c6ce8a94718a0f3292d86e371a5ab1d3083a">a063c6ce8a94718a0f3292d86e371a5ab1d3083a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. <p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-9488 (Low) detected in log4j-core-2.8.2.jar - autoclosed - ## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to dependency file: Brenda-Ruiz/pom.xml</p> <p>Path to vulnerable library: epository/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Brenda-Ruiz/commit/a063c6ce8a94718a0f3292d86e371a5ab1d3083a">a063c6ce8a94718a0f3292d86e371a5ab1d3083a</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. 
<p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. 
This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve low detected in core jar autoclosed cve low severity vulnerability vulnerable library core jar the apache implementation library home page a href path to dependency file brenda ruiz pom xml path to vulnerable library epository org apache logging core core jar dependency hierarchy x core jar vulnerable library found in head commit a href found in base branch master vulnerability details improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache logging core isminimumfixversionavailable true minimumfixversion org apache logging core basebranches vulnerabilityidentifier cve vulnerabilitydetails improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender vulnerabilityurl
0
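The Log4j record above concerns a TLS client that validated the certificate chain but not that the certificate matched the host, which is exactly what enables the described man-in-the-middle interception. The defence is the same in any stack: require both chain verification and hostname checking. A small Python illustration of the two knobs involved (the concept, not the Log4j fix itself):

```python
import ssl

# A default client-side context enforces both of the checks; the Log4j
# SMTP appender was effectively missing the hostname one.
ctx = ssl.create_default_context()

hostname_checked = ctx.check_hostname                   # True by default
chain_verified = ctx.verify_mode == ssl.CERT_REQUIRED   # True by default
```

Disabling `check_hostname` while keeping `CERT_REQUIRED` reproduces the vulnerable shape: any validly signed certificate, for any host, would be accepted.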
803
3,283,337,404
IssuesEvent
2015-10-28 12:08:19
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Missing reltable links with DITA 1.3 schemas
bug DITA 1.3 P1 preprocess
Docs output generated with recent builds of the OT `develop` branch no longer include related links from reltable entries. Links derived from `<related-links>` elements in topics are generated correctly. A bit of sleuthing with `git bisect` suggests the regression may have been introduced with a71c54f, which adds the DITA 1.3 schemas.
1.0
Missing reltable links with DITA 1.3 schemas - Docs output generated with recent builds of the OT `develop` branch no longer include related links from reltable entries. Links derived from `<related-links>` elements in topics are generated correctly. A bit of sleuthing with `git bisect` suggests the regression may have been introduced with a71c54f, which adds the DITA 1.3 schemas.
process
missing reltable links with dita schemas docs output generated with recent builds of the ot develop branch no longer include related links from reltable entries links derived from elements in topics are generated correctly a bit of sleuthing with git bisect suggests the regression may have been introduced with which adds the dita schemas
1
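The DITA-OT record above found the culprit commit with `git bisect`, which is binary search over history: probe the midpoint, halve the suspect range, repeat. A self-contained sketch of the same idea over a hypothetical commit list:

```python
def first_bad(commits, is_bad):
    """Return the index of the first commit for which is_bad() is true.

    Assumes commits[0] is good and commits[-1] is bad, like `git bisect`
    with one known-good and one known-bad endpoint.
    """
    lo, hi = 0, len(commits) - 1   # lo is known good, hi is known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid               # regression is at mid or earlier
        else:
            lo = mid               # regression is after mid
    return hi


# Hypothetical history: the regression lands at commit index 6.
history = [f"commit-{i}" for i in range(10)]
culprit = first_bad(history, lambda c: int(c.split("-")[1]) >= 6)
```

With ten commits this needs at most four probes, which is why bisecting scales to long histories.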
396,477
11,709,655,829
IssuesEvent
2020-03-08 20:05:25
tensorwerk/hangar-py
https://api.github.com/repos/tensorwerk/hangar-py
opened
[BUG REPORT] Diff status always returns CLEAN inside CM
Bug: Awaiting Priority Assignment
**Describe the bug** Diff status always returns CLEAN inside CM **Severity** <!--- fill in the space between `[ ]` with and `x` (ie. `[x]`) ---> Select an option: - [ ] Data Corruption / Loss of Any Kind - [x] Unexpected Behavior, Exceptions or Error Thrown - [ ] Performance Bottleneck **To Reproduce** ```python from hangar import Repository import numpy as np repo = Repository('.') repo.init(user_name='me', user_email='a@b.c', remove_old=True) co = repo.checkout(write=True) co.add_ndarray_column('x', prototype=np.array([1])) co.commit('added columns') co.close() co = repo.checkout(write=True) x = co.columns['x'] with x: for i in range(10): x[i] = np.array([i]) print(co.diff.status()) # this should return DIRTY but returns CLEAN print(co.diff.status()) # this returns DIRTY as expected co.commit('adding file') ```
1.0
[BUG REPORT] Diff status always returns CLEAN inside CM - **Describe the bug** Diff status always returns CLEAN inside CM **Severity** <!--- fill in the space between `[ ]` with and `x` (ie. `[x]`) ---> Select an option: - [ ] Data Corruption / Loss of Any Kind - [x] Unexpected Behavior, Exceptions or Error Thrown - [ ] Performance Bottleneck **To Reproduce** ```python from hangar import Repository import numpy as np repo = Repository('.') repo.init(user_name='me', user_email='a@b.c', remove_old=True) co = repo.checkout(write=True) co.add_ndarray_column('x', prototype=np.array([1])) co.commit('added columns') co.close() co = repo.checkout(write=True) x = co.columns['x'] with x: for i in range(10): x[i] = np.array([i]) print(co.diff.status()) # this should return DIRTY but returns CLEAN print(co.diff.status()) # this returns DIRTY as expected co.commit('adding file') ```
non_process
diff status always returns clean inside cm describe the bug diff status always returns clean inside cm severity select an option data corruption loss of any kind unexpected behavior exceptions or error thrown performance bottleneck to reproduce python from hangar import repository import numpy as np repo repository repo init user name me user email a b c remove old true co repo checkout write true co add ndarray column x prototype np array co commit added columns co close co repo checkout write true x co columns with x for i in range x np array print co diff status this should return dirty but returns clean print co diff status this returns dirty as expected co commit adding file
0
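The hangar report above reduces to a familiar pattern: using the column as a context manager batches writes, and `diff.status()` only observes them once the batch is flushed when the `with` block exits. A simplified stand-alone model of that behaviour (hypothetical toy classes, not the hangar API):

```python
class Store:
    """Toy backing store: status() only sees writes that have been flushed."""

    def __init__(self):
        self.staged = {}

    def status(self):
        return "DIRTY" if self.staged else "CLEAN"


class Column:
    """Context manager that buffers writes locally and flushes on exit,
    mimicking the batching the hangar column does inside `with x:`."""

    def __init__(self, store):
        self.store = store
        self.buffer = {}

    def __enter__(self):
        return self

    def __setitem__(self, key, value):
        self.buffer[key] = value               # held locally, store not told yet

    def __exit__(self, *exc):
        self.store.staged.update(self.buffer)  # flush happens here
        self.buffer.clear()
        return False


store = Store()
col = Column(store)
with col:
    for i in range(10):
        col[i] = i
    inside = store.status()   # "CLEAN": writes still sit in the buffer
after = store.status()        # "DIRTY": __exit__ flushed the batch
```

In this model the status queried inside the block is `CLEAN` and only flips to `DIRTY` after exit, which matches the surprising behaviour in the report.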
256,998
22,141,211,667
IssuesEvent
2022-06-03 07:07:06
streamnative/pulsar
https://api.github.com/repos/streamnative/pulsar
opened
ISSUE-15916: Flaky-test: AdminApiTransactionTest.testGetPendingAckInternalStats
component/test flaky-tests
Original Issue: apache/pulsar#15916 --- AdminApiTransactionTest.testGetPendingAckInternalStats is flaky. It fails sporadically. [example failure](https://github.com/apache/pulsar/runs/6713195445?check_suite_focus=true#step:10:2573) ``` Error: Tests run: 32, Failures: 1, Errors: 0, Skipped: 30, Time elapsed: 22.768 s <<< FAILURE! - in org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest Error: testGetPendingAckInternalStats(org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest) Time elapsed: 2.63 s <<< FAILURE! java.lang.AssertionError: expected object to not be null at org.testng.Assert.fail(Assert.java:99) at org.testng.Assert.assertNotNull(Assert.java:942) at org.testng.Assert.assertNotNull(Assert.java:926) at org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest.verifyManagedLegerInternalStats(AdminApiTransactionTest.java:608) at org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest.testGetPendingAckInternalStats(AdminApiTransactionTest.java:496) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132) at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ```
2.0
ISSUE-15916: Flaky-test: AdminApiTransactionTest.testGetPendingAckInternalStats - Original Issue: apache/pulsar#15916 --- AdminApiTransactionTest.testGetPendingAckInternalStats is flaky. It fails sporadically. [example failure](https://github.com/apache/pulsar/runs/6713195445?check_suite_focus=true#step:10:2573) ``` Error: Tests run: 32, Failures: 1, Errors: 0, Skipped: 30, Time elapsed: 22.768 s <<< FAILURE! - in org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest Error: testGetPendingAckInternalStats(org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest) Time elapsed: 2.63 s <<< FAILURE! java.lang.AssertionError: expected object to not be null at org.testng.Assert.fail(Assert.java:99) at org.testng.Assert.assertNotNull(Assert.java:942) at org.testng.Assert.assertNotNull(Assert.java:926) at org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest.verifyManagedLegerInternalStats(AdminApiTransactionTest.java:608) at org.apache.pulsar.broker.admin.v3.AdminApiTransactionTest.testGetPendingAckInternalStats(AdminApiTransactionTest.java:496) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132) at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73) at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) ```
non_process
issue flaky test adminapitransactiontest testgetpendingackinternalstats original issue apache pulsar adminapitransactiontest testgetpendingackinternalstats is flaky it fails sporadically error tests run failures errors skipped time elapsed s failure in org apache pulsar broker admin adminapitransactiontest error testgetpendingackinternalstats org apache pulsar broker admin adminapitransactiontest time elapsed s failure java lang assertionerror expected object to not be null at org testng assert fail assert java at org testng assert assertnotnull assert java at org testng assert assertnotnull assert java at org apache pulsar broker admin adminapitransactiontest verifymanagedlegerinternalstats adminapitransactiontest java at org apache pulsar broker admin adminapitransactiontest testgetpendingackinternalstats adminapitransactiontest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal invokemethodrunnable runone invokemethodrunnable java at org testng internal invokemethodrunnable call invokemethodrunnable java at org testng internal invokemethodrunnable call invokemethodrunnable java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java
0
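The failure mode in the record above — asserting non-null on state that is populated asynchronously — is a classic source of test flakiness. A minimal, language-agnostic sketch in Python of the usual remedy: poll until the value appears or a deadline passes, instead of asserting immediately. `fetch_stats` is a hypothetical stand-in for the stats lookup, not anything from the Pulsar codebase:

```python
import time

def await_not_none(fetch, timeout=5.0, interval=0.1):
    """Poll `fetch` until it returns a non-None value or the deadline passes.

    Returns the value, or raises AssertionError on timeout -- the same failure
    as the assertNotNull in the report above, but only after giving the
    asynchronous state time to materialize.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = fetch()
        if value is not None:
            return value
        time.sleep(interval)
    raise AssertionError("expected object to not be null (timed out)")

# Hypothetical stand-in: the stats only become available on the third poll.
_calls = {"n": 0}
def fetch_stats():
    _calls["n"] += 1
    return {"pendingAck": 0} if _calls["n"] >= 3 else None
```

In JVM test suites the same pattern is available off the shelf (e.g. Awaitility), which is the usual fix for this class of flaky test.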
7,502
10,586,013,899
IssuesEvent
2019-10-08 18:47:23
googleapis/gapic-generator
https://api.github.com/repos/googleapis/gapic-generator
opened
go: update Go version used in CI
Lang: Go Priority: P3 type: process
The `go-1.10-test` [job](https://github.com/googleapis/gapic-generator/blob/1bfe1a98586744038019f1d0bd90817a170f22d2/.circleci/config.yml#L670) should be updated to Go 1.13. This will require some refactoring, as the test job is heavily based on pre-Go-modules code structure.
1.0
go: update Go version used in CI - The `go-1.10-test` [job](https://github.com/googleapis/gapic-generator/blob/1bfe1a98586744038019f1d0bd90817a170f22d2/.circleci/config.yml#L670) should be updated to Go 1.13. This will require some refactoring, as the test job is heavily based on pre-Go-modules code structure.

process
go update go version used in ci the go test should be updated to go this will require some refactoring as the test job is heavily based on pre go modules code structure
1
387,699
11,466,732,580
IssuesEvent
2020-02-08 00:09:19
lfrankel/GGJ2020
https://api.github.com/repos/lfrankel/GGJ2020
closed
When you win the level, the "you won the level!" screen doesn't display.
Linux bug done high priority
If you press enter, it does take you back to the menu
1.0
When you win the level, the "you won the level!" screen doesn't display. - If you press enter, it does take you back to the menu
non_process
when you win the level the you won the level screen doesn t display if you press enter it does take you back to the menu
0
3,811
6,796,081,552
IssuesEvent
2017-11-01 17:47:33
loogart/Project-Mountain
https://api.github.com/repos/loogart/Project-Mountain
reopened
Country codes
Business process
Should be in a +XX format, not 001, etc... For example: Canada number: +1 (613) 555-8989 Mexico Number: +52 (area code) 555-5565 etc https://en.wikipedia.org/wiki/List_of_country_calling_codes
1.0
Country codes - Should be in a +XX format, not 001, etc... For example: Canada number: +1 (613) 555-8989 Mexico Number: +52 (area code) 555-5565 etc https://en.wikipedia.org/wiki/List_of_country_calling_codes
process
country codes should be in a xx format not etc for example canada number mexico number area code etc
1
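The formatting rule in the record above — international `+XX` prefix instead of `001`-style dialling prefixes — can be sketched as follows. This is a minimal illustration built only around the Canada example from the issue, not a general E.164 formatter; it assumes a fixed layout of country code + 3-digit area code + 7-digit local number, which does not hold for every country (Mexican area codes, for instance, can be 2 or 3 digits):

```python
def to_plus_format(digits: str, country_code: str) -> str:
    """Render a bare digit string as '+CC (area) xxx-xxxx'.

    Assumes `digits` is country code + 3-digit area code + 7-digit local
    number, as in the Canada example from the issue above.
    """
    rest = digits[len(country_code):]
    area, local = rest[:3], rest[3:]
    return f"+{country_code} ({area}) {local[:3]}-{local[3:]}"

# Canada example from the issue: +1 (613) 555-8989
print(to_plus_format("16135558989", "1"))
```

For production use, a library backed by the actual per-country numbering plans (e.g. Google's libphonenumber) is the safer choice.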
19,800
26,186,765,476
IssuesEvent
2023-01-03 02:11:00
hsmusic/hsmusic-data
https://api.github.com/repos/hsmusic/hsmusic-data
closed
Update/write content pages for release
type: involved process
- [x] News entry - [x] About & Credits - [x] Review changelog (new features - hsmusic-wiki changes) - [x] Other stuff???
1.0
Update/write content pages for release - - [x] News entry - [x] About & Credits - [x] Review changelog (new features - hsmusic-wiki changes) - [x] Other stuff???
process
update write content pages for release news entry about credits review changelog new features hsmusic wiki changes other stuff
1
60,341
14,787,420,516
IssuesEvent
2021-01-12 07:34:46
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
SB > Study List UI
Bug P2 Process: Tested dev Study builder UI
Increase left margin for the Study ID column Space out the column widths in a proportionate manner to be well-distributed across the screen ![SB list](https://user-images.githubusercontent.com/63093896/102919710-5b7fcd80-44af-11eb-9820-2318da914602.JPG)
1.0
SB > Study List UI - Increase left margin for the Study ID column Space out the column widths in a proportionate manner to be well-distributed across the screen ![SB list](https://user-images.githubusercontent.com/63093896/102919710-5b7fcd80-44af-11eb-9820-2318da914602.JPG)
non_process
sb study list ui increase left margin for the study id column space out the column widths in a proportionate manner to be well distributed across the screen
0
7,301
10,443,048,468
IssuesEvent
2019-09-18 14:11:43
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
opened
PH Calc: Flue for HVAC
Calculator Process Heating
Develop calculator from ESC calculator. Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 5 HVAC air heating using
1.0
PH Calc: Flue for HVAC - Develop calculator from ESC calculator. Excel file found in Dropbox > AMO Tools > Other Tools > Energy Solutions Center Tools escenter.org > No 5 HVAC air heating using
process
ph calc flue for hvac develop calculator from esc calculator excel file found in dropbox amo tools other tools energy solutions center tools escenter org no hvac air heating using
1
11,912
14,699,961,453
IssuesEvent
2021-01-04 09:27:05
threefoldfoundation/tft-stellar
https://api.github.com/repos/threefoldfoundation/tft-stellar
closed
protect activation service with a secret token
priority_critical process_wontfix type_feature
related to https://github.com/threefoldtech/home/issues/989 implementation should add another arg (token) that should match a value previously defined in `j.core.config.get('TF_TRUSTED_SERVICE_TOKEN')` if so continue with activation
1.0
protect activation service with a secret token - related to https://github.com/threefoldtech/home/issues/989 implementation should add another arg (token) that should match a value previously defined in `j.core.config.get('TF_TRUSTED_SERVICE_TOKEN')` if so continue with activation
process
protect activation service with a secret token related to implementation should add another arg token that should match a value previously defined in j core config get tf trusted service token if so continue with activation
1
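The guard described in the record above — reject the request unless the caller supplies a token matching a previously configured secret, then continue with activation — might look like the sketch below. `get_config` is a plain stand-in for `j.core.config.get('TF_TRUSTED_SERVICE_TOKEN')`, and the constant-time comparison is an added hardening assumption, not something the issue specifies:

```python
import hmac

def get_config(key: str) -> str:
    # Hypothetical stand-in for j.core.config.get('TF_TRUSTED_SERVICE_TOKEN').
    return {"TF_TRUSTED_SERVICE_TOKEN": "s3cret"}[key]

def activate(wallet_address: str, token: str) -> str:
    """Continue with activation only when the caller's token matches the secret."""
    expected = get_config("TF_TRUSTED_SERVICE_TOKEN")
    # hmac.compare_digest avoids leaking the secret through timing differences.
    if not hmac.compare_digest(token, expected):
        raise PermissionError("invalid service token")
    return f"activated {wallet_address}"
```

The extra `token` argument mirrors the issue's request; everything else (secret value, error type) is illustrative.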
4,009
6,937,797,356
IssuesEvent
2017-12-04 07:17:14
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Feedback for the map-first processing functionality
preprocess2
Using DITA OT 2.5.2. We have a DITA OT WebHelp plugin in which based on the topic: http://www.dita-ot.org/dev/dev_ref/map-first-preprocessing.html I made changes in its build files to use "preprocess2" instead of "preprocess". I encountered two problems when building the output: 1) The "user.input.file" property is no longer defined and we made use of it. 2) The "fullditatopic.list" property is no longer defined and we also make use of it. The "debug-filter" target defines lots of such properties. For compatibility reasons could maybe some of these definitions be also copied to an equivalent target in the new preprocessing stage?
1.0
Feedback for the map-first processing functionality - Using DITA OT 2.5.2. We have a DITA OT WebHelp plugin in which based on the topic: http://www.dita-ot.org/dev/dev_ref/map-first-preprocessing.html I made changes in its build files to use "preprocess2" instead of "preprocess". I encountered two problems when building the output: 1) The "user.input.file" property is no longer defined and we made use of it. 2) The "fullditatopic.list" property is no longer defined and we also make use of it. The "debug-filter" target defines lots of such properties. For compatibility reasons could maybe some of these definitions be also copied to an equivalent target in the new preprocessing stage?
process
feedback for the map first processing functionality using dita ot we have a dita ot webhelp plugin in which based on the topic i made changes in its build files to use instead of preprocess i encountered two problems when building the output the user input file property is no longer defined and we made use of it the fullditatopic list property is no longer defined and we also make use of it the debug filter target defines lots of such properties for compatibility reasons could maybe some of these definitions be also copied to an equivalent target in the new preprocessing stage
1
13,676
16,420,022,248
IssuesEvent
2021-05-19 11:26:39
Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
https://api.github.com/repos/Bedrohung-der-Bienen/Transformationsfelder-Digitalisierung
closed
Add a checkbox to the login
bootstrap frontend login process
# Scenario: The user wants to stay signed in. - **Given** the user has arrived at the start page and signs in - **When** the user signs in - **Then** they click on "stay signed in" - **And** enter their credentials The user thus stays signed in and does not have to sign in again for a certain period of time.
1.0
Add a checkbox to the login - # Scenario: The user wants to stay signed in. - **Given** the user has arrived at the start page and signs in - **When** the user signs in - **Then** they click on "stay signed in" - **And** enter their credentials The user thus stays signed in and does not have to sign in again for a certain period of time.
process
add a checkbox to the login scenario the user wants to stay signed in given the user has arrived at the start page and signs in when the user signs in then they click on stay signed in and enter their credentials the user thus stays signed in and does not have to sign in again for a certain period of time
1
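The remember-me scenario in the record above boils down to one behavioural difference: with the checkbox ticked, the session outlives the ordinary login for an extended period. A minimal sketch of issuing sessions with different lifetimes; both lifetime values are assumptions for illustration, not figures from the issue:

```python
from datetime import datetime, timedelta

SHORT_LIFETIME = timedelta(hours=1)     # ordinary login (assumed)
REMEMBER_LIFETIME = timedelta(days=30)  # "stay signed in" checked (assumed)

def create_session(user_id: str, remember_me: bool, now: datetime) -> dict:
    """Return a session record whose expiry depends on the remember-me checkbox."""
    lifetime = REMEMBER_LIFETIME if remember_me else SHORT_LIFETIME
    return {"user": user_id, "expires": now + lifetime, "persistent": remember_me}

def is_valid(session: dict, now: datetime) -> bool:
    """A session is valid until its expiry timestamp passes."""
    return now < session["expires"]
```

In a real web stack this maps onto a persistent cookie with a long `Max-Age` versus a session cookie; the sketch only captures the expiry logic itself.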
15,984
9,663,811,605
IssuesEvent
2019-05-21 02:27:34
aspnet/AspNetCore
https://api.github.com/repos/aspnet/AspNetCore
closed
Custom claims is not persisting with ASP.NET Core Identity default claims
area-security
I have added two custom claims in the following login code: if (model.ApplicationUser != null) { SignInResult signInResult = await _signInManager.CheckPasswordSignInAsync(model.ApplicationUser, model.Password, lockoutOnFailure: false); if (signInResult.Succeeded) { User userInfo = await _userService.GetUserInfoByEmailAsync(model.Email); var customClaims = new[] { new Claim("UserId", userInfo.UserId.ToString()), // My custom claim new Claim("ProfileName", userInfo.ProfileName) // My custom claim }; var claimsPrincipal = await _signInManager.CreateUserPrincipalAsync(model.ApplicationUser); if (claimsPrincipal?.Identity is ClaimsIdentity claimsIdentity) { claimsIdentity.AddClaims(customClaims); } await _signInManager.Context.SignInAsync(IdentityConstants.ApplicationScheme, claimsPrincipal, new AuthenticationProperties { IsPersistent = model.RememberMe }); return RedirectToAction("Profile", "Learner"); } else { ModelState.AddModelError(string.Empty, "Either user name/email or password is invalid!"); return View(model); } } Problem is that initially everything works fine. After certain amount of time (Lets say 15 minutes) `UserId` and `ProfileName` are getting null although ASP.NET Core Identity default claims has value: var userId = User.FindFirst("UserId").Value // Getting null var profileName = User.FindFirst("ProfileName").Value // Getting null var id = User.FindFirst(ClaimTypes.NameIdentifier).Value // Has value var userName = User.FindFirst(ClaimTypes.Name).Value // Has value Is it a bug or am I missing something? Please help!
True
Custom claims is not persisting with ASP.NET Core Identity default claims - I have added two custom claims in the following login code: if (model.ApplicationUser != null) { SignInResult signInResult = await _signInManager.CheckPasswordSignInAsync(model.ApplicationUser, model.Password, lockoutOnFailure: false); if (signInResult.Succeeded) { User userInfo = await _userService.GetUserInfoByEmailAsync(model.Email); var customClaims = new[] { new Claim("UserId", userInfo.UserId.ToString()), // My custom claim new Claim("ProfileName", userInfo.ProfileName) // My custom claim }; var claimsPrincipal = await _signInManager.CreateUserPrincipalAsync(model.ApplicationUser); if (claimsPrincipal?.Identity is ClaimsIdentity claimsIdentity) { claimsIdentity.AddClaims(customClaims); } await _signInManager.Context.SignInAsync(IdentityConstants.ApplicationScheme, claimsPrincipal, new AuthenticationProperties { IsPersistent = model.RememberMe }); return RedirectToAction("Profile", "Learner"); } else { ModelState.AddModelError(string.Empty, "Either user name/email or password is invalid!"); return View(model); } } Problem is that initially everything works fine. After certain amount of time (Lets say 15 minutes) `UserId` and `ProfileName` are getting null although ASP.NET Core Identity default claims has value: var userId = User.FindFirst("UserId").Value // Getting null var profileName = User.FindFirst("ProfileName").Value // Getting null var id = User.FindFirst(ClaimTypes.NameIdentifier).Value // Has value var userName = User.FindFirst(ClaimTypes.Name).Value // Has value Is it a bug or am I missing something? Please help!
non_process
custom claims is not persisting with asp net core identity default claims i have added two custom claims in the following login code if model applicationuser null signinresult signinresult await signinmanager checkpasswordsigninasync model applicationuser model password lockoutonfailure false if signinresult succeeded user userinfo await userservice getuserinfobyemailasync model email var customclaims new new claim userid userinfo userid tostring my custom claim new claim profilename userinfo profilename my custom claim var claimsprincipal await signinmanager createuserprincipalasync model applicationuser if claimsprincipal identity is claimsidentity claimsidentity claimsidentity addclaims customclaims await signinmanager context signinasync identityconstants applicationscheme claimsprincipal new authenticationproperties ispersistent model rememberme return redirecttoaction profile learner else modelstate addmodelerror string empty either user name email or password is invalid return view model problem is that initially everything works fine after certain amount of time lets say minutes userid and profilename are getting null although asp net core identity default claims has value var userid user findfirst userid value getting null var profilename user findfirst profilename value getting null var id user findfirst claimtypes nameidentifier value has value var username user findfirst claimtypes name value has value is it a bug or am i missing something please help
0
111,348
4,469,123,588
IssuesEvent
2016-08-25 11:58:21
chrisdone/hindent
https://api.github.com/repos/chrisdone/hindent
closed
Trailing Haddock after constructors moved past "|"-separator
component: hindent priority: high type: bug
I'm working on an `hindent` style for the guts of Idris, but I'm encountering what I think might be a fundamental limitation. The following code: ``` data Binder b = Lam { binderTy :: !b {-^ type annotation for bound variable-}} -- ^ A function binding | Pi { binderImpl :: Maybe ImplicitInfo, binderTy :: !b, binderKind :: !b } -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. | Let { binderTy :: !b, binderVal :: b {-^ value for bound variable-}} -- ^ A binding that occurs in a @let@ expression | NLet { binderTy :: !b, binderVal :: b } -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. | Hole { binderTy :: !b} -- ^ A hole in a term under construction in the -- elaborator. If this is not filled during -- elaboration, it is an error. | GHole { envlen :: Int, localnames :: [Name], binderTy :: !b} -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. | Guess { binderTy :: !b, binderVal :: b } -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. | PVar { binderTy :: !b } -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) | PVTy { binderTy :: !b } -- ^ The type of a pattern binding deriving (Show, Eq, Ord, Functor, Foldable, Traversable, Data, Generic, Typeable) ``` is reformatted as ``` data Binder b = Lam { binderTy :: !b {-^ type annotation for bound variable-} } | -- ^ A function binding Pi { binderImpl :: Maybe ImplicitInfo, binderTy :: !b, binderKind :: !b } | -- ^ A binding that occurs in a function type expression, e.g. 
@(x:Int) -> ...@ The -- 'binderImpl' flag says whether it was a scoped implicit (i.e. forall bound) in the -- high level Idris, but otherwise has no relevance in TT. Let { binderTy :: !b , binderVal :: b {-^ value for bound variable-} } | -- ^ A binding that occurs in a @let@ expression NLet { binderTy :: !b, binderVal :: b } | -- ^ NLet is an intermediate product in the evaluator that's used for temporarily -- naming locals during reduction. It won't occur outside the evaluator. Hole { binderTy :: !b } | -- ^ A hole in a term under construction in the elaborator. If this is not filled -- during elaboration, it is an error. GHole { envlen :: Int, localnames :: [Name], binderTy :: !b } | -- ^ A saved TT hole that will later be converted to a top-level Idris metavariable -- applied to all elements of its local environment. Guess { binderTy :: !b, binderVal :: b } | -- ^ A provided value for a hole. It will later be substituted - the guess is to keep -- it computationally inert while working on other things if necessary. PVar { binderTy :: !b } | -- ^ A pattern variable (these are bound around terms that make up pattern-match -- clauses) PVTy { binderTy :: !b } -- ^ The type of a pattern binding deriving (Show, Eq, Ord, Functor, Foldable, Traversable, Data, Generic, Typeable) ``` in the `gibiansky` style, as ``` data Binder b = Lam {binderTy :: !b {-^ type annotation for bound variable-}} | -- ^ A function binding Pi {binderImpl :: Maybe ImplicitInfo ,binderTy :: !b ,binderKind :: !b} | -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. 
Let {binderTy :: !b ,binderVal :: b {-^ value for bound variable-}} | -- ^ A binding that occurs in a @let@ expression NLet {binderTy :: !b ,binderVal :: b} | -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. Hole {binderTy :: !b} | -- ^ A hole in a term under construction in the -- elaborator. If this is not filled during -- elaboration, it is an error. GHole {envlen :: Int ,localnames :: [Name] ,binderTy :: !b} | -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. Guess {binderTy :: !b ,binderVal :: b} | -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. PVar {binderTy :: !b} | -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) PVTy {binderTy :: !b} -- ^ The type of a pattern binding deriving (Show,Eq,Ord,Functor,Foldable,Traversable,Data,Generic,Typeable) ``` in the `fundamental` style, and as ``` data Binder b = Lam {binderTy :: !b {-^ type annotation for bound variable-}} | -- ^ A function binding Pi {binderImpl :: Maybe ImplicitInfo ,binderTy :: !b ,binderKind :: !b} | -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. Let {binderTy :: !b ,binderVal :: b {-^ value for bound variable-}} | -- ^ A binding that occurs in a @let@ expression NLet {binderTy :: !b ,binderVal :: b} | -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. Hole {binderTy :: !b} | -- ^ A hole in a term under construction in the -- elaborator. 
If this is not filled during -- elaboration, it is an error. GHole {envlen :: Int ,localnames :: [Name] ,binderTy :: !b} | -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. Guess {binderTy :: !b ,binderVal :: b} | -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. PVar {binderTy :: !b} | -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) PVTy {binderTy :: !b} -- ^ The type of a pattern binding deriving (Show,Eq,Ord,Functor,Foldable,Traversable,Data,Generic,Typeable) ``` in the `chris-done` style. All of these have in common that the trailing Haddock is moved after the `|` separator, removing it from what it documents. As far as I can tell, the AST is associating them with the following constructor. Is there a good way for a style to work around this? Or to instruct the parser to associate them with the preceding constructor when they begin with a `^`?
1.0
Trailing Haddock after constructors moved past "|"-separator - I'm working on an `hindent` style for the guts of Idris, but I'm encountering what I think might be a fundamental limitation. The following code: ``` data Binder b = Lam { binderTy :: !b {-^ type annotation for bound variable-}} -- ^ A function binding | Pi { binderImpl :: Maybe ImplicitInfo, binderTy :: !b, binderKind :: !b } -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. | Let { binderTy :: !b, binderVal :: b {-^ value for bound variable-}} -- ^ A binding that occurs in a @let@ expression | NLet { binderTy :: !b, binderVal :: b } -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. | Hole { binderTy :: !b} -- ^ A hole in a term under construction in the -- elaborator. If this is not filled during -- elaboration, it is an error. | GHole { envlen :: Int, localnames :: [Name], binderTy :: !b} -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. | Guess { binderTy :: !b, binderVal :: b } -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. 
| PVar { binderTy :: !b } -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) | PVTy { binderTy :: !b } -- ^ The type of a pattern binding deriving (Show, Eq, Ord, Functor, Foldable, Traversable, Data, Generic, Typeable) ``` is reformatted as ``` data Binder b = Lam { binderTy :: !b {-^ type annotation for bound variable-} } | -- ^ A function binding Pi { binderImpl :: Maybe ImplicitInfo, binderTy :: !b, binderKind :: !b } | -- ^ A binding that occurs in a function type expression, e.g. @(x:Int) -> ...@ The -- 'binderImpl' flag says whether it was a scoped implicit (i.e. forall bound) in the -- high level Idris, but otherwise has no relevance in TT. Let { binderTy :: !b , binderVal :: b {-^ value for bound variable-} } | -- ^ A binding that occurs in a @let@ expression NLet { binderTy :: !b, binderVal :: b } | -- ^ NLet is an intermediate product in the evaluator that's used for temporarily -- naming locals during reduction. It won't occur outside the evaluator. Hole { binderTy :: !b } | -- ^ A hole in a term under construction in the elaborator. If this is not filled -- during elaboration, it is an error. GHole { envlen :: Int, localnames :: [Name], binderTy :: !b } | -- ^ A saved TT hole that will later be converted to a top-level Idris metavariable -- applied to all elements of its local environment. Guess { binderTy :: !b, binderVal :: b } | -- ^ A provided value for a hole. It will later be substituted - the guess is to keep -- it computationally inert while working on other things if necessary. 
PVar { binderTy :: !b } | -- ^ A pattern variable (these are bound around terms that make up pattern-match -- clauses) PVTy { binderTy :: !b } -- ^ The type of a pattern binding deriving (Show, Eq, Ord, Functor, Foldable, Traversable, Data, Generic, Typeable) ``` in the `gibiansky` style, as ``` data Binder b = Lam {binderTy :: !b {-^ type annotation for bound variable-}} | -- ^ A function binding Pi {binderImpl :: Maybe ImplicitInfo ,binderTy :: !b ,binderKind :: !b} | -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. Let {binderTy :: !b ,binderVal :: b {-^ value for bound variable-}} | -- ^ A binding that occurs in a @let@ expression NLet {binderTy :: !b ,binderVal :: b} | -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. Hole {binderTy :: !b} | -- ^ A hole in a term under construction in the -- elaborator. If this is not filled during -- elaboration, it is an error. GHole {envlen :: Int ,localnames :: [Name] ,binderTy :: !b} | -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. Guess {binderTy :: !b ,binderVal :: b} | -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. 
PVar {binderTy :: !b} | -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) PVTy {binderTy :: !b} -- ^ The type of a pattern binding deriving (Show,Eq,Ord,Functor,Foldable,Traversable,Data,Generic,Typeable) ``` in the `fundamental` style, and as ``` data Binder b = Lam {binderTy :: !b {-^ type annotation for bound variable-}} | -- ^ A function binding Pi {binderImpl :: Maybe ImplicitInfo ,binderTy :: !b ,binderKind :: !b} | -- ^ A binding that occurs in a function type -- expression, e.g. @(x:Int) -> ...@ The 'binderImpl' -- flag says whether it was a scoped implicit -- (i.e. forall bound) in the high level Idris, but -- otherwise has no relevance in TT. Let {binderTy :: !b ,binderVal :: b {-^ value for bound variable-}} | -- ^ A binding that occurs in a @let@ expression NLet {binderTy :: !b ,binderVal :: b} | -- ^ NLet is an intermediate product in the evaluator -- that's used for temporarily naming locals during -- reduction. It won't occur outside the evaluator. Hole {binderTy :: !b} | -- ^ A hole in a term under construction in the -- elaborator. If this is not filled during -- elaboration, it is an error. GHole {envlen :: Int ,localnames :: [Name] ,binderTy :: !b} | -- ^ A saved TT hole that will later be converted to a -- top-level Idris metavariable applied to all -- elements of its local environment. Guess {binderTy :: !b ,binderVal :: b} | -- ^ A provided value for a hole. It will later be -- substituted - the guess is to keep it -- computationally inert while working on other things -- if necessary. PVar {binderTy :: !b} | -- ^ A pattern variable (these are bound around terms -- that make up pattern-match clauses) PVTy {binderTy :: !b} -- ^ The type of a pattern binding deriving (Show,Eq,Ord,Functor,Foldable,Traversable,Data,Generic,Typeable) ``` in the `chris-done` style. All of these have in common that the trailing Haddock is moved after the `|` separator, removing it from what it documents. 
As far as I can tell, the AST is associating them with the following constructor. Is there a good way for a style to work around this? Or to instruct the parser to associate them with the preceding constructor when they begin with a `^`?
non_process
trailing haddock after constructors moved past separator i m working on an hindent style for the guts of idris but i m encountering what i think might be a fundamental limitation the following code data binder b lam binderty b type annotation for bound variable a function binding pi binderimpl maybe implicitinfo binderty b binderkind b a binding that occurs in a function type expression e g x int the binderimpl flag says whether it was a scoped implicit i e forall bound in the high level idris but otherwise has no relevance in tt let binderty b binderval b value for bound variable a binding that occurs in a let expression nlet binderty b binderval b nlet is an intermediate product in the evaluator that s used for temporarily naming locals during reduction it won t occur outside the evaluator hole binderty b a hole in a term under construction in the elaborator if this is not filled during elaboration it is an error ghole envlen int localnames binderty b a saved tt hole that will later be converted to a top level idris metavariable applied to all elements of its local environment guess binderty b binderval b a provided value for a hole it will later be substituted the guess is to keep it computationally inert while working on other things if necessary pvar binderty b a pattern variable these are bound around terms that make up pattern match clauses pvty binderty b the type of a pattern binding deriving show eq ord functor foldable traversable data generic typeable is reformatted as data binder b lam binderty b type annotation for bound variable a function binding pi binderimpl maybe implicitinfo binderty b binderkind b a binding that occurs in a function type expression e g x int the binderimpl flag says whether it was a scoped implicit i e forall bound in the high level idris but otherwise has no relevance in tt let binderty b binderval b value for bound variable a binding that occurs in a let expression nlet binderty b binderval b nlet is an intermediate product 
in the evaluator that s used for temporarily naming locals during reduction it won t occur outside the evaluator hole binderty b a hole in a term under construction in the elaborator if this is not filled during elaboration it is an error ghole envlen int localnames binderty b a saved tt hole that will later be converted to a top level idris metavariable applied to all elements of its local environment guess binderty b binderval b a provided value for a hole it will later be substituted the guess is to keep it computationally inert while working on other things if necessary pvar binderty b a pattern variable these are bound around terms that make up pattern match clauses pvty binderty b the type of a pattern binding deriving show eq ord functor foldable traversable data generic typeable in the gibiansky style as data binder b lam binderty b type annotation for bound variable a function binding pi binderimpl maybe implicitinfo binderty b binderkind b a binding that occurs in a function type expression e g x int the binderimpl flag says whether it was a scoped implicit i e forall bound in the high level idris but otherwise has no relevance in tt let binderty b binderval b value for bound variable a binding that occurs in a let expression nlet binderty b binderval b nlet is an intermediate product in the evaluator that s used for temporarily naming locals during reduction it won t occur outside the evaluator hole binderty b a hole in a term under construction in the elaborator if this is not filled during elaboration it is an error ghole envlen int localnames binderty b a saved tt hole that will later be converted to a top level idris metavariable applied to all elements of its local environment guess binderty b binderval b a provided value for a hole it will later be substituted the guess is to keep it computationally inert while working on other things if necessary pvar binderty b a pattern variable these are bound around terms that make up pattern match clauses 
pvty binderty b the type of a pattern binding deriving show eq ord functor foldable traversable data generic typeable in the fundamental style and as data binder b lam binderty b type annotation for bound variable a function binding pi binderimpl maybe implicitinfo binderty b binderkind b a binding that occurs in a function type expression e g x int the binderimpl flag says whether it was a scoped implicit i e forall bound in the high level idris but otherwise has no relevance in tt let binderty b binderval b value for bound variable a binding that occurs in a let expression nlet binderty b binderval b nlet is an intermediate product in the evaluator that s used for temporarily naming locals during reduction it won t occur outside the evaluator hole binderty b a hole in a term under construction in the elaborator if this is not filled during elaboration it is an error ghole envlen int localnames binderty b a saved tt hole that will later be converted to a top level idris metavariable applied to all elements of its local environment guess binderty b binderval b a provided value for a hole it will later be substituted the guess is to keep it computationally inert while working on other things if necessary pvar binderty b a pattern variable these are bound around terms that make up pattern match clauses pvty binderty b the type of a pattern binding deriving show eq ord functor foldable traversable data generic typeable in the chris done style all of these have in common that the trailing haddock is moved after the separator removing it from what it documents as far as i can tell the ast is associating them with the following constructor is there a good way for a style to work around this or to instruct the parser to associate them with the preceding constructor when they begin with a
0
491,989
14,174,864,805
IssuesEvent
2020-11-12 20:36:36
eclipse-ee4j/openmq
https://api.github.com/repos/eclipse-ee4j/openmq
closed
Document new MQ ConnectionFactory property imqSocketConnectTimeout
Component: doc ERR: Assignee Priority: Major Type: Sub-task
Please update the MQ admin guide to reflect the changes described in #87. Please see that issue for details. #### Affected Versions [4.5.1, 5.0]
1.0
Document new MQ ConnectionFactory property imqSocketConnectTimeout - Please update the MQ admin guide to reflect the changes described in #87. Please see that issue for details. #### Affected Versions [4.5.1, 5.0]
non_process
document new mq connectionfactory property imqsocketconnecttimeout please update the mq admin guide to reflect the changes described in please see that issue for details affected versions
0
10,553
13,340,229,945
IssuesEvent
2020-08-28 14:08:00
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Need more information on how to get the fully-qualified Id of Marketplace tasks
Pri1 devops-cicd-process/tech devops/prod doc-enhancement
In the Custom tasks section when you mention Marketplace tasks a more elaborate description on how to refer these tasks would be very helpful. It isn't obvious - at least for me - to get the fully-qualified name of a downloaded task extension. A sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link. BTW, I still could not get the ids, what I have after hours of searching is I just a 'trick' of creating a classic pipeline adding the task and exporting it to YAML. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8098f527-ebdf-60d5-3989-5228b7a207c1 * Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6 * Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Need more information on how to get the fully-qualified Id of Marketplace tasks - In the Custom tasks section when you mention Marketplace tasks a more elaborate description on how to refer these tasks would be very helpful. It isn't obvious - at least for me - to get the fully-qualified name of a downloaded task extension. A sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link. BTW, I still could not get the ids, what I have after hours of searching is I just a 'trick' of creating a classic pipeline adding the task and exporting it to YAML. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8098f527-ebdf-60d5-3989-5228b7a207c1 * Version Independent ID: ce27c817-9599-00ef-5af2-3ac1dbad8dc6 * Content: [Build and Release Tasks - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml) * Content Source: [docs/pipelines/process/tasks.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/tasks.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
need more information on how to get the fully qualified id of marketplace tasks in the custom tasks section when you mention marketplace tasks a more elaborate description on how to refer these tasks would be very helpful it isn t obvious at least for me to get the fully qualified name of a downloaded task extension a sentence or two on this topic would be very handy either here or in another part of the documentation accessed via a link btw i still could not get the ids what i have after hours of searching is i just a trick of creating a classic pipeline adding the task and exporting it to yaml document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ebdf version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
16,330
20,986,710,442
IssuesEvent
2022-03-29 04:32:04
keras-team/keras-cv
https://api.github.com/repos/keras-team/keras-cv
closed
All the KerasCV layers should register to tf.keras.utils.register_keras_serializable
preprocessing
This will allow layers to be serialized and deserialized correctly by keras framework, and enable save model etc.
1.0
All the KerasCV layers should register to tf.keras.utils.register_keras_serializable - This will allow layers to be serialized and deserialized correctly by keras framework, and enable save model etc.
process
all the kerascv layers should register to tf keras utils register keras serializable this will allow layers to be serialized and deserialized correctly by keras framework and enable save model etc
1
44,658
11,486,073,970
IssuesEvent
2020-02-11 09:13:25
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
undeclared inclusion(s) in rule when build from source
TF 2.1 stat:awaiting response subtype: ubuntu/linux type:build/install
**System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHEL 7.7 - TensorFlow installed from (source or binary): source - TensorFlow version: v2.1.0-rc2 - Python version: intel 3.6.9 - Installed using virtualenv? pip? conda?: conda - Bazel version (if compiling from source): 0.29.1 - GCC/Compiler version (if compiling from source): 7.3 - CUDA/cuDNN version: 10.0 or 10.1 (tested the boths) - GPU model and memory: Tesla K80 **Describe the problem** FAILED: Build did NOT complete successfully **Provide the exact sequence of commands / steps that you executed before running into the problem** #compiling from source cd /xxxx/install/tf2 git clone https://github.com/tensorflow/tensorflow cd tensorflow git checkout tags/v2.1.0-rc2 git checkout -b v2.1 conda activate tf2 source scl_source enable devtoolset-7 llvm-toolset-6.0 ./configure #input: /usr/bin/python3 #input: /opt/anaconda3/lib/python3.6/site-packages #sequence: ok=enter, ok, ok, ok, y, y, 10.0; 7.6.4; 6.0.1; ok #input: /usr/local/cuda-10.0,/usr/local/cuda-10.0/bin,/usr/local/cuda-10.0/lib64,/usr/local/cuda-10.0/include,/opt/TensorRT-6.0.1.5,/opt/TensorRT-6.0.1.5/bin,/opt/TensorRT-6.0.1.5/lib,/opt/TensorRT-6.0.1.5/include,/opt/rh/devtoolset-7/root/usr/bin,/opt/rh/devtoolset-7/root/usr/lib64,/opt/rh/devtoolset-7/root/usr/include,/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7,/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include #input: 3.5,3.7,6.0,7.0,7.5 #sequence: ok, ok, ok, ok bazel build --verbose_failures --config=v2 --config=opt --config=mkl --config=cuda //tensorflow/tools/pip_package:build_pip_package #ERROR IS HERE ... 
#cleaning for next try bazel clean --expunge --async rm -rf $HOME/.cache/bazel git checkout tags/v2.1.0-rc2 git branch -d v2.1 git checkout -b v2.1 **Any other info / logs** ERROR: /home/xxx/.cache/bazel/_bazel_xxx/26195a8102390a78bf7f86e6341cc9bb/external/boringssl/BUILD:130:1: undeclared inclusion(s) in rule '@boringssl//:crypto': this rule is missing dependency declarations for the following files included by 'external/boringssl/src/crypto/asn1/tasn_fre.c': '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stddef.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdarg.h' Target //tensorflow/tools/pip_package:build_pip_package failed to build ERROR: /home/xxx/install/tf2/tensorflow/tensorflow/lite/toco/python/BUILD:77:1 undeclared inclusion(s) in rule '@boringssl//:crypto': this rule is missing dependency declarations for the following files included by 'external/boringssl/src/crypto/asn1/tasn_fre.c': '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stddef.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdarg.h' INFO: Elapsed time: 61.246s, Critical Path: 3.70s INFO: 198 processes: 198 local. FAILED: Build did NOT complete successfully
1.0
undeclared inclusion(s) in rule when build from source - **System information** - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): RHEL 7.7 - TensorFlow installed from (source or binary): source - TensorFlow version: v2.1.0-rc2 - Python version: intel 3.6.9 - Installed using virtualenv? pip? conda?: conda - Bazel version (if compiling from source): 0.29.1 - GCC/Compiler version (if compiling from source): 7.3 - CUDA/cuDNN version: 10.0 or 10.1 (tested the boths) - GPU model and memory: Tesla K80 **Describe the problem** FAILED: Build did NOT complete successfully **Provide the exact sequence of commands / steps that you executed before running into the problem** #compiling from source cd /xxxx/install/tf2 git clone https://github.com/tensorflow/tensorflow cd tensorflow git checkout tags/v2.1.0-rc2 git checkout -b v2.1 conda activate tf2 source scl_source enable devtoolset-7 llvm-toolset-6.0 ./configure #input: /usr/bin/python3 #input: /opt/anaconda3/lib/python3.6/site-packages #sequence: ok=enter, ok, ok, ok, y, y, 10.0; 7.6.4; 6.0.1; ok #input: /usr/local/cuda-10.0,/usr/local/cuda-10.0/bin,/usr/local/cuda-10.0/lib64,/usr/local/cuda-10.0/include,/opt/TensorRT-6.0.1.5,/opt/TensorRT-6.0.1.5/bin,/opt/TensorRT-6.0.1.5/lib,/opt/TensorRT-6.0.1.5/include,/opt/rh/devtoolset-7/root/usr/bin,/opt/rh/devtoolset-7/root/usr/lib64,/opt/rh/devtoolset-7/root/usr/include,/opt/rh/devtoolset-7/root/usr/libexec/gcc/x86_64-redhat-linux/7,/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include #input: 3.5,3.7,6.0,7.0,7.5 #sequence: ok, ok, ok, ok bazel build --verbose_failures --config=v2 --config=opt --config=mkl --config=cuda //tensorflow/tools/pip_package:build_pip_package #ERROR IS HERE ... 
#cleaning for next try bazel clean --expunge --async rm -rf $HOME/.cache/bazel git checkout tags/v2.1.0-rc2 git branch -d v2.1 git checkout -b v2.1 **Any other info / logs** ERROR: /home/xxx/.cache/bazel/_bazel_xxx/26195a8102390a78bf7f86e6341cc9bb/external/boringssl/BUILD:130:1: undeclared inclusion(s) in rule '@boringssl//:crypto': this rule is missing dependency declarations for the following files included by 'external/boringssl/src/crypto/asn1/tasn_fre.c': '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stddef.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdarg.h' Target //tensorflow/tools/pip_package:build_pip_package failed to build ERROR: /home/xxx/install/tf2/tensorflow/tensorflow/lite/toco/python/BUILD:77:1 undeclared inclusion(s) in rule '@boringssl//:crypto': this rule is missing dependency declarations for the following files included by 'external/boringssl/src/crypto/asn1/tasn_fre.c': '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stddef.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdint.h' '/opt/rh/devtoolset-7/root/usr/lib/gcc/x86_64-redhat-linux/7/include/stdarg.h' INFO: Elapsed time: 61.246s, Critical Path: 3.70s INFO: 198 processes: 198 local. FAILED: Build did NOT complete successfully
non_process
undeclared inclusion s in rule when build from source system information os platform and distribution e g linux ubuntu rhel tensorflow installed from source or binary source tensorflow version python version intel installed using virtualenv pip conda conda bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version or tested the boths gpu model and memory tesla describe the problem failed build did not complete successfully provide the exact sequence of commands steps that you executed before running into the problem compiling from source cd xxxx install git clone cd tensorflow git checkout tags git checkout b conda activate source scl source enable devtoolset llvm toolset configure input usr bin input opt lib site packages sequence ok enter ok ok ok y y ok input usr local cuda usr local cuda bin usr local cuda usr local cuda include opt tensorrt opt tensorrt bin opt tensorrt lib opt tensorrt include opt rh devtoolset root usr bin opt rh devtoolset root usr opt rh devtoolset root usr include opt rh devtoolset root usr libexec gcc redhat linux opt rh devtoolset root usr lib gcc redhat linux include input sequence ok ok ok ok bazel build verbose failures config config opt config mkl config cuda tensorflow tools pip package build pip package error is here cleaning for next try bazel clean expunge async rm rf home cache bazel git checkout tags git branch d git checkout b any other info logs error home xxx cache bazel bazel xxx external boringssl build undeclared inclusion s in rule boringssl crypto this rule is missing dependency declarations for the following files included by external boringssl src crypto tasn fre c opt rh devtoolset root usr lib gcc redhat linux include stddef h opt rh devtoolset root usr lib gcc redhat linux include stdint h opt rh devtoolset root usr lib gcc redhat linux include stdarg h target tensorflow tools pip package build pip package failed to build error home xxx install tensorflow tensorflow 
lite toco python build undeclared inclusion s in rule boringssl crypto this rule is missing dependency declarations for the following files included by external boringssl src crypto tasn fre c opt rh devtoolset root usr lib gcc redhat linux include stddef h opt rh devtoolset root usr lib gcc redhat linux include stdint h opt rh devtoolset root usr lib gcc redhat linux include stdarg h info elapsed time critical path info processes local failed build did not complete successfully
0
7,904
11,089,585,822
IssuesEvent
2019-12-14 19:34:17
threefoldtech/jumpscaleX_core
https://api.github.com/repos/threefoldtech/jumpscaleX_core
closed
BCDB schema with Enum data type
process_wontfix
Our BCDB schema must be able to handle ```None``` data type instead of enforcing the first element in the Enum as a default value example : * I have Enum in our schema with the days of the week Sunday, Monday , Tuesday , ..... * I have an actor to handle the days' retrieval. * I want to retrieve all the days of the week by sending the days of the week enum as None. In the current implementation, it is impossible as the ```schema in``` will set a default value from the enum
1.0
BCDB schema with Enum data type - Our BCDB schema must be able to handle ```None``` data type instead of enforcing the first element in the Enum as a default value example : * I have Enum in our schema with the days of the week Sunday, Monday , Tuesday , ..... * I have an actor to handle the days' retrieval. * I want to retrieve all the days of the week by sending the days of the week enum as None. In the current implementation, it is impossible as the ```schema in``` will set a default value from the enum
process
bcdb schema with enum data type our bcdb schema must be able to handle none data type instead of enforcing the first element in the enum as a default value example i have enum in our schema with the days of the week sunday monday tuesday i have an actor to handle the days retrieval i want to retrieve all the days of the week by sending the days of the week enum as none in the current implementation it is impossible as the schema in will set a default value from the enum
1
16,678
21,780,901,470
IssuesEvent
2022-05-13 18:46:16
NVIDIA/open-gpu-kernel-modules
https://api.github.com/repos/NVIDIA/open-gpu-kernel-modules
closed
(suggestions) use a linter in your internal dev environment
process
I'm seeing dozens of tickets that could be avoided if Nvidia used a linter in their own internal dev environment. GitHub Actions could also be implemented to autocheck code for compliance, security issues, quality, etc. At my work we use cppcheck for static code analysis and it's been amazing so far. SonarQube/Lint is also an option as well as being open source/free.
1.0
(suggestions) use a linter in your internal dev environment - I'm seeing dozens of tickets that could be avoided if Nvidia used a linter in their own internal dev environment. GitHub Actions could also be implemented to autocheck code for compliance, security issues, quality, etc. At my work we use cppcheck for static code analysis and it's been amazing so far. SonarQube/Lint is also an option as well as being open source/free.
process
suggestions use a linter in your internal dev environment i m seeing dozens of tickets that could be avoided if nvidia used a linter in their own internal dev environment github actions could also be implemented to autocheck code for compliance security issues quality etc at my work we use cppcheck for static code analysis and it s been amazing so far sonarqube lint is also an option as well as being open source free
1
433,097
30,311,970,158
IssuesEvent
2023-07-10 13:22:29
gravitational/teleport
https://api.github.com/repos/gravitational/teleport
closed
Document which configuration parameters can be referenced as a file path
documentation c-wc
## Details Currently, the `auth_token` parameter can be passed as a string or file path but our configuration reference file only mentions the string method: https://goteleport.com/docs/setup/reference/config/ Request is to update the configuration reference file to include the file path method on any option(s) where this functionality is avaialable
1.0
Document which configuration parameters can be referenced as a file path - ## Details Currently, the `auth_token` parameter can be passed as a string or file path but our configuration reference file only mentions the string method: https://goteleport.com/docs/setup/reference/config/ Request is to update the configuration reference file to include the file path method on any option(s) where this functionality is avaialable
non_process
document which configuration parameters can be referenced as a file path details currently the auth token parameter can be passed as a string or file path but our configuration reference file only mentions the string method request is to update the configuration reference file to include the file path method on any option s where this functionality is avaialable
0
15,552
19,703,502,741
IssuesEvent
2022-01-12 19:07:57
googleapis/java-pubsublite-kafka
https://api.github.com/repos/googleapis/java-pubsublite-kafka
opened
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'pubsublite-kafka' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * release_level must be equal to one of the allowed values in .repo-metadata.json * api_shortname 'pubsublite-kafka' invalid in .repo-metadata.json ☝️ Once you correct these problems, you can close this issue. Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 release level must be equal to one of the allowed values in repo metadata json api shortname pubsublite kafka invalid in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
1
7,746
10,864,237,425
IssuesEvent
2019-11-14 16:30:21
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
[processing][needs-docs] force multipart output from GDAL-based dissolve algorithm (fix #20025)
3.4 Automatic new feature Processing Alg
Original commit: https://github.com/qgis/QGIS/commit/f8893d769b05dac137a8623863cecd27ad8fcf67 by nyalldawson (cherry picked from commit 32f6034be708b305ed4e19b4f6ade1a8b409993b)
1.0
[processing][needs-docs] force multipart output from GDAL-based dissolve algorithm (fix #20025) - Original commit: https://github.com/qgis/QGIS/commit/f8893d769b05dac137a8623863cecd27ad8fcf67 by nyalldawson (cherry picked from commit 32f6034be708b305ed4e19b4f6ade1a8b409993b)
process
force multipart output from gdal based dissolve algorithm fix original commit by nyalldawson cherry picked from commit
1
6,639
9,747,761,672
IssuesEvent
2019-06-03 15:01:16
EthVM/EthVM
https://api.github.com/repos/EthVM/EthVM
closed
Reconnectivity issue when fetching on complete Block traces
bug project:processing
* **I'm submitting a ...** - [x] bug report * **Bug Report** Here's a complete trace of the exception (feature/parity-processing-improvements): ``` [2019-06-01 ,888] ERROR ParitySourceTask - Exception detected (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) java.util.concurrent.TimeoutException at java.util.concurrent.FutureTask.get(FutureTask.java:205) at com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource.fetchRange(ParityFullBlockSource.kt:70) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:60) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:63) at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245) at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) [2019-06-01 ,890] ERROR WorkerSourceTask{id=parity-source-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) java.util.concurrent.TimeoutException at java.util.concurrent.FutureTask.get(FutureTask.java:205) at com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource.fetchRange(ParityFullBlockSource.kt:70) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:60) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:63) at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) [2019-06-01 ,891] ERROR WorkerSourceTask{id=parity-source-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) [2019-06-01 ,891] DEBUG Stopping ParitySourceTask (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) [2019-06-01 ,898] WARN [Consumer clientId=consumer-134, groupId=connect-postgres-balance-delta-sink] Error while fetching metadata with correlation id 53515 : {non_fungible_balance_delta=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient) io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer because it has already canceled/disposed the flow or the exception has nowhere to go to begin with. 
Further reading: https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling | null at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:367) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:83) at io.reactivex.internal.operators.flowable.FlowableFromObservable$SubscriberObserver.cancel(FlowableFromObservable.java:64) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.dispose(FlowableBufferTimed.java:537) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.cancel(FlowableBufferTimed.java:528) at io.reactivex.subscribers.SerializedSubscriber.cancel(SerializedSubscriber.java:197) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscriptions.SubscriptionHelper.cancel(SubscriptionHelper.java:189) at io.reactivex.internal.subscribers.LambdaSubscriber.cancel(LambdaSubscriber.java:119) at io.reactivex.internal.subscribers.LambdaSubscriber.dispose(LambdaSubscriber.java:104) at com.ethvm.kafka.connect.sources.web3.tracker.CanonicalChainTracker.stop(CanonicalChainTracker.kt:71) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.stop(AbstractParityEntitySource.kt:43) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.stop(ParitySourceTask.kt:45) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.tryStop(WorkerSourceTask.java:187) at org.apache.kafka.connect.runtime.WorkerSourceTask.close(WorkerSourceTask.java:151) at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:154) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:181) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.java_websocket.exceptions.WebsocketNotConnectedException at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:608) at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:585) at org.java_websocket.client.WebSocketClient.send(WebSocketClient.java:309) at org.web3j.protocol.websocket.WebSocketService.sendRequest(WebSocketService.java:165) at org.web3j.protocol.websocket.WebSocketService.sendAsync(WebSocketService.java:154) at org.web3j.protocol.websocket.WebSocketService.unsubscribeFromEventsStream(WebSocketService.java:402) at org.web3j.protocol.websocket.WebSocketService.closeSubscription(WebSocketService.java:395) at org.web3j.protocol.websocket.WebSocketService.lambda$subscribe$2(WebSocketService.java:369) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:80) ... 25 more Exception in thread "pool-9-thread-4" io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer because it has already canceled/disposed the flow or the exception has nowhere to go to begin with. 
Further reading: https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling | null at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:367) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:83) at io.reactivex.internal.operators.flowable.FlowableFromObservable$SubscriberObserver.cancel(FlowableFromObservable.java:64) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.dispose(FlowableBufferTimed.java:537) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.cancel(FlowableBufferTimed.java:528) at io.reactivex.subscribers.SerializedSubscriber.cancel(SerializedSubscriber.java:197) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscriptions.SubscriptionHelper.cancel(SubscriptionHelper.java:189) at io.reactivex.internal.subscribers.LambdaSubscriber.cancel(LambdaSubscriber.java:119) at io.reactivex.internal.subscribers.LambdaSubscriber.dispose(LambdaSubscriber.java:104) at com.ethvm.kafka.connect.sources.web3.tracker.CanonicalChainTracker.stop(CanonicalChainTracker.kt:71) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.stop(AbstractParityEntitySource.kt:43) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.stop(ParitySourceTask.kt:45) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.tryStop(WorkerSourceTask.java:187) at org.apache.kafka.connect.runtime.WorkerSourceTask.close(WorkerSourceTask.java:151) at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:154) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:181) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.java_websocket.exceptions.WebsocketNotConnectedException at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:608) at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:585) at org.java_websocket.client.WebSocketClient.send(WebSocketClient.java:309) at org.web3j.protocol.websocket.WebSocketService.sendRequest(WebSocketService.java:165) at org.web3j.protocol.websocket.WebSocketService.sendAsync(WebSocketService.java:154) at org.web3j.protocol.websocket.WebSocketService.unsubscribeFromEventsStream(WebSocketService.java:402) at org.web3j.protocol.websocket.WebSocketService.closeSubscription(WebSocketService.java:395) at org.web3j.protocol.websocket.WebSocketService.lambda$subscribe$2(WebSocketService.java:369) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:80) [2019-06-01 ,916] DEBUG Stopped ParitySourceTask (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) ```
1.0
Reconnectivity issue when fetching on complete Block traces - * **I'm submitting a ...** - [x] bug report * **Bug Report** Here's a complete trace of the exception (feature/parity-processing-improvements): ``` [2019-06-01 ,888] ERROR ParitySourceTask - Exception detected (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) java.util.concurrent.TimeoutException at java.util.concurrent.FutureTask.get(FutureTask.java:205) at com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource.fetchRange(ParityFullBlockSource.kt:70) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:60) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:63) at org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245) at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) [2019-06-01 ,890] ERROR WorkerSourceTask{id=parity-source-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask) java.util.concurrent.TimeoutException at java.util.concurrent.FutureTask.get(FutureTask.java:205) at com.ethvm.kafka.connect.sources.web3.sources.ParityFullBlockSource.fetchRange(ParityFullBlockSource.kt:70) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.poll(AbstractParityEntitySource.kt:60) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.poll(ParitySourceTask.kt:63) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.poll(WorkerSourceTask.java:245) at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:221) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) [2019-06-01 ,891] ERROR WorkerSourceTask{id=parity-source-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask) [2019-06-01 ,891] DEBUG Stopping ParitySourceTask (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) [2019-06-01 ,898] WARN [Consumer clientId=consumer-134, groupId=connect-postgres-balance-delta-sink] Error while fetching metadata with correlation id 53515 : {non_fungible_balance_delta=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient) io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer because it has already canceled/disposed the flow or the exception has nowhere to go to begin with. 
Further reading: https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling | null at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:367) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:83) at io.reactivex.internal.operators.flowable.FlowableFromObservable$SubscriberObserver.cancel(FlowableFromObservable.java:64) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.dispose(FlowableBufferTimed.java:537) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.cancel(FlowableBufferTimed.java:528) at io.reactivex.subscribers.SerializedSubscriber.cancel(SerializedSubscriber.java:197) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscriptions.SubscriptionHelper.cancel(SubscriptionHelper.java:189) at io.reactivex.internal.subscribers.LambdaSubscriber.cancel(LambdaSubscriber.java:119) at io.reactivex.internal.subscribers.LambdaSubscriber.dispose(LambdaSubscriber.java:104) at com.ethvm.kafka.connect.sources.web3.tracker.CanonicalChainTracker.stop(CanonicalChainTracker.kt:71) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.stop(AbstractParityEntitySource.kt:43) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.stop(ParitySourceTask.kt:45) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.tryStop(WorkerSourceTask.java:187) at org.apache.kafka.connect.runtime.WorkerSourceTask.close(WorkerSourceTask.java:151) at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:154) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:181) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.java_websocket.exceptions.WebsocketNotConnectedException at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:608) at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:585) at org.java_websocket.client.WebSocketClient.send(WebSocketClient.java:309) at org.web3j.protocol.websocket.WebSocketService.sendRequest(WebSocketService.java:165) at org.web3j.protocol.websocket.WebSocketService.sendAsync(WebSocketService.java:154) at org.web3j.protocol.websocket.WebSocketService.unsubscribeFromEventsStream(WebSocketService.java:402) at org.web3j.protocol.websocket.WebSocketService.closeSubscription(WebSocketService.java:395) at org.web3j.protocol.websocket.WebSocketService.lambda$subscribe$2(WebSocketService.java:369) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:80) ... 25 more Exception in thread "pool-9-thread-4" io.reactivex.exceptions.UndeliverableException: The exception could not be delivered to the consumer because it has already canceled/disposed the flow or the exception has nowhere to go to begin with. 
Further reading: https://github.com/ReactiveX/RxJava/wiki/What's-different-in-2.0#error-handling | null at io.reactivex.plugins.RxJavaPlugins.onError(RxJavaPlugins.java:367) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:83) at io.reactivex.internal.operators.flowable.FlowableFromObservable$SubscriberObserver.cancel(FlowableFromObservable.java:64) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.dispose(FlowableBufferTimed.java:537) at io.reactivex.internal.operators.flowable.FlowableBufferTimed$BufferExactBoundedSubscriber.cancel(FlowableBufferTimed.java:528) at io.reactivex.subscribers.SerializedSubscriber.cancel(SerializedSubscriber.java:197) at io.reactivex.internal.operators.flowable.FlowableOnBackpressureBuffer$BackpressureBufferSubscriber.cancel(FlowableOnBackpressureBuffer.java:151) at io.reactivex.internal.subscribers.BasicFuseableSubscriber.cancel(BasicFuseableSubscriber.java:158) at io.reactivex.internal.subscriptions.SubscriptionHelper.cancel(SubscriptionHelper.java:189) at io.reactivex.internal.subscribers.LambdaSubscriber.cancel(LambdaSubscriber.java:119) at io.reactivex.internal.subscribers.LambdaSubscriber.dispose(LambdaSubscriber.java:104) at com.ethvm.kafka.connect.sources.web3.tracker.CanonicalChainTracker.stop(CanonicalChainTracker.kt:71) at com.ethvm.kafka.connect.sources.web3.sources.AbstractParityEntitySource.stop(AbstractParityEntitySource.kt:43) at com.ethvm.kafka.connect.sources.web3.ParitySourceTask.stop(ParitySourceTask.kt:45) at 
org.apache.kafka.connect.runtime.WorkerSourceTask.tryStop(WorkerSourceTask.java:187) at org.apache.kafka.connect.runtime.WorkerSourceTask.close(WorkerSourceTask.java:151) at org.apache.kafka.connect.runtime.WorkerTask.doClose(WorkerTask.java:154) at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:181) at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.java_websocket.exceptions.WebsocketNotConnectedException at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:608) at org.java_websocket.WebSocketImpl.send(WebSocketImpl.java:585) at org.java_websocket.client.WebSocketClient.send(WebSocketClient.java:309) at org.web3j.protocol.websocket.WebSocketService.sendRequest(WebSocketService.java:165) at org.web3j.protocol.websocket.WebSocketService.sendAsync(WebSocketService.java:154) at org.web3j.protocol.websocket.WebSocketService.unsubscribeFromEventsStream(WebSocketService.java:402) at org.web3j.protocol.websocket.WebSocketService.closeSubscription(WebSocketService.java:395) at org.web3j.protocol.websocket.WebSocketService.lambda$subscribe$2(WebSocketService.java:369) at io.reactivex.internal.observers.DisposableLambdaObserver.dispose(DisposableLambdaObserver.java:80) [2019-06-01 ,916] DEBUG Stopped ParitySourceTask (com.ethvm.kafka.connect.sources.web3.ParitySourceTask) ```
process
reconnectivity issue when fetching on complete block traces i m submitting a bug report bug report here s a complete trace of the exception feature parity processing improvements error paritysourcetask exception detected com ethvm kafka connect sources paritysourcetask java util concurrent timeoutexception at java util concurrent futuretask get futuretask java at com ethvm kafka connect sources sources parityfullblocksource fetchrange parityfullblocksource kt at com ethvm kafka connect sources sources abstractparityentitysource poll abstractparityentitysource kt at com ethvm kafka connect sources paritysourcetask poll paritysourcetask kt at org apache kafka connect runtime workersourcetask poll workersourcetask java at org apache kafka connect runtime workersourcetask execute workersourcetask java at org apache kafka connect runtime workertask dorun workertask java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java error workersourcetask id parity source task threw an uncaught and unrecoverable exception org apache kafka connect runtime workertask java util concurrent timeoutexception at java util concurrent futuretask get futuretask java at com ethvm kafka connect sources sources parityfullblocksource fetchrange parityfullblocksource kt at com ethvm kafka connect sources sources abstractparityentitysource poll abstractparityentitysource kt at com ethvm kafka connect sources paritysourcetask poll paritysourcetask kt at org apache kafka connect runtime workersourcetask poll workersourcetask java at org apache kafka connect runtime workersourcetask execute workersourcetask java at org apache kafka connect runtime workertask dorun workertask 
java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java error workersourcetask id parity source task is being killed and will not recover until manually restarted org apache kafka connect runtime workertask debug stopping paritysourcetask com ethvm kafka connect sources paritysourcetask warn error while fetching metadata with correlation id non fungible balance delta unknown topic or partition org apache kafka clients networkclient io reactivex exceptions undeliverableexception the exception could not be delivered to the consumer because it has already canceled disposed the flow or the exception has nowhere to go to begin with further reading null at io reactivex plugins rxjavaplugins onerror rxjavaplugins java at io reactivex internal observers disposablelambdaobserver dispose disposablelambdaobserver java at io reactivex internal operators flowable flowablefromobservable subscriberobserver cancel flowablefromobservable java at io reactivex internal operators flowable flowableonbackpressurebuffer backpressurebuffersubscriber cancel flowableonbackpressurebuffer java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal operators flowable flowablebuffertimed bufferexactboundedsubscriber dispose flowablebuffertimed java at io reactivex internal operators flowable flowablebuffertimed bufferexactboundedsubscriber cancel flowablebuffertimed java at io reactivex subscribers serializedsubscriber cancel serializedsubscriber java at io reactivex internal operators flowable 
flowableonbackpressurebuffer backpressurebuffersubscriber cancel flowableonbackpressurebuffer java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal subscriptions subscriptionhelper cancel subscriptionhelper java at io reactivex internal subscribers lambdasubscriber cancel lambdasubscriber java at io reactivex internal subscribers lambdasubscriber dispose lambdasubscriber java at com ethvm kafka connect sources tracker canonicalchaintracker stop canonicalchaintracker kt at com ethvm kafka connect sources sources abstractparityentitysource stop abstractparityentitysource kt at com ethvm kafka connect sources paritysourcetask stop paritysourcetask kt at org apache kafka connect runtime workersourcetask trystop workersourcetask java at org apache kafka connect runtime workersourcetask close workersourcetask java at org apache kafka connect runtime workertask doclose workertask java at org apache kafka connect runtime workertask dorun workertask java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org java websocket exceptions websocketnotconnectedexception at org java websocket websocketimpl send websocketimpl java at org java websocket websocketimpl send websocketimpl java at org java websocket client websocketclient send websocketclient java at org protocol websocket websocketservice sendrequest websocketservice java at org protocol websocket websocketservice sendasync websocketservice java at org protocol websocket websocketservice unsubscribefromeventsstream websocketservice java at org protocol websocket websocketservice closesubscription 
websocketservice java at org protocol websocket websocketservice lambda subscribe websocketservice java at io reactivex internal observers disposablelambdaobserver dispose disposablelambdaobserver java more exception in thread pool thread io reactivex exceptions undeliverableexception the exception could not be delivered to the consumer because it has already canceled disposed the flow or the exception has nowhere to go to begin with further reading null at io reactivex plugins rxjavaplugins onerror rxjavaplugins java at io reactivex internal observers disposablelambdaobserver dispose disposablelambdaobserver java at io reactivex internal operators flowable flowablefromobservable subscriberobserver cancel flowablefromobservable java at io reactivex internal operators flowable flowableonbackpressurebuffer backpressurebuffersubscriber cancel flowableonbackpressurebuffer java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal operators flowable flowablebuffertimed bufferexactboundedsubscriber dispose flowablebuffertimed java at io reactivex internal operators flowable flowablebuffertimed bufferexactboundedsubscriber cancel flowablebuffertimed java at io reactivex subscribers serializedsubscriber cancel serializedsubscriber java at io reactivex internal operators flowable flowableonbackpressurebuffer backpressurebuffersubscriber cancel flowableonbackpressurebuffer java at io reactivex internal subscribers basicfuseablesubscriber cancel basicfuseablesubscriber java at io reactivex internal subscriptions subscriptionhelper cancel subscriptionhelper java at io reactivex internal subscribers lambdasubscriber cancel lambdasubscriber java at io reactivex internal subscribers lambdasubscriber dispose lambdasubscriber java at com ethvm kafka connect sources tracker canonicalchaintracker stop 
canonicalchaintracker kt at com ethvm kafka connect sources sources abstractparityentitysource stop abstractparityentitysource kt at com ethvm kafka connect sources paritysourcetask stop paritysourcetask kt at org apache kafka connect runtime workersourcetask trystop workersourcetask java at org apache kafka connect runtime workersourcetask close workersourcetask java at org apache kafka connect runtime workertask doclose workertask java at org apache kafka connect runtime workertask dorun workertask java at org apache kafka connect runtime workertask run workertask java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by org java websocket exceptions websocketnotconnectedexception at org java websocket websocketimpl send websocketimpl java at org java websocket websocketimpl send websocketimpl java at org java websocket client websocketclient send websocketclient java at org protocol websocket websocketservice sendrequest websocketservice java at org protocol websocket websocketservice sendasync websocketservice java at org protocol websocket websocketservice unsubscribefromeventsstream websocketservice java at org protocol websocket websocketservice closesubscription websocketservice java at org protocol websocket websocketservice lambda subscribe websocketservice java at io reactivex internal observers disposablelambdaobserver dispose disposablelambdaobserver java debug stopped paritysourcetask com ethvm kafka connect sources paritysourcetask
1
16,736
21,899,953,795
IssuesEvent
2022-05-20 12:29:42
camunda/zeebe-process-test
https://api.github.com/repos/camunda/zeebe-process-test
opened
Examples are hard to understand
kind/feature team/process-automation
**Description** All examples just extend an abstract class with the test implementation. As a new user looking for examples it's quite difficult to understand, especially if you look up the code on github directly in your browser. Easy examples without full featured DRY would be much better to learn from.
1.0
Examples are hard to understand - **Description** All examples just extend an abstract class with the test implementation. As a new user looking for examples it's quite difficult to understand, especially if you look up the code on github directly in your browser. Easy examples without full featured DRY would be much better to learn from.
process
examples are hard to understand description all examples just extend an abstract class with the test implementation as a new user looking for examples it s quite difficult to understand especially if you look up the code on github directly in your browser easy examples without full featured dry would be much better to learn from
1
13,916
16,675,844,554
IssuesEvent
2021-06-07 16:04:32
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
closed
Add Python 3.9 in the CI
process + tools
Since Python 3.9 is released: https://docs.python.org/3/whatsnew/3.9.html It would be good to start testing this in CI as early as we can, we should add something to mark this is **expected to fail**, but would be good to know what's failing.
1.0
Add Python 3.9 in the CI - Since Python 3.9 is released: https://docs.python.org/3/whatsnew/3.9.html It would be good to start testing this in CI as early as we can, we should add something to mark this is **expected to fail**, but would be good to know what's failing.
process
add python in the ci since python is released it would be good to start testing this in ci as early as we can we should add something to mark this is expected to fail but would be good to know what s failing
1
13,926
16,683,376,991
IssuesEvent
2021-06-08 04:26:24
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Wrong error output location when using Check validity on data with certain type of 3D error
Bug Processing
This issue is regarding the location of errors when using the `Vector->Geometry Tools->Check validity...` when having a polygon with different elevation in the start and the end point. 1. Add a feature to a vector layer containing the following geometry (I use the QuickWKT plugin): POLYGON((1 1 0, 1 2 1, 2 2 2, 2 1 3, 1 1 4)) _Note that this is not a closed ring but rather a spiral._ 2. Run the `Vector->Geometry Tools->Check validity` tool. 3. Select the layer containing the geometry and use `QGIS` as the method. 4. Run the tool using the default layer names. 5. Check the layer `Error output` which contains the locations of each error. 6. Even though the error is at coordinate (1,1) the reported location is at (0,0). ![screenshot](https://user-images.githubusercontent.com/6311686/120989919-5625ef00-c780-11eb-9cfe-f6a132fa8b0c.JPG) I use QGIS 3.18.3 on windows 10 x64.
1.0
Wrong error output location when using Check validity on data with certain type of 3D error - This issue is regarding the location of errors when using the `Vector->Geometry Tools->Check validity...` when having a polygon with different elevation in the start and the end point. 1. Add a feature to a vector layer containing the following geometry (I use the QuickWKT plugin): POLYGON((1 1 0, 1 2 1, 2 2 2, 2 1 3, 1 1 4)) _Note that this is not a closed ring but rather a spiral._ 2. Run the `Vector->Geometry Tools->Check validity` tool. 3. Select the layer containing the geometry and use `QGIS` as the method. 4. Run the tool using the default layer names. 5. Check the layer `Error output` which contains the locations of each error. 6. Even though the error is at coordinate (1,1) the reported location is at (0,0). ![screenshot](https://user-images.githubusercontent.com/6311686/120989919-5625ef00-c780-11eb-9cfe-f6a132fa8b0c.JPG) I use QGIS 3.18.3 on windows 10 x64.
process
wrong error output location when using check validity on data with certain type of error this issue is regarding the location of errors when using the vector geometry tools check validity when having a polygon with different elevation in the start and the end point add a feature to a vector layer containing the following geometry i use the quickwkt plugin polygon note that this is not a closed ring but rather a spiral run the vector geometry tools check validity tool select the layer containing the geometry and use qgis as the method run the tool using the default layer names check the layer error output which contains the locations of each error even though the error is at coordinate the reported location is at i use qgis on windows
1
87,164
10,880,728,302
IssuesEvent
2019-11-17 13:14:59
andrewfstratton/quando
https://api.github.com/repos/andrewfstratton/quando
opened
Add user levels
design enhancement usability
To be: - Beginner, startup, starting, **lite**, casual, intro - **advanced**, intermediate - Pro, **developer**, hacker, elite Also: - cloud - local - ?mobile? - ?hub?
1.0
Add user levels - To be: - Beginner, startup, starting, **lite**, casual, intro - **advanced**, intermediate - Pro, **developer**, hacker, elite Also: - cloud - local - ?mobile? - ?hub?
non_process
add user levels to be beginner startup starting lite casual intro advanced intermediate pro developer hacker elite also cloud local mobile hub
0
13,417
15,880,144,233
IssuesEvent
2021-04-09 13:21:48
digitalmethodsinitiative/4cat
https://api.github.com/repos/digitalmethodsinitiative/4cat
opened
'Get entites' processor crashes on certain words
processors
``` Processor get-entities raised KeyError while processing dataset 544e67d0dd470bef8c413ed5411a7e3e (via 55b055c3fc513ec8071632b1d077feb1) in get_entities.py:108->_serialize.py:136->doc.pyx:268->vocab.pyx:163->strings.pyx:132: "[E018] Can't retrieve string for hash '12398297404960798361'. This usually refers to an issue with the Vocab or StringStore." ``` Goes wrong at https://github.com/digitalmethodsinitiative/4cat/blob/master/processors/text-analysis/get_entities.py#L108
1.0
'Get entites' processor crashes on certain words - ``` Processor get-entities raised KeyError while processing dataset 544e67d0dd470bef8c413ed5411a7e3e (via 55b055c3fc513ec8071632b1d077feb1) in get_entities.py:108->_serialize.py:136->doc.pyx:268->vocab.pyx:163->strings.pyx:132: "[E018] Can't retrieve string for hash '12398297404960798361'. This usually refers to an issue with the Vocab or StringStore." ``` Goes wrong at https://github.com/digitalmethodsinitiative/4cat/blob/master/processors/text-analysis/get_entities.py#L108
process
get entites processor crashes on certain words processor get entities raised keyerror while processing dataset via in get entities py serialize py doc pyx vocab pyx strings pyx can t retrieve string for hash this usually refers to an issue with the vocab or stringstore goes wrong at
1
492,912
14,222,589,684
IssuesEvent
2020-11-17 17:04:14
NCI-Agency/anet
https://api.github.com/repos/NCI-Agency/anet
closed
Report bounced back to draft after submission
bug priority
**Description** After report submission the user receives an a-mail: > The date of your upcoming engagement into a draft report. You can find and edit it by going to the "My drafts" on the "My reports" page, by clicking here. > > ANET Support team which is requesting the user to re-submit the report. **Expected behavior** The report is not ending up as draft after submission without a clear issue.
1.0
Report bounced back to draft after submission - **Description** After report submission the user receives an a-mail: > The date of your upcoming engagement into a draft report. You can find and edit it by going to the "My drafts" on the "My reports" page, by clicking here. > > ANET Support team which is requesting the user to re-submit the report. **Expected behavior** The report is not ending up as draft after submission without a clear issue.
non_process
report bounced back to draft after submission description after report submission the user receives an a mail the date of your upcoming engagement into a draft report you can find and edit it by going to the my drafts on the my reports page by clicking here anet support team which is requesting the user to re submit the report expected behavior the report is not ending up as draft after submission without a clear issue
0
6,215
9,126,050,117
IssuesEvent
2019-02-24 18:35:12
rtcharity/eahub.org
https://api.github.com/repos/rtcharity/eahub.org
opened
Do we still want to use project boards?
Process
Would like to figure out what their role in our process is going forward.
1.0
Do we still want to use project boards? - Would like to figure out what their role in our process is going forward.
process
do we still want to use project boards would like to figure out what their role in our process is going forward
1
21,303
28,499,532,658
IssuesEvent
2023-04-18 16:17:01
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Release 6.1.2 - April 2023
P1 type: process release team-OSS
# Status of Bazel 6.1.2 - Expected release date: 2023-04-18 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/51) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.1.2, simply send a PR against the `release-6.1.2` branch. **Task list:** - [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. --> - [x] Push the release and notify package maintainers - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Release 6.1.2 - April 2023 - # Status of Bazel 6.1.2 - Expected release date: 2023-04-18 - [List of release blockers](https://github.com/bazelbuild/bazel/milestone/51) To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone. To cherry-pick a mainline commit into 6.1.2, simply send a PR against the `release-6.1.2` branch. **Task list:** - [x] Create [draft release announcement](https://docs.google.com/document/d/1pu2ARPweOCTxPsRR8snoDtkC9R51XWRyBXeiC6Ql5so/edit) <!-- Note that there should be a new Bazel Release Announcement document for every major release. For minor and patch releases, use the latest open doc. --> - [x] Push the release and notify package maintainers - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
release april status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list create push the release and notify package maintainers update the
1
8,578
10,563,772,308
IssuesEvent
2019-10-04 22:02:16
sass/libsass
https://api.github.com/repos/sass/libsass
closed
Parse pseudo selector arguments as declaration values
Compatibility - P3 Sass 3.5
Issue https://github.com/sass/sass/issues/2120 PR https://github.com/sass/sass/pull/2304 Spec sass/sass-spec#1137 Possibly related to #2383
True
Parse pseudo selector arguments as declaration values - Issue https://github.com/sass/sass/issues/2120 PR https://github.com/sass/sass/pull/2304 Spec sass/sass-spec#1137 Possibly related to #2383
non_process
parse pseudo selector arguments as declaration values issue pr spec sass sass spec possibly related to
0
488,800
14,086,567,749
IssuesEvent
2020-11-05 04:05:26
codidact/qpixel
https://api.github.com/repos/codidact/qpixel
opened
"Enter" from "add link" falls through to submitting the post
area: frontend complexity: unassessed priority: medium type: bug
https://software.codidact.com/questions/278393 > I think we have a UI problem with the "add link" dialog box in the editor: it doesn't capture enter keystrokes which then fall through to the submit button. > > I'm merrily typing a post when I realize that it would be improve by linking a phrase. So, I copy the link, highlight the text, click the Link button in the GUI, and the dialog box comes up > > So far all is well. > > I hot key for paste and then hit [enter]. > > At which point my partially complete post is submitted.
1.0
"Enter" from "add link" falls through to submitting the post - https://software.codidact.com/questions/278393 > I think we have a UI problem with the "add link" dialog box in the editor: it doesn't capture enter keystrokes which then fall through to the submit button. > > I'm merrily typing a post when I realize that it would be improve by linking a phrase. So, I copy the link, highlight the text, click the Link button in the GUI, and the dialog box comes up > > So far all is well. > > I hot key for paste and then hit [enter]. > > At which point my partially complete post is submitted.
non_process
enter from add link falls through to submitting the post i think we have a ui problem with the add link dialog box in the editor it doesn t capture enter keystrokes which then fall through to the submit button i m merrily typing a post when i realize that it would be improve by linking a phrase so i copy the link highlight the text click the link button in the gui and the dialog box comes up so far all is well i hot key for paste and then hit at which point my partially complete post is submitted
0
4,787
7,668,103,006
IssuesEvent
2018-05-14 03:13:37
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
run in script vs. in terminal
command-line options log-processing question
I installed goaccess today and after some testing I tried to parse multiple logs in while loop. If I run same command in terminal and in bash script I not get same result. In terminal I don't see any error and everything run fine, but in bash script I get error: ``` goaccess -p "/etc/goaccess.conf" -f "/var/log/nginx/vhost.fr.log.1" -o "/var/www/fr.html" - Parsed 10 lines producing the following errors: Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Format Errors - Verify your log/date/time format ``` If I run same command directly in terminal then everything is fine. Same goaccess config, same log ( rotated no changes ), same server. Once in script, once in terminal. Different result. Server is Ubuntu, parsed log is Nginx. Thanks for help.
1.0
run in script vs. in terminal - I installed goaccess today and after some testing I tried to parse multiple logs in while loop. If I run same command in terminal and in bash script I not get same result. In terminal I don't see any error and everything run fine, but in bash script I get error: ``` goaccess -p "/etc/goaccess.conf" -f "/var/log/nginx/vhost.fr.log.1" -o "/var/www/fr.html" - Parsed 10 lines producing the following errors: Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Token for '%h' specifier is NULL. Format Errors - Verify your log/date/time format ``` If I run same command directly in terminal then everything is fine. Same goaccess config, same log ( rotated no changes ), same server. Once in script, once in terminal. Different result. Server is Ubuntu, parsed log is Nginx. Thanks for help.
process
run in script vs in terminal i installed goaccess today and after some testing i tried to parse multiple logs in while loop if i run same command in terminal and in bash script i not get same result in terminal i don t see any error and everything run fine but in bash script i get error goaccess p etc goaccess conf f var log nginx vhost fr log o var www fr html parsed lines producing the following errors token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null token for h specifier is null format errors verify your log date time format if i run same command directly in terminal then everything is fine same goaccess config same log rotated no changes same server once in script once in terminal different result server is ubuntu parsed log is nginx thanks for help
1
4,389
7,278,793,095
IssuesEvent
2018-02-22 00:38:27
ODiogoSilva/TriFusion
https://api.github.com/repos/ODiogoSilva/TriFusion
closed
Upper case output sequences
enhancement process
Include the option to write TriSeq/TriFusion outputs in upper case.
1.0
Upper case output sequences - Include the option to write TriSeq/TriFusion outputs in upper case.
process
upper case output sequences include the option to write triseq trifusion outputs in upper case
1
11,733
14,576,704,603
IssuesEvent
2020-12-18 00:07:46
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
Add the ability to parse requests delimited by a pipe
bug enhancement log-processing log/date/time format
## 1.Nginx Access Log: ```log 120.193.204.43 | 183.61.177.66 | 17/Nov/2018:09:40:21 +0800 | - | GET | GET /Common/NextIssueCache/?v=1542418820912 HTTP/1.1 |200 | 981 | - | - | okhttp/2.4.0 |119.9.104.192:80 | 0.011 | 0.011 ``` ## 2.Goaccess Conf: ```log time-format %T date-format date-format %d/%b/%Y log_format %^ %h|%h|%d:%t %^ %m %U %H %^|%s|%b|%R|%u|%^|%T|%^ ``` ## 3.Parse Error: ```shell [root@cc access]# goaccess access.log -o report.html --log-format=COMBINED mproxy.obchy.com.access.log Parsed 10 lines producing the following errors: Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Token '117.34.13.90 | 17/Nov/2018' doesn't match specifier '%d' Token '183.61.177.78 | 17/Nov/2018' doesn't match specifier '%d' Token '157.255.25.12 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.86 | 17/Nov/2018' doesn't match specifier '%d' Token '58.211.2.72 | 17/Nov/2018' doesn't match specifier '%d' Token '183.61.177.66 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Token '117.34.13.90 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Format Errors - Verify your log/date/time format ``` why am I getting parsing error, even-though I use proper format ? Kindly give correct logformat to generate report without any parse error.
1.0
Add the ability to parse requests delimited by a pipe - ## 1.Nginx Access Log: ```log 120.193.204.43 | 183.61.177.66 | 17/Nov/2018:09:40:21 +0800 | - | GET | GET /Common/NextIssueCache/?v=1542418820912 HTTP/1.1 |200 | 981 | - | - | okhttp/2.4.0 |119.9.104.192:80 | 0.011 | 0.011 ``` ## 2.Goaccess Conf: ```log time-format %T date-format date-format %d/%b/%Y log_format %^ %h|%h|%d:%t %^ %m %U %H %^|%s|%b|%R|%u|%^|%T|%^ ``` ## 3.Parse Error: ```shell [root@cc access]# goaccess access.log -o report.html --log-format=COMBINED mproxy.obchy.com.access.log Parsed 10 lines producing the following errors: Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Token '117.34.13.90 | 17/Nov/2018' doesn't match specifier '%d' Token '183.61.177.78 | 17/Nov/2018' doesn't match specifier '%d' Token '157.255.25.12 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.86 | 17/Nov/2018' doesn't match specifier '%d' Token '58.211.2.72 | 17/Nov/2018' doesn't match specifier '%d' Token '183.61.177.66 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Token '117.34.13.90 | 17/Nov/2018' doesn't match specifier '%d' Token '183.232.51.140 | 17/Nov/2018' doesn't match specifier '%d' Format Errors - Verify your log/date/time format ``` why am I getting parsing error, even-though I use proper format ? Kindly give correct logformat to generate report without any parse error.
process
add the ability to parse requests delimited by a pipe nginx access log: log nov get get common nextissuecache v http okhttp goaccess conf log time format t date format date format d b y log format h h d t m u h s b r u t parse error shell goaccess access log o report html log format combined mproxy obchy com access log parsed lines producing the following errors token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d token nov doesn t match specifier d format errors verify your log date time format why am i getting parsing error even though i use proper format kindly give correct logformat to generate report without any parse error
1
18,462
24,549,701,074
IssuesEvent
2022-10-12 11:37:41
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] Admins > Search user by name or email ID > Getting 'No records found' when searched admin with their email ID
Bug P1 Participant manager Process: Fixed Process: Tested dev
**Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Click on 'Add new admin' button and add admin in the application 4. Search that admin with email ID and Verify **AR:** Getting 'No records found' when searched admin with their email ID **ER:** When searched admin with their email ID, their record should get displayed ![PMIssue](https://user-images.githubusercontent.com/86007179/173840744-8bcfda19-66e4-41e8-84c3-168dcf04418e.png)
2.0
[PM] Admins > Search user by name or email ID > Getting 'No records found' when searched admin with their email ID - **Steps:** 1. Login to PM 2. Click on 'Admins' tab 3. Click on 'Add new admin' button and add admin in the application 4. Search that admin with email ID and Verify **AR:** Getting 'No records found' when searched admin with their email ID **ER:** When searched admin with their email ID, their record should get displayed ![PMIssue](https://user-images.githubusercontent.com/86007179/173840744-8bcfda19-66e4-41e8-84c3-168dcf04418e.png)
process
admins search user by name or email id getting no records found when searched admin with their email id steps login to pm click on admins tab click on add new admin button and add admin in the application search that admin with email id and verify ar getting no records found when searched admin with their email id er when searched admin with their email id their record should get displayed
1
46,068
18,943,732,247
IssuesEvent
2021-11-18 07:46:36
hashicorp/terraform-provider-azurerm
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
closed
add the `ip_version` attribute to azurerm_public_ip_prefix
enhancement service/public-ip
### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform (and AzureRM Provider) Version ``` me@here:~/github/terraform-azurerm-msdn-enrolment (develop/deployed) $ terraform -v Terraform v0.12.28 + provider.azurerm v2.17.0 ``` ### Affected Resource(s) * `azurerm_public_ip_prefix` ### Terraform Configuration Files ```hcl # https://www.terraform.io/docs/providers/azurerm/r/public_ip_prefix.html resource "azurerm_public_ip_prefix" "terraform-azurerm-msdn-enrolment-vnet-prfx" { location = var.LOCATION name = "${var.PREFIX}${var.ENVIRONMENT}-vnet-prfx" resource_group_name = azurerm_resource_group.terraform-azurerm-msdn-enrolment.name ip_version = "IPv6" } ``` ### Debug Output ``` me@here:~/github/terraform-azurerm-msdn-enrolment (develop/deployed) $ pre-commit run -a Terraform fmt............................................................Passed Terraform validate.......................................................Failed - hook id: terraform_validate - exit code: 1 Error: Unsupported argument on vnet.tf line 24, in resource "azurerm_public_ip_prefix" "terraform-azurerm-msdn-enrolment-vnet-prfx": 24: ip_version = "IPv6" An argument named "ip_version" is not expected here. ``` ### Panic Output Not a panic ### Expected Behavior Terraform should provision an IPv6 prefix ### Actual Behavior See output above. ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. create the underlying RG, vNet and Subnet resources with standard IPv4 support. 1. add the `azurerm_public_ip_prefix` resource. 1. 
`terraform init` 1. witness above debug output ### Important Factoids It's entirely possible Im just going about this the wrong way. Our production environment has IPv6 support on its roadmap now and im getting in early with some simple prototyping. Happy to take advice/feedback. ### References nil current
1.0
add the `ip_version` attribute to azurerm_public_ip_prefix - ### Community Note * Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment ### Terraform (and AzureRM Provider) Version ``` me@here:~/github/terraform-azurerm-msdn-enrolment (develop/deployed) $ terraform -v Terraform v0.12.28 + provider.azurerm v2.17.0 ``` ### Affected Resource(s) * `azurerm_public_ip_prefix` ### Terraform Configuration Files ```hcl # https://www.terraform.io/docs/providers/azurerm/r/public_ip_prefix.html resource "azurerm_public_ip_prefix" "terraform-azurerm-msdn-enrolment-vnet-prfx" { location = var.LOCATION name = "${var.PREFIX}${var.ENVIRONMENT}-vnet-prfx" resource_group_name = azurerm_resource_group.terraform-azurerm-msdn-enrolment.name ip_version = "IPv6" } ``` ### Debug Output ``` me@here:~/github/terraform-azurerm-msdn-enrolment (develop/deployed) $ pre-commit run -a Terraform fmt............................................................Passed Terraform validate.......................................................Failed - hook id: terraform_validate - exit code: 1 Error: Unsupported argument on vnet.tf line 24, in resource "azurerm_public_ip_prefix" "terraform-azurerm-msdn-enrolment-vnet-prfx": 24: ip_version = "IPv6" An argument named "ip_version" is not expected here. ``` ### Panic Output Not a panic ### Expected Behavior Terraform should provision an IPv6 prefix ### Actual Behavior See output above. ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. create the underlying RG, vNet and Subnet resources with standard IPv4 support. 1. 
add the `azurerm_public_ip_prefix` resource. 1. `terraform init` 1. witness above debug output ### Important Factoids It's entirely possible Im just going about this the wrong way. Our production environment has IPv6 support on its roadmap now and im getting in early with some simple prototyping. Happy to take advice/feedback. ### References nil current
non_process
add the ip version attribute to azurerm public ip prefix community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version me here github terraform azurerm msdn enrolment develop deployed terraform v terraform provider azurerm affected resource s azurerm public ip prefix terraform configuration files hcl resource azurerm public ip prefix terraform azurerm msdn enrolment vnet prfx location var location name var prefix var environment vnet prfx resource group name azurerm resource group terraform azurerm msdn enrolment name ip version debug output me here github terraform azurerm msdn enrolment develop deployed pre commit run a terraform fmt passed terraform validate failed hook id terraform validate exit code error unsupported argument on vnet tf line in resource azurerm public ip prefix terraform azurerm msdn enrolment vnet prfx ip version an argument named ip version is not expected here panic output not a panic expected behavior terraform should provision an prefix actual behavior see output above steps to reproduce create the underlying rg vnet and subnet resources with standard support add the azurerm public ip prefix resource terraform init witness above debug output important factoids it s entirely possible im just going about this the wrong way our production environment has support on its roadmap now and im getting in early with some simple prototyping happy to take advice feedback references nil current
0
3,474
6,552,183,536
IssuesEvent
2017-09-05 17:18:17
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
Support postprocessing to bundle FirefoxOS Web app
enhancement postprocessing
I've recently been using LaTeXML and bash scripts to generate (packaged) FirefoxOS Web apps. Such apps are essentially: - Web pages - a manifest (http://cermat.org/events/MathUI/14/proceedings/FirefoxOS-for-Science/demos/app/manifest.webapp) - everything zipped in one archive (e.g. http://www.maths-informatique-jeux.com/webapps/mathui-demos/package.zip) - that's all! They can be installed on FirefoxOS, Android and Desktop: http://www.maths-informatique-jeux.com/webapps/mathui-demos/ (in general apps are available from https://marketplace.firefox.com/). So I believe that should be quite easy to use the work done for EPUB to support FirefoxOS Web apps, and that would be helpful for people willing to publish some apps in the "books" category of the FirefoxOS marketplace. Some issues I noticed: - Links in LaTeXML to the main index have the form "./" (without the page name), which shows the directory content in FirefoxOS web apps (similar to what happens for local files) and is probably not what people want. A solution is to use the form "./index.hml" (or "./index.xhml") instead. - Web apps don't have a "back" button, so there should not be a page where the user is "blocked" (this is a requirement for the app to pass the MarketPlace review). By default, LaTeXML has good navigation links, so that's not a problem. However, it is a problem for external links (for example the "generated by LaTeXML" link or other links with the href package). A solution proposed by the reviewer is to use target="_blank" on external links, so that they will be opened in the browser. Other optional things: - Would be good to be able to bundle Web fonts too (#461) - Have some kind of "mobile UI". 
FYI, here is what I use: @namespace "http://www.w3.org/1999/xhtml"; /* Update LaTeXML css style to get some kind of "mobile UI" */ body { margin: 0; background: linear-gradient(45deg, #b50, #e80); } .ltx_page_main { padding: 0; } .ltx_page_header, .ltx_page_footer { border: 0; } .ltx_page_header a[rel="up"] { font-size: 250%; } .ltx_page_header a[rel!="up"] { font-size: 0.8em; } .ltx_page_header a { color: white !important; } .ltx_page_footer a { color: white !important; } .ltx_page_logo { display: none; } .ltx_page_content { background: #ffc; padding: .5em; } /* Links */ a, a:visited { color: blue; } a:hover { color: red; } - support receipt verification for paid app (https://developer.mozilla.org/en-US/Marketplace/Monetization/Validating_a_receipt).
1.0
Support postprocessing to bundle FirefoxOS Web app - I've recently been using LaTeXML and bash scripts to generate (packaged) FirefoxOS Web apps. Such apps are essentially: - Web pages - a manifest (http://cermat.org/events/MathUI/14/proceedings/FirefoxOS-for-Science/demos/app/manifest.webapp) - everything zipped in one archive (e.g. http://www.maths-informatique-jeux.com/webapps/mathui-demos/package.zip) - that's all! They can be installed on FirefoxOS, Android and Desktop: http://www.maths-informatique-jeux.com/webapps/mathui-demos/ (in general apps are available from https://marketplace.firefox.com/). So I believe that should be quite easy to use the work done for EPUB to support FirefoxOS Web apps, and that would be helpful for people willing to publish some apps in the "books" category of the FirefoxOS marketplace. Some issues I noticed: - Links in LaTeXML to the main index have the form "./" (without the page name), which shows the directory content in FirefoxOS web apps (similar to what happens for local files) and is probably not what people want. A solution is to use the form "./index.hml" (or "./index.xhml") instead. - Web apps don't have a "back" button, so there should not be a page where the user is "blocked" (this is a requirement for the app to pass the MarketPlace review). By default, LaTeXML has good navigation links, so that's not a problem. However, it is a problem for external links (for example the "generated by LaTeXML" link or other links with the href package). A solution proposed by the reviewer is to use target="_blank" on external links, so that they will be opened in the browser. Other optional things: - Would be good to be able to bundle Web fonts too (#461) - Have some kind of "mobile UI". 
FYI, here is what I use: @namespace "http://www.w3.org/1999/xhtml"; /* Update LaTeXML css style to get some kind of "mobile UI" */ body { margin: 0; background: linear-gradient(45deg, #b50, #e80); } .ltx_page_main { padding: 0; } .ltx_page_header, .ltx_page_footer { border: 0; } .ltx_page_header a[rel="up"] { font-size: 250%; } .ltx_page_header a[rel!="up"] { font-size: 0.8em; } .ltx_page_header a { color: white !important; } .ltx_page_footer a { color: white !important; } .ltx_page_logo { display: none; } .ltx_page_content { background: #ffc; padding: .5em; } /* Links */ a, a:visited { color: blue; } a:hover { color: red; } - support receipt verification for paid app (https://developer.mozilla.org/en-US/Marketplace/Monetization/Validating_a_receipt).
process
support postprocessing to bundle firefoxos web app i ve recently been using latexml and bash scripts to generate packaged firefoxos web apps such apps are essentially web pages a manifest everything zipped in one archive e g that s all they can be installed on firefoxos android and desktop in general apps are available from so i believe that should be quite easy to use the work done for epub to support firefoxos web apps and that would be helpful for people willing to publish some apps in the books category of the firefoxos marketplace some issues i noticed links in latexml to the main index have the form without the page name which shows the directory content in firefoxos web apps similar to what happens for local files and is probably not what people want a solution is to use the form index hml or index xhml instead web apps don t have a back button so there should not be a page where the user is blocked this is a requirement for the app to pass the marketplace review by default latexml has good navigation links so that s not a problem however it is a problem for external links for example the generated by latexml link or other links with the href package a solution proposed by the reviewer is to use target blank on external links so that they will be opened in the browser other optional things would be good to be able to bundle web fonts too have some kind of mobile ui fyi here is what i use namespace update latexml css style to get some kind of mobile ui body margin background linear gradient ltx page main padding ltx page header ltx page footer border ltx page header a font size ltx page header a font size ltx page header a color white important ltx page footer a color white important ltx page logo display none ltx page content background ffc padding links a a visited color blue a hover color red support receipt verification for paid app
1
81,914
23,613,663,001
IssuesEvent
2022-08-24 14:17:10
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
gnomeExtensions.freon fails to build successfully
0.kind: build failure
### Steps To Reproduce Steps to reproduce the behavior: 1. build *X* ### Build log ``` error: builder for '/nix/store/x4nqrz2zl51sxmgvdds7v5frw56zx150-gnome-shell-extension-freon-48.drv' failed with exit code 1 error: 1 dependencies of derivation '/nix/store/6mpsjbp6jgxv9qlfk629m4sglannppb2-system-path.drv' failed to build error: 1 dependencies of derivation '/nix/store/ibn0484j5n1dx0rjr13jci2pfhjqdqyj-nixos-system-nixos-22.05.2609.52527082ea2.drv' failed to build ``` ### Additional context Add any other context about the problem here. ### Notify maintainers @piegamesde <!-- Please @ people who are in the `meta.maintainers` list of the offending package or module. If in doubt, check `git blame` for whoever last touched something. --> ### Metadata Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result. ```console [user@system:~]$ nix-shell -p nix-info --run "nix-info -m" - system: `"x86_64-linux"` - host os: `Linux 5.18.17, NixOS, 22.05 (Quokka), 22.05.2609.52527082ea2` - multi-user?: `yes` - sandbox: `yes` - version: `nix-env (Nix) 2.8.1` - channels(root): `"nixos-22.05, nixos-unstable"` - nixpkgs: `/nix/var/nix/profiles/per-user/root/channels/nixos` ```
1.0
gnomeExtensions.freon fails to build successfully - ### Steps To Reproduce Steps to reproduce the behavior: 1. build *X* ### Build log ``` error: builder for '/nix/store/x4nqrz2zl51sxmgvdds7v5frw56zx150-gnome-shell-extension-freon-48.drv' failed with exit code 1 error: 1 dependencies of derivation '/nix/store/6mpsjbp6jgxv9qlfk629m4sglannppb2-system-path.drv' failed to build error: 1 dependencies of derivation '/nix/store/ibn0484j5n1dx0rjr13jci2pfhjqdqyj-nixos-system-nixos-22.05.2609.52527082ea2.drv' failed to build ``` ### Additional context Add any other context about the problem here. ### Notify maintainers @piegamesde <!-- Please @ people who are in the `meta.maintainers` list of the offending package or module. If in doubt, check `git blame` for whoever last touched something. --> ### Metadata Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result. ```console [user@system:~]$ nix-shell -p nix-info --run "nix-info -m" - system: `"x86_64-linux"` - host os: `Linux 5.18.17, NixOS, 22.05 (Quokka), 22.05.2609.52527082ea2` - multi-user?: `yes` - sandbox: `yes` - version: `nix-env (Nix) 2.8.1` - channels(root): `"nixos-22.05, nixos-unstable"` - nixpkgs: `/nix/var/nix/profiles/per-user/root/channels/nixos` ```
non_process
gnomeextensions freon fails to build successfully steps to reproduce steps to reproduce the behavior build x build log error builder for nix store gnome shell extension freon drv failed with exit code error dependencies of derivation nix store system path drv failed to build error dependencies of derivation nix store nixos system nixos drv failed to build additional context add any other context about the problem here notify maintainers piegamesde please people who are in the meta maintainers list of the offending package or module if in doubt check git blame for whoever last touched something metadata please run nix shell p nix info run nix info m and paste the result console nix shell p nix info run nix info m system linux host os linux nixos quokka multi user yes sandbox yes version nix env nix channels root nixos nixos unstable nixpkgs nix var nix profiles per user root channels nixos
0
171,120
27,064,462,629
IssuesEvent
2023-02-13 22:42:52
angelolab/toffy
https://api.github.com/repos/angelolab/toffy
opened
Allow users to run subsequent rounds of Rosetta for specific channel
design_doc
**Relevant background** For specific channel crosstalk smoothing, Rosetta will need to be run again after the first round. We'll refer to this as Rosetta V2, and the first round as Rosetta V1. Rosetta V2 will need additional custom logic to accommodate this. **Design overview** **Note that this process will be run using the original test set from V1**. The following accommodations will need to be made for V2: 1. The test image generator will need to receive a custom input and output channel name. This differs from the V1 implementation where it's assumed the default input channel is `'Noodle'` and compensation will be done against all other channels as output. 2. The tiled comparison function will also need to receive the custom output channel defined in the previous point. This differs from V1 in that it's assumed that all the test data channels are used; it is not currently possible to limit this to just one channel. 3. The image compensation process needs to receive a Gaussian radius of 0 and a normalization constant of 1. This differs from V1, where currently the default values are a Gaussian radius of 1 and a norm constant of 200. For V2, these params can be easily set for the compensation function itself. However, the test image generation function will also need to explicitly take in these parameters since it calls the compensation function under the hood. 4. The function that adds the source channel to the tiled image needs to now explicitly receive the 'rescaled' folder as an image sub folder. This differs from V1, where no image sub folder was needed. 5. V2 should not call the function that adds the source channel to the tiled image (whereas V1 needs this). This is because for V2, the images have already been passed through Rosetta once. 6. The default functionality for setting the run names to use should now be to programmatically list them out. 
Users should still have an option to manually set this, but it should be commented out with clear instructions on what to do if this is desired (similar to how we handle manually setting lists of FOVs for the segmentation notebook in `ark`). 7. The "official" call to the compensation function should now take the rescaled folder itself as a data sub folder. This differs from V1, where the call to the final compensation function has no data sub folder specified. **Code mockup** _`4a_compensate_image_data`_ The structure of this notebook will need to be modified to accommodate V2. While certain changes will need to be made, a lot of the underlying logic will remain the same. In section 2, either inside or directly underneath the cell that sets the channel name, multipliers, and folder name, we should allow the user to specify the following: - `gaus_rad` (defaults to 1) - `norm_const` (defaults to 200) - `output_channels` (defaults to `None`) - `rosetta_sub_folder` (defaults to `''`) In this way, we can explicitly pass these arguments to their respective functions so that Rosetta V2 can run seamlessly. We should guide the users towards the values they need to set for V2 in the documentation. When listing out the run names, the default should now be to use `os.listdir(path/to/run/folders)`, although we can provide a comment describing how the user can manually set this if needed. _`rosetta.generate_rosetta_test_imgs`_ This function now needs to explicitly receive `gaus_rad` and `norm_const` params so it can pass them to `compensate_image_data`. _Testing_ This should mostly remain the same, as we're not changing the underlying functionality of Rosetta, just utilizing parameters that have otherwise been for niche use cases. **Required inputs** For V2, the user will need to explicitly specify the parameters defined in the `Code mockup` section (`gaus_rad`, `norm_const`, `output_channels`, and `rosetta_sub_folder`). **Output files** Same as before. 
**Timeline** Give a rough estimate for how long you think the project will take. In general, it's better to be too conservative rather than too optimistic. - [ ] A couple days - [ ] A week - [X] Multiple weeks. For large projects, make sure to agree on a plan that isn't just a single monster PR at the end. Estimated date when a fully implemented version will be ready for review: 02/25/23 Estimated date when the finalized project will be merged in: 03/02/23
1.0
Allow users to run subsequent rounds of Rosetta for specific channel - **Relevant background** For specific channel crosstalk smoothing, Rosetta will need to be run subsequently after the first round. We'll refer to this as Rosetta V2, and the first round as Rosetta V1. Rosetta V2 will need additional custom logic to accommodate this. **Design overview** **Note that this process will be run using the original test set from V1**. The following accommodations will need to be made for V2: 1. The test image generator and will need to receive a custom input and output channel name. This differs from the V1 implementation where it's assumed the default input channel is `'Noodle'` and compensation will be done against all other channels as output. 2. The tiled comparison function will also need to receive the custom output channel defined in the previous point. This differs from V1 in that its assumed that all the test data channels are used; it is not currently possible to limit this to just one channel. 3. The image compensation process needs to receive a Gaussian radius of 0 and a normalization constant of 1. This differs from V1 where currently, the default values are a Gaussian radius of 1 and a norm constant of 200. For V2, these params can be easily set for the compensation function itself. However, the test image generation function will also need to explicitly take in these parameters since it calls the compensation function under the hood. 4. The function that adds the source channel to the tiled image needs to now explicitly receive the 'rescaled' folder as an image sub folder. This differs from V1, where no image sub folder was needed. 5. V2 should not call the function that adds the source channel to the tile image (whereas V1 needs this). This is because for V2, the images have already been passed through Rosetta once. 6. The default functionality for setting the run names to use should now be to programmatically list them out. 
Users should still have an option to manually set this, but it should be commented out with clear instructions on what to do if this is desired (similar to how we handle manually setting lists of FOVs for the segmentation notebook in `ark`). 7. The "official" call to the compensation function should now take the rescaled folder itself as a data sub folder. This differs from V1, where the call to the final compensation function has no data sub folder specified. **Code mockup** _`4a_compensate_image_data`_ The structure of this notebook will need to be modified to accommodate V2. While certain changes will need to be made, a lot of the underlying logic will remain the same. In section 2, either inside or directly underneath the cell that sets the channel name, multipliers, and folder name, we should allow the user to specify the following: - `gaus_rad` (defaults to 1) - `norm_const` (defaults to 200) - `output_channels` (defaults to `None`) - `rosetta_sub_folder` (defaults to `''`) In this way, we can explicitly pass these arguments to their respective functions so that Rosetta V2 can run seamlessly. We should guide the users towards the values they need to set for V2 in the documentation. When listing out the run names, the default should now be to use `os.listdir(path/to/run/folders)`, although we can provide a comment describing how the user can manually set this if needed. _`rosetta.generate_rosetta_test_imgs`_ This function now needs to explicitly receive `gaus_rad` and `norm_const` params so it can pass them to `compensate_image_data`. _Testing_ This should mostly remain the same, as we're not changing the underlying functionality of Rosetta, just utilizing parameters that have otherwise been for niche use cases. **Required inputs** For V2, the user will need to explicitly specify the parameters defined in the `Code mockup` section (`gaus_rad`, `norm_const`, `output_channels`, and `rosetta_sub_folder`). **Output files** Same as before. 
**Timeline** Give a rough estimate for how long you think the project will take. In general, it's better to be too conservative rather than too optimistic. - [ ] A couple days - [ ] A week - [X] Multiple weeks. For large projects, make sure to agree on a plan that isn't just a single monster PR at the end. Estimated date when a fully implemented version will be ready for review: 02/25/23 Estimated date when the finalized project will be merged in: 03/02/23
non_process
allow users to run subsequent rounds of rosetta for specific channel relevant background for specific channel crosstalk smoothing rosetta will need to be run subsequently after the first round we ll refer to this as rosetta and the first round as rosetta rosetta will need additional custom logic to accommodate this design overview note that this process will be run using the original test set from the following accommodations will need to be made for the test image generator and will need to receive a custom input and output channel name this differs from the implementation where it s assumed the default input channel is noodle and compensation will be done against all other channels as output the tiled comparison function will also need to receive the custom output channel defined in the previous point this differs from in that its assumed that all the test data channels are used it is not currently possible to limit this to just one channel the image compensation process needs to receive a gaussian radius of and a normalization constant of this differs from where currently the default values are a gaussian radius of and a norm constant of for these params can be easily set for the compensation function itself however the test image generation function will also need to explicitly take in these parameters since it calls the compensation function under the hood the function that adds the source channel to the tiled image needs to now explicitly receive the rescaled folder as an image sub folder this differs from where no image sub folder was needed should not call the function that adds the source channel to the tile image whereas needs this this is because for the images have already been passed through rosetta once the default functionality for setting the run names to use should now be to programmatically list them out users should still have an option to manually set this but it should be commented out with clear instructions on what to do if this is desired 
similar to how we handle manually setting lists of fovs for the segmentation notebook in ark the official call to the compensation function should now take the rescaled folder itself as a data sub folder this differs from where the call to the final compensation function has no data sub folder specified code mockup compensate image data the structure of this notebook will need to be modified to accommodate while certain changes will need to be made a lot of the underlying logic will remain the same in section either inside or directly underneath the cell that sets the channel name multipliers and folder name we should allow the user to specify the following gaus rad defaults to norm const defaults to output channels defaults to none rosetta sub folder defaults to in this way we can explicitly pass these arguments to their respective functions so that rosetta can run seamlessly we should guide the users towards the values they need to set for in the documentation when listing out the run names the default should now be to use os listdir path to run folders although we can provide a comment describing how the user can manually set this if needed rosetta generate rosetta test imgs this function now needs to explicitly receive gaus rad and norm const params so it can pass them to compensate image data testing this should mostly remain the same as we re not changing the underlying functionality of rosetta just utilizing parameters that have otherwise been for niche use cases required inputs for the user will need to explicitly specify the parameters defined in the code mockup section gaus rad norm const output channels and rosetta sub folder output files same as before timeline give a rough estimate for how long you think the project will take in general it s better to be too conservative rather than too optimistic a couple days a week multiple weeks for large projects make sure to agree on a plan that isn t just a single monster pr at the end estimated date when a 
fully implemented version will be ready for review estimated date when the finalized project will be merged in
0
231,752
7,643,000,017
IssuesEvent
2018-05-08 11:09:59
HGustavs/LenaSYS
https://api.github.com/repos/HGustavs/LenaSYS
opened
DuggaED: Sorting resets when creating a new test
Grupp 3 (2018) Grupp 3 (2018) Dugga-Editor highPriority
At the moment when creating a new test while the table is sorted, the new entry resets the sorting back to how it was before sorting it. ## Before new test <img width="1434" alt="skarmavbild 2018-05-08 kl 1 08 53 em" src="https://user-images.githubusercontent.com/37795608/39754003-04e57200-52c1-11e8-8fcd-79028e026590.png"> ## After new test <img width="1431" alt="skarmavbild 2018-05-08 kl 1 09 02 em" src="https://user-images.githubusercontent.com/37795608/39754011-0a48f244-52c1-11e8-899c-69a798ea16cc.png">
1.0
DuggaED: Sorting resets when creating a new test - At the moment when creating a new test while the table is sorted, the new entry resets the sorting back to how it was before sorting it. ## Before new test <img width="1434" alt="skarmavbild 2018-05-08 kl 1 08 53 em" src="https://user-images.githubusercontent.com/37795608/39754003-04e57200-52c1-11e8-8fcd-79028e026590.png"> ## After new test <img width="1431" alt="skarmavbild 2018-05-08 kl 1 09 02 em" src="https://user-images.githubusercontent.com/37795608/39754011-0a48f244-52c1-11e8-899c-69a798ea16cc.png">
non_process
duggaed sorting resets when creating a new test at the moment when creating a new test while the table is sorted the new entry resets the sorting back to how it was before sorting it before new test img width alt skarmavbild kl em src after new test img width alt skarmavbild kl em src
0
68,794
7,110,666,724
IssuesEvent
2018-01-17 11:28:15
bitcoin/bitcoin
https://api.github.com/repos/bitcoin/bitcoin
closed
rpc-tests.py --help
Tests
``` pull-tester bitcoin$ ./rpc-tests.py --help | head -1 Usage: bip68-112-113-p2p.py [options] pull-tester bitcoin$ ```
1.0
rpc-tests.py --help - ``` pull-tester bitcoin$ ./rpc-tests.py --help | head -1 Usage: bip68-112-113-p2p.py [options] pull-tester bitcoin$ ```
non_process
rpc tests py help pull tester bitcoin rpc tests py help head usage py pull tester bitcoin
0
67,829
17,084,815,556
IssuesEvent
2021-07-08 10:23:14
appsmithorg/appsmith
https://api.github.com/repos/appsmithorg/appsmith
closed
[Bug] By default filter is visible and filterable property is disabled after drag and drop of select/drop-down widget
Bug Dropdown Widget High UI Building Pod Widgets
## Description : when you drag and drop the select/drop-down widget by default filter is visible and filterable property in property pane is disabled. ### Steps to reproduce the behaviour: 1. Drag and drop select/drop-down widget on canvas 2. Click on the widget to see the default drop-down options 3. Observe that the filter is visible 4. Now observe that in property pane the filterable property is disabled Environment: Release <img width="772" alt="Screenshot 2021-05-21 at 11 01 39 AM" src="https://user-images.githubusercontent.com/83569920/119092032-886ae900-ba2b-11eb-908b-48a41d0e76e2.png">
1.0
[Bug] By default filter is visible and filterable property is disabled after drag and drop of select/drop-down widget - ## Description : when you drag and drop the select/drop-down widget by default filter is visible and filterable property in property pane is disabled. ### Steps to reproduce the behaviour: 1. Drag and drop select/drop-down widget on canvas 2. Click on the widget to see the default drop-down options 3. Observe that the filter is visible 4. Now observe that in property pane the filterable property is disabled Environment: Release <img width="772" alt="Screenshot 2021-05-21 at 11 01 39 AM" src="https://user-images.githubusercontent.com/83569920/119092032-886ae900-ba2b-11eb-908b-48a41d0e76e2.png">
non_process
by default filter is visible and filterable property is disabled after drag and drop of select drop down widget description when you drag and drop the select drop down widget by default filter is visible and filterable property in property pane is disabled steps to reproduce the behaviour drag and drop select drop down widget on canvas click on the widget to see the default drop down options observe that the filter is visible now observe that in property pane the filterable property is disabled environment release img width alt screenshot at am src
0
394,772
11,648,494,203
IssuesEvent
2020-03-01 21:02:38
SkriptLang/Skript
https://api.github.com/repos/SkriptLang/Skript
closed
apply effect fix
completed enhancement priority: low
### Description fix an apply effect because it adds additional time for effect but it should set time
1.0
apply effect fix - ### Description fix an apply effect because it adds additional time for effect but it should set time
non_process
apply effect fix description fix an apply effect because it adds additional time for effect but it should set time
0
114,100
11,837,628,632
IssuesEvent
2020-03-23 14:29:10
spring-projects/spring-boot
https://api.github.com/repos/spring-projects/spring-boot
closed
Externalized Configuration Constructor Binding Incorrect Code Example
status: forward-port type: documentation
Forward port of issue #20378 to 2.3.0.M4.
1.0
Externalized Configuration Constructor Binding Incorrect Code Example - Forward port of issue #20378 to 2.3.0.M4.
non_process
externalized configuration constructor binding incorrect code example forward port of issue to
0
14,501
17,604,292,734
IssuesEvent
2021-08-17 15:13:32
qgis/QGIS-Documentation
https://api.github.com/repos/qgis/QGIS-Documentation
closed
Add new "Select within distance" and "Extract within distance" algorithms (Request in QGIS)
Processing Alg 3.22
### Request for documentation From pull request QGIS/qgis#44593 Author: @nyalldawson QGIS version: 3.22 **Add new "Select within distance" and "Extract within distance" algorithms** ### PR Description: These algorithms allow users to select or extract features from one layer which are within a certain distance of features from another reference layer. The distance checking is heavily optimised, using spatial indices to restrict the number of features retrieved, and also automatically handing the check off to the database server for postgis layers. The distance parameter can also be data-defined. Sponsored by QTIBIA Engineering ### Commits tagged with [need-docs] or [FEATURE] "[feature][processing] Add new \"Select within distance\" and\n\"Extract within distance\" algorithms\n\nThese algorithms allow users to select or extract features from one\nlayer which are within a certain distance of features from another\nreference layer.\n\nThe distance checking is heavily optimised, using spatial indices\nto restrict the number of features retrieved, and also automatically\nhanding the check off to the database server for postgis layers.\n\nThe distance parameter can also be data-defined.\n\nSponsored by QTIBIA Engineering"
1.0
Add new "Select within distance" and "Extract within distance" algorithms (Request in QGIS) - ### Request for documentation From pull request QGIS/qgis#44593 Author: @nyalldawson QGIS version: 3.22 **Add new "Select within distance" and "Extract within distance" algorithms** ### PR Description: These algorithms allow users to select or extract features from one layer which are within a certain distance of features from another reference layer. The distance checking is heavily optimised, using spatial indices to restrict the number of features retrieved, and also automatically handing the check off to the database server for postgis layers. The distance parameter can also be data-defined. Sponsored by QTIBIA Engineering ### Commits tagged with [need-docs] or [FEATURE] "[feature][processing] Add new \"Select within distance\" and\n\"Extract within distance\" algorithms\n\nThese algorithms allow users to select or extract features from one\nlayer which are within a certain distance of features from another\nreference layer.\n\nThe distance checking is heavily optimised, using spatial indices\nto restrict the number of features retrieved, and also automatically\nhanding the check off to the database server for postgis layers.\n\nThe distance parameter can also be data-defined.\n\nSponsored by QTIBIA Engineering"
process
add new select within distance and extract within distance algorithms request in qgis request for documentation from pull request qgis qgis author nyalldawson qgis version add new select within distance and extract within distance algorithms pr description these algorithms allow users to select or extract features from one layer which are within a certain distance of features from another reference layer the distance checking is heavily optimised using spatial indices to restrict the number of features retrieved and also automatically handing the check off to the database server for postgis layers the distance parameter can also be data defined sponsored by qtibia engineering commits tagged with or add new select within distance and n extract within distance algorithms n nthese algorithms allow users to select or extract features from one nlayer which are within a certain distance of features from another nreference layer n nthe distance checking is heavily optimised using spatial indices nto restrict the number of features retrieved and also automatically nhanding the check off to the database server for postgis layers n nthe distance parameter can also be data defined n nsponsored by qtibia engineering
1
20,045
26,533,360,535
IssuesEvent
2023-01-19 14:05:13
gchq/stroom
https://api.github.com/repos/gchq/stroom
closed
Tasks are taking a long time to create
bug f:processing
In the latest 7.1 branch, the following is a typical message reported in logs: ``` Finished creating tasks in 11s. Created 62 tasks in total, for 4 filters ``` This is despite the following conditions being true: 1. `Processor Task Retention` job enabled 2. `stroom.processor.deleteAge` set to `PT1M`
1.0
Tasks are taking a long time to create - In the latest 7.1 branch, the following is a typical message reported in logs: ``` Finished creating tasks in 11s. Created 62 tasks in total, for 4 filters ``` This is despite the following conditions being true: 1. `Processor Task Retention` job enabled 2. `stroom.processor.deleteAge` set to `PT1M`
process
tasks are taking a long time to create in the latest branch the following is a typical message reported in logs finished creating tasks in created tasks in total for filters this is despite the following conditions being true processor task retention job enabled stroom processor deleteage set to
1
36,610
6,542,170,907
IssuesEvent
2017-09-02 01:39:15
StackExchange/StackExchange.Redis
https://api.github.com/repos/StackExchange/StackExchange.Redis
closed
Test requirements?
documentation
I downloaded and ran the tests but a number of them failed or could not be run. The ones that failed seemed to try to connect to port 7xxx. There are a number of tests (particularly the SSL tests) that try to read in various .txt files. So a general question would be what is the format of the .txt files and what do I need to have running in order (and configured correctly) for the tests to pass?
1.0
Test requirements? - I downloaded and ran the tests but a number of them failed or could not be run. The ones that failed seemed to try to connect to port 7xxx. There are a number of tests (particularly the SSL tests) that try to read in various .txt files. So a general question would be what is the format of the .txt files and what do I need to have running in order (and configured correctly) for the tests to pass?
non_process
test requirements i downloaded and ran the tests but a number of them failed or could not be run the ones that failed seemed to try to connect to port there are a number of tests particularly the ssl tests that try to read in various txt files so a general question would be what is the format of the txt files and what do i need to have running in order and configured correctly for the tests to pass
0
403,514
11,842,183,976
IssuesEvent
2020-03-23 22:25:51
bcgov/entity
https://api.github.com/repos/bcgov/entity
closed
Page not found error when user who has joined the account clicks on Manage Business in Navigation bar
Priority2 Relationships bug
**Describe the bug in current situation** When a user has joined the account clicks on Manage Businesses in Navigation bar on Welcome Page, 'Page not found' error occurs **Link bug to the User Story** https://app.zenhub.com/workspaces/entity-5bf2f2164b5806bc2bf60531/issues/bcgov/entity/2844 **Impact of this bug** User will be confused why there is navigation bar with Manage Businesses as well as a button on the welcome screen and behaviour for both of them is different **Chance of Occurring (high/medium/low/very low)** Medium **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** - Test - Firefox - BCREG2002 user id used **Steps to Reproduce** Steps to reproduce the behavior: 1. As an owner, Go to 'Team Members' 2. Click on 'Invite People' button 3. Add the email and the role and click on 'Save' button 4. As a member/admin, click on 'Join account' in the email 5. Login and 'Thank you for joining.....' message appears 6. Go to Welcome screen **Actual/ observed behavior/ results** 1. There are 2 Manage Businesses on the screen 2. The one next to 'Create new BC Registries account' is disabled 3. The one in the navigation bar is active and user is directed to Page not found screen. **Expected behavior** - The behaviour should be consistent for both the Manage Businesses on Welcome screen - Need to confirm if Manage Businesse should appear at 2 places on the Welcome screen **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You an use screengrab. ![Screen Shot 2020-03-11 at 12.29.48 PM.png](https://images.zenhubusercontent.com/5e3c69ca69d8804ca9f11729/cb550289-ae01-433e-b840-f01dddf30bec) ![Screen Shot 2020-03-11 at 12.34.58 PM.png](https://images.zenhubusercontent.com/5e3c69ca69d8804ca9f11729/7a21c6e0-6006-4a98-bee8-89382383ea09)
1.0
Page not found error when user who has joined the account clicks on Manage Business in Navigation bar - **Describe the bug in current situation** When a user has joined the account clicks on Manage Businesses in Navigation bar on Welcome Page, 'Page not found' error occurs **Link bug to the User Story** https://app.zenhub.com/workspaces/entity-5bf2f2164b5806bc2bf60531/issues/bcgov/entity/2844 **Impact of this bug** User will be confused why there is navigation bar with Manage Businesses as well as a button on the welcome screen and behaviour for both of them is different **Chance of Occurring (high/medium/low/very low)** Medium **Pre Conditions: which Env, any pre-requesites or assumptions to execute steps?** - Test - Firefox - BCREG2002 user id used **Steps to Reproduce** Steps to reproduce the behavior: 1. As an owner, Go to 'Team Members' 2. Click on 'Invite People' button 3. Add the email and the role and click on 'Save' button 4. As a member/admin, click on 'Join account' in the email 5. Login and 'Thank you for joining.....' message appears 6. Go to Welcome screen **Actual/ observed behavior/ results** 1. There are 2 Manage Businesses on the screen 2. The one next to 'Create new BC Registries account' is disabled 3. The one in the navigation bar is active and user is directed to Page not found screen. **Expected behavior** - The behaviour should be consistent for both the Manage Businesses on Welcome screen - Need to confirm if Manage Businesse should appear at 2 places on the Welcome screen **Screenshots/ Visual Reference/ Source** If applicable, add screenshots to help explain your problem. You an use screengrab. ![Screen Shot 2020-03-11 at 12.29.48 PM.png](https://images.zenhubusercontent.com/5e3c69ca69d8804ca9f11729/cb550289-ae01-433e-b840-f01dddf30bec) ![Screen Shot 2020-03-11 at 12.34.58 PM.png](https://images.zenhubusercontent.com/5e3c69ca69d8804ca9f11729/7a21c6e0-6006-4a98-bee8-89382383ea09)
non_process
page not found error when user who has joined the account clicks on manage business in navigation bar describe the bug in current situation when a user has joined the account clicks on manage businesses in navigation bar on welcome page page not found error occurs link bug to the user story impact of this bug user will be confused why there is navigation bar with manage businesses as well as a button on the welcome screen and behaviour for both of them is different chance of occurring high medium low very low medium pre conditions which env any pre requesites or assumptions to execute steps test firefox user id used steps to reproduce steps to reproduce the behavior as an owner go to team members click on invite people button add the email and the role and click on save button as a member admin click on join account in the email login and thank you for joining message appears go to welcome screen actual observed behavior results there are manage businesses on the screen the one next to create new bc registries account is disabled the one in the navigation bar is active and user is directed to page not found screen expected behavior the behaviour should be consistent for both the manage businesses on welcome screen need to confirm if manage businesse should appear at places on the welcome screen screenshots visual reference source if applicable add screenshots to help explain your problem you an use screengrab
0
196,692
15,606,984,610
IssuesEvent
2021-03-19 08:48:20
ocaml/ocaml
https://api.github.com/repos/ocaml/ocaml
closed
Document why, when and how to update magic numbers
Stale bug documentation
**Original bug ID:** 7599 **Reporter:** @gasche **Status:** acknowledged (set by @xavierleroy on 2017-09-21T08:32:40Z) **Resolution:** open **Priority:** low **Severity:** text **Target version:** 4.07.0+dev/beta2/rc1/rc2 **Category:** documentation **Tags:** junior_job **Related to:** #7598 #7600 ## Bug description Currently external contributors may not know about magic numbers, when to update them, and what is the process to update them (in particular, I believe that a bootstrap is needed?). This could be explained in comments in utils/config.mlp (I like having documentation close to the code) or maybe in utils/HACKING.adoc.
1.0
Document why, when and how to update magic numbers - **Original bug ID:** 7599 **Reporter:** @gasche **Status:** acknowledged (set by @xavierleroy on 2017-09-21T08:32:40Z) **Resolution:** open **Priority:** low **Severity:** text **Target version:** 4.07.0+dev/beta2/rc1/rc2 **Category:** documentation **Tags:** junior_job **Related to:** #7598 #7600 ## Bug description Currently external contributors may not know about magic numbers, when to update them, and what is the process to update them (in particular, I believe that a bootstrap is needed?). This could be explained in comments in utils/config.mlp (I like having documentation close to the code) or maybe in utils/HACKING.adoc.
non_process
document why when and how to update magic numbers original bug id reporter gasche status acknowledged set by xavierleroy on resolution open priority low severity text target version dev category documentation tags junior job related to bug description currently external contributors may not know about magic numbers when to update them and what is the process to update them in particular i believe that a bootstrap is needed this could be explained in comments in utils config mlp i like having documentation close to the code or maybe in utils hacking adoc
0