Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,858 | 12,958,894,885 | IssuesEvent | 2020-07-20 12:09:49 | mdsaj/LP-AngularSpring | https://api.github.com/repos/mdsaj/LP-AngularSpring | closed | CVE-2019-16942 (High) detected in jackson-databind-2.9.7.jar | security vulnerability | ## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/LP-AngularSpring/AngularSpring/bin/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.7/jackson-databind-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.1.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.1.RELEASE.jar
- :x: **jackson-databind-2.9.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mdsaj/LP-AngularSpring/commit/4e7f828d4ba904c93c26ef7d7af0f753a3c44a38">4e7f828d4ba904c93c26ef7d7af0f753a3c44a38</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16942 (High) detected in jackson-databind-2.9.7.jar - ## CVE-2019-16942 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.7.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/LP-AngularSpring/AngularSpring/bin/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.7/jackson-databind-2.9.7.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.1.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.1.1.RELEASE.jar
- :x: **jackson-databind-2.9.7.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mdsaj/LP-AngularSpring/commit/4e7f828d4ba904c93c26ef7d7af0f753a3c44a38">4e7f828d4ba904c93c26ef7d7af0f753a3c44a38</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the commons-dbcp (1.4) jar in the classpath, and an attacker can find an RMI service endpoint to access, it is possible to make the service execute a malicious payload. This issue exists because of org.apache.commons.dbcp.datasources.SharedPoolDataSource and org.apache.commons.dbcp.datasources.PerUserPoolDataSource mishandling.
<p>Publish Date: 2019-10-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16942>CVE-2019-16942</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16942</a></p>
<p>Release Date: 2019-10-01</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm lp angularspring angularspring bin pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the commons dbcp jar in the classpath and an attacker can find an rmi service endpoint to access it is possible to make the service execute a malicious payload this issue exists because of org apache commons dbcp datasources sharedpooldatasource and org apache commons dbcp datasources peruserpooldatasource mishandling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
413,709 | 27,965,438,014 | IssuesEvent | 2023-03-24 19:04:43 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | qgis was failed to create a layer in my project folder when i run the script two or more times getting the error as Could not create layer D:/R-Lpms/test/modeltest/modeltest\Boundary.shp: Creation of data source failed (OGR error: D:/modeltest\Boundary.shp is not a directory.) Execution failed after 1.44 seconds | API Documentation PyQGIS | ### What is the issue about the documentation?
# -*- coding: utf-8 -*-
"""
/***************************************************************************
LpmProcessingModel
A QGIS plugin
Description
Generated by Plugin Builder: http://g-sherman.github.io/Qgis-Plugin-Builder/
-------------------
begin : 2023-03-12
copyright : (C) 2023 by SurveyoStories
email : surveyorstories@gmail.com
***************************************************************************/
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
"""
from multiprocessing.spawn import import_main_path
from logging import _STYLES
import inspect
from qgis.PyQt.QtGui import QIcon
import processing
from qgis.core import QgsProcessingParameterVectorDestination
from qgis.core import QgsProcessingParameterFeatureSource
from qgis.core import QgsGeometry
from qgis.core import QgsAttributeEditorContainer
from qgis.core import QgsMapLayer
from qgis.core import QgsVectorFileWriter
from qgis.core import QgsFileUtils
from qgis.core import QgsProject
from qgis.core import QgsFeature
from qgis.core import QgsField
from qgis.core import QgsVectorLayer
from qgis.core import QgsExpressionContext
from qgis.core import QgsExpression
from qgis.core import QgsProcessingParameterString
from qgis.core import QgsProcessingParameterFile
from qgis.core import QgsProcessingParameterDistance
from qgis.core import QgsProcessingParameterField
from qgis.core import QgsProcessingParameterVectorLayer
from qgis.core import QgsProcessingMultiStepFeedback
from qgis.core import QgsProcessingAlgorithm
from qgis.core import QgsProcessing
import os
import sys
__author__ = 'SurveyoStories'
__date__ = '2023-03-12'
__copyright__ = '(C) 2023 by SurveyoStories'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
from qgis.PyQt.QtCore import QCoreApplication
from qgis.core import (QgsProcessing,
QgsFeatureSink,
QgsProcessingAlgorithm,
QgsProcessingParameterFeatureSource,
QgsProcessingParameterFeatureSink)
"""
Model exported as python.
Name : LPM Processing Model
Group :
With QGIS : 32601
"""
# Get the path to the current project folder
project_folder = QgsProject.instance().readPath("./")
class LpmProcessingModelAlgorithm(QgsProcessingAlgorithm):
def icon(self):
cmd_folder = os.path.split(inspect.getfile(inspect.currentframe()))[0]
icon = QIcon(os.path.join(os.path.join(cmd_folder, 'icon.png')))
return icon
def flags(self):
return super().flags() | QgsProcessingAlgorithm.FlagNoThreading
def initAlgorithm(self, config=None):
self.addParameter(QgsProcessingParameterVectorLayer('village_final_shape_file',
'Village Final Shape File', types=[QgsProcessing.TypeVectorPolygon], defaultValue=None))
self.addParameter(QgsProcessingParameterField('lpm_number_column', 'LPM Number Column', type=QgsProcessingParameterField.Any,
parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterField('area_in_acres_column_choose_your_acre_column_according_to_your_shape_file', 'Area in Acres Column (Choose Your Acre column according to your shape File)',
type=QgsProcessingParameterField.Any, parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterField('area_in_hectares_column_choose_your_hectares_column_according_to_your_shape_file', 'Area in Hectares Column (Choose Your Hectares column according to your shape File)',
type=QgsProcessingParameterField.Any, parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterDistance('prill_line_length_multi_ring_length', 'Prill Line Length (Multi Ring Length)',
parentParameterName='village_final_shape_file', minValue=1, maxValue=20, defaultValue=3))
self.addParameter(QgsProcessingParameterFile('choose_your_rlrdlr_form38_resurvey_land_register', 'Choose Your RLR/DLR (Form-38 Resurvey Land Register)',
behavior=QgsProcessingParameterFile.File, fileFilter='All Files (*.xlsx*)', defaultValue="C:\\Users\\lokes\Desktop\\Install\layer\\Form38data_Report (4).xlsx"))
self.addParameter(QgsProcessingParameterFile('choose_your_ulpin_layer', 'Choose Your ULPIN Layer',
behavior=QgsProcessingParameterFile.File, fileFilter='All Files (*.csv*)',
self.addParameter(QgsProcessingParameterString('please_enter_7_digit_village_code',
'Please enter 7 Digit village Code ', multiLine=False, defaultValue='code'))
self.addParameter(QgsProcessingParameterString('village_name_please_enter_your_village_name_in_telugu',
'Village Name (Please enter your Village Name in Telugu)', multiLine=False, defaultValue='vill name'))
self.addParameter(QgsProcessingParameterString('mandal_name_please_enter_mandal_name_in_telugu',
'Mandal Name (Please enter Mandal Name in Telugu)', multiLine=False, defaultValue='mandal name'))
self.addParameter(QgsProcessingParameterString('district_name_please_enter_district_name_in_telugu',
'District Name (Please enter District name in Telugu)', multiLine=False, defaultValue='distri'))
self.addParameter(QgsProcessingParameterString(
'village_surveyor_name', 'Village Surveyor Name', multiLine=False, defaultValue='vs name'))
def processAlgorithm(self, parameters, context, model_feedback):
# Use a multi-step feedback, so that individual child algorithm progress reports are adjusted for the
# overall progress through the model
# Get a reference to the current project
project = QgsProject.instance()
# Set the names of the layers to remove
layer_names = ['ULPIN', 'Resurvey_Land_Register', 'Prill_Lines',
'Final_Vertices', 'Exploded_Lines', 'Boundary', 'Vertices', 'Clip', 'Multi_Ring']
# Loop through all the layers in the project
for layer in project.mapLayers().values():
# Check if the layer's name is in the list of names to remove
if layer.name() in layer_names:
# Remove the layer from the project
project.removeMapLayer(layer.id())
feedback = QgsProcessingMultiStepFeedback(40, model_feedback)
results = {}
outputs = {}
# Extract RLR by expression
alg_params = {
'EXPRESSION': ' "Field4" != \'4a\' and\r\n "Field4" != \'Extent\'\r',
'INPUT': parameters['choose_your_rlrdlr_form38_resurvey_land_register'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['ExtractRlrByExpression'] = processing.run(
'native:extractbyexpression', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(1)
if feedback.isCanceled():
return {}
# Village_Name
alg_params = {
'NAME': 'Village_Name',
'VALUE': parameters['village_name_please_enter_your_village_name_in_telugu']
}
outputs['Village_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(2)
if feedback.isCanceled():
return {}
# Save ULPIN Layer to Project Folder
alg_params = {
'DATASOURCE_OPTIONS': '',
'INPUT': parameters['choose_your_ulpin_layer'],
'LAYER_NAME': '',
'LAYER_OPTIONS': '',
'OUTPUT': (project_folder + '/ULPIN.csv'),
}
outputs['SaveUlpinLayerToProjectFolder'] = processing.run(
'native:savefeatures', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(3)
if feedback.isCanceled():
return {}
# Polygon spatial index
alg_params = {
'INPUT': parameters['village_final_shape_file']
}
outputs['PolygonSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(4)
if feedback.isCanceled():
return {}
# Village Surveyor Name
alg_params = {
'NAME': 'VS_Name',
'VALUE': parameters['village_surveyor_name']
}
outputs['VillageSurveyorName'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(5)
if feedback.isCanceled():
return {}
# Mandal_Name
alg_params = {
'NAME': 'Mandal_Name',
'VALUE': parameters['mandal_name_please_enter_mandal_name_in_telugu']
}
outputs['Mandal_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(6)
if feedback.isCanceled():
return {}
# Village_Code
alg_params = {
'NAME': 'Village_Code',
'VALUE': parameters['please_enter_7_digit_village_code']
}
outputs['Village_code'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(7)
if feedback.isCanceled():
return {}
# District_Name
alg_params = {
'NAME': 'District_Name',
'VALUE': parameters['district_name_please_enter_district_name_in_telugu']
}
outputs['District_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(8)
if feedback.isCanceled():
return {}
# LPM_NO
alg_params = {
'NAME': 'LPM_NO',
'VALUE': parameters['lpm_number_column']
}
outputs['Lpm_no'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(9)
if feedback.isCanceled():
return {}
# Ref_Col Calcluation
alg_params = {
'FIELD_LENGTH': 10,
'FIELD_NAME': 'Ref_Col',
'FIELD_PRECISION': 0,
'FIELD_TYPE': 1, # Integer (32 bit)
'FORMULA': 'attribute(@LPM_NO)',
'INPUT': outputs['PolygonSpatialIndex']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['Ref_colCalcluation'] = processing.run(
'native:fieldcalculator', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(10)
if feedback.isCanceled():
return {}
# Retain fields In RLR
alg_params = {
'FIELDS': ['Field1', 'Field4', 'Field6', 'Field8', 'Field12', 'Field13', 'Field14'],
'INPUT': outputs['ExtractRlrByExpression']['OUTPUT'],
'OUTPUT': project_folder + '/RLR_excel.xlsx',
'OUTPUT': project_folder + '/RLR_excel.xlsx'
}
outputs['RetainFieldsInRlr'] = processing.run(
'native:retainfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(11)
if feedback.isCanceled():
return {}
# Boundary
alg_params = {
'INPUT': outputs['Ref_colCalcluation']['OUTPUT'],
'OUTPUT': os.path.join(project_folder, 'Boundary.shp'),
'OVERWRITE': True
}
outputs['Boundary'] = processing.run(
'native:boundary', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(12)
if feedback.isCanceled():
return {}
# Load ULPIN Layer into project
alg_params = {
'INPUT': outputs['SaveUlpinLayerToProjectFolder']['OUTPUT'],
'NAME': 'ULPIN'
}
outputs['LoadUlpinLayerIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(13)
if feedback.isCanceled():
return {}
# Area_in_Acres
alg_params = {
'NAME': 'Area_in_Acres',
'VALUE': parameters['area_in_acres_column_choose_your_acre_column_according_to_your_shape_file']
}
outputs['Area_in_acres'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(14)
if feedback.isCanceled():
return {}
# Area_in_Hectares
alg_params = {
'NAME': 'Area_in_Hectares',
'VALUE': parameters['area_in_hectares_column_choose_your_hectares_column_according_to_your_shape_file']
}
outputs['Area_in_hectares'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(15)
if feedback.isCanceled():
return {}
# Load RLR into project
alg_params = {
'INPUT': outputs['RetainFieldsInRlr']['OUTPUT'],
'NAME': 'Resurvey_Land_Register'
}
outputs['LoadRlrIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(16)
if feedback.isCanceled():
return {}
# Extract vertices
alg_params = {
'INPUT': outputs['Ref_colCalcluation']['OUTPUT'],
'OUTPUT': project_folder + '/Vertices.shp',
'OUTPUT': project_folder + '/Vertices.shp'
}
outputs['ExtractVertices'] = processing.run(
'native:extractvertices', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(17)
if feedback.isCanceled():
return {}
# Refactor Vertices
alg_params = {
'FIELDS_MAPPING': [{'expression': "y( transform( $geometry,@layer_crs,\r\n'EPSG:4326'))", 'length': 10, 'name': 'LATITUDE', 'precision': 6, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': "x( transform( $geometry,@layer_crs,\r\n'EPSG:4326'))", 'length': 10, 'name': 'LONGITUDE', 'precision': 6, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '$x', 'length': 10, 'name': 'EASTING_X', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '$y', 'length': 10, 'name': 'NORTHING_Y', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '"Ref_Col"', 'length': 10, 'name': 'Ref_Col', 'precision': 0, 'sub_type': 0, 'type': 4, 'type_name': 'int8'}],
'INPUT': outputs['ExtractVertices']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['RefactorVertices'] = processing.run(
'native:refactorfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(18)
if feedback.isCanceled():
return {}
# Boundary spatial index
alg_params = {
'INPUT': outputs['Boundary']['OUTPUT']
}
outputs['BoundarySpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(19)
if feedback.isCanceled():
return {}
# Explode lines
alg_params = {
'INPUT': outputs['BoundarySpatialIndex']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['ExplodeLines'] = processing.run(
'native:explodelines', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(20)
if feedback.isCanceled():
return {}
# Refactor Exploded Lines
alg_params = {
'FIELDS_MAPPING': [{'expression': '"Ref_Col"', 'length': 10, 'name': 'Ref_Col', 'precision': 0, 'sub_type': 0, 'type': 2, 'type_name': 'integer'}, {'expression': 'length3D( $geometry)', 'length': 10, 'name': 'Length', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}],
'INPUT': outputs['ExplodeLines']['OUTPUT'],
'OUTPUT': project_folder + '/Exploded_Lines.shp',
'OUTPUT': project_folder + '/Exploded_Lines.shp'
}
outputs['RefactorExplodedLines'] = processing.run(
'native:refactorfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(21)
if feedback.isCanceled():
return {}
# Load Exploded_Lines into project
alg_params = {
'INPUT': outputs['RefactorExplodedLines']['OUTPUT'],
'NAME': 'Exploded_Lines'
}
outputs['LoadExploded_linesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(22)
if feedback.isCanceled():
return {}
# Create ULPIN attribute index
# alg_params = {
# 'FIELD': "pniu",
# 'INPUT': outputs['LoadUlpinLayerIntoProject']['OUTPUT']
# }
# outputs['CreateUlpinAttributeIndex'] = processing.run(
# 'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
# feedback.setCurrentStep(23)
# if feedback.isCanceled():
# return {}
# Multi-ring buffer (constant distance)
alg_params = {
'DISTANCE': parameters['prill_line_length_multi_ring_length'],
'INPUT': outputs['ExtractVertices']['OUTPUT'],
'OUTPUT': project_folder + '/Multi_Ring.shp',
'RINGS': 1,
'OUTPUT': project_folder + '/Multi_Ring.shp'
}
outputs['MultiringBufferConstantDistance'] = processing.run(
'native:multiringconstantbuffer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(23)
if feedback.isCanceled():
return {}
# # Create RLR attribute index
# alg_params = {
# 'FIELD': 'Field1',
# 'INPUT': outputs['LoadRlrIntoProject']['OUTPUT']
# }
# outputs['CreateRlrAttributeIndex'] = processing.run(
# 'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
# feedback.setCurrentStep(25)
# if feedback.isCanceled():
# return {}
# Delete duplicates by attribute in Vertices
alg_params = {
'FIELDS': QgsExpression(" 'Ref_Col;EASTING_X;NORTHING_Y'").evaluate(),
'INPUT': outputs['RefactorVertices']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['DeleteDuplicatesByAttributeInVertices'] = processing.run(
'native:removeduplicatesbyattribute', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(24)
if feedback.isCanceled():
return {}
# Multiring spatial index
alg_params = {
'INPUT': outputs['MultiringBufferConstantDistance']['OUTPUT']
}
outputs['MultiringSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(25)
if feedback.isCanceled():
return {}
# Clip
alg_params = {
'INPUT': outputs['BoundarySpatialIndex']['OUTPUT'],
'OUTPUT': project_folder + '/Clip.shp',
'OVERLAY': outputs['MultiringSpatialIndex']['OUTPUT'],
'OUTPUT': project_folder + '/Clip.shp'
}
outputs['Clip'] = processing.run(
'native:clip', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(26)
if feedback.isCanceled():
return {}
# Explode Lines spatial index
alg_params = {
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT']
}
outputs['ExplodeLinesSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(27)
if feedback.isCanceled():
return {}
# Prill Lines
alg_params = {
'INPUT': outputs['Clip']['OUTPUT'],
'OUTPUT': project_folder + '/Prill_Lines.shp',
'OUTPUT': project_folder + '/Prill_Lines.shp'
}
outputs['PrillLines'] = processing.run(
'native:explodelines', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(28)
if feedback.isCanceled():
return {}
# Add Point_ID field
alg_params = {
'FIELD_NAME': 'Point_ID',
'GROUP_FIELDS': ['Ref_Col'],
'INPUT': outputs['DeleteDuplicatesByAttributeInVertices']['OUTPUT'],
'MODULUS': 0,
'OUTPUT': project_folder + '/Final_Vertices.shp',
'SORT_ASCENDING': True,
'SORT_EXPRESSION': '',
'SORT_NULLS_FIRST': False,
'START': 1,
'OUTPUT': project_folder + '/Final_Vertices.shp'
}
outputs['AddPoint_idField'] = processing.run(
'native:addautoincrementalfield', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(29)
if feedback.isCanceled():
return {}
# Load Final_Vertices into project
alg_params = {
'INPUT': outputs['AddPoint_idField']['OUTPUT'],
'NAME': 'Final_Vertices'
}
outputs['LoadFinal_verticesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(30)
if feedback.isCanceled():
return {}
# Load Prill_Lines into project
alg_params = {
'INPUT': outputs['PrillLines']['OUTPUT'],
'NAME': 'Prill_Lines'
}
outputs['LoadPrill_linesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(31)
if feedback.isCanceled():
return {}
# Create Final vertices spatial index
alg_params = {
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT']
}
outputs['CreateFinalVerticesSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(32)
if feedback.isCanceled():
return {}
# Create attribute index Final_Vertices
alg_params = {
'FIELD': 'Ref_Col',
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT']
}
outputs['CreateAttributeIndexFinal_vertices'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(33)
if feedback.isCanceled():
return {}
# Create Prill line spatial index
alg_params = {
'INPUT': outputs['LoadPrill_linesIntoProject']['OUTPUT']
}
outputs['CreatePrillLineSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(34)
if feedback.isCanceled():
return {}
# Create explde line attribute index
alg_params = {
'FIELD': 'Ref_Col',
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT']
}
outputs['CreateExpldeLineAttributeIndex'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(35)
if feedback.isCanceled():
return {}
# Create polygon attribute index
alg_params = {
'FIELD': parameters['lpm_number_column'],
'INPUT': parameters['village_final_shape_file']
}
outputs['CreatePolygonAttributeIndex'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(36)
if feedback.isCanceled():
return {}
# Set Polygon style
alg_params = {
'INPUT': parameters['village_final_shape_file'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Polygon_Style.qml"
}
outputs['SetPolygonStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(37)
if feedback.isCanceled():
return {}
# Set Exploded Lines style
alg_params = {
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Exploded_Lines_Style.qml"
}
outputs['SetExplodedLinesStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(38)
if feedback.isCanceled():
return {}
# Set Final Vertices style
alg_params = {
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Final_Vertices_Style.qml"
}
outputs['SetFinalVerticesStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(39)
if feedback.isCanceled():
return {}
# Set Prill Line style
alg_params = {
'INPUT': outputs['LoadPrill_linesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Prill_lines_Style.qml"
}
outputs['SetPrillLineStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
return results
def name(self):
return 'LPM Processing Model'
def displayName(self):
return 'LPM Processing Model'
def group(self):
return ''
def groupId(self):
return ''
def shortHelpString(self):
return """help"""
def createInstance(self):
return LpmProcessingModelAlgorithm()
| 1.0 | qgis was failed to create a layer in my project folder when i run the script two or more times getting the error as Could not create layer D:/R-Lpms/test/modeltest/modeltest\Boundary.shp: Creation of data source failed (OGR error: D:/modeltest\Boundary.shp is not a directory.) Execution failed after 1.44 seconds - ### What is the issue about the documentation?
# -*- coding: utf-8 -*-
"""
/***************************************************************************
LpmProcessingModel
A QGIS plugin
Description
Generated by Plugin Builder: http://g-sherman.github.io/Qgis-Plugin-Builder/
-------------------
begin : 2023-03-12
copyright : (C) 2023 by SurveyoStories
email : surveyorstories@gmail.com
***************************************************************************/
/***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************/
"""
from multiprocessing.spawn import import_main_path
from logging import _STYLES
import inspect
from qgis.PyQt.QtGui import QIcon
import processing
from qgis.core import QgsProcessingParameterVectorDestination
from qgis.core import QgsProcessingParameterFeatureSource
from qgis.core import QgsGeometry
from qgis.core import QgsAttributeEditorContainer
from qgis.core import QgsMapLayer
from qgis.core import QgsVectorFileWriter
from qgis.core import QgsFileUtils
from qgis.core import QgsProject
from qgis.core import QgsFeature
from qgis.core import QgsField
from qgis.core import QgsVectorLayer
from qgis.core import QgsExpressionContext
from qgis.core import QgsExpression
from qgis.core import QgsProcessingParameterString
from qgis.core import QgsProcessingParameterFile
from qgis.core import QgsProcessingParameterDistance
from qgis.core import QgsProcessingParameterField
from qgis.core import QgsProcessingParameterVectorLayer
from qgis.core import QgsProcessingMultiStepFeedback
from qgis.core import QgsProcessingAlgorithm
from qgis.core import QgsProcessing
import os
import sys
__author__ = 'SurveyoStories'
__date__ = '2023-03-12'
__copyright__ = '(C) 2023 by SurveyoStories'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
from qgis.PyQt.QtCore import QCoreApplication
from qgis.core import (QgsProcessing,
QgsFeatureSink,
QgsProcessingAlgorithm,
QgsProcessingParameterFeatureSource,
QgsProcessingParameterFeatureSink)
"""
Model exported as python.
Name : LPM Processing Model
Group :
With QGIS : 32601
"""
# Get the path to the current project folder
project_folder = QgsProject.instance().readPath("./")
class LpmProcessingModelAlgorithm(QgsProcessingAlgorithm):
def icon(self):
cmd_folder = os.path.split(inspect.getfile(inspect.currentframe()))[0]
icon = QIcon(os.path.join(os.path.join(cmd_folder, 'icon.png')))
return icon
def flags(self):
return super().flags() | QgsProcessingAlgorithm.FlagNoThreading
def initAlgorithm(self, config=None):
self.addParameter(QgsProcessingParameterVectorLayer('village_final_shape_file',
'Village Final Shape File', types=[QgsProcessing.TypeVectorPolygon], defaultValue=None))
self.addParameter(QgsProcessingParameterField('lpm_number_column', 'LPM Number Column', type=QgsProcessingParameterField.Any,
parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterField('area_in_acres_column_choose_your_acre_column_according_to_your_shape_file', 'Area in Acres Column (Choose Your Acre column according to your shape File)',
type=QgsProcessingParameterField.Any, parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterField('area_in_hectares_column_choose_your_hectares_column_according_to_your_shape_file', 'Area in Hectares Column (Choose Your Hectares column according to your shape File)',
type=QgsProcessingParameterField.Any, parentLayerParameterName='village_final_shape_file', allowMultiple=False, defaultValue=None))
self.addParameter(QgsProcessingParameterDistance('prill_line_length_multi_ring_length', 'Prill Line Length (Multi Ring Length)',
parentParameterName='village_final_shape_file', minValue=1, maxValue=20, defaultValue=3))
self.addParameter(QgsProcessingParameterFile('choose_your_rlrdlr_form38_resurvey_land_register', 'Choose Your RLR/DLR (Form-38 Resurvey Land Register)',
behavior=QgsProcessingParameterFile.File, fileFilter='All Files (*.xlsx*)', defaultValue="C:\\Users\\lokes\Desktop\\Install\layer\\Form38data_Report (4).xlsx"))
self.addParameter(QgsProcessingParameterFile('choose_your_ulpin_layer', 'Choose Your ULPIN Layer',
behavior=QgsProcessingParameterFile.File, fileFilter='All Files (*.csv*)',
self.addParameter(QgsProcessingParameterString('please_enter_7_digit_village_code',
'Please enter 7 Digit village Code ', multiLine=False, defaultValue='code'))
self.addParameter(QgsProcessingParameterString('village_name_please_enter_your_village_name_in_telugu',
'Village Name (Please enter your Village Name in Telugu)', multiLine=False, defaultValue='vill name'))
self.addParameter(QgsProcessingParameterString('mandal_name_please_enter_mandal_name_in_telugu',
'Mandal Name (Please enter Mandal Name in Telugu)', multiLine=False, defaultValue='mandal name'))
self.addParameter(QgsProcessingParameterString('district_name_please_enter_district_name_in_telugu',
'District Name (Please enter District name in Telugu)', multiLine=False, defaultValue='distri'))
self.addParameter(QgsProcessingParameterString(
'village_surveyor_name', 'Village Surveyor Name', multiLine=False, defaultValue='vs name'))
def processAlgorithm(self, parameters, context, model_feedback):
# Use a multi-step feedback, so that individual child algorithm progress reports are adjusted for the
# overall progress through the model
# Get a reference to the current project
project = QgsProject.instance()
# Set the names of the layers to remove
layer_names = ['ULPIN', 'Resurvey_Land_Register', 'Prill_Lines',
'Final_Vertices', 'Exploded_Lines', 'Boundary', 'Vertices', 'Clip', 'Multi_Ring']
# Loop through all the layers in the project
for layer in project.mapLayers().values():
# Check if the layer's name is in the list of names to remove
if layer.name() in layer_names:
# Remove the layer from the project
project.removeMapLayer(layer.id())
feedback = QgsProcessingMultiStepFeedback(40, model_feedback)
results = {}
outputs = {}
# Extract RLR by expression
alg_params = {
'EXPRESSION': ' "Field4" != \'4a\' and\r\n "Field4" != \'Extent\'\r',
'INPUT': parameters['choose_your_rlrdlr_form38_resurvey_land_register'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['ExtractRlrByExpression'] = processing.run(
'native:extractbyexpression', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(1)
if feedback.isCanceled():
return {}
# Village_Name
alg_params = {
'NAME': 'Village_Name',
'VALUE': parameters['village_name_please_enter_your_village_name_in_telugu']
}
outputs['Village_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(2)
if feedback.isCanceled():
return {}
# Save ULPIN Layer to Project Folder
alg_params = {
'DATASOURCE_OPTIONS': '',
'INPUT': parameters['choose_your_ulpin_layer'],
'LAYER_NAME': '',
'LAYER_OPTIONS': '',
'OUTPUT': (project_folder + '/ULPIN.csv'),
}
outputs['SaveUlpinLayerToProjectFolder'] = processing.run(
'native:savefeatures', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(3)
if feedback.isCanceled():
return {}
# Polygon spatial index
alg_params = {
'INPUT': parameters['village_final_shape_file']
}
outputs['PolygonSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(4)
if feedback.isCanceled():
return {}
# Village Surveyor Name
alg_params = {
'NAME': 'VS_Name',
'VALUE': parameters['village_surveyor_name']
}
outputs['VillageSurveyorName'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(5)
if feedback.isCanceled():
return {}
# Mandal_Name
alg_params = {
'NAME': 'Mandal_Name',
'VALUE': parameters['mandal_name_please_enter_mandal_name_in_telugu']
}
outputs['Mandal_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(6)
if feedback.isCanceled():
return {}
# Village_Code
alg_params = {
'NAME': 'Village_Code',
'VALUE': parameters['please_enter_7_digit_village_code']
}
outputs['Village_code'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(7)
if feedback.isCanceled():
return {}
# District_Name
alg_params = {
'NAME': 'District_Name',
'VALUE': parameters['district_name_please_enter_district_name_in_telugu']
}
outputs['District_name'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(8)
if feedback.isCanceled():
return {}
# LPM_NO
alg_params = {
'NAME': 'LPM_NO',
'VALUE': parameters['lpm_number_column']
}
outputs['Lpm_no'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(9)
if feedback.isCanceled():
return {}
# Ref_Col Calcluation
alg_params = {
'FIELD_LENGTH': 10,
'FIELD_NAME': 'Ref_Col',
'FIELD_PRECISION': 0,
'FIELD_TYPE': 1, # Integer (32 bit)
'FORMULA': 'attribute(@LPM_NO)',
'INPUT': outputs['PolygonSpatialIndex']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['Ref_colCalcluation'] = processing.run(
'native:fieldcalculator', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(10)
if feedback.isCanceled():
return {}
# Retain fields In RLR
alg_params = {
'FIELDS': ['Field1', 'Field4', 'Field6', 'Field8', 'Field12', 'Field13', 'Field14'],
'INPUT': outputs['ExtractRlrByExpression']['OUTPUT'],
'OUTPUT': project_folder + '/RLR_excel.xlsx',
'OUTPUT': project_folder + '/RLR_excel.xlsx'
}
outputs['RetainFieldsInRlr'] = processing.run(
'native:retainfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(11)
if feedback.isCanceled():
return {}
# Boundary
alg_params = {
'INPUT': outputs['Ref_colCalcluation']['OUTPUT'],
'OUTPUT': os.path.join(project_folder, 'Boundary.shp'),
'OVERWRITE': True
}
outputs['Boundary'] = processing.run(
'native:boundary', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(12)
if feedback.isCanceled():
return {}
# Load ULPIN Layer into project
alg_params = {
'INPUT': outputs['SaveUlpinLayerToProjectFolder']['OUTPUT'],
'NAME': 'ULPIN'
}
outputs['LoadUlpinLayerIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(13)
if feedback.isCanceled():
return {}
# Area_in_Acres
alg_params = {
'NAME': 'Area_in_Acres',
'VALUE': parameters['area_in_acres_column_choose_your_acre_column_according_to_your_shape_file']
}
outputs['Area_in_acres'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(14)
if feedback.isCanceled():
return {}
# Area_in_Hectares
alg_params = {
'NAME': 'Area_in_Hectares',
'VALUE': parameters['area_in_hectares_column_choose_your_hectares_column_according_to_your_shape_file']
}
outputs['Area_in_hectares'] = processing.run(
'native:setprojectvariable', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(15)
if feedback.isCanceled():
return {}
# Load RLR into project
alg_params = {
'INPUT': outputs['RetainFieldsInRlr']['OUTPUT'],
'NAME': 'Resurvey_Land_Register'
}
outputs['LoadRlrIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(16)
if feedback.isCanceled():
return {}
# Extract vertices
alg_params = {
'INPUT': outputs['Ref_colCalcluation']['OUTPUT'],
'OUTPUT': project_folder + '/Vertices.shp',
'OUTPUT': project_folder + '/Vertices.shp'
}
outputs['ExtractVertices'] = processing.run(
'native:extractvertices', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(17)
if feedback.isCanceled():
return {}
# Refactor Vertices
alg_params = {
'FIELDS_MAPPING': [{'expression': "y( transform( $geometry,@layer_crs,\r\n'EPSG:4326'))", 'length': 10, 'name': 'LATITUDE', 'precision': 6, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': "x( transform( $geometry,@layer_crs,\r\n'EPSG:4326'))", 'length': 10, 'name': 'LONGITUDE', 'precision': 6, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '$x', 'length': 10, 'name': 'EASTING_X', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '$y', 'length': 10, 'name': 'NORTHING_Y', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}, {'expression': '"Ref_Col"', 'length': 10, 'name': 'Ref_Col', 'precision': 0, 'sub_type': 0, 'type': 4, 'type_name': 'int8'}],
'INPUT': outputs['ExtractVertices']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['RefactorVertices'] = processing.run(
'native:refactorfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(18)
if feedback.isCanceled():
return {}
# Boundary spatial index
alg_params = {
'INPUT': outputs['Boundary']['OUTPUT']
}
outputs['BoundarySpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(19)
if feedback.isCanceled():
return {}
# Explode lines
alg_params = {
'INPUT': outputs['BoundarySpatialIndex']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['ExplodeLines'] = processing.run(
'native:explodelines', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(20)
if feedback.isCanceled():
return {}
# Refactor Exploded Lines
alg_params = {
'FIELDS_MAPPING': [{'expression': '"Ref_Col"', 'length': 10, 'name': 'Ref_Col', 'precision': 0, 'sub_type': 0, 'type': 2, 'type_name': 'integer'}, {'expression': 'length3D( $geometry)', 'length': 10, 'name': 'Length', 'precision': 1, 'sub_type': 0, 'type': 6, 'type_name': 'double precision'}],
'INPUT': outputs['ExplodeLines']['OUTPUT'],
'OUTPUT': project_folder + '/Exploded_Lines.shp',
'OUTPUT': project_folder + '/Exploded_Lines.shp'
}
outputs['RefactorExplodedLines'] = processing.run(
'native:refactorfields', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(21)
if feedback.isCanceled():
return {}
# Load Exploded_Lines into project
alg_params = {
'INPUT': outputs['RefactorExplodedLines']['OUTPUT'],
'NAME': 'Exploded_Lines'
}
outputs['LoadExploded_linesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(22)
if feedback.isCanceled():
return {}
# Create ULPIN attribute index
# alg_params = {
# 'FIELD': "pniu",
# 'INPUT': outputs['LoadUlpinLayerIntoProject']['OUTPUT']
# }
# outputs['CreateUlpinAttributeIndex'] = processing.run(
# 'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
# feedback.setCurrentStep(23)
# if feedback.isCanceled():
# return {}
# Multi-ring buffer (constant distance)
alg_params = {
'DISTANCE': parameters['prill_line_length_multi_ring_length'],
'INPUT': outputs['ExtractVertices']['OUTPUT'],
'OUTPUT': project_folder + '/Multi_Ring.shp',
'RINGS': 1,
'OUTPUT': project_folder + '/Multi_Ring.shp'
}
outputs['MultiringBufferConstantDistance'] = processing.run(
'native:multiringconstantbuffer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(23)
if feedback.isCanceled():
return {}
# # Create RLR attribute index
# alg_params = {
# 'FIELD': 'Field1',
# 'INPUT': outputs['LoadRlrIntoProject']['OUTPUT']
# }
# outputs['CreateRlrAttributeIndex'] = processing.run(
# 'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
# feedback.setCurrentStep(25)
# if feedback.isCanceled():
# return {}
# Delete duplicates by attribute in Vertices
alg_params = {
'FIELDS': QgsExpression(" 'Ref_Col;EASTING_X;NORTHING_Y'").evaluate(),
'INPUT': outputs['RefactorVertices']['OUTPUT'],
'OUTPUT': QgsProcessing.TEMPORARY_OUTPUT
}
outputs['DeleteDuplicatesByAttributeInVertices'] = processing.run(
'native:removeduplicatesbyattribute', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(24)
if feedback.isCanceled():
return {}
# Multiring spatial index
alg_params = {
'INPUT': outputs['MultiringBufferConstantDistance']['OUTPUT']
}
outputs['MultiringSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(25)
if feedback.isCanceled():
return {}
# Clip
alg_params = {
'INPUT': outputs['BoundarySpatialIndex']['OUTPUT'],
'OUTPUT': project_folder + '/Clip.shp',
'OVERLAY': outputs['MultiringSpatialIndex']['OUTPUT'],
'OUTPUT': project_folder + '/Clip.shp'
}
outputs['Clip'] = processing.run(
'native:clip', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(26)
if feedback.isCanceled():
return {}
# Explode Lines spatial index
alg_params = {
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT']
}
outputs['ExplodeLinesSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(27)
if feedback.isCanceled():
return {}
# Prill Lines
alg_params = {
'INPUT': outputs['Clip']['OUTPUT'],
'OUTPUT': project_folder + '/Prill_Lines.shp',
'OUTPUT': project_folder + '/Prill_Lines.shp'
}
outputs['PrillLines'] = processing.run(
'native:explodelines', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(28)
if feedback.isCanceled():
return {}
# Add Point_ID field
alg_params = {
'FIELD_NAME': 'Point_ID',
'GROUP_FIELDS': ['Ref_Col'],
'INPUT': outputs['DeleteDuplicatesByAttributeInVertices']['OUTPUT'],
'MODULUS': 0,
'OUTPUT': project_folder + '/Final_Vertices.shp',
'SORT_ASCENDING': True,
'SORT_EXPRESSION': '',
'SORT_NULLS_FIRST': False,
'START': 1,
'OUTPUT': project_folder + '/Final_Vertices.shp'
}
outputs['AddPoint_idField'] = processing.run(
'native:addautoincrementalfield', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(29)
if feedback.isCanceled():
return {}
# Load Final_Vertices into project
alg_params = {
'INPUT': outputs['AddPoint_idField']['OUTPUT'],
'NAME': 'Final_Vertices'
}
outputs['LoadFinal_verticesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(30)
if feedback.isCanceled():
return {}
# Load Prill_Lines into project
alg_params = {
'INPUT': outputs['PrillLines']['OUTPUT'],
'NAME': 'Prill_Lines'
}
outputs['LoadPrill_linesIntoProject'] = processing.run(
'native:loadlayer', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(31)
if feedback.isCanceled():
return {}
# Create Final vertices spatial index
alg_params = {
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT']
}
outputs['CreateFinalVerticesSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(32)
if feedback.isCanceled():
return {}
# Create attribute index Final_Vertices
alg_params = {
'FIELD': 'Ref_Col',
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT']
}
outputs['CreateAttributeIndexFinal_vertices'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(33)
if feedback.isCanceled():
return {}
# Create Prill line spatial index
alg_params = {
'INPUT': outputs['LoadPrill_linesIntoProject']['OUTPUT']
}
outputs['CreatePrillLineSpatialIndex'] = processing.run(
'native:createspatialindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(34)
if feedback.isCanceled():
return {}
# Create explde line attribute index
alg_params = {
'FIELD': 'Ref_Col',
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT']
}
outputs['CreateExpldeLineAttributeIndex'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(35)
if feedback.isCanceled():
return {}
# Create polygon attribute index
alg_params = {
'FIELD': parameters['lpm_number_column'],
'INPUT': parameters['village_final_shape_file']
}
outputs['CreatePolygonAttributeIndex'] = processing.run(
'native:createattributeindex', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(36)
if feedback.isCanceled():
return {}
# Set Polygon style
alg_params = {
'INPUT': parameters['village_final_shape_file'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Polygon_Style.qml"
}
outputs['SetPolygonStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(37)
if feedback.isCanceled():
return {}
# Set Exploded Lines style
alg_params = {
'INPUT': outputs['LoadExploded_linesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Exploded_Lines_Style.qml"
}
outputs['SetExplodedLinesStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(38)
if feedback.isCanceled():
return {}
# Set Final Vertices style
alg_params = {
'INPUT': outputs['LoadFinal_verticesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Final_Vertices_Style.qml"
}
outputs['SetFinalVerticesStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
feedback.setCurrentStep(39)
if feedback.isCanceled():
return {}
# Set Prill Line style
alg_params = {
'INPUT': outputs['LoadPrill_linesIntoProject']['OUTPUT'],
'STYLE': os.path.dirname(__file__)+"/qmlfiles/Prill_lines_Style.qml"
}
outputs['SetPrillLineStyle'] = processing.run(
'native:setlayerstyle', alg_params, context=context, feedback=feedback, is_child_algorithm=True)
return results
def name(self):
return 'LPM Processing Model'
def displayName(self):
return 'LPM Processing Model'
def group(self):
return ''
def groupId(self):
return ''
def shortHelpString(self):
return """help"""
def createInstance(self):
return LpmProcessingModelAlgorithm()
| non_test | qgis was failed to create a layer in my project folder when i run the script two or more times getting the error as could not create layer d r lpms test modeltest modeltest boundary shp creation of data source failed ogr error d modeltest boundary shp is not a directory execution failed after seconds what is the issue about the documentation coding utf lpmprocessingmodel a qgis plugin description generated by plugin builder begin copyright c by surveyostories email surveyorstories gmail com this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version of the license or at your option any later version from multiprocessing spawn import import main path from logging import styles import inspect from qgis pyqt qtgui import qicon import processing from qgis core import qgsprocessingparametervectordestination from qgis core import qgsprocessingparameterfeaturesource from qgis core import qgsgeometry from qgis core import qgsattributeeditorcontainer from qgis core import qgsmaplayer from qgis core import qgsvectorfilewriter from qgis core import qgsfileutils from qgis core import qgsproject from qgis core import qgsfeature from qgis core import qgsfield from qgis core import qgsvectorlayer from qgis core import qgsexpressioncontext from qgis core import qgsexpression from qgis core import qgsprocessingparameterstring from qgis core import qgsprocessingparameterfile from qgis core import qgsprocessingparameterdistance from qgis core import qgsprocessingparameterfield from qgis core import qgsprocessingparametervectorlayer from qgis core import qgsprocessingmultistepfeedback from qgis core import qgsprocessingalgorithm from qgis core import qgsprocessing import os import sys author surveyostories date copyright c by surveyostories this will get replaced with a git when you do a git archive revision format h from qgis pyqt qtcore import qcoreapplication from qgis core import qgsprocessing qgsfeaturesink qgsprocessingalgorithm qgsprocessingparameterfeaturesource qgsprocessingparameterfeaturesink model exported as python name lpm processing model group with qgis get the path to the current project folder project folder qgsproject instance readpath class lpmprocessingmodelalgorithm qgsprocessingalgorithm def icon self cmd folder os path split inspect getfile inspect currentframe icon qicon os path join os path join cmd folder icon png return icon def flags self return super flags qgsprocessingalgorithm flagnothreading def initalgorithm self config none self addparameter qgsprocessingparametervectorlayer village final shape file village final shape file types defaultvalue none self addparameter qgsprocessingparameterfield lpm number column lpm number column type qgsprocessingparameterfield any parentlayerparametername village final shape file allowmultiple false defaultvalue none self addparameter qgsprocessingparameterfield area in acres column choose your acre column according to your shape file area in acres column choose your acre column according to your shape file type qgsprocessingparameterfield any parentlayerparametername village final shape file allowmultiple false defaultvalue none self addparameter qgsprocessingparameterfield area in hectares column choose your hectares column according to your shape file area in hectares column choose your hectares column according to your shape file type qgsprocessingparameterfield any parentlayerparametername village final shape file allowmultiple false defaultvalue none self addparameter qgsprocessingparameterdistance prill line length multi ring length prill line length multi ring length parentparametername village final shape file minvalue maxvalue defaultvalue self addparameter qgsprocessingparameterfile choose your rlrdlr resurvey land register choose your rlr dlr form resurvey land register behavior qgsprocessingparameterfile file filefilter all files xlsx defaultvalue c users lokes desktop install layer report xlsx self addparameter qgsprocessingparameterfile choose your ulpin layer choose your ulpin layer behavior qgsprocessingparameterfile file filefilter all files csv self addparameter qgsprocessingparameterstring please enter digit village code please enter digit village code multiline false defaultvalue code self addparameter qgsprocessingparameterstring village name please enter your village name in telugu village name please enter your village name in telugu multiline false defaultvalue vill name self addparameter qgsprocessingparameterstring mandal name please enter mandal name in telugu mandal name please enter mandal name in telugu multiline false defaultvalue mandal name self addparameter qgsprocessingparameterstring district name please enter district name in telugu district name please enter district name in telugu multiline false defaultvalue distri self addparameter qgsprocessingparameterstring village surveyor name village surveyor name multiline false defaultvalue vs name def processalgorithm self parameters context model feedback use a multi step feedback so that individual child algorithm progress reports are adjusted for the overall progress through the model get a reference to the current project project qgsproject instance set the names of the layers to remove layer names ulpin resurvey land register prill lines final vertices exploded lines boundary vertices clip multi ring loop through all the layers in the project for layer in project maplayers values check if the layer s name is in the list of names to remove if layer name in layer names remove the layer from the project project removemaplayer layer id feedback qgsprocessingmultistepfeedback model feedback results outputs extract rlr by expression alg params expression and r n extent r input parameters output qgsprocessing temporary output outputs processing run native extractbyexpression alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return village name alg params name village name value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return save ulpin layer to project folder alg params datasource options input parameters layer name layer options output project folder ulpin csv outputs processing run native savefeatures alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return polygon spatial index alg params input parameters outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return village surveyor name alg params name vs name value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return mandal name alg params name mandal name value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return village code alg params name village code value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return district name alg params name district name value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return lpm no alg params name lpm no value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return ref col calcluation alg params field length field name ref col field precision field type integer bit formula attribute lpm no input outputs output qgsprocessing temporary output outputs processing run native fieldcalculator alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return retain fields in rlr alg params fields input outputs output project folder rlr excel xlsx output project folder rlr excel xlsx outputs processing run native retainfields alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return boundary alg params input outputs output os path join project folder boundary shp overwrite true outputs processing run native boundary alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return load ulpin layer into project alg params input outputs name ulpin outputs processing run native loadlayer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return area in acres alg params name area in acres value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return area in hectares alg params name area in hectares value parameters outputs processing run native setprojectvariable alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return load rlr into project alg params input outputs name resurvey land register outputs processing run native loadlayer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return extract vertices alg params input outputs output project folder vertices shp output project folder vertices shp outputs processing run native extractvertices alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return refactor vertices alg params fields mapping input outputs output qgsprocessing temporary output outputs processing run native refactorfields alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return boundary spatial index alg params input outputs outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return explode lines alg params input outputs output qgsprocessing temporary output outputs processing run native explodelines alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return refactor exploded lines alg params fields mapping input outputs output project folder exploded lines shp output project folder exploded lines shp outputs processing run native refactorfields alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return load exploded lines into project alg params input outputs name exploded lines outputs processing run native loadlayer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create ulpin attribute index alg params field pniu input outputs outputs processing run native createattributeindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return multi ring buffer constant distance alg params distance parameters input outputs output project folder multi ring shp rings output project folder multi ring shp outputs processing run native multiringconstantbuffer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create rlr attribute index alg params field input outputs outputs processing run native createattributeindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return delete duplicates by attribute in vertices alg params fields qgsexpression ref col easting x northing y evaluate input outputs output qgsprocessing temporary output outputs processing run native removeduplicatesbyattribute alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return multiring spatial index alg params input outputs outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return clip alg params input outputs output project folder clip shp overlay outputs output project folder clip shp outputs processing run native clip alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return explode lines spatial index alg params input outputs outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return prill lines alg params input outputs output project folder prill lines shp output project folder prill lines shp outputs processing run native explodelines alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return add point id field alg params field name point id group fields input outputs modulus output project folder final vertices shp sort ascending true sort expression sort nulls first false start output project folder final vertices shp outputs processing run native addautoincrementalfield alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return load final vertices into project alg params input outputs name final vertices outputs processing run native loadlayer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return load prill lines into project alg params input outputs name prill lines outputs processing run native loadlayer alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create final vertices spatial index alg params input outputs outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create attribute index final vertices alg params field ref col input outputs outputs processing run native createattributeindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create prill line spatial index alg params input outputs outputs processing run native createspatialindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create explde line attribute index alg params field ref col input outputs outputs processing run native createattributeindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return create polygon attribute index alg params field parameters input parameters outputs processing run native createattributeindex alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return set polygon style alg params input parameters style os path dirname file qmlfiles polygon style qml outputs processing run native setlayerstyle alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return set exploded lines style alg params input outputs style os path dirname file qmlfiles exploded lines style qml outputs processing run native setlayerstyle alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return set final vertices style alg params input outputs style os path dirname file qmlfiles final vertices style qml outputs processing run native setlayerstyle alg params context context feedback feedback is child algorithm true feedback setcurrentstep if feedback iscanceled return set prill line style alg params input outputs style os path dirname file qmlfiles prill lines style qml outputs processing run native setlayerstyle alg params context context feedback feedback is child algorithm true return results def name self return lpm processing model def displayname self return lpm processing model def group self return def groupid self return def shorthelpstring self return help def createinstance self return lpmprocessingmodelalgorithm | 0 |
54,407 | 13,907,239,039 | IssuesEvent | 2020-10-20 12:21:41 | jgeraigery/agile-service-manager-gis-map-plugin | https://api.github.com/repos/jgeraigery/agile-service-manager-gis-map-plugin | opened | CVE-2019-11358 (Medium) detected in jquery-3.3.1.js | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: agile-service-manager-gis-map-plugin/node_modules/leaflet-search/examples/ajax-jquery.html</p>
<p>Path to vulnerable library: agile-service-manager-gis-map-plugin/node_modules/leaflet-search/examples/ajax-jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/agile-service-manager-gis-map-plugin/commit/b7b9c30bb3a86c5875e3c92e26ef70760dbfbd12">b7b9c30bb3a86c5875e3c92e26ef70760dbfbd12</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-11358 (Medium) detected in jquery-3.3.1.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-3.3.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.js</a></p>
<p>Path to dependency file: agile-service-manager-gis-map-plugin/node_modules/leaflet-search/examples/ajax-jquery.html</p>
<p>Path to vulnerable library: agile-service-manager-gis-map-plugin/node_modules/leaflet-search/examples/ajax-jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.3.1.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/jgeraigery/agile-service-manager-gis-map-plugin/commit/b7b9c30bb3a86c5875e3c92e26ef70760dbfbd12">b7b9c30bb3a86c5875e3c92e26ef70760dbfbd12</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"3.3.1","isTransitiveDependency":false,"dependencyTree":"jquery:3.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"3.4.0"}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in jquery js cve medium severity vulnerability vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file agile service manager gis map plugin node modules leaflet search examples ajax jquery html path to vulnerable library agile service manager gis map plugin node modules leaflet search examples ajax jquery html dependency hierarchy x jquery js vulnerable library found in head commit a href found in base branch develop vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype vulnerabilityurl | 0 |
161,072 | 12,530,304,971 | IssuesEvent | 2020-06-04 12:52:17 | sebuaa2020/Team208 | https://api.github.com/repos/sebuaa2020/Team208 | closed | Test-navi | test | 1. 运动信息重定向函数`trans`测试:
- 函数功能:将导航模块确定的控制机器人运动信息导入自定义的`move`模块进行统一处理
- 输入:`Twist.msg`类型数据
- 输出:`Movemsg.msg`类型
- 保证:
1. 二者之间的转化正确
2. 输出的`x,y <= 0.8`
3. 输出的`z <= 0.7`
2.`/robot2077/robot_move/vel`主题:
数据要求
数据格式为上述`Twist.msg`格式
1. `linear.z` = 0
2. `angular.x = 0, angular.y =0`
3. 该部分主要通过用户运行导航模块操作测试:
要测试:
* 导航的地图是否可以正常加载
* 导航的目的点是否可以正常设置
* 导航的路径是否可以正确规划
* 导航过程中是否可以主动避障并重新规划路径 | 1.0 | Test-navi - 1. 运动信息重定向函数`trans`测试:
- 函数功能:将导航模块确定的控制机器人运动信息导入自定义的`move`模块进行统一处理
- 输入:`Twist.msg`类型数据
- 输出:`Movemsg.msg`类型
- 保证:
1. 二者之间的转化正确
2. 输出的`x,y <= 0.8`
3. 输出的`z <= 0.7`
2.`/robot2077/robot_move/vel`主题:
数据要求
数据格式为上述`Twist.msg`格式
1. `linear.z` = 0
2. `angular.x = 0, angular.y =0`
3. 该部分主要通过用户运行导航模块操作测试:
要测试:
* 导航的地图是否可以正常加载
* 导航的目的点是否可以正常设置
* 导航的路径是否可以正确规划
* 导航过程中是否可以主动避障并重新规划路径 | test | test navi 运动信息重定向函数 trans 测试: 函数功能:将导航模块确定的控制机器人运动信息导入自定义的 move 模块进行统一处理 输入: twist msg 类型数据 输出: movemsg msg 类型 保证: 二者之间的转化正确 输出的 x y 输出的 z robot move vel 主题: 数据要求 数据格式为上述 twist msg 格式 linear z angular x angular y 该部分主要通过用户运行导航模块操作测试: 要测试: 导航的地图是否可以正常加载 导航的目的点是否可以正常设置 导航的路径是否可以正确规划 导航过程中是否可以主动避障并重新规划路径 | 1 |
13,325 | 5,341,152,829 | IssuesEvent | 2017-02-17 01:37:13 | blackbaud/skyux-cli | https://api.github.com/repos/blackbaud/skyux-cli | closed | Evaluate sky-pages-cli footprint | builder | Evaluate https://github.com/facebookincubator/create-react-app/blob/master/packages/create-react-app/index.js for inspiration to minimize footprint (and need to update) sky-pages-cli.
| 1.0 | Evaluate sky-pages-cli footprint - Evaluate https://github.com/facebookincubator/create-react-app/blob/master/packages/create-react-app/index.js for inspiration to minimize footprint (and need to update) sky-pages-cli.
| non_test | evaluate sky pages cli footprint evaluate for inspiration to minimize footprint and need to update sky pages cli | 0 |
276,160 | 23,970,717,932 | IssuesEvent | 2022-09-13 07:30:49 | paritytech/polkadot | https://api.github.com/repos/paritytech/polkadot | closed | Integration test suite | F5-task F4-tests F8-enhancement T1-Parachains-Engineering | * [x] Integrate `ZombieNet` https://github.com/paritytech/polkadot/pull/4131
* [x] Expand test spec to provide support for
* [x] RPC queries as asserts in test spec
* [ ] `zipkin` `Span`
* [x] Add backchannel support of `scale` encoded messages for _coordinate_ or _data dependent_ malicious setups
* [x] Introduce malicious test cases as outline #4131
* [ ] More malicious test cases https://github.com/paritytech/polkadot/issues/4776
* [x] Implement crate to use with the backchannel on polkadot (in `/node` dir in the git tree) https://github.com/paritytech/polkadot/pull/4377
* [ ] Assure with devops the individual cluster machines have sufficient power until this can be checked with #4196 | 1.0 | Integration test suite - * [x] Integrate `ZombieNet` https://github.com/paritytech/polkadot/pull/4131
* [x] Expand test spec to provide support for
* [x] RPC queries as asserts in test spec
* [ ] `zipkin` `Span`
* [x] Add backchannel support of `scale` encoded messages for _coordinate_ or _data dependent_ malicious setups
* [x] Introduce malicious test cases as outline #4131
* [ ] More malicious test cases https://github.com/paritytech/polkadot/issues/4776
* [x] Implement crate to use with the backchannel on polkadot (in `/node` dir in the git tree) https://github.com/paritytech/polkadot/pull/4377
* [ ] Assure with devops the individual cluster machines have sufficient power until this can be checked with #4196 | test | integration test suite integrate zombienet expand test spec to provide support for rpc queries as asserts in test spec zipkin span add backchannel support of scale encoded messages for coordinate or data dependent malicious setups introduce malicious test cases as outline more malicious test cases implement crate to use with the backchannel on polkadot in node dir in the git tree assure with devops the individual cluster machines have sufficient power until this can be checked with | 1 |
60,811 | 6,715,882,548 | IssuesEvent | 2017-10-14 00:00:58 | phetsims/QA | https://api.github.com/repos/phetsims/QA | opened | RC Spot Check Forces and Motion Basics 2.3.0-phetiorc.4 | QA:phet-io QA:rc-test | @kathy-phet @ariel-phet @zepumph @samreid Forces and Motion: Basics 2.3.0-phetiorc.4 is ready for RC testing. There were a few changes from the last RC, but all were small and "cosmetic" in nature for PhET-iO. A full RC is not necessary, but we should spot check the sim on a few platforms and verify that these issues were fixed.
This RC is to support adding Legends of Learning features for Forces and Motion: Basics. But we wanted the improvements in the PhET-iO version as well so new versions are going out for both brands. If this version passes testing, a PhET brand 2.3 version will go out with the same SHAs. Here is the link to index.
[Root Directory](https://www.colorado.edu/physics/phet/dev/html/forces-and-motion-basics/2.3.0-phetiorc.4/wrappers/)
Please verify that the following issues were fixed:
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/248
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/245
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/244
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/242
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/240
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/237
If any new issues are found, please note them in phetsims/forces-and-motion-basics/issues and reference this issue.
@ariel-phet could you please prioritize this issue? Also assigning to @phet-steele. | 1.0 | RC Spot Check Forces and Motion Basics 2.3.0-phetiorc.4 - @kathy-phet @ariel-phet @zepumph @samreid Forces and Motion: Basics 2.3.0-phetiorc.4 is ready for RC testing. There were a few changes from the last RC, but all were small and "cosmetic" in nature for PhET-iO. A full RC is not necessary, but we should spot check the sim on a few platforms and verify that these issues were fixed.
This RC is to support adding Legends of Learning features for Forces and Motion: Basics. But we wanted the improvements in the PhET-iO version as well so new versions are going out for both brands. If this version passes testing, a PhET brand 2.3 version will go out with the same SHAs. Here is the link to index.
[Root Directory](https://www.colorado.edu/physics/phet/dev/html/forces-and-motion-basics/2.3.0-phetiorc.4/wrappers/)
Please verify that the following issues were fixed:
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/248
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/245
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/244
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/242
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/240
- [ ] https://github.com/phetsims/forces-and-motion-basics/issues/237
If any new issues are found, please note them in phetsims/forces-and-motion-basics/issues and reference this issue.
@ariel-phet could you please prioritize this issue? Also assigning to @phet-steele. | test | rc spot check forces and motion basics phetiorc kathy phet ariel phet zepumph samreid forces and motion basics phetiorc is ready for rc testing there were a few changes from the last rc but all were small and cosmetic in nature for phet io a full rc is not necessary but we should spot check the sim on a few platforms and verify that these issues were fixed this rc is to support adding legends of learning features for forces and motion basics but we wanted the improvements in the phet io version as well so new versions are going out for both brands if this version passes testing a phet brand version will go out with the same shas here is the link to index please verify that the following issues were fixed if any new issues are found please note them in phetsims forces and motion basics issues and reference this issue ariel phet could you please prioritize this issue also assigning to phet steele | 1 |
233,304 | 18,959,484,969 | IssuesEvent | 2021-11-19 01:39:38 | tracer-protocol/perpetual-pools-contracts | https://api.github.com/repos/tracer-protocol/perpetual-pools-contracts | opened | Write up enforced structure of test suite | triage tests | - Currently the test suit is disorganised.
- We need to settle on a rigorous way to organise and structure the test.
- We then need to refactor tests
- We then need to collectively enforce it | 1.0 | Write up enforced structure of test suite - - Currently the test suit is disorganised.
- We need to settle on a rigorous way to organise and structure the test.
- We then need to refactor tests
- We then need to collectively enforce it | test | write up enforced structure of test suite currently the test suit is disorganised we need to settle on a rigorous way to organise and structure the test we then need to refactor tests we then need to collectively enforce it | 1 |
291,184 | 25,128,301,585 | IssuesEvent | 2022-11-09 13:26:27 | rancher/dashboard | https://api.github.com/repos/rancher/dashboard | opened | Add unit tests for Namespace filtering in resources | area/unit-test | # Description
Ensure namespace filtering from the top bar to apply changes.
## Context
Part of the group: https://github.com/rancher/dashboard/issues/6613
* When a user selects a namespace in the top nav of cluster explorer, then goes to the list view of a namespaced resource such as Deployments, confirm that only resources in the selected namespaces are listed
* After refreshing the page, the namespace filter is still persisted
## Required tests
- Assure the selection of the namespace from the top bar to be applied in the store or resource (tbd source file)
- Given a faux resource list from the endpoint, render based on filtered values from the store (example case on Development)
- Ensure localstorage saving and loading capabilities (tbd source file) | 1.0 | Add unit tests for Namespace filtering in resources - # Description
Ensure namespace filtering from the top bar to apply changes.
## Context
Part of the group: https://github.com/rancher/dashboard/issues/6613
* When a user selects a namespace in the top nav of cluster explorer, then goes to the list view of a namespaced resource such as Deployments, confirm that only resources in the selected namespaces are listed
* After refreshing the page, the namespace filter is still persisted
## Required tests
- Assure the selection of the namespace from the top bar to be applied in the store or resource (tbd source file)
- Given a faux resource list from the endpoint, render based on filtered values from the store (example case on Development)
- Ensure localstorage saving and loading capabilities (tbd source file) | test | add unit tests for namespace filtering in resources description ensure namespace filtering from the top bar to apply changes context part of the group when a user selects a namespace in the top nav of cluster explorer then goes to the list view of a namespaced resource such as deployments confirm that only resources in the selected namespaces are listed after refreshing the page the namespace filter is still persisted required tests assure the selection of the namespace from the top bar to be applied in the store or resource tbd source file given a faux resource list from the endpoint render based on filtered values from the store example case on development ensure localstorage saving and loading capabilities tbd source file | 1 |
779,517 | 27,355,710,278 | IssuesEvent | 2023-02-27 12:43:40 | bats-core/bats-core | https://api.github.com/repos/bats-core/bats-core | opened | Improve documentation how to use the docker image to run tests with bats | Type: Enhancement Priority: NeedsTriage | The current documentation does not provide enough information so that users could just start working with `bats-core` using a container runtime instead of a location installation.
This issue is related to #150 (which is quite old and this I created a new ticket since the image has probably changes since 2018).
**My goal** was to create a test suite for my bash program and execute it with the docker image provided by `bats-core`.
Since I have no experience with `bats` I started the tutorial but instead of having a local installation with `git submodule` I run the test/tutorial with
```
docker run -it -v "${PWD}:/code" bats/bats:latest tes
```
I could follow the tutorial until the first helper module was loaded via `load 'test_helper/bats-support/load` where I got the following error.
```
➜ docker run -it -v "${PWD}:/code" bats/bats:latest test
test.bats
✗ calling without arguments shows usage
(from function `setup' in test file test/test.bats, line 1)
`setup() {' failed
bats_load_safe: Could not find '/code/test/test_helper/bats-support/load'[.bash]
1 test, 1 failure
```
**Solution**
The solution to my problem was simple, but it toke me a quite a while to figure it out.
The reason for this error was the location in which the helper libraries are installed in the container image. `bats` itself is installed in `/opt/bats`, the helper libraries in `/usr/lib/bats`.
I had to create a symbolic link in the test directory, which would provide me tha access
```
ln -sf /usr/lib/bats test/test_helper
```
It would be nice to improve the tutorial so other users have a better time to get started with `bats`. | 1.0 | Improve documentation how to use the docker image to run tests with bats - The current documentation does not provide enough information so that users could just start working with `bats-core` using a container runtime instead of a location installation.
This issue is related to #150 (which is quite old and this I created a new ticket since the image has probably changes since 2018).
**My goal** was to create a test suite for my bash program and execute it with the docker image provided by `bats-core`.
Since I have no experience with `bats` I started the tutorial but instead of having a local installation with `git submodule` I run the test/tutorial with
```
docker run -it -v "${PWD}:/code" bats/bats:latest tes
```
I could follow the tutorial until the first helper module was loaded via `load 'test_helper/bats-support/load` where I got the following error.
```
➜ docker run -it -v "${PWD}:/code" bats/bats:latest test
test.bats
✗ calling without arguments shows usage
(from function `setup' in test file test/test.bats, line 1)
`setup() {' failed
bats_load_safe: Could not find '/code/test/test_helper/bats-support/load'[.bash]
1 test, 1 failure
```
**Solution**
The solution to my problem was simple, but it toke me a quite a while to figure it out.
The reason for this error was the location in which the helper libraries are installed in the container image. `bats` itself is installed in `/opt/bats`, the helper libraries in `/usr/lib/bats`.
I had to create a symbolic link in the test directory, which would provide me tha access
```
ln -sf /usr/lib/bats test/test_helper
```
It would be nice to improve the tutorial so other users have a better time to get started with `bats`. | non_test | improve documentation how to use the docker image to run tests with bats the current documentation does not provide enough information so that users could just start working with bats core using a container runtime instead of a location installation this issue is related to which is quite old and this i created a new ticket since the image has probably changes since my goal was to create a test suite for my bash program and execute it with the docker image provided by bats core since i have no experience with bats i started the tutorial but instead of having a local installation with git submodule i run the test tutorial with docker run it v pwd code bats bats latest tes i could follow the tutorial until the first helper module was loaded via load test helper bats support load where i got the following error ➜ docker run it v pwd code bats bats latest test test bats ✗ calling without arguments shows usage from function setup in test file test test bats line setup failed bats load safe could not find code test test helper bats support load test failure solution the solution to my problem was simple but it toke me a quite a while to figure it out the reason for this error was the location in which the helper libraries are installed in the container image bats itself is installed in opt bats the helper libraries in usr lib bats i had to create a symbolic link in the test directory which would provide me tha access ln sf usr lib bats test test helper it would be nice to improve the tutorial so other users have a better time to get started with bats | 0 |
83,278 | 7,867,928,544 | IssuesEvent | 2018-06-23 15:06:27 | jbeard4/SCION | https://api.github.com/repos/jbeard4/SCION | closed | fail test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml | 2.0.0 Node v0.10.24 Tests component:runtime fail feature:send | [https://github.com/jbeard4/scxml-test-framework/tree/master/test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml](https://github.com/jbeard4/scxml-test-framework/tree/master/test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml)
**Error** <code><pre>{
"name": "AssertionError",
"actual": [
"fail"
],
"expected": [
"pass"
],
"operator": "deepEqual",
"message": "[\"fail\"] deepEqual [\"pass\"]"
}</code></pre>
**Data**: <code><pre>{
"sessionToken": 554,
"nextConfiguration": [
"fail"
]
}</code></pre>
**scxml**:
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- we test that <send> stores the value of the sendid in idlocation. If it does,
var1 has a value and we pass. Otherwise we fail -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" xmlns:conf="http://www.w3.org/2005/scxml-conformance" initial="s0" datamodel="ecmascript" version="1.0">
<datamodel>
<data id="Var1"/>
</datamodel>
<state id="s0">
<onentry>
<send event="event1" idlocation="Var1"/>
</onentry>
<transition cond="Var1" target="pass"/>
<transition target="fail"/>
</state>
<final id="pass"><onentry><log label="Outcome" expr="'pass'"/></onentry></final>
<final id="fail"><onentry><log label="Outcome" expr="'fail'"/></onentry></final>
</scxml>
```
**JSON**: <code><pre>{
"initialConfiguration": [
"pass"
],
"events": []
}</code></pre>
| 1.0 | fail test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml - [https://github.com/jbeard4/scxml-test-framework/tree/master/test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml](https://github.com/jbeard4/scxml-test-framework/tree/master/test/scxml-test-framework/test/w3c-ecma/test183.txml.scxml)
**Error** <code><pre>{
"name": "AssertionError",
"actual": [
"fail"
],
"expected": [
"pass"
],
"operator": "deepEqual",
"message": "[\"fail\"] deepEqual [\"pass\"]"
}</code></pre>
**Data**: <code><pre>{
"sessionToken": 554,
"nextConfiguration": [
"fail"
]
}</code></pre>
**scxml**:
``` xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- we test that <send> stores the value of the sendid in idlocation. If it does,
var1 has a value and we pass. Otherwise we fail -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" xmlns:conf="http://www.w3.org/2005/scxml-conformance" initial="s0" datamodel="ecmascript" version="1.0">
<datamodel>
<data id="Var1"/>
</datamodel>
<state id="s0">
<onentry>
<send event="event1" idlocation="Var1"/>
</onentry>
<transition cond="Var1" target="pass"/>
<transition target="fail"/>
</state>
<final id="pass"><onentry><log label="Outcome" expr="'pass'"/></onentry></final>
<final id="fail"><onentry><log label="Outcome" expr="'fail'"/></onentry></final>
</scxml>
```
**JSON**: <code><pre>{
"initialConfiguration": [
"pass"
],
"events": []
}</code></pre>
| test | fail test scxml test framework test ecma txml scxml error name assertionerror actual fail expected pass operator deepequal message deepequal data sessiontoken nextconfiguration fail scxml xml stores the value of the sendid in idlocation if it does has a value and we pass otherwise we fail json initialconfiguration pass events | 1 |
296,243 | 22,293,058,734 | IssuesEvent | 2022-06-12 16:48:56 | syl3n7/CatchMeIfYouCan | https://api.github.com/repos/syl3n7/CatchMeIfYouCan | closed | Level3 | documentation | Level 2 design by @margaridaexe
@Mariana041 is responsible for maintaining coherence between levels. | 1.0 | Level3 - Level 2 design by @margaridaexe
@Mariana041 is responsible for maintaining coherence between levels. | non_test | level design by margaridaexe is responsible for maintaining coherence between levels | 0 |
638,781 | 20,738,235,283 | IssuesEvent | 2022-03-14 15:25:22 | rothlisbergerc/GroomingSalonWebsite | https://api.github.com/repos/rothlisbergerc/GroomingSalonWebsite | closed | Find a different way to implement the blur effect | front end low priority | The current blur function in the site.css file applies to all containers and nav-containers. When the site is opened on a browser with hardware acceloration turned off, the blur function does not work properly and makes text hard to read.
Try and find an alternative way of producing the blur effect using other means, and then implement it.
| 1.0 | Find a different way to implement the blur effect - The current blur function in the site.css file applies to all containers and nav-containers. When the site is opened on a browser with hardware acceloration turned off, the blur function does not work properly and makes text hard to read.
Try and find an alternative way of producing the blur effect using other means, and then implement it.
| non_test | find a different way to implement the blur effect the current blur function in the site css file applies to all containers and nav containers when the site is opened on a browser with hardware acceloration turned off the blur function does not work properly and makes text hard to read try and find an alternative way of producing the blur effect using other means and then implement it | 0 |
381,963 | 11,298,799,750 | IssuesEvent | 2020-01-17 09:48:45 | lupino3/edumips64 | https://api.github.com/repos/lupino3/edumips64 | opened | Make `DATA_LIMIT` customizable via UI | component:core component:swing-ui priority:2 type:feature-request | After the investigation in #259, if it makes sense we should let users customize `DATA_LIMIT` from the UI.
This will also require setting an upper bound and fixing the documentation. | 1.0 | Make `DATA_LIMIT` customizable via UI - After the investigation in #259, if it makes sense we should let users customize `DATA_LIMIT` from the UI.
This will also require setting an upper bound and fixing the documentation. | non_test | make data limit customizable via ui after the investigation in if it makes sense we should let users customize data limit from the ui this will also require setting an upper bound and fixing the documentation | 0 |
287,255 | 24,818,910,200 | IssuesEvent | 2022-10-25 15:00:19 | UCL/TDMS | https://api.github.com/repos/UCL/TDMS | closed | New FDTD tests | testing priority:1 | Here are tests for the FDTD functionality:
arc_13: 3D
arc_12: 2D
Both have been developed so as to be fast tests rather than physically meaningful.
In both cases, the matlab script run_fdtd_bscan creates the test output.
Both tests can be downloaded from UCL dropbox with the following information:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1. arc_12.zip:
You will need the claim passcode '77df7451'
to retrieve the file(s).
pick-up URL: https://wwwapps-live.ucl.ac.uk/cgi-bin/dropbox/dropbox-https.cgi?state=pickup_info&id=36265c34
claim id: 36265c34
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1. arc_13.zip:
You will need the claim passcode '44c99120'
to retrieve the file(s).
pick-up URL: https://wwwapps-live.ucl.ac.uk/cgi-bin/dropbox/dropbox-https.cgi?state=pickup_info&id=e2740b4
claim id: e2740b4
| 1.0 | New FDTD tests - Here are tests for the FDTD functionality:
arc_13: 3D
arc_12: 2D
Both have been developed so as to be fast tests rather than physically meaningful.
In both cases, the matlab script run_fdtd_bscan creates the test output.
Both tests can be downloaded from UCL dropbox with the following information:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1. arc_12.zip:
You will need the claim passcode '77df7451'
to retrieve the file(s).
pick-up URL: https://wwwapps-live.ucl.ac.uk/cgi-bin/dropbox/dropbox-https.cgi?state=pickup_info&id=36265c34
claim id: 36265c34
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
1. arc_13.zip:
You will need the claim passcode '44c99120'
to retrieve the file(s).
pick-up URL: https://wwwapps-live.ucl.ac.uk/cgi-bin/dropbox/dropbox-https.cgi?state=pickup_info&id=e2740b4
claim id: e2740b4
| test | new fdtd tests here are tests for the fdtd functionality arc arc both have been developed so as to be fast tests rather than physically meaningful in both cases the matlab script run fdtd bscan creates the test output both tests can be downloaded from ucl dropbox with the following information arc zip you will need the claim passcode to retrieve the file s pick up url claim id arc zip you will need the claim passcode to retrieve the file s pick up url claim id | 1 |
173,940 | 13,450,160,473 | IssuesEvent | 2020-09-08 18:04:08 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: remove imports from jobs/mixed-versions | O-roachtest O-robot | [(roachtest).jobs/mixed-versions failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2032053&tab=buildLog) on [provisional_202006230817_v20.1.3@7fd454f880f386cdd0eda6b21b12f6532c14f0db](https://github.com/cockroachdb/cockroach/commits/7fd454f880f386cdd0eda6b21b12f6532c14f0db):
```
The test failed on branch=provisional_202006230817_v20.1.3, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/jobs/mixed-versions/run_1
mixed_version_jobs.go:198,versionupgrade.go:167,mixed_version_jobs.go:296,mixed_version_jobs.go:320,test_runner.go:753: Cluster info
Node 1: 19.2
Node 2: 19.2
Node 3: 19.2
Node 4: 19.2
Unsuccessful job 566359205752569857 of type IMPORT, description IMPORT TABLE tpcc.public.history (rowid UUID NOT NULL DEFAULT gen_random_uuid(), h_c_id INT8 NOT NULL, h_c_d_id INT8 NOT NULL, h_c_w_id INT8 NOT NULL, h_d_id INT8 NOT NULL, h_w_id INT8 NOT NULL, h_date TIMESTAMP, h_amount DECIMAL(6,2), h_data VARCHAR(24), PRIMARY KEY (h_w_id, rowid), INDEX history_customer_fk_idx (h_c_w_id, h_c_d_id, h_c_id), INDEX history_district_fk_idx (h_w_id, h_d_id)) CSV DATA ('workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=1500000&row-start=0&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=3000000&row-start=1500000&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=4500000&row-start=3000000&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=6000000&row-start=4500000&seed=1&version=2.1.0&warehouses=200') WITH "nullif" = 'NULL', status failed, error cannot update progress on pause-requested job (id 566359205752569857), coordinator 2
```
<details><summary>More</summary><p>
Artifacts: [/jobs/mixed-versions](https://teamcity.cockroachdb.com/viewLog.html?buildId=2032053&tab=artifacts#/jobs/mixed-versions)
Related:
- #50026 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006032224_v20.2.0-alpha.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006032224_v20.2.0-alpha.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49281 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005191400_v20.1.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005191400_v20.1.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49233 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005182011_v20.1.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005182011_v20.1.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48407 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005041945_v19.1.9](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005041945_v19.1.9) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48315 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48194 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48193 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ajobs%2Fmixed-versions.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 1.0 | roachtest: remove imports from jobs/mixed-versions - [(roachtest).jobs/mixed-versions failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2032053&tab=buildLog) on [provisional_202006230817_v20.1.3@7fd454f880f386cdd0eda6b21b12f6532c14f0db](https://github.com/cockroachdb/cockroach/commits/7fd454f880f386cdd0eda6b21b12f6532c14f0db):
```
The test failed on branch=provisional_202006230817_v20.1.3, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/jobs/mixed-versions/run_1
mixed_version_jobs.go:198,versionupgrade.go:167,mixed_version_jobs.go:296,mixed_version_jobs.go:320,test_runner.go:753: Cluster info
Node 1: 19.2
Node 2: 19.2
Node 3: 19.2
Node 4: 19.2
Unsuccessful job 566359205752569857 of type IMPORT, description IMPORT TABLE tpcc.public.history (rowid UUID NOT NULL DEFAULT gen_random_uuid(), h_c_id INT8 NOT NULL, h_c_d_id INT8 NOT NULL, h_c_w_id INT8 NOT NULL, h_d_id INT8 NOT NULL, h_w_id INT8 NOT NULL, h_date TIMESTAMP, h_amount DECIMAL(6,2), h_data VARCHAR(24), PRIMARY KEY (h_w_id, rowid), INDEX history_customer_fk_idx (h_c_w_id, h_c_d_id, h_c_id), INDEX history_district_fk_idx (h_w_id, h_d_id)) CSV DATA ('workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=1500000&row-start=0&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=3000000&row-start=1500000&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=4500000&row-start=3000000&seed=1&version=2.1.0&warehouses=200', 'workload:///csv/tpcc/history?fks=true&interleaved=false&row-end=6000000&row-start=4500000&seed=1&version=2.1.0&warehouses=200') WITH "nullif" = 'NULL', status failed, error cannot update progress on pause-requested job (id 566359205752569857), coordinator 2
```
<details><summary>More</summary><p>
Artifacts: [/jobs/mixed-versions](https://teamcity.cockroachdb.com/viewLog.html?buildId=2032053&tab=artifacts#/jobs/mixed-versions)
Related:
- #50026 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202006032224_v20.2.0-alpha.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202006032224_v20.2.0-alpha.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49281 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005191400_v20.1.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005191400_v20.1.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #49233 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005182011_v20.1.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005182011_v20.1.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48407 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202005041945_v19.1.9](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202005041945_v19.1.9) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48315 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48194 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48193 roachtest: jobs/mixed-versions failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-19.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-19.1) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Ajobs%2Fmixed-versions.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest remove imports from jobs mixed versions on the test failed on branch provisional cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts jobs mixed versions run mixed version jobs go versionupgrade go mixed version jobs go mixed version jobs go test runner go cluster info node node node node unsuccessful job of type import description import table tpcc public history rowid uuid not null default gen random uuid h c id not null h c d id not null h c w id not null h d id not null h w id not null h date timestamp h amount decimal h data varchar primary key h w id rowid index history customer fk idx h c w id h c d id h c id index history district fk idx h w id h d id csv data workload csv tpcc history fks true interleaved false row end row start seed version warehouses workload csv tpcc history fks true interleaved false row end row start seed version warehouses workload csv tpcc history fks true interleaved false row end row start seed version warehouses workload csv tpcc history fks true interleaved false row end row start seed version warehouses with nullif null status failed error cannot update progress on pause requested job id coordinator more artifacts related roachtest jobs mixed versions failed roachtest jobs mixed versions failed roachtest jobs mixed versions failed roachtest jobs mixed versions failed roachtest jobs mixed versions failed roachtest jobs mixed versions failed roachtest jobs mixed versions failed powered by | 1 |
46,078 | 7,236,115,249 | IssuesEvent | 2018-02-13 04:57:15 | druid-io/druid | https://api.github.com/repos/druid-io/druid | opened | Update Kafka docs for 0.12.0 | Area - Documentation Area - Kafka Indexing | After #4815 the Kafka indexing service changed how it generates segments, in a way that invalidates the "On the Subject of Segments" section of kafka-indexing.md. This is great since the new way is better! But, the docs need to be rewritten.
/cc @pjain1 @dclim | 1.0 | Update Kafka docs for 0.12.0 - After #4815 the Kafka indexing service changed how it generates segments, in a way that invalidates the "On the Subject of Segments" section of kafka-indexing.md. This is great since the new way is better! But, the docs need to be rewritten.
/cc @pjain1 @dclim | non_test | update kafka docs for after the kafka indexing service changed how it generates segments in a way that invalidates the on the subject of segments section of kafka indexing md this is great since the new way is better but the docs need to be rewritten cc dclim | 0 |
229,541 | 18,362,817,932 | IssuesEvent | 2021-10-09 14:23:45 | ColoredCow/portal | https://api.github.com/repos/ColoredCow/portal | closed | After adding a new tag name is not visible | status : ready to test module : hr Points: 2 | **Describe the bug**
After adding a new tag name is not visible in the manage tag dropdown.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'UAT'
2. Click on `HR>Manage Tags`
3. Add new Tag
4. After adding a new tag name is not visible in the list.
**Expected behavior**
The name of the new tag should be visible.
**Screenshots**
If applicable, add screenshots to help explain your problem.

| 1.0 | After adding a new tag name is not visible - **Describe the bug**
After adding a new tag name is not visible in the manage tag dropdown.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'UAT'
2. Click on `HR>Manage Tags`
3. Add new Tag
4. After adding a new tag name is not visible in the list.
**Expected behavior**
The name of the new tag should be visible.
**Screenshots**
If applicable, add screenshots to help explain your problem.

| test | after adding a new tag name is not visible describe the bug after adding a new tag name is not visible in the manage tag dropdown to reproduce steps to reproduce the behavior go to uat click on hr manage tags add new tag after adding a new tag name is not visible in the list expected behavior the name of the new tag should be visible screenshots if applicable add screenshots to help explain your problem | 1 |
217,315 | 24,326,525,189 | IssuesEvent | 2022-09-30 15:13:37 | GiantTreeLP/w2g-bot | https://api.github.com/repos/GiantTreeLP/w2g-bot | reopened | CVE-2022-38752 (Medium) detected in snakeyaml-1.30.jar | security vulnerability | ## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.30.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p>
<p>Path to dependency file: /build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p>
<p>
Dependency Hierarchy:
- detekt-cli-1.21.0.jar (Root Library)
- detekt-core-1.21.0.jar
- :x: **snakeyaml-1.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-9w3m-gqgf-c4p9">https://github.com/advisories/GHSA-9w3m-gqgf-c4p9</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.32
</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-38752 (Medium) detected in snakeyaml-1.30.jar - ## CVE-2022-38752 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.30.jar</b></p></summary>
<p>YAML 1.1 parser and emitter for Java</p>
<p>Library home page: <a href="https://bitbucket.org/snakeyaml/snakeyaml">https://bitbucket.org/snakeyaml/snakeyaml</a></p>
<p>Path to dependency file: /build.gradle.kts</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.yaml/snakeyaml/1.30/8fde7fe2586328ac3c68db92045e1c8759125000/snakeyaml-1.30.jar</p>
<p>
Dependency Hierarchy:
- detekt-cli-1.21.0.jar (Root Library)
- detekt-core-1.21.0.jar
- :x: **snakeyaml-1.30.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Using snakeYAML to parse untrusted YAML files may be vulnerable to Denial of Service attacks (DOS). If the parser is running on user supplied input, an attacker may supply content that causes the parser to crash by stack-overflow.
<p>Publish Date: 2022-09-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-38752>CVE-2022-38752</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-9w3m-gqgf-c4p9">https://github.com/advisories/GHSA-9w3m-gqgf-c4p9</a></p>
<p>Release Date: 2022-09-05</p>
<p>Fix Resolution: org.yaml:snakeyaml:1.32
</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in snakeyaml jar cve medium severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file build gradle kts path to vulnerable library home wss scanner gradle caches modules files org yaml snakeyaml snakeyaml jar dependency hierarchy detekt cli jar root library detekt core jar x snakeyaml jar vulnerable library found in base branch main vulnerability details using snakeyaml to parse untrusted yaml files may be vulnerable to denial of service attacks dos if the parser is running on user supplied input an attacker may supply content that causes the parser to crash by stack overflow publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml step up your open source security game with mend | 0 |
168,841 | 13,104,144,279 | IssuesEvent | 2020-08-04 09:45:31 | istio/istio.io | https://api.github.com/repos/istio/istio.io | opened | Mesh with istioctl describe: Verify POD fails | area/test and release community/testing days kind/docs | Ref: https://preliminary.istio.io/latest/docs/ops/diagnostic-tools/istioctl-describe/#verify-a-pod-is-in-the-mesh
```
$ export DASHBOARD_POD=$(kubectl -n kube-system get pod -l k8s-app=kubernetes-dashboard -o jsonpath='{.items[0].metadata.name}')
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
``` | 2.0 | Mesh with istioctl describe: Verify POD fails - Ref: https://preliminary.istio.io/latest/docs/ops/diagnostic-tools/istioctl-describe/#verify-a-pod-is-in-the-mesh
```
$ export DASHBOARD_POD=$(kubectl -n kube-system get pod -l k8s-app=kubernetes-dashboard -o jsonpath='{.items[0].metadata.name}')
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}
``` | test | mesh with istioctl describe verify pod fails ref export dashboard pod kubectl n kube system get pod l app kubernetes dashboard o jsonpath items metadata name error error executing jsonpath items metadata name error executing template array index out of bounds index length printing more information for debugging the template template was items metadata name object given to jsonpath engine was map interface apiversion items interface kind list metadata map interface resourceversion selflink | 1 |
7,257 | 2,889,920,912 | IssuesEvent | 2015-06-13 21:57:40 | doc212/ades | https://api.github.com/repos/doc212/ades | closed | Avertissement au refresh de la liste des retenues | bug Testable | Quand on a sélectionné un type de retenues afin de lister les dates, si on fait un refresh, le browser émet un warning à propos du formulaire qui va être re-soumis


# Test
* from the menu, go to _Retenues_ > _Liste_
* choose a type of detention
* click date
* hit refresh
* _**the browser must not warn about a form being resubmitted and the list must just be refreshed**_
| 1.0 | Avertissement au refresh de la liste des retenues - Quand on a sélectionné un type de retenues afin de lister les dates, si on fait un refresh, le browser émet un warning à propos du formulaire qui va être re-soumis


# Test
* from the menu, go to _Retenues_ > _Liste_
* choose a type of detention
* click date
* hit refresh
* _**the browser must not warn about a form being resubmitted and the list must just be refreshed**_
| test | avertissement au refresh de la liste des retenues quand on a sélectionné un type de retenues afin de lister les dates si on fait un refresh le browser émet un warning à propos du formulaire qui va être re soumis test from the menu go to retenues liste choose a type of detention click date hit refresh the browser must not warn about a form being resubmitted and the list must just be refreshed | 1 |
503,806 | 14,598,156,392 | IssuesEvent | 2020-12-20 23:30:03 | VolmitSoftware/Iris | https://api.github.com/repos/VolmitSoftware/Iris | closed | Disabling bedrock leaves 1 layer of normal stone | Bug High Priority Plugin | **Describe the bug**
After disabling bedrock the bedrock is merely replaced by stone.
**To Reproduce**
Steps to reproduce the behavior:
1. Make server,
2. Download Overworld pack,
3. Disable bedrock in settings
4. See the bedrock replaced with stone
**Expected behavior**
Bedrock removed and the stone layer as well
**Screenshots or Video Recordings**

**Server and Plugin Informations**
Latest Iris and happens to at least 3 people with varying circumstances. | 1.0 | Disabling bedrock leaves 1 layer of normal stone - **Describe the bug**
After disabling bedrock the bedrock is merely replaced by stone.
**To Reproduce**
Steps to reproduce the behavior:
1. Make server,
2. Download Overworld pack,
3. Disable bedrock in settings
4. See the bedrock replaced with stone
**Expected behavior**
Bedrock removed and the stone layer as well
**Screenshots or Video Recordings**

**Server and Plugin Informations**
Latest Iris and happens to at least 3 people with varying circumstances. | non_test | disabling bedrock leaves layer of normal stone describe the bug after disabling bedrock the bedrock is merely replaced by stone to reproduce steps to reproduce the behavior make server download overworld pack disable bedrock in settings see the bedrock replaced with stone expected behavior bedrock removed and the stone layer as well screenshots or video recordings server and plugin informations latest iris and happens to at least people with varying circumstances | 0 |
54,044 | 11,177,054,111 | IssuesEvent | 2019-12-30 09:25:10 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] com_redirects alerts | No Code Attached Yet | I have no idea why we have the alerts on com_redirects to show that it is enabled but there has to be a better way than the current implementation which loads the alert on **every** page load. Personally I dont see the need for the notice at all.

| 1.0 | [4.0] com_redirects alerts - I have no idea why we have the alerts on com_redirects to show that it is enabled but there has to be a better way than the current implementation which loads the alert on **every** page load. Personally I dont see the need for the notice at all.

| non_test | com redirects alerts i have no idea why we have the alerts on com redirects to show that it is enabled but there has to be a better way than the current implementation which loads the alert on every page load personally i dont see the need for the notice at all | 0 |
260,346 | 22,613,685,872 | IssuesEvent | 2022-06-29 19:38:32 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | release-22.1: roachtest/sqlsmith: leftover bytes on the txn cleanup | C-test-failure O-robot O-roachtest T-sql-queries branch-release-22.1 | roachtest.sqlsmith/setup=empty/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=artifacts#/sqlsmith/setup=empty/setting=no-ddl) on release-22.1 @ [e73379784b9f8921d116dc1ad0d2fd8a6256ff9f](https://github.com/cockroachdb/cockroach/commits/e73379784b9f8921d116dc1ad0d2fd8a6256ff9f):
```
JOIN (VALUES ('14:54:42.42701':::TIME)) AS tab_130099 (col_218773) ON
(tab_130098.col_218772) = (tab_130099.col_218773)
FULL JOIN (VALUES (tab_130094.col_218765)) AS tab_130100 (col_218774) ON NULL
WHERE
NULL
GROUP BY
tab_130098.col_218772, tab_130100.col_218774
ORDER BY
tab_130098.col_218772 ASC, tab_130098.col_218772 DESC, tab_130098.col_218772 DESC
LIMIT
1:::INT8
)
AS col_218776
FROM
(
VALUES
('-20 years -4 mons -960 days -17:43:54.228946':::INTERVAL, NULL),
(
'21 years 10 mons 899 days 17:53:39.838878':::INTERVAL,
(
SELECT
e'{";z>gyrXH``$": {}, "X,!6@?[,H": null, "b": "\\"7i?^K[JB>o", "foobar": "b"}':::JSONB
AS col_218764
FROM
(
VALUES
(0:::INT8),
(1809420385:::INT8),
((-531379822):::INT8),
(1887639644:::INT8),
((-1):::INT8),
(1671078266:::INT8)
)
AS tab_130093 (col_218763)
LIMIT
1:::INT8
)
),
('1 day':::INTERVAL, '[{"-biIz<Ge|Wnf": [null], "baz": true}, null, true]':::JSONB),
('-58 years -7 mons -835 days -13:46:16.380848':::INTERVAL, NULL),
(
'-60 years -6 mons -921 days -13:36:39.76583':::INTERVAL,
'[{"OD}_yC": {}, "bar": {"Zkm3=(b~": {}, "a": {}}}, null, [], {}, [], [], []]':::JSONB
)
)
AS tab_130094 (col_218765, col_218766)
WHERE
true
LIMIT
95:::INT8;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-16619 | 2.0 | release-22.1: roachtest/sqlsmith: leftover bytes on the txn cleanup - roachtest.sqlsmith/setup=empty/setting=no-ddl [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=5431532&tab=artifacts#/sqlsmith/setup=empty/setting=no-ddl) on release-22.1 @ [e73379784b9f8921d116dc1ad0d2fd8a6256ff9f](https://github.com/cockroachdb/cockroach/commits/e73379784b9f8921d116dc1ad0d2fd8a6256ff9f):
```
JOIN (VALUES ('14:54:42.42701':::TIME)) AS tab_130099 (col_218773) ON
(tab_130098.col_218772) = (tab_130099.col_218773)
FULL JOIN (VALUES (tab_130094.col_218765)) AS tab_130100 (col_218774) ON NULL
WHERE
NULL
GROUP BY
tab_130098.col_218772, tab_130100.col_218774
ORDER BY
tab_130098.col_218772 ASC, tab_130098.col_218772 DESC, tab_130098.col_218772 DESC
LIMIT
1:::INT8
)
AS col_218776
FROM
(
VALUES
('-20 years -4 mons -960 days -17:43:54.228946':::INTERVAL, NULL),
(
'21 years 10 mons 899 days 17:53:39.838878':::INTERVAL,
(
SELECT
e'{";z>gyrXH``$": {}, "X,!6@?[,H": null, "b": "\\"7i?^K[JB>o", "foobar": "b"}':::JSONB
AS col_218764
FROM
(
VALUES
(0:::INT8),
(1809420385:::INT8),
((-531379822):::INT8),
(1887639644:::INT8),
((-1):::INT8),
(1671078266:::INT8)
)
AS tab_130093 (col_218763)
LIMIT
1:::INT8
)
),
('1 day':::INTERVAL, '[{"-biIz<Ge|Wnf": [null], "baz": true}, null, true]':::JSONB),
('-58 years -7 mons -835 days -13:46:16.380848':::INTERVAL, NULL),
(
'-60 years -6 mons -921 days -13:36:39.76583':::INTERVAL,
'[{"OD}_yC": {}, "bar": {"Zkm3=(b~": {}, "a": {}}}, null, [], {}, [], [], []]':::JSONB
)
)
AS tab_130094 (col_218765, col_218766)
WHERE
true
LIMIT
95:::INT8;
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*sqlsmith/setup=empty/setting=no-ddl.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-16619 | test | release roachtest sqlsmith leftover bytes on the txn cleanup roachtest sqlsmith setup empty setting no ddl with on release join values time as tab col on tab col tab col full join values tab col as tab col on null where null group by tab col tab col order by tab col asc tab col desc tab col desc limit as col from values years mons days interval null years mons days interval select e z gyrxh x h null b k jb o foobar b jsonb as col from values as tab col limit day interval baz true null true jsonb years mons days interval null years mons days interval jsonb as tab col col where true limit help see see cc cockroachdb sql queries jira issue crdb | 1 |
732,298 | 25,253,654,361 | IssuesEvent | 2022-11-15 16:22:24 | ilaydakurtboun/swe573 | https://api.github.com/repos/ilaydakurtboun/swe573 | opened | Create Post Views and Urls | Status: To-Do Priority: Major Type: Task | - [ ] Post Create View
- [ ] Post Update View
- [ ] Post Delete View
- [ ] Post Create Url
- [ ] Post Update Url
- [ ] Post Delete Url | 1.0 | Create Post Views and Urls - - [ ] Post Create View
- [ ] Post Update View
- [ ] Post Delete View
- [ ] Post Create Url
- [ ] Post Update Url
- [ ] Post Delete Url | non_test | create post views and urls post create view post update view post delete view post create url post update url post delete url | 0 |
46,047 | 9,873,578,346 | IssuesEvent | 2019-06-22 15:42:04 | graphql/graphiql | https://api.github.com/repos/graphql/graphiql | closed | Would it be possible to render newline characters in string values for GraphQL results | codemirror-graphql potential plugin version 1 | I get results like this very often:

Especially in case of errors. It would be very helpful if these values were rendered parsing the newline characters.
It think that should be in this line, but I have no idea how to add the newline functionality there:
https://github.com/graphql/codemirror-graphql/blob/master/src/results/mode.js#L101 | 1.0 | Would it be possible to render newline characters in string values for GraphQL results - I get results like this very often:

Especially in case of errors. It would be very helpful if these values were rendered parsing the newline characters.
It think that should be in this line, but I have no idea how to add the newline functionality there:
https://github.com/graphql/codemirror-graphql/blob/master/src/results/mode.js#L101 | non_test | would it be possible to render newline characters in string values for graphql results i get results like this very often especially in case of errors it would be very helpful if these values were rendered parsing the newline characters it think that should be in this line but i have no idea how to add the newline functionality there | 0 |
452,962 | 13,062,395,078 | IssuesEvent | 2020-07-30 15:07:06 | kubernetes-sigs/cluster-api | https://api.github.com/repos/kubernetes-sigs/cluster-api | closed | [cabpk] Wiring up the Format field | area/bootstrap kind/feature priority/backlog | /kind feature
**Describe the solution you'd like**
A PR for adding the `Format` field to `KubeadmConfig.Spec` is open. This issue is to discuss how we want to use it in the controller.
**Anything else you would like to add:**
The `Format` field is used to specify the output format of the bootstrap data that the controller generates. | 1.0 | [cabpk] Wiring up the Format field - /kind feature
**Describe the solution you'd like**
A PR for adding the `Format` field to `KubeadmConfig.Spec` is open. This issue is to discuss how we want to use it in the controller.
**Anything else you would like to add:**
The `Format` field is used to specify the output format of the bootstrap data that the controller generates. | non_test | wiring up the format field kind feature describe the solution you d like a pr for adding the format field to kubeadmconfig spec is open this issue is to discuss how we want to use it in the controller anything else you would like to add the format field is used to specify the output format of the bootstrap data that the controller generates | 0 |
71,537 | 8,663,936,175 | IssuesEvent | 2018-11-28 18:43:16 | byucs340ta/Fall2018 | https://api.github.com/repos/byucs340ta/Fall2018 | closed | Wrong words on button | P4: Aesthetic or Design Flaw Team 15 | When exiting the commands activity, the button to return to the game says “Exit Chat” it should say “return to game” or “Exit Commands” or something like that. You can see this by clicking the command button for any game
| 1.0 | Wrong words on button - When exiting the commands activity, the button to return to the game says “Exit Chat” it should say “return to game” or “Exit Commands” or something like that. You can see this by clicking the command button for any game
| non_test | wrong words on button when exiting the commands activity the button to return to the game says “exit chat” it should say “return to game” or “exit commands” or something like that you can see this by clicking the command button for any game | 0 |
27,587 | 13,307,611,713 | IssuesEvent | 2020-08-25 22:35:59 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Typing extremely slow when editing .editorconfig file | Area-Compilers Area-IDE Area-Performance Bug | **Version Used**:
16.5.0 preview 3.0 [29818.62.d16.5] ⚠
16.6.0 preview 1.0 [29818.10.master]
**Steps to Reproduce**:
1. Open .editorconfig file
2. Type in it
**Expected Behavior**:
There is no lag. It's a simple text file
**Actual Behavior**:
There is a huge lag. VS seems unresponsive
VS has impact on the entire system which appears slow
VS may crash
PerfView trace reveals that `OnAnalyzerConfigDocumentOpened` takes up 96% of inclusive samples
`module microsoft.codeanalysis.workspaces.ni <<microsoft.codeanalysis.workspaces.ni!Microsoft.CodeAnalysis.Workspace+<>c.<OnAnalyzerConfigDocumentOpened>b__189_2(Microsoft.CodeAnalysis.Workspace, Microsoft.CodeAnalysis.DocumentId, Microsoft.CodeAnalysis.Text.SourceText, Microsoft.CodeAnalysis.PreservationMode)>>`
Dump is available at `\\amadeus\perf\editorconfig slow` on corpnet | True | Typing extremely slow when editing .editorconfig file - **Version Used**:
16.5.0 preview 3.0 [29818.62.d16.5] ⚠
16.6.0 preview 1.0 [29818.10.master]
**Steps to Reproduce**:
1. Open .editorconfig file
2. Type in it
**Expected Behavior**:
There is no lag. It's a simple text file
**Actual Behavior**:
There is a huge lag. VS seems unresponsive
VS has impact on the entire system which appears slow
VS may crash
PerfView trace reveals that `OnAnalyzerConfigDocumentOpened` takes up 96% of inclusive samples
`module microsoft.codeanalysis.workspaces.ni <<microsoft.codeanalysis.workspaces.ni!Microsoft.CodeAnalysis.Workspace+<>c.<OnAnalyzerConfigDocumentOpened>b__189_2(Microsoft.CodeAnalysis.Workspace, Microsoft.CodeAnalysis.DocumentId, Microsoft.CodeAnalysis.Text.SourceText, Microsoft.CodeAnalysis.PreservationMode)>>`
Dump is available at `\\amadeus\perf\editorconfig slow` on corpnet | non_test | typing extremely slow when editing editorconfig file version used preview ⚠ preview steps to reproduce open editorconfig file type in it expected behavior there is no lag it s a simple text file actual behavior there is a huge lag vs seems unresponsive vs has impact on the entire system which appears slow vs may crash perfview trace reveals that onanalyzerconfigdocumentopened takes up of inclusive samples module microsoft codeanalysis workspaces ni c b microsoft codeanalysis workspace microsoft codeanalysis documentid microsoft codeanalysis text sourcetext microsoft codeanalysis preservationmode dump is available at amadeus perf editorconfig slow on corpnet | 0 |
307,842 | 26,568,059,054 | IssuesEvent | 2023-01-20 22:35:33 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | [BUG] backup-restore-operator yaml file causes install error when including value for imagePullSecrets | kind/bug area/test area/backup-recover [zube]: QA Working team/area3 area/helm | **Rancher Server Setup**
- Rancher version: `2.6.9-rc4`
- Installation option (Docker install/Helm Chart): `Helm`
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): `local rke1`
- Proxy/Cert Details:
**Information about the Cluster**
- Kubernetes version: `v1.24.4`
- Cluster Type (Local/Downstream): `Local`
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):
<!--
* Custom = Running a docker command on a node
* Imported = Running kubectl apply onto an existing k8s cluster
* Hosted = EKS, GKE, AKS, etc
* Infrastructure Provider = Rancher provisioning the nodes using different node drivers (e.g. AWS, Digital Ocean, etc)
-->
**User Information**
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom)
- If custom, define the set of permissions: `admin`
**Describe the bug**
when passing in values for imagePullSecrets in values.yaml file, error is generated when attempting to pass in an array ['value1','value2']; on install, error thrown, and imagePullSecrets field is saved with `null`.
**To Reproduce**
1. begin installation of backup-restore-operator
2. edit values yaml
3. include value ['value1','value2','value3']
4. edit value to include a list and value is saved.
**Result**
error in logs:
`Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): invalid type for io.k8s.api.core.v1.LocalObjectReference: got "string", expected "map"`
**Expected Result**
backup-restore-operator installed with no errors. included array saved to values.yaml
**Additional context**
discovered when testing **https://github.com/rancher/backup-restore-operator/issues/264**
| 1.0 | [BUG] backup-restore-operator yaml file causes install error when including value for imagePullSecrets - **Rancher Server Setup**
- Rancher version: `2.6.9-rc4`
- Installation option (Docker install/Helm Chart): `Helm`
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): `local rke1`
- Proxy/Cert Details:
**Information about the Cluster**
- Kubernetes version: `v1.24.4`
- Cluster Type (Local/Downstream): `Local`
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider):
<!--
* Custom = Running a docker command on a node
* Imported = Running kubectl apply onto an existing k8s cluster
* Hosted = EKS, GKE, AKS, etc
* Infrastructure Provider = Rancher provisioning the nodes using different node drivers (e.g. AWS, Digital Ocean, etc)
-->
**User Information**
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom)
- If custom, define the set of permissions: `admin`
**Describe the bug**
when passing in values for imagePullSecrets in values.yaml file, error is generated when attempting to pass in an array ['value1','value2']; on install, error thrown, and imagePullSecrets field is saved with `null`.
**To Reproduce**
1. begin installation of backup-restore-operator
2. edit values yaml
3. include value ['value1','value2','value3']
4. edit value to include a list and value is saved.
**Result**
error in logs:
`Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.imagePullSecrets[0]): invalid type for io.k8s.api.core.v1.LocalObjectReference: got "string", expected "map"`
**Expected Result**
backup-restore-operator installed with no errors. included array saved to values.yaml
**Additional context**
discovered when testing **https://github.com/rancher/backup-restore-operator/issues/264**
| test | backup restore operator yaml file causes install error when including value for imagepullsecrets rancher server setup rancher version installation option docker install helm chart helm if helm chart kubernetes cluster and version eks etc local proxy cert details information about the cluster kubernetes version cluster type local downstream local if downstream what type of cluster custom imported or specify provider for hosted infrastructure provider custom running a docker command on a node imported running kubectl apply onto an existing cluster hosted eks gke aks etc infrastructure provider rancher provisioning the nodes using different node drivers e g aws digital ocean etc user information what is the role of the user logged in admin cluster owner cluster member project owner project member custom if custom define the set of permissions admin describe the bug when passing in values for imagepullsecrets in values yaml file error is generated when attempting to pass in an array on install error thrown and imagepullsecrets field is saved with null to reproduce begin installation of backup restore operator edit values yaml include value edit value to include a list and value is saved result error in logs error upgrade failed error validating error validating data validationerror deployment spec template spec imagepullsecrets invalid type for io api core localobjectreference got string expected map expected result backup restore operator installed with no errors included array saved to values yaml additional context discovered when testing | 1 |
169,482 | 13,148,520,232 | IssuesEvent | 2020-08-08 22:05:36 | deathlyrage/pot-demo-bugs | https://api.github.com/repos/deathlyrage/pot-demo-bugs | closed | bug | ios more info needs testing |
**Location:** (X=-18956.595703,Y=222925.234375,Z=10240.21875)
.jpg)
**Message:** unable to eat or drink
**Version:** 8600 ()
**Reporter:** Gooey (011-521-831)
| 1.0 | bug -
**Location:** (X=-18956.595703,Y=222925.234375,Z=10240.21875)
.jpg)
**Message:** unable to eat or drink
**Version:** 8600 ()
**Reporter:** Gooey (011-521-831)
| test | bug location x y z message unable to eat or drink version reporter gooey | 1 |
303,935 | 26,240,053,383 | IssuesEvent | 2023-01-05 10:43:11 | dart-lang/co19 | https://api.github.com/repos/dart-lang/co19 | closed | LanguageFeatures/Patterns/constant_A03_t05 | bad-test | Looks like a typo: test passes `const [1, 2]` as an argument and expects `"<dynamic>[1, -2]"` as a result, but argument is not matched in any of the cases:
https://github.com/dart-lang/co19/blob/74cb5e745fc4efc0cda2a9bf5912bb534eea23fc/LanguageFeatures/Patterns/constant_A03_t05.dart#L111
https://github.com/dart-lang/co19/blob/74cb5e745fc4efc0cda2a9bf5912bb534eea23fc/LanguageFeatures/Patterns/constant_A03_t05.dart#L35-L56
Maybe it should pass `const [1, -2]` instead of `const [1, 2]`. | 1.0 | LanguageFeatures/Patterns/constant_A03_t05 - Looks like a typo: test passes `const [1, 2]` as an argument and expects `"<dynamic>[1, -2]"` as a result, but argument is not matched in any of the cases:
https://github.com/dart-lang/co19/blob/74cb5e745fc4efc0cda2a9bf5912bb534eea23fc/LanguageFeatures/Patterns/constant_A03_t05.dart#L111
https://github.com/dart-lang/co19/blob/74cb5e745fc4efc0cda2a9bf5912bb534eea23fc/LanguageFeatures/Patterns/constant_A03_t05.dart#L35-L56
Maybe it should pass `const [1, -2]` instead of `const [1, 2]`. | test | languagefeatures patterns constant looks like a typo test passes const as an argument and expects as a result but argument is not matched in any of the cases maybe it should pass const instead of const | 1 |
25,533 | 4,161,147,373 | IssuesEvent | 2016-06-17 15:37:49 | julenpardo/DB-Connection-Watcher | https://api.github.com/repos/julenpardo/DB-Connection-Watcher | opened | mail() calling functions are not tested | testing | And they are not likely to be tested until a way to mock built-in `mail()` function is found. | 1.0 | mail() calling functions are not tested - And they are not likely to be tested until a way to mock built-in `mail()` function is found. | test | mail calling functions are not tested and they are not likely to be tested until a way to mock built in mail function is found | 1 |
69,772 | 9,332,887,647 | IssuesEvent | 2019-03-28 13:18:46 | Shopify/theme-scripts | https://api.github.com/repos/Shopify/theme-scripts | closed | [Documentation] Update CONTRIBUTING.md to document the publishing steps with lerna+Shipit | documentation | The publishing steps have change with the introduction of Shipit to the repo. We need to document the steps needed in order to publish packages to the npm registry. These steps leverages [lerna](https://lernajs.io/) to manage packages versions, dependencies, published state so we need to have a clear description of the necessary steps.
| 1.0 | [Documentation] Update CONTRIBUTING.md to document the publishing steps with lerna+Shipit - The publishing steps have change with the introduction of Shipit to the repo. We need to document the steps needed in order to publish packages to the npm registry. These steps leverages [lerna](https://lernajs.io/) to manage packages versions, dependencies, published state so we need to have a clear description of the necessary steps.
| non_test | update contributing md to document the publishing steps with lerna shipit the publishing steps have change with the introduction of shipit to the repo we need to document the steps needed in order to publish packages to the npm registry these steps leverages to manage packages versions dependencies published state so we need to have a clear description of the necessary steps | 0 |
172,757 | 13,340,547,580 | IssuesEvent | 2020-08-28 14:36:00 | ChainSafe/lodestar | https://api.github.com/repos/ChainSafe/lodestar | closed | Syncing Witti: Hanging peer on node shutdown | bot:stale testnet-debugging | After discovering+connecting to a few peers, shutdown the node (ctrl-c).
Instead of closing, it hangs. Inspecting the metrics, it still shows 1 connected peer. | 1.0 | Syncing Witti: Hanging peer on node shutdown - After discovering+connecting to a few peers, shutdown the node (ctrl-c).
Instead of closing, it hangs. Inspecting the metrics, it still shows 1 connected peer. | test | syncing witti hanging peer on node shutdown after discovering connecting to a few peers shutdown the node ctrl c instead of closing it hangs inspecting the metrics it still shows connected peer | 1 |
57,121 | 6,538,514,261 | IssuesEvent | 2017-09-01 06:46:26 | ajency/PO | https://api.github.com/repos/ajency/PO | closed | Live|Add Competing listing not saving the listing | bug Tested on test | Live|Add Competing listing not saving the listing
Steps to replicate:
1. Open seller popup "Product Info" tab
2. Edit -> Enter listing id and save -> listing id will be in crawling status
Current behaviour: is not updating the mapped listing count. Also listing not getting crawled.

| 2.0 | Live|Add Competing listing not saving the listing - Live|Add Competing listing not saving the listing
Steps to replicate:
1. Open seller popup "Product Info" tab
2. Edit -> Enter listing id and save -> listing id will be in crawling status
Current behaviour: is not updating the mapped listing count. Also listing not getting crawled.

| test | live add competing listing not saving the listing live add competing listing not saving the listing steps to replicate open seller popup product info tab edit enter listing id and save listing id will be in crawling status current behaviour is not updating the mapped listing count also listing not getting crawled | 1 |
335,716 | 10,165,415,117 | IssuesEvent | 2019-08-07 13:52:14 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | Chrome Edge Preview lists page as unsafe | Priority: Low | Go to https://lists.ubuntu.com/archives/ubuntu-announce/2018-October/000237.html in Edge Preview (you can get it at https://www.microsoftedgeinsider.com/ ) and you will see:
This site has been reported as unsafe
Hosted by lists.ubuntu.com
Microsoft recommends you don't continue to this site. It has been reported to Microsoft for containing harmful programs that may try to steal personal or financial information.
Microsoft Defender SmartScreen

---
*Reported from: https://ubuntu.com/* | 1.0 | Chrome Edge Preview lists page as unsafe - Go to https://lists.ubuntu.com/archives/ubuntu-announce/2018-October/000237.html in Edge Preview (you can get it at https://www.microsoftedgeinsider.com/ ) and you will see:
This site has been reported as unsafe
Hosted by lists.ubuntu.com
Microsoft recommends you don't continue to this site. It has been reported to Microsoft for containing harmful programs that may try to steal personal or financial information.
Microsoft Defender SmartScreen

---
*Reported from: https://ubuntu.com/* | non_test | chrome edge preview lists page as unsafe go to in edge preview you can get it at and you will see this site has been reported as unsafe hosted by lists ubuntu com microsoft recommends you don t continue to this site it has been reported to microsoft for containing harmful programs that may try to steal personal or financial information microsoft defender smartscreen reported from | 0 |
335,346 | 10,152,315,263 | IssuesEvent | 2019-08-05 23:11:02 | teambit/bit | https://api.github.com/repos/teambit/bit | closed | Workspace configuration's overrides should include imported components | area/config priority/high type/feature | ### Description
When using the `overrides` feature in a project, the rules defined in it must apply to imported components as well.
### Describe the solution you'd like
All override rules from the workspace should apply to imported components that fit the glob-pattern (or specific ID). Additionally, Bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component (which should act as an override rule for a specific-ID in the workspace config).
### Additional context
This feature should only be released when #1863 is ready. | 1.0 | Workspace configuration's overrides should include imported components - ### Description
When using the `overrides` feature in a project, the rules defined in it must apply to imported components as well.
### Describe the solution you'd like
All override rules from the workspace should apply to imported components that fit the glob-pattern (or specific ID). Additionally, Bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component (which should act as an override rule for a specific-ID in the workspace config).
### Additional context
This feature should only be released when #1863 is ready. | non_test | workspace configuration s overrides should include imported components description when using the overrides feature in a project the rules defined in it must apply to imported components as well describe the solution you d like all override rules from the workspace should apply to imported components that fit the glob pattern or specific id additionally bit should keep supporting the merge strategy for the override rules when taking to account the rule defined for the imported component which should act as an override rule for a specific id in the workspace config additional context this feature should only be released when is ready | 0 |
230,487 | 18,672,338,222 | IssuesEvent | 2021-10-30 23:50:00 | Airthics/denvair | https://api.github.com/repos/Airthics/denvair | opened | ✅ Test mongo.sh | ✅ Test | **Describe the solution you'd like**
Test python.sh shell script in order to check if installation can be done or not.
| 1.0 | ✅ Test mongo.sh - **Describe the solution you'd like**
Test python.sh shell script in order to check if installation can be done or not.
| test | ✅ test mongo sh describe the solution you d like test python sh shell script in order to check if installation can be done or not | 1 |
346,250 | 30,879,503,585 | IssuesEvent | 2023-08-03 16:28:08 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | closed | Fix eventing prow jobs failing because of npm not found. | area/eventing kind/failing-test | **Description**
The following jobs fail because `/home/prow/go/src/github.com/kyma-project/test-infra/prow/scripts/cluster-integration/helpers/eventing.sh: line 173: npm: command not found`:
- pre-main-kyma-gardener-gcp-eventing
- pre-main-kyma-gardener-gcp-eventing-upgrade
- kyma-gardener-gcp-eventing-with-auth-manager
**PRs**
- https://github.com/kyma-project/kyma/pull/17933
- https://github.com/kyma-project/test-infra/pull/8499 | 1.0 | Fix eventing prow jobs failing because of npm not found. - **Description**
The following jobs fail because `/home/prow/go/src/github.com/kyma-project/test-infra/prow/scripts/cluster-integration/helpers/eventing.sh: line 173: npm: command not found`:
- pre-main-kyma-gardener-gcp-eventing
- pre-main-kyma-gardener-gcp-eventing-upgrade
- kyma-gardener-gcp-eventing-with-auth-manager
**PRs**
- https://github.com/kyma-project/kyma/pull/17933
- https://github.com/kyma-project/test-infra/pull/8499 | test | fix eventing prow jobs failing because of npm not found description the following jobs fail because home prow go src github com kyma project test infra prow scripts cluster integration helpers eventing sh line npm command not found pre main kyma gardener gcp eventing pre main kyma gardener gcp eventing upgrade kyma gardener gcp eventing with auth manager prs | 1 |
743,882 | 25,918,077,125 | IssuesEvent | 2022-12-15 19:10:25 | OffchainLabs/arbitrum-docs | https://api.github.com/repos/OffchainLabs/arbitrum-docs | opened | fix md tables styling | Type: Enhancement Priority: P1 Medium Type: UI Changes | I.e., on https://developer.arbitrum.io/useful-addresses they're too wide for the page, italicized, etc. (Some of this is from residual stuff from the pre-nitro docs that just needs to be updated/removed) | 1.0 | fix md tables styling - I.e., on https://developer.arbitrum.io/useful-addresses they're too wide for the page, italicized, etc. (Some of this is from residual stuff from the pre-nitro docs that just needs to be updated/removed) | non_test | fix md tables styling i e on they re too wide for the page italicized etc some of this is from residual stuff from the pre nitro docs that just needs to be updated removed | 0 |
307,041 | 26,514,970,909 | IssuesEvent | 2023-01-18 20:01:31 | wazuh/wazuh | https://api.github.com/repos/wazuh/wazuh | reopened | Release 4.4.0 - Beta 1 - System tests | module/cluster module/rbac team/framework release test/4.4.0 | The following issue aims to run all `system tests` for the current release candidate, report the results, and open new issues for any encountered errors.
## System tests information
| | |
|--------------------------------------|--------------------------------------------|
| **Main release candidate issue** | #15891 |
| **Version** | 4.4.0 |
| **Release candidate #** | Beta 1 |
| **Tag** | [v4.4.0-beta1](https://github.com/wazuh/wazuh/tree/v4.4.0-beta1) |
| **Previous system tests issue** | https://github.com/wazuh/wazuh/issues/15751 |
## Test report procedure
All individual test checks must be marked as:
| | |
|---------------------------------|--------------------------------------------|
| Pass | The test ran successfully. |
| Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. |
| Skip | The test was not run. It must be properly justified and reported in an issue. |
| Fail | The test failed. A new issue must be opened to evaluate and address the problem. |
All test results must have one the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed check. |
| :yellow_circle: | There is at least one expected fail or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue.
Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test.
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
<!--
All tests have been executed and the results can be found [here]().
| | | | |
|----------------|-------------|---------------------|----------------|
| **Status** | **Test** | **Failure type** | **Notes** |
| | | | |
All tests have passed and the fails have been reported or justified. I therefore conclude that this issue is finished and OK for this release candidate.
-->
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [ @nico-stefani ]
- [ @davidjiglesias ]
| 1.0 | Release 4.4.0 - Beta 1 - System tests - The following issue aims to run all `system tests` for the current release candidate, report the results, and open new issues for any encountered errors.
## System tests information
| | |
|--------------------------------------|--------------------------------------------|
| **Main release candidate issue** | #15891 |
| **Version** | 4.4.0 |
| **Release candidate #** | Beta 1 |
| **Tag** | [v4.4.0-beta1](https://github.com/wazuh/wazuh/tree/v4.4.0-beta1) |
| **Previous system tests issue** | https://github.com/wazuh/wazuh/issues/15751 |
## Test report procedure
All individual test checks must be marked as:
| | |
|---------------------------------|--------------------------------------------|
| Pass | The test ran successfully. |
| Xfail | The test was expected to fail and it failed. It must be properly justified and reported in an issue. |
| Skip | The test was not run. It must be properly justified and reported in an issue. |
| Fail | The test failed. A new issue must be opened to evaluate and address the problem. |
All test results must have one the following statuses:
| | |
|---------------------------------|--------------------------------------------|
| :green_circle: | All checks passed. |
| :red_circle: | There is at least one failed check. |
| :yellow_circle: | There is at least one expected fail or skipped test and no failures. |
Any failing test must be properly addressed with a new issue, detailing the error and the possible cause. It must be included in the `Fixes` section of the current release candidate main issue.
Any expected fail or skipped test must have an issue justifying the reason. All auditors must validate the justification for an expected fail or skipped test.
An extended report of the test results must be attached as a zip or txt. This report can be used by the auditors to dig deeper into any possible failures and details.
## Conclusions
<!--
All tests have been executed and the results can be found [here]().
| | | | |
|----------------|-------------|---------------------|----------------|
| **Status** | **Test** | **Failure type** | **Notes** |
| | | | |
All tests have passed and the fails have been reported or justified. I therefore conclude that this issue is finished and OK for this release candidate.
-->
## Auditors validation
The definition of done for this one is the validation of the conclusions and the test results from all auditors.
All checks from below must be accepted in order to close this issue.
- [ @nico-stefani ]
- [ @davidjiglesias ]
| test | release beta system tests the following issue aims to run all system tests for the current release candidate report the results and open new issues for any encountered errors system tests information main release candidate issue version release candidate beta tag previous system tests issue test report procedure all individual test checks must be marked as pass the test ran successfully xfail the test was expected to fail and it failed it must be properly justified and reported in an issue skip the test was not run it must be properly justified and reported in an issue fail the test failed a new issue must be opened to evaluate and address the problem all test results must have one the following statuses green circle all checks passed red circle there is at least one failed check yellow circle there is at least one expected fail or skipped test and no failures any failing test must be properly addressed with a new issue detailing the error and the possible cause it must be included in the fixes section of the current release candidate main issue any expected fail or skipped test must have an issue justifying the reason all auditors must validate the justification for an expected fail or skipped test an extended report of the test results must be attached as a zip or txt this report can be used by the auditors to dig deeper into any possible failures and details conclusions all tests have been executed and the results can be found status test failure type notes all tests have passed and the fails have been reported or justified i therefore conclude that this issue is finished and ok for this release candidate auditors validation the definition of done for this one is the validation of the conclusions and the test results from all auditors all checks from below must be accepted in order to close this issue | 1 |
45,392 | 9,745,508,528 | IssuesEvent | 2019-06-03 09:47:44 | samthiriot/knime-shapefiles-as-WKT | https://api.github.com/repos/samthiriot/knime-shapefiles-as-WKT | closed | add a licence header to your source code files | quality of code | can use an eclipse plugin to do this for you : https://wiki.eclipse.org/Development_Resources/How_to_Use_Eclipse_Copyright_Tool | 1.0 | add a licence header to your source code files - can use an eclipse plugin to do this for you : https://wiki.eclipse.org/Development_Resources/How_to_Use_Eclipse_Copyright_Tool | non_test | add a licence header to your source code files can use an eclipse plugin to do this for you | 0 |
37,129 | 2,815,576,233 | IssuesEvent | 2015-05-19 05:44:46 | eaudeweb/scoreboard.visualization | https://api.github.com/repos/eaudeweb/scoreboard.visualization | closed | Exclude the breakdowns belonging to the total group from the titles and legend | Low priority | * [ ] country profile charts
* [ ] map chart
* [ ] bar chart
* [ ] line chart
* [ ] scatter and bubble charts | 1.0 | Exclude the breakdowns belonging to the total group from the titles and legend - * [ ] country profile charts
* [ ] map chart
* [ ] bar chart
* [ ] line chart
* [ ] scatter and bubble charts | non_test | exclude the breakdowns belonging to the total group from the titles and legend country profile charts map chart bar chart line chart scatter and bubble charts | 0 |
109,871 | 9,417,531,373 | IssuesEvent | 2019-04-10 16:57:42 | Iridescent-CM/technovation-app | https://api.github.com/repos/Iridescent-CM/technovation-app | closed | Certificates: Implement certificates datagrid in the Admin UI | 5 - Test <= 8 Feature request [sprint topic] scores / certs | **Is your feature request related to a problem? Please describe.**
We need to build a new datagrid page in the admin UI so that any certificates existing in the system can be searched for, downloaded, or deleted.
**Describe the solution you'd like**
This will function similarly to the Participants page view in the admin UI where we can view information about various student, mentor, and judge accounts in the system. It will add a Certificates menu item in the right-hand sidebar navigation for Admin accounts on their dashboard. This will lead to a page similar to Participants which contains a table (datagrid) featuring all certificates in the system, sortable by season and account type for all accounts in the system.
**Acceptance Criteria**
1. Log in as an admin user
2. Browse to the "Certificates" option in the right-hand side-bar navigation
3. Make sure that you can view certificates for our end-users by clicking the PDF icon in the table
4. Assert that certificates for judges, mentors, and students all appear and can be downloaded
5. Test to make sure that filters can be applied to the datagrid to filter the table data down
<!---
@huboard:{"order":1948.6014026365867,"milestone_order":2057,"custom_state":""}
-->
| 1.0 | Certificates: Implement certificates datagrid in the Admin UI - **Is your feature request related to a problem? Please describe.**
We need to build a new datagrid page in the admin UI so that any certificates existing in the system can be searched for, downloaded, or deleted.
**Describe the solution you'd like**
This will function similarly to the Participants page view in the admin UI where we can view information about various student, mentor, and judge accounts in the system. It will add a Certificates menu item in the right-hand sidebar navigation for Admin accounts on their dashboard. This will lead to a page similar to Participants which contains a table (datagrid) featuring all certificates in the system, sortable by season and account type for all accounts in the system.
**Acceptance Criteria**
1. Log in as an admin user
2. Browse to the "Certificates" option in the right-hand side-bar navigation
3. Make sure that you can view certificates for our end-users by clicking the PDF icon in the table
4. Assert that certificates for judges, mentors, and students all appear and can be downloaded
5. Test to make sure that filters can be applied to the datagrid to filter the table data down
<!---
@huboard:{"order":1948.6014026365867,"milestone_order":2057,"custom_state":""}
-->
| test | certificates implement certificates datagrid in the admin ui is your feature request related to a problem please describe we need to build a new datagrid page in the admin ui so that any certificates existing in the system can be searched for downloaded or deleted describe the solution you d like this will function similarly to the participants page view in the admin ui where we can view information about various student mentor and judge accounts in the system it will add a certificates menu item in the right hand sidebar navigation for admin accounts on their dashboard this will lead to a page similar to participants which contains a table datagrid featuring all certificates in the system sortable by season and account type for all accounts in the system acceptance criteria log in as an admin user browse to the certificates option in the right hand side bar navigation make sure that you can view certificates for our end users by clicking the pdf icon in the table assert that certificates for judges mentors and students all appear and can be downloaded test to make sure that filters can be applied to the datagrid to filter the table data down huboard order milestone order custom state | 1 |
208,637 | 15,897,065,154 | IssuesEvent | 2021-04-11 19:39:19 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: sqlsmith/setup=rand-tables/setting=no-mutations failed | C-test-failure O-roachtest O-robot branch-release-21.1 release-blocker | [(roachtest).sqlsmith/setup=rand-tables/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=buildLog) on [release-21.1@5323c44e1c3d3d9ded95a9f20be5ba738cd542ec](https://github.com/cockroachdb/cockroach/commits/5323c44e1c3d3d9ded95a9f20be5ba738cd542ec):
```
*
FROM
(
VALUES
(3618326136:::OID),
(NULL),
(1535022620:::OID),
(CASE WHEN NULL THEN 3809945684:::OID ELSE 1132124906:::OID END),
(3812966465:::OID)
)
AS tab_762 (col_1832)
),
with_271 (col_1833)
AS (SELECT * FROM (VALUES ('P':::STRING), ('8':::STRING), ('T':::STRING)) AS tab_763 (col_1833))
SELECT
regr_r2(NULL::INT8, (-0.0017706938632580105):::FLOAT8::INT8)::FLOAT8 AS col_1834
FROM
defaultdb.public.table3@[0] AS tab_764
GROUP BY
tab_764.col3_11, tab_764.col2_6, tab_764.col3_16
ORDER BY
tab_764.col3_16 ASC, tab_764.col3_11 ASC
LIMIT
1:::INT8
)
AS col_1835,
jsonb_insert('[false, []]':::JSONB::JSONB, NULL::STRING[], tab_757.col1_6::JSONB, false::BOOL)::JSONB AS col_1836,
tab_758.col2_3 AS col_1837,
((-1):::INT8::INT8 * (-6297526606271204945):::INT8::INT8)::INT8 AS col_1838,
tab_757.col1_4 AS col_1839,
'\x4a5c':::BYTES AS col_1840,
0:::OID AS col_1841,
tab_758.col2_5 AS col_1842,
tab_758.col2_0 AS col_1843,
tab_758.col2_2 AS col_1844,
tab_757.col1_15 AS col_1845,
tab_758.col2_9 AS col_1846,
tab_757.col1_0 AS col_1847,
(-0.9961179198744912):::FLOAT8 AS col_1848,
B'00100110011111110110101110010001101111000011100' AS col_1849,
'':::STRING AS col_1850
FROM
defaultdb.public.table1@table1_col1_0_col1_16_col1_15_col1_2_idx AS tab_757,
with_267 AS cte_ref_80,
defaultdb.public.table2@table2_col2_13_col2_8_col2_4_col2_14_col2_2_col2_9_col2_12_col2_11_col2_10_col2_3_col2_7_col2_1_col2_15_col2_0_col2_6_col2_5_idx
AS tab_758
WHERE
false
LIMIT
47:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=rand-tables/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=artifacts#/sqlsmith/setup=rand-tables/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Drand-tables%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: sqlsmith/setup=rand-tables/setting=no-mutations failed - [(roachtest).sqlsmith/setup=rand-tables/setting=no-mutations failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=buildLog) on [release-21.1@5323c44e1c3d3d9ded95a9f20be5ba738cd542ec](https://github.com/cockroachdb/cockroach/commits/5323c44e1c3d3d9ded95a9f20be5ba738cd542ec):
```
*
FROM
(
VALUES
(3618326136:::OID),
(NULL),
(1535022620:::OID),
(CASE WHEN NULL THEN 3809945684:::OID ELSE 1132124906:::OID END),
(3812966465:::OID)
)
AS tab_762 (col_1832)
),
with_271 (col_1833)
AS (SELECT * FROM (VALUES ('P':::STRING), ('8':::STRING), ('T':::STRING)) AS tab_763 (col_1833))
SELECT
regr_r2(NULL::INT8, (-0.0017706938632580105):::FLOAT8::INT8)::FLOAT8 AS col_1834
FROM
defaultdb.public.table3@[0] AS tab_764
GROUP BY
tab_764.col3_11, tab_764.col2_6, tab_764.col3_16
ORDER BY
tab_764.col3_16 ASC, tab_764.col3_11 ASC
LIMIT
1:::INT8
)
AS col_1835,
jsonb_insert('[false, []]':::JSONB::JSONB, NULL::STRING[], tab_757.col1_6::JSONB, false::BOOL)::JSONB AS col_1836,
tab_758.col2_3 AS col_1837,
((-1):::INT8::INT8 * (-6297526606271204945):::INT8::INT8)::INT8 AS col_1838,
tab_757.col1_4 AS col_1839,
'\x4a5c':::BYTES AS col_1840,
0:::OID AS col_1841,
tab_758.col2_5 AS col_1842,
tab_758.col2_0 AS col_1843,
tab_758.col2_2 AS col_1844,
tab_757.col1_15 AS col_1845,
tab_758.col2_9 AS col_1846,
tab_757.col1_0 AS col_1847,
(-0.9961179198744912):::FLOAT8 AS col_1848,
B'00100110011111110110101110010001101111000011100' AS col_1849,
'':::STRING AS col_1850
FROM
defaultdb.public.table1@table1_col1_0_col1_16_col1_15_col1_2_idx AS tab_757,
with_267 AS cte_ref_80,
defaultdb.public.table2@table2_col2_13_col2_8_col2_4_col2_14_col2_2_col2_9_col2_12_col2_11_col2_10_col2_3_col2_7_col2_1_col2_15_col2_0_col2_6_col2_5_idx
AS tab_758
WHERE
false
LIMIT
47:::INT8;
```
<details><summary>More</summary><p>
Artifacts: [/sqlsmith/setup=rand-tables/setting=no-mutations](https://teamcity.cockroachdb.com/viewLog.html?buildId=2870055&tab=artifacts#/sqlsmith/setup=rand-tables/setting=no-mutations)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Drand-tables%2Fsetting%3Dno-mutations.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest sqlsmith setup rand tables setting no mutations failed on from values oid null oid case when null then oid else oid end oid as tab col with col as select from values p string string t string as tab col select regr null as col from defaultdb public as tab group by tab tab tab order by tab asc tab asc limit as col jsonb insert jsonb jsonb null string tab jsonb false bool jsonb as col tab as col as col tab as col bytes as col oid as col tab as col tab as col tab as col tab as col tab as col tab as col as col b as col string as col from defaultdb public idx as tab with as cte ref defaultdb public idx as tab where false limit more artifacts powered by | 1 |
238,244 | 19,705,609,507 | IssuesEvent | 2022-01-12 21:35:48 | ibis-project/ibis | https://api.github.com/repos/ibis-project/ibis | closed | refactor: move most dask/pandas udf tests to ibis/backends/tests/test_vectorized_udf | refactor tests udf backends - pandas backends - dask | Most udf tests in the pandas and ibis backends are not really backend specific and should be moved to the centralized ibis tests. | 1.0 | refactor: move most dask/pandas udf tests to ibis/backends/tests/test_vectorized_udf - Most udf tests in the pandas and ibis backends are not really backend specific and should be moved to the centralized ibis tests. | test | refactor move most dask pandas udf tests to ibis backends tests test vectorized udf most udf tests in the pandas and ibis backends are not really backend specific and should be moved to the centralized ibis tests | 1 |
196,086 | 14,798,714,483 | IssuesEvent | 2021-01-13 00:27:29 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Some storage e2es using default storageclass instead of configured | kind/failing-test | <!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI -->
**Which jobs are failing**:
**Which test(s) are failing**:
provisioning should provision storage with pvc data source in parallel
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
```
go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:264
Jan 12 02:29:31.942: Unexpected error:
<*errors.errorString | 0xc0039ba7b0>: {
s: "No default storage class found",
}
No default storage class found
occurred
go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:355
```
**Anything else we need to know**:
| 1.0 | Some storage e2es using default storageclass instead of configured - <!-- Please only use this template for submitting reports about continuously failing tests or jobs in Kubernetes CI -->
**Which jobs are failing**:
**Which test(s) are failing**:
provisioning should provision storage with pvc data source in parallel
**Since when has it been failing**:
**Testgrid link**:
**Reason for failure**:
```
go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:264
Jan 12 02:29:31.942: Unexpected error:
<*errors.errorString | 0xc0039ba7b0>: {
s: "No default storage class found",
}
No default storage class found
occurred
go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:355
```
**Anything else we need to know**:
| test | some storage using default storageclass instead of configured which jobs are failing which test s are failing provisioning should provision storage with pvc data source in parallel since when has it been failing testgrid link reason for failure go src io kubernetes test storage testsuites provisioning go jan unexpected error s no default storage class found no default storage class found occurred go src io kubernetes test storage testsuites provisioning go anything else we need to know | 1 |
320,921 | 27,493,488,929 | IssuesEvent | 2023-03-04 22:28:23 | Roukys/HHauto | https://api.github.com/repos/Roukys/HHauto | closed | Not collecting season | bug to be tested | Log:
```
"02.03.2023, 10:11:34.38:autoLoop": "Time to go and check Season for collecting reward.",
"02.03.2023, 10:11:34.44:goAndCollectSeason": "Season end in {\"days\":30,\"hours\":5,\"minutes\":48,\"seconds\":27}",
"02.03.2023, 10:11:34.46:goAndCollectSeason": "Going to collect Season.",
"02.03.2023, 10:11:34.47:goAndCollectSeason": "setting autoloop to false",
"02.03.2023, 10:11:34.49:goAndCollectSeason": "No season collection to do.",
"02.03.2023, 10:11:34.51:setTimer": "nextSeasonCollectTime set to 19418:14:11:34 (04:00:00)",
```
Settings set to collect anything except fists, kisses, energy and exp. The season has just started, so there's many things to collect, but the script ignores it
Edit: the problem seems to be in .free_reward class, script lloks for .free-reward instead | 1.0 | Not collecting season - Log:
```
"02.03.2023, 10:11:34.38:autoLoop": "Time to go and check Season for collecting reward.",
"02.03.2023, 10:11:34.44:goAndCollectSeason": "Season end in {\"days\":30,\"hours\":5,\"minutes\":48,\"seconds\":27}",
"02.03.2023, 10:11:34.46:goAndCollectSeason": "Going to collect Season.",
"02.03.2023, 10:11:34.47:goAndCollectSeason": "setting autoloop to false",
"02.03.2023, 10:11:34.49:goAndCollectSeason": "No season collection to do.",
"02.03.2023, 10:11:34.51:setTimer": "nextSeasonCollectTime set to 19418:14:11:34 (04:00:00)",
```
Settings set to collect anything except fists, kisses, energy and exp. The season has just started, so there's many things to collect, but the script ignores it
Edit: the problem seems to be in .free_reward class, script lloks for .free-reward instead | test | not collecting season log autoloop time to go and check season for collecting reward goandcollectseason season end in days hours minutes seconds goandcollectseason going to collect season goandcollectseason setting autoloop to false goandcollectseason no season collection to do settimer nextseasoncollecttime set to settings set to collect anything except fists kisses energy and exp the season has just started so there s many things to collect but the script ignores it edit the problem seems to be in free reward class script lloks for free reward instead | 1 |
30,894 | 4,226,477,667 | IssuesEvent | 2016-07-02 13:49:57 | Laura-O4S/RobotsGoHome | https://api.github.com/repos/Laura-O4S/RobotsGoHome | closed | MENU: Tidy top bar | done enhancement graphic design | Tidy up top bar on the menu, consistent positioning of plus signs and move money closer to text box

| 1.0 | MENU: Tidy top bar - Tidy up top bar on the menu, consistent positioning of plus signs and move money closer to text box

| non_test | menu tidy top bar tidy up top bar on the menu consistent positioning of plus signs and move money closer to text box | 0 |
281,494 | 24,397,725,040 | IssuesEvent | 2022-10-04 20:57:40 | apache/beam | https://api.github.com/repos/apache/beam | closed | Python 2 precommit (Flink) is failing | P3 bug test-failures | [https://builds.apache.org/job/beam_PreCommit_Portable_Python_Commit/6065/console](https://builds.apache.org/job/beam_PreCommit_Portable_Python_Commit/6065/console)
14:01:17 debug_error_string = "\{"created":"@1568667677.197329889","description":"Error received from peer ipv4:127.0.0.1:55257","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Method org.apache.beam.model.job_management.v1.JobService/GetJobMetrics is unimplemented","grpc_status":12}"14:01:17 debug_error_string = "\{"created":"@1568667677.197329889","description":"Error received from peer ipv4:127.0.0.1:55257","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Method org.apache.beam.model.job_management.v1.JobService/GetJobMetrics is unimplemented","grpc_status":12}"
Imported from Jira [BEAM-8245](https://issues.apache.org/jira/browse/BEAM-8245). Original Jira may contain additional context.
Reported by: ibzib. | 1.0 | Python 2 precommit (Flink) is failing - [https://builds.apache.org/job/beam_PreCommit_Portable_Python_Commit/6065/console](https://builds.apache.org/job/beam_PreCommit_Portable_Python_Commit/6065/console)
14:01:17 debug_error_string = "\{"created":"@1568667677.197329889","description":"Error received from peer ipv4:127.0.0.1:55257","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Method org.apache.beam.model.job_management.v1.JobService/GetJobMetrics is unimplemented","grpc_status":12}"14:01:17 debug_error_string = "\{"created":"@1568667677.197329889","description":"Error received from peer ipv4:127.0.0.1:55257","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"Method org.apache.beam.model.job_management.v1.JobService/GetJobMetrics is unimplemented","grpc_status":12}"
Imported from Jira [BEAM-8245](https://issues.apache.org/jira/browse/BEAM-8245). Original Jira may contain additional context.
Reported by: ibzib. | test | python precommit flink is failing debug error string created description error received from peer file src core lib surface call cc file line grpc message method org apache beam model job management jobservice getjobmetrics is unimplemented grpc status debug error string created description error received from peer file src core lib surface call cc file line grpc message method org apache beam model job management jobservice getjobmetrics is unimplemented grpc status imported from jira original jira may contain additional context reported by ibzib | 1 |
6,846 | 5,693,709,419 | IssuesEvent | 2017-04-15 04:38:30 | briansmith/ring | https://api.github.com/repos/briansmith/ring | closed | Optimize SHA-1 and SHA-2 Further | performance | See http://www.hackersdelight.org/corres.txt, item 7: Some tricks useful for the Secure Hash Algorithm
| True | Optimize SHA-1 and SHA-2 Further - See http://www.hackersdelight.org/corres.txt, item 7: Some tricks useful for the Secure Hash Algorithm
| non_test | optimize sha and sha further see item some tricks useful for the secure hash algorithm | 0 |
404,420 | 11,857,206,757 | IssuesEvent | 2020-03-25 09:09:06 | thespacedoctor/sherlock | https://api.github.com/repos/thespacedoctor/sherlock | closed | The general problem | priority: 1 type: enhancement | Left in the eyeball list is a good, clean set of what is left after Michael/Owen have eyeballed all
https://star.pst.qub.ac.uk/sne/atlas4/followup/4/
This is a mix of UNCLEAR, ORPHAN and SN
They are all or nearly all misclassified. They are clearly stars.
But looking at the first 10-20 - they are all in the USNO B1, NOMAD and PPMXL as stars.
I now think the GSC is just rubbish and the classifications are clearly not trustworthy. We shouldn't be using. Let's immediately flip to the others.
| 1.0 | The general problem - Left in the eyeball list is a good, clean set of what is left after Michael/Owen have eyeballed all
https://star.pst.qub.ac.uk/sne/atlas4/followup/4/
This is a mix of UNCLEAR, ORPHAN and SN
They are all or nearly all misclassified. They are clearly stars.
But looking at the first 10-20 - they are all in the USNO B1, NOMAD and PPMXL as stars.
I now think the GSC is just rubbish and the classifications are clearly not trustworthy. We shouldn't be using. Let's immediately flip to the others.
| non_test | the general problem left in the eyeball list is a good clean set of what is left after michael owen have eyeballed all this is a mix of unclear orphan and sn they are all or nearly all misclassified they are clearly stars but looking at the first they are all in the usno nomad and ppmxl as stars i now think the gsc is just rubbish and the classifications are clearly not trustworthy we shouldn t be using let s immediately flip to the others | 0 |
71,976 | 7,274,385,812 | IssuesEvent | 2018-02-21 09:49:53 | teamdocs/kantu2018 | https://api.github.com/repos/teamdocs/kantu2018 | closed | If "no proxy" is selected the text of the proxy settings should be greyedout as well | Ready for Test | See the screenshot:

| 1.0 | If "no proxy" is selected the text of the proxy settings should be greyedout as well - See the screenshot:

| test | if no proxy is selected the text of the proxy settings should be greyedout as well see the screenshot | 1 |
100,490 | 4,097,294,680 | IssuesEvent | 2016-06-03 00:39:38 | USC-CSSL/TACIT | https://api.github.com/repos/USC-CSSL/TACIT | closed | Co-occurrence Analysis (Negative testing) | bug Medium Priority | Please find the input file and seed file along with this issue to reproduce

input:
[Cooccurrence-Analysis-run-report-06-01-16-14-37-07.txt](https://github.com/USC-CSSL/TACIT/files/296364/Cooccurrence-Analysis-run-report-06-01-16-14-37-07.txt)
Seed:
contains one word : Cooccurrence
The message at status bar says check at console, but nothing is mentioned in the console log.
| 1.0 | Co-occurrence Analysis (Negative testing) - Please find the input file and seed file along with this issue to reproduce

input:
[Cooccurrence-Analysis-run-report-06-01-16-14-37-07.txt](https://github.com/USC-CSSL/TACIT/files/296364/Cooccurrence-Analysis-run-report-06-01-16-14-37-07.txt)
Seed:
contains one word : Cooccurrence
The message at status bar says check at console, but nothing is mentioned in the console log.
| non_test | co occurrence analysis negative testing please find the input file and seed file along with this issue to reproduce input seed contains one word cooccurrence the message at status bar says check at console but nothing is mentioned in the console log | 0 |
283,886 | 24,569,487,532 | IssuesEvent | 2022-10-13 07:29:19 | mercedes-benz/sechub | https://api.github.com/repos/mercedes-benz/sechub | opened | Wiremock test for PDS adapter does not really check upload URL calling | bug testing | ## Situation
To have a red test before implementing #1689 I changed the wiremock test and tried to check for the `x-file-size` ... but .. the test did not fail.
After some analyze it became clear that the wiremock stubbing did work, but there was no automated verification (as it was in the past ?).
## Wanted
- Ensure upload really happens - but without code duplication in tests etc.
## Solution
- Implement a common usable and maintainable verification for testing wire mocks stubs has been called (or something else) inside sechub testing framework
- use it for upload mocking of PDS adapter wiremock tests | 1.0 | Wiremock test for PDS adapter does not really check upload URL calling - ## Situation
To have a red test before implementing #1689 I changed the wiremock test and tried to check for the `x-file-size` ... but .. the test did not fail.
After some analyze it became clear that the wiremock stubbing did work, but there was no automated verification (as it was in the past ?).
## Wanted
- Ensure upload really happens - but without code duplication in tests etc.
## Solution
- Implement a common usable and maintainable verification for testing wire mocks stubs has been called (or something else) inside sechub testing framework
- use it for upload mocking of PDS adapter wiremock tests | test | wiremock test for pds adapter does not really check upload url calling situation to have a red test before implementing i changed the wiremock test and tried to check for the x file size but the test did not fail after some analyze it became clear that the wiremock stubbing did work but there was no automated verification as it was in the past wanted ensure upload really happens but without code duplication in tests etc solution implement a common usable and maintainable verification for testing wire mocks stubs has been called or something else inside sechub testing framework use it for upload mocking of pds adapter wiremock tests | 1 |
545,132 | 15,936,896,739 | IssuesEvent | 2021-04-14 11:46:29 | hochschule-darmstadt/openartbrowser | https://api.github.com/repos/hochschule-darmstadt/openartbrowser | opened | Wrong related movements are displayed | bug good first issue medium priority small effort | **Describe the bug**
Related Movements are cached from the last movement. Therefore no new movements are loaded.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://openartbrowser.org/de/movement/Q4692
2. Click on 'Related Movements'
3. Click on eg. 'Italian Renaissance'
4. Reload Page
5. Click on 'Related Movements'
6. See error
**Expected behavior**
Only the related movements for the current movement should be shown in the 'Related Movements' tab.
**Additional context**
It probably helps to clear the related movements in `ngOnInit()`
| 1.0 | Wrong related movements are displayed - **Describe the bug**
Related Movements are cached from the last movement. Therefore no new movements are loaded.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://openartbrowser.org/de/movement/Q4692
2. Click on 'Related Movements'
3. Click on eg. 'Italian Renaissance'
4. Reload Page
5. Click on 'Related Movements'
6. See error
**Expected behavior**
Only the related movements for the current movement should be shown in the 'Related Movements' tab.
**Additional context**
It probably helps to clear the related movements in `ngOnInit()`
| non_test | wrong related movements are displayed describe the bug related movements are cached from the last movement therefore no new movements are loaded to reproduce steps to reproduce the behavior go to click on related movements click on eg italian renaissance reload page click on related movements see error expected behavior only the related movements for the current movement should be shown in the related movements tab additional context it probably helps to clear the related movements in ngoninit | 0 |
148,189 | 11,841,556,355 | IssuesEvent | 2020-03-23 21:01:48 | rapidsai/cuml | https://api.github.com/repos/rapidsai/cuml | reopened | [BUG] UMAP CI test failure | ? - Needs Triage bug gpuCI tests | The test `cuml/test/test_umap.py::test_umap_transform_on_iris` fails on CI. We were not able to reproduce the failure. This indicates that the specificities of GPU setup of the machine running the CI might be of importance.
The error appeared on a PR modifying the calculation of UMAP's exponential decay parameters : #1876
To reproduce the bug run `gpuCI/cuml/gpu-test` on commit 7e4f8b1d7109ecd92ffa7d9700a56a31a9f04be1 .
Environment details:
- Environment location: Docker
- Linux Distro/Architecture: CentOS 7 and Ubuntu 16.04 / 18.04 (amd64)
- GPU Model/Driver: single Tesla V100 with driver 440.33.01
- CUDA: 10.0 / 10.1 / 10.2
- Method of cuDF & cuML install: from source | 1.0 | [BUG] UMAP CI test failure - The test `cuml/test/test_umap.py::test_umap_transform_on_iris` fails on CI. We were not able to reproduce the failure. This indicates that the specificities of GPU setup of the machine running the CI might be of importance.
The error appeared on a PR modifying the calculation of UMAP's exponential decay parameters : #1876
To reproduce the bug run `gpuCI/cuml/gpu-test` on commit 7e4f8b1d7109ecd92ffa7d9700a56a31a9f04be1 .
Environment details:
- Environment location: Docker
- Linux Distro/Architecture: CentOS 7 and Ubuntu 16.04 / 18.04 (amd64)
- GPU Model/Driver: single Tesla V100 with driver 440.33.01
- CUDA: 10.0 / 10.1 / 10.2
- Method of cuDF & cuML install: from source | test | umap ci test failure the test cuml test test umap py test umap transform on iris fails on ci we were not able to reproduce the failure this indicates that the specificities of gpu setup of the machine running the ci might be of importance the error appeared on a pr modifying the calculation of umap s exponential decay parameters to reproduce the bug run gpuci cuml gpu test on commit environment details environment location docker linux distro architecture centos and ubuntu gpu model driver single tesla with driver cuda method of cudf cuml install from source | 1 |
280,176 | 30,805,131,713 | IssuesEvent | 2023-08-01 06:31:36 | Satheesh575555/linux-4.1.15 | https://api.github.com/repos/Satheesh575555/linux-4.1.15 | reopened | CVE-2015-7566 (Medium) detected in linuxlinux-4.6 | Mend: dependency security vulnerability | ## CVE-2015-7566 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The clie_5_attach function in drivers/usb/serial/visor.c in the Linux kernel through 4.4.1 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a bulk-out endpoint.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-7566>CVE-2015-7566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: v4.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-7566 (Medium) detected in linuxlinux-4.6 - ## CVE-2015-7566 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/usb/serial/visor.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The clie_5_attach function in drivers/usb/serial/visor.c in the Linux kernel through 4.4.1 allows physically proximate attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by inserting a USB device that lacks a bulk-out endpoint.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2015-7566>CVE-2015-7566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Physical
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2015-7566</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: v4.5-rc2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers usb serial visor c drivers usb serial visor c vulnerability details the clie attach function in drivers usb serial visor c in the linux kernel through allows physically proximate attackers to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by inserting a usb device that lacks a bulk out endpoint publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
24,765 | 4,108,334,555 | IssuesEvent | 2016-06-06 15:48:13 | 0mp/io-touchpad | https://api.github.com/repos/0mp/io-touchpad | opened | Tests create temporary files and don't delete them | component: tests type: enhancement | These are the files created by during tests and not deleted afterwards:
- `app/test/classifier/data/user-defined/distance-tolerance_a.dat`
- `app/test/databox/data/settings.pickle` | 1.0 | Tests create temporary files and don't delete them - These are the files created by during tests and not deleted afterwards:
- `app/test/classifier/data/user-defined/distance-tolerance_a.dat`
- `app/test/databox/data/settings.pickle` | test | tests create temporary files and don t delete them these are the files created by during tests and not deleted afterwards app test classifier data user defined distance tolerance a dat app test databox data settings pickle | 1 |
349,079 | 31,772,953,810 | IssuesEvent | 2023-09-12 12:54:47 | void-linux/void-packages | https://api.github.com/repos/void-linux/void-packages | opened | Fix/Add RTL8821AU Driver | bug needs-testing | ### Is this a new report?
Yes
### System Info
Void 6.4.14_1 x86_64 GenuineIntel notuptodate rrrrmmnFFFFFFFFF
### Package(s) Affected
rtl8812au-dkms-20210629_3
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/morrownr/8812au-20210629/issues/98
### Expected behaviour
The packaged driver is either supposed to support both RTL8812AU and RTL8821AU chipsets or have a separate package for the latter.
### Actual behaviour
The driver as it exists at the time being does not support the wifi adapter. I needed to install the driver present [here](https://github.com/morrownr/8821au-20210708) instead to support the wifi adapter.
### Steps to reproduce
1. `$ sudo xbps-install -S rtl8812au-dkms`
2. `sudo reboot`
3. `$ iwconfig` to check if interface shows up.
4. It does not show up. Nor does the wifi adapter LED blink. | 1.0 | Fix/Add RTL8821AU Driver - ### Is this a new report?
Yes
### System Info
Void 6.4.14_1 x86_64 GenuineIntel notuptodate rrrrmmnFFFFFFFFF
### Package(s) Affected
rtl8812au-dkms-20210629_3
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/morrownr/8812au-20210629/issues/98
### Expected behaviour
The packaged driver is either supposed to support both RTL8812AU and RTL8821AU chipsets or have a separate package for the latter.
### Actual behaviour
The driver as it exists at the time being does not support the wifi adapter. I needed to install the driver present [here](https://github.com/morrownr/8821au-20210708) instead to support the wifi adapter.
### Steps to reproduce
1. `$ sudo xbps-install -S rtl8812au-dkms`
2. `sudo reboot`
3. `$ iwconfig` to check if interface shows up.
4. It does not show up. Nor does the wifi adapter LED blink. | test | fix add driver is this a new report yes system info void genuineintel notuptodate rrrrmmnfffffffff package s affected dkms does a report exist for this bug with the project s home upstream and or another distro expected behaviour the packaged driver is either supposed to support both and chipsets or have a separate package for the latter actual behaviour the driver as it exists at the time being does not support the wifi adapter i needed to install the driver present instead to support the wifi adapter steps to reproduce sudo xbps install s dkms sudo reboot iwconfig to check if interface shows up it does not show up nor does the wifi adapter led blink | 1 |
14,240 | 3,387,027,298 | IssuesEvent | 2015-11-28 01:15:43 | pawelczak/EasyAutocomplete | https://api.github.com/repos/pawelczak/EasyAutocomplete | closed | Tests::template - integration tests custom template | test | Missing intergration tests for custom template. | 1.0 | Tests::template - integration tests custom template - Missing intergration tests for custom template. | test | tests template integration tests custom template missing intergration tests for custom template | 1 |
80,469 | 7,748,559,184 | IssuesEvent | 2018-05-30 08:42:43 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: roachmart/partition=true failed on release-2.0 | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/32b7aa635af34c5b150abba9df1cd51a5fafe804
Parameters:
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=686347&tab=buildLog
```
cluster.go:678,roachmart.go:27,roachmart.go:81: /home/agent/work/.go/bin/roachprod start teamcity-686347-roachmart-partition-true --encrypt: exit status 1
``` | 1.0 | roachtest: roachmart/partition=true failed on release-2.0 - SHA: https://github.com/cockroachdb/cockroach/commits/32b7aa635af34c5b150abba9df1cd51a5fafe804
Parameters:
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=686347&tab=buildLog
```
cluster.go:678,roachmart.go:27,roachmart.go:81: /home/agent/work/.go/bin/roachprod start teamcity-686347-roachmart-partition-true --encrypt: exit status 1
``` | test | roachtest roachmart partition true failed on release sha parameters failed test cluster go roachmart go roachmart go home agent work go bin roachprod start teamcity roachmart partition true encrypt exit status | 1 |
240,447 | 20,031,648,027 | IssuesEvent | 2022-02-02 07:04:50 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: tpch_concurrency failed | C-test-failure O-robot O-roachtest branch-master T-sql-queries | roachtest.tpch_concurrency [failed](https://server-url-not-found-in-env.com/viewLog.html?buildId=4263825&tab=buildLog) with [artifacts](https://server-url-not-found-in-env.com/viewLog.html?buildId=4263825&tab=artifacts#/tpch_concurrency) on master @ [8548987813ff9e1b8a9878023d3abfc6911c16db](https://github.com/cockroachdb/cockroach/commits/8548987813ff9e1b8a9878023d3abfc6911c16db):
```
The test failed on branch=master, cloud=gce:
test timed out (see artifacts for details)
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpch_concurrency.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: tpch_concurrency failed - roachtest.tpch_concurrency [failed](https://server-url-not-found-in-env.com/viewLog.html?buildId=4263825&tab=buildLog) with [artifacts](https://server-url-not-found-in-env.com/viewLog.html?buildId=4263825&tab=artifacts#/tpch_concurrency) on master @ [8548987813ff9e1b8a9878023d3abfc6911c16db](https://github.com/cockroachdb/cockroach/commits/8548987813ff9e1b8a9878023d3abfc6911c16db):
```
The test failed on branch=master, cloud=gce:
test timed out (see artifacts for details)
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpch_concurrency.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| test | roachtest tpch concurrency failed roachtest tpch concurrency with on master the test failed on branch master cloud gce test timed out see artifacts for details help see see cc cockroachdb sql queries | 1 |
167,220 | 13,015,663,143 | IssuesEvent | 2020-07-26 00:39:37 | nasa/osal | https://api.github.com/repos/nasa/osal | closed | Functional Timer Test Hard Codes Configuration Value | enhancement unit testing | The functional timer test (/src/tests/timer-test/timer-test.c) hard codes the number of timers to 4, which may be higher than the OS_MAX_TIMERS configuration set in osconfig.h.
It is recommended to update the test to use the OS_MAX_TIMERS configuration macro or add protection with an #if guard. | 1.0 | Functional Timer Test Hard Codes Configuration Value - The functional timer test (/src/tests/timer-test/timer-test.c) hard codes the number of timers to 4, which may be higher than the OS_MAX_TIMERS configuration set in osconfig.h.
It is recommended to update the test to use the OS_MAX_TIMERS configuration macro or add protection with an #if guard. | test | functional timer test hard codes configuration value the functional timer test src tests timer test timer test c hard codes the number of timers to which may be higher than the os max timers configuration set in osconfig h it is recommended to update the test to use the os max timers configuration macro or add protection with an if guard | 1 |
170,199 | 13,178,082,490 | IssuesEvent | 2020-08-12 08:32:37 | TerryCavanagh/diceydungeons.com | https://api.github.com/repos/TerryCavanagh/diceydungeons.com | closed | Version of halloweenspecial bundled with 1.9 preview is an earlier build than what was actually used for the Halloween event | reported in v1.9 (Testing Round 1) | For example, these bugs are still present:
TerryCavanagh/diceydungeonsbeta/issues/762 (Hole Puncher, Shredder, and Polarity Flip have no upgraded or downgraded versions in-game)
TerryCavanagh/diceydungeonsbeta/issues/741 (Mana and Fury need to reset at the start of each turn in the Witch episode)
TerryCavanagh/diceydungeonsbeta/issues/728 (Solenoid appears to start at 0 damage) | 1.0 | Version of halloweenspecial bundled with 1.9 preview is an earlier build than what was actually used for the Halloween event - For example, these bugs are still present:
TerryCavanagh/diceydungeonsbeta/issues/762 (Hole Puncher, Shredder, and Polarity Flip have no upgraded or downgraded versions in-game)
TerryCavanagh/diceydungeonsbeta/issues/741 (Mana and Fury need to reset at the start of each turn in the Witch episode)
TerryCavanagh/diceydungeonsbeta/issues/728 (Solenoid appears to start at 0 damage) | test | version of halloweenspecial bundled with preview is an earlier build than what was actually used for the halloween event for example these bugs are still present terrycavanagh diceydungeonsbeta issues hole puncher shredder and polarity flip have no upgraded or downgraded versions in game terrycavanagh diceydungeonsbeta issues mana and fury need to reset at the start of each turn in the witch episode terrycavanagh diceydungeonsbeta issues solenoid appears to start at damage | 1 |
260,165 | 8,204,646,632 | IssuesEvent | 2018-09-03 07:27:48 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | musiczum.com - see bug description | browser-firefox-mobile priority-normal | <!-- @browser: Firefox Mobile 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://musiczum.com/registration?theme=m-2-pantherNFP
**Browser / Version**: Firefox Mobile 63.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: spam
**Steps to Reproduce**:
Tried closing app
Restarted phone
Closed all tabs
Tried blocking site
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | musiczum.com - see bug description - <!-- @browser: Firefox Mobile 63.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:63.0) Gecko/63.0 Firefox/63.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://musiczum.com/registration?theme=m-2-pantherNFP
**Browser / Version**: Firefox Mobile 63.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: spam
**Steps to Reproduce**:
Tried closing app
Restarted phone
Closed all tabs
Tried blocking site
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | musiczum com see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description spam steps to reproduce tried closing app restarted phone closed all tabs tried blocking site from with ❤️ | 0 |
260,441 | 19,668,938,315 | IssuesEvent | 2022-01-11 03:41:13 | odpf/raccoon | https://api.github.com/repos/odpf/raccoon | closed | Documentation for protocol agnostic Raccoon | documentation | Problem
New features have been added to Raccoon that allows clients to send data using HTTP/gRPC along with the support for multiple data formatting options like JSON. It has also resulted in a new code design which is also missing in the existing documentation.
Is there any workaround?
NA
What is the impact?
NA
Which version was this found?
NA
Solution
Add the missing documentation or change the existing one where required. | 1.0 | Documentation for protocol agnostic Raccoon - Problem
New features have been added to Raccoon that allows clients to send data using HTTP/gRPC along with the support for multiple data formatting options like JSON. It has also resulted in a new code design which is also missing in the existing documentation.
Is there any workaround?
NA
What is the impact?
NA
Which version was this found?
NA
Solution
Add the missing documentation or change the existing one where required. | non_test | documentation for protocol agnostic raccoon problem new features have been added to raccoon that allows clients to send data using http grpc along with the support for multiple data formatting options like json it has also resulted in a new code design which is also missing in the existing documentation is there any workaround na what is the impact na which version was this found na solution add the missing documentation or change the existing one where required | 0 |
144,353 | 11,613,413,018 | IssuesEvent | 2020-02-26 10:40:59 | hypergraph-xyz/cli | https://api.github.com/repos/hypergraph-xyz/cli | closed | Follow & unfollow profile | feature in progress step: test | **User story**
As a Hypergraph user
I want to be able to follow a profile
so that I can stay up to date with that person's content
**Workflow**
- [x] Code
- [ ] Test
**Acceptance criteria**
- `p2pcommons.follows` metadata can be manipulated by the module's owner by adding or removing valid Dat archive keys (see [specs](https://github.com/p2pcommons/specs/blob/master/module.md#keyvalues))
- `p2pcommons.follows` may only be manipulated on profile modules
- `p2pcommons.follows` may only refer to profile modules
- `p2pcommons.follows` may be versioned | 1.0 | Follow & unfollow profile - **User story**
As a Hypergraph user
I want to be able to follow a profile
so that I can stay up to date with that person's content
**Workflow**
- [x] Code
- [ ] Test
**Acceptance criteria**
- `p2pcommons.follows` metadata can be manipulated by the module's owner by adding or removing valid Dat archive keys (see [specs](https://github.com/p2pcommons/specs/blob/master/module.md#keyvalues))
- `p2pcommons.follows` may only be manipulated on profile modules
- `p2pcommons.follows` may only refer to profile modules
- `p2pcommons.follows` may be versioned | test | follow unfollow profile user story as a hypergraph user i want to be able to follow a profile so that i can stay up to date with that person s content workflow code test acceptance criteria follows metadata can be manipulated by the module s owner by adding or removing valid dat archive keys see follows may only be manipulated on profile modules follows may only refer to profile modules follows may be versioned | 1 |
186,767 | 21,969,450,655 | IssuesEvent | 2022-05-25 01:22:01 | AOSC-Dev/aosc-os-abbs | https://api.github.com/repos/AOSC-Dev/aosc-os-abbs | opened | thunderbird: security update to 91.9.1 | upgrade security 0day | ### CVE IDs
**91.9.1** CVE-2022-1529, CVE-2022-1802 **91.9** CVE-2022-1520, CVE-2022-29909, CVE-2022-29911, CVE-2022-29912, CVE-2022-29913, CVE-2022-29914, CVE-2022-29916, CVE-2022-29917 **91.8** CVE-2022-1097, CVE-2022-1196, CVE-2022-1197, CVE-2022-24713, CVE-2022-28281, CVE-2022-28282, CVE-2022-28285, CVE-2022-28286, CVE-2022-28289
### Other security advisory IDs
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/
### Description
See:
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/
### Patches
N/A
### PoC(s)
See:
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/ | True | thunderbird: security update to 91.9.1 - ### CVE IDs
**91.9.1** CVE-2022-1529, CVE-2022-1802 **91.9** CVE-2022-1520, CVE-2022-29909, CVE-2022-29911, CVE-2022-29912, CVE-2022-29913, CVE-2022-29914, CVE-2022-29916, CVE-2022-29917 **91.8** CVE-2022-1097, CVE-2022-1196, CVE-2022-1197, CVE-2022-24713, CVE-2022-28281, CVE-2022-28282, CVE-2022-28285, CVE-2022-28286, CVE-2022-28289
### Other security advisory IDs
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/
### Description
See:
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/
### Patches
N/A
### PoC(s)
See:
- MFSA2022-15 (91.8): https://www.mozilla.org/en-US/security/advisories/mfsa2022-15/
- MFSA2022-18 (91.9): https://www.mozilla.org/en-US/security/advisories/mfsa2022-18/
- MFSA2022-19 (91.9.1): https://www.mozilla.org/en-US/security/advisories/mfsa2022-19/ | non_test | thunderbird security update to cve ids cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve cve other security advisory ids description see patches n a poc s see | 0 |
144,459 | 22,356,150,874 | IssuesEvent | 2022-06-15 15:48:40 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [TSVB]: Explore adding text before or after table links | release_note:enhancement Feature:TSVB Project:Accessibility triage_needed design Team:VisEditors accessibility: cognition | **Describe the feature:**
Some users identified an accessibility issue where multiple link names have different URLs. Users can select a column of data and apply links to those by adding a pattern like `https://website_url/{{key}}` in the Item URL field. This can become a challenge for screen readers because users listen to a link like "Atlanta" that links to multiple different data series or visuals.
I'm proposing an additional few fields that would allow users to enter a string of text before or after the data being used to create a link. Screenshot with mocked up visual attached below.
**Describe a specific use case for the feature:**
* Create tables with links that are more descriptive of what the person will be looking at when they click or press ENTER
* If visual text is not a viable option, could we add an `aria-label` attribute so the semantic label could be `ATL Discovery log` instead of just `ATL`?
---
<img width="1147" alt="Screen Shot 2021-11-30 at 2 47 39 PM" src="https://user-images.githubusercontent.com/934879/144128663-b3486405-eae8-4417-97bf-01f57056896c.png">
| 1.0 | [TSVB]: Explore adding text before or after table links - **Describe the feature:**
Some users identified an accessibility issue where multiple link names have different URLs. Users can select a column of data and apply links to those by adding a pattern like `https://website_url/{{key}}` in the Item URL field. This can become a challenge for screen readers because users listen to a link like "Atlanta" that links to multiple different data series or visuals.
I'm proposing an additional few fields that would allow users to enter a string of text before or after the data being used to create a link. Screenshot with mocked up visual attached below.
**Describe a specific use case for the feature:**
* Create tables with links that are more descriptive of what the person will be looking at when they click or press ENTER
* If visual text is not a viable option, could we add an `aria-label` attribute so the semantic label could be `ATL Discovery log` instead of just `ATL`?
---
<img width="1147" alt="Screen Shot 2021-11-30 at 2 47 39 PM" src="https://user-images.githubusercontent.com/934879/144128663-b3486405-eae8-4417-97bf-01f57056896c.png">
| non_test | explore adding text before or after table links describe the feature some users identified an accessibility issue where multiple link names have different urls users can select a column of data and apply links to those by adding a pattern like in the item url field this can become a challenge for screen readers because users listen to a link like atlanta that links to multiple different data series or visuals i m proposing an additional few fields that would allow users to enter a string of text before or after the data being used to create a link screenshot with mocked up visual attached below describe a specific use case for the feature create tables with links that are more descriptive of what the person will be looking at when they click or press enter if visual text is not a viable option could we add an aria label attribute so the semantic label could be atl discovery log instead of just atl img width alt screen shot at pm src | 0 |
230,442 | 18,669,709,921 | IssuesEvent | 2021-10-30 13:24:21 | feelpp/feelpp | https://api.github.com/repos/feelpp/feelpp | closed | FTT in test_modelproperties | type: ftt component: testsuite | exception is thrown when reading the json with `ModelProperties` in `BoundaryConditions`
I put the gdb info and how to reproduce it
in gdb I use the follow commands:
```
catch throw
r --log-level=all -- --config-file modelproperties.cfg
```
here is the log
```shell
> gdb feelpp_test_modelproperties
...
[ Starting Feel++ ] application test_modelproperties version 0.109.0-rc.20 date 2021-Oct-28
. test_modelproperties files are stored in /home/prudhomm/feel/feelpp_test_modelproperties/np_1
.. logfiles :/home/prudhomm/feel/feelpp_test_modelproperties/np_1/logs
setup Feel++
Running 5 test cases...
Entering test module "model properties testsuite"
/home/prudhomm/Devel/Develop/feelpp/testsuite/feelmodels/test_modelproperties.cpp(60): Entering test suite "modelproperties"
/home/prudhomm/Devel/Develop/feelpp/testsuite/feelmodels/test_modelproperties.cpp(62): Entering test case "test_materials"
[loadMesh] no file name or unrecognized extension provided
[loadMesh] automatically generating amesh from gmsh.domain.shape in format geo+msh: "hypercube".geo
[modelProperties] Loading Model Properties : "/home/prudhomm/Devel/Develop/feelpp/build/dbg/testsuite/feelmodels/test.feelpp"
Thread 1 "feelpp_test_mod" hit Catchpoint 1 (exception thrown), 0x00007ffff40f1672 in __cxa_throw ()
from /lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) up
#1 0x0000000000531b1e in boost::throw_exception<boost::exception_detail::error_info_injector<boost::property_tree::ptree_bad_path> > (e=...) at /usr/include/boost/throw_exception.hpp:69
69 throw enable_current_exception(enable_error_info(e));
(gdb) up
#2 0x0000000000530e04 in boost::exception_detail::throw_exception_<boost::property_tree::ptree_bad_path> (x=..., current_function=<optimized out>, file=<optimized out>, line=<optimized out>)
at /usr/include/boost/throw_exception.hpp:86
86 boost::throw_exception(
(gdb) up
#3 0x0000000000530c14 in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get_child (this=<optimized out>, path=...)
at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:576
576 BOOST_PROPERTY_TREE_THROW(ptree_bad_path("No such node", path));
(gdb) up
#4 0x00007ffff5d708a6 in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get_child (this=0x17ee520, path=...)
at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:585
585 return const_cast<self_type*>(this)->get_child(path);
(gdb) up
#5 0x00007ffff5e82a3f in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > (
this=0x17ee520, path=...) at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:751
751 return get_child(path).BOOST_NESTED_TEMPLATE get_value<Type>();
(gdb) up
#6 0x00007ffff6f7a8da in Feel::BoundaryConditions::setup (this=0x7fffffffbdc0)
at /home/prudhomm/Devel/Develop/feelpp/feelpp/feel/feelpde/boundaryconditions.cpp:117
117 auto e= c.second.get<std::string>("expr");
(gdb) up
#7 0x00007ffff6f7b4b5 in Feel::BoundaryConditions::setPTree (this=0x7fffffffbdc0, p=...)
at /home/prudhomm/Devel/Develop/feelpp/feelpp/feel/feelpde/boundaryconditions.cpp:74
74 setup();
(gdb)
```
@romainhild test_modelproperties fails for me, could you check please ?
| 1.0 | FTT in test_modelproperties - exception is thrown when reading the json with `ModelProperties` in `BoundaryConditions`
I put the gdb info and how to reproduce it
in gdb I use the follow commands:
```
catch throw
r --log-level=all -- --config-file modelproperties.cfg
```
here is the log
```shell
> gdb feelpp_test_modelproperties
...
[ Starting Feel++ ] application test_modelproperties version 0.109.0-rc.20 date 2021-Oct-28
. test_modelproperties files are stored in /home/prudhomm/feel/feelpp_test_modelproperties/np_1
.. logfiles :/home/prudhomm/feel/feelpp_test_modelproperties/np_1/logs
setup Feel++
Running 5 test cases...
Entering test module "model properties testsuite"
/home/prudhomm/Devel/Develop/feelpp/testsuite/feelmodels/test_modelproperties.cpp(60): Entering test suite "modelproperties"
/home/prudhomm/Devel/Develop/feelpp/testsuite/feelmodels/test_modelproperties.cpp(62): Entering test case "test_materials"
[loadMesh] no file name or unrecognized extension provided
[loadMesh] automatically generating amesh from gmsh.domain.shape in format geo+msh: "hypercube".geo
[modelProperties] Loading Model Properties : "/home/prudhomm/Devel/Develop/feelpp/build/dbg/testsuite/feelmodels/test.feelpp"
Thread 1 "feelpp_test_mod" hit Catchpoint 1 (exception thrown), 0x00007ffff40f1672 in __cxa_throw ()
from /lib/x86_64-linux-gnu/libstdc++.so.6
(gdb) up
#1 0x0000000000531b1e in boost::throw_exception<boost::exception_detail::error_info_injector<boost::property_tree::ptree_bad_path> > (e=...) at /usr/include/boost/throw_exception.hpp:69
69 throw enable_current_exception(enable_error_info(e));
(gdb) up
#2 0x0000000000530e04 in boost::exception_detail::throw_exception_<boost::property_tree::ptree_bad_path> (x=..., current_function=<optimized out>, file=<optimized out>, line=<optimized out>)
at /usr/include/boost/throw_exception.hpp:86
86 boost::throw_exception(
(gdb) up
#3 0x0000000000530c14 in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get_child (this=<optimized out>, path=...)
at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:576
576 BOOST_PROPERTY_TREE_THROW(ptree_bad_path("No such node", path));
(gdb) up
#4 0x00007ffff5d708a6 in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get_child (this=0x17ee520, path=...)
at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:585
585 return const_cast<self_type*>(this)->get_child(path);
(gdb) up
#5 0x00007ffff5e82a3f in boost::property_tree::basic_ptree<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::get<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > (
this=0x17ee520, path=...) at /usr/include/boost/property_tree/detail/ptree_implementation.hpp:751
751 return get_child(path).BOOST_NESTED_TEMPLATE get_value<Type>();
(gdb) up
#6 0x00007ffff6f7a8da in Feel::BoundaryConditions::setup (this=0x7fffffffbdc0)
at /home/prudhomm/Devel/Develop/feelpp/feelpp/feel/feelpde/boundaryconditions.cpp:117
117 auto e= c.second.get<std::string>("expr");
(gdb) up
#7 0x00007ffff6f7b4b5 in Feel::BoundaryConditions::setPTree (this=0x7fffffffbdc0, p=...)
at /home/prudhomm/Devel/Develop/feelpp/feelpp/feel/feelpde/boundaryconditions.cpp:74
74 setup();
(gdb)
```
@romainhild test_modelproperties fails for me, could you check please ?
| test | ftt in test modelproperties exception is thrown when reading the json with modelproperties in boundaryconditions i put the gdb info and how to reproduce it in gdb i use the follow commands catch throw r log level all config file modelproperties cfg here is the log shell gdb feelpp test modelproperties application test modelproperties version rc date oct test modelproperties files are stored in home prudhomm feel feelpp test modelproperties np logfiles home prudhomm feel feelpp test modelproperties np logs setup feel running test cases entering test module model properties testsuite home prudhomm devel develop feelpp testsuite feelmodels test modelproperties cpp entering test suite modelproperties home prudhomm devel develop feelpp testsuite feelmodels test modelproperties cpp entering test case test materials no file name or unrecognized extension provided automatically generating amesh from gmsh domain shape in format geo msh hypercube geo loading model properties home prudhomm devel develop feelpp build dbg testsuite feelmodels test feelpp thread feelpp test mod hit catchpoint exception thrown in cxa throw from lib linux gnu libstdc so gdb up in boost throw exception e at usr include boost throw exception hpp throw enable current exception enable error info e gdb up in boost exception detail throw exception x current function file line at usr include boost throw exception hpp boost throw exception gdb up in boost property tree basic ptree std allocator std basic string std allocator std less std allocator get child this path at usr include boost property tree detail ptree implementation hpp boost property tree throw ptree bad path no such node path gdb up in boost property tree basic ptree std allocator std basic string std allocator std less std allocator get child this path at usr include boost property tree detail ptree implementation hpp return const cast this get child path gdb up in boost property tree basic ptree std allocator std basic string std allocator std less std allocator get std allocator this path at usr include boost property tree detail ptree implementation hpp return get child path boost nested template get value gdb up in feel boundaryconditions setup this at home prudhomm devel develop feelpp feelpp feel feelpde boundaryconditions cpp auto e c second get expr gdb up in feel boundaryconditions setptree this p at home prudhomm devel develop feelpp feelpp feel feelpde boundaryconditions cpp setup gdb romainhild test modelproperties fails for me could you check please | 1 |
622,761 | 19,656,206,829 | IssuesEvent | 2022-01-10 12:47:48 | MSFS-Mega-Pack/MSFS2020-livery-manager | https://api.github.com/repos/MSFS-Mega-Pack/MSFS2020-livery-manager | closed | [BUG] Livery CDN offline | bug type: backend priority: MEDIUM | ## Description
Hello a I have a issue my livries manager is stuck a downloading 1 of X and I restarte the pc the program and everything and still stuck
<!-- Describe the bug you're reporting -->
## To reproduce
<!-- Write a clear list of steps to take to reproduce the bug -->
1. Click this
2. Type that
3. Turn off your PC
## Environment
<!-- Find your manager version at the bottom of the Settings tab -->
**Manager version:**
**FS2020 Version:**
## Screenshots or videos
<details>
<summary>Click to expand</summary>
<!-- upload any screenshots or recordings demonstrating the issue here-->
</details>
| 1.0 | [BUG] Livery CDN offline - ## Description
Hello a I have a issue my livries manager is stuck a downloading 1 of X and I restarte the pc the program and everything and still stuck
<!-- Describe the bug you're reporting -->
## To reproduce
<!-- Write a clear list of steps to take to reproduce the bug -->
1. Click this
2. Type that
3. Turn off your PC
## Environment
<!-- Find your manager version at the bottom of the Settings tab -->
**Manager version:**
**FS2020 Version:**
## Screenshots or videos
<details>
<summary>Click to expand</summary>
<!-- upload any screenshots or recordings demonstrating the issue here-->
</details>
| non_test | livery cdn offline description hello a i have a issue my livries manager is stuck a downloading of x and i restarte the pc the program and everything and still stuck to reproduce click this type that turn off your pc environment manager version version screenshots or videos click to expand | 0 |
439,128 | 12,677,971,205 | IssuesEvent | 2020-06-19 08:54:01 | GoogleContainerTools/skaffold | https://api.github.com/repos/GoogleContainerTools/skaffold | closed | Custom builder command-line is not shown | area/cli kind/feature-request priority/p3 | ### Expected behavior
`skaffold xxx -v debug` usually shows the executed commands.
### Actual behavior
```console
$ skaffold build -v trace
[...]
Found [minikube] context, using local docker daemon.
Building [skaffold-custom-jib]...
DEBU[0000] Executing template &{envTemplate 0xc00012e100 0xc0007ffa00 } with environment map[BINPATH:/Users/bdealwis/bin BLOCKSIZE:1024 EDITOR:vi GIT_EDITOR:vim HOME:/Users/bdealwis LANG:en_CA.UTF-8 LESS:Remq LOGNAME:bdealwis PATH:/Users/bdealwis/bin/Darwin:/Users/bdealwis/bin:/Users/bdealwis/installs/bin:/Users/bdealwis/go/bin:/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin PWD:/Users/bdealwis/Projects/Skaffold/master/examples/jib SHELL:/bin/zsh SHLVL:1 USER:bdealwis VISUAL:vi]
TRAC[0000] latest skaffold version: v1.11.0
INFO[0000] update check failed: get latest and current Skaffold version: parsing current semver, skipping update check: parsing semver: Version string empty
DEBU[0000] could not parse data
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------< org.skaffold:hello-spring-boot >-------------------
```
I guess I can look at the `skaffold.yaml` and figure it out, but it would be useful to see the actual command-line.
### Information
- Skaffold version: head
- Operating system: macOS
- Contents of skaffold.yaml:
```yaml
apiVersion: skaffold/v2beta5
kind: Config
build:
artifacts:
- image: skaffold-custom-jib
custom:
buildCommand: ./mvnw package $(if [ $PUSH_IMAGE = true ]; then echo jib:build; else echo jib:dockerBuild; fi) -Djib.to.image=$IMAGE
dependencies:
paths:
- src/**
```
| 1.0 | Custom builder command-line is not shown - ### Expected behavior
`skaffold xxx -v debug` usually shows the executed commands.
### Actual behavior
```console
$ skaffold build -v trace
[...]
Found [minikube] context, using local docker daemon.
Building [skaffold-custom-jib]...
DEBU[0000] Executing template &{envTemplate 0xc00012e100 0xc0007ffa00 } with environment map[BINPATH:/Users/bdealwis/bin BLOCKSIZE:1024 EDITOR:vi GIT_EDITOR:vim HOME:/Users/bdealwis LANG:en_CA.UTF-8 LESS:Remq LOGNAME:bdealwis PATH:/Users/bdealwis/bin/Darwin:/Users/bdealwis/bin:/Users/bdealwis/installs/bin:/Users/bdealwis/go/bin:/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin PWD:/Users/bdealwis/Projects/Skaffold/master/examples/jib SHELL:/bin/zsh SHLVL:1 USER:bdealwis VISUAL:vi]
TRAC[0000] latest skaffold version: v1.11.0
INFO[0000] update check failed: get latest and current Skaffold version: parsing current semver, skipping update check: parsing semver: Version string empty
DEBU[0000] could not parse data
[INFO] Scanning for projects...
[INFO]
[INFO] -------------------< org.skaffold:hello-spring-boot >-------------------
```
I guess I can look at the `skaffold.yaml` and figure it out, but it would be useful to see the actual command-line.
### Information
- Skaffold version: head
- Operating system: macOS
- Contents of skaffold.yaml:
```yaml
apiVersion: skaffold/v2beta5
kind: Config
build:
artifacts:
- image: skaffold-custom-jib
custom:
buildCommand: ./mvnw package $(if [ $PUSH_IMAGE = true ]; then echo jib:build; else echo jib:dockerBuild; fi) -Djib.to.image=$IMAGE
dependencies:
paths:
- src/**
```
| non_test | custom builder command line is not shown expected behavior skaffold xxx v debug usually shows the executed commands actual behavior console skaffold build v trace found context using local docker daemon building debu executing template envtemplate with environment map trac latest skaffold version info update check failed get latest and current skaffold version parsing current semver skipping update check parsing semver version string empty debu could not parse data scanning for projects i guess i can look at the skaffold yaml and figure it out but it would be useful to see the actual command line information skaffold version head operating system macos contents of skaffold yaml yaml apiversion skaffold kind config build artifacts image skaffold custom jib custom buildcommand mvnw package if then echo jib build else echo jib dockerbuild fi djib to image image dependencies paths src | 0 |
193,268 | 14,645,304,689 | IssuesEvent | 2020-12-26 06:45:39 | GTNewHorizons/GT-New-Horizons-Modpack | https://api.github.com/repos/GTNewHorizons/GT-New-Horizons-Modpack | closed | Should crushed ore oredict be changed? | Mod: AE2 Mod: EC2 Mod: GT Status: RFC (request for comment) Status: need to be tested Type: oredict | Apparently when you use the EC2 ore dictionary export bus, if you specify crushed*, it will pick up not only crushed ores, but crushed purified and crushed centrifuged. My suggestion is to change (for example) crushedIron -> crushedOreIron or something like that.
Any thoughts? Or is this not a real problem? | 1.0 | Should crushed ore oredict be changed? - Apparently when you use the EC2 ore dictionary export bus, if you specify crushed*, it will pick up not only crushed ores, but crushed purified and crushed centrifuged. My suggestion is to change (for example) crushedIron -> crushedOreIron or something like that.
Any thoughts? Or is this not a real problem? | test | should crushed ore oredict be changed apparently when you use the ore dictionary export bus if you specify crushed it will pick up not only crushed ores but crushed purified and crushed centrifuged my suggestion is to change for example crushediron crushedoreiron or something like that any thoughts or is this not a real problem | 1 |
89,640 | 25,864,015,208 | IssuesEvent | 2022-12-13 19:12:56 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | opened | [Task]: Enable auto height by default for JSONForm widget | UI Builders Pod Task Auto Height | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
- Enable auto height by default for JSONForm widget. | 1.0 | [Task]: Enable auto height by default for JSONForm widget - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### SubTasks
- Enable auto height by default for JSONForm widget. | non_test | enable auto height by default for jsonform widget is there an existing issue for this i have searched the existing issues subtasks enable auto height by default for jsonform widget | 0 |
322,439 | 27,604,201,263 | IssuesEvent | 2023-03-09 11:56:24 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Flaky test create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 | Hard Deletes 🤖 Flaky Test Report | <!-- This issue can be renamed, but do not change the next comment! -->
<!-- title: Flaky test create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 -->
Flaky tests were detected in [this test run](https://github.com/tgstation/tgstation/actions/runs/4369750555/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 at code/modules/unit_tests/create_and_destroy.dm:182
```
| 1.0 | Flaky test create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 - <!-- This issue can be renamed, but do not change the next comment! -->
<!-- title: Flaky test create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 -->
Flaky tests were detected in [this test run](https://github.com/tgstation/tgstation/actions/runs/4369750555/attempts/1). This means that there was a failure that was cleared when the tests were simply restarted.
Failures:
```
create_and_destroy: /obj/machinery/rnd/production/protolathe/department/service hard deleted 1 times out of a total del count of 1 at code/modules/unit_tests/create_and_destroy.dm:182
```
| test | flaky test create and destroy obj machinery rnd production protolathe department service hard deleted times out of a total del count of flaky tests were detected in this means that there was a failure that was cleared when the tests were simply restarted failures create and destroy obj machinery rnd production protolathe department service hard deleted times out of a total del count of at code modules unit tests create and destroy dm | 1 |
245,929 | 7,892,881,685 | IssuesEvent | 2018-06-28 16:13:18 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | opened | [studio] Support MediaConvert and the new MediaServices in AWS | new feature priority: medium | Add new APIs to support MediaConvert in addition to ElasticTranscoder and document.
Add additional mechanics for configuration of these new services. | 1.0 | [studio] Support MediaConvert and the new MediaServices in AWS - Add new APIs to support MediaConvert in addition to ElasticTranscoder and document.
Add additional mechanics for configuration of these new services. | non_test | support mediaconvert and the new mediaservices in aws add new apis to support mediaconvert in addition to elastictranscoder and document add additional mechanics for configuration of these new services | 0 |
292,432 | 25,213,766,964 | IssuesEvent | 2022-11-14 07:21:30 | quarkusio/quarkus | https://api.github.com/repos/quarkusio/quarkus | closed | @TestTransaction does not work for @Nested tests | kind/bug area/testing | ### Describe the bug
When you have lots of tests you might want to group them e.g. by method using a `@Nested` inner class.
```
@TestTransaction
@QuarkusTest
class MyServiceTest {
@Nested
class Method1 {
@Test
void myTest(){
// test implementation
}
}
}
```
This seems to break the functionality of `@TestTransaction`. All the nested test will run without a transaction, resulting in `Transaction is not active, consider adding @Transactional to your method to automatically activate one.`
It doesn't matter if the `@TestTransaction` is on the top test class `MyServiceTest `, on the nested class `Method1 ` or on the specific nested test `myTest`.
Currently, the fallback is to use `QuarkusTransaction` to manually create and rollback transactions.
### Expected behavior
When using `@TestTransaction` on the top class, nested test should run within a test transaction.
### Actual behavior
No transaction is open when a nested test is executed.
### How to Reproduce?
_No response_
### Output of `uname -a` or `ver`
_No response_
### Output of `java -version`
_No response_
### GraalVM version (if different from Java)
_No response_
### Quarkus version or git rev
_No response_
### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
_No response_
### Additional information
_No response_ | 1.0 | @TestTransaction does not work for @Nested tests - ### Describe the bug
When you have lots of tests you might want to group them e.g. by method using a `@Nested` inner class.
```
@TestTransaction
@QuarkusTest
class MyServiceTest {
@Nested
class Method1 {
@Test
void myTest(){
// test implementation
}
}
}
```
This seems to break the functionality of `@TestTransaction`. All the nested test will run without a transaction, resulting in `Transaction is not active, consider adding @Transactional to your method to automatically activate one.`
It doesn't matter if the `@TestTransaction` is on the top test class `MyServiceTest `, on the nested class `Method1 ` or on the specific nested test `myTest`.
Currently, the fallback is to use `QuarkusTransaction` to manually create and rollback transactions.
### Expected behavior
When using `@TestTransaction` on the top class, nested test should run within a test transaction.
### Actual behavior
No transaction is open when a nested test is executed.
### How to Reproduce?
_No response_
### Output of `uname -a` or `ver`
_No response_
### Output of `java -version`
_No response_
### GraalVM version (if different from Java)
_No response_
### Quarkus version or git rev
_No response_
### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
_No response_
### Additional information
_No response_ | test | testtransaction does not work for nested tests describe the bug when you have lots of tests you might want to group them e g by method using a nested inner class testtransaction quarkustest class myservicetest nested class test void mytest test implementation this seems to break the functionality of testtransaction all the nested test will run without a transaction resulting in transaction is not active consider adding transactional to your method to automatically activate one it doesn t matter if the testtransaction is on the top test class myservicetest on the nested class or on the specific nested test mytest currently the fallback is to use quarkustransaction to manually create and rollback transactions expected behavior when using testtransaction on the top class nested test should run within a test transaction actual behavior no transaction is open when a nested test is executed how to reproduce no response output of uname a or ver no response output of java version no response graalvm version if different from java no response quarkus version or git rev no response build tool ie output of mvnw version or gradlew version no response additional information no response | 1 |
7,604 | 7,992,509,663 | IssuesEvent | 2018-07-20 01:56:25 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | reopened | Add "write changeset to OSM API Db" as a Hootenanny command | Category: Core Category: Services Priority: Medium Status: In Progress Type: Feature in progress | Extend the work in #2356 to have API Db writing from the command line as part of the "changeset-apply" command.
* Add to branch #2356
* Modify ApplyChangesetCmd.cpp to handle *.osc files
* Add config options for specifying change set options.
NOTE: This will be done in branch #2356, not as a separate branch. | 1.0 | Add "write changeset to OSM API Db" as a Hootenanny command - Extend the work in #2356 to have API Db writing from the command line as part of the "changeset-apply" command.
* Add to branch #2356
* Modify ApplyChangesetCmd.cpp to handle *.osc files
* Add config options for specifying change set options.
NOTE: This will be done in branch #2356, not as a separate branch. | non_test | add write changeset to osm api db as a hootenanny command extend the work in to have api db writing from the command line as part of the changeset apply command add to branch modify applychangesetcmd cpp to handle osc files add config options for specifying change set options note this will be done in branch not as a separate branch | 0 |
343,153 | 10,325,732,075 | IssuesEvent | 2019-09-01 19:50:30 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | Child visit week 14 immunizations service schedule for IPV incorrect | Boresha Afya high priority | - [ ] Service schedule for IPV under 14 weeks appears to have the wrong start date - like 4 wks instead of 14 weeks. Immunizations at 14 weeks for IPV appears when a child is only 4 wks.

| 1.0 | Child visit week 14 immunizations service schedule for IPV incorrect - - [ ] Service schedule for IPV under 14 weeks appears to have the wrong start date - like 4 wks instead of 14 weeks. Immunizations at 14 weeks for IPV appears when a child is only 4 wks.

| non_test | child visit week immunizations service schedule for ipv incorrect service schedule for ipv under weeks appears to have the wrong start date like wks instead of weeks immunizations at weeks for ipv appears when a child is only wks | 0 |
16,339 | 21,994,989,422 | IssuesEvent | 2022-05-26 04:50:36 | FlowCrypt/flowcrypt-browser | https://api.github.com/repos/FlowCrypt/flowcrypt-browser | reopened | Particular encrypted message composes of 3 attachment (noname, encrypted.asc, public_key.asc) doest appear correctly in the browser extension | compatibility actionable | Description:
The user reported a scenario wherein the appears to be encrypted message in the attachment is not appearing correctly within the browser extension. It doesn't even throw a generic decrypt error showing us an unusual case that maybe we didn't handle just yet.
The message contains 3 attachment (noname, encrypted.asc, public_key.asc) and rendered as:

Here is a short description of what I know about each attachment.
noname - it contains the detached signature
encrypted.asc - encrypted message
public_key.asc - public key of the sender
In comparison, this particular encrypted message can be decrypted on our android app very well.
The message originally came from msgsafe.io (another email service that supports PGP encryption).
Steps to reproduce:
1. I have saved the test message within our flowcrypt compatibility test account, just open the Test Email 2 with the label `decrypt error` and it should greet you with a similar problem.
Reference:
https://mail.google.com/mail/u/2/#inbox/FMfcgzGpFWLJPNQQFZcBjxKVTTCnMDWT | True | Particular encrypted message composes of 3 attachment (noname, encrypted.asc, public_key.asc) doest appear correctly in the browser extension - Description:
The user reported a scenario wherein the appears to be encrypted message in the attachment is not appearing correctly within the browser extension. It doesn't even throw a generic decrypt error showing us an unusual case that maybe we didn't handle just yet.
The message contains 3 attachment (noname, encrypted.asc, public_key.asc) and rendered as:

Here is a short description of what I know about each attachment.
noname - it contains the detached signature
encrypted.asc - encrypted message
public_key.asc - public key of the sender
In comparison, this particular encrypted message can be decrypted on our android app very well.
The message originally came from msgsafe.io (another email service that supports PGP encryption).
Steps to reproduce:
1. I have saved the test message within our flowcrypt compatibility test account, just open the Test Email 2 with the label `decrypt error` and it should greet you with a similar problem.
Reference:
https://mail.google.com/mail/u/2/#inbox/FMfcgzGpFWLJPNQQFZcBjxKVTTCnMDWT | non_test | particular encrypted message composes of attachment noname encrypted asc public key asc doest appear correctly in the browser extension description the user reported a scenario wherein the appears to be encrypted message in the attachment is not appearing correctly within the browser extension it doesn t even throw a generic decrypt error showing us an unusual case that maybe we didn t handle just yet the message contains attachment noname encrypted asc public key asc and rendered as here is a short description of what i know about each attachment noname it contains the detached signature encrypted asc encrypted message public key asc public key of the sender in comparison this particular encrypted message can be decrypted on our android app very well the message originally came from msgsafe io another email service that supports pgp encryption steps to reproduce i have saved the test message within our flowcrypt compatibility test account just open the test email with the label decrypt error and it should greet you with a similar problem reference | 0 |
111,818 | 9,542,506,888 | IssuesEvent | 2019-05-01 04:32:58 | andrewschultz/stale-tales-slate | https://api.github.com/repos/andrewschultz/stale-tales-slate | closed | Meatier Emerita Emirate = where to dump done objects | Roiling Shuffling architectural invalid/unplaytestable meta-task/docs/end/hints silly fun | This is just code munging fun, but "lalaland" is such a boring name. I always wanted to change it.
Make two commits: one for the entirety of Shuffling, one for the entirety of Roiling.
GR LALALAND = the way to check it's all done. | 1.0 | Meatier Emerita Emirate = where to dump done objects - This is just code munging fun, but "lalaland" is such a boring name. I always wanted to change it.
Make two commits: one for the entirety of Shuffling, one for the entirety of Roiling.
GR LALALAND = the way to check it's all done. | test | meatier emerita emirate where to dump done objects this is just code munging fun but lalaland is such a boring name i always wanted to change it make two commits one for the entirety of shuffling one for the entirety of roiling gr lalaland the way to check it s all done | 1 |
645,903 | 21,032,355,684 | IssuesEvent | 2022-03-31 02:44:24 | hackforla/tdm-calculator | https://api.github.com/repos/hackforla/tdm-calculator | reopened | Add icons for Accordion having external links | role: front-end level: medium priority: MUST HAVE p-Feature - Tool Tips | ### Overview
For any text having links to external websites, we should be adding an icon that indicates that.
### Action Items
- [ ] For external links add the icon that shows its going to an external website (https://designsystem.digital.gov/components/link/ )

- [ ] Add the image for the external links on the AIN/APN row on page 1
- [ ] Add the image for the external links on the Bike Share on page 2
- [ ] Add the image for the external links on the About Us page
### Resources/Instructions
REPLACE THIS TEXT -If there is a website which has documentation that helps with this issue provide the link(s) here.
| 1.0 | Add icons for Accordion having external links - ### Overview
For any text having links to external websites, we should be adding an icon that indicates that.
### Action Items
- [ ] For external links add the icon that shows its going to an external website (https://designsystem.digital.gov/components/link/ )

- [ ] Add the image for the external links on the AIN/APN row on page 1
- [ ] Add the image for the external links on the Bike Share on page 2
- [ ] Add the image for the external links on the About Us page
### Resources/Instructions
REPLACE THIS TEXT -If there is a website which has documentation that helps with this issue provide the link(s) here.
| non_test | add icons for accordion having external links overview for any text having links to external websites we should be adding an icon that indicates that action items for external links add the icon that shows its going to an external website add the image for the external links on the ain apn row on page add the image for the external links on the bike share on page add the image for the external links on the about us page resources instructions replace this text if there is a website which has documentation that helps with this issue provide the link s here | 0 |
119,517 | 25,530,806,652 | IssuesEvent | 2022-11-29 08:15:28 | arduino/arduino-ide | https://api.github.com/repos/arduino/arduino-ide | opened | Styling issues with Theia 1.31.1 | topic: code type: imperfection topic: theme | ### Describe the problem
This is a follow-up task of #1655.
As noted in https://github.com/arduino/arduino-ide/issues/1655#issuecomment-1308516491, IDE2 styling contradicts the stylings dictated by the Theia framework.
This issue should be used to track styling issues so that our designers can provide instructions on how to handle them:
From https://github.com/arduino/arduino-ide/issues/1655#issuecomment-1308516491:
> @91volt, Theia `1.31.0` comes with changes to have better accessibility support. Here and there the Theia styles contradict IDE2 styles. In this comment, I will attach screenshots depicting which UI parts are affected. (I will periodically update the comment with new findings if any.)
>
> <img alt="Screen Shot 2022-11-09 at 10 43 59" width="347" src="https://user-images.githubusercontent.com/1405703/200802401-17dcaa42-4270-41e1-a5b3-64c9f39b6e25.png">
>
> <img alt="Screen Shot 2022-11-09 at 10 44 34" width="252" src="https://user-images.githubusercontent.com/1405703/200802529-413d1233-1194-4c13-9d5d-4e15254ca2f8.png">
From https://github.com/arduino/arduino-ide/pull/1662#issuecomment-1330165727:
> This contextual menu icon in the sketch control toolbar seems a little off to me: <img alt="Screenshot 2022-11-28 at 14 58 11" width="84" src="https://user-images.githubusercontent.com/6939054/204458526-a52f3b7a-682e-4d71-b720-bc26fedf6282.png">
>
> I found that adding this CSS rule would fix it:
>
> ```
> #arduino-open-sketch-control--toolbar {
> text-align: center;
> }
> ```
### To reproduce
See description
### Expected behavior
All styling differences are adjusted or accepted
### Arduino IDE version
https://github.com/arduino/arduino-ide/pull/1662
### Operating system
macOS
### Operating system version
12.5.1
### Additional context
_No response_
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | 1.0 | Styling issues with Theia 1.31.1 - ### Describe the problem
This is a follow-up task of #1655.
As noted in https://github.com/arduino/arduino-ide/issues/1655#issuecomment-1308516491, IDE2 styling contradicts the stylings dictated by the Theia framework.
This issue should be used to track styling issues so that our designers can provide instructions on how to handle them:
From https://github.com/arduino/arduino-ide/issues/1655#issuecomment-1308516491:
> @91volt, Theia `1.31.0` comes with changes to have better accessibility support. Here and there the Theia styles contradict IDE2 styles. In this comment, I will attach screenshots depicting which UI parts are affected. (I will periodically update the comment with new findings if any.)
>
> <img alt="Screen Shot 2022-11-09 at 10 43 59" width="347" src="https://user-images.githubusercontent.com/1405703/200802401-17dcaa42-4270-41e1-a5b3-64c9f39b6e25.png">
>
> <img alt="Screen Shot 2022-11-09 at 10 44 34" width="252" src="https://user-images.githubusercontent.com/1405703/200802529-413d1233-1194-4c13-9d5d-4e15254ca2f8.png">
From https://github.com/arduino/arduino-ide/pull/1662#issuecomment-1330165727:
> This contextual menu icon in the sketch control toolbar seems a little off to me: <img alt="Screenshot 2022-11-28 at 14 58 11" width="84" src="https://user-images.githubusercontent.com/6939054/204458526-a52f3b7a-682e-4d71-b720-bc26fedf6282.png">
>
> I found that adding this CSS rule would fix it:
>
> ```
> #arduino-open-sketch-control--toolbar {
> text-align: center;
> }
> ```
### To reproduce
See description
### Expected behavior
All styling differences are adjusted or accepted
### Arduino IDE version
https://github.com/arduino/arduino-ide/pull/1662
### Operating system
macOS
### Operating system version
12.5.1
### Additional context
_No response_
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | non_test | styling issues with theia describe the problem this is a follow up task of as noted in styling contradicts the stylings dictated by the theia framework this issue should be used to track styling issues so that our designers can provide instructions on how to handle them from theia comes with changes to have better accessibility support here and there the theia styles contradict styles in this comment i will attach screenshots depicting which ui parts are affected i will periodically update the comment with new findings if any img alt screen shot at width src img alt screen shot at width src from this contextual menu icon in the sketch control toolbar seems a little off to me img alt screenshot at width src i found that adding this css rule would fix it arduino open sketch control toolbar text align center to reproduce see description expected behavior all styling differences are adjusted or accepted arduino ide version operating system macos operating system version additional context no response issue checklist i searched for previous reports in i verified the problem still occurs when using the latest my report contains all necessary details | 0 |
216,320 | 16,750,123,469 | IssuesEvent | 2021-06-11 21:29:24 | influxdata/influxdb | https://api.github.com/repos/influxdata/influxdb | reopened | Cannot use conditionals in status messages for Checks | kind/bug monitoring-and-alerting severity/sev-3 team/ui testing | According to the Flux team, the following should work:
`The Server: ${ r.host } is ${if r._level == "ok" then "up" else "down" }.`
But in the UI, when trying, i get the following error:

| 1.0 | Cannot use conditionals in status messages for Checks - According to the Flux team, the following should work:
`The Server: ${ r.host } is ${if r._level == "ok" then "up" else "down" }.`
But in the UI, when trying, i get the following error:

| test | cannot use conditionals in status messages for checks according to the flux team the following should work the server r host is if r level ok then up else down but in the ui when trying i get the following error | 1 |
100,885 | 8,757,680,908 | IssuesEvent | 2018-12-14 22:11:55 | phetsims/resistance-in-a-wire | https://api.github.com/repos/phetsims/resistance-in-a-wire | closed | Jaws reads out incorrect values in scene summary | priority:2-high status:fixed-pending-testing type:bug | **Test device:**
Dell Laptop
**Operating System:**
Win 10
**Browser:**
Chrome
**Problem description:**
For https://github.com/phetsims/QA/issues/219#event-1953042350 and https://github.com/phetsims/QA/issues/217
When the scene description first plays it reads out the initial values for all the sliders and then goes in to a bulleted list with those same values. If you manipulate those sliders and then prompt Jaws to read the scene description again it will read out the (now incorrect) values that the sim started with, then read a list with the actual values.
**Steps to reproduce:**
1. Start Jaws and the sim in chrome
2. Listen to the scene description
3. Use keyboard nav (likely will work without) to change the sliders around
4. Click out of keyboard nav and use the arrow keys to listen to the full scene description again.
**Screenshots:**
https://drive.google.com/file/d/1GgeMP0Nhgm7SZnTnZFrSWO02i-i79M1a/view?usp=sharing
Troubleshooting information (do not edit):
<details>
Name: Resistance in a Wire
URL: https://phet-dev.colorado.edu/html/resistance-in-a-wire/1.6.0-rc.1/phet/resistance-in-a-wire_en_phet.html
Version: 1.6.0-rc.1 2018-11-07 18:26:25 UTC
Features missing: touch
Flags: pixelRatioScaling
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36
Language: en-US
Window: 1536x732
Pixel Ratio: 2.5/1
WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium)
GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
Vendor: WebKit (WebKit WebGL)
Vertex: attribs: 16 varying: 30 uniform: 4096
Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32)
Max viewport: 16384x16384
OES_texture_float: true
Dependencies JSON: {"assert":{"sha":"928741cf","branch":"HEAD"},"axon":{"sha":"ff00a0bb","branch":"HEAD"},"brand":{"sha":"1fd6682e","branch":"HEAD"},"chipper":{"sha":"fa2fbadf","branch":"HEAD"},"dot":{"sha":"bbbd8526","branch":"HEAD"},"joist":{"sha":"82521d0c","branch":"HEAD"},"kite":{"sha":"380cef53","branch":"HEAD"},"phet-core":{"sha":"1b90ac2f","branch":"HEAD"},"phet-io":{"sha":"38d7b161","branch":"HEAD"},"phet-io-wrapper-classroom-activity":{"sha":"246085c1","branch":"HEAD"},"phet-io-wrapper-hookes-law-energy":{"sha":"7479b0ec","branch":"HEAD"},"phet-io-wrapper-lab-book":{"sha":"c46f7839","branch":"HEAD"},"phet-io-wrappers":{"sha":"a6bc62ca","branch":"HEAD"},"phetcommon":{"sha":"cd63d89a","branch":"HEAD"},"query-string-machine":{"sha":"06ed6276","branch":"HEAD"},"resistance-in-a-wire":{"sha":"38ae25d0","branch":"HEAD"},"scenery":{"sha":"51716674","branch":"HEAD"},"scenery-phet":{"sha":"15dec251","branch":"HEAD"},"sherpa":{"sha":"2cd50500","branch":"HEAD"},"sun":{"sha":"5244d45e","branch":"HEAD"},"tambo":{"sha":"ad355580","branch":"HEAD"},"tandem":{"sha":"4f37a910","branch":"HEAD"}}
</details> | 1.0 | Jaws reads out incorrect values in scene summary - **Test device:**
Dell Laptop
**Operating System:**
Win 10
**Browser:**
Chrome
**Problem description:**
For https://github.com/phetsims/QA/issues/219#event-1953042350 and https://github.com/phetsims/QA/issues/217
When the scene description first plays it reads out the initial values for all the sliders and then goes in to a bulleted list with those same values. If you manipulate those sliders and then prompt Jaws to read the scene description again it will read out the (now incorrect) values that the sim started with, then read a list with the actual values.
**Steps to reproduce:**
1. Start Jaws and the sim in chrome
2. Listen to the scene description
3. Use keyboard nav (likely will work without) to change the sliders around
4. Click out of keyboard nav and use the arrow keys to listen to the full scene description again.
**Screenshots:**
https://drive.google.com/file/d/1GgeMP0Nhgm7SZnTnZFrSWO02i-i79M1a/view?usp=sharing
Troubleshooting information (do not edit):
<details>
Name: Resistance in a Wire
URL: https://phet-dev.colorado.edu/html/resistance-in-a-wire/1.6.0-rc.1/phet/resistance-in-a-wire_en_phet.html
Version: 1.6.0-rc.1 2018-11-07 18:26:25 UTC
Features missing: touch
Flags: pixelRatioScaling
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36
Language: en-US
Window: 1536x732
Pixel Ratio: 2.5/1
WebGL: WebGL 1.0 (OpenGL ES 2.0 Chromium)
GLSL: WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
Vendor: WebKit (WebKit WebGL)
Vertex: attribs: 16 varying: 30 uniform: 4096
Texture: size: 16384 imageUnits: 16 (vertex: 16, combined: 32)
Max viewport: 16384x16384
OES_texture_float: true
Dependencies JSON: {"assert":{"sha":"928741cf","branch":"HEAD"},"axon":{"sha":"ff00a0bb","branch":"HEAD"},"brand":{"sha":"1fd6682e","branch":"HEAD"},"chipper":{"sha":"fa2fbadf","branch":"HEAD"},"dot":{"sha":"bbbd8526","branch":"HEAD"},"joist":{"sha":"82521d0c","branch":"HEAD"},"kite":{"sha":"380cef53","branch":"HEAD"},"phet-core":{"sha":"1b90ac2f","branch":"HEAD"},"phet-io":{"sha":"38d7b161","branch":"HEAD"},"phet-io-wrapper-classroom-activity":{"sha":"246085c1","branch":"HEAD"},"phet-io-wrapper-hookes-law-energy":{"sha":"7479b0ec","branch":"HEAD"},"phet-io-wrapper-lab-book":{"sha":"c46f7839","branch":"HEAD"},"phet-io-wrappers":{"sha":"a6bc62ca","branch":"HEAD"},"phetcommon":{"sha":"cd63d89a","branch":"HEAD"},"query-string-machine":{"sha":"06ed6276","branch":"HEAD"},"resistance-in-a-wire":{"sha":"38ae25d0","branch":"HEAD"},"scenery":{"sha":"51716674","branch":"HEAD"},"scenery-phet":{"sha":"15dec251","branch":"HEAD"},"sherpa":{"sha":"2cd50500","branch":"HEAD"},"sun":{"sha":"5244d45e","branch":"HEAD"},"tambo":{"sha":"ad355580","branch":"HEAD"},"tandem":{"sha":"4f37a910","branch":"HEAD"}}
</details> | test | jaws reads out incorrect values in scene summary test device dell laptop operating system win browser chrome problem description for and when the scene description first plays it reads out the initial values for all the sliders and then goes in to a bulleted list with those same values if you manipulate those sliders and then prompt jaws to read the scene description again it will read out the now incorrect values that the sim started with then read a list with the actual values steps to reproduce start jaws and the sim in chrome listen to the scene description use keyboard nav likely will work without to change the sliders around click out of keyboard nav and use the arrow keys to listen to the full scene description again screenshots troubleshooting information do not edit name resistance in a wire url version rc utc features missing touch flags pixelratioscaling user agent mozilla windows nt applewebkit khtml like gecko chrome safari language en us window pixel ratio webgl webgl opengl es chromium glsl webgl glsl es opengl es glsl es chromium vendor webkit webkit webgl vertex attribs varying uniform texture size imageunits vertex combined max viewport oes texture float true dependencies json assert sha branch head axon sha branch head brand sha branch head chipper sha branch head dot sha branch head joist sha branch head kite sha branch head phet core sha branch head phet io sha branch head phet io wrapper classroom activity sha branch head phet io wrapper hookes law energy sha branch head phet io wrapper lab book sha branch head phet io wrappers sha branch head phetcommon sha branch head query string machine sha branch head resistance in a wire sha branch head scenery sha branch head scenery phet sha branch head sherpa sha branch head sun sha branch head tambo sha branch head tandem sha branch head | 1 |
153,456 | 12,147,657,755 | IssuesEvent | 2020-04-24 13:23:34 | nvaccess/nvda | https://api.github.com/repos/nvaccess/nvda | opened | Chrome system tests failing intermittently | bug system-test | Occasionally the Chrome system tests fail, restarting the build usually results in the test passing.
These tests are new and need to be made more robust.
Example: https://ci.appveyor.com/project/NVAccess/nvda/builds/32216527/messages
System test logs attached: [systemTestResult.zip](https://github.com/nvaccess/nvda/files/4529071/systemTestResult.zip)
Error: `Specific speech did not occur before timeout: Google Chrome See NVDA log for dump of all speech.`
This test will be disabled until the issue has been investigated.
| 1.0 | Chrome system tests failing intermittently - Occasionally the Chrome system tests fail, restarting the build usually results in the test passing.
These tests are new and need to be made more robust.
Example: https://ci.appveyor.com/project/NVAccess/nvda/builds/32216527/messages
System test logs attached: [systemTestResult.zip](https://github.com/nvaccess/nvda/files/4529071/systemTestResult.zip)
Error: `Specific speech did not occur before timeout: Google Chrome See NVDA log for dump of all speech.`
This test will be disabled until the issue has been investigated.
| test | chrome system tests failing intermittently occasionally the chrome system tests fail restarting the build usually results in the test passing these tests are new and need to be made more robust example system test logs attached error specific speech did not occur before timeout google chrome see nvda log for dump of all speech this test will be disabled until the issue has been investigated | 1 |
113,334 | 24,400,963,384 | IssuesEvent | 2022-10-05 01:18:09 | brittanyjoiner15/eui-event-template | https://api.github.com/repos/brittanyjoiner15/eui-event-template | closed | Translate key text in app to Spanish | good first issue hacktoberfest up for grabs noCode | After we have a i18n available, then we'll want to be able to translate key parts of the app. If you speak Spanish and would feel comfortable translating the key text, that would be great!
This is what we need translated:
* tab titles
* `Event Details`
* `Speakers`
* `Talks`
* `Recordings`
* `FAQs/Frequently Asked Questions`
* Calendar Button text: `Save the {date} session`
* Second button text:
* `Sign up for updates`
* `Join the slack group`
* Button on talks page text: `Show times in EDT` and `Show times in Local`
| 1.0 | Translate key text in app to Spanish - After we have a i18n available, then we'll want to be able to translate key parts of the app. If you speak Spanish and would feel comfortable translating the key text, that would be great!
This is what we need translated:
* tab titles
* `Event Details`
* `Speakers`
* `Talks`
* `Recordings`
* `FAQs/Frequently Asked Questions`
* Calendar Button text: `Save the {date} session`
* Second button text:
* `Sign up for updates`
* `Join the slack group`
* Button on talks page text: `Show times in EDT` and `Show times in Local`
| non_test | translate key text in app to spanish after we have a available then we ll want to be able to translate key parts of the app if you speak spanish and would feel comfortable translating the key text that would be great this is what we need translated tab titles event details speakers talks recordings faqs frequently asked questions calendar button text save the date session second button text sign up for updates join the slack group button on talks page text show times in edt and show times in local | 0 |
351,073 | 10,512,465,795 | IssuesEvent | 2019-09-27 17:59:05 | jajajasalu2/DalalStreet | https://api.github.com/repos/jajajasalu2/DalalStreet | closed | Leftover shares don't sell | bug high priority | Upon ending the game, all the shares that still belong to the players should be sold, depositing the amount they earn by this in their accounts.
On February 2019, when the event was about to end, we realized this feature does not work. This resulted in the whole team having to sell each player's shares manually. Not the best of things to happen during such a fast paced event.
Thus, shares belonging to players at the end of the game should be sold. Upon (or while) fixing this bug, a corresponding test should also be written. | 1.0 | Leftover shares don't sell - Upon ending the game, all the shares that still belong to the players should be sold, depositing the amount they earn by this in their accounts.
On February 2019, when the event was about to end, we realized this feature does not work. This resulted in the whole team having to sell each player's shares manually. Not the best of things to happen during such a fast paced event.
Thus, shares belonging to players at the end of the game should be sold. Upon (or while) fixing this bug, a corresponding test should also be written. | non_test | leftover shares don t sell upon ending the game all the shares that still belong to the players should be sold depositing the amount they earn by this in their accounts on february when the event was about to end we realized this feature does not work this resulted in the whole team having to sell each player s shares manually not the best of things to happen during such a fast paced event thus shares belonging to players at the end of the game should be sold upon or while fixing this bug a corresponding test should also be written | 0 |
104,096 | 8,961,486,506 | IssuesEvent | 2019-01-28 09:47:41 | muqeed11/FXTesting | https://api.github.com/repos/muqeed11/FXTesting | opened | DemoBankingTest2 : ApiV1RecepientIdDeleteRecepientUsercDisallowAbac | DemoBankingTest2 | Project : DemoBankingTest2
Job : Default
Env : Default
Category : ABAC_Level1
Tags : [FX Top 10 - API Vulnerability, Data_Access_Control]
Severity : Major
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 401
Headers : {WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:36 GMT]}
Endpoint : http://54.215.136.217/api/v1/recepient/
Request :
Response :
Logs :
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : URL [http://54.215.136.217/api/v1/recepient]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Method [POST]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Request [{
"accountNumber" : "GkxEFafT",
"createdBy" : "",
"createdDate" : "",
"description" : "GkxEFafT",
"email" : "arielle.corwin@hotmail.com",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GkxEFafT",
"phone" : "(872) 939-5469",
"user" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GkxEFafT",
"version" : ""
},
"version" : ""
}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Response []
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : StatusCode [401]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Time [1152]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Size [0]
2019-01-28 09:47:36 ERROR [null] : Assertion [@StatusCode == 200 AND @Response.errors == false] resolved-to [401 == 200 AND == false] result [Failed]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : URL [http://54.215.136.217/api/v1/recepient/]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Method [DELETE]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Request []
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckNAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Response []
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:36 GMT]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : StatusCode [401]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Time [1073]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Size [0]
2019-01-28 09:47:37 INFO [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [401 == 401 OR 401 == 403 OR == true] result [Passed]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : URL [http://54.215.136.217/api/v1/recepient/]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Method [DELETE]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Request [null]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Response []
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:37 GMT]}]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : StatusCode [401]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Time [953]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Size [0]
2019-01-28 09:47:38 ERROR [null] : Assertion [@StatusCode == 200] resolved-to [401 == 200] result [Failed]
--- FX Bot --- | 1.0 | DemoBankingTest2 : ApiV1RecepientIdDeleteRecepientUsercDisallowAbac - Project : DemoBankingTest2
Job : Default
Env : Default
Category : ABAC_Level1
Tags : [FX Top 10 - API Vulnerability, Data_Access_Control]
Severity : Major
Region : FXLabs/US_WEST_1
Result : fail
Status Code : 401
Headers : {WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:36 GMT]}
Endpoint : http://54.215.136.217/api/v1/recepient/
Request :
Response :
Logs :
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : URL [http://54.215.136.217/api/v1/recepient]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Method [POST]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Request [{
"accountNumber" : "GkxEFafT",
"createdBy" : "",
"createdDate" : "",
"description" : "GkxEFafT",
"email" : "arielle.corwin@hotmail.com",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GkxEFafT",
"phone" : "(872) 939-5469",
"user" : {
"createdBy" : "",
"createdDate" : "",
"id" : "",
"inactive" : false,
"modifiedBy" : "",
"modifiedDate" : "",
"name" : "GkxEFafT",
"version" : ""
},
"version" : ""
}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Response []
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : StatusCode [401]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Time [1152]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac] : Size [0]
2019-01-28 09:47:36 ERROR [null] : Assertion [@StatusCode == 200 AND @Response.errors == false] resolved-to [401 == 200 AND == false] result [Failed]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:36 DEBUG [RecepientCreateUserAInitAbac_Headers[2]] : Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:35 GMT]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : URL [http://54.215.136.217/api/v1/recepient/]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Method [DELETE]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Request []
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckNAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Response []
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:36 GMT]}]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : StatusCode [401]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Time [1073]
2019-01-28 09:47:37 DEBUG [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Size [0]
2019-01-28 09:47:37 INFO [ApiV1RecepientIdDeleteRecepientUsercDisallowAbac] : Assertion [@StatusCode == 401 OR @StatusCode == 403 OR @Response.errors == true] resolved-to [401 == 401 OR 401 == 403 OR == true] result [Passed]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : URL [http://54.215.136.217/api/v1/recepient/]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Method [DELETE]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Request [null]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Request-Headers [{Content-Type=[*/*], Accept=[application/json], Authorization=[Basic dXNlckFAZnhsYWJzLmlvOmFkbWluMTIzJA==]}]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Response []
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Response-Headers [{WWW-Authenticate=[Basic realm="Realm", Basic realm="Realm"], X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Content-Length=[0], Date=[Mon, 28 Jan 2019 09:47:37 GMT]}]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : StatusCode [401]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Time [953]
2019-01-28 09:47:38 DEBUG [ApiV1RecepientIdDeleteAbstractAbac] : Size [0]
2019-01-28 09:47:38 ERROR [null] : Assertion [@StatusCode == 200] resolved-to [401 == 200] result [Failed]
--- FX Bot --- | test | project job default env default category abac tags severity major region fxlabs us west result fail status code headers www authenticate x content type options x xss protection cache control pragma expires x frame options content length date endpoint request response logs debug url debug method debug request accountnumber gkxefaft createdby createddate description gkxefaft email arielle corwin hotmail com id inactive false modifiedby modifieddate name gkxefaft phone user createdby createddate id inactive false modifiedby modifieddate name gkxefaft version version debug request headers accept authorization debug response debug response headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size error assertion resolved to result debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug headers x content type options x xss protection cache control pragma expires x frame options content length date debug url debug method debug request debug request headers accept authorization debug response debug response headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size info assertion resolved to result debug url debug method debug request debug request headers accept authorization debug response debug response headers x content type options x xss protection cache control pragma expires x frame options content length date debug statuscode debug time debug size error assertion resolved to result fx bot | 1 |
39,967 | 2,862,007,216 | IssuesEvent | 2015-06-04 00:13:32 | orc-lang/orc | https://api.github.com/repos/orc-lang/orc | closed | Move Orc release ZIP downloads from Google Code to Orc Web site | auto-migrated Priority-Medium Type-Task Website | ```
Action:
Move Orc release ZIP downloads from Google Code to Orc Web site
Reason:
Google Code project hosting is discontinuing its downloads feature at the end
of 2013.
```
Original issue reported on code.google.com by `jthywissen` on 21 Sep 2013 at 8:51 | 1.0 | Move Orc release ZIP downloads from Google Code to Orc Web site - ```
Action:
Move Orc release ZIP downloads from Google Code to Orc Web site
Reason:
Google Code project hosting is discontinuing its downloads feature at the end
of 2013.
```
Original issue reported on code.google.com by `jthywissen` on 21 Sep 2013 at 8:51 | non_test | move orc release zip downloads from google code to orc web site action move orc release zip downloads from google code to orc web site reason google code project hosting is discontinuing its downloads feature at the end of original issue reported on code google com by jthywissen on sep at | 0 |
251,866 | 18,977,319,876 | IssuesEvent | 2021-11-20 07:45:40 | KazumaOhura/Tau | https://api.github.com/repos/KazumaOhura/Tau | opened | .clang-formatファイルを最新バージョンに対応 | documentation Parent | ## 概要
<!-- このissueの概要 -->
.clang-formatファイルを最新バージョンに対応
<!-- 親issueなら子issueの番号 -->
## 目的
<!-- このissueを作る目的 -->
clang-formatをVer.13にアップデートした際に内部のパラメータ等に変更があり、今までのファイルでは動作しなくなったため
## タスク
<!-- 目的を達成するための詳細な作業内容 -->
- [ ] .clang-formatファイルに必要な情報を追記
| 1.0 | .clang-formatファイルを最新バージョンに対応 - ## 概要
<!-- このissueの概要 -->
.clang-formatファイルを最新バージョンに対応
<!-- 親issueなら子issueの番号 -->
## 目的
<!-- このissueを作る目的 -->
clang-formatをVer.13にアップデートした際に内部のパラメータ等に変更があり、今までのファイルでは動作しなくなったため
## タスク
<!-- 目的を達成するための詳細な作業内容 -->
- [ ] .clang-formatファイルに必要な情報を追記
| non_test | clang formatファイルを最新バージョンに対応 概要 clang formatファイルを最新バージョンに対応 目的 clang formatをver 、今までのファイルでは動作しなくなったため タスク clang formatファイルに必要な情報を追記 | 0 |
30,807 | 4,655,719,891 | IssuesEvent | 2016-10-04 00:37:37 | brave/browser-laptop | https://api.github.com/repos/brave/browser-laptop | closed | Manual tests for OS X 0.12.3 RC6 General Regression | OS/macOS tests | ## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows right click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, electron, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [ ] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [x] Check that disabling payments and enabling them again does not lose state.
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:preferences changing a preference takes effect right away
9. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar works
2. [x] Test that creating a bookmark folder on the bookmarks toolbar works
3. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Geolocation
1. [x] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in https://badssl.com/ works.
6. [x] Test that Safe Browsing works (http://excellentmovies.net/)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for http://excellentmovies.net/.
8. [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [x] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [x] Open a github issue and type some misspellings, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [x] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [x] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [x] Test that flash placeholder appears on http://www.y8.com/games/superfighters
## Autofill tests
1. [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [x] Test that windows and tabs restore when closed, including active tab.
3. [ ] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [ ] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly. | 1.0 | Manual tests for OS X 0.12.3 RC6 General Regression - ## Installer
1. [x] Check that installer is close to the size of last release.
2. [x] Check signature: If OS Run `spctl --assess --verbose /Applications/Brave.app/` and make sure it returns `accepted`. If Windows right click on the installer exe and go to Properties, go to the Digital Signatures tab and double click on the signature. Make sure it says "The digital signature is OK" in the popup window.
3. [x] Check Brave, electron, and libchromiumcontent version in About and make sure it is EXACTLY as expected.
## Ledger
1. [ ] Create a wallet with a value other than $5 selected in the monthly budget dropdown. Click on the 'Add Funds' button and check that Coinbase transactions are blocked.
2. [ ] Remove all `ledger-*.json` files from `~/Library/Application\ Support/Brave/`. Go to the Payments tab in about:preferences, enable payments, click on `create wallet`. Check that the `add funds` button appears after a wallet is created.
3. [ ] Click on `add funds` and verify that adding funds through Coinbase increases the account balance.
4. [ ] Repeat the step above but add funds by scanning the QR code in a mobile bitcoin app instead of through Coinbase.
5. [x] Visit nytimes.com for a few seconds and make sure it shows up in the Payments table.
6. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click the register button. In the Payments tab, click `add funds`. Verify that the `transfer funds` button is visible and that clicking on `transfer funds` opens a jsfiddle URL in a new tab.
7. [x] Go to https://jsfiddle.net/LnwtLckc/5/ and click `unregister`. Verify that the `transfer funds` button no longer appears in the `add funds` modal.
8. [x] Check that disabling payments and enabling them again does not lose state.
## Data
1. [x] Make sure that data from the last version appears in the new version OK.
2. [x] Test that the previous version's cookies are preserved in the next version.
## About pages
1. [x] Test that about:adblock loads
2. [x] Test that about:autofill loads
3. [x] Test that about:bookmarks loads bookmarks
4. [x] Test that about:downloads loads downloads
5. [x] Test that about:extensions loads
6. [x] Test that about:history loads history
7. [x] Test that about:passwords loads
8. [x] Test that about:preferences changing a preference takes effect right away
9. [x] Test that about:preferences language change takes effect on re-start
## Bookmarks
1. [x] Test that creating a bookmark on the bookmarks toolbar works
2. [x] Test that creating a bookmark folder on the bookmarks toolbar works
3. [x] Test that moving a bookmark into a folder by drag and drop on the bookmarks folder works
4. [x] Test that clicking a bookmark in the toolbar loads the bookmark.
5. [x] Test that clicking a bookmark in a bookmark toolbar folder loads the bookmark.
## Context menus
1. [x] Make sure context menu items in the URL bar work
2. [x] Make sure context menu items on content work with no selected text.
3. [x] Make sure context menu items on content work with selected text.
4. [x] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
## Find on page
1. [x] Ensure search box is shown with shortcut
2. [x] Test successful find
3. [x] Test forward and backward find navigation
4. [x] Test failed find shows 0 results
5. [x] Test match case find
## Geolocation
1. [x] Check that https://developer.mozilla.org/en-US/docs/Web/API/Geolocation/Using_geolocation works
## Site hacks
1. [x] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [x] Test downloading a file works and that all actions on the download item works.
## Fullscreen
1. [x] Test that entering full screen window works View -> Toggle Full Screen. And exit back (Not Esc).
2. [x] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Tabs and Pinning
1. [x] Test that tabs are pinnable
2. [x] Test that tabs are unpinnable
3. [x] Test that tabs are draggable to same tabset
4. [x] Test that tabs are draggable to alternate tabset
## Zoom
1. [x] Test zoom in / out shortcut works
2. [x] Test hamburger menu zooms.
3. [x] Test zoom saved when you close the browser and restore on a single site.
4. [x] Test zoom saved when you navigate within a single origin site.
5. [x] Test that navigating to a different origin resets the zoom
## Bravery settings
1. [x] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [x] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [x] Check that ad replacement works on http://slashdot.org
4. [x] Check that toggling to blocking and allow ads works as expected.
5. [x] Test that clicking through a cert error in https://badssl.com/ works.
6. [x] Test that Safe Browsing works (http://excellentmovies.net/)
7. [x] Turning Safe Browsing off and shields off both disable safe browsing for http://excellentmovies.net/.
8. [x] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [x] Test that about:preferences default Bravery settings take effect on pages with no site settings.
10. [x] Test that turning on fingerprinting protection in about:preferences shows 3 fingerprints blocked at https://jsfiddle.net/bkf50r8v/13/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [x] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/9/ when 3rd party cookies are blocked and not blank when 3rd party cookies are unblocked.
12. [x] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [x] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [x] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [x] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [x] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`. Then reload https://trac.torproject.org/projects/tor/login and make sure the password is autofilled.
5. [x] Open a github issue and type some misspellings, make sure they are underlined.
6. [x] Make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text.
7. [x] Make sure that Command + Click (Control + Click on Windows, Control + Click on Ubuntu) on a link opens a new tab but does NOT switch to it. Click on it and make sure it is already loaded.
8. [x] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
9. [x] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
10. [x] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
## Flash tests
1. [x] Turn on Flash in about:preferences#security. Test that clicking on 'Install Flash' banner on myspace.com shows a notification to allow Flash and that the banner disappears when 'Allow' is clicked.
2. [x] Test that flash placeholder appears on http://www.y8.com/games/superfighters
## Autofill tests
1. [x] Test that autofill works on http://www.roboform.com/filling-test-all-fields
## Session storage
Do not forget to make a backup of your entire `~/Library/Application\ Support/Brave` folder.
1. [x] Temporarily move away your `~/Library/Application\ Support/Brave/session-store-1` and test that clean session storage works. (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
2. [x] Test that windows and tabs restore when closed, including active tab.
3. [ ] Move away your entire `~/Library/Application\ Support/Brave` folder (`%appdata%\Brave in Windows`, `./config/brave` in Ubuntu)
## Cookie and Cache
1. [x] Make a backup of your profile, turn on all clearing in preferences and shut down. Make sure when you bring the browser back up everything is gone that is specified.
2. [ ] Go to http://samy.pl/evercookie/ and set an evercookie. Check that going to prefs, clearing site data and cache, and going back to the Evercookie site does not remember the old evercookie value.
## Update tests
1. [x] Test that updating using `BRAVE_UPDATE_VERSION=0.8.3` env variable works correctly. | test | manual tests for os x general regression installer check that installer is close to the size of last release check signature if os run spctl assess verbose applications brave app and make sure it returns accepted if windows right click on the installer exe and go to properties go to the digital signatures tab and double click on the signature make sure it says the digital signature is ok in the popup window check brave electron and libchromiumcontent version in about and make sure it is exactly as expected ledger create a wallet with a value other than selected in the monthly budget dropdown click on the add funds button and check that coinbase transactions are blocked remove all ledger json files from library application support brave go to the payments tab in about preferences enable payments click on create wallet check that the add funds button appears after a wallet is created click on add funds and verify that adding funds through coinbase increases the account balance repeat the step above but add funds by scanning the qr code in a mobile bitcoin app instead of through coinbase visit nytimes com for a few seconds and make sure it shows up in the payments table go to and click the register button in the payments tab click add funds verify that the transfer funds button is visible and that clicking on transfer funds opens a jsfiddle url in a new tab go to and click unregister verify that the transfer funds button no longer appears in the add funds modal check that disabling payments and enabling them again does not lose state data make sure that data from the last version appears in the new version ok test that the previous version s cookies are preserved in the next version about pages test that about adblock loads test that about autofill loads test that about bookmarks loads bookmarks test that about downloads loads downloads test that about extensions loads test that about history loads history test that about passwords loads test that about preferences changing a preference takes effect right away test that about preferences language change takes effect on re start bookmarks test that creating a bookmark on the bookmarks toolbar works test that creating a bookmark folder on the bookmarks toolbar works test that moving a bookmark into a folder by drag and drop on the bookmarks folder works test that clicking a bookmark in the toolbar loads the bookmark test that clicking a bookmark in a bookmark toolbar folder loads the bookmark context menus make sure context menu items in the url bar work make sure context menu items on content work with no selected text make sure context menu items on content work with selected text make sure context menu items on content work inside an editable control input textarea or contenteditable find on page ensure search box is shown with shortcut test successful find test forward and backward find navigation test failed find shows results test match case find geolocation check that works site hacks test sub page loads a video and you can play it downloads test downloading a file works and that all actions on the download item works fullscreen test that entering full screen window works view toggle full screen and exit back not esc test that entering full screen works and esc to go back youtube com tabs and pinning test that tabs are pinnable test that tabs are unpinnable test that tabs are draggable to same tabset test that tabs are draggable to alternate tabset zoom test zoom in out shortcut works test hamburger menu zooms test zoom saved when you close the browser and restore on a single site test zoom saved when you navigate within a single origin site test that navigating to a different origin resets the zoom bravery settings check that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to check that ad replacement works on check that toggling to blocking and allow ads works as expected test that clicking through a cert error in works test that safe browsing works turning safe browsing off and shields off both disable safe browsing for visit and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work test that about preferences default bravery settings take effect on pages with no site settings test that turning on fingerprinting protection in about preferences shows fingerprints blocked at test that turning it off in the bravery menu shows fingerprints blocked test that party storage results are blank at when party cookies are blocked and not blank when party cookies are unblocked test that audio fingerprint is blocked at when fingerprinting protection is on content tests go to and click on the twitter icon on the top right test that context menus work in the new twitter tab load twitter and click on a tweet so the popup div shows click to dismiss and repeat with another div make sure it shows go to and test that clicking on show pops up a notification asking for permission make sure that clicking deny leads to no notifications being shown go to and make sure that the password can be saved make sure the saved password shows up in about passwords then reload and make sure the password is autofilled open a github issue and type some misspellings make sure they are underlined make sure that right clicking on a word with suggestions gives a suggestion and that clicking on the suggestion replaces the text make sure that command click control click on windows control click on ubuntu on a link opens a new tab but does not switch to it click on it and make sure it is already loaded open an email on or inbox google com and click on a link make sure it works test that pdf is loaded at test that shows up as grey not red no mixed content scripts are run flash tests turn on flash in about preferences security test that clicking on install flash banner on myspace com shows a notification to allow flash and that the banner disappears when allow is clicked test that flash placeholder appears on autofill tests test that autofill works on session storage do not forget to make a backup of your entire library application support brave folder temporarily move away your library application support brave session store and test that clean session storage works appdata brave in windows config brave in ubuntu test that windows and tabs restore when closed including active tab move away your entire library application support brave folder appdata brave in windows config brave in ubuntu cookie and cache make a backup of your profile turn on all clearing in preferences and shut down make sure when you bring the browser back up everything is gone that is specified go to and set an evercookie check that going to prefs clearing site data and cache and going back to the evercookie site does not remember the old evercookie value update tests test that updating using brave update version env variable works correctly | 1 |
88,250 | 8,136,054,041 | IssuesEvent | 2018-08-20 06:57:21 | ladybirdweb/faveo-helpdesk | https://api.github.com/repos/ladybirdweb/faveo-helpdesk | closed | Faveo latest git version install | Final Testing required Need issuer's Feedback | Hello. I wanted to try Faveo so I tried to install the comunity edition. Tried the autoinstall and didn't work. Then tried to install all dependecies, copy folder to Apache, etc. but couldn't get it to work. First is opening this page:
https://github.com/ladybirdweb/faveo-helpdesk/blob/master/public/index.php
After trying on a new VM, from scratch, with an older tutorial I found over Google, I tried to connect to my site (/var/www/html/support) and gives me error 500. Even after giving 777 to all folders, the error still appears.
If someone could help me with a tutorial or steps to install would be great. As for now, I inform this error, if someone else also has it.
Regards,
Fábio | 1.0 | Faveo latest git version install - Hello. I wanted to try Faveo so I tried to install the comunity edition. Tried the autoinstall and didn't work. Then tried to install all dependecies, copy folder to Apache, etc. but couldn't get it to work. First is opening this page:
https://github.com/ladybirdweb/faveo-helpdesk/blob/master/public/index.php
After trying on a new VM, from scratch, with an older tutorial I found over Google, I tried to connect to my site (/var/www/html/support) and gives me error 500. Even after giving 777 to all folders, the error still appears.
If someone could help me with a tutorial or steps to install would be great. As for now, I inform this error, if someone else also has it.
Regards,
Fábio | test | faveo latest git version install hello i wanted to try faveo so i tried to install the comunity edition tried the autoinstall and didn t work then tried to install all dependecies copy folder to apache etc but couldn t get it to work first is opening this page after trying on a new vm from scratch with an older tutorial i found over google i tried to connect to my site var www html support and gives me error even after giving to all folders the error still appears if someone could help me with a tutorial or steps to install would be great as for now i inform this error if someone else also has it regards fábio | 1 |
259,786 | 27,725,040,769 | IssuesEvent | 2023-03-15 01:03:59 | Techini/vulnado | https://api.github.com/repos/Techini/vulnado | opened | CVE-2021-23369 (High) detected in handlebars-4.1.0.min.js | Mend: dependency security vulnerability | ## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.min.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.1.0/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.1.0/handlebars.min.js</a></p>
<p>Path to dependency file: /client/login.html</p>
<p>Path to vulnerable library: /client/login.html</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.1.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Techini/vulnado/commit/4fb542733bde6ab3a1680292943de904d0621c33">4fb542733bde6ab3a1680292943de904d0621c33</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: com.github.jknack:handlebars:4.2.0, handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23369 (High) detected in handlebars-4.1.0.min.js - ## CVE-2021-23369 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.0.min.js</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.1.0/handlebars.min.js">https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/4.1.0/handlebars.min.js</a></p>
<p>Path to dependency file: /client/login.html</p>
<p>Path to vulnerable library: /client/login.html</p>
<p>
Dependency Hierarchy:
- :x: **handlebars-4.1.0.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Techini/vulnado/commit/4fb542733bde6ab3a1680292943de904d0621c33">4fb542733bde6ab3a1680292943de904d0621c33</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package handlebars before 4.7.7 are vulnerable to Remote Code Execution (RCE) when selecting certain compiling options to compile templates coming from an untrusted source.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23369>CVE-2021-23369</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: com.github.jknack:handlebars:4.2.0, handlebars - 4.7.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in handlebars min js cve high severity vulnerability vulnerable library handlebars min js handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file client login html path to vulnerable library client login html dependency hierarchy x handlebars min js vulnerable library found in head commit a href found in base branch master vulnerability details the package handlebars before are vulnerable to remote code execution rce when selecting certain compiling options to compile templates coming from an untrusted source publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com github jknack handlebars handlebars step up your open source security game with mend | 0 |
184,755 | 6,715,922,543 | IssuesEvent | 2017-10-14 00:23:26 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | net: tcp.c: prepare_segment() may unrightly unref packet in case of error | area: Networking priority: high | Looking at the prepare_segment() function (https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/net/ip/tcp.c#L328), it may be called either with a pre-existing packet, or create one itself. However, in multiple places, in case of error, it may call net_pkt_unref() on the pre-existing packet. This violates our policy that in the case of error, it's caller responsibility to clean up the packet (because e.g. it may want to retry it, or queue up for later retrying). | 1.0 | net: tcp.c: prepare_segment() may unrightly unref packet in case of error - Looking at the prepare_segment() function (https://github.com/zephyrproject-rtos/zephyr/blob/master/subsys/net/ip/tcp.c#L328), it may be called either with a pre-existing packet, or create one itself. However, in multiple places, in case of error, it may call net_pkt_unref() on the pre-existing packet. This violates our policy that in the case of error, it's caller responsibility to clean up the packet (because e.g. it may want to retry it, or queue up for later retrying). | non_test | net tcp c prepare segment may unrightly unref packet in case of error looking at the prepare segment function it may be called either with a pre existing packet or create one itself however in multiple places in case of error it may call net pkt unref on the pre existing packet this violates our policy that in the case of error it s caller responsibility to clean up the packet because e g it may want to retry it or queue up for later retrying | 0 |
215,166 | 16,654,008,872 | IssuesEvent | 2021-06-05 07:13:39 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: cdc/ledger failed | C-test-failure O-roachtest O-robot branch-release-20.1 | [(roachtest).cdc/ledger failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2615416&tab=buildLog) on [release-20.1@871056016d8a32ff20921b716f103e0e3abdd6e2](https://github.com/cockroachdb/cockroach/commits/871056016d8a32ff20921b716f103e0e3abdd6e2):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/ledger/run_1
cdc.go:880,cdc.go:200,cdc.go:570,test_runner.go:749: max latency was more than allowed: 1m18.597441227s vs 1m0s
```
<details><summary>More</summary><p>
Artifacts: [/cdc/ledger](https://teamcity.cockroachdb.com/viewLog.html?buildId=2615416&tab=artifacts#/cdc/ledger)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Fledger.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| 2.0 | roachtest: cdc/ledger failed - [(roachtest).cdc/ledger failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2615416&tab=buildLog) on [release-20.1@871056016d8a32ff20921b716f103e0e3abdd6e2](https://github.com/cockroachdb/cockroach/commits/871056016d8a32ff20921b716f103e0e3abdd6e2):
```
The test failed on branch=release-20.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/cdc/ledger/run_1
cdc.go:880,cdc.go:200,cdc.go:570,test_runner.go:749: max latency was more than allowed: 1m18.597441227s vs 1m0s
```
<details><summary>More</summary><p>
Artifacts: [/cdc/ledger](https://teamcity.cockroachdb.com/viewLog.html?buildId=2615416&tab=artifacts#/cdc/ledger)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Acdc%2Fledger.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
| test | roachtest cdc ledger failed on the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts cdc ledger run cdc go cdc go cdc go test runner go max latency was more than allowed vs more artifacts powered by | 1 |
306,327 | 26,458,657,757 | IssuesEvent | 2023-01-16 15:53:14 | OpenLiberty/guides-common | https://api.github.com/repos/OpenLiberty/guides-common | closed | Guide for GraphQL Client | ID reviewed new guide peer reviewed dev content reviewed SME reviewed user reviewed final end-to-end test completed signed off final content reviewed | ## Running GraphQL queries by GraphQL client
**What you’ll learn**
- Intro
- What is GraphQL client?
- Description of application with diagram as slide 3
- Ask user to study the Querying and modifying data by GraphQL
**Additional prerequisites**
- Docker
**Getting started**
**Implementing the Graphql client interface**
- Create the SystemClient interface
- Explain the detail
**Implementing Inventory service**
- Create the InventoryResource class
- Explain how the InventoryResource use the SystemClient
- Configure the pom.xml, and server.xml
- Create the microprofile-config.properties to configure the GraphQL client url
**Building and running the application**
- Run scripts to build and start the application: two system, graphql, and inventory services in docker container
- Run the openapi ui to invoke the inventory APIs to queries the system information, update the note, and queries system loads
**Tearing down the environment** | 1.0 | Guide for GraphQL Client - ## Running GraphQL queries by GraphQL client
**What you’ll learn**
- Intro
- What is GraphQL client?
- Description of application with diagram as slide 3
- Ask user to study the Querying and modifying data by GraphQL
**Additional prerequisites**
- Docker
**Getting started**
**Implementing the Graphql client interface**
- Create the SystemClient interface
- Explain the detail
**Implementing Inventory service**
- Create the InventoryResource class
- Explain how the InventoryResource use the SystemClient
- Configure the pom.xml, and server.xml
- Create the microprofile-config.properties to configure the GraphQL client url
**Building and running the application**
- Run scripts to build and start the application: two system, graphql, and inventory services in docker container
- Run the openapi ui to invoke the inventory APIs to queries the system information, update the note, and queries system loads
**Tearing down the environment** | test | guide for graphql client running graphql queries by graphql client what you’ll learn intro what is graphql client description of application with diagram as slide ask user to study the querying and modifying data by graphql additional prerequisites docker getting started implementing the graphql client interface create the systemclient interface explain the detail implementing inventory service create the inventoryresource class explain how the inventoryresource use the systemclient configure the pom xml and server xml create the microprofile config properties to configure the graphql client url building and running the application run scripts to build and start the application two system graphql and inventory services in docker container run the openapi ui to invoke the inventory apis to queries the system information update the note and queries system loads tearing down the environment | 1 |
175,337 | 13,546,846,180 | IssuesEvent | 2020-09-17 02:26:50 | JoshSevy/dreamboard | https://api.github.com/repos/JoshSevy/dreamboard | closed | ImageCard Testing | testing | - [ ] Write tests to make sure card is rendering correctly
- [ ] check ref and check urls
| 1.0 | ImageCard Testing - - [ ] Write tests to make sure card is rendering correctly
- [ ] check ref and check urls
| test | imagecard testing write tests to make sure card is rendering correctly check ref and check urls | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.