Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
49,225 | 10,331,090,553 | IssuesEvent | 2019-09-02 16:33:29 | scorelab/TensorMap | https://api.github.com/repos/scorelab/TensorMap | opened | Find an issue in TensorMap and open an issue | Google Code-In | Setup project then find an issue in the project and open an issue in the project’s GitHub page | 1.0 | Find an issue in TensorMap and open an issue - Setup project then find an issue in the project and open an issue in the project’s GitHub page | non_infrastructure | find an issue in tensormap and open an issue setup project then find an issue in the project and open an issue in the project’s github page | 0 |
3,476 | 4,330,141,814 | IssuesEvent | 2016-07-26 19:03:01 | yale-web-technologies/mirador-annotations | https://api.github.com/repos/yale-web-technologies/mirador-annotations | closed | Enforce unique IDs | infrastructure | Please enforce unique IDs on all objects. Any attempt to create a new object with an existing ID should cause an error and handled appropriately.
| 1.0 | Enforce unique IDs - Please enforce unique IDs on all objects. Any attempt to create a new object with an existing ID should cause an error and handled appropriately.
| infrastructure | enforce unique ids please enforce unique ids on all objects any attempt to create a new object with an existing id should cause an error and handled appropriately | 1 |
11,765 | 9,418,050,570 | IssuesEvent | 2019-04-10 18:14:44 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Update PowerShell test to not rely on Get-Date | area:infrastructure bug triaged | The VerifySDKImage_PowerShellScenario test attempts to verify that PowerShell works in the container by calling Get-Date and comparing the output value to the current date from the test code. There can be a difference in time between when the value is calculated in the container versus when the value is calculated in test code. If the time difference crosses over midnight, then the two values will correspond to different dates and fail the test. Another possibility is that the container and the test runner are running in different time zones and also cross the midnight time boundary.
The test should be updated to make use of a PowerShell command that is not date/time-dependent. | 1.0 | Update PowerShell test to not rely on Get-Date - The VerifySDKImage_PowerShellScenario test attempts to verify that PowerShell works in the container by calling Get-Date and comparing the output value to the current date from the test code. There can be a difference in time between when the value is calculated in the container versus when the value is calculated in test code. If the time difference crosses over midnight, then the two values will correspond to different dates and fail the test. Another possibility is that the container and the test runner are running in different time zones and also cross the midnight time boundary.
The test should be updated to make use of a PowerShell command that is not date/time-dependent. | infrastructure | update powershell test to not rely on get date the verifysdkimage powershellscenario test attempts to verify that powershell works in the container by calling get date and comparing the output value to the current date from the test code there can be a difference in time between when the value is calculated in the container versus when the value is calculated in test code if the time difference crosses over midnight then the two values will correspond to different dates and fail the test another possibility is that the container and the test runner are running in different time zones and also cross the midnight time boundary the test should be updated to make use of a powershell command that is not date time dependent | 1 |
5,442 | 5,656,500,947 | IssuesEvent | 2017-04-10 01:58:32 | openwhisk/openwhisk-wskdeploy | https://api.github.com/repos/openwhisk/openwhisk-wskdeploy | closed | Proposal: Enable the travis support with openwhisk and add the structure for integration tests | high infrastructure | In order to run integration tests against an openwhisk deployment, openwhisk-wskdeploy needs to enable travis support with core openwhisk installation. Currently this is missing in this project.
openwhisk-wskdeploy needs to have an integration tests folder containing all the integration use cases supported by OpenWhisk. These test cases should pass within the travis to assure the code quality. Currently there is no integration test available in this project.
As the first iteration, the supported integration test cases can start with an easy one: install a basic action and a basic package with manifest and deployment files, verify if they have been installed in openwhisk, remove them in openwhisk, and verify if they have been removed in openwhisk.
For further iterations, the integration test cases need to be documented as the basic features for wskdeploy to support.
| 1.0 | Proposal: Enable the travis support with openwhisk and add the structure for integration tests - In order to run integration tests against an openwhisk deployment, openwhisk-wskdeploy needs to enable travis support with core openwhisk installation. Currently this is missing in this project.
openwhisk-wskdeploy needs to have an integration tests folder containing all the integration use cases supported by OpenWhisk. These test cases should pass within the travis to assure the code quality. Currently there is no integration test available in this project.
As the first iteration, the supported integration test cases can start with an easy one: install a basic action and a basic package with manifest and deployment files, verify if they have been installed in openwhisk, remove them in openwhisk, and verify if they have been removed in openwhisk.
For further iterations, the integration test cases need to be documented as the basic features for wskdeploy to support.
| infrastructure | proposal enable the travis support with openwhisk and add the structure for integration tests in order to run integration tests against an openwhisk deployment openwhisk wskdeploy needs to enable travis support with core openwhisk installation currently this is missing in this project openwhisk wskdeploy needs to have an integration tests folder containing all the integration use cases supported by openwhisk these test cases should pass within the travis to assure the code quality currently there is no integration test available in this project as the first iteration the supported integration test cases can start with an easy one install a basic action and a basic package with manifest and deployment files verify if they have been installed in openwhisk remove them in openwhisk and verify if they have been removed in openwhisk for further iterations the integration test cases need to be documented as the basic features for wskdeploy to support | 1 |
8,974 | 7,753,803,342 | IssuesEvent | 2018-05-31 02:52:07 | dotnet/corefxlab | https://api.github.com/repos/dotnet/corefxlab | closed | Packages don't get generated when the build has warnings | area-Infrastructure | But we need to support warnings for some cases, e.g. ObsoleteAttribute. | 1.0 | Packages don't get generated when the build has warnings - But we need to support warnings for some cases, e.g. ObsoleteAttribute. | infrastructure | packages don t get generated when the build has warnings but we need to support warnings for some cases e g obsoleteattribute | 1 |
63,263 | 15,529,840,684 | IssuesEvent | 2021-03-13 16:45:24 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | PPC64: Unable to detect CPU features when optimization is disabled | affected: 3.4 bug category: build/install category: core platform: ppc (PowerPC) | which leads to a fatal error during initializing the core module if build option `DCV_ENABLE_INTRINSICS` was off.
#### Error message:
```Bash
******************************************************************
* FATAL ERROR: *
* This OpenCV build doesn't support current CPU/HW configuration *
* *
* Use OPENCV_DUMP_CONFIG=1 environment variable for details *
******************************************************************
Required baseline features:
ID=200 (VSX) - NOT AVAILABLE
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.2-pre) /opencv/opencv/modules/core/src/system.cpp:625: error: (-215:Assertion failed) Missing support for required CPU baseline features. Check OpenCV build configuration and required CPU/HW setup. in function 'initialize
````
#### steps to produce
build OpenCV with `-DDCV_ENABLE_INTRINSICS=OFF`
#### workaround
skip baseline validation via environment variable `OPENCV_SKIP_CPU_BASELINE_CHECK` | 1.0 | PPC64: Unable to detect CPU features when optimization is disabled - which leads to a fatal error during initializing the core module if build option `DCV_ENABLE_INTRINSICS` was off.
#### Error message:
```Bash
******************************************************************
* FATAL ERROR: *
* This OpenCV build doesn't support current CPU/HW configuration *
* *
* Use OPENCV_DUMP_CONFIG=1 environment variable for details *
******************************************************************
Required baseline features:
ID=200 (VSX) - NOT AVAILABLE
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.2-pre) /opencv/opencv/modules/core/src/system.cpp:625: error: (-215:Assertion failed) Missing support for required CPU baseline features. Check OpenCV build configuration and required CPU/HW setup. in function 'initialize
````
#### steps to produce
build OpenCV with `-DDCV_ENABLE_INTRINSICS=OFF`
#### workaround
skip baseline validation via environment variable `OPENCV_SKIP_CPU_BASELINE_CHECK` | non_infrastructure | unable to detect cpu features when optimization is disabled which leads to a fatal error during initializing the core module if build option dcv enable intrinsics was off error message bash fatal error this opencv build doesn t support current cpu hw configuration use opencv dump config environment variable for details required baseline features id vsx not available terminate called after throwing an instance of cv exception what opencv pre opencv opencv modules core src system cpp error assertion failed missing support for required cpu baseline features check opencv build configuration and required cpu hw setup in function initialize steps to produce build opencv with ddcv enable intrinsics off workaround skip baseline validation via environment variable opencv skip cpu baseline check | 0 |
645,399 | 21,003,685,252 | IssuesEvent | 2022-03-29 20:05:25 | feast-dev/feast | https://api.github.com/repos/feast-dev/feast | closed | feast apply fails for Spark Offline Store once the registry has been created | kind/bug priority/p2 | ## Expected Behavior
It should not throw any exception
## Current Behavior
feast apply throws an exception saying "ValueError: Could not identify the source type being added."
Error Log:
```$ feast apply
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/scipy/sparse/sputils.py:16: DeprecationWarning: `np.typeDict` is a deprecated alias for `np.sctypeDict`.
supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/scipy/fftpack/__init__.py:103: DeprecationWarning: The module numpy.dual is deprecated. Instead of using dual, use the functions directly from numpy or scipy.
from numpy.dual import register_func
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/infra/offline_stores/contrib/spark_offline_store/spark_source.py:61: RuntimeWarning: The spark data source API is an experimental feature in alpha development. This API is unstable and it could and most probably will be changed in the future.
RuntimeWarning,
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2022-03-15 08:09:23,416 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2022-03-15 08:09:48,173 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2022-03-15 08:10:47,117 WARN util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
Traceback (most recent call last):
File "/grid/1/cremo/venvs/feast-spark/bin/feast", line 10, in <module>
sys.exit(cli())
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/cli.py", line 439, in apply_total_command
apply_total(repo_config, repo, skip_source_validation)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/repo_operations.py", line 251, in apply_total
store, project, registry, repo, skip_source_validation
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/repo_operations.py", line 210, in apply_total_with_repo_instance
registry_diff, infra_diff, new_infra = store._plan(repo)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/feature_store.py", line 543, in _plan
self._registry, self.project, desired_repo_contents
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/diff/registry_diff.py", line 215, in diff_between
registry, current_project, desired_repo_contents
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/diff/registry_diff.py", line 172, in extract_objects_for_keep_delete_update_add
] = FeastObjectType.get_objects_from_registry(registry, current_project)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/registry.py", line 84, in get_objects_from_registry
FeastObjectType.DATA_SOURCE: registry.list_data_sources(project=project),
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/registry.py", line 302, in list_data_sources
data_sources.append(DataSource.from_proto(data_source_proto))
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/data_source.py", line 252, in from_proto
raise ValueError("Could not identify the source type being added.")
ValueError: Could not identify the source type being added.```
## Steps to reproduce
For Spark Offline Store,
- Create the registry using feast apply
- Update registry using feast ap[ply again, with or without any change in example.py
### Specifications
- Version: 0.19.3
- Platform: linux
- Subsystem:
## Possible Solution
class DataSource(ABC):
def from_proto(data_source: DataSourceProto) -> Any:
# we should add check datasource and identify of it is of SparkSource type and use SparkSource.from_proto() | 1.0 | feast apply fails for Spark Offline Store once the registry has been created - ## Expected Behavior
It should not throw any exception
## Current Behavior
feast apply throws an exception saying "ValueError: Could not identify the source type being added."
Error Log:
```$ feast apply
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/scipy/sparse/sputils.py:16: DeprecationWarning: `np.typeDict` is a deprecated alias for `np.sctypeDict`.
supported_dtypes = [np.typeDict[x] for x in supported_dtypes]
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/scipy/fftpack/__init__.py:103: DeprecationWarning: The module numpy.dual is deprecated. Instead of using dual, use the functions directly from numpy or scipy.
from numpy.dual import register_func
/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/infra/offline_stores/contrib/spark_offline_store/spark_source.py:61: RuntimeWarning: The spark data source API is an experimental feature in alpha development. This API is unstable and it could and most probably will be changed in the future.
RuntimeWarning,
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2022-03-15 08:09:23,416 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2022-03-15 08:09:48,173 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
2022-03-15 08:10:47,117 WARN util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
Traceback (most recent call last):
File "/grid/1/cremo/venvs/feast-spark/bin/feast", line 10, in <module>
sys.exit(cli())
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/cli.py", line 439, in apply_total_command
apply_total(repo_config, repo, skip_source_validation)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/repo_operations.py", line 251, in apply_total
store, project, registry, repo, skip_source_validation
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/repo_operations.py", line 210, in apply_total_with_repo_instance
registry_diff, infra_diff, new_infra = store._plan(repo)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 280, in wrapper
raise exc.with_traceback(traceback)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/usage.py", line 269, in wrapper
return func(*args, **kwargs)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/feature_store.py", line 543, in _plan
self._registry, self.project, desired_repo_contents
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/diff/registry_diff.py", line 215, in diff_between
registry, current_project, desired_repo_contents
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/diff/registry_diff.py", line 172, in extract_objects_for_keep_delete_update_add
] = FeastObjectType.get_objects_from_registry(registry, current_project)
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/registry.py", line 84, in get_objects_from_registry
FeastObjectType.DATA_SOURCE: registry.list_data_sources(project=project),
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/registry.py", line 302, in list_data_sources
data_sources.append(DataSource.from_proto(data_source_proto))
File "/grid/1/cremo/venvs/feast-spark/lib/python3.7/site-packages/feast/data_source.py", line 252, in from_proto
raise ValueError("Could not identify the source type being added.")
ValueError: Could not identify the source type being added.```
## Steps to reproduce
For Spark Offline Store,
- Create the registry using feast apply
- Update registry using feast ap[ply again, with or without any change in example.py
### Specifications
- Version: 0.19.3
- Platform: linux
- Subsystem:
## Possible Solution
class DataSource(ABC):
def from_proto(data_source: DataSourceProto) -> Any:
# we should add check datasource and identify of it is of SparkSource type and use SparkSource.from_proto() | non_infrastructure | feast apply fails for spark offline store once the registry has been created expected behavior it should not throw any exception current behavior feast apply throws an exception saying valueerror could not identify the source type being added error log feast apply grid cremo venvs feast spark lib site packages scipy sparse sputils py deprecationwarning np typedict is a deprecated alias for np sctypedict supported dtypes for x in supported dtypes grid cremo venvs feast spark lib site packages scipy fftpack init py deprecationwarning the module numpy dual is deprecated instead of using dual use the functions directly from numpy or scipy from numpy dual import register func grid cremo venvs feast spark lib site packages feast infra offline stores contrib spark offline store spark source py runtimewarning the spark data source api is an experimental feature in alpha development this api is unstable and it could and most probably will be changed in the future runtimewarning setting default log level to warn to adjust logging level use sc setloglevel newlevel for sparkr use setloglevel newlevel warn util nativecodeloader unable to load native hadoop library for your platform using builtin java classes where applicable warn yarn client neither spark yarn jars nor spark yarn archive is set falling back to uploading libraries under spark home warn util package truncated the string representation of a plan since it was too large this behavior can be adjusted by setting spark sql debug maxtostringfields traceback most recent call last file grid cremo venvs feast spark bin feast line in sys exit cli file grid cremo venvs feast spark lib site packages click core py line in call return self main args kwargs file grid cremo venvs feast spark lib site packages click core py line in main rv self invoke ctx file grid cremo venvs feast spark lib site packages click core py line in invoke return process result sub ctx command invoke sub ctx file grid cremo venvs feast spark lib site packages click core py line in invoke return ctx invoke self callback ctx params file grid cremo venvs feast spark lib site packages click core py line in invoke return callback args kwargs file grid cremo venvs feast spark lib site packages click decorators py line in new func return f get current context args kwargs file grid cremo venvs feast spark lib site packages feast cli py line in apply total command apply total repo config repo skip source validation file grid cremo venvs feast spark lib site packages feast usage py line in wrapper return func args kwargs file grid cremo venvs feast spark lib site packages feast repo operations py line in apply total store project registry repo skip source validation file grid cremo venvs feast spark lib site packages feast repo operations py line in apply total with repo instance registry diff infra diff new infra store plan repo file grid cremo venvs feast spark lib site packages feast usage py line in wrapper raise exc with traceback traceback file grid cremo venvs feast spark lib site packages feast usage py line in wrapper return func args kwargs file grid cremo venvs feast spark lib site packages feast feature store py line in plan self registry self project desired repo contents file grid cremo venvs feast spark lib site packages feast diff registry diff py line in diff between registry current project desired repo contents file grid cremo venvs feast spark lib site packages feast diff registry diff py line in extract objects for keep delete update add feastobjecttype get objects from registry registry current project file grid cremo venvs feast spark lib site packages feast registry py line in get objects from registry feastobjecttype data source registry list data sources project project file grid cremo venvs feast spark lib site packages feast registry py line in list data sources data sources append datasource from proto data source proto file grid cremo venvs feast spark lib site packages feast data source py line in from proto raise valueerror could not identify the source type being added valueerror could not identify the source type being added steps to reproduce for spark offline store create the registry using feast apply update registry using feast ap ply again with or without any change in example py specifications version platform linux subsystem possible solution class datasource abc def from proto data source datasourceproto any we should add check datasource and identify of it is of sparksource type and use sparksource from proto | 0 |
9,547 | 8,032,641,218 | IssuesEvent | 2018-07-28 17:44:00 | ionide/ionide-vscode-fsharp | https://api.github.com/repos/ionide/ionide-vscode-fsharp | closed | Can't start in devMode | infrastructure | I start FsAutoComplete from CLI:
```shell
➜ FsAutoComplete git:(master) ✗ dotnet src/FsAutoComplete.netcore/bin/Debug/netcoreapp2.0/fsautocomplete.dll --mode http
[21:32:52 INF] Smooth! Suave listener started in 40,563 with binding 127.0.0.1:8088
```
I set `devMode = true`.
And now, when `LanguageService` try to start it's failing.
```fs
doRetry startByDevMode
|> Promise.onSuccess (fun _ ->
printfn "success"
socketNotify <- startSocket "notify"
socketNotifyWorkspace <- startSocket "notifyWorkspace"
()
)
|> Promise.onFail (fun _ ->
printfn "Failed"
)
```
I only see `"Failed"` in the output. Any idea ? | 1.0 | Can't start in devMode - I start FsAutoComplete from CLI:
```shell
➜ FsAutoComplete git:(master) ✗ dotnet src/FsAutoComplete.netcore/bin/Debug/netcoreapp2.0/fsautocomplete.dll --mode http
[21:32:52 INF] Smooth! Suave listener started in 40,563 with binding 127.0.0.1:8088
```
I set `devMode = true`.
And now, when `LanguageService` try to start it's failing.
```fs
doRetry startByDevMode
|> Promise.onSuccess (fun _ ->
printfn "success"
socketNotify <- startSocket "notify"
socketNotifyWorkspace <- startSocket "notifyWorkspace"
()
)
|> Promise.onFail (fun _ ->
printfn "Failed"
)
```
I only see `"Failed"` in the output. Any idea ? | infrastructure | can t start in devmode i start fsautocomplete from cli shell ➜ fsautocomplete git master ✗ dotnet src fsautocomplete netcore bin debug fsautocomplete dll mode http smooth suave listener started in with binding i set devmode true and now when languageservice try to start it s failing fs doretry startbydevmode promise onsuccess fun printfn success socketnotify startsocket notify socketnotifyworkspace startsocket notifyworkspace promise onfail fun printfn failed i only see failed in the output any idea | 1 |
544,419 | 15,893,411,081 | IssuesEvent | 2021-04-11 05:41:57 | remnoteio/remnote-issues | https://api.github.com/repos/remnoteio/remnote-issues | closed | indelible strange line in rem | fixed-in-remnote-1.3 priority=3 | ERROR: type should be string, got "\r\n\r\nhttps://user-images.githubusercontent.com/20538090/105758826-c31eb200-5f60-11eb-8b79-4915893c348b.mp4\r\n\r\n\r\nI think the reason of that line is because parent is multiline card" | 1.0 | indelible strange line in rem -
https://user-images.githubusercontent.com/20538090/105758826-c31eb200-5f60-11eb-8b79-4915893c348b.mp4
I think the reason of that line is because parent is multiline card | non_infrastructure | indelible strange line in rem i think the reason of that line is because parent is multiline card | 0 |
26,410 | 11,305,769,003 | IssuesEvent | 2020-01-18 08:41:26 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | security: allow users to enable password auth for 'root' | A-security C-enhancement | tldr; this issue proposes to enable:
- setting/changing the password of the `root` user;
- allowing root to log in using their password on the UI;
- allowing root to log in using their password on SQL, subject to regular rules in the HBA configuration.
It also fixes the [regression](https://github.com/cockroachdb/cockroach/issues/43847#issuecomment-572615646) introduced by #42563, by enabling `root` authentication on the admin UI to obtain a login cookie.
### Background
CockroachDB currently offers 5 separate guardrails to ensure that `root` is always able to connect even when the authentication configuration is botched:
1. when running `--insecure`, anyone can log in without auth.
2. the crdb [HBA configuration](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html) enforces that `host all root all cert` is always the first rule, meaning that clients can log in as `root` if and only if they present a valid TLS client cert for `root` (and because the method is `cert` and not `cert-password`, password auth is rejected for `root` in any case).
3. `ALTER USER WITH PASSWORD` is disallowed for `root`, so root cannot have a password
4. the internal "check password" mechanism, shared by both SQL and HTTP connections, to compare a client-provided password with a user record fails with an error if the user is `root`.
5. the UI `UserLogin` HTTP API reports an error if the user is `root`.
6. the cockroach CLI commands using SQL connections report an error if the user is `root` and a password is supplied.
Meanwhile, there is a separate, unrelated (but relevant) rule:
7. if a user's password is unset/empty, then this user is not able to use password authentication.
### Proposal
This issue proposes to **remove rules 3-6** specifically, without changing the others.
This proposal would not change the security rules for a cluster using the default configuration: by default, root would not be able to log in using password anywhere (by default, the root account has no password so rule 6 applies) and is required to present a cert on SQL (due to rule 2).
### Non-Pitfalls
A possible counter-argument to this proposal is that CockroachdB uses `root` internally to establish SQL client connections towards itself.
This is not an obstacle: in insecure clusters, the password would be ignored anyway; in secure clusters, CockroachDB's internal connections use a valid client cert. | True | security: allow users to enable password auth for 'root' - tldr; this issue proposes to enable:
- setting/changing the password of the `root` user;
- allowing root to log in using their password on the UI;
- allowing root to log in using their password on SQL, subject to regular rules in the HBA configuration.
It also fixes the [regression](https://github.com/cockroachdb/cockroach/issues/43847#issuecomment-572615646) introduced by #42563, by enabling `root` authentication on the admin UI to obtain a login cookie.
### Background
CockroachDB currently offers 5 separate guardrails to ensure that `root` is always able to connect even when the authentication configuration is botched:
1. when running `--insecure`, anyone can log in without auth.
2. the crdb [HBA configuration](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html) enforces that `host all root all cert` is always the first rule, meaning that clients can log in as `root` if and only if they present a valid TLS client cert for `root` (and because the method is `cert` and not `cert-password`, password auth is rejected for `root` in any case).
3. `ALTER USER WITH PASSWORD` is disallowed for `root`, so root cannot have a password
4. the internal "check password" mechanism, shared by both SQL and HTTP connections, to compare a client-provided password with a user record fails with an error if the user is `root`.
5. the UI `UserLogin` HTTP API reports an error if the user is `root`.
6. the cockroach CLI commands using SQL connections report an error if the user is `root` and a password is supplied.
Meanwhile, there is a separate, unrelated (but relevant) rule:
7. if a user's password is unset/empty, then this user is not able to use password authentication.
### Proposal
This issue proposes to **remove rules 3-6** specifically, without changing the others.
This proposal would not change the security rules for a cluster using the default configuration: by default, root would not be able to log in using password anywhere (by default, the root account has no password so rule 6 applies) and is required to present a cert on SQL (due to rule 2).
### Non-Pitfalls
A possible counter-argument to this proposal is that CockroachdB uses `root` internally to establish SQL client connections towards itself.
This is not an obstacle: in insecure clusters, the password would be ignored anyway; in secure clusters, CockroachDB's internal connections use a valid client cert. | non_infrastructure | security allow users to enable password auth for root tldr this issue proposes to enable setting changing the password of the root user allowing root to log in using their password on the ui allowing root to log in using their password on sql subject to regular rules in the hba configuration it also fixes the introduced by by enabling root authentication on the admin ui to obtain a login cookie background cockroachdb currently offers separate guardrails to ensure that root is always able to connect even when the authentication configuration is botched when running insecure anyone can log in without auth the crdb enforces that host all root all cert is always the first rule meaning that clients can log in as root if and only if they present a valid tls client cert for root and because the method is cert and not cert password password auth is rejected for root in any case alter user with password is disallowed for root so root cannot have a password the internal check password mechanism shared by both sql and http connections to compare a client provided password with a user record fails with an error if the user is root the ui userlogin http api reports an error if the user is root the cockroach cli commands using sql connections report an error if the user is root and a password is supplied meanwhile there is a separate unrelated but relevant rule if a user s password is unset empty then this user is not able to use password authentication proposal this issue proposes to remove rules specifically without changing the others this proposal would not change the security rules for a cluster using the default configuration by default root would not be able to log in using password anywhere by default the root account has no password so rule applies and is required to present a cert on sql due to rule non pitfalls a possible counter argument to this proposal is that cockroachdb uses root internally to establish sql client connections towards itself this is not an obstacle in insecure clusters the password would be ignored anyway in secure clusters cockroachdb s internal connections use a valid client cert | 0 |
22,625 | 15,326,022,258 | IssuesEvent | 2021-02-26 02:39:36 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Threading.Tasks.DataFlow and ComponentModel.Annotations are not building NetCoreAppCurrent | area-Infrastructure-libraries easy | These libraries are included in the shared framework, but we aren't building them for `$(NetCoreAppCurrent)`.
https://github.com/dotnet/runtime/blob/8a3fd5a065b019cc70a600a759ea0c87c14cae8f/src/libraries/System.Threading.Tasks.Dataflow/src/System.Threading.Tasks.Dataflow.csproj#L3
https://github.com/dotnet/runtime/blob/8a3fd5a065b019cc70a600a759ea0c87c14cae8f/src/libraries/System.ComponentModel.Annotations/src/System.ComponentModel.Annotations.csproj#L3
This causes the following problems:
1. These libraries have `<Nullable>enable</Nullable>` in them, but they are building against netstandard APIs, which don't have nullablility annotations.
2. When someone wants to use a new API (or new attributes, like `DynamicDependency` and `UnconditionalSuppressMessage`), they are not available.
We should add a `$(NetCoreAppCurrent)` target for these libraries, and ship that in the shared framework / runtimepack. I analyzed a recent shared framework, and these were the only 2 libraries (outside of `System.Runtime.CompilerServices.Unsafe`) that didn't target `net6.0`.
cc @LakshanF @joperezr @safern @ViktorHofer @ericstj | 1.0 | Threading.Tasks.DataFlow and ComponentModel.Annotations are not building NetCoreAppCurrent - These libraries are included in the shared framework, but we aren't building them for `$(NetCoreAppCurrent)`.
https://github.com/dotnet/runtime/blob/8a3fd5a065b019cc70a600a759ea0c87c14cae8f/src/libraries/System.Threading.Tasks.Dataflow/src/System.Threading.Tasks.Dataflow.csproj#L3
https://github.com/dotnet/runtime/blob/8a3fd5a065b019cc70a600a759ea0c87c14cae8f/src/libraries/System.ComponentModel.Annotations/src/System.ComponentModel.Annotations.csproj#L3
This causes the following problems:
1. These libraries have `<Nullable>enable</Nullable>` in them, but they are building against netstandard APIs, which don't have nullablility annotations.
2. When someone wants to use a new API (or new attributes, like `DynamicDependency` and `UnconditionalSuppressMessage`), they are not available.
We should add a `$(NetCoreAppCurrent)` target for these libraries, and ship that in the shared framework / runtimepack. I analyzed a recent shared framework, and these were the only 2 libraries (outside of `System.Runtime.CompilerServices.Unsafe`) that didn't target `net6.0`.
cc @LakshanF @joperezr @safern @ViktorHofer @ericstj | infrastructure | threading tasks dataflow and componentmodel annotations are not building netcoreappcurrent these libraries are included in the shared framework but we aren t building them for netcoreappcurrent this causes the following problems these libraries have enable in them but they are building against netstandard apis which don t have nullablility annotations when someone wants to use a new api or new attributes like dynamicdependency and unconditionalsuppressmessage they are not available we should add a netcoreappcurrent target for these libraries and ship that in the shared framework runtimepack i analyzed a recent shared framework and these were the only libraries outside of system runtime compilerservices unsafe that didn t target cc lakshanf joperezr safern viktorhofer ericstj | 1 |
32,440 | 26,700,726,940 | IssuesEvent | 2023-01-27 14:11:56 | open-telemetry/opentelemetry.io | https://api.github.com/repos/open-telemetry/opentelemetry.io | opened | npm run serve misbehaving | bug infrastructure | It seems that `netlify-cli` serve is misbehaving. For example, page refresh, after a change doesn't always show the resulting change.
As a workaround, view the site served at http://localhost:1313/, which is the site as served by Hugo rather than Netlify dev. | 1.0 | npm run serve misbehaving - It seems that `netlify-cli` serve is misbehaving. For example, page refresh, after a change doesn't always show the resulting change.
As a workaround, view the site served at http://localhost:1313/, which is the site as served by Hugo rather than Netlify dev. | infrastructure | npm run serve misbehaving it seems that netlify cli serve is misbehaving for example page refresh after a change doesn t always show the resulting change as a workaround view the site served at which is the site as served by hugo rather than netlify dev | 1 |
18,674 | 13,066,059,137 | IssuesEvent | 2020-07-30 20:55:05 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | HitSpool interface: report running component (Trac #944) | Migrated from Trac enhancement infrastructure | The cronjob installed on expcont for doing an hourly:
fab hs_status
that reports running or not running comonents to i3live is still not working properly.
Migrated from https://code.icecube.wisc.edu/ticket/944
```json
{
"status": "closed",
"changetime": "2015-08-10T22:35:05",
"description": "The cronjob installed on expcont for doing an hourly:\n\nfab hs_status\n\nthat reports running or not running comonents to i3live is still not working properly. ",
"reporter": "dheereman",
"cc": "dheereman",
"resolution": "wontfix",
"_ts": "1439246105293024",
"component": "infrastructure",
"summary": "HitSpool interface: report running component",
"priority": "normal",
"keywords": "hitspool",
"time": "2015-04-21T09:52:00",
"milestone": "",
"owner": "dheereman",
"type": "enhancement"
}
```
| 1.0 | HitSpool interface: report running component (Trac #944) - The cronjob installed on expcont for doing an hourly:
fab hs_status
that reports running or not running comonents to i3live is still not working properly.
Migrated from https://code.icecube.wisc.edu/ticket/944
```json
{
"status": "closed",
"changetime": "2015-08-10T22:35:05",
"description": "The cronjob installed on expcont for doing an hourly:\n\nfab hs_status\n\nthat reports running or not running comonents to i3live is still not working properly. ",
"reporter": "dheereman",
"cc": "dheereman",
"resolution": "wontfix",
"_ts": "1439246105293024",
"component": "infrastructure",
"summary": "HitSpool interface: report running component",
"priority": "normal",
"keywords": "hitspool",
"time": "2015-04-21T09:52:00",
"milestone": "",
"owner": "dheereman",
"type": "enhancement"
}
```
| infrastructure | hitspool interface report running component trac the cronjob installed on expcont for doing an hourly fab hs status that reports running or not running comonents to is still not working properly migrated from json status closed changetime description the cronjob installed on expcont for doing an hourly n nfab hs status n nthat reports running or not running comonents to is still not working properly reporter dheereman cc dheereman resolution wontfix ts component infrastructure summary hitspool interface report running component priority normal keywords hitspool time milestone owner dheereman type enhancement | 1 |
58,787 | 6,620,564,310 | IssuesEvent | 2017-09-21 15:56:24 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | opened | Improved --wait behaviour | testplan-item | Refs: https://github.com/Microsoft/vscode/issues/24327
- [ ] Win
- [ ] Mac
- [ ] Linux
Complexity: 2
Running code with `--wait` parameter from the command line makes the command line process wait until the window that opens is closed. During this milestone we are now tracking the file to open as argument and also terminate the calling process when all the files are closed. This allows to reuse an existing Code instance for this purpose.
* verify you can use `--wait` with 0, 1 or many files and the calling process terminates when the files close
* verify the command line will terminate also when the target window closes | 1.0 | Improved --wait behaviour - Refs: https://github.com/Microsoft/vscode/issues/24327
- [ ] Win
- [ ] Mac
- [ ] Linux
Complexity: 2
Running code with `--wait` parameter from the command line makes the command line process wait until the window that opens is closed. During this milestone we are now tracking the file to open as argument and also terminate the calling process when all the files are closed. This allows to reuse an existing Code instance for this purpose.
* verify you can use `--wait` with 0, 1 or many files and the calling process terminates when the files close
* verify the command line will terminate also when the target window closes | non_infrastructure | improved wait behaviour refs win mac linux complexity running code with wait parameter from the command line makes the command line process wait until the window that opens is closed during this milestone we are now tracking the file to open as argument and also terminate the calling process when all the files are closed this allows to reuse an existing code instance for this purpose verify you can use wait with or many files and the calling process terminates when the files close verify the command line will terminate also when the target window closes | 0 |
64,204 | 8,718,340,856 | IssuesEvent | 2018-12-07 20:06:00 | publiclab/plots2 | https://api.github.com/repos/publiclab/plots2 | opened | Move some files from root directory into subfolders if possible | documentation help-wanted | We have many different files in the root directory, but they push down the README quite far when viewed on GitHub: https://github.com/publiclab/plots2

Some files seem like they could be kept in a subfolder to save vertical space -- some have already been moved to `.github/` -- but it'll take research to tell which we can move without breaking anything. Let's do some careful research and paste in links in the comments for any docs showing where we could stash these files.
Thanks! | 1.0 | Move some files from root directory into subfolders if possible - We have many different files in the root directory, but they push down the README quite far when viewed on GitHub: https://github.com/publiclab/plots2

Some files seem like they could be kept in a subfolder to save vertical space -- some have already been moved to `.github/` -- but it'll take research to tell which we can move without breaking anything. Let's do some careful research and paste in links in the comments for any docs showing where we could stash these files.
Thanks! | non_infrastructure | move some files from root directory into subfolders if possible we have many different files in the root directory but they push down the readme quite far when viewed on github some files seem like they could be kept in a subfolder to save vertical space some have already been moved to github but it ll take research to tell which we can move without breaking anything let s do some careful research and paste in links in the comments for any docs showing where we could stash these files thanks | 0 |
4,715 | 5,243,726,911 | IssuesEvent | 2017-01-31 21:29:55 | github/VisualStudio | https://api.github.com/repos/github/VisualStudio | closed | Update step 126 in Test Manifest to reflect maintainer workflow | infrastructure | It's currently:
> Clicking on a pull request title opens browser window to pull request on .com
But now clicking on the title displays a detailed view of the pull request | 1.0 | Update step 126 in Test Manifest to reflect maintainer workflow - It's currently:
> Clicking on a pull request title opens browser window to pull request on .com
But now clicking on the title displays a detailed view of the pull request | infrastructure | update step in test manifest to reflect maintainer workflow it s currently clicking on a pull request title opens browser window to pull request on com but now clicking on the title displays a detailed view of the pull request | 1 |
65,001 | 14,707,130,779 | IssuesEvent | 2021-01-04 21:04:23 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | microProfile-4.0 performance issue with appSecurity-3.0/jaspic. | in:MicroProfile performance team:Core Security | I am noticing this extra time spent in an mp-4.0 app compared to a mp-3.3 app. (JaspiServiceImpl.isAnyProviderRegistered)
```
Parent 0 0.11 2.26 7 147 J:com/ibm/ws/webcontainer/security/WebAppSecurityCollaboratorImpl.performSecurityChecks(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;Ljavax/security/auth/Subject;Lcom/ibm/ws/webcontainer/security/WebSecurityContext;)V
Self 0 0.11 2.26 7 147 J:com/ibm/ws/security/jaspi/JaspiServiceImpl.isAnyProviderRegistered(Lcom/ibm/ws/webcontainer/security/WebRequest;)Z
Child 0 1.22 2.08 79 135 J:com/ibm/ws/security/javaeesec/BridgeBuilderImpl.buildBridgeIfNeeded(Ljava/lang/String;Ljavax/security/auth/message/config/AuthConfigFactory;)V
Child 0 0.06 0.06 4 4 J:com/ibm/wsspi/kernel/service/utils/AtomicServiceReference.getService()Ljava/lang/Object;
Child 0 0.00 0.02 0 1 J:com/ibm/ws/security/jaspi/JaspiServiceImpl.getAuthConfigFactory()Ljavax/security/auth/message/config/AuthConfigFactory;
```
It looks like mpJwt-1.2 pulls in appSecurity-3.0 (while mp3.3 used appSecurity-2.0), so now isJapsiEnabled is true here, which leads to the extra 2.26% time seen above:
https://github.com/OpenLiberty/open-liberty/blob/c2b0154dcec9728aea7bf3bb72b8c7d99f18b7d7/dev/com.ibm.ws.webcontainer.security/src/com/ibm/ws/webcontainer/security/WebAppSecurityCollaboratorImpl.java#L667
Is there anyway to disable jaspic since I am not using it, or is this something spec-wise we’re stuck with? I also wonder why it is doing the buildBridgeIfNeeded on every request,
| True | microProfile-4.0 performance issue with appSecurity-3.0/jaspic. - I am noticing this extra time spent in an mp-4.0 app compared to a mp-3.3 app. (JaspiServiceImpl.isAnyProviderRegistered)
```
Parent 0 0.11 2.26 7 147 J:com/ibm/ws/webcontainer/security/WebAppSecurityCollaboratorImpl.performSecurityChecks(Ljavax/servlet/http/HttpServletRequest;Ljavax/servlet/http/HttpServletResponse;Ljavax/security/auth/Subject;Lcom/ibm/ws/webcontainer/security/WebSecurityContext;)V
Self 0 0.11 2.26 7 147 J:com/ibm/ws/security/jaspi/JaspiServiceImpl.isAnyProviderRegistered(Lcom/ibm/ws/webcontainer/security/WebRequest;)Z
Child 0 1.22 2.08 79 135 J:com/ibm/ws/security/javaeesec/BridgeBuilderImpl.buildBridgeIfNeeded(Ljava/lang/String;Ljavax/security/auth/message/config/AuthConfigFactory;)V
Child 0 0.06 0.06 4 4 J:com/ibm/wsspi/kernel/service/utils/AtomicServiceReference.getService()Ljava/lang/Object;
Child 0 0.00 0.02 0 1 J:com/ibm/ws/security/jaspi/JaspiServiceImpl.getAuthConfigFactory()Ljavax/security/auth/message/config/AuthConfigFactory;
```
It looks like mpJwt-1.2 pulls in appSecurity-3.0 (while mp3.3 used appSecurity-2.0), so now isJapsiEnabled is true here, which leads to the extra 2.26% time seen above:
https://github.com/OpenLiberty/open-liberty/blob/c2b0154dcec9728aea7bf3bb72b8c7d99f18b7d7/dev/com.ibm.ws.webcontainer.security/src/com/ibm/ws/webcontainer/security/WebAppSecurityCollaboratorImpl.java#L667
Is there anyway to disable jaspic since I am not using it, or is this something spec-wise we’re stuck with? I also wonder why it is doing the buildBridgeIfNeeded on every request,
| non_infrastructure | microprofile performance issue with appsecurity jaspic i am noticing this extra time spent in an mp app compared to a mp app jaspiserviceimpl isanyproviderregistered parent j com ibm ws webcontainer security webappsecuritycollaboratorimpl performsecuritychecks ljavax servlet http httpservletrequest ljavax servlet http httpservletresponse ljavax security auth subject lcom ibm ws webcontainer security websecuritycontext v self j com ibm ws security jaspi jaspiserviceimpl isanyproviderregistered lcom ibm ws webcontainer security webrequest z child j com ibm ws security javaeesec bridgebuilderimpl buildbridgeifneeded ljava lang string ljavax security auth message config authconfigfactory v child j com ibm wsspi kernel service utils atomicservicereference getservice ljava lang object child j com ibm ws security jaspi jaspiserviceimpl getauthconfigfactory ljavax security auth message config authconfigfactory it looks like mpjwt pulls in appsecurity while used appsecurity so now isjapsienabled is true here which leads to the extra time seen above is there anyway to disable jaspic since i am not using it or is this something spec wise we’re stuck with i also wonder why it is doing the buildbridgeifneeded on every request | 0 |
6,551 | 6,510,168,559 | IssuesEvent | 2017-08-25 01:17:48 | proudcity/wp-proudcity | https://api.github.com/repos/proudcity/wp-proudcity | closed | Build in GKE Triggers instead of Jenkins? | infrastructure ready | @aschmoe With the switch to the bigger VMs, we got put on a newer version of Kubernetes that seems to have broken docker building in Jenkins (see last build in https://jenkins.proudcity.com/job/proudcity-api-swagger/). It sounds like the fix is to just rebuild the Jenkins task pod with the latest from the dind (docker in docker) image. https://github.com/jenkinsci/docker-jnlp-slave/issues/40
This got me testing the Build Triggers in JCE > Container Registry > Build Triggers. It was simple to setup and it just seems to work: https://console.cloud.google.com/gcr/builds/44aefd1e-67f8-48e1-83fc-348781d3549e?project=proudcity-1184&authuser=0.
@todo:
* discuss with @aschmoe
* evaluate pricing
* migrate over all docker build processes
* disable jenkins tasks (or at least stop git watching)
This would just be for the docker build tasks. The other tasks (build.sh, cmd.sh, etc) would still use Jenkins. | 1.0 | Build in GKE Triggers instead of Jenkins? - @aschmoe With the switch to the bigger VMs, we got put on a newer version of Kubernetes that seems to have broken docker building in Jenkins (see last build in https://jenkins.proudcity.com/job/proudcity-api-swagger/). It sounds like the fix is to just rebuild the Jenkins task pod with the latest from the dind (docker in docker) image. https://github.com/jenkinsci/docker-jnlp-slave/issues/40
This got me testing the Build Triggers in JCE > Container Registry > Build Triggers. It was simple to setup and it just seems to work: https://console.cloud.google.com/gcr/builds/44aefd1e-67f8-48e1-83fc-348781d3549e?project=proudcity-1184&authuser=0.
@todo:
* discuss with @aschmoe
* evaluate pricing
* migrate over all docker build processes
* disable jenkins tasks (or at least stop git watching)
This would just be for the docker build tasks. The other tasks (build.sh, cmd.sh, etc) would still use Jenkins. | infrastructure | build in gke triggers instead of jenkins aschmoe with the switch to the bigger vms we got put on a newer version of kubernetes that seems to have broken docker building in jenkins see last build in it sounds like the fix is to just rebuild the jenkins task pod with the latest from the dind docker in docker image this got me testing the build triggers in jce container registry build triggers it was simple to setup and it just seems to work todo discuss with aschmoe evaluate pricing migrate over all docker build processes disable jenkins tasks or at least stop git watching this would just be for the docker build tasks the other tasks build sh cmd sh etc would still use jenkins | 1 |
130,042 | 18,154,707,110 | IssuesEvent | 2021-09-26 21:38:39 | ghc-dev/Rachel-Christian | https://api.github.com/repos/ghc-dev/Rachel-Christian | opened | CVE-2019-16869 (High) detected in netty-codec-http-4.1.39.Final.jar | security vulnerability | ## CVE-2019-16869 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: Rachel-Christian/build.gradle</p>
<p>Path to vulnerable library: ches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Rachel-Christian/commit/b737b027c17d2099f7597b2b0401681337cf2af5">b737b027c17d2099f7597b2b0401681337cf2af5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a "Transfer-Encoding : chunked" line), which leads to HTTP request smuggling.
<p>Publish Date: 2019-09-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869>CVE-2019-16869</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16869","vulnerabilityDetails":"Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a \"Transfer-Encoding : chunked\" line), which leads to HTTP request smuggling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-16869 (High) detected in netty-codec-http-4.1.39.Final.jar - ## CVE-2019-16869 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-codec-http-4.1.39.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p>
<p>Path to dependency file: Rachel-Christian/build.gradle</p>
<p>Path to vulnerable library: ches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.39.Final/732d06961162e27fa3ae5989541c4460853745d3/netty-codec-http-4.1.39.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-codec-http-4.1.39.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Rachel-Christian/commit/b737b027c17d2099f7597b2b0401681337cf2af5">b737b027c17d2099f7597b2b0401681337cf2af5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a "Transfer-Encoding : chunked" line), which leads to HTTP request smuggling.
<p>Publish Date: 2019-09-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869>CVE-2019-16869</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16869</a></p>
<p>Release Date: 2019-09-26</p>
<p>Fix Resolution: io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-codec-http","packageVersion":"4.1.39.Final","packageFilePaths":["/build.gradle"],"isTransitiveDependency":false,"dependencyTree":"io.netty:netty-codec-http:4.1.39.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all:4.1.42.Final,io.netty:netty-codec-http:4.1.42.Final"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-16869","vulnerabilityDetails":"Netty before 4.1.42.Final mishandles whitespace before the colon in HTTP headers (such as a \"Transfer-Encoding : chunked\" line), which leads to HTTP request smuggling.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16869","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in netty codec http final jar cve high severity vulnerability vulnerable library netty codec http final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file rachel christian build gradle path to vulnerable library ches modules files io netty netty codec http final netty codec http final jar dependency hierarchy x netty codec http final jar vulnerable library found in head commit a href found in base branch master vulnerability details netty before final mishandles whitespace before the colon in http headers such as a transfer encoding chunked line which leads to http request smuggling publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all final io netty netty codec http final rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree io netty netty codec http final isminimumfixversionavailable true minimumfixversion io netty netty all final io netty netty codec http final basebranches vulnerabilityidentifier cve vulnerabilitydetails netty before final mishandles whitespace before the colon in http headers such as a transfer encoding chunked line which leads to http request smuggling vulnerabilityurl | 0 |
28,174 | 23,070,209,336 | IssuesEvent | 2022-07-25 17:17:50 | Zilliqa/scilla | https://api.github.com/repos/Zilliqa/scilla | closed | `make gold` should not update JSON gold files if it's only whitespace change | infrastructure tests | The Yojson library's pretty-printer is not stable across different versions of the library. We should check that the new JSON AST is actually different before updating the "gold" files, otherwise `make gold` may produce too many unrelated changes. | 1.0 | `make gold` should not update JSON gold files if it's only whitespace change - The Yojson library's pretty-printer is not stable across different versions of the library. We should check that the new JSON AST is actually different before updating the "gold" files, otherwise `make gold` may produce too many unrelated changes. | infrastructure | make gold should not update json gold files if it s only whitespace change the yojson library s pretty printer is not stable across different versions of the library we should check that the new json ast is actually different before updating the gold files otherwise make gold may produce too many unrelated changes | 1 |
15,940 | 11,778,018,289 | IssuesEvent | 2020-03-16 15:40:53 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Debug issue of aspnetcore source on VS2019 | area-infrastructure | I was able to successfully build aspnetcore code base by following the instructions at (https://github.com/dotnet/aspnetcore/blob/master/docs/BuildFromSource.md). But when I run any of the sample projects in debug mode and set a break point, none of the breakpoints are being hit and I get "The application is in Break mode" message.( Screenshot attached).
I want to point out that I am **not** referring to stepping into aspnetcore code from my application using source linking and loading debug symbols as mentioned here
https://www.stevejgordon.co.uk/debugging-asp-net-core-2-source
I am struggling to debug the actual aspnet core source code on my local machine after i downloaded the source from git.
What am I missing?

| 1.0 | Debug issue of aspnetcore source on VS2019 - I was able to successfully build aspnetcore code base by following the instructions at (https://github.com/dotnet/aspnetcore/blob/master/docs/BuildFromSource.md). But when I run any of the sample projects in debug mode and set a break point, none of the breakpoints are being hit and I get "The application is in Break mode" message.( Screenshot attached).
I want to point out that I am **not** referring to stepping into aspnetcore code from my application using source linking and loading debug symbols as mentioned here
https://www.stevejgordon.co.uk/debugging-asp-net-core-2-source
I am struggling to debug the actual aspnet core source code on my local machine after i downloaded the source from git.
What am I missing?

| infrastructure | debug issue of aspnetcore source on i was able to successfully build aspnetcore code base by following the instructions at but when i run any of the sample projects in debug mode and set a break point none of the breakpoints are being hit and i get the application is in break mode message screenshot attached i want to point out that i am not referring to stepping into aspnetcore code from my application using source linking and loading debug symbols as mentioned here i am struggling to debug the actual aspnet core source code on my local machine after i downloaded the source from git what am i missing | 1 |
4,848 | 5,294,623,079 | IssuesEvent | 2017-02-09 11:21:08 | twingly/twingly-search-api-ruby | https://api.github.com/repos/twingly/twingly-search-api-ruby | opened | Make Travis test gem installation | enhancement infrastructure small | Seen on https://github.com/sickill/rainbow/commit/30904393e61844304abc0f9f00226497fc1b8201
```yml
matrix:
include:
- rvm: 2.0.0
...
- rvm: 2.2.6
install: true # This skips 'bundle install'
script: gem build rainbow && gem install *.gem
``` | 1.0 | Make Travis test gem installation - Seen on https://github.com/sickill/rainbow/commit/30904393e61844304abc0f9f00226497fc1b8201
```yml
matrix:
include:
- rvm: 2.0.0
...
- rvm: 2.2.6
install: true # This skips 'bundle install'
script: gem build rainbow && gem install *.gem
``` | infrastructure | make travis test gem installation seen on yml matrix include rvm rvm install true this skips bundle install script gem build rainbow gem install gem | 1 |
81,019 | 15,597,889,913 | IssuesEvent | 2021-03-18 17:26:09 | MicrosoftDocs/microsoft-365-docs | https://api.github.com/repos/MicrosoftDocs/microsoft-365-docs | closed | Update versions information for Outlook client | security |
The information on the docs site and in the M365 admin center, "Deploy a new add-in" > "Choose from the Store" pages contradicts. At the link below, it shows that the "Report Phishing" button is only compatible with Outlook 2016 for Mac, but the Store shows "Outlook 2016 or later on Mac".
https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/enable-the-report-phish-add-in?view=o365-worldwide#what-do-you-need-to-know-before-you-begin

---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 214e9296-0909-0946-57b0-dca5f22ee854
* Version Independent ID: 689f3c32-eeb9-9c60-5d7c-78ebd6dcb165
* Content: [Enable the Report Phish add-in - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/enable-the-report-phish-add-in?view=o365-worldwide#get-and-enable-the-report-phishing-add-in-for-your-organization)
* Content Source: [microsoft-365/security/office-365-security/enable-the-report-phish-add-in.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/enable-the-report-phish-add-in.md)
* Service: **o365-seccomp**
* GitHub Login: @siosulli
* Microsoft Alias: **siosulli** | True | Update versions information for Outlook client -
The information on the docs site and in the M365 admin center, "Deploy a new add-in" > "Choose from the Store" pages contradicts. At the link below, it shows that the "Report Phishing" button is only compatible with Outlook 2016 for Mac, but the Store shows "Outlook 2016 or later on Mac".
https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/enable-the-report-phish-add-in?view=o365-worldwide#what-do-you-need-to-know-before-you-begin

---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 214e9296-0909-0946-57b0-dca5f22ee854
* Version Independent ID: 689f3c32-eeb9-9c60-5d7c-78ebd6dcb165
* Content: [Enable the Report Phish add-in - Office 365](https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/enable-the-report-phish-add-in?view=o365-worldwide#get-and-enable-the-report-phishing-add-in-for-your-organization)
* Content Source: [microsoft-365/security/office-365-security/enable-the-report-phish-add-in.md](https://github.com/MicrosoftDocs/microsoft-365-docs/blob/public/microsoft-365/security/office-365-security/enable-the-report-phish-add-in.md)
* Service: **o365-seccomp**
* GitHub Login: @siosulli
* Microsoft Alias: **siosulli** | non_infrastructure | update versions information for outlook client the information on the docs site and in the admin center deploy a new add in choose from the store pages contradicts at the link below it shows that the report phishing button is only compatible with outlook for mac but the store shows outlook or later on mac document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service seccomp github login siosulli microsoft alias siosulli | 0 |
47,732 | 6,062,897,010 | IssuesEvent | 2017-06-14 10:34:13 | owncloud/client | https://api.github.com/repos/owncloud/client | opened | [Sharing Dialog] Include "Show file listing" checkbox on public link section on folders | Design & UX Enhancement Sharing | See: https://github.com/owncloud/core/pull/27548 - `permissions` is set to `4` (shouldn't it be `6`: `CREATE` & `UPDATE` instead? cc/ @PVince81)
- [ ] Show file listing checkbox: https://github.com/owncloud/core/pull/27548#issuecomment-299473915
- [ ] Do not display the "Direct download" option on non-listed directories: https://github.com/owncloud/core/pull/27548#issuecomment-300431656 | 1.0 | [Sharing Dialog] Include "Show file listing" checkbox on public link section on folders - See: https://github.com/owncloud/core/pull/27548 - `permissions` is set to `4` (shouldn't it be `6`: `CREATE` & `UPDATE` instead? cc/ @PVince81)
- [ ] Show file listing checkbox: https://github.com/owncloud/core/pull/27548#issuecomment-299473915
- [ ] Do not display the "Direct download" option on non-listed directories: https://github.com/owncloud/core/pull/27548#issuecomment-300431656 | non_infrastructure | include show file listing checkbox on public link section on folders see permissions is set to shouldn t it be create update instead cc show file listing checkbox do not display the direct download option on non listed directories | 0 |
17,206 | 12,247,906,652 | IssuesEvent | 2020-05-05 16:36:48 | enarx/enarx | https://api.github.com/repos/enarx/enarx | opened | Set the log level to 'error' for our cargo-make 'deny' task | good first issue infrastructure | One of of the side effects of emulating the workspace with `cargo-make` is that `cargo-deny` helpfully points out that some of the crates it is looking at don't contain all of the licenses listed in `deny.toml`. These are spurious and tend to clutter the CI output. The `deny.toml` is correct _overall_ throughout our emulated workspace but `cargo-deny` looks at each crate in a vacuum with `deny.toml`.
For example:
```
[cargo-make][1] INFO - Execute Command: "cargo" "deny" "check" "licenses"
warning: license was not encountered
┌── /__w/enarx/enarx/deny.toml:13:5 ───
│
13 │ "BSD-3-Clause",
│ ^^^^^^^^^^^^^^ no crate used this license
│
warning: license was not encountered
┌── /__w/enarx/enarx/deny.toml:11:5 ───
│
11 │ "MIT",
│ ^^^^^ no crate used this license
│
```
Check out `cargo deny --help` to see how to set the log level. Update the `deny` task in the `Makefile.toml` to supply the argument. | 1.0 | Set the log level to 'error' for our cargo-make 'deny' task - One of of the side effects of emulating the workspace with `cargo-make` is that `cargo-deny` helpfully points out that some of the crates it is looking at don't contain all of the licenses listed in `deny.toml`. These are spurious and tend to clutter the CI output. The `deny.toml` is correct _overall_ throughout our emulated workspace but `cargo-deny` looks at each crate in a vacuum with `deny.toml`.
For example:
```
[cargo-make][1] INFO - Execute Command: "cargo" "deny" "check" "licenses"
warning: license was not encountered
┌── /__w/enarx/enarx/deny.toml:13:5 ───
│
13 │ "BSD-3-Clause",
│ ^^^^^^^^^^^^^^ no crate used this license
│
warning: license was not encountered
┌── /__w/enarx/enarx/deny.toml:11:5 ───
│
11 │ "MIT",
│ ^^^^^ no crate used this license
│
```
Check out `cargo deny --help` to see how to set the log level. Update the `deny` task in the `Makefile.toml` to supply the argument. | infrastructure | set the log level to error for our cargo make deny task one of of the side effects of emulating the workspace with cargo make is that cargo deny helpfully points out that some of the crates it is looking at don t contain all of the licenses listed in deny toml these are spurious and tend to clutter the ci output the deny toml is correct overall throughout our emulated workspace but cargo deny looks at each crate in a vacuum with deny toml for example info execute command cargo deny check licenses warning license was not encountered ┌── w enarx enarx deny toml ─── │ │ bsd clause │ no crate used this license │ warning license was not encountered ┌── w enarx enarx deny toml ─── │ │ mit │ no crate used this license │ check out cargo deny help to see how to set the log level update the deny task in the makefile toml to supply the argument | 1 |
256,595 | 19,429,271,442 | IssuesEvent | 2021-12-21 10:02:02 | swagger-api/swagger-js | https://api.github.com/repos/swagger-api/swagger-js | closed | Feature Request: Ability to abort tagged operations | type: feature cat: documentation version: 3.x | Unsure if this is a closer fit for a bug or feature, as the direct `SwaggerClient.http.http()` method [is documented to support](https://github.com/swagger-api/swagger-js/blob/22da4ad9bbe9ea742ad8a20b15f7c10160bb3043/docs/usage/http-client.md#browser) [AbortController ](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) signals.
### Content & configuration
Swagger/OpenAPI definition:
https://petstore3.swagger.io/api/v3/openapi.json
Swagger-Client usage:
```html
<html>
<head>
<script src="https://unpkg.com/swagger-client"></script>
<script>
const controller = new AbortController();
const { signal } = controller;
const timeout = setTimeout(() => {
controller.abort();
}, 1);
(async () => {
try {
const client = await SwaggerClient({ url: 'https://petstore3.swagger.io/api/v3/openapi.json' })
await client.apis.pet.getPetById({ petId: 1 }, { signal });
} catch (error) {
if (error.name === 'AbortError') {
console.info('request was aborted');
return;
}
} finally {
clearTimeout(timeout);
}
console.error('request was NOT aborted');
})();
</script>
</head>
<body>
check console in browser's dev. tools
</body>
</html>
```
### Is your feature request related to a problem?
I would like the ability to cancel requests made by the `client.apis.{tag}.{operationId}` API. This feature is present with `SwaggerClient.http()` but does not work with tagged operations. This would allow me as a to ensure that race conditions do not occur, by using a native browser feature.
### Describe the solution you'd like
I would like to be able to supply an `AbortController` signal (and ideally any other init parameters to [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch#parameters)) such that I can cancel requests.
### Describe alternatives you've considered
A workaround for this problem could be achieved by using request/response interceptors or more complex flow control around the usage of the client - but I have not explored these in depth.
### Additional context
AbortController[ is documented to be usable](https://github.com/swagger-api/swagger-js/blob/22da4ad9bbe9ea742ad8a20b15f7c10160bb3043/docs/usage/http-client.md#browser) with `SwaggerClient.http()` and works as documented. This same parameter does not work with tagged API operations, possibly because buildRequest [discards aditional init params](https://github.com/swagger-api/swagger-js/blob/b978355cceacae72fdc3c6fc27365f1589a1cef9/src/execute/oas3/build-request.js#L8).
| 1.0 | Feature Request: Ability to abort tagged operations - Unsure if this is a closer fit for a bug or feature, as the direct `SwaggerClient.http.http()` method [is documented to support](https://github.com/swagger-api/swagger-js/blob/22da4ad9bbe9ea742ad8a20b15f7c10160bb3043/docs/usage/http-client.md#browser) [AbortController ](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) signals.
### Content & configuration
Swagger/OpenAPI definition:
https://petstore3.swagger.io/api/v3/openapi.json
Swagger-Client usage:
```html
<html>
<head>
<script src="https://unpkg.com/swagger-client"></script>
<script>
const controller = new AbortController();
const { signal } = controller;
const timeout = setTimeout(() => {
controller.abort();
}, 1);
(async () => {
try {
const client = await SwaggerClient({ url: 'https://petstore3.swagger.io/api/v3/openapi.json' })
await client.apis.pet.getPetById({ petId: 1 }, { signal });
} catch (error) {
if (error.name === 'AbortError') {
console.info('request was aborted');
return;
}
} finally {
clearTimeout(timeout);
}
console.error('request was NOT aborted');
})();
</script>
</head>
<body>
check console in browser's dev. tools
</body>
</html>
```
### Is your feature request related to a problem?
I would like the ability to cancel requests made by the `client.apis.{tag}.{operationId}` API. This feature is present with `SwaggerClient.http()` but does not work with tagged operations. This would allow me as a to ensure that race conditions do not occur, by using a native browser feature.
### Describe the solution you'd like
I would like to be able to supply an `AbortController` signal (and ideally any other init parameters to [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch#parameters)) such that I can cancel requests.
### Describe alternatives you've considered
A workaround for this problem could be achieved by using request/response interceptors or more complex flow control around the usage of the client - but I have not explored these in depth.
### Additional context
AbortController[ is documented to be usable](https://github.com/swagger-api/swagger-js/blob/22da4ad9bbe9ea742ad8a20b15f7c10160bb3043/docs/usage/http-client.md#browser) with `SwaggerClient.http()` and works as documented. This same parameter does not work with tagged API operations, possibly because buildRequest [discards aditional init params](https://github.com/swagger-api/swagger-js/blob/b978355cceacae72fdc3c6fc27365f1589a1cef9/src/execute/oas3/build-request.js#L8).
| non_infrastructure | feature request ability to abort tagged operations unsure if this is a closer fit for a bug or feature as the direct swaggerclient http http method signals content configuration swagger openapi definition swagger client usage html script src const controller new abortcontroller const signal controller const timeout settimeout controller abort async try const client await swaggerclient url await client apis pet getpetbyid petid signal catch error if error name aborterror console info request was aborted return finally cleartimeout timeout console error request was not aborted check console in browser s dev tools is your feature request related to a problem i would like the ability to cancel requests made by the client apis tag operationid api this feature is present with swaggerclient http but does not work with tagged operations this would allow me as a to ensure that race conditions do not occur by using a native browser feature describe the solution you d like i would like to be able to supply an abortcontroller signal and ideally any other init parameters to such that i can cancel requests describe alternatives you ve considered a workaround for this problem could be achieved by using request response interceptors or more complex flow control around the usage of the client but i have not explored these in depth additional context abortcontroller with swaggerclient http and works as documented this same parameter does not work with tagged api operations possibly because buildrequest | 0 |
57,769 | 14,219,807,762 | IssuesEvent | 2020-11-17 13:47:01 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | closed | Modularise `userDevices` | Build | The `userDevices` portion of the state tree needs to be modularised. See the [modularised state documentation](https://github.com/Automattic/wp-calypso/blob/master/docs/modularized-state.md) for more details. | 1.0 | Modularise `userDevices` - The `userDevices` portion of the state tree needs to be modularised. See the [modularised state documentation](https://github.com/Automattic/wp-calypso/blob/master/docs/modularized-state.md) for more details. | non_infrastructure | modularise userdevices the userdevices portion of the state tree needs to be modularised see the for more details | 0 |
338,962 | 10,239,872,259 | IssuesEvent | 2019-08-19 19:21:09 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | `null/` shows in shields for origin after disabling all origins | bug feature/shields needs-investigation priority/P3 security | Discovered when reviewing https://github.com/brave/brave-extension/pull/78
## Steps to reproduce
1. Visit vox.com
2. Use shields to block script. Page refreshes
3. Open shields, scroll down to `Toggle switches to disable script blocking`
4. Toggle all the entries and hit `Apply`
5. A mysterious `null/` entry shows up
<img width="275" alt="screen shot 2018-10-29 at 9 50 45 pm" src="https://user-images.githubusercontent.com/4733304/47696594-b9147a00-dbc4-11e8-95ae-fa7dbb5110bc.png">
## Version information
Version 0.57.3 Chromium: 70.0.3538.67 (Official Build) dev (64-bit)
(macOS 10.13.6) | 1.0 | `null/` shows in shields for origin after disabling all origins - Discovered when reviewing https://github.com/brave/brave-extension/pull/78
## Steps to reproduce
1. Visit vox.com
2. Use shields to block script. Page refreshes
3. Open shields, scroll down to `Toggle switches to disable script blocking`
4. Toggle all the entries and hit `Apply`
5. A mysterious `null/` entry shows up
<img width="275" alt="screen shot 2018-10-29 at 9 50 45 pm" src="https://user-images.githubusercontent.com/4733304/47696594-b9147a00-dbc4-11e8-95ae-fa7dbb5110bc.png">
## Version information
Version 0.57.3 Chromium: 70.0.3538.67 (Official Build) dev (64-bit)
(macOS 10.13.6) | non_infrastructure | null shows in shields for origin after disabling all origins discovered when reviewing steps to reproduce visit vox com use shields to block script page refreshes open shields scroll down to toggle switches to disable script blocking toggle all the entries and hit apply a mysterious null entry shows up img width alt screen shot at pm src version information version chromium official build dev bit macos | 0 |
14,700 | 11,053,879,471 | IssuesEvent | 2019-12-10 12:21:31 | aarhusstadsarkiv/digiarch | https://api.github.com/repos/aarhusstadsarkiv/digiarch | opened | Move to Github actions? | infrastructure | <!--
Hi! :)
If applicable, please link guides/articles when submitting infrastructure issues.
The markdown syntax for adding links to text is `[text](url)`
-->
It'd be nice if we could have everything in one place :) https://github.com/aarhusstadsarkiv/digiarch/actions/ | 1.0 | Move to Github actions? - <!--
Hi! :)
If applicable, please link guides/articles when submitting infrastructure issues.
The markdown syntax for adding links to text is `[text](url)`
-->
It'd be nice if we could have everything in one place :) https://github.com/aarhusstadsarkiv/digiarch/actions/ | infrastructure | move to github actions hi if applicable please link guides articles when submitting infrastructure issues the markdown syntax for adding links to text is url it d be nice if we could have everything in one place | 1 |
3,984 | 4,750,767,432 | IssuesEvent | 2016-10-22 14:32:52 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Build race condition | Area-Infrastructure Bug Urgency-Now | There is a race condition in our build that is causing it to fail with MSB3277 errors.
``` txt
error MSB3277: Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed. [D:\j\workspace\windows_relea---78ad37e5\src\VisualStudio\Core\SolutionExplorerShim\SolutionExplorerShim.csproj]
```
In the last 24 hours we've had three failures:
> http://jdash.azurewebsites.net/builds/kind?name=Build&startDate=2016-09-20&viewName=dotnet_roslyn
The actual error from the diagnostic log is the following:
``` txt
There was a conflict between "Microsoft.VisualStudio.Text.Logic, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" and "Microsoft.VisualStudio.Text.Logic, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a". (TaskId:1581)
"Microsoft.VisualStudio.Text.Logic, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" was chosen because it was primary and "Microsoft.VisualStudio.Text.Logic, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" was not. (TaskId:1581)
```
The problem appears to be that MSBuild is considering `Binaries\$(Configuration)` when attempting te resolve the `Microsoft.VisualStudio.Text.Logic` reference. An earlier build component is placing the binary in that directory. Hence it becomes a race between the component copying it to the output directory and when SolutionExplorerShim builds.
I've labeled this as `urgency-now` because it affects the correctness of the produced DLLs.
| 1.0 | Build race condition - There is a race condition in our build that is causing it to fail with MSB3277 errors.
``` txt
error MSB3277: Found conflicts between different versions of the same dependent assembly that could not be resolved. These reference conflicts are listed in the build log when log verbosity is set to detailed. [D:\j\workspace\windows_relea---78ad37e5\src\VisualStudio\Core\SolutionExplorerShim\SolutionExplorerShim.csproj]
```
In the last 24 hours we've had three failures:
> http://jdash.azurewebsites.net/builds/kind?name=Build&startDate=2016-09-20&viewName=dotnet_roslyn
The actual error from the diagnostic log is the following:
``` txt
There was a conflict between "Microsoft.VisualStudio.Text.Logic, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" and "Microsoft.VisualStudio.Text.Logic, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a". (TaskId:1581)
"Microsoft.VisualStudio.Text.Logic, Version=14.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" was chosen because it was primary and "Microsoft.VisualStudio.Text.Logic, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" was not. (TaskId:1581)
```
The problem appears to be that MSBuild is considering `Binaries\$(Configuration)` when attempting te resolve the `Microsoft.VisualStudio.Text.Logic` reference. An earlier build component is placing the binary in that directory. Hence it becomes a race between the component copying it to the output directory and when SolutionExplorerShim builds.
I've labeled this as `urgency-now` because it affects the correctness of the produced DLLs.
| infrastructure | build race condition there is a race condition in our build that is causing it to fail with errors txt error found conflicts between different versions of the same dependent assembly that could not be resolved these reference conflicts are listed in the build log when log verbosity is set to detailed in the last hours we ve had three failures the actual error from the diagnostic log is the following txt there was a conflict between microsoft visualstudio text logic version culture neutral publickeytoken and microsoft visualstudio text logic version culture neutral publickeytoken taskid microsoft visualstudio text logic version culture neutral publickeytoken was chosen because it was primary and microsoft visualstudio text logic version culture neutral publickeytoken was not taskid the problem appears to be that msbuild is considering binaries configuration when attempting te resolve the microsoft visualstudio text logic reference an earlier build component is placing the binary in that directory hence it becomes a race between the component copying it to the output directory and when solutionexplorershim builds i ve labeled this as urgency now because it affects the correctness of the produced dlls | 1 |
177,093 | 14,615,498,792 | IssuesEvent | 2020-12-22 11:38:42 | markmuetz/remake | https://api.github.com/repos/markmuetz/remake | opened | Documentation | documentation | Basic documentation on how to use remake. Upload to doc hosting site:
- installation
- quickstart
- running
- CLI
- python API | 1.0 | Documentation - Basic documentation on how to use remake. Upload to doc hosting site:
- installation
- quickstart
- running
- CLI
- python API | non_infrastructure | documentation basic documentation on how to use remake upload to doc hosting site installation quickstart running cli python api | 0 |
819,020 | 30,716,361,329 | IssuesEvent | 2023-07-27 13:16:58 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | opened | run.jobs.e2e_test: test_end_to_end failed | priority: p1 type: bug flakybot: issue | Note: #8465 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: b22d7fe3b5e7347d76a94dd57addbc2de239a368
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/6436a0f2-8a33-4399-a2a8-d2d0a584e899), [Sponge](http://sponge2/6436a0f2-8a33-4399-a2a8-d2d0a584e899)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/grpc/_channel.py", line 1030, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-27T13:15:59.220442523+00:00", grpc_status:4}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 191, in retry_target
return target()
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/run/jobs/e2e_test.py", line 96, in test_end_to_end
iterator = client.list_log_entries(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/logging_v2/services/logging_service_v2/client.py", line 875, in list_log_entries
response = rpc(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
return retry_target(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 207, in retry_target
raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling target function, last exception: 504 Deadline Exceeded</pre></details> | 1.0 | run.jobs.e2e_test: test_end_to_end failed - Note: #8465 was also for this test, but it was closed more than 10 days ago. So, I didn't mark it flaky.
----
commit: b22d7fe3b5e7347d76a94dd57addbc2de239a368
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/6436a0f2-8a33-4399-a2a8-d2d0a584e899), [Sponge](http://sponge2/6436a0f2-8a33-4399-a2a8-d2d0a584e899)
status: failed
<details><summary>Test output</summary><br><pre>Traceback (most recent call last):
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/grpc/_channel.py", line 1030, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "UNKNOWN:Deadline Exceeded {created_time:"2023-07-27T13:15:59.220442523+00:00", grpc_status:4}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 191, in retry_target
return target()
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/timeout.py", line 120, in func_with_timeout
return func(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/run/jobs/e2e_test.py", line 96, in test_end_to_end
iterator = client.list_log_entries(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/cloud/logging_v2/services/logging_service_v2/client.py", line 875, in list_log_entries
response = rpc(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 349, in retry_wrapped_func
return retry_target(
File "/workspace/run/jobs/.nox/py-3-9/lib/python3.9/site-packages/google/api_core/retry.py", line 207, in retry_target
raise exceptions.RetryError(
google.api_core.exceptions.RetryError: Deadline of 60.0s exceeded while calling target function, last exception: 504 Deadline Exceeded</pre></details> | non_infrastructure | run jobs test test end to end failed note was also for this test but it was closed more than days ago so i didn t mark it flaky commit buildurl status failed test output traceback most recent call last file workspace run jobs nox py lib site packages google api core grpc helpers py line in error remapped callable return callable args kwargs file workspace run jobs nox py lib site packages grpc channel py line in call return end unary response blocking state call false none file workspace run jobs nox py lib site packages grpc channel py line in end unary response blocking raise inactiverpcerror state pytype disable not instantiable grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with status statuscode deadline exceeded details deadline exceeded debug error string unknown deadline exceeded created time grpc status the above exception was the direct cause of the following exception traceback most recent call last file workspace run jobs nox py lib site packages google api core retry py line in retry target return target file workspace run jobs nox py lib site packages google api core timeout py line in func with timeout return func args kwargs file workspace run jobs nox py lib site packages google api core grpc helpers py line in error remapped callable raise exceptions from grpc error exc from exc google api core exceptions deadlineexceeded deadline exceeded the above exception was the direct cause of the following exception traceback most recent call last file workspace run jobs test py line in test end to end iterator client list log entries file workspace run jobs nox py lib site packages google cloud logging services logging service client py line in list log entries response rpc file workspace run jobs nox py lib site packages google api core gapic method py line in call return wrapped func args kwargs file workspace run jobs nox py lib site packages google api core retry py line in retry wrapped func return retry target file workspace run jobs nox py lib site packages google api core retry py line in retry target raise exceptions retryerror google api core exceptions retryerror deadline of exceeded while calling target function last exception deadline exceeded | 0 |
24,830 | 17,799,631,664 | IssuesEvent | 2021-09-01 05:26:54 | zkldi/tachi-server | https://api.github.com/repos/zkldi/tachi-server | closed | Tests sometimes catastrophically fail | High Priority Gamma Ray Infrastructure Difficult | Very important and related to the single-process-tap hack. It's unacceptable for the entire testing suite to fail at random intervals, so uh, lets look into this. | 1.0 | Tests sometimes catastrophically fail - Very important and related to the single-process-tap hack. It's unacceptable for the entire testing suite to fail at random intervals, so uh, lets look into this. | infrastructure | tests sometimes catastrophically fail very important and related to the single process tap hack it s unacceptable for the entire testing suite to fail at random intervals so uh lets look into this | 1 |
14,609 | 17,791,966,365 | IssuesEvent | 2021-08-31 17:14:58 | OneSignal/OneSignal-Flutter-SDK | https://api.github.com/repos/OneSignal/OneSignal-Flutter-SDK | closed | Building IOS app results in error (when using Firebase) | Bug: Not OneSignal Compatibility Issue | **TL;DR for the ones who just got here**
In order to fix this add the firebase pods from your xcode error to the onesignal target, as a workaround
**Description:**
Running `flutter build ios` throws an error (see bottom of issue)
EDIT: Changing to ^2.3.1 prints the second error
EDIT 2: It turned out that the OneSignal target also tries to compile with the Firebase pods, adding them to the target worked as a workaround
**Environment**
OneSignal flutter: `^2.0.0 (pub.dev)`
**Steps to Reproduce Issue:**
1. Follow the setup tutorial https://documentation.onesignal.com/docs/flutter-sdk-setup
1. Run `flutter build ios` or `flutter run` with an iOS device connected
**Anything else:**
```
2019-12-09 16:13:56.784 xcodebuild[2261:23606] DTDeviceKit: deviceType from f5a636663df701faeeb802603d17d40e58668839 was NULL
2019-12-09 16:13:56.934 xcodebuild[2261:23801] DTDeviceKit: deviceType from f5a636663df701faeeb802603d17d40e58668839 was NULL
** BUILD FAILED **
Xcode's output:
↳
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: error: duplicate protocol definition of 'GDTLifecycleProtocol' is ignored [-Werror,-Wduplicate-protocol]
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: previous definition is here
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:59:1: error: duplicate interface definition for class 'GDTLifecycle'
@interface GDTLifecycle : NSObject <GDTApplicationDelegate>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:59:12: note: previous definition is here
@interface GDTLifecycle : NSObject <GDTApplicationDelegate>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:51:37: error: reference to 'GDTLifecycleProtocol' is ambiguous
@protocol GDTPrioritizer <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:51:37: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@protocol GDTPrioritizer <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:21:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTUploader.h:28:34: error: reference to 'GDTLifecycleProtocol' is ambiguous
@protocol GDTUploader <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:21:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTUploader.h:28:34: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@protocol GDTUploader <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:26:37: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTRegistrar : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:26:37: error: no type or protocol named 'GDTLifecycleProtocol'
@interface GDTRegistrar : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:22:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage.h:27:51: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTStorage : NSObject <NSSecureCoding, GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:22:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage.h:27:51: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@interface GDTStorage : NSObject <NSSecureCoding, GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:23:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer.h:33:39: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTTransformer : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:23:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer.h:33:39: error: no type or protocol named 'GDTLifecycleProtocol'
@interface GDTTransformer : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:24:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTUploadCoordinator.h:33:33: error: reference to 'GDTLifecycleProtocol' is ambiguous
: NSObject <NSSecureCoding, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:24:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTUploadCoordinator.h:33:33: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
: NSObject <NSSecureCoding, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:74:38: error: no visible @interface for 'GDTTransformer' declares the selector 'appWillBackground:'
[[GDTTransformer sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:77:34: error: no visible @interface for 'GDTStorage' declares the selector 'appWillBackground:'
[[GDTStorage sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:80:44: error: no visible @interface for 'GDTUploadCoordinator' declares the selector 'appWillBackground:'
[[GDTUploadCoordinator sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:83:36: error: no visible @interface for 'GDTRegistrar' declares the selector 'appWillBackground:'
[[GDTRegistrar sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:90:38: error: no visible @interface for 'GDTTransformer' declares the selector 'appWillForeground:'
[[GDTTransformer sharedInstance] appWillForeground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Debug-iphoneos/onesignal_flutter'
ld: framework not found onesignal_flutter
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Could not build the precompiled application for the device.
Error launching application on iPhone von Link.```
```Text
Xcode build done. 18,5s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseAuth'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseCore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseCoreDiagnostics'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseFirestore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseInstanceID'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GTMSessionFetcher'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GoogleDataTransportCCTSupport'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GoogleUtilities'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/abseil'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/android_intent'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/cloud_firestore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/devicelocale'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_analytics'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_auth'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_core'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/flutter_inappbrowser'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/flutter_nfc_reader'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/fluttertoast'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/gRPC-C++'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/gRPC-Core'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/leveldb-library'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/onesignal_flutter'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/shared_preferences'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/url_launcher'
ld: framework not found FirebaseAuth
clang: error: linker command failed with exit code 1 (use -v to see invocation)```
| True | Building IOS app results in error (when using Firebase) - **TL;DR for the ones who just got here**
In order to fix this add the firebase pods from your xcode error to the onesignal target, as a workaround
**Description:**
Running `flutter build ios` throws an error (see bottom of issue)
EDIT: Changing to ^2.3.1 prints the second error
EDIT 2: It turned out that the OneSignal target also tries to compile with the Firebase pods, adding them to the target worked as a workaround
**Environment**
OneSignal flutter: `^2.0.0 (pub.dev)`
**Steps to Reproduce Issue:**
1. Follow the setup tutorial https://documentation.onesignal.com/docs/flutter-sdk-setup
1. Run `flutter build ios` or `flutter run` with an iOS device connected
**Anything else:**
```
2019-12-09 16:13:56.784 xcodebuild[2261:23606] DTDeviceKit: deviceType from f5a636663df701faeeb802603d17d40e58668839 was NULL
2019-12-09 16:13:56.934 xcodebuild[2261:23801] DTDeviceKit: deviceType from f5a636663df701faeeb802603d17d40e58668839 was NULL
** BUILD FAILED **
Xcode's output:
↳
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: error: duplicate protocol definition of 'GDTLifecycleProtocol' is ignored [-Werror,-Wduplicate-protocol]
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: previous definition is here
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:59:1: error: duplicate interface definition for class 'GDTLifecycle'
@interface GDTLifecycle : NSObject <GDTApplicationDelegate>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:59:12: note: previous definition is here
@interface GDTLifecycle : NSObject <GDTApplicationDelegate>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:51:37: error: reference to 'GDTLifecycleProtocol' is ambiguous
@protocol GDTPrioritizer <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:51:37: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@protocol GDTPrioritizer <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:21:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTUploader.h:28:34: error: reference to 'GDTLifecycleProtocol' is ambiguous
@protocol GDTUploader <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:21:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTUploader.h:28:34: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@protocol GDTUploader <NSObject, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:26:37: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTRegistrar : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:26:37: error: no type or protocol named 'GDTLifecycleProtocol'
@interface GDTRegistrar : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:22:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage.h:27:51: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTStorage : NSObject <NSSecureCoding, GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:22:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTStorage.h:27:51: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
@interface GDTStorage : NSObject <NSSecureCoding, GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:23:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer.h:33:39: error: reference to 'GDTLifecycleProtocol' is ambiguous
@interface GDTTransformer : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:23:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer_Private.h:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTTransformer.h:33:39: error: no type or protocol named 'GDTLifecycleProtocol'
@interface GDTTransformer : NSObject <GDTLifecycleProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:24:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTUploadCoordinator.h:33:33: error: reference to 'GDTLifecycleProtocol' is ambiguous
: NSObject <NSSecureCoding, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:21:
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTRegistrar_Private.h:17:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTRegistrar.h:19:
In file included from /Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTPrioritizer.h:19:
/Users/User/appname/build/ios/Debug-iphoneos/GoogleDataTransport/GoogleDataTransport.framework/Headers/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:17:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Public/GDTLifecycle.h:26:11: note: candidate found by name lookup is 'GDTLifecycleProtocol'
@protocol GDTLifecycleProtocol <NSObject>
^
In file included from /Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:24:
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/Private/GDTUploadCoordinator.h:33:33: error: cannot find protocol declaration for 'GDTLifecycleProtocol'
: NSObject <NSSecureCoding, GDTLifecycleProtocol, GDTUploadPackageProtocol>
^
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:74:38: error: no visible @interface for 'GDTTransformer' declares the selector 'appWillBackground:'
[[GDTTransformer sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:77:34: error: no visible @interface for 'GDTStorage' declares the selector 'appWillBackground:'
[[GDTStorage sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:80:44: error: no visible @interface for 'GDTUploadCoordinator' declares the selector 'appWillBackground:'
[[GDTUploadCoordinator sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:83:36: error: no visible @interface for 'GDTRegistrar' declares the selector 'appWillBackground:'
[[GDTRegistrar sharedInstance] appWillBackground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
/Users/User/appname/ios/Pods/GoogleDataTransport/GoogleDataTransport/GDTLibrary/GDTLifecycle.m:90:38: error: no visible @interface for 'GDTTransformer' declares the selector 'appWillForeground:'
[[GDTTransformer sharedInstance] appWillForeground:application];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~~~~~~~~~~~~~~
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Debug-iphoneos/onesignal_flutter'
ld: framework not found onesignal_flutter
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Could not build the precompiled application for the device.
Error launching application on iPhone von Link.```
```Text
Xcode build done. 18,5s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseAuth'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseCore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseCoreDiagnostics'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseFirestore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/FirebaseInstanceID'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GTMSessionFetcher'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GoogleDataTransportCCTSupport'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/GoogleUtilities'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/abseil'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/android_intent'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/cloud_firestore'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/devicelocale'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_analytics'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_auth'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/firebase_core'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/flutter_inappbrowser'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/flutter_nfc_reader'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/fluttertoast'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/gRPC-C++'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/gRPC-Core'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/leveldb-library'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/onesignal_flutter'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/shared_preferences'
ld: warning: directory not found for option '-F/Users/User/appname/build/ios/Release-iphoneos/url_launcher'
ld: framework not found FirebaseAuth
clang: error: linker command failed with exit code 1 (use -v to see invocation)```
| non_infrastructure | building ios app results in error when using firebase tl dr for the ones who just got here in order to fix this add the firebase pods from your xcode error to the onesignal target as a workaround description running flutter build ios throws an error see bottom of issue edit changing to prints the second error edit it turned out that the onesignal target also tries to compile with the firebase pods adding them to the target worked as a workaround environment onesignal flutter pub dev steps to reproduce issue follow the setup tutorial run flutter build ios or flutter run with an ios device connected anything else xcodebuild dtdevicekit devicetype from was null xcodebuild dtdevicekit devicetype from was null build failed xcode s output ↳ in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h error duplicate protocol definition of gdtlifecycleprotocol is ignored protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note previous definition is here protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h error duplicate interface definition for class gdtlifecycle interface gdtlifecycle nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note previous definition is here interface gdtlifecycle nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h error reference to gdtlifecycleprotocol is ambiguous protocol gdtprioritizer in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h error cannot find protocol declaration for gdtlifecycleprotocol protocol gdtprioritizer in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtuploader h error reference to gdtlifecycleprotocol is ambiguous protocol gdtuploader in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtuploader h error cannot find protocol declaration for gdtlifecycleprotocol protocol gdtuploader in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h error reference to gdtlifecycleprotocol is ambiguous interface gdtregistrar nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h error no type or protocol named gdtlifecycleprotocol interface gdtregistrar nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtstorage private h users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtstorage h error reference to gdtlifecycleprotocol is ambiguous interface gdtstorage nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtstorage private h users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtstorage h error cannot find protocol declaration for gdtlifecycleprotocol interface gdtstorage nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdttransformer private h users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdttransformer h error reference to gdtlifecycleprotocol is ambiguous interface gdttransformer nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdttransformer private h users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdttransformer h error no type or protocol named gdtlifecycleprotocol interface gdttransformer nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtuploadcoordinator h error reference to gdtlifecycleprotocol is ambiguous nsobject in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtregistrar private h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtregistrar h in file included from users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtprioritizer h users user appname build ios debug iphoneos googledatatransport googledatatransport framework headers gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary public gdtlifecycle h note candidate found by name lookup is gdtlifecycleprotocol protocol gdtlifecycleprotocol in file included from users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m users user appname ios pods googledatatransport googledatatransport gdtlibrary private gdtuploadcoordinator h error cannot find protocol declaration for gdtlifecycleprotocol nsobject users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m error no visible interface for gdttransformer declares the selector appwillbackground appwillbackground application users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m error no visible interface for gdtstorage declares the selector appwillbackground appwillbackground application users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m error no visible interface for gdtuploadcoordinator declares the selector appwillbackground appwillbackground application users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m error no visible interface for gdtregistrar declares the selector appwillbackground appwillbackground application users user appname ios pods googledatatransport googledatatransport gdtlibrary gdtlifecycle m error no visible interface for gdttransformer declares the selector appwillforeground appwillforeground application fatal error too many errors emitted stopping now errors generated ld warning directory not found for option f users user appname build ios debug iphoneos onesignal flutter ld framework not found onesignal flutter clang error linker command failed with exit code use v to see invocation could not build the precompiled application for the device error launching application on iphone von link text xcode build done failed to build ios app error output from xcode build ↳ build failed xcode s output ↳ ld warning directory not found for option f users user appname build ios release iphoneos firebaseauth ld warning directory not found for option f users user appname build ios release iphoneos firebasecore ld warning directory not found for option f users user appname build ios release iphoneos firebasecorediagnostics ld warning directory not found for option f users user appname build ios release iphoneos firebasefirestore ld warning directory not found for option f users user appname build ios release iphoneos firebaseinstanceid ld warning directory not found for option f users user appname build ios release iphoneos gtmsessionfetcher ld warning directory not found for option f users user appname build ios release iphoneos googledatatransportcctsupport ld warning directory not found for option f users user appname build ios release iphoneos googleutilities ld warning directory not found for option f users user appname build ios release iphoneos abseil ld warning directory not found for option f users user appname build ios release iphoneos android intent ld warning directory not found for option f users user appname build ios release iphoneos cloud firestore ld warning directory not found for option f users user appname build ios release iphoneos devicelocale ld warning directory not found for option f users user appname build ios release iphoneos firebase analytics ld warning directory not found for option f users user appname build ios release iphoneos firebase auth ld warning directory not found for option f users user appname build ios release iphoneos firebase core ld warning directory not found for option f users user appname build ios release iphoneos flutter inappbrowser ld warning directory not found for option f users user appname build ios release iphoneos flutter nfc reader ld warning directory not found for option f users user appname build ios release iphoneos fluttertoast ld warning directory not found for option f users user appname build ios release iphoneos grpc c ld warning directory not found for option f users user appname build ios release iphoneos grpc core ld warning directory not found for option f users user appname build ios release iphoneos leveldb library ld warning directory not found for option f users user appname build ios release iphoneos onesignal flutter ld warning directory not found for option f users user appname build ios release iphoneos shared preferences ld warning directory not found for option f users user appname build ios release iphoneos url launcher ld framework not found firebaseauth clang error linker command failed with exit code use v to see invocation | 0 |
464,338 | 13,310,964,172 | IssuesEvent | 2020-08-26 07:30:49 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Mark select input as not editable has no effect on update existed entity | priority: high source: plugin:content-manager status: confirmed type: bug | **Describe the bug**
I have a collection with the field of the Relation type. After upgrading to the latest version, disabling relational fields no more works.
**Steps to reproduce the behavior**
1. Create two collections
2. Add a relational type field to one of the collections.
3. Save the new record to the collection with the relational field.
3. Go to 'Configure the view' in Content-Manager plugin.
4. Select a relational field and mark it not editable.
**Expected behavior**
The data in the relational field is unable to change.
**System**
- Node.js version: 12.18.2
- NPM version: 6.14.7
- Strapi version: 3.1.3
- Database: PostgresSQL
- Operating system: MacOS | 1.0 | Mark select input as not editable has no effect on update existed entity - **Describe the bug**
I have a collection with the field of the Relation type. After upgrading to the latest version, disabling relational fields no more works.
**Steps to reproduce the behavior**
1. Create two collections
2. Add a relational type field to one of the collections.
3. Save the new record to the collection with the relational field.
3. Go to 'Configure the view' in Content-Manager plugin.
4. Select a relational field and mark it not editable.
**Expected behavior**
The data in the relational field is unable to change.
**System**
- Node.js version: 12.18.2
- NPM version: 6.14.7
- Strapi version: 3.1.3
- Database: PostgresSQL
- Operating system: MacOS | non_infrastructure | mark select input as not editable has no effect on update existed entity describe the bug i have a collection with the field of the relation type after upgrading to the latest version disabling relational fields no more works steps to reproduce the behavior create two collections add a relational type field to one of the collections save the new record to the collection with the relational field go to configure the view in content manager plugin select a relational field and mark it not editable expected behavior the data in the relational field is unable to change system node js version npm version strapi version database postgressql operating system macos | 0 |
27,800 | 22,347,276,779 | IssuesEvent | 2022-06-15 08:55:32 | GetStream/stream-chat-react | https://api.github.com/repos/GetStream/stream-chat-react | opened | e2e test date separators using the time enabled message endpoint | infrastructure | We can populate messages across several days using this:
```js
const jwt = JWTServerToken(STREAM_SECRET)
axios.post(`https://chat.stream-io-api.com/channels/messaging/group-9527/import?api_key=${STREAM_API_KEY}`, {
messages: [{
text: 'Hello in the past 888',
user_id: 'some-user-id',
created_at: '2021-02-11T20:51:25.030Z',
}],
}, {
headers: {
'Stream-Auth-Type': 'jwt',
Authorization: jwt,
},
})
``` | 1.0 | e2e test date separators using the time enabled message endpoint - We can populate messages across several days using this:
```js
const jwt = JWTServerToken(STREAM_SECRET)
axios.post(`https://chat.stream-io-api.com/channels/messaging/group-9527/import?api_key=${STREAM_API_KEY}`, {
messages: [{
text: 'Hello in the past 888',
user_id: 'some-user-id',
created_at: '2021-02-11T20:51:25.030Z',
}],
}, {
headers: {
'Stream-Auth-Type': 'jwt',
Authorization: jwt,
},
})
``` | infrastructure | test date separators using the time enabled message endpoint we can populate messages across several days using this js const jwt jwtservertoken stream secret axios post messages text hello in the past user id some user id created at headers stream auth type jwt authorization jwt | 1 |
23,560 | 16,416,330,138 | IssuesEvent | 2021-05-19 07:15:28 | pokino-project/pokino | https://api.github.com/repos/pokino-project/pokino | closed | Setup Docker Registry | infrastructure | - user docker hub and push images
- research on best-practices with tagging etc.
- check if private registry has to be used
- write a little documentation on the wiki page | 1.0 | Setup Docker Registry - - user docker hub and push images
- research on best-practices with tagging etc.
- check if private registry has to be used
- write a little documentation on the wiki page | infrastructure | setup docker registry user docker hub and push images research on best practices with tagging etc check if private registry has to be used write a little documentation on the wiki page | 1 |
20,163 | 13,728,342,968 | IssuesEvent | 2020-10-04 11:11:57 | LearningByExample/kotlin-event-driven-petstore | https://api.github.com/repos/LearningByExample/kotlin-event-driven-petstore | closed | [FEATURE] Test infra code | domain:pet feature infrastructure | **Describe the feature**
We should have tests for the code that creates the infra.
| 1.0 | [FEATURE] Test infra code - **Describe the feature**
We should have tests for the code that creates the infra.
| infrastructure | test infra code describe the feature we should have tests for the code that creates the infra | 1 |
15,670 | 11,647,533,207 | IssuesEvent | 2020-03-01 15:41:03 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | CI OSX x64 Could not find a part of the path '.../NuGetScratch/ | area-Infrastructure-coreclr untriaged | ```
azure-pipelines
/ runtime-live-build (Build OSX x64 release Runtime_Debug)
src/libraries/Directory.Build.props#L38
src/libraries/Directory.Build.props(38,3): error : System.AggregateException: One or more errors occurred. (Could not find a part of the path '/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/NuGetScratch/ccd9d22adb6549f086362689e42245d6/9c159c0681d8495c9d48c5e0d296844d.proj.nuget.dgspec.json'.)
---> System.IO.DirectoryNotFoundException: Could not find a part of the path '/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/NuGetScratch/ccd9d22adb6549f086362689e42245d6/9c159c0681d8495c9d48c5e0d296844d.proj.nuget.dgspec.json'.
at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirectory, Func`2 errorRewriter)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String path, OpenFlags flags, Int32 mode)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.FileStream..ctor(String path, FileMode mode)
at NuGet.ProjectModel.DependencyGraphSpec.Save(String path)
at NuGet.Commands.NoOpRestoreUtilities.PersistDGSpecFile(DependencyGraphSpec spec, String dgPath, ILogger log)
at NuGet.Commands.RestoreCommand.EvaluateCacheFile()
at NuGet.Commands.RestoreCommand.ExecuteAsync(CancellationToken token)
at NuGet.Commands.RestoreRunner.ExecuteAsync(RestoreSummaryRequest summaryRequest, CancellationToken token)
at NuGet.Commands.RestoreRunner.CompleteTaskAsync(List`1 restoreTasks)
at NuGet.Commands.RestoreRunner.RunWithoutCommit(IEnumerable`1 restoreRequests, RestoreArgs restoreContext)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Microsoft.Build.NuGetSdkResolver.NuGetSdkResolver.NuGetAbstraction.GetSdkResult(SdkReference sdk, Object nuGetVersion, SdkResolverContext context, SdkResultFactory factory)
```
Seen in https://github.com/dotnet/runtime/pull/2004 | 1.0 | CI OSX x64 Could not find a part of the path '.../NuGetScratch/ - ```
azure-pipelines
/ runtime-live-build (Build OSX x64 release Runtime_Debug)
src/libraries/Directory.Build.props#L38
src/libraries/Directory.Build.props(38,3): error : System.AggregateException: One or more errors occurred. (Could not find a part of the path '/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/NuGetScratch/ccd9d22adb6549f086362689e42245d6/9c159c0681d8495c9d48c5e0d296844d.proj.nuget.dgspec.json'.)
---> System.IO.DirectoryNotFoundException: Could not find a part of the path '/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/NuGetScratch/ccd9d22adb6549f086362689e42245d6/9c159c0681d8495c9d48c5e0d296844d.proj.nuget.dgspec.json'.
at Interop.ThrowExceptionForIoErrno(ErrorInfo errorInfo, String path, Boolean isDirectory, Func`2 errorRewriter)
at Microsoft.Win32.SafeHandles.SafeFileHandle.Open(String path, OpenFlags flags, Int32 mode)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.FileStream..ctor(String path, FileMode mode)
at NuGet.ProjectModel.DependencyGraphSpec.Save(String path)
at NuGet.Commands.NoOpRestoreUtilities.PersistDGSpecFile(DependencyGraphSpec spec, String dgPath, ILogger log)
at NuGet.Commands.RestoreCommand.EvaluateCacheFile()
at NuGet.Commands.RestoreCommand.ExecuteAsync(CancellationToken token)
at NuGet.Commands.RestoreRunner.ExecuteAsync(RestoreSummaryRequest summaryRequest, CancellationToken token)
at NuGet.Commands.RestoreRunner.CompleteTaskAsync(List`1 restoreTasks)
at NuGet.Commands.RestoreRunner.RunWithoutCommit(IEnumerable`1 restoreRequests, RestoreArgs restoreContext)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Microsoft.Build.NuGetSdkResolver.NuGetSdkResolver.NuGetAbstraction.GetSdkResult(SdkReference sdk, Object nuGetVersion, SdkResolverContext context, SdkResultFactory factory)
```
Seen in https://github.com/dotnet/runtime/pull/2004 | infrastructure | ci osx could not find a part of the path nugetscratch azure pipelines runtime live build build osx release runtime debug src libraries directory build props src libraries directory build props error system aggregateexception one or more errors occurred could not find a part of the path var folders n t nugetscratch proj nuget dgspec json system io directorynotfoundexception could not find a part of the path var folders n t nugetscratch proj nuget dgspec json at interop throwexceptionforioerrno errorinfo errorinfo string path boolean isdirectory func errorrewriter at microsoft safehandles safefilehandle open string path openflags flags mode at system io filestream ctor string path filemode mode fileaccess access fileshare share buffersize fileoptions options at system io filestream ctor string path filemode mode at nuget projectmodel dependencygraphspec save string path at nuget commands nooprestoreutilities persistdgspecfile dependencygraphspec spec string dgpath ilogger log at nuget commands restorecommand evaluatecachefile at nuget commands restorecommand executeasync cancellationtoken token at nuget commands restorerunner executeasync restoresummaryrequest summaryrequest cancellationtoken token at nuget commands restorerunner completetaskasync list restoretasks at nuget commands restorerunner runwithoutcommit ienumerable restorerequests restoreargs restorecontext end of inner exception stack trace at system threading tasks task getresultcore boolean waitcompletionnotification at microsoft build nugetsdkresolver nugetsdkresolver nugetabstraction getsdkresult sdkreference sdk object nugetversion sdkresolvercontext context sdkresultfactory factory seen in | 1 |
11,299 | 9,086,436,732 | IssuesEvent | 2019-02-18 10:56:28 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | [Infra UI] Services overview page | :Infra UI :infrastructure [zube]: Backlog | Add a page to show an overview of services. It should contain
* query bar
* datetime range picker
* list of services reporting data, within the given datetime range and with the given filter query applied, if any
| 1.0 | [Infra UI] Services overview page - Add a page to show an overview of services. It should contain
* query bar
* datetime range picker
* list of services reporting data, within the given datetime range and with the given filter query applied, if any
| infrastructure | services overview page add a page to show an overview of services it should contain query bar datetime range picker list of services reporting data within the given datetime range and with the given filter query applied if any | 1 |
158,014 | 24,768,597,191 | IssuesEvent | 2022-10-22 21:21:23 | Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022 | https://api.github.com/repos/Ingressive-for-Good/I4G-OPENSOURCE-FRONTEND-PROJECT-2022 | opened | Admin Design - Admin Dashboard: Delete Product (Desktop view) | documentation enhancement hacktoberfest-accepted hacktoberfest design | - Design the desktop view of the delete product pop up on the admin home dashboard.
- The screen should display the following details:
- a write up that confirms the action
- a "yes, proceed" button
- a "no, cancel action" button
- Design a second screen which is an alert screen that confirms that the deletion process is successful.
- Ensure you use the colors, typography, icons and side navigation on the style guide to ensure consistency | 1.0 | Admin Design - Admin Dashboard: Delete Product (Desktop view) - - Design the desktop view of the delete product pop up on the admin home dashboard.
- The screen should display the following details:
- a write up that confirms the action
- a "yes, proceed" button
- a "no, cancel action" button
- Design a second screen which is an alert screen that confirms that the deletion process is successful.
- Ensure you use the colors, typography, icons and side navigation on the style guide to ensure consistency | non_infrastructure | admin design admin dashboard delete product desktop view design the desktop view of the delete product pop up on the admin home dashboard the screen should display the following details a write up that confirms the action a yes proceed button a no cancel action button design a second screen which is an alert screen that confirms that the deletion process is successful ensure you use the colors typography icons and side navigation on the style guide to ensure consistency | 0 |
10,818 | 8,743,599,634 | IssuesEvent | 2018-12-12 19:38:02 | servo/servo | https://api.github.com/repos/servo/servo | opened | Taskcluster WPT output is not readable | A-infrastructure | ```
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
at-mime-trailing-sem
/eventsource/format-leading
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
at-mime-trailing-sem
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
at-mime-trailing-sem
/eventsource/shared-worker/eventsource-event
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
at-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-u
/eventsource/format-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-retry
/eventsource/format-field
at-field-retry
• 62 ran as expected. 0 tests skipped.
real 0m8.050s
user 0m12.580s
sys 0m9.876s
+ ./mach test-wpt --release --product=servodriver --headless tests/wpt/mozilla/tests/mozilla/DOMParser.html tests/wpt/mozilla/tests/css/per_glyph_font_fallback_a.html tests/wpt/mozilla/tests/css/img_simple.html tests/wpt/mozilla/tests/mozilla/secure.https.html
Running 4 tests in web-platform-tests
[0/4] /_mozilla/css/per_glyph_font_fallbac
ozilla/css/per_glyph_font_fallbac
/_mozilla/css/img_sim
ozilla/css/img_sim
[2/4] No tests running.
ozilla/mozilla/DOMPar
ozilla/mozilla/DOMPar
/_mozilla/mozilla/secure.ht
ozilla/mozilla/secure.ht
• 4 ran as expected. 0 tests skipped.
real 0m7.836s
user 0m4.644s
sys 0m1.060s
```
The default logger rewrites the terminal, which doesn't work well with the taskcluster logging. We should try enabling `--log-mach -` and see if that yields something more understandable. | 1.0 | Taskcluster WPT output is not readable - ```
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
at-mime-trailing-sem
/eventsource/format-leading
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
at-mime-trailing-sem
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
at-mime-trailing-sem
/eventsource/shared-worker/eventsource-event
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
at-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-data-before-final-empt
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-mime-valid
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/eventsource-request-cancel
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/format-field
/eventsource/format-field-u
/eventsource/format-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-u
/eventsource/format-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-retry
/eventsource/request-crede
/eventsource/format-field
at-field-retry
/eventsource/format-field
at-field-retry
• 62 ran as expected. 0 tests skipped.
real 0m8.050s
user 0m12.580s
sys 0m9.876s
+ ./mach test-wpt --release --product=servodriver --headless tests/wpt/mozilla/tests/mozilla/DOMParser.html tests/wpt/mozilla/tests/css/per_glyph_font_fallback_a.html tests/wpt/mozilla/tests/css/img_simple.html tests/wpt/mozilla/tests/mozilla/secure.https.html
Running 4 tests in web-platform-tests
[0/4] /_mozilla/css/per_glyph_font_fallbac
ozilla/css/per_glyph_font_fallbac
/_mozilla/css/img_sim
ozilla/css/img_sim
[2/4] No tests running.
ozilla/mozilla/DOMPar
ozilla/mozilla/DOMPar
/_mozilla/mozilla/secure.ht
ozilla/mozilla/secure.ht
• 4 ran as expected. 0 tests skipped.
real 0m7.836s
user 0m4.644s
sys 0m1.060s
```
The default logger rewrites the terminal, which doesn't work well with the taskcluster logging. We should try enabling `--log-mach -` and see if that yields something more understandable. | infrastructure | taskcluster wpt output is not readable eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field at mime trailing sem eventsource format leading eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field retry eventsource format field at mime trailing sem eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field retry eventsource format field at mime trailing sem eventsource shared worker eventsource event eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field retry eventsource format field eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field retry eventsource format field eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format field eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format mime valid eventsource format field at data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format mime valid eventsource format field eventsource format data before final empt eventsource format field retry eventsource eventsource request cancel eventsource format mime valid eventsource format field eventsource format field retry eventsource eventsource request cancel eventsource format mime valid eventsource format field eventsource format field u eventsource format field retry eventsource eventsource request cancel eventsource format mime valid eventsource format field eventsource format field u eventsource format field retry eventsource eventsource request cancel eventsource format field eventsource format field u eventsource format field retry eventsource format field eventsource format field u eventsource format field retry eventsource request crede eventsource format field at field u eventsource format field retry eventsource request crede eventsource format field at field retry eventsource request crede eventsource format field at field retry eventsource format field at field retry • ran as expected tests skipped real user sys mach test wpt release product servodriver headless tests wpt mozilla tests mozilla domparser html tests wpt mozilla tests css per glyph font fallback a html tests wpt mozilla tests css img simple html tests wpt mozilla tests mozilla secure https html running tests in web platform tests mozilla css per glyph font fallbac ozilla css per glyph font fallbac mozilla css img sim ozilla css img sim no tests running ozilla mozilla dompar ozilla mozilla dompar mozilla mozilla secure ht ozilla mozilla secure ht • ran as expected tests skipped real user sys the default logger rewrites the terminal which doesn t work well with the taskcluster logging we should try enabling log mach and see if that yields something more understandable | 1 |
13,240 | 10,166,367,189 | IssuesEvent | 2019-08-07 15:40:27 | wellcometrust/platform | https://api.github.com/repos/wellcometrust/platform | opened | Unify AWS organisation accounts | 🚧 Infrastructure | Needs AWS organisation account tagging in order to visually seperate costs in a satisfactory way. | 1.0 | Unify AWS organisation accounts - Needs AWS organisation account tagging in order to visually seperate costs in a satisfactory way. | infrastructure | unify aws organisation accounts needs aws organisation account tagging in order to visually seperate costs in a satisfactory way | 1 |
12,569 | 9,853,696,234 | IssuesEvent | 2019-06-19 15:15:49 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Automatically dismiss "Find Source" dialog if it appears during tests | Area-Infrastructure Concept-Continuous Improvement Integration-Test Urgency-Now | Sometimes the following dialog appears during integration tests:

When the **Find Source** dialog appears, it should be automatically dismissed to avoid cascading test failures. The test which triggered the dialog should report a test failure.
📝 Currently the most common cause of this dialog is a product bug where documents fail to save (e.g. https://github.com/dotnet/roslyn/issues/34637#issuecomment-478657402). However, the test improvement to address this will be a good overall reliability improvement. | 1.0 | Automatically dismiss "Find Source" dialog if it appears during tests - Sometimes the following dialog appears during integration tests:

When the **Find Source** dialog appears, it should be automatically dismissed to avoid cascading test failures. The test which triggered the dialog should report a test failure.
📝 Currently the most common cause of this dialog is a product bug where documents fail to save (e.g. https://github.com/dotnet/roslyn/issues/34637#issuecomment-478657402). However, the test improvement to address this will be a good overall reliability improvement. | infrastructure | automatically dismiss find source dialog if it appears during tests sometimes the following dialog appears during integration tests when the find source dialog appears it should be automatically dismissed to avoid cascading test failures the test which triggered the dialog should report a test failure 📝 currently the most common cause of this dialog is a product bug where documents fail to save e g however the test improvement to address this will be a good overall reliability improvement | 1 |
34,782 | 30,456,238,263 | IssuesEvent | 2023-07-16 23:05:16 | solidjs/solid-docs-next | https://api.github.com/repos/solidjs/solid-docs-next | closed | Testing Guide [How-To Guide] | infrastructure how-to docs-migration | We start with the [existing guide](https://www.solidjs.com/guides/testing). This is a perfect candidate for an interactive Stackblitz-based guide. | 1.0 | Testing Guide [How-To Guide] - We start with the [existing guide](https://www.solidjs.com/guides/testing). This is a perfect candidate for an interactive Stackblitz-based guide. | infrastructure | testing guide we start with the this is a perfect candidate for an interactive stackblitz based guide | 1 |
279,750 | 24,252,568,895 | IssuesEvent | 2022-09-27 15:11:20 | dusk-network/wallet-cli | https://api.github.com/repos/dusk-network/wallet-cli | closed | Quit program if 3 times wrong recovery phrase is given during recovery process | area:wallet mark:testnet module:rusk-wallet | **Describe what you want implemented**
The program should quit after 3 times a wrong phase phrase
**Describe "Why" this is needed**
If the user is not able to fill in the correct recovery phrase, the program will exit
| 1.0 | Quit program if 3 times wrong recovery phrase is given during recovery process - **Describe what you want implemented**
The program should quit after 3 times a wrong phase phrase
**Describe "Why" this is needed**
If the user is not able to fill in the correct recovery phrase, the program will exit
| non_infrastructure | quit program if times wrong recovery phrase is given during recovery process describe what you want implemented the program should quit after times a wrong phase phrase describe why this is needed if the user is not able to fill in the correct recovery phrase the program will exit | 0 |
22,496 | 15,222,595,803 | IssuesEvent | 2021-02-18 00:38:59 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Read only memo | bug interface/infrastructure | When leaving a read-only memo field - it is showing an error message in the status bar.
You don't have to change anything, or even click on the viewer.
It's showing up when you're viewing the model structure of a crop - and probably any other read-only instances..
Can it check the read only status before showing an error?
_Unable to modify *name* - it is read-only._
| 1.0 | Read only memo - When leaving a read-only memo field - it is showing an error message in the status bar.
You don't have to change anything, or even click on the viewer.
It's showing up when you're viewing the model structure of a crop - and probably any other read-only instances..
Can it check the read only status before showing an error?
_Unable to modify *name* - it is read-only._
| infrastructure | read only memo when leaving a read only memo field it is showing an error message in the status bar you don t have to change anything or even click on the viewer it s showing up when you re viewing the model structure of a crop and probably any other read only instances can it check the read only status before showing an error unable to modify name it is read only | 1 |
3,958 | 4,796,960,080 | IssuesEvent | 2016-11-01 10:10:12 | Cadasta/cadasta-platform | https://api.github.com/repos/Cadasta/cadasta-platform | opened | Upgrade Django to resolve security vulnerabilities | security | Current version: 1.9.6
Latest stable version: 1.10.2
Multiple vulns affecting v1.9, listed here: https://docs.djangoproject.com/en/1.10/releases/security/
Changelogs for upgrading from 1.9.6 to 1.10.2: https://docs.djangoproject.com/en/1.10/releases/
Note there are some backwards incompatible changes when coming from 1.9.x: https://docs.djangoproject.com/en/1.10/releases/1.10/#backwards-incompatible-1-10 | True | Upgrade Django to resolve security vulnerabilities - Current version: 1.9.6
Latest stable version: 1.10.2
Multiple vulns affecting v1.9, listed here: https://docs.djangoproject.com/en/1.10/releases/security/
Changelogs for upgrading from 1.9.6 to 1.10.2: https://docs.djangoproject.com/en/1.10/releases/
Note there are some backwards incompatible changes when coming from 1.9.x: https://docs.djangoproject.com/en/1.10/releases/1.10/#backwards-incompatible-1-10 | non_infrastructure | upgrade django to resolve security vulnerabilities current version latest stable version multiple vulns affecting listed here changelogs for upgrading from to note there are some backwards incompatible changes when coming from x | 0 |
20,751 | 14,138,681,369 | IssuesEvent | 2020-11-10 08:47:46 | fluencelabs/fluence | https://api.github.com/repos/fluencelabs/fluence | closed | Supply Node's application.conf as a volume | enhancement scala ~infrastructure ~node | Currently a bunch of environment variables is used to generate application.conf for node.
ENV is a global state, it's easily misconfigured and overall error prone. It's better to pass `application.conf` on node's `docker run` as a volume. | 1.0 | Supply Node's application.conf as a volume - Currently a bunch of environment variables is used to generate application.conf for node.
ENV is a global state, it's easily misconfigured and overall error prone. It's better to pass `application.conf` on node's `docker run` as a volume. | infrastructure | supply node s application conf as a volume currently a bunch of environment variables is used to generate application conf for node env is a global state it s easily misconfigured and overall error prone it s better to pass application conf on node s docker run as a volume | 1 |
143,608 | 5,520,815,750 | IssuesEvent | 2017-03-19 09:50:44 | siteorigin/siteorigin-panels | https://api.github.com/repos/siteorigin/siteorigin-panels | closed | Widget Styles: Add checkbox, "Disable On Mobile" or breakpoint | bug priority-2 | I've seen a lot of support requests about having set margin or CSS for a really nice desktop experience and then turning around and saying that it looks like crap due to the large amount of margin / padding they've added. You should be able to either disable it on mobile or adjust it for mobile users.
| 1.0 | Widget Styles: Add checkbox, "Disable On Mobile" or breakpoint - I've seen a lot of support requests about having set margin or CSS for a really nice desktop experience and then turning around and saying that it looks like crap due to the large amount of margin / padding they've added. You should be able to either disable it on mobile or adjust it for mobile users.
| non_infrastructure | widget styles add checkbox disable on mobile or breakpoint i ve seen a lot of support requests about having set margin or css for a really nice desktop experience and then turning around and saying that it looks like crap due to the large amount of margin padding they ve added you should be able to either disable it on mobile or adjust it for mobile users | 0 |
236,361 | 18,094,657,306 | IssuesEvent | 2021-09-22 07:42:01 | bjw-s/ansible-role-vyos | https://api.github.com/repos/bjw-s/ansible-role-vyos | opened | Add documentation for role variables | documentation | ### TODO:
* [ ] Document variables in `/var/**.yml` files
* [ ] Add variables documentation to GitHub pages. (Preferably automatically) | 1.0 | Add documentation for role variables - ### TODO:
* [ ] Document variables in `/var/**.yml` files
* [ ] Add variables documentation to GitHub pages. (Preferably automatically) | non_infrastructure | add documentation for role variables todo document variables in var yml files add variables documentation to github pages preferably automatically | 0 |
57,066 | 8,139,499,238 | IssuesEvent | 2018-08-20 17:55:01 | AladW/aurutils | https://api.github.com/repos/AladW/aurutils | closed | aursearch: inconsistency between man page and programme | documentation | Hello, I just installed aurutils. The first thing I did was `man aursearch`, where I found:
```
aursearch [-brvmd] [-F string] [-P pattern] string
```
And, in parcitular, the part I was interested in:
```
-d
Search by package name and description.
```
This option, however, looks unsupported by the programme:
```
$ aursearch -d whatever
usage: aursearch [-PFbrvmn]
``` | 1.0 | aursearch: inconsistency between man page and programme - Hello, I just installed aurutils. The first thing I did was `man aursearch`, where I found:
```
aursearch [-brvmd] [-F string] [-P pattern] string
```
And, in parcitular, the part I was interested in:
```
-d
Search by package name and description.
```
This option, however, looks unsupported by the programme:
```
$ aursearch -d whatever
usage: aursearch [-PFbrvmn]
``` | non_infrastructure | aursearch inconsistency between man page and programme hello i just installed aurutils the first thing i did was man aursearch where i found aursearch string and in parcitular the part i was interested in d search by package name and description this option however looks unsupported by the programme aursearch d whatever usage aursearch | 0 |
34,138 | 28,321,510,132 | IssuesEvent | 2023-04-11 01:51:43 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | MathUtilities float comparisons >= <= | bug interface/infrastructure | I had found various code snipped in APSIMx where float comparisons were handled directly in the code and so I created my own float extension methods to safely handle double comparisons accounting for double rounding. After this I found the provided methods in Shared.MathUtilities and have switched across to using the standard methods provided. This is not working as I would have expected.
The IsGreaterThanOrEqual and IsLessThanOrEqual do not provide the additional FloatsAreEqual() test so
MathUtilities.IsGreaterThanOrEqual(1.0, 1.0) = false (a value1 - value2 [0] is not >= tolerance)
I would expect this needs to be <= tolerance for the test to return true.
- Are these Is methods correct and tested or are they meant for something other than what I expected?
- Should the difference between the two values be absolute before comparing with the tolerance level to account for precision error above and below expected value? Only FloatsAreEqual does the absolute conversion.
- Should IsGreaterThanOrEqual have || FloatsAreEqual(value1, value2) to also check for equality?
This will also apply to IsPositive and IsNegative using the simple IsLessThan and IsGreaterThan. | 1.0 | MathUtilities float comparisons >= <= - I had found various code snipped in APSIMx where float comparisons were handled directly in the code and so I created my own float extension methods to safely handle double comparisons accounting for double rounding. After this I found the provided methods in Shared.MathUtilities and have switched across to using the standard methods provided. This is not working as I would have expected.
The IsGreaterThanOrEqual and IsLessThanOrEqual do not provide the additional FloatsAreEqual() test so
MathUtilities.IsGreaterThanOrEqual(1.0, 1.0) = false (a value1 - value2 [0] is not >= tolerance)
I would expect this needs to be <= tolerance for the test to return true.
- Are these Is methods correct and tested or are they meant for something other than what I expected?
- Should the difference between the two values be absolute before comparing with the tolerance level to account for precision error above and below expected value? Only FloatsAreEqual does the absolute conversion.
- Should IsGreaterThanOrEqual have || FloatsAreEqual(value1, value2) to also check for equality?
This will also apply to IsPositive and IsNegative using the simple IsLessThan and IsGreaterThan. | infrastructure | mathutilities float comparisons i had found various code snipped in apsimx where float comparisons were handled directly in the code and so i created my own float extension methods to safely handle double comparisons accounting for double rounding after this i found the provided methods in shared mathutilities and have switched across to using the standard methods provided this is not working as i would have expected the isgreaterthanorequal and islessthanorequal do not provide the additional floatsareequal test so mathutilities isgreaterthanorequal false a is not tolerance i would expect this needs to be tolerance for the test to return true are these is methods correct and tested or are they meant for something other than what i expected should the difference between the two values be absolute before comparing with the tolerance level to account for precision error above and below expected value only floatsareequal does the absolute conversion should isgreaterthanorequal have floatsareequal to also check for equality this will also apply to ispositive and isnegative using the simple islessthan and isgreaterthan | 1 |
26,864 | 20,797,369,917 | IssuesEvent | 2022-03-17 10:35:53 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Package versioning vs framework versioning | question area-Infrastructure-libraries | Usually, when there is a new version of .NET released (such as .NET 6), core libraries (System.\*, Microsoft.Extensions.\* and so on) also get new versions (such as 6.0.x). What I observe for most of them, is that they usually target the new .NET version, but also some older / more general targets such as netstandard2.0.
Just one simple example: [https://www.nuget.org/packages/System.Text.Encodings.Web/6.0.0](https://www.nuget.org/packages/System.Text.Encodings.Web/6.0.0)
Are there any drawbacks of using the newer packages (6.0.x) with older framework versions (net5.0)? Or anything that I should be aware of when doing so? Because in most cases it is technically possible and I couldn’t find any discouragement of doing this in the documentation.
The context here is that I have several teams maintaining several microservices and some libraries that they share, and updating everything at once is not possible. I am planning a migration from .NET 5 to .NET 6 and there will probably be some transition period when some of them are already migrated and some are not. I'm considering if I should multitarget the shared libraries or maintain two branches etc, or maybe just gradually update all the core libraries to 6.0.x and when all the microservices are already migrated to net6.0, change the TargetFramework of the shared libraries also to net6.0. Are there any guidelines for such scenarios? | 1.0 | Package versioning vs framework versioning - Usually, when there is a new version of .NET released (such as .NET 6), core libraries (System.\*, Microsoft.Extensions.\* and so on) also get new versions (such as 6.0.x). What I observe for most of them, is that they usually target the new .NET version, but also some older / more general targets such as netstandard2.0.
Just one simple example: [https://www.nuget.org/packages/System.Text.Encodings.Web/6.0.0](https://www.nuget.org/packages/System.Text.Encodings.Web/6.0.0)
Are there any drawbacks of using the newer packages (6.0.x) with older framework versions (net5.0)? Or anything that I should be aware of when doing so? Because in most cases it is technically possible and I couldn’t find any discouragement of doing this in the documentation.
The context here is that I have several teams maintaining several microservices and some libraries that they share, and updating everything at once is not possible. I am planning a migration from .NET 5 to .NET 6 and there will probably be some transition period when some of them are already migrated and some are not. I'm considering if I should multitarget the shared libraries or maintain two branches etc, or maybe just gradually update all the core libraries to 6.0.x and when all the microservices are already migrated to net6.0, change the TargetFramework of the shared libraries also to net6.0. Are there any guidelines for such scenarios? | infrastructure | package versioning vs framework versioning usually when there is a new version of net released such as net core libraries system microsoft extensions and so on also get new versions such as x what i observe for most of them is that they usually target the new net version but also some older more general targets such as just one simple example are there any drawbacks of using the newer packages x with older framework versions or anything that i should be aware of when doing so because in most cases it is technically possible and i couldn’t find any discouragement of doing this in the documentation the context here is that i have several teams maintaining several microservices and some libraries that they share and updating everything at once is not possible i am planning a migration from net to net and there will probably be some transition period when some of them are already migrated and some are not i m considering if i should multitarget the shared libraries or maintain two branches etc or maybe just gradually update all the core libraries to x and when all the microservices are already migrated to change the targetframework of the shared libraries also to are there any guidelines for such scenarios | 1 |
22,164 | 15,025,684,654 | IssuesEvent | 2021-02-01 21:27:11 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | ShadedBarsOnGraph is not working properly when the axis values are reset | bug interface/infrastructure | See StockExample the second plot in the sheep feedlot as an example. | 1.0 | ShadedBarsOnGraph is not working properly when the axis values are reset - See StockExample the second plot in the sheep feedlot as an example. | infrastructure | shadedbarsongraph is not working properly when the axis values are reset see stockexample the second plot in the sheep feedlot as an example | 1 |
286,248 | 31,468,096,575 | IssuesEvent | 2023-08-30 04:52:19 | UpendoVentures/generator-upendodnn | https://api.github.com/repos/UpendoVentures/generator-upendodnn | closed | CVE-2020-7676 (Medium) detected in angular-1.4.7.min.js, angular-1.4.7.js - autoclosed | Mend: dependency security vulnerability | ## CVE-2020-7676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>angular-1.4.7.min.js</b>, <b>angular-1.4.7.js</b></p></summary>
<p>
<details><summary><b>angular-1.4.7.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.min.js</a></p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/Scripts/ng/angular/angular.min.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.7.min.js** (Vulnerable Library)
</details>
<details><summary><b>angular-1.4.7.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.js</a></p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/Scripts/ng/angular/angular.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.7.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/UpendoVentures/generator-upendodnn/commit/1c68c9a9ea9734a0a208d80999c86ff1564255dc">1c68c9a9ea9734a0a208d80999c86ff1564255dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code.
<p>Publish Date: 2020-06-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7676>CVE-2020-7676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: 1.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7676 (Medium) detected in angular-1.4.7.min.js, angular-1.4.7.js - autoclosed - ## CVE-2020-7676 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>angular-1.4.7.min.js</b>, <b>angular-1.4.7.js</b></p></summary>
<p>
<details><summary><b>angular-1.4.7.min.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.min.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.min.js</a></p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/Scripts/ng/angular/angular.min.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.7.min.js** (Vulnerable Library)
</details>
<details><summary><b>angular-1.4.7.js</b></p></summary>
<p>AngularJS is an MVC framework for building web applications. The core features include HTML enhanced with custom component and data-binding capabilities, dependency injection and strong focus on simplicity, testability, maintainability and boiler-plate reduction.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.js">https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.7/angular.js</a></p>
<p>Path to vulnerable library: /generators/mvc-spa/templates/Scripts/ng/angular/angular.js</p>
<p>
Dependency Hierarchy:
- :x: **angular-1.4.7.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/UpendoVentures/generator-upendodnn/commit/1c68c9a9ea9734a0a208d80999c86ff1564255dc">1c68c9a9ea9734a0a208d80999c86ff1564255dc</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
angular.js prior to 1.8.0 allows cross site scripting. The regex-based input HTML replacement may turn sanitized code into unsanitized one. Wrapping "<option>" elements in "<select>" ones changes parsing behavior, leading to possibly unsanitizing code.
<p>Publish Date: 2020-06-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-7676>CVE-2020-7676</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7676</a></p>
<p>Release Date: 2020-10-09</p>
<p>Fix Resolution: 1.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in angular min js angular js autoclosed cve medium severity vulnerability vulnerable libraries angular min js angular js angular min js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to vulnerable library generators mvc spa templates scripts ng angular angular min js dependency hierarchy x angular min js vulnerable library angular js angularjs is an mvc framework for building web applications the core features include html enhanced with custom component and data binding capabilities dependency injection and strong focus on simplicity testability maintainability and boiler plate reduction library home page a href path to vulnerable library generators mvc spa templates scripts ng angular angular js dependency hierarchy x angular js vulnerable library found in head commit a href found in base branch master vulnerability details angular js prior to allows cross site scripting the regex based input html replacement may turn sanitized code into unsanitized one wrapping elements in ones changes parsing behavior leading to possibly unsanitizing code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
15,037 | 11,303,824,095 | IssuesEvent | 2020-01-17 21:08:42 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Apparent configuration mismatch in the local live-live build | area-Infrastructure untriaged | Recent repro:
https://dev.azure.com/dnceng/public/_build/results?buildId=463372&view=logs&j=54758165-a89d-5ad0-61e6-21d87a00c5ac&t=5c295aef-d681-5b69-7935-39becf80aa8b
Proximate error:
<pre>
2019-12-20T19:23:05.0572522Z InjectResource.vcxproj -> F:\workspace\_work\1\s\artifacts\obj\coreclr\Windows_NT.x64.Release\src\tools\InjectResource\Release\InjectResource.exe
...
2019-12-20T19:45:07.6336718Z F:\workspace\_work\1\s\eng\liveBuilds.targets(37,5): error : The CoreCLR artifacts path does not exist 'F:\workspace\_work\1\s\artifacts\bin\coreclr\Windows_NT.x64.Debug\'. The CoreCLR subset category must be built before building this project. [F:\workspace\_work\1\s\src\libraries\System.AppContext\src\System.AppContext.csproj]
</pre>
FYI: @janvorli
Seemingly coming from the local live-live change, apologies in case of unjust accusation.
Thanks
Tomas | 1.0 | Apparent configuration mismatch in the local live-live build - Recent repro:
https://dev.azure.com/dnceng/public/_build/results?buildId=463372&view=logs&j=54758165-a89d-5ad0-61e6-21d87a00c5ac&t=5c295aef-d681-5b69-7935-39becf80aa8b
Proximate error:
<pre>
2019-12-20T19:23:05.0572522Z InjectResource.vcxproj -> F:\workspace\_work\1\s\artifacts\obj\coreclr\Windows_NT.x64.Release\src\tools\InjectResource\Release\InjectResource.exe
...
2019-12-20T19:45:07.6336718Z F:\workspace\_work\1\s\eng\liveBuilds.targets(37,5): error : The CoreCLR artifacts path does not exist 'F:\workspace\_work\1\s\artifacts\bin\coreclr\Windows_NT.x64.Debug\'. The CoreCLR subset category must be built before building this project. [F:\workspace\_work\1\s\src\libraries\System.AppContext\src\System.AppContext.csproj]
</pre>
FYI: @janvorli
Seemingly coming from the local live-live change, apologies in case of unjust accusation.
Thanks
Tomas | infrastructure | apparent configuration mismatch in the local live live build recent repro proximate error injectresource vcxproj f workspace work s artifacts obj coreclr windows nt release src tools injectresource release injectresource exe f workspace work s eng livebuilds targets error the coreclr artifacts path does not exist f workspace work s artifacts bin coreclr windows nt debug the coreclr subset category must be built before building this project fyi janvorli seemingly coming from the local live live change apologies in case of unjust accusation thanks tomas | 1 |
22,997 | 3,736,471,147 | IssuesEvent | 2016-03-08 16:03:31 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | opened | Cuando agregas el productos, en el precio unitario aparece el signo de $ | defect | anteriormente no lo colocaba, y ahora los clientes modifican el precio dejando el signo, y el sistema pone todo en cero.
Por que aparece ahora este signo en text box del precio unitario?
Favor de eliminarlo y dejarlo como estaba antes. | 1.0 | Cuando agregas el productos, en el precio unitario aparece el signo de $ - anteriormente no lo colocaba, y ahora los clientes modifican el precio dejando el signo, y el sistema pone todo en cero.
Por que aparece ahora este signo en text box del precio unitario?
Favor de eliminarlo y dejarlo como estaba antes. | non_infrastructure | cuando agregas el productos en el precio unitario aparece el signo de anteriormente no lo colocaba y ahora los clientes modifican el precio dejando el signo y el sistema pone todo en cero por que aparece ahora este signo en text box del precio unitario favor de eliminarlo y dejarlo como estaba antes | 0 |
14,291 | 10,739,831,207 | IssuesEvent | 2019-10-29 17:02:47 | HumanCellAtlas/secondary-analysis | https://api.github.com/repos/HumanCellAtlas/secondary-analysis | opened | Fix HCA Grafana dashboard for analysis service | infrastructure | * The Grafana dashboard for the analysis integration environment is still consuming logs from the broad-dsde-mint-test project, but should now be using broad-dsde-mint-integration logs.
* The GCloud API request rate graph is not loading info due to a 400 error for both integration and staging
* None of the prod graphs are loading due to 403 errors
Link to dashboard:
integration: https://metrics.dev.data.humancellatlas.org/d/analysis-integration/analysis-integration?orgId=1
staging: https://metrics.dev.data.humancellatlas.org/d/lpH9MaYiz/analysis-staging?orgId=1
prod: https://metrics.data.humancellatlas.org/d/analysis-prod/analysis-prod?orgId=1
┆Issue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-532)
| 1.0 | Fix HCA Grafana dashboard for analysis service - * The Grafana dashboard for the analysis integration environment is still consuming logs from the broad-dsde-mint-test project, but should now be using broad-dsde-mint-integration logs.
* The GCloud API request rate graph is not loading info due to a 400 error for both integration and staging
* None of the prod graphs are loading due to 403 errors
Link to dashboard:
integration: https://metrics.dev.data.humancellatlas.org/d/analysis-integration/analysis-integration?orgId=1
staging: https://metrics.dev.data.humancellatlas.org/d/lpH9MaYiz/analysis-staging?orgId=1
prod: https://metrics.data.humancellatlas.org/d/analysis-prod/analysis-prod?orgId=1
┆Issue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-532)
| infrastructure | fix hca grafana dashboard for analysis service the grafana dashboard for the analysis integration environment is still consuming logs from the broad dsde mint test project but should now be using broad dsde mint integration logs the gcloud api request rate graph is not loading info due to a error for both integration and staging none of the prod graphs are loading due to errors link to dashboard integration staging prod ┆issue is synchronized with this | 1 |
24,538 | 17,374,263,705 | IssuesEvent | 2021-07-30 18:19:29 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | [linux-arm] superpmi replay fails due to illegal alignment | arch-arm32 area-Infrastructure-coreclr os-linux | ```
Process 15 stopped
* thread #1, name = 'superpmi', stop reason = signal SIGBUS: illegal alignment
frame #0: 0xf681c3ea libclrjit.so`Compiler::impImportStaticReadOnlyField(this=0x018b373c, fldAddr=0x018a5f99, lclTyp=<unavailable>) at importer.cpp:7162
(lldb) bt
* thread #1, name = 'superpmi', stop reason = signal SIGBUS: illegal alignment
* frame #0: 0xf681c3ea libclrjit.so`Compiler::impImportStaticReadOnlyField(this=0x018b373c, fldAddr=0x018a5f99, lclTyp=<unavailable>) at importer.cpp:7162
frame #1: 0xf682d268 libclrjit.so`Compiler::impImportBlockCode(this=0x018b373c, block=<unavailable>) at importer.cpp:14586
frame #2: 0xf68315c2 libclrjit.so`Compiler::impImportBlock(BasicBlock*) [inlined] Compiler::impImportBlock(this=<unavailable>, pParam=0xfffedf10)::$_0::operator()(Compiler::impImportBlock(BasicBlock*)::FilterVerificationExceptionsParam*) const at importer.cpp:17401
frame #3: 0xf68315aa libclrjit.so`Compiler::impImportBlock(this=0x018b373c, block=0x018da494) at importer.cpp:17411
frame #4: 0xf68333f4 libclrjit.so`Compiler::impImport(this=0x018b373c) at importer.cpp:18499
frame #5: 0xf67c2c02 libclrjit.so`Compiler::fgImport(this=0x018b373c) at flowgraph.cpp:7192
frame #6: 0xf68c6832 libclrjit.so`Phase::Run(this=0xfffee06c) at phase.cpp:61
frame #7: 0xf679f54a libclrjit.so`Compiler::compCompile(void**, unsigned int*, JitFlags*) [inlined] DoPhase(_compiler=0x018b373c, _phase=PHASE_IMPORTATION)()) at phase.h:136
frame #8: 0xf679f524 libclrjit.so`Compiler::compCompile(this=0x018b373c, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:4260
frame #9: 0xf67a231e libclrjit.so`Compiler::compCompileHelper(this=0x018b373c, classPtr=<unavailable>, compHnd=<unavailable>, methodInfo=<unavailable>, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:6128
frame #10: 0xf67a1288 libclrjit.so`Compiler::compCompile(CORINFO_MODULE_STRUCT_*, void**, unsigned int*, JitFlags*) at compiler.cpp:5467
frame #11: 0xf67a1276 libclrjit.so`Compiler::compCompile(this=0x018b373c, classPtr=0xf3974010, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:5486
frame #12: 0xf67a2cf6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6770
frame #13: 0xf67a2bd6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6795
frame #14: 0xf67a2bd2 libclrjit.so`jitNativeCode(methodHnd=0xeb3d12c4, classPtr=0xf3974010, compHnd=0x005814c0, methodInfo=0x018af4c0, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650, inlineInfoPtr=0xfffee6e0) at compiler.cpp:6797
frame #15: 0xf67d90f4 libclrjit.so`Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::$_0::__invoke(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at flowgraph.cpp:23318
frame #16: 0xf67d9012 libclrjit.so`Compiler::fgInvokeInlineeCompiler(pParam=0xfffee6cc)::$_0::__invoke(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at flowgraph.cpp:23267
frame #17: 0x004182e8 superpmi`RunWithErrorTrap(void (*)(void*), void*) [inlined] RunWithErrorTrap(this=<unavailable>, pTrapParam=<unavailable>)(void*), void*)::$_0::operator()(RunWithErrorTrap(void (*)(void*), void*)::TrapParam*) const at errorhandling.cpp:150
frame #18: 0x004182e2 superpmi`RunWithErrorTrap(function=<unavailable>, param=<unavailable>)(void*), void*) at errorhandling.cpp:152
frame #19: 0xf67d4816 libclrjit.so`Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*) [inlined] bool Compiler::eeRunWithErrorTrap<Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param>(this=0x018a7074, function=<unavailable>, param=0xfffee6cc)(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*), Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at compiler.h:7445
frame #20: 0xf67d4812 libclrjit.so`Compiler::fgInvokeInlineeCompiler(this=0x018a7074, call=0x018af3a0, inlineResult=0xfffeee68) at flowgraph.cpp:23266
frame #21: 0xf689fce0 libclrjit.so`Compiler::fgMorphCallInlineHelper(this=0x018a7074, call=0x018af3a0, result=0xfffeee68) at morph.cpp:6538
frame #22: 0xf689fb06 libclrjit.so`Compiler::fgMorphCallInline(this=0x018a7074, call=0x018af3a0, inlineResult=0xfffeee68) at morph.cpp:6420
frame #23: 0xf67d33fe libclrjit.so`Compiler::fgInline(this=0x018a7074) at flowgraph.cpp:22491
frame #24: 0xf68c6832 libclrjit.so`Phase::Run(this=0xfffeefac) at phase.cpp:61
frame #25: 0xf679f6ec libclrjit.so`Compiler::compCompile(void**, unsigned int*, JitFlags*) [inlined] DoPhase(_compiler=0x018a7074, _phase=PHASE_MORPH_INLINE)()) at phase.h:136
frame #26: 0xf679f6c8 libclrjit.so`Compiler::compCompile(this=0x018a7074, methodCodePtr=0xfffef5a4, methodCodeSize=<unavailable>, compileFlags=0xfffef5b8) at compiler.cpp:4431
frame #27: 0xf67a231e libclrjit.so`Compiler::compCompileHelper(this=0x018a7074, classPtr=<unavailable>, compHnd=<unavailable>, methodInfo=<unavailable>, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8) at compiler.cpp:6128
frame #28: 0xf67a1288 libclrjit.so`Compiler::compCompile(CORINFO_MODULE_STRUCT_*, void**, unsigned int*, JitFlags*) at compiler.cpp:5467
frame #29: 0xf67a1276 libclrjit.so`Compiler::compCompile(this=0x018a7074, classPtr=0xf3974010, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8) at compiler.cpp:5486
frame #30: 0xf67a2cf6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6770
frame #31: 0xf67a2bd6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6795
frame #32: 0xf67a2bd2 libclrjit.so`jitNativeCode(methodHnd=0xed147220, classPtr=0xf3974010, compHnd=0x005814c0, methodInfo=0xfffef6b4, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8, inlineInfoPtr=0x00000000) at compiler.cpp:6797
frame #33: 0xf67aa316 libclrjit.so`CILJit::compileMethod(this=<unavailable>, compHnd=0x005814c0, methodInfo=0xfffef6b4, flags=<unavailable>, entryAddress=0xfffef69c, nativeSizeOfCode=0xfffef698) at ee_il_dll.cpp:273
frame #34: 0x0040d2a0 superpmi`JitInstance::CompileMethod(MethodContext*, int, bool) at jitinstance.cpp:287
frame #35: 0x0040d25c superpmi`JitInstance::CompileMethod(this=0x00574f38, MethodToCompile=<unavailable>, mcIndex=9118, collectThroughput=<unavailable>) at jitinstance.cpp:324
frame #36: 0x004108e6 superpmi`main(argc=<unavailable>, argv=<unavailable>) at superpmi.cpp:360
frame #37: 0xf74dafe6 libc.so.6`__libc_start_main(main=(superpmi`main + 1 at superpmi.cpp:138), argc=15, argv=0xfffefc54, init=<unavailable>, fini=(superpmi`__libc_csu_fini + 1), rtld_fini=(ld-2.27.so`_dl_fini + 1 at dl-fini.c:50), stack_end=0xfffefc54) at libc-start.c:310
frame #38: 0x00408fa0 superpmi`_start + 52
(lldb) disassemble
libclrjit.so`Compiler::impImportStaticReadOnlyField:
0xf681c3c8 <+0>: push {r7, lr}
0xf681c3ca <+2>: mov r7, sp
0xf681c3cc <+4>: subs r2, #0x2
0xf681c3ce <+6>: cmp r2, #0xa
0xf681c3d0 <+8>: bhi 0xf681c3f6 ; <+46> at importer.cpp:7178
0xf681c3d2 <+10>: tbb [pc, r2]
0xf681c3d6 <+14>: vstrne s2, [r6, #-24]
0xf681c3da <+18>: beq 0xf6a1e462
0xf681c3de <+22>: eorseq r2, r0, r10, lsl #12
0xf681c3e2 <+26>: ldrb r1, [r1]
0xf681c3e4 <+28>: b 0xf681c418 ; <+80> at importer.cpp:7157
0xf681c3e6 <+30>: ldr r1, [r1]
0xf681c3e8 <+32>: b 0xf681c418 ; <+80> at importer.cpp:7157
-> 0xf681c3ea <+34>: ldrd r2, r3, [r1]
0xf681c3ee <+38>: pop.w {r7, lr}
0xf681c3f2 <+42>: b.w 0xf67ec1b0 ; Compiler::gtNewLconNode at gentree.cpp:6110
(lldb) re r r1
r1 = 0x018a5f99
``` | 1.0 | [linux-arm] superpmi replay fails due to illegal alignment - ```
Process 15 stopped
* thread #1, name = 'superpmi', stop reason = signal SIGBUS: illegal alignment
frame #0: 0xf681c3ea libclrjit.so`Compiler::impImportStaticReadOnlyField(this=0x018b373c, fldAddr=0x018a5f99, lclTyp=<unavailable>) at importer.cpp:7162
(lldb) bt
* thread #1, name = 'superpmi', stop reason = signal SIGBUS: illegal alignment
* frame #0: 0xf681c3ea libclrjit.so`Compiler::impImportStaticReadOnlyField(this=0x018b373c, fldAddr=0x018a5f99, lclTyp=<unavailable>) at importer.cpp:7162
frame #1: 0xf682d268 libclrjit.so`Compiler::impImportBlockCode(this=0x018b373c, block=<unavailable>) at importer.cpp:14586
frame #2: 0xf68315c2 libclrjit.so`Compiler::impImportBlock(BasicBlock*) [inlined] Compiler::impImportBlock(this=<unavailable>, pParam=0xfffedf10)::$_0::operator()(Compiler::impImportBlock(BasicBlock*)::FilterVerificationExceptionsParam*) const at importer.cpp:17401
frame #3: 0xf68315aa libclrjit.so`Compiler::impImportBlock(this=0x018b373c, block=0x018da494) at importer.cpp:17411
frame #4: 0xf68333f4 libclrjit.so`Compiler::impImport(this=0x018b373c) at importer.cpp:18499
frame #5: 0xf67c2c02 libclrjit.so`Compiler::fgImport(this=0x018b373c) at flowgraph.cpp:7192
frame #6: 0xf68c6832 libclrjit.so`Phase::Run(this=0xfffee06c) at phase.cpp:61
frame #7: 0xf679f54a libclrjit.so`Compiler::compCompile(void**, unsigned int*, JitFlags*) [inlined] DoPhase(_compiler=0x018b373c, _phase=PHASE_IMPORTATION)()) at phase.h:136
frame #8: 0xf679f524 libclrjit.so`Compiler::compCompile(this=0x018b373c, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:4260
frame #9: 0xf67a231e libclrjit.so`Compiler::compCompileHelper(this=0x018b373c, classPtr=<unavailable>, compHnd=<unavailable>, methodInfo=<unavailable>, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:6128
frame #10: 0xf67a1288 libclrjit.so`Compiler::compCompile(CORINFO_MODULE_STRUCT_*, void**, unsigned int*, JitFlags*) at compiler.cpp:5467
frame #11: 0xf67a1276 libclrjit.so`Compiler::compCompile(this=0x018b373c, classPtr=0xf3974010, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650) at compiler.cpp:5486
frame #12: 0xf67a2cf6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6770
frame #13: 0xf67a2bd6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6795
frame #14: 0xf67a2bd2 libclrjit.so`jitNativeCode(methodHnd=0xeb3d12c4, classPtr=0xf3974010, compHnd=0x005814c0, methodInfo=0x018af4c0, methodCodePtr=0xfffee6e0, methodCodeSize=0x00000000, compileFlags=0xfffee650, inlineInfoPtr=0xfffee6e0) at compiler.cpp:6797
frame #15: 0xf67d90f4 libclrjit.so`Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::$_0::__invoke(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at flowgraph.cpp:23318
frame #16: 0xf67d9012 libclrjit.so`Compiler::fgInvokeInlineeCompiler(pParam=0xfffee6cc)::$_0::__invoke(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at flowgraph.cpp:23267
frame #17: 0x004182e8 superpmi`RunWithErrorTrap(void (*)(void*), void*) [inlined] RunWithErrorTrap(this=<unavailable>, pTrapParam=<unavailable>)(void*), void*)::$_0::operator()(RunWithErrorTrap(void (*)(void*), void*)::TrapParam*) const at errorhandling.cpp:150
frame #18: 0x004182e2 superpmi`RunWithErrorTrap(function=<unavailable>, param=<unavailable>)(void*), void*) at errorhandling.cpp:152
frame #19: 0xf67d4816 libclrjit.so`Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*) [inlined] bool Compiler::eeRunWithErrorTrap<Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param>(this=0x018a7074, function=<unavailable>, param=0xfffee6cc)(Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*), Compiler::fgInvokeInlineeCompiler(GenTreeCall*, InlineResult*)::Param*) at compiler.h:7445
frame #20: 0xf67d4812 libclrjit.so`Compiler::fgInvokeInlineeCompiler(this=0x018a7074, call=0x018af3a0, inlineResult=0xfffeee68) at flowgraph.cpp:23266
frame #21: 0xf689fce0 libclrjit.so`Compiler::fgMorphCallInlineHelper(this=0x018a7074, call=0x018af3a0, result=0xfffeee68) at morph.cpp:6538
frame #22: 0xf689fb06 libclrjit.so`Compiler::fgMorphCallInline(this=0x018a7074, call=0x018af3a0, inlineResult=0xfffeee68) at morph.cpp:6420
frame #23: 0xf67d33fe libclrjit.so`Compiler::fgInline(this=0x018a7074) at flowgraph.cpp:22491
frame #24: 0xf68c6832 libclrjit.so`Phase::Run(this=0xfffeefac) at phase.cpp:61
frame #25: 0xf679f6ec libclrjit.so`Compiler::compCompile(void**, unsigned int*, JitFlags*) [inlined] DoPhase(_compiler=0x018a7074, _phase=PHASE_MORPH_INLINE)()) at phase.h:136
frame #26: 0xf679f6c8 libclrjit.so`Compiler::compCompile(this=0x018a7074, methodCodePtr=0xfffef5a4, methodCodeSize=<unavailable>, compileFlags=0xfffef5b8) at compiler.cpp:4431
frame #27: 0xf67a231e libclrjit.so`Compiler::compCompileHelper(this=0x018a7074, classPtr=<unavailable>, compHnd=<unavailable>, methodInfo=<unavailable>, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8) at compiler.cpp:6128
frame #28: 0xf67a1288 libclrjit.so`Compiler::compCompile(CORINFO_MODULE_STRUCT_*, void**, unsigned int*, JitFlags*) at compiler.cpp:5467
frame #29: 0xf67a1276 libclrjit.so`Compiler::compCompile(this=0x018a7074, classPtr=0xf3974010, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8) at compiler.cpp:5486
frame #30: 0xf67a2cf6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6770
frame #31: 0xf67a2bd6 libclrjit.so`jitNativeCode(CORINFO_METHOD_STRUCT_*, CORINFO_MODULE_STRUCT_*, ICorJitInfo*, CORINFO_METHOD_INFO*, void**, unsigned int*, JitFlags*, void*) at compiler.cpp:6795
frame #32: 0xf67a2bd2 libclrjit.so`jitNativeCode(methodHnd=0xed147220, classPtr=0xf3974010, compHnd=0x005814c0, methodInfo=0xfffef6b4, methodCodePtr=0xfffef5a4, methodCodeSize=0xfffef698, compileFlags=0xfffef5b8, inlineInfoPtr=0x00000000) at compiler.cpp:6797
frame #33: 0xf67aa316 libclrjit.so`CILJit::compileMethod(this=<unavailable>, compHnd=0x005814c0, methodInfo=0xfffef6b4, flags=<unavailable>, entryAddress=0xfffef69c, nativeSizeOfCode=0xfffef698) at ee_il_dll.cpp:273
frame #34: 0x0040d2a0 superpmi`JitInstance::CompileMethod(MethodContext*, int, bool) at jitinstance.cpp:287
frame #35: 0x0040d25c superpmi`JitInstance::CompileMethod(this=0x00574f38, MethodToCompile=<unavailable>, mcIndex=9118, collectThroughput=<unavailable>) at jitinstance.cpp:324
frame #36: 0x004108e6 superpmi`main(argc=<unavailable>, argv=<unavailable>) at superpmi.cpp:360
frame #37: 0xf74dafe6 libc.so.6`__libc_start_main(main=(superpmi`main + 1 at superpmi.cpp:138), argc=15, argv=0xfffefc54, init=<unavailable>, fini=(superpmi`__libc_csu_fini + 1), rtld_fini=(ld-2.27.so`_dl_fini + 1 at dl-fini.c:50), stack_end=0xfffefc54) at libc-start.c:310
frame #38: 0x00408fa0 superpmi`_start + 52
(lldb) disassemble
libclrjit.so`Compiler::impImportStaticReadOnlyField:
0xf681c3c8 <+0>: push {r7, lr}
0xf681c3ca <+2>: mov r7, sp
0xf681c3cc <+4>: subs r2, #0x2
0xf681c3ce <+6>: cmp r2, #0xa
0xf681c3d0 <+8>: bhi 0xf681c3f6 ; <+46> at importer.cpp:7178
0xf681c3d2 <+10>: tbb [pc, r2]
0xf681c3d6 <+14>: vstrne s2, [r6, #-24]
0xf681c3da <+18>: beq 0xf6a1e462
0xf681c3de <+22>: eorseq r2, r0, r10, lsl #12
0xf681c3e2 <+26>: ldrb r1, [r1]
0xf681c3e4 <+28>: b 0xf681c418 ; <+80> at importer.cpp:7157
0xf681c3e6 <+30>: ldr r1, [r1]
0xf681c3e8 <+32>: b 0xf681c418 ; <+80> at importer.cpp:7157
-> 0xf681c3ea <+34>: ldrd r2, r3, [r1]
0xf681c3ee <+38>: pop.w {r7, lr}
0xf681c3f2 <+42>: b.w 0xf67ec1b0 ; Compiler::gtNewLconNode at gentree.cpp:6110
(lldb) re r r1
r1 = 0x018a5f99
``` | infrastructure | superpmi replay fails due to illegal alignment process stopped thread name superpmi stop reason signal sigbus illegal alignment frame libclrjit so compiler impimportstaticreadonlyfield this fldaddr lcltyp at importer cpp lldb bt thread name superpmi stop reason signal sigbus illegal alignment frame libclrjit so compiler impimportstaticreadonlyfield this fldaddr lcltyp at importer cpp frame libclrjit so compiler impimportblockcode this block at importer cpp frame libclrjit so compiler impimportblock basicblock compiler impimportblock this pparam operator compiler impimportblock basicblock filterverificationexceptionsparam const at importer cpp frame libclrjit so compiler impimportblock this block at importer cpp frame libclrjit so compiler impimport this at importer cpp frame libclrjit so compiler fgimport this at flowgraph cpp frame libclrjit so phase run this at phase cpp frame libclrjit so compiler compcompile void unsigned int jitflags dophase compiler phase phase importation at phase h frame libclrjit so compiler compcompile this methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so compiler compcompilehelper this classptr comphnd methodinfo methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so compiler compcompile corinfo module struct void unsigned int jitflags at compiler cpp frame libclrjit so compiler compcompile this classptr methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so jitnativecode corinfo method struct corinfo module struct icorjitinfo corinfo method info void unsigned int jitflags void at compiler cpp frame libclrjit so jitnativecode corinfo method struct corinfo module struct icorjitinfo corinfo method info void unsigned int jitflags void at compiler cpp frame libclrjit so jitnativecode methodhnd classptr comphnd methodinfo methodcodeptr methodcodesize compileflags inlineinfoptr at compiler cpp frame libclrjit so compiler fginvokeinlineecompiler gentreecall inlineresult invoke compiler fginvokeinlineecompiler gentreecall inlineresult param at flowgraph cpp frame libclrjit so compiler fginvokeinlineecompiler pparam invoke compiler fginvokeinlineecompiler gentreecall inlineresult param at flowgraph cpp frame superpmi runwitherrortrap void void void runwitherrortrap this ptrapparam void void operator runwitherrortrap void void void trapparam const at errorhandling cpp frame superpmi runwitherrortrap function param void void at errorhandling cpp frame libclrjit so compiler fginvokeinlineecompiler gentreecall inlineresult bool compiler eerunwitherrortrap this function param compiler fginvokeinlineecompiler gentreecall inlineresult param compiler fginvokeinlineecompiler gentreecall inlineresult param at compiler h frame libclrjit so compiler fginvokeinlineecompiler this call inlineresult at flowgraph cpp frame libclrjit so compiler fgmorphcallinlinehelper this call result at morph cpp frame libclrjit so compiler fgmorphcallinline this call inlineresult at morph cpp frame libclrjit so compiler fginline this at flowgraph cpp frame libclrjit so phase run this at phase cpp frame libclrjit so compiler compcompile void unsigned int jitflags dophase compiler phase phase morph inline at phase h frame libclrjit so compiler compcompile this methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so compiler compcompilehelper this classptr comphnd methodinfo methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so compiler compcompile corinfo module struct void unsigned int jitflags at compiler cpp frame libclrjit so compiler compcompile this classptr methodcodeptr methodcodesize compileflags at compiler cpp frame libclrjit so jitnativecode corinfo method struct corinfo module struct icorjitinfo corinfo method info void unsigned int jitflags void at compiler cpp frame libclrjit so jitnativecode corinfo method struct corinfo module struct icorjitinfo corinfo method info void unsigned int jitflags void at compiler cpp frame libclrjit so jitnativecode methodhnd classptr comphnd methodinfo methodcodeptr methodcodesize compileflags inlineinfoptr at compiler cpp frame libclrjit so ciljit compilemethod this comphnd methodinfo flags entryaddress nativesizeofcode at ee il dll cpp frame superpmi jitinstance compilemethod methodcontext int bool at jitinstance cpp frame superpmi jitinstance compilemethod this methodtocompile mcindex collectthroughput at jitinstance cpp frame superpmi main argc argv at superpmi cpp frame libc so libc start main main superpmi main at superpmi cpp argc argv init fini superpmi libc csu fini rtld fini ld so dl fini at dl fini c stack end at libc start c frame superpmi start lldb disassemble libclrjit so compiler impimportstaticreadonlyfield push lr mov sp subs cmp bhi at importer cpp tbb vstrne beq eorseq lsl ldrb b at importer cpp ldr b at importer cpp ldrd pop w lr b w compiler gtnewlconnode at gentree cpp lldb re r | 1 |
22,766 | 15,436,636,015 | IssuesEvent | 2021-03-07 13:48:44 | ilri/OpenRXV | https://api.github.com/repos/ilri/OpenRXV | closed | Migrate away from "links" in docker-compose | enhancement infrastructure | The [Docker documentation says](https://docs.docker.com/network/links/) that links are deprecated:
> The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
We currently use links in `docker-compose.yml` to allow containers to discover each other using their names when using the default Docker network bridge, but we can achieve the same thing by using ["user-defined networks"](https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge). | 1.0 | Migrate away from "links" in docker-compose - The [Docker documentation says](https://docs.docker.com/network/links/) that links are deprecated:
> The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link.
We currently use links in `docker-compose.yml` to allow containers to discover each other using their names when using the default Docker network bridge, but we can achieve the same thing by using ["user-defined networks"](https://docs.docker.com/network/bridge/#differences-between-user-defined-bridges-and-the-default-bridge). | infrastructure | migrate away from links in docker compose the that links are deprecated the link flag is a legacy feature of docker it may eventually be removed unless you absolutely need to continue using it we recommend that you use user defined networks to facilitate communication between two containers instead of using link we currently use links in docker compose yml to allow containers to discover each other using their names when using the default docker network bridge but we can achieve the same thing by using | 1 |
12,971 | 3,296,677,821 | IssuesEvent | 2015-11-02 00:29:30 | plehegar/dummy | https://api.github.com/repos/plehegar/dummy | closed | Test Suite needs to be updated so that multiple region support is not required | test-suite | all of the test suite test seem to assume support for multiple regions.
From tracker issue http://www.w3.org/AudioVideo/TT/tracker/issues/2 | 1.0 | Test Suite needs to be updated so that multiple region support is not required - all of the test suite test seem to assume support for multiple regions.
From tracker issue http://www.w3.org/AudioVideo/TT/tracker/issues/2 | non_infrastructure | test suite needs to be updated so that multiple region support is not required all of the test suite test seem to assume support for multiple regions from tracker issue | 0 |
345,100 | 24,844,443,059 | IssuesEvent | 2022-10-26 14:54:13 | immediatelylee/PortFolio_Board | https://api.github.com/repos/immediatelylee/PortFolio_Board | closed | 계정 도메인 추가 작업 | documentation enhancement | 지난 #9 에서 누락되었던 회원 계정 도메인을 추가 작업한다.
이는 인증 기능 구현을 위해 필요함
* [ ] erd 업데이트
* [ ] 도메인 추가
* [ ] jpa 인터페이스 추가 | 1.0 | 계정 도메인 추가 작업 - 지난 #9 에서 누락되었던 회원 계정 도메인을 추가 작업한다.
이는 인증 기능 구현을 위해 필요함
* [ ] erd 업데이트
* [ ] 도메인 추가
* [ ] jpa 인터페이스 추가 | non_infrastructure | 계정 도메인 추가 작업 지난 에서 누락되었던 회원 계정 도메인을 추가 작업한다 이는 인증 기능 구현을 위해 필요함 erd 업데이트 도메인 추가 jpa 인터페이스 추가 | 0 |
28,271 | 6,974,047,349 | IssuesEvent | 2017-12-11 22:48:10 | GoogleCloudPlatform/ruby-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/ruby-docs-samples | closed | Flaky Tests | code health P0 testing | **Possible reason**
A resource race-condition, running more than one instance of _ruby-docs-samples_ tests will cause an error related to a resource being in use.
**Storage and Datastore**
[Storage-Example-1](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/222?utm_campaign=vcs&integration-link&utm_medium=referral&utm_source=github-build-link)
[Storage-Example-2](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/263)
[Datastore-and-Storage-Example](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/227?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
**BigQuery**
[Example-1](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/219?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
[Example-2](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/224#tests/containers/0)
[Example-3](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/209?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
| 1.0 | Flaky Tests - **Possible reason**
A resource race-condition, running more than one instance of _ruby-docs-samples_ tests will cause an error related to a resource being in use.
**Storage and Datastore**
[Storage-Example-1](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/222?utm_campaign=vcs&integration-link&utm_medium=referral&utm_source=github-build-link)
[Storage-Example-2](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/263)
[Datastore-and-Storage-Example](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/227?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
**BigQuery**
[Example-1](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/219?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
[Example-2](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/224#tests/containers/0)
[Example-3](https://circleci.com/gh/GoogleCloudPlatform/ruby-docs-samples/209?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
| non_infrastructure | flaky tests possible reason a resource race condition running more than one instance of ruby docs samples tests will cause an error related to a resource being in use storage and datastore bigquery | 0 |
155,941 | 13,637,342,465 | IssuesEvent | 2020-09-25 07:40:40 | hapas-io/hyupup | https://api.github.com/repos/hapas-io/hyupup | closed | 🎲 2020-09-16-Meeting | documentation | **⏰ 이 회의록은 2020-09-16 에 정리될 예정입니다!**
### TODO
- [x] Slack 정기회의용 채널생성
- [x] 캘린더 일정 생성
### 멤버
역활 | 성명
------| ---
마스터 | @Hansanghyeon
멤버 | @youngroklee323 | 1.0 | 🎲 2020-09-16-Meeting - **⏰ 이 회의록은 2020-09-16 에 정리될 예정입니다!**
### TODO
- [x] Slack 정기회의용 채널생성
- [x] 캘린더 일정 생성
### 멤버
역활 | 성명
------| ---
마스터 | @Hansanghyeon
멤버 | @youngroklee323 | non_infrastructure | 🎲 meeting ⏰ 이 회의록은 에 정리될 예정입니다 todo slack 정기회의용 채널생성 캘린더 일정 생성 멤버 역활 성명 마스터 hansanghyeon 멤버 | 0 |
628,571 | 19,989,068,395 | IssuesEvent | 2022-01-31 02:23:47 | AzisabaNetwork/RyuZUPluginChat | https://api.github.com/repos/AzisabaNetwork/RyuZUPluginChat | closed | 他のPLがチャットに介入できない | kind/bug priority/critical area/listener | ChatのEventPriorityがLOWESTになっている影響で、他のPLがチャットを利用した文字列inputを実装していたとしてもRPCが先に処理してしまい失敗する問題 | 1.0 | 他のPLがチャットに介入できない - ChatのEventPriorityがLOWESTになっている影響で、他のPLがチャットを利用した文字列inputを実装していたとしてもRPCが先に処理してしまい失敗する問題 | non_infrastructure | 他のplがチャットに介入できない chatのeventpriorityがlowestになっている影響で、他のplがチャットを利用した文字列inputを実装していたとしてもrpcが先に処理してしまい失敗する問題 | 0 |
283,887 | 30,913,565,116 | IssuesEvent | 2023-08-05 02:15:33 | Satheesh575555/linux-4.1.15_CVE-2022-45934 | https://api.github.com/repos/Satheesh575555/linux-4.1.15_CVE-2022-45934 | reopened | CVE-2016-6213 (Medium) detected in linuxlinux-4.6 | Mend: dependency security vulnerability | ## CVE-2016-6213 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-4.1.15_CVE-2022-45934/commit/7c0b143b43394df131d83e9aecb3c5518edc127a">7c0b143b43394df131d83e9aecb3c5518edc127a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/pnode.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
fs/namespace.c in the Linux kernel before 4.9 does not restrict how many mounts may exist in a mount namespace, which allows local users to cause a denial of service (memory consumption and deadlock) via MS_BIND mount system calls, as demonstrated by a loop that triggers exponential growth in the number of mounts.
<p>Publish Date: 2016-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6213>CVE-2016-6213</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6213">https://nvd.nist.gov/vuln/detail/CVE-2016-6213</a></p>
<p>Release Date: 2016-12-28</p>
<p>Fix Resolution: 4.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-6213 (Medium) detected in linuxlinux-4.6 - ## CVE-2016-6213 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/linux-4.1.15_CVE-2022-45934/commit/7c0b143b43394df131d83e9aecb3c5518edc127a">7c0b143b43394df131d83e9aecb3c5518edc127a</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/pnode.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
fs/namespace.c in the Linux kernel before 4.9 does not restrict how many mounts may exist in a mount namespace, which allows local users to cause a denial of service (memory consumption and deadlock) via MS_BIND mount system calls, as demonstrated by a loop that triggers exponential growth in the number of mounts.
<p>Publish Date: 2016-12-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-6213>CVE-2016-6213</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-6213">https://nvd.nist.gov/vuln/detail/CVE-2016-6213</a></p>
<p>Release Date: 2016-12-28</p>
<p>Fix Resolution: 4.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files fs pnode c vulnerability details fs namespace c in the linux kernel before does not restrict how many mounts may exist in a mount namespace which allows local users to cause a denial of service memory consumption and deadlock via ms bind mount system calls as demonstrated by a loop that triggers exponential growth in the number of mounts publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
68,040 | 8,211,200,201 | IssuesEvent | 2018-09-04 13:12:13 | epfl-sti/wordpress.theme | https://api.github.com/repos/epfl-sti/wordpress.theme | closed | Captions on institutes' sliders not responsive | design + layout | e.g. https://sti-test.epfl.ch/research/institutes/igm/
When the width is > 1250px the caption is not displayed. | 1.0 | Captions on institutes' sliders not responsive - e.g. https://sti-test.epfl.ch/research/institutes/igm/
When the width is > 1250px the caption is not displayed. | non_infrastructure | captions on institutes sliders not responsive e g when the width is the caption is not displayed | 0 |
8,371 | 10,409,911,225 | IssuesEvent | 2019-09-13 09:56:38 | MachoThemes/strong-testimonials | https://api.github.com/repos/MachoThemes/strong-testimonials | opened | double check ST with Elementor | compatibility enhancement | further investigate this and make sure there are no incompatibilities between the 2 | True | double check ST with Elementor - further investigate this and make sure there are no incompatibilities between the 2 | non_infrastructure | double check st with elementor further investigate this and make sure there are no incompatibilities between the | 0 |
569,661 | 17,015,695,931 | IssuesEvent | 2021-07-02 11:42:32 | codee-team/codee-app | https://api.github.com/repos/codee-team/codee-app | closed | Provide an API to store some data / settings for plugins | enhancement priority:normal wontfix | Plugins should have a convenient API to store some values. | 1.0 | Provide an API to store some data / settings for plugins - Plugins should have a convenient API to store some values. | non_infrastructure | provide an api to store some data settings for plugins plugins should have a convenient api to store some values | 0 |
69,490 | 17,691,513,076 | IssuesEvent | 2021-08-24 10:31:21 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [SB] Admin is getting navigated to studies list screen when admin clicks on the 'Actions' section or the following copied studies | Bug P1 Study builder Process: Fixed Process: Tested dev | **Steps**
1. Click on edit study
2. Navigate to the 'Actions' section
3. Observe
**Study 1**
**Study id**: cDup003d
**Study name**: Copy of cDup003
**Study 2**
**Study id**: copied-LPV
**Study name**: Copy of Original new study (Open study)
**AR:** Admin is getting navigated to the 'Studies list' screen
**ER:** Admin should be navigated to the 'Actions' screen | 1.0 | [SB] Admin is getting navigated to studies list screen when admin clicks on the 'Actions' section or the following copied studies - **Steps**
1. Click on edit study
2. Navigate to the 'Actions' section
3. Observe
**Study 1**
**Study id**: cDup003d
**Study name**: Copy of cDup003
**Study 2**
**Study id**: copied-LPV
**Study name**: Copy of Original new study (Open study)
**AR:** Admin is getting navigated to the 'Studies list' screen
**ER:** Admin should be navigated to the 'Actions' screen | non_infrastructure | admin is getting navigated to studies list screen when admin clicks on the actions section or the following copied studies steps click on edit study navigate to the actions section observe study study id study name copy of study study id copied lpv study name copy of original new study open study ar admin is getting navigated to the studies list screen er admin should be navigated to the actions screen | 0 |
604,777 | 18,718,651,151 | IssuesEvent | 2021-11-03 09:13:46 | AY2122S1-CS2103T-F12-4/tp | https://api.github.com/repos/AY2122S1-CS2103T-F12-4/tp | closed | [PE-D] [Bug] Redeem breaks | type.Bug :bee: priority.High :1st_place_medal: | Redeem breaks on trying to redeem more points than the member has.
Steps to reproduce:
- Run the redeem command with points greater than the member in question has
Expected:
- Useful error message
Actual:
- Uncaught exception

<!--session: 1635494608666-f344754f-c445-4322-a4d9-4d3e1af956d7--><!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.FunctionalityBug`
original: VimuthM/ped#7 | 1.0 | [PE-D] [Bug] Redeem breaks - Redeem breaks on trying to redeem more points than the member has.
Steps to reproduce:
- Run the redeem command with points greater than the member in question has
Expected:
- Useful error message
Actual:
- Uncaught exception

<!--session: 1635494608666-f344754f-c445-4322-a4d9-4d3e1af956d7--><!--Version: Web v3.4.1-->
-------------
Labels: `severity.High` `type.FunctionalityBug`
original: VimuthM/ped#7 | non_infrastructure | redeem breaks redeem breaks on trying to redeem more points than the member has steps to reproduce run the redeem command with points greater than the member in question has expected useful error message actual uncaught exception labels severity high type functionalitybug original vimuthm ped | 0 |
28,091 | 22,950,494,369 | IssuesEvent | 2022-07-19 06:56:06 | blockframes/blockframes | https://api.github.com/repos/blockframes/blockframes | closed | write unit tests for devOps processes | Infrastructure Type - Unit test Dev - Test / Quality Assurance July clean up | there are a number of scripts that might break if something in a lib changes, etc...
For example, the deploy script broke when we removed `dotenv.config()` from the top of the file. To prevent this in future, a simple unit test should be written that checks to ensure scripts are working, and even just basic things -they can test for any condition - you can check if env vars exist directly, not run certain tests in CI, you can use loops to generate tests, etc... | 1.0 | write unit tests for devOps processes - there are a number of scripts that might break if something in a lib changes, etc...
For example, the deploy script broke when we removed `dotenv.config()` from the top of the file. To prevent this in future, a simple unit test should be written that checks to ensure scripts are working, and even just basic things -they can test for any condition - you can check if env vars exist directly, not run certain tests in CI, you can use loops to generate tests, etc... | infrastructure | write unit tests for devops processes there are a number of scripts that might break if something in a lib changes etc for example the deploy script broke when we removed dotenv config from the top of the file to prevent this in future a simple unit test should be written that checks to ensure scripts are working and even just basic things they can test for any condition you can check if env vars exist directly not run certain tests in ci you can use loops to generate tests etc | 1 |
540,268 | 15,803,513,826 | IssuesEvent | 2021-04-03 14:37:56 | drashland/website | https://api.github.com/repos/drashland/website | opened | Automate prod deploy process | Priority: Low Type: Enhancement | ## Summary
What:
We want to automate the prod deploy process so that we don't have to do it manually.
Why:
This is essentially what we had before when we were hosted on GitHub pages, but we're going to be doing it in DO. Also, automation ftw.
Example Process (doesn't need to be followed verbatim):
1. Set up webhook to POST to DO.
2. Catch webhook in DO.
3. Process deploy procedures.
| 1.0 | Automate prod deploy process - ## Summary
What:
We want to automate the prod deploy process so that we don't have to do it manually.
Why:
This is essentially what we had before when we were hosted on GitHub pages, but we're going to be doing it in DO. Also, automation ftw.
Example Process (doesn't need to be followed verbatim):
1. Set up webhook to POST to DO.
2. Catch webhook in DO.
3. Process deploy procedures.
| non_infrastructure | automate prod deploy process summary what we want to automate the prod deploy process so that we don t have to do it manually why this is essentially what we had before when we were hosted on github pages but we re going to be doing it in do also automation ftw example process doesn t need to be followed verbatim set up webhook to post to do catch webhook in do process deploy procedures | 0 |
68,348 | 21,647,521,489 | IssuesEvent | 2022-05-06 05:05:12 | klubcoin/lcn-mobile | https://api.github.com/repos/klubcoin/lcn-mobile | opened | [Account Maintenance][Currency] Fix must be able to select EUR as currency. | Defect Must Have Critical Account Maintenance Services | ### **Description:**
Must be able to select EUR as currency.
**Build Environment:** Prod Candidate Environment
**Affects Version:** 1.0.0.prod.4
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User has an existing Klubcoin Wallet Account
3. User is currently at Klubcoin Dashboard
### **Steps to Reproduce:**
1. Tap Hamburger Buttons
2. Tap Settings
3. Tap General
4. Select EUR as Currency
### **Expected Result:**
EUR Currency Selected
### **Actual Result:**
Cannot select EUR, always selecting USD
### **Attachment/s:**
https://user-images.githubusercontent.com/100281200/167070291-c87d7543-bca0-4872-9619-bcdac340e727.mp4 | 1.0 | [Account Maintenance][Currency] Fix must be able to select EUR as currency. - ### **Description:**
Must be able to select EUR as currency.
**Build Environment:** Prod Candidate Environment
**Affects Version:** 1.0.0.prod.4
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User has an existing Klubcoin Wallet Account
3. User is currently at Klubcoin Dashboard
### **Steps to Reproduce:**
1. Tap Hamburger Buttons
2. Tap Settings
3. Tap General
4. Select EUR as Currency
### **Expected Result:**
EUR Currency Selected
### **Actual Result:**
Cannot select EUR, always selecting USD
### **Attachment/s:**
https://user-images.githubusercontent.com/100281200/167070291-c87d7543-bca0-4872-9619-bcdac340e727.mp4 | non_infrastructure | fix must be able to select eur as currency description must be able to select eur as currency build environment prod candidate environment affects version prod device platform android device os test device oneplus pro pre condition user successfully installed klubcoin app user has an existing klubcoin wallet account user is currently at klubcoin dashboard steps to reproduce tap hamburger buttons tap settings tap general select eur as currency expected result eur currency selected actual result cannot select eur always selecting usd attachment s | 0 |
27,316 | 21,606,901,577 | IssuesEvent | 2022-05-04 05:08:17 | deneb-viz/deneb | https://api.github.com/repos/deneb-viz/deneb | opened | Expose Preview Image Generation for Templates | infrastructure housekeeping | In #159, we added some functionality to allow preview images, and this included some basic (internal) tooling to use the Vega View API to generate an image via feature switches. This is not 100% ready for production, so we need to re-write it a bit to be more reliable, including removing `ViewServices` (as the view is now part of the store). | 1.0 | Expose Preview Image Generation for Templates - In #159, we added some functionality to allow preview images, and this included some basic (internal) tooling to use the Vega View API to generate an image via feature switches. This is not 100% ready for production, so we need to re-write it a bit to be more reliable, including removing `ViewServices` (as the view is now part of the store). | infrastructure | expose preview image generation for templates in we added some functionality to allow preview images and this included some basic internal tooling to use the vega view api to generate an image via feature switches this is not ready for production so we need to re write it a bit to be more reliable including removing viewservices as the view is now part of the store | 1 |
1,556 | 3,267,302,107 | IssuesEvent | 2015-10-23 02:12:36 | radare/radare2 | https://api.github.com/repos/radare/radare2 | closed | Windows generated binaries are broken | blocker infrastructure regression Windows OS | Installation is correct, but the binary shipped with the installer doesn't seem to do anything | 1.0 | Windows generated binaries are broken - Installation is correct, but the binary shipped with the installer doesn't seem to do anything | infrastructure | windows generated binaries are broken installation is correct but the binary shipped with the installer doesn t seem to do anything | 1 |
24,259 | 17,049,328,763 | IssuesEvent | 2021-07-06 06:54:00 | ansible-collections/community.general | https://api.github.com/repos/ansible-collections/community.general | closed | Unnecessary warning that password does not exist from htpasswd | affects_2.9 bug module needs_info plugins python3 web_infrastructure | **Summary**
I am using `htpasswd` module in a code that has two states - enabled when a password is provided and disabled when it's not. In the enabled state it should set a password for the given user in the file, in disabled it should remove it.
This minor issue I have is that when I run my code in the disabled state for the first time, then the target passwd file does not exist and it is fine. But the module complains about it with a WARNING...
**Issue Type**
Bug Report
**Component Name**
htpasswd
**Ansible Version**
```
ansible 2.9.10
config file = /<redacted>/ansible.cfg
configured module search path = ['/Users/gdubicki/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.7 (default, Dec 30 2020, 10:13:08) [Clang 12.0.0 (clang-1200.0.32.28)]
```
**Steps To Reproduce**
1. Use this code on a server that DOES NOT have the `/etc/httpd/passwdfile` file (yet):
```
- name: "Put password in the passwd file, if password provided"
htpasswd:
state: "{{ 'present' if basic_auth_enabled else 'absent' }}"
path: /etc/httpd/passwdfile
name: "foo"
password: "bar"
owner: apache
group: apache
mode: 0400
```
**Expected Results**
When `basic_auth_enabled` is false we should get this:
```
TASK [foobar : Put password in the passwd file, if password provided] **********************************
ok: [vagrant]
```
**Actual Results**
```
TASK [foobar : Put password in the passwd file, if password provided] **********************************
[WARNING]: /etc/httpd/passwdfile does not exist
ok: [vagrant]
```
| 1.0 | Unnecessary warning that password does not exist from htpasswd - **Summary**
I am using `htpasswd` module in a code that has two states - enabled when a password is provided and disabled when it's not. In the enabled state it should set a password for the given user in the file, in disabled it should remove it.
This minor issue I have is that when I run my code in the disabled state for the first time, then the target passwd file does not exist and it is fine. But the module complains about it with a WARNING...
**Issue Type**
Bug Report
**Component Name**
htpasswd
**Ansible Version**
```
ansible 2.9.10
config file = /<redacted>/ansible.cfg
configured module search path = ['/Users/gdubicki/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/2.9.10/libexec/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.8.7 (default, Dec 30 2020, 10:13:08) [Clang 12.0.0 (clang-1200.0.32.28)]
```
**Steps To Reproduce**
1. Use this code on a server that DOES NOT have the `/etc/httpd/passwdfile` file (yet):
```
- name: "Put password in the passwd file, if password provided"
htpasswd:
state: "{{ 'present' if basic_auth_enabled else 'absent' }}"
path: /etc/httpd/passwdfile
name: "foo"
password: "bar"
owner: apache
group: apache
mode: 0400
```
**Expected Results**
When `basic_auth_enabled` is false we should get this:
```
TASK [foobar : Put password in the passwd file, if password provided] **********************************
ok: [vagrant]
```
**Actual Results**
```
TASK [foobar : Put password in the passwd file, if password provided] **********************************
[WARNING]: /etc/httpd/passwdfile does not exist
ok: [vagrant]
```
| infrastructure | unnecessary warning that password does not exist from htpasswd summary i am using htpasswd module in a code that has two states enabled when a password is provided and disabled when it s not in the enabled state it should set a password for the given user in the file in disabled it should remove it this minor issue i have is that when i run my code in the disabled state for the first time then the target passwd file does not exist and it is fine but the module complains about it with a warning issue type bug report component name htpasswd ansible version ansible config file ansible cfg configured module search path ansible python module location usr local cellar ansible libexec lib site packages ansible executable location usr local bin ansible python version default dec steps to reproduce use this code on a server that does not have the etc httpd passwdfile file yet name put password in the passwd file if password provided htpasswd state present if basic auth enabled else absent path etc httpd passwdfile name foo password bar owner apache group apache mode expected results when basic auth enabled is false we should get this task ok actual results task etc httpd passwdfile does not exist ok | 1 |
2,100 | 3,511,364,461 | IssuesEvent | 2016-01-10 06:41:00 | timvideos/HDMI2USB-jahanzeb-firmware | https://api.github.com/repos/timvideos/HDMI2USB-jahanzeb-firmware | closed | Build system should parse the Xilinx tool output and generate stats | level-infrastructure type-enhancement | Start simple with things like;
* Number of warnings generated in each stage
* Output bit file size
Then move onto things like;
* The number of LUTs used
* Timing information
Then do all the above for each individual directory. | 1.0 | Build system should parse the Xilinx tool output and generate stats - Start simple with things like;
* Number of warnings generated in each stage
* Output bit file size
Then move onto things like;
* The number of LUTs used
* Timing information
Then do all the above for each individual directory. | infrastructure | build system should parse the xilinx tool output and generate stats start simple with things like number of warnings generated in each stage output bit file size then move onto things like the number of luts used timing information then do all the above for each individual directory | 1 |
17,363 | 12,311,588,287 | IssuesEvent | 2020-05-12 12:40:42 | Altinn/altinn-studio | https://api.github.com/repos/Altinn/altinn-studio | closed | Analyse and define backup architecture for Altinn Platform | Epic area/data-storage kind/user-story ops/disaster-recovery ops/infrastructure short-term-goal | ## Description
It is important to reduce the risk of losing data on the platform. The risks that are identified are
1. Data is deleted by accident by DevOps team or by wrongly configured jobs
2. Data is corrupted by bugs in platform or application code
3. Data is accidentally corrupted or deleted by end-users or systems
4. A storage account is deleted
5. Blob storage is deleted
6. Cosmos DB collection is accidentally deleted
In Altinn Platform different types of data is stored
**Cosmos DB**
- Instances: Metadata about instances created
- InstanceEvents
- DataElements
- Applications
- Texts
**Blob Storage**
- Data for data elements (structured and unstructured data, small to potential gigabytes of data)
- XACML Policy for applications
## Requirement
- We should have back up so we never lose more than 24 hours of data.
- We should be able to recover data that is up to 90 days old.
- We should be able to recover specific data
## What backup mechanism is available in Azure
### Cosmos DB
According to Cosmos DB [documentation](https://docs.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore) Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters.
Azure Cosmos DB automatically takes a backup of your database every 4 hours and at any point of time, only the latest 2 backups are stored. However, if the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.

The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery. It is not helpful when parts data is corrupted or deleted.
Since it the backup also I just 8 hours old there would be many scenarios where the issue is not identified before the problem also is replicated into the backup.
### Blob storage
There is no built-in support for backup of blob storage in Azure, but there are some options that could be used as part of a backup strategy for Azure Blob storage.
**Snapshots**
Blob storage has a snapshot functionality that copies a blob.
A blob snapshot is a read-only version of a blob that's taken at a single point in time. After a snapshot has been created, it can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it appears at a particular moment in time.
A snapshot of a blob has the same name as the base blob from which the snapshot is taken, with a DateTime value appended to indicate the time at which the snapshot was taken. For example, if the page blob URI is http://storagesample.core.blob.windows.net/mydrives/myvhd, the snapshot URI will be something like http://storagesample.core.blob.windows.net/mydrives/myvhd?snapshot=2011-03-09T01:42:34.9360000Z. You can use this value to reference the snapshot for further operations. A blob's snapshots share the blob's URI and are distinguished only by this DateTime value.
A blob may have any number of snapshots. Snapshots persist until they're explicitly deleted. A snapshot can't outlive its source blob. This means you need to delete all snapshots before blob is deleted. You can enumerate the snapshots associated with your blob to track your current snapshots.
**SoftDelete**
Azure Storage now [offers soft delete for blob objects](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal) so that you can more easily recover your data when it is erroneously modified or deleted by an application or other storage account user. Soft delete can be considered as an automatic snapshot function.
When enabled, soft delete enables you to save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite.
When data is deleted, it transitions to a soft deleted state instead of being permanently erased. When soft delete is on and you overwrite data, a soft deleted snapshot is generated to save the state of the overwritten data. Soft deleted objects are invisible unless explicitly listed. You can configure the amount of time soft deleted data is recoverable before it is permanently expired.
Soft delete preserves your data in many cases where blobs or blob snapshots are deleted or overwritten.
When a blob is overwritten using Put Blob, Put Block, Put Block List, or Copy Blob a snapshot of the blob's state prior to the write operation is automatically generated.

For Altinn Platform the biggest issue with soft delete is that it will create many snapshots when the form is filled out. For a user filling out a form in the portal, he will typically update the blob several times. For each time the soft delete function will create a snapshot.
The retention period indicates the amount of time that soft deleted data is stored and available for recovery. For blobs and blob snapshots that are explicitly deleted, the retention period clock starts when the data is deleted. For soft deleted snapshots generated by the soft delete feature when data is overwritten, the clock starts when the snapshot is generated. Currently, you can retain soft deleted data for between 1 and 365 days.
**AzCopy**
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
We could then I theory have backup storage accounts for daily/weekly backup
## Proposed solution for Altinn Platform.
### Cosmos DB
Azure Cosmos DB exposes a change feed for containers in Azure Cosmos DB.
Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The changes are persisted, can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.

The suggested solution is to have a [Azure Function that listens to the change feed](https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-functions) and copies documents from Cosmos DB when they are created or modified to a blob storage.
The blob storage should be a shared blob storage for all orgs. (The same way Cosmos DB is shared)
The blob storage should have enabled soft delete. All versions of a document in Cosmos should be written to the same blob. Soft delete will keep track of all versions.
## Blob storage
Each org has separate blob storage.
This analysis has considered Snapshot vs Soft delete strategy.
Snapshot gives us better control over what to take backup of. A user could potentially update forms several times during formfilling. Soft delete would create a copy off all those temporary states of the form.
But since Soft delete only takes a backup of what is changed and we can limit the retention of the backup the amount of data is much smaller with the soft delete strategy. This would reduce cost. In addition, we would with this strategy have.
The below table shows a calculation based on the data in Altinn II platform. The number of attachments and forms is based on the data available in Altinn II platform.

The calculation is based on that around 50% of the data is sent through API and only saved once. And attachments uploaded are only saved once.
Based on this the suggested solution is to enable Soft Delete for blob storage for orgs.
The retention period could be set to 90 days.
We would need to do the same for blob storage for authorization.
## Considerations
- We need to consider the cost of data backup.
- This solution does not cover the scenario for a deleted storage account since snapshots are created. We would need to in some interval make a full copy of the different blobs if we need that.
## Acceptance criteria
- We are able to restore any data that has been modified/deleted in 90 days.
## Summary
The following solutions for the initial risk
**1. Data is deleted by accident by DevOps team or by wrongly configured jobs**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**2. Data is corrupted by bugs in platform or application code**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**3. Data is accidentally corrupted or deleted by end users or systems**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**4. A storage account is deleted**
Not supported. All data is lost
**5. Blob storage is deleted**
Not supported. All data is lost
**6. Cosmos DB collection is accidentally deleted**
Contact Azure team to get help with restoring. Needs to be done right away it happens. Data is lost after 8 hours. When this restored we can use blob storage to recover the last hours of changes.
## Development tasks
- [ ] Create blob storage for cosmos backup #4004
- [x] POC: Create an Azure Function that is triggered on the instance container in cosmos DB and copies #3997
- [ ] POC: Configure soft delete in test environment on blob storage
- [ ] Enable soft delete for blob storage #4005
- [ ] Implement Azure Function that listens to all containers in Cosmos DB and store documents in blob storage #4006
- [ ] Define and implement restore procedure for blobs using [undelete blob api](https://docs.microsoft.com/en-us/rest/api/storageservices/undelete-blob) #4007
- [ ] Define and implement restore procedure for cosmos db documents #4008
| 1.0 | Analyse and define backup architecture for Altinn Platform - ## Description
It is important to reduce the risk of losing data on the platform. The risks that are identified are
1. Data is deleted by accident by DevOps team or by wrongly configured jobs
2. Data is corrupted by bugs in platform or application code
3. Data is accidentally corrupted or deleted by end-users or systems
4. A storage account is deleted
5. Blob storage is deleted
6. Cosmos DB collection is accidentally deleted
In Altinn Platform different types of data is stored
**Cosmos DB**
- Instances: Metadata about instances created
- InstanceEvents
- DataElements
- Applications
- Texts
**Blob Storage**
- Data for data elements (structured and unstructured data, small to potential gigabytes of data)
- XACML Policy for applications
## Requirement
- We should have back up so we never lose more than 24 hours of data.
- We should be able to recover data that is up to 90 days old.
- We should be able to recover specific data
## What backup mechanism is available in Azure
### Cosmos DB
According to Cosmos DB [documentation](https://docs.microsoft.com/en-us/azure/cosmos-db/online-backup-and-restore) Azure Cosmos DB automatically takes backups of your data at regular intervals. The automatic backups are taken without affecting the performance or availability of the database operations. All the backups are stored separately in a storage service, and those backups are globally replicated for resiliency against regional disasters.
Azure Cosmos DB automatically takes a backup of your database every 4 hours and at any point of time, only the latest 2 backups are stored. However, if the container or database is deleted, Azure Cosmos DB retains the existing snapshots of a given container or database for 30 days.

The automatic backups are helpful in scenarios when you accidentally delete or update your Azure Cosmos account, database, or container and later require the data recovery. It is not helpful when parts data is corrupted or deleted.
Since it the backup also I just 8 hours old there would be many scenarios where the issue is not identified before the problem also is replicated into the backup.
### Blob storage
There is no built-in support for backup of blob storage in Azure, but there are some options that could be used as part of a backup strategy for Azure Blob storage.
**Snapshots**
Blob storage has a snapshot functionality that copies a blob.
A blob snapshot is a read-only version of a blob that's taken at a single point in time. After a snapshot has been created, it can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it appears at a particular moment in time.
A snapshot of a blob has the same name as the base blob from which the snapshot is taken, with a DateTime value appended to indicate the time at which the snapshot was taken. For example, if the page blob URI is http://storagesample.core.blob.windows.net/mydrives/myvhd, the snapshot URI will be something like http://storagesample.core.blob.windows.net/mydrives/myvhd?snapshot=2011-03-09T01:42:34.9360000Z. You can use this value to reference the snapshot for further operations. A blob's snapshots share the blob's URI and are distinguished only by this DateTime value.
A blob may have any number of snapshots. Snapshots persist until they're explicitly deleted. A snapshot can't outlive its source blob. This means you need to delete all snapshots before blob is deleted. You can enumerate the snapshots associated with your blob to track your current snapshots.
**SoftDelete**
Azure Storage now [offers soft delete for blob objects](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-soft-delete?tabs=azure-portal) so that you can more easily recover your data when it is erroneously modified or deleted by an application or other storage account user. Soft delete can be considered as an automatic snapshot function.
When enabled, soft delete enables you to save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite.
When data is deleted, it transitions to a soft deleted state instead of being permanently erased. When soft delete is on and you overwrite data, a soft deleted snapshot is generated to save the state of the overwritten data. Soft deleted objects are invisible unless explicitly listed. You can configure the amount of time soft deleted data is recoverable before it is permanently expired.
Soft delete preserves your data in many cases where blobs or blob snapshots are deleted or overwritten.
When a blob is overwritten using Put Blob, Put Block, Put Block List, or Copy Blob a snapshot of the blob's state prior to the write operation is automatically generated.

For Altinn Platform the biggest issue with soft delete is that it will create many snapshots when the form is filled out. For a user filling out a form in the portal, he will typically update the blob several times. For each time the soft delete function will create a snapshot.
The retention period indicates the amount of time that soft deleted data is stored and available for recovery. For blobs and blob snapshots that are explicitly deleted, the retention period clock starts when the data is deleted. For soft deleted snapshots generated by the soft delete feature when data is overwritten, the clock starts when the snapshot is generated. Currently, you can retain soft deleted data for between 1 and 365 days.
**AzCopy**
AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account.
We could then I theory have backup storage accounts for daily/weekly backup
## Proposed solution for Altinn Platform.
### Cosmos DB
Azure Cosmos DB exposes a change feed for containers in Azure Cosmos DB.
Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified. The changes are persisted, can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing.

The suggested solution is to have a [Azure Function that listens to the change feed](https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed-functions) and copies documents from Cosmos DB when they are created or modified to a blob storage.
The blob storage should be a shared blob storage for all orgs. (The same way Cosmos DB is shared)
The blob storage should have enabled soft delete. All versions of a document in Cosmos should be written to the same blob. Soft delete will keep track of all versions.
## Blob storage
Each org has separate blob storage.
This analysis has considered Snapshot vs Soft delete strategy.
Snapshot gives us better control over what to take backup of. A user could potentially update forms several times during formfilling. Soft delete would create a copy off all those temporary states of the form.
But since Soft delete only takes a backup of what is changed and we can limit the retention of the backup the amount of data is much smaller with the soft delete strategy. This would reduce cost. In addition, we would with this strategy have.
The below table shows a calculation based on the data in Altinn II platform. The number of attachments and forms is based on the data available in Altinn II platform.

The calculation is based on that around 50% of the data is sent through API and only saved once. And attachments uploaded are only saved once.
Based on this the suggested solution is to enable Soft Delete for blob storage for orgs.
The retention period could be set to 90 days.
We would need to do the same for blob storage for authorization.
## Considerations
- We need to consider the cost of data backup.
- This solution does not cover the scenario for a deleted storage account since snapshots are created. We would need to in some interval make a full copy of the different blobs if we need that.
## Acceptance criteria
- We are able to restore any data that has been modified/deleted in 90 days.
## Summary
The following solutions for the initial risk
**1. Data is deleted by accident by DevOps team or by wrongly configured jobs**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**2. Data is corrupted by bugs in platform or application code**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**3. Data is accidentally corrupted or deleted by end users or systems**
For this scenario, we would need to restore data from snapshot for blobs and from document copy in blob storage. We would need to have a time limit on how long we store the snapshots.
We would need to build tools to help us restore documents/blob to the previous state
**4. A storage account is deleted**
Not supported. All data is lost
**5. Blob storage is deleted**
Not supported. All data is lost
**6. Cosmos DB collection is accidentally deleted**
Contact Azure team to get help with restoring. Needs to be done right away it happens. Data is lost after 8 hours. When this restored we can use blob storage to recover the last hours of changes.
## Development tasks
- [ ] Create blob storage for cosmos backup #4004
- [x] POC: Create an Azure Function that is triggered on the instance container in cosmos DB and copies #3997
- [ ] POC: Configure soft delete in test environment on blob storage
- [ ] Enable soft delete for blob storage #4005
- [ ] Implement Azure Function that listens to all containers in Cosmos DB and store documents in blob storage #4006
- [ ] Define and implement restore procedure for blobs using [undelete blob api](https://docs.microsoft.com/en-us/rest/api/storageservices/undelete-blob) #4007
- [ ] Define and implement restore procedure for cosmos db documents #4008
| infrastructure | analyse and define backup architecture for altinn platform description it is important to reduce the risk of losing data on the platform the risks that are identified are data is deleted by accident by devops team or by wrongly configured jobs data is corrupted by bugs in platform or application code data is accidentally corrupted or deleted by end users or systems a storage account is deleted blob storage is deleted cosmos db collection is accidentally deleted in altinn platform different types of data is stored cosmos db instances metadata about instances created instanceevents dataelements applications texts blob storage data for data elements structured and unstructured data small to potential gigabytes of data xacml policy for applications requirement we should have back up so we never lose more than hours of data we should be able to recover data that is up to days old we should be able to recover specific data what backup mechanism is available in azure cosmos db according to cosmos db azure cosmos db automatically takes backups of your data at regular intervals the automatic backups are taken without affecting the performance or availability of the database operations all the backups are stored separately in a storage service and those backups are globally replicated for resiliency against regional disasters azure cosmos db automatically takes a backup of your database every hours and at any point of time only the latest backups are stored however if the container or database is deleted azure cosmos db retains the existing snapshots of a given container or database for days the automatic backups are helpful in scenarios when you accidentally delete or update your azure cosmos account database or container and later require the data recovery it is not helpful when parts data is corrupted or deleted since it the backup also i just hours old there would be many scenarios where the issue is not identified before the problem also is replicated into the backup blob storage there is no built in support for backup of blob storage in azure but there are some options that could be used as part of a backup strategy for azure blob storage snapshots blob storage has a snapshot functionality that copies a blob a blob snapshot is a read only version of a blob that s taken at a single point in time after a snapshot has been created it can be read copied or deleted but not modified snapshots provide a way to back up a blob as it appears at a particular moment in time a snapshot of a blob has the same name as the base blob from which the snapshot is taken with a datetime value appended to indicate the time at which the snapshot was taken for example if the page blob uri is the snapshot uri will be something like you can use this value to reference the snapshot for further operations a blob s snapshots share the blob s uri and are distinguished only by this datetime value a blob may have any number of snapshots snapshots persist until they re explicitly deleted a snapshot can t outlive its source blob this means you need to delete all snapshots before blob is deleted you can enumerate the snapshots associated with your blob to track your current snapshots softdelete azure storage now so that you can more easily recover your data when it is erroneously modified or deleted by an application or other storage account user soft delete can be considered as an automatic snapshot function when enabled soft delete enables you to save and recover your data when blobs or blob snapshots are deleted this protection extends to blob data that is erased as the result of an overwrite when data is deleted it transitions to a soft deleted state instead of being permanently erased when soft delete is on and you overwrite data a soft deleted snapshot is generated to save the state of the overwritten data soft deleted objects are invisible unless explicitly listed you can configure the amount of time soft deleted data is recoverable before it is permanently expired soft delete preserves your data in many cases where blobs or blob snapshots are deleted or overwritten when a blob is overwritten using put blob put block put block list or copy blob a snapshot of the blob s state prior to the write operation is automatically generated for altinn platform the biggest issue with soft delete is that it will create many snapshots when the form is filled out for a user filling out a form in the portal he will typically update the blob several times for each time the soft delete function will create a snapshot the retention period indicates the amount of time that soft deleted data is stored and available for recovery for blobs and blob snapshots that are explicitly deleted the retention period clock starts when the data is deleted for soft deleted snapshots generated by the soft delete feature when data is overwritten the clock starts when the snapshot is generated currently you can retain soft deleted data for between and days azcopy azcopy is a command line utility that you can use to copy blobs or files to or from a storage account we could then i theory have backup storage accounts for daily weekly backup proposed solution for altinn platform cosmos db azure cosmos db exposes a change feed for containers in azure cosmos db change feed support in azure cosmos db works by listening to an azure cosmos container for any changes it then outputs the sorted list of documents that were changed in the order in which they were modified the changes are persisted can be processed asynchronously and incrementally and the output can be distributed across one or more consumers for parallel processing the suggested solution is to have a and copies documents from cosmos db when they are created or modified to a blob storage the blob storage should be a shared blob storage for all orgs the same way cosmos db is shared the blob storage should have enabled soft delete all versions of a document in cosmos should be written to the same blob soft delete will keep track of all versions blob storage each org has separate blob storage this analysis has considered snapshot vs soft delete strategy snapshot gives us better control over what to take backup of a user could potentially update forms several times during formfilling soft delete would create a copy off all those temporary states of the form but since soft delete only takes a backup of what is changed and we can limit the retention of the backup the amount of data is much smaller with the soft delete strategy this would reduce cost in addition we would with this strategy have the below table shows a calculation based on the data in altinn ii platform the number of attachments and forms is based on the data available in altinn ii platform the calculation is based on that around of the data is sent through api and only saved once and attachments uploaded are only saved once based on this the suggested solution is to enable soft delete for blob storage for orgs the retention period could be set to days we would need to do the same for blob storage for authorization considerations we need to consider the cost of data backup this solution does not cover the scenario for a deleted storage account since snapshots are created we would need to in some interval make a full copy of the different blobs if we need that acceptance criteria we are able to restore any data that has been modified deleted in days summary the following solutions for the initial risk data is deleted by accident by devops team or by wrongly configured jobs for this scenario we would need to restore data from snapshot for blobs and from document copy in blob storage we would need to have a time limit on how long we store the snapshots we would need to build tools to help us restore documents blob to the previous state data is corrupted by bugs in platform or application code for this scenario we would need to restore data from snapshot for blobs and from document copy in blob storage we would need to have a time limit on how long we store the snapshots we would need to build tools to help us restore documents blob to the previous state data is accidentally corrupted or deleted by end users or systems for this scenario we would need to restore data from snapshot for blobs and from document copy in blob storage we would need to have a time limit on how long we store the snapshots we would need to build tools to help us restore documents blob to the previous state a storage account is deleted not supported all data is lost blob storage is deleted not supported all data is lost cosmos db collection is accidentally deleted contact azure team to get help with restoring needs to be done right away it happens data is lost after hours when this restored we can use blob storage to recover the last hours of changes development tasks create blob storage for cosmos backup poc create an azure function that is triggered on the instance container in cosmos db and copies poc configure soft delete in test environment on blob storage enable soft delete for blob storage implement azure function that listens to all containers in cosmos db and store documents in blob storage define and implement restore procedure for blobs using define and implement restore procedure for cosmos db documents | 1 |
137,351 | 18,752,692,105 | IssuesEvent | 2021-11-05 05:50:12 | madhans23/linux-4.15 | https://api.github.com/repos/madhans23/linux-4.15 | opened | CVE-2020-25641 (Medium) detected in minimal-linux78bd5fefcf303e772cdcdfe18b049fe97a1bf832 | security vulnerability | ## CVE-2020-25641 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimal-linux78bd5fefcf303e772cdcdfe18b049fe97a1bf832</b></p></summary>
<p>
<p>Library home page: <a href=https://github.com/liva/minimal-linux.git>https://github.com/liva/minimal-linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/bvec.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/bvec.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel's implementation of biovecs in versions before 5.9-rc7. A zero-length biovec request issued by the block subsystem could cause the kernel to enter an infinite loop, causing a denial of service. This flaw allows a local attacker with basic privileges to issue requests to a block device, resulting in a denial of service. The highest threat from this vulnerability is to system availability.
<p>Publish Date: 2020-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25641>CVE-2020-25641</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gregkh/linux/commit/7e24969022cbd61ddc586f14824fc205661bb124">https://github.com/gregkh/linux/commit/7e24969022cbd61ddc586f14824fc205661bb124</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution: v4.9.236,v4.14.197,v4.19.144,v5.4.64,v5.8.8,v5.9-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-25641 (Medium) detected in minimal-linux78bd5fefcf303e772cdcdfe18b049fe97a1bf832 - ## CVE-2020-25641 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimal-linux78bd5fefcf303e772cdcdfe18b049fe97a1bf832</b></p></summary>
<p>
<p>Library home page: <a href=https://github.com/liva/minimal-linux.git>https://github.com/liva/minimal-linux.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/madhans23/linux-4.15/commit/d96ee498864d1a0b6222cfb17d64ca8196014940">d96ee498864d1a0b6222cfb17d64ca8196014940</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/bvec.h</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/bvec.h</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel's implementation of biovecs in versions before 5.9-rc7. A zero-length biovec request issued by the block subsystem could cause the kernel to enter an infinite loop, causing a denial of service. This flaw allows a local attacker with basic privileges to issue requests to a block device, resulting in a denial of service. The highest threat from this vulnerability is to system availability.
<p>Publish Date: 2020-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25641>CVE-2020-25641</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/gregkh/linux/commit/7e24969022cbd61ddc586f14824fc205661bb124">https://github.com/gregkh/linux/commit/7e24969022cbd61ddc586f14824fc205661bb124</a></p>
<p>Release Date: 2020-09-17</p>
<p>Fix Resolution: v4.9.236,v4.14.197,v4.19.144,v5.4.64,v5.8.8,v5.9-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in minimal cve medium severity vulnerability vulnerable library minimal library home page a href found in head commit a href found in base branch master vulnerable source files include linux bvec h include linux bvec h vulnerability details a flaw was found in the linux kernel s implementation of biovecs in versions before a zero length biovec request issued by the block subsystem could cause the kernel to enter an infinite loop causing a denial of service this flaw allows a local attacker with basic privileges to issue requests to a block device resulting in a denial of service the highest threat from this vulnerability is to system availability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
10,046 | 8,354,994,216 | IssuesEvent | 2018-10-02 14:40:00 | CurriculumAssociates/canvas-latex | https://api.github.com/repos/CurriculumAssociates/canvas-latex | opened | Update Nightwatch | infrastructure | The current version of nightwatch carries several issues with it. We should be able to upgrade to the latest to remove those issues. | 1.0 | Update Nightwatch - The current version of nightwatch carries several issues with it. We should be able to upgrade to the latest to remove those issues. | infrastructure | update nightwatch the current version of nightwatch carries several issues with it we should be able to upgrade to the latest to remove those issues | 1 |
9,817 | 8,182,897,618 | IssuesEvent | 2018-08-29 07:17:52 | flutter/website | https://api.github.com/repos/flutter/website | opened | [Container image] Use Ruby 2.4.3 | Infrastructure | The Cirrus container image is currently setup with Ruby 2.3.3; from a Cirrus log:
```console
> rvm current && ruby -v
system
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]
```
It should be using Ruby 2.4.3 instead.
I believe that the Dockerfile is missing:
```dockerfile
RUN rvm use 2.4.3
```
cc @Sfshaza | 1.0 | [Container image] Use Ruby 2.4.3 - The Cirrus container image is currently setup with Ruby 2.3.3; from a Cirrus log:
```console
> rvm current && ruby -v
system
ruby 2.3.3p222 (2016-11-21) [x86_64-linux-gnu]
```
It should be using Ruby 2.4.3 instead.
I believe that the Dockerfile is missing:
```dockerfile
RUN rvm use 2.4.3
```
cc @Sfshaza | infrastructure | use ruby the cirrus container image is currently setup with ruby from a cirrus log console rvm current ruby v system ruby it should be using ruby instead i believe that the dockerfile is missing dockerfile run rvm use cc sfshaza | 1 |
65,995 | 19,848,819,637 | IssuesEvent | 2022-01-21 09:57:41 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | test-runner doesn't always kill hung tests | Type: Defect | ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 11.2
Kernel Version | 5.10.0-10-amd64
Architecture | x86_64
OpenZFS Version | 5a4d282
### Describe the problem you're observing
Sometimes, if a test hangs the wrong way, test-runner doesn't ever time out and kill it, it just...hangs around forever until something else makes it die.
Normally, this works okay:
```
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid1 (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid2 (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid3 (run as root) [20:00] [KILLED]
```
Sometimes, however, you wind up with tests that never die until someone manually intervenes...
```Test (Linux): /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/fault/scrub_after_resilver (run as root) [196:46] [KILLED]```
```Test (Linux): /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/fault/scrub_after_resilver (run as root) [206:54] [KILLED]```
```Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/write_dirs/write_dirs_001_pos (run as root) [33:53] [KILLED]```
(Three different runs, three different machines.)
It seems to be commonly, but not exclusively, the "fault/" tests that just get stuck...but not on the pool being faulted, AFAICT?
### Describe how to reproduce the problem
* Run test suite
* Sometimes it'll hang
### Include any warning/errors/backtraces from the system logs
None that I see. | 1.0 | test-runner doesn't always kill hung tests - ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 11.2
Kernel Version | 5.10.0-10-amd64
Architecture | x86_64
OpenZFS Version | 5a4d282
### Describe the problem you're observing
Sometimes, if a test hangs the wrong way, test-runner doesn't ever time out and kill it, it just...hangs around forever until something else makes it die.
Normally, this works okay:
```
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid1 (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid2 (run as root) [20:00] [KILLED]
Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/redundancy/redundancy_draid3 (run as root) [20:00] [KILLED]
```
Sometimes, however, you wind up with tests that never die until someone manually intervenes...
```Test (Linux): /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/fault/scrub_after_resilver (run as root) [196:46] [KILLED]```
```Test (Linux): /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/fault/scrub_after_resilver (run as root) [206:54] [KILLED]```
```Test: /home/rich/zfs_oopsalllz4/tests/zfs-tests/tests/functional/write_dirs/write_dirs_001_pos (run as root) [33:53] [KILLED]```
(Three different runs, three different machines.)
It seems to be commonly, but not exclusively, the "fault/" tests that just get stuck...but not on the pool being faulted, AFAICT?
### Describe how to reproduce the problem
* Run test suite
* Sometimes it'll hang
### Include any warning/errors/backtraces from the system logs
None that I see. | non_infrastructure | test runner doesn t always kill hung tests system information type version name distribution name debian distribution version kernel version architecture openzfs version describe the problem you re observing sometimes if a test hangs the wrong way test runner doesn t ever time out and kill it it just hangs around forever until something else makes it die normally this works okay test home rich zfs tests zfs tests tests functional redundancy redundancy draid run as root test home rich zfs tests zfs tests tests functional redundancy redundancy run as root test home rich zfs tests zfs tests tests functional redundancy redundancy run as root test home rich zfs tests zfs tests tests functional redundancy redundancy run as root sometimes however you wind up with tests that never die until someone manually intervenes test linux home rich zfs tests zfs tests tests functional fault scrub after resilver run as root test linux home rich zfs tests zfs tests tests functional fault scrub after resilver run as root test home rich zfs tests zfs tests tests functional write dirs write dirs pos run as root three different runs three different machines it seems to be commonly but not exclusively the fault tests that just get stuck but not on the pool being faulted afaict describe how to reproduce the problem run test suite sometimes it ll hang include any warning errors backtraces from the system logs none that i see | 0 |
19,339 | 13,220,067,661 | IssuesEvent | 2020-08-17 11:42:49 | DigitalExcellence/dex-frontend | https://api.github.com/repos/DigitalExcellence/dex-frontend | closed | [Need more information ]Auto deployment staging/production frontend not working due to npm package typescript bug | bug infrastructure | **Describe the bug**

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
| 1.0 | [Need more information ]Auto deployment staging/production frontend not working due to npm package typescript bug - **Describe the bug**

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
| infrastructure | auto deployment staging production frontend not working due to npm package typescript bug describe the bug to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here | 1 |
43,036 | 11,139,649,019 | IssuesEvent | 2019-12-21 07:19:10 | pnplab/Flux | https://api.github.com/repos/pnplab/Flux | opened | java.lang.UnsatisfiedLinkError in MainActivity | LOCAL_BUILD bugsnag development | ## Error in Flux
**java.lang.UnsatisfiedLinkError** in **MainActivity**
couldn't find DSO to load: librealmreact.so caused by: dlopen failed: library "libjsc.so" not found
[View on Bugsnag](https://app.bugsnag.com/criusmm/flux/errors/5dfdc76eccb788001a9f6557?event_id=5dfdc76e0054e191d4a50000&i=gh&m=ci)
## Stacktrace
SoLoader.java:738 - com.facebook.soloader.SoLoader.doLoadLibraryBySoName
[View full stacktrace](https://app.bugsnag.com/criusmm/flux/errors/5dfdc76eccb788001a9f6557?event_id=5dfdc76e0054e191d4a50000&i=gh&m=ci)
*Created automatically via Bugsnag* | 1.0 | java.lang.UnsatisfiedLinkError in MainActivity - ## Error in Flux
**java.lang.UnsatisfiedLinkError** in **MainActivity**
couldn't find DSO to load: librealmreact.so caused by: dlopen failed: library "libjsc.so" not found
[View on Bugsnag](https://app.bugsnag.com/criusmm/flux/errors/5dfdc76eccb788001a9f6557?event_id=5dfdc76e0054e191d4a50000&i=gh&m=ci)
## Stacktrace
SoLoader.java:738 - com.facebook.soloader.SoLoader.doLoadLibraryBySoName
[View full stacktrace](https://app.bugsnag.com/criusmm/flux/errors/5dfdc76eccb788001a9f6557?event_id=5dfdc76e0054e191d4a50000&i=gh&m=ci)
*Created automatically via Bugsnag* | non_infrastructure | java lang unsatisfiedlinkerror in mainactivity error in flux java lang unsatisfiedlinkerror in mainactivity couldn t find dso to load librealmreact so caused by dlopen failed library libjsc so not found stacktrace soloader java com facebook soloader soloader doloadlibrarybysoname created automatically via bugsnag | 0 |
2,260 | 5,093,448,569 | IssuesEvent | 2017-01-03 06:07:30 | CS3216-Bubble/bubble-frontend-deprecated | https://api.github.com/repos/CS3216-Bubble/bubble-frontend-deprecated | closed | Process flag request view | counsel-ui feature high-priority process-flag-view | A view to approve or reject a professional help flag request sent to report on a user. This view will display the message context and the user profile summary of the reporter and reportee. On approving, a SOS chat will be created to facilitate the conversation between the counsellor and the reported user.
| 1.0 | Process flag request view - A view to approve or reject a professional help flag request sent to report on a user. This view will display the message context and the user profile summary of the reporter and reportee. On approving, a SOS chat will be created to facilitate the conversation between the counsellor and the reported user.
| non_infrastructure | process flag request view a view to approve or reject a professional help flag request sent to report on a user this view will display the message context and the user profile summary of the reporter and reportee on approving a sos chat will be created to facilitate the conversation between the counsellor and the reported user | 0 |
360,781 | 25,310,452,455 | IssuesEvent | 2022-11-17 17:02:11 | jenkinsci/bridge-method-injector | https://api.github.com/repos/jenkinsci/bridge-method-injector | closed | Add README | documentation | Hello,
would it be possible to add a README to this project, which explains the project and how to use it a bit? Currently for new users this might not be immediately obvious, and they might not spot the link to the website on the right sidebar.
Additionally the link http://bridge-method-injector.infradna.com/ seems to be dead (but at least [an old version has been archived](https://web.archive.org/web/20180818064629/https://bridge-method-injector.infradna.com/)). | 1.0 | Add README - Hello,
would it be possible to add a README to this project, which explains the project and how to use it a bit? Currently for new users this might not be immediately obvious, and they might not spot the link to the website on the right sidebar.
Additionally the link http://bridge-method-injector.infradna.com/ seems to be dead (but at least [an old version has been archived](https://web.archive.org/web/20180818064629/https://bridge-method-injector.infradna.com/)). | non_infrastructure | add readme hello would it be possible to add a readme to this project which explains the project and how to use it a bit currently for new users this might not be immediately obvious and they might not spot the link to the website on the right sidebar additionally the link seems to be dead but at least | 0 |
495 | 2,904,045,096 | IssuesEvent | 2015-06-18 16:11:31 | opencb/opencga | https://api.github.com/repos/opencb/opencga | closed | REST 'modify' methods must be renamed to 'update' | web services | Currently most most of the REST endpoints support CRUD operations. It will be nice to renamed 'modify' methods with 'update' to be more consistent. | 1.0 | REST 'modify' methods must be renamed to 'update' - Currently most most of the REST endpoints support CRUD operations. It will be nice to renamed 'modify' methods with 'update' to be more consistent. | non_infrastructure | rest modify methods must be renamed to update currently most most of the rest endpoints support crud operations it will be nice to renamed modify methods with update to be more consistent | 0 |
69,911 | 22,746,759,857 | IssuesEvent | 2022-07-07 09:50:17 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | closed | [🐛 Bug]: devtools.debugger.resume() can effectively not be used | C-py I-defect | ### What happened?
Applying one of the fixes suggested for issue #10804 , it is now possible to enable the debugger using the provided Selenium wrapper (`debugger.enable()`). However, it seems like the wrapper `debugger.resume()` can effectively not be used, preventing one from resuming JavaScript execution after a breakpoint was triggered.
As the code below demonstrates, the command `session.execute(devtools.debugger.resume())` is not executed. From what I understand, Selenium "freezes" at the `button.click()` statement (which triggers the breakpoint), as the missing `"This statement after the click is not executed."` message illustrates.
As I launched the click method using `nursery.start_soon`, I allowed the execution to actually reach `session.execute(devtools.debugger.resume())`, however, it still does not execute. It seems like the click statement blocks Selenium internally when it tries to send the CDP resume command to Chrome.
As a test website for the code, you can use this small HTML file and host it using `python3 -m http.server`:
```
<!DOCTYPE html>
<head>
<meta charset="UTF-8">
</head>
<body>
<button id='button' type="button" onclick='console.log("Button was clicked")'>Click me!</button>
</body>
</html>
```
### How can we reproduce the issue?
```shell
import trio
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
async def perform_click(driver):
# Locate button and click it
button = driver.find_element(By.ID, "button")
button.click()
print("This statement after the click is not executed.")
async def main():
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
async with driver.bidi_connection() as connection:
session, devtools = connection.session, connection.devtools
driver.get('http://localhost:8000/test.html')
# For debugger.enable to work, the fix suggested for issue #10804 has to be applied
await session.execute(devtools.debugger.enable())
await session.execute(devtools.dom_debugger.set_event_listener_breakpoint("click"))
async with trio.open_nursery() as nursery:
nursery.start_soon(perform_click, driver)
time.sleep(2)
print("This statement is executed...")
await session.execute(devtools.debugger.resume()) # ... but this one is not.
trio.run(main)
```
### Relevant log output
```shell
[WDM] - ====== WebDriver manager ======
[WDM] - Current google-chrome version is 103.0.5060
[WDM] - Get LATEST chromedriver version for 103.0.5060 google-chrome
[WDM] - Driver [/home/tim/.wdm/drivers/chromedriver/linux64/103.0.5060.53/chromedriver] found in cache
This statement is executed...
```
### Operating System
Linux Mint 20.1
### Selenium version
Python 4.3.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 103.0.5060
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 103.0.5060.53
### Are you using Selenium Grid?
No | 1.0 | [🐛 Bug]: devtools.debugger.resume() can effectively not be used - ### What happened?
Applying one of the fixes suggested for issue #10804 , it is now possible to enable the debugger using the provided Selenium wrapper (`debugger.enable()`). However, it seems like the wrapper `debugger.resume()` can effectively not be used, preventing one from resuming JavaScript execution after a breakpoint was triggered.
As the code below demonstrates, the command `session.execute(devtools.debugger.resume())` is not executed. From what I understand, Selenium "freezes" at the `button.click()` statement (which triggers the breakpoint), as the missing `"This statement after the click is not executed."` message illustrates.
As I launched the click method using `nursery.start_soon`, I allowed the execution to actually reach `session.execute(devtools.debugger.resume())`, however, it still does not execute. It seems like the click statement blocks Selenium internally when it tries to send the CDP resume command to Chrome.
As a test website for the code, you can use this small HTML file and host it using `python3 -m http.server`:
```
<!DOCTYPE html>
<head>
<meta charset="UTF-8">
</head>
<body>
<button id='button' type="button" onclick='console.log("Button was clicked")'>Click me!</button>
</body>
</html>
```
### How can we reproduce the issue?
```shell
import trio
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
async def perform_click(driver):
# Locate button and click it
button = driver.find_element(By.ID, "button")
button.click()
print("This statement after the click is not executed.")
async def main():
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
async with driver.bidi_connection() as connection:
session, devtools = connection.session, connection.devtools
driver.get('http://localhost:8000/test.html')
# For debugger.enable to work, the fix suggested for issue #10804 has to be applied
await session.execute(devtools.debugger.enable())
await session.execute(devtools.dom_debugger.set_event_listener_breakpoint("click"))
async with trio.open_nursery() as nursery:
nursery.start_soon(perform_click, driver)
time.sleep(2)
print("This statement is executed...")
await session.execute(devtools.debugger.resume()) # ... but this one is not.
trio.run(main)
```
### Relevant log output
```shell
[WDM] - ====== WebDriver manager ======
[WDM] - Current google-chrome version is 103.0.5060
[WDM] - Get LATEST chromedriver version for 103.0.5060 google-chrome
[WDM] - Driver [/home/tim/.wdm/drivers/chromedriver/linux64/103.0.5060.53/chromedriver] found in cache
This statement is executed...
```
### Operating System
Linux Mint 20.1
### Selenium version
Python 4.3.0
### What are the browser(s) and version(s) where you see this issue?
Chrome 103.0.5060
### What are the browser driver(s) and version(s) where you see this issue?
ChromeDriver 103.0.5060.53
### Are you using Selenium Grid?
No | non_infrastructure | devtools debugger resume can effectively not be used what happened applying one of the fixes suggested for issue it is now possible to enable the debugger using the provided selenium wrapper debugger enable however it seems like the wrapper debugger resume can effectively not be used preventing one from resuming javascript execution after a breakpoint was triggered as the code below demonstrates the command session execute devtools debugger resume is not executed from what i understand selenium freezes at the button click statement which triggers the breakpoint as the missing this statement after the click is not executed message illustrates as i launched the click method using nursery start soon i allowed the execution to actually reach session execute devtools debugger resume however it still does not execute it seems like the click statement blocks selenium internally when it tries to send the cdp resume command to chrome as a test website for the code you can use this small html file and host it using m http server click me how can we reproduce the issue shell import trio import time from selenium import webdriver from selenium webdriver chrome service import service from selenium webdriver common by import by from webdriver manager chrome import chromedrivermanager async def perform click driver locate button and click it button driver find element by id button button click print this statement after the click is not executed async def main driver webdriver chrome service service chromedrivermanager install async with driver bidi connection as connection session devtools connection session connection devtools driver get for debugger enable to work the fix suggested for issue has to be applied await session execute devtools debugger enable await session execute devtools dom debugger set event listener breakpoint click async with trio open nursery as nursery nursery start soon perform click driver time sleep print this statement is executed await session execute devtools debugger resume but this one is not trio run main relevant log output shell webdriver manager current google chrome version is get latest chromedriver version for google chrome driver found in cache this statement is executed operating system linux mint selenium version python what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no | 0 |
31,510 | 25,833,323,580 | IssuesEvent | 2022-12-12 17:37:13 | st-tu-dresden/salespoint | https://api.github.com/repos/st-tu-dresden/salespoint | closed | Invalid workflow file .github/workflows/build.yaml | type: bug in: infrastructure | Our `build.yaml` file is broken according to a [recent PR build](https://github.com/st-tu-dresden/salespoint/actions/runs/3659503258):
```
Invalid workflow file: .github/workflows/build.yaml#L1
The workflow is not valid. .github/workflows/build.yaml: Unexpected tag '!main'
```
This expression probably just needs quotation marks. | 1.0 | Invalid workflow file .github/workflows/build.yaml - Our `build.yaml` file is broken according to a [recent PR build](https://github.com/st-tu-dresden/salespoint/actions/runs/3659503258):
```
Invalid workflow file: .github/workflows/build.yaml#L1
The workflow is not valid. .github/workflows/build.yaml: Unexpected tag '!main'
```
This expression probably just needs quotation marks. | infrastructure | invalid workflow file github workflows build yaml our build yaml file is broken according to a invalid workflow file github workflows build yaml the workflow is not valid github workflows build yaml unexpected tag main this expression probably just needs quotation marks | 1 |
14,047 | 24,287,559,327 | IssuesEvent | 2022-09-29 00:38:43 | CS3219-AY2223S1/cs3219-project-ay2223s1-g5 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g5 | closed | [FR-MATCHING-4] The system should inform the users that no match is available if a match cannot be found within 30 seconds. | functional requirement P1 | - [x] #105 | 1.0 | [FR-MATCHING-4] The system should inform the users that no match is available if a match cannot be found within 30 seconds. - - [x] #105 | non_infrastructure | the system should inform the users that no match is available if a match cannot be found within seconds | 0 |
10,815 | 8,742,386,233 | IssuesEvent | 2018-12-12 16:18:46 | coq/coq | https://api.github.com/repos/coq/coq | opened | Windows CI: cygwin install cache is not reused | kind: infrastructure platform: Windows | In the gitlab Windows CI the install cache for cygwin is not reused. This produces a lot of traffic.
Please note that the cache cannot result in an security issues, even if it is shared between runners. The cygwin setup checks the signature of the main index file and the main index file contains hashes for all modules. The only things we must make sure is not manipulated between runs is the cygwin setup program itself.
Does someone have preferences if we should put the setup program somewhere so that only admins can change it or if we should download it every time? | 1.0 | Windows CI: cygwin install cache is not reused - In the gitlab Windows CI the install cache for cygwin is not reused. This produces a lot of traffic.
Please note that the cache cannot result in an security issues, even if it is shared between runners. The cygwin setup checks the signature of the main index file and the main index file contains hashes for all modules. The only things we must make sure is not manipulated between runs is the cygwin setup program itself.
Does someone have preferences if we should put the setup program somewhere so that only admins can change it or if we should download it every time? | infrastructure | windows ci cygwin install cache is not reused in the gitlab windows ci the install cache for cygwin is not reused this produces a lot of traffic please note that the cache cannot result in an security issues even if it is shared between runners the cygwin setup checks the signature of the main index file and the main index file contains hashes for all modules the only things we must make sure is not manipulated between runs is the cygwin setup program itself does someone have preferences if we should put the setup program somewhere so that only admins can change it or if we should download it every time | 1 |
15,234 | 11,424,422,873 | IssuesEvent | 2020-02-03 17:43:10 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Exclude some paths from PR trigger | area:infrastructure enhancement triaged | Now that [PR triggers for YAML pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#pr-triggers) support path filtering for GitHub repos, the PR pipeline validation for this repo can be updated to exclude some paths that shouldn't trigger a build:
* `README*`
* `documentation/*` | 1.0 | Exclude some paths from PR trigger - Now that [PR triggers for YAML pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#pr-triggers) support path filtering for GitHub repos, the PR pipeline validation for this repo can be updated to exclude some paths that shouldn't trigger a build:
* `README*`
* `documentation/*` | infrastructure | exclude some paths from pr trigger now that support path filtering for github repos the pr pipeline validation for this repo can be updated to exclude some paths that shouldn t trigger a build readme documentation | 1 |
13,386 | 10,229,412,001 | IssuesEvent | 2019-08-17 12:28:49 | amethyst/amethyst | https://api.github.com/repos/amethyst/amethyst | closed | [BUG] Code coverage reports are unreliable | pri: normal stale status: waiting for merge team: infrastructure type: bug | I've noticed that code coverage is very inaccurate in many cases. As we are planning to rely on that coverage always going up, we have to make sure that it's reliable. https://github.com/amethyst/amethyst/pull/1704 is a good example where it's obviously incorrect.
I did some digging, suspecting that maybe codecov is doing something stupid (like selecting wrong base commits), but this seems all right. Instead, the coverage reports generated by our builds seems to be wrong. Codecov allows downloading raw coverages used for comparison, so i did that for https://github.com/amethyst/amethyst/pull/1704
The reports are quite different. Both versions and their somewhat unreadable diff is here: http://www.mergely.com/FUf75Qz8/
Interesting to notice is that those reports actually have different data, for example for "types.rs" file, and this is exactly what reports says. Obviously the PR itself doesn't contain any changes that could influence this file's coverage.
```
master:
{"file": "/home/jenkins/workspace/amethyst_PR-1704@2/amethyst_rendy/src/types.rs", "percent_covered": "50.00", "covered_lines": "3", "total_lines": "6"},
that PR tip:
{"file": "/home/jenkins/workspace/amethyst_master@2/amethyst_rendy/src/types.rs", "percent_covered": "40.00", "covered_lines": "2", "total_lines": "5"},
```
There's also per-line reports further down, and it's also different.
The cause of those discrepancies is unknown to me right now, needs further investigation. | 1.0 | [BUG] Code coverage reports are unreliable - I've noticed that code coverage is very inaccurate in many cases. As we are planning to rely on that coverage always going up, we have to make sure that it's reliable. https://github.com/amethyst/amethyst/pull/1704 is a good example where it's obviously incorrect.
I did some digging, suspecting that maybe codecov is doing something stupid (like selecting wrong base commits), but this seems all right. Instead, the coverage reports generated by our builds seems to be wrong. Codecov allows downloading raw coverages used for comparison, so i did that for https://github.com/amethyst/amethyst/pull/1704
The reports are quite different. Both versions and their somewhat unreadable diff is here: http://www.mergely.com/FUf75Qz8/
Interesting to notice is that those reports actually have different data, for example for "types.rs" file, and this is exactly what reports says. Obviously the PR itself doesn't contain any changes that could influence this file's coverage.
```
master:
{"file": "/home/jenkins/workspace/amethyst_PR-1704@2/amethyst_rendy/src/types.rs", "percent_covered": "50.00", "covered_lines": "3", "total_lines": "6"},
that PR tip:
{"file": "/home/jenkins/workspace/amethyst_master@2/amethyst_rendy/src/types.rs", "percent_covered": "40.00", "covered_lines": "2", "total_lines": "5"},
```
There's also per-line reports further down, and it's also different.
The cause of those discrepancies is unknown to me right now, needs further investigation. | infrastructure | code coverage reports are unreliable i ve noticed that code coverage is very inaccurate in many cases as we are planning to rely on that coverage always going up we have to make sure that it s reliable is a good example where it s obviously incorrect i did some digging suspecting that maybe codecov is doing something stupid like selecting wrong base commits but this seems all right instead the coverage reports generated by our builds seems to be wrong codecov allows downloading raw coverages used for comparison so i did that for the reports are quite different both versions and their somewhat unreadable diff is here interesting to notice is that those reports actually have different data for example for types rs file and this is exactly what reports says obviously the pr itself doesn t contain any changes that could influence this file s coverage master file home jenkins workspace amethyst pr amethyst rendy src types rs percent covered covered lines total lines that pr tip file home jenkins workspace amethyst master amethyst rendy src types rs percent covered covered lines total lines there s also per line reports further down and it s also different the cause of those discrepancies is unknown to me right now needs further investigation | 1 |
87,681 | 8,109,730,175 | IssuesEvent | 2018-08-14 08:36:57 | BEXIS2/Core | https://api.github.com/repos/BEXIS2/Core | closed | BAM: Add additional infos (NaN) in the table if click to sort | 2.12 RC1 Status: Testing Required Type: Enhancement | It happens by clicking on each title, not only the Group-member.

| 1.0 | BAM: Add additional infos (NaN) in the table if click to sort - It happens by clicking on each title, not only the Group-member.

| non_infrastructure | bam add additional infos nan in the table if click to sort it happens by clicking on each title not only the group member | 0 |
15,927 | 11,771,038,924 | IssuesEvent | 2020-03-15 21:59:53 | reservix-ui/marigold | https://api.github.com/repos/reservix-ui/marigold | opened | Publishing workflow | infrastructure | - Setup publishing `@marigold` packages to npm (and Github?).
We can either use lerna or [`changesets`](https://www.npmjs.com/package/@changesets/cli) (if we use the later, maybe we can ditch lerna?)
- Should only publish if text succeed | 1.0 | Publishing workflow - - Setup publishing `@marigold` packages to npm (and Github?).
We can either use lerna or [`changesets`](https://www.npmjs.com/package/@changesets/cli) (if we use the later, maybe we can ditch lerna?)
- Should only publish if text succeed | infrastructure | publishing workflow setup publishing marigold packages to npm and github we can either use lerna or if we use the later maybe we can ditch lerna should only publish if text succeed | 1 |
144,978 | 11,643,576,871 | IssuesEvent | 2020-02-29 14:24:17 | offa/release-tool | https://api.github.com/repos/offa/release-tool | closed | Nested patch blocks avoidable? | test | Are the nested _patch_ blocks of `test_project_and_repository_from_path` avoidable? | 1.0 | Nested patch blocks avoidable? - Are the nested _patch_ blocks of `test_project_and_repository_from_path` avoidable? | non_infrastructure | nested patch blocks avoidable are the nested patch blocks of test project and repository from path avoidable | 0 |
16,086 | 11,826,378,528 | IssuesEvent | 2020-03-21 17:34:57 | julianGoh17/commit-reader | https://api.github.com/repos/julianGoh17/commit-reader | closed | Create job to run unit tests | Infrastructure enhancement | With true TDD development, we need to create to a github action which runs the unit tests to make sure there are no tests that have failed to prevent regressions. | 1.0 | Create job to run unit tests - With true TDD development, we need to create to a github action which runs the unit tests to make sure there are no tests that have failed to prevent regressions. | infrastructure | create job to run unit tests with true tdd development we need to create to a github action which runs the unit tests to make sure there are no tests that have failed to prevent regressions | 1 |
425,621 | 12,343,131,871 | IssuesEvent | 2020-05-15 03:00:38 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | closed | Fetch Kubernetes images using tag@hash rather to prevent invalid downloads | good first issue help wanted kind/feature lifecycle/rotten priority/backlog roadmap/2019 | Currently, if a user uses --image-repository, they may end up with images that do not match the official Kubernetes ones.
We should use digests or some other mechanism to prevent mirror poisoning. | 1.0 | Fetch Kubernetes images using tag@hash rather to prevent invalid downloads - Currently, if a user uses --image-repository, they may end up with images that do not match the official Kubernetes ones.
We should use digests or some other mechanism to prevent mirror poisoning. | non_infrastructure | fetch kubernetes images using tag hash rather to prevent invalid downloads currently if a user uses image repository they may end up with images that do not match the official kubernetes ones we should use digests or some other mechanism to prevent mirror poisoning | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.