| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 7 – 112 |
| repo_url | stringlengths | 36 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 2 – 665 |
| labels | stringlengths | 4 – 554 |
| body | stringlengths | 3 – 235k |
| index | stringclasses | 6 values |
| text_combine | stringlengths | 96 – 235k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 196k |
| binary_label | int64 | 0 – 1 |
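From the rows below, the `text` column appears to be a normalized form of `title` + `body` (URLs, digits, and punctuation stripped, lowercased), and `binary_label` a 0/1 encoding of `label`. A minimal sketch of those derivations — a reverse-engineered approximation, not the dataset's actual preprocessing code, and the function names are hypothetical:

```python
import re


def normalize_issue_text(title: str, body: str) -> str:
    """Rough approximation of how `text` seems to be derived from
    `title` and `body`: drop URLs, keep letters only, lowercase,
    collapse whitespace. (A guess from the sample rows, not the
    dataset's real pipeline.)"""
    combined = f"{title} - {body}"
    combined = re.sub(r"https?://\S+", " ", combined)  # remove URLs
    combined = re.sub(r"[^A-Za-z]+", " ", combined)    # letters only
    return " ".join(combined.lower().split())


def encode_label(label: str) -> int:
    """`binary_label` looks like a 0/1 encoding of `label`:
    infrastructure -> 1, non_infrastructure -> 0."""
    return 1 if label == "infrastructure" else 0
```

The sample rows are consistent with this mapping (e.g. `label = infrastructure` always pairs with `binary_label = 1`), but the exact tokenization rules of the real pipeline may differ.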
**Row 30,253**
- id: 24,701,374,935
- type: IssuesEvent
- created_at: 2022-10-19 15:31:01
- repo: dotnet/runtime
- repo_url: https://api.github.com/repos/dotnet/runtime
- action: closed
- title: build-android-rootfs.sh Must be fix the "termux" URL
- labels: help wanted area-Infrastructure-coreclr
- body: The https://github.com/dotnet/runtime/blob/main/eng/common/cross/build-android-rootfs.sh Go line: 100 Find string: http://termux.net/ Replace all to: https://packages.termux.dev/termux-main-21/ Build succeeded test: ./build.sh --os Android or ROOTFS_DIR=$(realpath /Home/runtime_linux/.tools/android-rootfs/android-ndk-r21/sysroot) ./build.sh --cross --arch arm64 --subset mono @steveisok
- index: 1.0
- text_combine: build-android-rootfs.sh Must be fix the "termux" URL - The https://github.com/dotnet/runtime/blob/main/eng/common/cross/build-android-rootfs.sh Go line: 100 Find string: http://termux.net/ Replace all to: https://packages.termux.dev/termux-main-21/ Build succeeded test: ./build.sh --os Android or ROOTFS_DIR=$(realpath /Home/runtime_linux/.tools/android-rootfs/android-ndk-r21/sysroot) ./build.sh --cross --arch arm64 --subset mono @steveisok
- label: infrastructure
- text: build android rootfs sh must be fix the termux url the go line find string replace all to build succeeded test build sh os android or rootfs dir realpath home runtime linux tools android rootfs android ndk sysroot build sh cross arch subset mono steveisok
- binary_label: 1
**Row 85,900**
- id: 16,759,626,193
- type: IssuesEvent
- created_at: 2021-06-13 14:15:14
- repo: joomla/joomla-cms
- repo_url: https://api.github.com/repos/joomla/joomla-cms
- action: closed
- title: [com_fields] Field type media. Wrong default value/path in article edit.
- labels: J3 Issue No Code Attached Yet
- body: ### Steps to reproduce the issue - Create a custom field for articles of type `Media` - Select a directory and enter an existing image name inside that directory as Default Value ![19-04-_2019_15-17-13](https://user-images.githubusercontent.com/20780646/56425884-646dd400-62b6-11e9-94e8-cbbafa257b3b.jpg) - Save the field - Open an article and see: ![19-04-_2019_15-19-32](https://user-images.githubusercontent.com/20780646/56425972-b3b40480-62b6-11e9-8773-3f88b9235de2.jpg) - **The path is wrong. It should be `images/banners/shop-ad.jpg`** - Don't change the media field and save the article. - See the article in frontend. ![19-04-_2019_15-24-47](https://user-images.githubusercontent.com/20780646/56426134-61bfae80-62b7-11e9-939c-bebbbecd1b6b.jpg) ### Expected result - Field resolves the directory path correctly. ### Actual result - Directory path ignored and one have to select the media again (in any article).
- index: 1.0
- text_combine: [com_fields] Field type media. Wrong default value/path in article edit. - ### Steps to reproduce the issue - Create a custom field for articles of type `Media` - Select a directory and enter an existing image name inside that directory as Default Value ![19-04-_2019_15-17-13](https://user-images.githubusercontent.com/20780646/56425884-646dd400-62b6-11e9-94e8-cbbafa257b3b.jpg) - Save the field - Open an article and see: ![19-04-_2019_15-19-32](https://user-images.githubusercontent.com/20780646/56425972-b3b40480-62b6-11e9-8773-3f88b9235de2.jpg) - **The path is wrong. It should be `images/banners/shop-ad.jpg`** - Don't change the media field and save the article. - See the article in frontend. ![19-04-_2019_15-24-47](https://user-images.githubusercontent.com/20780646/56426134-61bfae80-62b7-11e9-939c-bebbbecd1b6b.jpg) ### Expected result - Field resolves the directory path correctly. ### Actual result - Directory path ignored and one have to select the media again (in any article).
- label: non_infrastructure
- text: field type media wrong default value path in article edit steps to reproduce the issue create a custom field for articles of type media select a directory and enter an existing image name inside that directory as default value save the field open an article and see the path is wrong it should be images banners shop ad jpg don t change the media field and save the article see the article in frontend expected result field resolves the directory path correctly actual result directory path ignored and one have to select the media again in any article
- binary_label: 0
**Row 301,435**
- id: 26,047,941,148
- type: IssuesEvent
- created_at: 2022-12-22 15:55:14
- repo: openhab/openhab-core
- repo_url: https://api.github.com/repos/openhab/openhab-core
- action: opened
- title: ThreadPoolManagerTest unstable
- labels: test
- body: This test failed in a GHA Windows build: https://github.com/wborn/openhab-core/actions/runs/3758246710/jobs/6386349876 ``` [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.194 s <<< FAILURE! - in org.openhab.core.common.ThreadPoolManagerTest [ERROR] org.openhab.core.common.ThreadPoolManagerTest.testGetPoolShutdown Time elapsed: 1.57 s <<< FAILURE! org.opentest4j.AssertionFailedError: Checking if thread pool Test works ==> expected: <true> but was: <false> at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40) at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210) at org.openhab.core.common.ThreadPoolManagerTest.checkThreadPoolWorks(ThreadPoolManagerTest.java:128) at org.openhab.core.common.ThreadPoolManagerTest.testGetPoolShutdown(ThreadPoolManagerTest.java:108) ```
- index: 1.0
- text_combine: ThreadPoolManagerTest unstable - This test failed in a GHA Windows build: https://github.com/wborn/openhab-core/actions/runs/3758246710/jobs/6386349876 ``` [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.194 s <<< FAILURE! - in org.openhab.core.common.ThreadPoolManagerTest [ERROR] org.openhab.core.common.ThreadPoolManagerTest.testGetPoolShutdown Time elapsed: 1.57 s <<< FAILURE! org.opentest4j.AssertionFailedError: Checking if thread pool Test works ==> expected: <true> but was: <false> at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40) at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:210) at org.openhab.core.common.ThreadPoolManagerTest.checkThreadPoolWorks(ThreadPoolManagerTest.java:128) at org.openhab.core.common.ThreadPoolManagerTest.testGetPoolShutdown(ThreadPoolManagerTest.java:108) ```
- label: non_infrastructure
- text: threadpoolmanagertest unstable this test failed in a gha windows build tests run failures errors skipped time elapsed s failure in org openhab core common threadpoolmanagertest org openhab core common threadpoolmanagertest testgetpoolshutdown time elapsed s failure org assertionfailederror checking if thread pool test works expected but was at org junit jupiter api assertionutils fail assertionutils java at org junit jupiter api asserttrue asserttrue asserttrue java at org junit jupiter api assertions asserttrue assertions java at org openhab core common threadpoolmanagertest checkthreadpoolworks threadpoolmanagertest java at org openhab core common threadpoolmanagertest testgetpoolshutdown threadpoolmanagertest java
- binary_label: 0
**Row 475,010**
- id: 13,685,839,211
- type: IssuesEvent
- created_at: 2020-09-30 07:48:25
- repo: zeebe-io/zeebe
- repo_url: https://api.github.com/repos/zeebe-io/zeebe
- action: closed
- title: Detect issues when upgrading the broker to a new version
- labels: Impact: Availability Impact: Data Priority: Critical Scope: broker Status: In Progress Status: Planned Type: Enhancement
- body: **Is your feature request related to a problem? Please describe.** Currently, it can happen that the upgrade of the broker to a new version fails (e.g. #5268). As a user, it is not clear how to detect these failures and how to solve them. **Describe the solution you'd like** On startup with a new version, the broker detects if there are any issues that prevent a successful upgrade. If it detects an issue then it informs the user how to solve this issue. This can be a good error message and documentation that explain the procedure in detail. **Describe alternatives you've considered** * fix the upgrade issues on-the-fly: would be nice but this is complicated and error-prone * build a tool for the detection: it requires too much internal logic from the workflow engine that can't be extracted **Additional context** Current upgrade issues: #5268, #5251
- index: 1.0
- text_combine: Detect issues when upgrading the broker to a new version - **Is your feature request related to a problem? Please describe.** Currently, it can happen that the upgrade of the broker to a new version fails (e.g. #5268). As a user, it is not clear how to detect these failures and how to solve them. **Describe the solution you'd like** On startup with a new version, the broker detects if there are any issues that prevent a successful upgrade. If it detects an issue then it informs the user how to solve this issue. This can be a good error message and documentation that explain the procedure in detail. **Describe alternatives you've considered** * fix the upgrade issues on-the-fly: would be nice but this is complicated and error-prone * build a tool for the detection: it requires too much internal logic from the workflow engine that can't be extracted **Additional context** Current upgrade issues: #5268, #5251
- label: non_infrastructure
- text: detect issues when upgrading the broker to a new version is your feature request related to a problem please describe currently it can happen that the upgrade of the broker to a new version fails e g as a user it is not clear how to detect these failures and how to solve them describe the solution you d like on startup with a new version the broker detects if there are any issues that prevent a successful upgrade if it detects an issue then it informs the user how to solve this issue this can be a good error message and documentation that explain the procedure in detail describe alternatives you ve considered fix the upgrade issues on the fly would be nice but this is complicated and error prone build a tool for the detection it requires too much internal logic from the workflow engine that can t be extracted additional context current upgrade issues
- binary_label: 0
**Row 6,110**
- id: 6,159,733,973
- type: IssuesEvent
- created_at: 2017-06-29 01:38:39
- repo: vmware/docker-volume-vsphere
- repo_url: https://api.github.com/repos/vmware/docker-volume-vsphere
- action: opened
- title: Need test binary to enable verbose mode of gocheck
- labels: component/test-infrastructure kind/enhancement
- body: To have better time reporting of our e2e tests, we need to enable verbose mode (done through -check.v flag). Doing this will allow us to spit more suite level information (time required and individual test suite pass or fail. We can then use it to print overall summary at the end of test-all target. We also need this kind of binary to properly spit out coverage info. So two things are served from this binary. CC @shuklanirdesh82
- index: 1.0
- text_combine: Need test binary to enable verbose mode of gocheck - To have better time reporting of our e2e tests, we need to enable verbose mode (done through -check.v flag). Doing this will allow us to spit more suite level information (time required and individual test suite pass or fail. We can then use it to print overall summary at the end of test-all target. We also need this kind of binary to properly spit out coverage info. So two things are served from this binary. CC @shuklanirdesh82
- label: infrastructure
- text: need test binary to enable verbose mode of gocheck to have better time reporting of our tests we need to enable verbose mode done through check v flag doing this will allow us to spit more suite level information time required and individual test suite pass or fail we can then use it to print overall summary at the end of test all target we also need this kind of binary to properly spit out coverage info so two things are served from this binary cc
- binary_label: 1
**Row 23,977**
- id: 11,994,922,013
- type: IssuesEvent
- created_at: 2020-04-08 14:26:32
- repo: wellcomecollection/platform
- repo_url: https://api.github.com/repos/wellcomecollection/platform
- action: closed
- title: Bag verifier: one of the tests is flaky
- labels: 🐛 Bug 📦 Storage service
- body: As seen in https://api.travis-ci.org/v3/job/601215760/log.txt on https://github.com/wellcometrust/storage-service/pull/388: > 11:23:18.022 [default-akka.actor.default-dispatcher-3] ERROR u.a.w.p.a.c.s.m.IngestStepWorker$$anon$1 - DeterministicFailure(java.lang.Throwable: Payload-Oxum has the wrong number of payload files: 26, but bag manifest has 27,Some(root=qmtisvty/UwHIqF8C/fCmzS9nM/v6, status=incomplete, ingestId=94f2402d-6ee8-44aa-bb9f-cf826a9d78bb, duration=PT0.039472S, durationSeconds=0))
- index: 1.0
- text_combine: Bag verifier: one of the tests is flaky - As seen in https://api.travis-ci.org/v3/job/601215760/log.txt on https://github.com/wellcometrust/storage-service/pull/388: > 11:23:18.022 [default-akka.actor.default-dispatcher-3] ERROR u.a.w.p.a.c.s.m.IngestStepWorker$$anon$1 - DeterministicFailure(java.lang.Throwable: Payload-Oxum has the wrong number of payload files: 26, but bag manifest has 27,Some(root=qmtisvty/UwHIqF8C/fCmzS9nM/v6, status=incomplete, ingestId=94f2402d-6ee8-44aa-bb9f-cf826a9d78bb, duration=PT0.039472S, durationSeconds=0))
- label: non_infrastructure
- text: bag verifier one of the tests is flaky as seen in on error u a w p a c s m ingeststepworker anon deterministicfailure java lang throwable payload oxum has the wrong number of payload files but bag manifest has some root qmtisvty status incomplete ingestid duration durationseconds
- binary_label: 0
**Row 18,854**
- id: 13,136,016,219
- type: IssuesEvent
- created_at: 2020-08-07 04:47:36
- repo: kubeflow/kfserving
- repo_url: https://api.github.com/repos/kubeflow/kfserving
- action: closed
- title: Separate AdmissionControllers to a different deployment
- labels: area/engprod area/infrastructure-feature area/operator kind/feature priority/p2
- body: /kind feature **Describe the solution you'd like** [A clear and concise description of what you want to happen.] We need to be able to autoscale our admission controller separately to support large volatile clusters with lots of Pod CRUDs. **Anything else you would like to add:** [Miscellaneous information that will assist in solving the issue.] This should be done when upgrading ControllerRuntime to avoid wasted effort.
- index: 1.0
- text_combine: Separate AdmissionControllers to a different deployment - /kind feature **Describe the solution you'd like** [A clear and concise description of what you want to happen.] We need to be able to autoscale our admission controller separately to support large volatile clusters with lots of Pod CRUDs. **Anything else you would like to add:** [Miscellaneous information that will assist in solving the issue.] This should be done when upgrading ControllerRuntime to avoid wasted effort.
- label: infrastructure
- text: separate admissioncontrollers to a different deployment kind feature describe the solution you d like we need to be able to autoscale our admission controller separately to support large volatile clusters with lots of pod cruds anything else you would like to add this should be done when upgrading controllerruntime to avoid wasted effort
- binary_label: 1
**Row 14,821**
- id: 11,172,377,858
- type: IssuesEvent
- created_at: 2019-12-29 05:27:56
- repo: Blacksmoke16/oq
- repo_url: https://api.github.com/repos/Blacksmoke16/oq
- action: closed
- title: oq does not work in a docker container
- labels: kind:bug kind:infrastructure status:wip
- body: I have build a docker container like so FROM crystallang/crystal:latest RUN git clone https://github.com/Blacksmoke16/oq.git WORKDIR /oq RUN shards build --production RUN chmod +x /oq/bin/oq RUN cp /oq/bin/oq /bin/ ENV PATH /bin/:$PATH RUN oq --help ------------ I get an error when running oq --help Sending build context to Docker daemon 5.632kB Step 1/8 : FROM crystallang/crystal:latest ---> e9906ad8c49f Step 2/8 : RUN git clone https://github.com/Blacksmoke16/oq.git ---> Using cache ---> 9989d5d29ddb Step 3/8 : WORKDIR /oq ---> Using cache ---> 9a3f277c8558 Step 4/8 : RUN shards build --production ---> Using cache ---> 1ed78db07894 Step 5/8 : RUN chmod +x /oq/bin/oq ---> Using cache ---> 0ebaf7c94a18 Step 6/8 : RUN cp /oq/bin/oq /bin/ ---> Using cache ---> e02d9d996fff Step 7/8 : ENV PATH /bin/:$PATH ---> Using cache ---> 6bc2e585cc6e Step 8/8 : RUN oq --help ---> Running in d0822a0f3af4 Failed to raise an exception: END_OF_STACK [0x490bb6] *CallStack::print_backtrace:Int32 +118 [0x46d466] __crystal_raise +86 [0x46d98e] ??? [0x4bbab6] *Crystal::System::File::open<String, String, File::Permissions>:Int32 +214 [0x4b7ec3] *File::new<String, String, File::Permissions, Nil, Nil>:File +67 [0x48785d] *CallStack::read_dwarf_sections:(Array(Tuple(UInt64, UInt64, String)) | Nil) +109 [0x4875ed] *CallStack::decode_line_number<UInt64>:Tuple(String, Int32, Int32) +45 [0x486d78] *CallStack#decode_backtrace:Array(String) +296 [0x486c32] *CallStack#printable_backtrace:Array(String) +50 [0x4f049d] *Exception+ +77 [0x4f02e8] *Exception+ +120 [0x4ec07a] *AtExitHandlers::run<Int32>:Int32 +490 [0x55510b] *Crystal::main<Int32, Pointer(Pointer(UInt8))>:Int32 +139 [0x477e76] main +6 [0x7f0eb572e830] __libc_start_main +240 [0x46ba19] _start +41 [0x0] ??? The command '/bin/sh -c oq --help' returned a non-zero code: 5 ------------ Please can you help identify the issue. Thanks.
- index: 1.0
- text_combine: oq does not work in a docker container - I have build a docker container like so FROM crystallang/crystal:latest RUN git clone https://github.com/Blacksmoke16/oq.git WORKDIR /oq RUN shards build --production RUN chmod +x /oq/bin/oq RUN cp /oq/bin/oq /bin/ ENV PATH /bin/:$PATH RUN oq --help ------------ I get an error when running oq --help Sending build context to Docker daemon 5.632kB Step 1/8 : FROM crystallang/crystal:latest ---> e9906ad8c49f Step 2/8 : RUN git clone https://github.com/Blacksmoke16/oq.git ---> Using cache ---> 9989d5d29ddb Step 3/8 : WORKDIR /oq ---> Using cache ---> 9a3f277c8558 Step 4/8 : RUN shards build --production ---> Using cache ---> 1ed78db07894 Step 5/8 : RUN chmod +x /oq/bin/oq ---> Using cache ---> 0ebaf7c94a18 Step 6/8 : RUN cp /oq/bin/oq /bin/ ---> Using cache ---> e02d9d996fff Step 7/8 : ENV PATH /bin/:$PATH ---> Using cache ---> 6bc2e585cc6e Step 8/8 : RUN oq --help ---> Running in d0822a0f3af4 Failed to raise an exception: END_OF_STACK [0x490bb6] *CallStack::print_backtrace:Int32 +118 [0x46d466] __crystal_raise +86 [0x46d98e] ??? [0x4bbab6] *Crystal::System::File::open<String, String, File::Permissions>:Int32 +214 [0x4b7ec3] *File::new<String, String, File::Permissions, Nil, Nil>:File +67 [0x48785d] *CallStack::read_dwarf_sections:(Array(Tuple(UInt64, UInt64, String)) | Nil) +109 [0x4875ed] *CallStack::decode_line_number<UInt64>:Tuple(String, Int32, Int32) +45 [0x486d78] *CallStack#decode_backtrace:Array(String) +296 [0x486c32] *CallStack#printable_backtrace:Array(String) +50 [0x4f049d] *Exception+ +77 [0x4f02e8] *Exception+ +120 [0x4ec07a] *AtExitHandlers::run<Int32>:Int32 +490 [0x55510b] *Crystal::main<Int32, Pointer(Pointer(UInt8))>:Int32 +139 [0x477e76] main +6 [0x7f0eb572e830] __libc_start_main +240 [0x46ba19] _start +41 [0x0] ??? The command '/bin/sh -c oq --help' returned a non-zero code: 5 ------------ Please can you help identify the issue. Thanks.
- label: infrastructure
- text: oq does not work in a docker container i have build a docker container like so from crystallang crystal latest run git clone workdir oq run shards build production run chmod x oq bin oq run cp oq bin oq bin env path bin path run oq help i get an error when running oq help sending build context to docker daemon step from crystallang crystal latest step run git clone using cache step workdir oq using cache step run shards build production using cache step run chmod x oq bin oq using cache step run cp oq bin oq bin using cache step env path bin path using cache step run oq help running in failed to raise an exception end of stack callstack print backtrace crystal raise crystal system file open file new file callstack read dwarf sections array tuple string nil callstack decode line number tuple string callstack decode backtrace array string callstack printable backtrace array string exception exception atexithandlers run crystal main main libc start main start the command bin sh c oq help returned a non zero code please can you help identify the issue thanks
- binary_label: 1
**Row 24,394**
- id: 17,191,179,005
- type: IssuesEvent
- created_at: 2021-07-16 11:11:44
- repo: google/site-kit-wp
- repo_url: https://api.github.com/repos/google/site-kit-wp
- action: opened
- title: Add viewContext for all top-level React apps
- labels: P1 Type: Infrastructure
- body: ## Feature Description The `Root` component that we use for all top-level React apps has a `viewContext` prop that was introduced initially for providing view/screen context for [feature tours](https://github.com/google/site-kit-wp/issues/2649). It was later also integrated with our `ErrorHandler` (boundary) to provide more contextual error codes for tracking React errors. However, the prop was initially optional because we only needed feature tours in a few select places so it didn't make sense at the time to define this in all instances. For errors however, we do see these raised on all screens (for one reason or another) and this results in less contextual error codes because it has `null` where there would otherwise be a named view context. --------------- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance criteria * The `viewContext` prop of the `Root` component should be promoted to be a required prop * [View context constants](https://github.com/google/site-kit-wp/blob/6123efaea980ad2c4179a2995b1d7c6178b010da/assets/js/googlesitekit/constants.js) should be added for all missing contexts (please list in IB) * All instances of `Root` missing this prop should be updated to provide it via the respective constant ## Implementation Brief * <!-- One or more bullet points for how to technically implement the feature. --> ### Test Coverage * <!-- One or more bullet points for how to implement automated tests to verify the feature works. --> ### Visual Regression Changes * <!-- One or more bullet points describing how the feature will affect visual regression tests, if applicable. --> ## QA Brief * <!-- One or more bullet points for how to test that the feature works as expected. --> ## Changelog entry * <!-- One sentence summarizing the PR, to be used in the changelog. -->
- index: 1.0
- text_combine: Add viewContext for all top-level React apps - ## Feature Description The `Root` component that we use for all top-level React apps has a `viewContext` prop that was introduced initially for providing view/screen context for [feature tours](https://github.com/google/site-kit-wp/issues/2649). It was later also integrated with our `ErrorHandler` (boundary) to provide more contextual error codes for tracking React errors. However, the prop was initially optional because we only needed feature tours in a few select places so it didn't make sense at the time to define this in all instances. For errors however, we do see these raised on all screens (for one reason or another) and this results in less contextual error codes because it has `null` where there would otherwise be a named view context. --------------- _Do not alter or remove anything below. The following sections will be managed by moderators only._ ## Acceptance criteria * The `viewContext` prop of the `Root` component should be promoted to be a required prop * [View context constants](https://github.com/google/site-kit-wp/blob/6123efaea980ad2c4179a2995b1d7c6178b010da/assets/js/googlesitekit/constants.js) should be added for all missing contexts (please list in IB) * All instances of `Root` missing this prop should be updated to provide it via the respective constant ## Implementation Brief * <!-- One or more bullet points for how to technically implement the feature. --> ### Test Coverage * <!-- One or more bullet points for how to implement automated tests to verify the feature works. --> ### Visual Regression Changes * <!-- One or more bullet points describing how the feature will affect visual regression tests, if applicable. --> ## QA Brief * <!-- One or more bullet points for how to test that the feature works as expected. --> ## Changelog entry * <!-- One sentence summarizing the PR, to be used in the changelog. -->
- label: infrastructure
- text: add viewcontext for all top level react apps feature description the root component that we use for all top level react apps has a viewcontext prop that was introduced initially for providing view screen context for it was later also integrated with our errorhandler boundary to provide more contextual error codes for tracking react errors however the prop was initially optional because we only needed feature tours in a few select places so it didn t make sense at the time to define this in all instances for errors however we do see these raised on all screens for one reason or another and this results in less contextual error codes because it has null where there would otherwise be a named view context do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria the viewcontext prop of the root component should be promoted to be a required prop should be added for all missing contexts please list in ib all instances of root missing this prop should be updated to provide it via the respective constant implementation brief test coverage visual regression changes qa brief changelog entry
- binary_label: 1
**Row 120,568**
- id: 17,644,231,593
- type: IssuesEvent
- created_at: 2021-08-20 02:00:55
- repo: fbennets/HCLC-GDPR-Bot
- repo_url: https://api.github.com/repos/fbennets/HCLC-GDPR-Bot
- action: opened
- title: CVE-2021-29522 (Medium) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl
- labels: security vulnerability
- body: ## CVE-2021-29522 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. The `tf.raw_ops.Conv3DBackprop*` operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/a91bb59769f19146d5a0c20060244378e878f140/tensorflow/core/kernels/conv_grad_ops_3d.cc#L430-L450) does not check that the divisor used in computing the shard size is not zero. Thus, if attacker controls the input sizes, they can trigger a denial of service via a division by zero error. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29522>CVE-2021-29522</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c968-pq7h-7fxv">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c968-pq7h-7fxv</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- index: True
- text_combine: CVE-2021-29522 (Medium) detected in tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-29522 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: HCLC-GDPR-Bot/requirements.txt</p> <p>Path to vulnerable library: HCLC-GDPR-Bot/requirements.txt</p> <p> Dependency Hierarchy: - tensorflow_addons-0.7.1-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library) - :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an end-to-end open source platform for machine learning. The `tf.raw_ops.Conv3DBackprop*` operations fail to validate that the input tensors are not empty. In turn, this would result in a division by 0. This is because the implementation(https://github.com/tensorflow/tensorflow/blob/a91bb59769f19146d5a0c20060244378e878f140/tensorflow/core/kernels/conv_grad_ops_3d.cc#L430-L450) does not check that the divisor used in computing the shard size is not zero. Thus, if attacker controls the input sizes, they can trigger a denial of service via a division by zero error. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range. <p>Publish Date: 2021-05-14 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-29522>CVE-2021-29522</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c968-pq7h-7fxv">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-c968-pq7h-7fxv</a></p> <p>Release Date: 2021-05-14</p> <p>Fix Resolution: tensorflow - 2.5.0, tensorflow-cpu - 2.5.0, tensorflow-gpu - 2.5.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
- label: non_infrastructure
- text: cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file hclc gdpr bot requirements txt path to vulnerable library hclc gdpr bot requirements txt dependency hierarchy tensorflow addons whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning the tf raw ops operations fail to validate that the input tensors are not empty in turn this would result in a division by this is because the implementation does not check that the divisor used in computing the shard size is not zero thus if attacker controls the input sizes they can trigger a denial of service via a division by zero error the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource
- binary_label: 0
714,400
24,560,388,149
IssuesEvent
2022-10-12 19:41:11
interaction-lab/MoveToCode
https://api.github.com/repos/interaction-lab/MoveToCode
closed
Make tutor Kuri communication more salient?
high priority
- [ ] test out new location for text - [ ] test out voice -> possibly Polly if not just our voice
1.0
Make tutor Kuri communication more salient? - - [ ] test out new location for text - [ ] test out voice -> possibly Polly if not just our voice
non_infrastructure
make tutor kuri communication more salient test out new location for text test out voice possibly polly if not just our voice
0
20,903
14,233,776,305
IssuesEvent
2020-11-18 12:40:09
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Unable to build a custom version of the dotnet/runtime repository with a suffix.
area-Infrastructure-coreclr untriaged
### Description At Criteo, we build our own custom CLR. We do this because we have changes in the CLR that are specifics to our use-cases and it won't be accepted upstream (already discussed). To have my _dotnet-runtime-5.0.0-criteo1.tar.gz_ artifact, I use the command: `./build.sh -c Release /p:VersionSuffix=criteo1` Then I looked at the _artifacts/packages/Release/Shipping/_ folder to get the artifact and use it. Steps to reproduce (without applying the patches): ``` git clone https://github.com/dotnet/runtime cd runtime git checkout -b 5.0.0-rtm-criteo1 v5.0.0-rtm.20519.4 ./build.sh -c Release /p:VersionSuffix=criteo1 ``` Since the RTM, I got these errors ``` Build FAILED. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetHostPolicy from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.App.Internal 5.0.0-criteo1 -> Microsoft.NETCore.DotNetHostPolicy (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostPolicy (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetHostResolver from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. 
/build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostPolicy 5.0.0-criteo1 -> Microsoft.NETCore.DotNetHostResolver (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostResolver (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetAppHost from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostResolver 5.0.0-criteo1 -> Microsoft.NETCore.DotNetAppHost (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetAppHost (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/packaging/installers.proj(50,5): error MSB4181: The "MSBuild" task returned false but did not log an error. 0 Warning(s) 4 Error(s) ``` ### Configuration * We use the Centos7 docker image to build the CLR ### Regression? I was able to build and publish the dotnet-runtime-5.0.0-criteo1.tar.gz artifact until the RTM release (example: for the RC1 https://github.com/criteo-forks/runtime/releases/tag/v5.0.0-rc1-criteo1)
1.0
Unable to build a custom version of the dotnet/runtime repository with a suffix. - ### Description At Criteo, we build our own custom CLR. We do this because we have changes in the CLR that are specifics to our use-cases and it won't be accepted upstream (already discussed). To have my _dotnet-runtime-5.0.0-criteo1.tar.gz_ artifact, I use the command: `./build.sh -c Release /p:VersionSuffix=criteo1` Then I looked at the _artifacts/packages/Release/Shipping/_ folder to get the artifact and use it. Steps to reproduce (without applying the patches): ``` git clone https://github.com/dotnet/runtime cd runtime git checkout -b 5.0.0-rtm-criteo1 v5.0.0-rtm.20519.4 ./build.sh -c Release /p:VersionSuffix=criteo1 ``` Since the RTM, I got these errors ``` Build FAILED. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetHostPolicy from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.App.Internal 5.0.0-criteo1 -> Microsoft.NETCore.DotNetHostPolicy (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostPolicy (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetHostResolver from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. 
/build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostPolicy 5.0.0-criteo1 -> Microsoft.NETCore.DotNetHostResolver (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostResolver (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: Detected package downgrade: Microsoft.NETCore.DotNetAppHost from 5.0.0 to 5.0.0-criteo1. Reference the package directly from the project to select a different version. /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetHostResolver 5.0.0-criteo1 -> Microsoft.NETCore.DotNetAppHost (>= 5.0.0) /build/runtime/src/installer/pkg/projects/netcoreapp/sfx/Microsoft.NETCore.App.SharedFx.sfxproj : error NU1605: unused -> Microsoft.NETCore.DotNetAppHost (>= 5.0.0-criteo1) /build/runtime/src/installer/pkg/packaging/installers.proj(50,5): error MSB4181: The "MSBuild" task returned false but did not log an error. 0 Warning(s) 4 Error(s) ``` ### Configuration * We use the Centos7 docker image to build the CLR ### Regression? I was able to build and publish the dotnet-runtime-5.0.0-criteo1.tar.gz artifact until the RTM release (example: for the RC1 https://github.com/criteo-forks/runtime/releases/tag/v5.0.0-rc1-criteo1)
infrastructure
unable to build a custom version of the dotnet runtime repository with a suffix description at criteo we build our own custom clr we do this because we have changes in the clr that are specifics to our use cases and it won t be accepted upstream already discussed to have my dotnet runtime tar gz artifact i use the command build sh c release p versionsuffix then i looked at the artifacts packages release shipping folder to get the artifact and use it steps to reproduce without applying the patches git clone cd runtime git checkout b rtm rtm build sh c release p versionsuffix since the rtm i got these errors build failed build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error detected package downgrade microsoft netcore dotnethostpolicy from to reference the package directly from the project to select a different version build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore app internal microsoft netcore dotnethostpolicy build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore dotnethostpolicy build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error detected package downgrade microsoft netcore dotnethostresolver from to reference the package directly from the project to select a different version build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore dotnethostpolicy microsoft netcore dotnethostresolver build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore dotnethostresolver build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error detected package downgrade microsoft netcore dotnetapphost from to reference the package directly from the project to select a 
different version build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore dotnethostresolver microsoft netcore dotnetapphost build runtime src installer pkg projects netcoreapp sfx microsoft netcore app sharedfx sfxproj error unused microsoft netcore dotnetapphost build runtime src installer pkg packaging installers proj error the msbuild task returned false but did not log an error warning s error s configuration we use the docker image to build the clr regression i was able to build and publish the dotnet runtime tar gz artifact until the rtm release example for the
1
14,941
11,255,833,875
IssuesEvent
2020-01-12 12:19:43
stylelint/stylelint
https://api.github.com/repos/stylelint/stylelint
closed
Increase flow coverage
help wanted type: infrastructure
A few places we could gain some ground adding Flow annotations, without diving into the rules. - [x] lib/utils/getCacheFile - [x] lib/utils/hash.js - [x] lib/utils/isAfterSingleLineComment.js - [x] lib/utils/whitespaceChecker.js - [x] lib/formatters/needlessDisablesStringFormatter.js - [ ] lib/formatters/stringFormatter.js - [x] lib/formatters/verboseFormatter.js
1.0
Increase flow coverage - A few places we could gain some ground adding Flow annotations, without diving into the rules. - [x] lib/utils/getCacheFile - [x] lib/utils/hash.js - [x] lib/utils/isAfterSingleLineComment.js - [x] lib/utils/whitespaceChecker.js - [x] lib/formatters/needlessDisablesStringFormatter.js - [ ] lib/formatters/stringFormatter.js - [x] lib/formatters/verboseFormatter.js
infrastructure
increase flow coverage a few places we could gain some ground adding flow annotations without diving into the rules lib utils getcachefile lib utils hash js lib utils isaftersinglelinecomment js lib utils whitespacechecker js lib formatters needlessdisablesstringformatter js lib formatters stringformatter js lib formatters verboseformatter js
1
10,122
8,377,499,671
IssuesEvent
2018-10-06 01:56:12
RITlug/TigerOS
https://api.github.com/repos/RITlug/TigerOS
closed
Document Fedora Upgrade Process
infrastructure priority:med
Currently we are upgrading to Fedora 28. However, we don't have a process on how to properly do this. It would be nice if we had the documentation available to look at with each 6-month release cycle Fedora follows.
1.0
Document Fedora Upgrade Process - Currently we are upgrading to Fedora 28. However, we don't have a process on how to properly do this. It would be nice if we had the documentation available to look at with each 6-month release cycle Fedora follows.
infrastructure
document fedora upgrade process currently we are upgrading to fedora however we don t have a process on how to properly do this it would be nice if we had the documentation available to look at with each month release cycle fedora follows
1
442,181
12,741,231,488
IssuesEvent
2020-06-26 05:25:53
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0 staging-1595] Claiming troubles
Category: Gameplay Priority: Medium Status: Fixed Week Task
Step to reproduce: - place store on unowned property: ![image](https://user-images.githubusercontent.com/45708377/84270290-1d364880-ab33-11ea-90e4-365dc320f646.png) - claim this store: ![image](https://user-images.githubusercontent.com/45708377/84270329-2f17eb80-ab33-11ea-856d-342dcba4bf67.png) - unclaim: ![image](https://user-images.githubusercontent.com/45708377/84270375-4060f800-ab33-11ea-9e11-62f3f3104b04.png) Same with distribution station, but you can use it at all, because if you don't have an owner you can't add items: ![image](https://user-images.githubusercontent.com/45708377/84869309-86174680-b086-11ea-9cb8-b990441cc5e7.png)
1.0
[0.9.0 staging-1595] Claiming troubles - Step to reproduce: - place store on unowned property: ![image](https://user-images.githubusercontent.com/45708377/84270290-1d364880-ab33-11ea-90e4-365dc320f646.png) - claim this store: ![image](https://user-images.githubusercontent.com/45708377/84270329-2f17eb80-ab33-11ea-856d-342dcba4bf67.png) - unclaim: ![image](https://user-images.githubusercontent.com/45708377/84270375-4060f800-ab33-11ea-9e11-62f3f3104b04.png) Same with distribution station, but you can use it at all, because if you don't have an owner you can't add items: ![image](https://user-images.githubusercontent.com/45708377/84869309-86174680-b086-11ea-9cb8-b990441cc5e7.png)
non_infrastructure
claiming troubles step to reproduce place store on unowned property claim this store unclaim same with distribution station but you can use it at all because if you don t have an owner you can t add items
0
10,878
8,781,857,381
IssuesEvent
2018-12-19 21:46:41
elleFlorio/scalachain
https://api.github.com/repos/elleFlorio/scalachain
opened
Integrate sbt Native Packager docker image creation
enhancement good first issue infrastructure
The docker image defined in the [/docker](https://github.com/elleFlorio/scalachain/tree/master/docker) folder is good for development purposes. It would be good to integrate the sbt plugin for [sbt Native Packager](https://www.scala-sbt.org/sbt-native-packager/), in order to create a docker container running the Scalachain node binary. The image should have the correct name - `elleflorio/scalachain` - and the correct tags. This issue can be implemented inside the `docker-integration` branch.
1.0
Integrate sbt Native Packager docker image creation - The docker image defined in the [/docker](https://github.com/elleFlorio/scalachain/tree/master/docker) folder is good for development purposes. It would be good to integrate the sbt plugin for [sbt Native Packager](https://www.scala-sbt.org/sbt-native-packager/), in order to create a docker container running the Scalachain node binary. The image should have the correct name - `elleflorio/scalachain` - and the correct tags. This issue can be implemented inside the `docker-integration` branch.
infrastructure
integrate sbt native packager docker image creation the docker image defined in the folder is good for development purposes it would be good to integrate the sbt plugin for in order to create a docker container running the scalachain node binary the image should have the correct name elleflorio scalachain and the correct tags this issue can be implemented inside the docker integration branch
1
71,165
13,625,638,695
IssuesEvent
2020-09-24 09:47:35
drafthub/drafthub
https://api.github.com/repos/drafthub/drafthub
closed
enhance code quality of `core.models`
code quality good first issue help wanted
here are some complaints reported by pylint. ``` $ docker-compose exec web python check.py lint | grep -v docstring | grep core | grep models ************* Module drafthub.core.models drafthub/core/models.py:45:0: C0305: Trailing newlines (trailing-newlines) drafthub/core/models.py:10:4: E0307: __str__ does not return str (invalid-str-returned) drafthub/core/models.py:15:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:15:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:23:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:23:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:31:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:31:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:42:4: R0903: Too few public methods (0/2) (too-few-public-methods) ``` See how you can contribute: [CONTRIBUTING.md](https://github.com/drafthub/drafthub/blob/master/CONTRIBUTING.md)
1.0
enhance code quality of `core.models` - here are some complaints reported by pylint. ``` $ docker-compose exec web python check.py lint | grep -v docstring | grep core | grep models ************* Module drafthub.core.models drafthub/core/models.py:45:0: C0305: Trailing newlines (trailing-newlines) drafthub/core/models.py:10:4: E0307: __str__ does not return str (invalid-str-returned) drafthub/core/models.py:15:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:15:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:23:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:23:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:31:8: C0103: Variable name "Draft" doesn't conform to snake_case naming style (invalid-name) drafthub/core/models.py:31:16: E1101: Instance of 'Blog' has no 'my_drafts' member (no-member) drafthub/core/models.py:42:4: R0903: Too few public methods (0/2) (too-few-public-methods) ``` See how you can contribute: [CONTRIBUTING.md](https://github.com/drafthub/drafthub/blob/master/CONTRIBUTING.md)
non_infrastructure
enhance code quality of core models here are some complaints reported by pylint docker compose exec web python check py lint grep v docstring grep core grep models module drafthub core models drafthub core models py trailing newlines trailing newlines drafthub core models py str does not return str invalid str returned drafthub core models py variable name draft doesn t conform to snake case naming style invalid name drafthub core models py instance of blog has no my drafts member no member drafthub core models py variable name draft doesn t conform to snake case naming style invalid name drafthub core models py instance of blog has no my drafts member no member drafthub core models py variable name draft doesn t conform to snake case naming style invalid name drafthub core models py instance of blog has no my drafts member no member drafthub core models py too few public methods too few public methods see how you can contribute
0
21,094
14,360,999,872
IssuesEvent
2020-11-30 17:37:09
servo/servo
https://api.github.com/repos/servo/servo
closed
Permanent timeout in test-android-startup job
A-infrastructure I-bustage
``` + ./mach test-android-startup --release Assuming --target i686-linux-android Couldn't statvfs() path: No such file or directory emulator: Requested console port 5580: Inferring adb port 5581. emulator: WARNING: cannot read adb public key file: /root/.android/adbkey.pub emulator: WARNING: Your AVD has been configured with an in-guest renderer, but the system image does not support guest rendering.Falling back to 'swiftshader_indirect' mode. qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.abm [bit 5] qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.abm [bit 5] pulseaudio: pa_context_connect() failed pulseaudio: Reason: Connection refused pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone emulator: Cold boot: requested by the user * daemon not running; starting now at tcp:5037 * daemon started successfully ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone emulator: INFO: boot completed ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone Success Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=org.mozilla.servo/.MainActivity (has extras) } --------- beginning of system --------- beginning of main --------- beginning of crash [taskcluster:error] Task timeout after 1800 seconds. Force killing container. ``` https://tools.taskcluster.net/groups/X33ARXIMS5WDXIOk0PKwHg/tasks/LjWoG6vwQwSkWVwokP9dvA/runs/0/logs/public%2Flogs%2Flive.log cc @SimonSapin I have seen this on two jobs today so far.
1.0
Permanent timeout in test-android-startup job - ``` + ./mach test-android-startup --release Assuming --target i686-linux-android Couldn't statvfs() path: No such file or directory emulator: Requested console port 5580: Inferring adb port 5581. emulator: WARNING: cannot read adb public key file: /root/.android/adbkey.pub emulator: WARNING: Your AVD has been configured with an in-guest renderer, but the system image does not support guest rendering.Falling back to 'swiftshader_indirect' mode. qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.abm [bit 5] qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.abm [bit 5] pulseaudio: pa_context_connect() failed pulseaudio: Reason: Connection refused pulseaudio: Failed to initialize PA contextaudio: Could not init `pa' audio driver ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone emulator: Cold boot: requested by the user * daemon not running; starting now at tcp:5037 * daemon started successfully ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone emulator: INFO: boot completed ### WARNING: could not find /usr/share/zoneinfo/ directory. unable to determine host timezone Success Starting: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] cmp=org.mozilla.servo/.MainActivity (has extras) } --------- beginning of system --------- beginning of main --------- beginning of crash [taskcluster:error] Task timeout after 1800 seconds. Force killing container. ``` https://tools.taskcluster.net/groups/X33ARXIMS5WDXIOk0PKwHg/tasks/LjWoG6vwQwSkWVwokP9dvA/runs/0/logs/public%2Flogs%2Flive.log cc @SimonSapin I have seen this on two jobs today so far.
infrastructure
permanent timeout in test android startup job mach test android startup release assuming target linux android couldn t statvfs path no such file or directory emulator requested console port inferring adb port emulator warning cannot read adb public key file root android adbkey pub emulator warning your avd has been configured with an in guest renderer but the system image does not support guest rendering falling back to swiftshader indirect mode qemu system warning host doesn t support requested feature cpuid ecx abm qemu system warning host doesn t support requested feature cpuid ecx abm pulseaudio pa context connect failed pulseaudio reason connection refused pulseaudio failed to initialize pa contextaudio could not init pa audio driver warning could not find usr share zoneinfo directory unable to determine host timezone emulator cold boot requested by the user daemon not running starting now at tcp daemon started successfully warning could not find usr share zoneinfo directory unable to determine host timezone emulator info boot completed warning could not find usr share zoneinfo directory unable to determine host timezone success starting intent act android intent action main cat cmp org mozilla servo mainactivity has extras beginning of system beginning of main beginning of crash task timeout after seconds force killing container cc simonsapin i have seen this on two jobs today so far
1
12,477
9,798,904,061
IssuesEvent
2019-06-11 13:24:23
astropy/regions
https://api.github.com/repos/astropy/regions
reopened
Example breaks windows testing
bug infrastructure testing
#263 has enable doctests and as a result the appveyor build is complaining file access error as it's being used by another process. (The weird thing is that all tests pass, so it would be nice to be able to turn this strictness off). The workaround I ended up with is to allow the failures of the appveyor jobs to let the release procedure for 0.4 proceed. cc @pllim @astrofrog @saimn as you are the ones who solved similar issues in astropy before. ``` ============= 955 passed, 8 skipped, 13 xfailed in 35.62 seconds ============== Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\conda\envs\test\lib\site-packages\astropy\utils\decorators.py", line 860, in test func = make_function_with_signature(func, name=name, **wrapped_args) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 260, in test return runner.run_tests(**kwargs) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 605, in run_tests return super().run_tests(**kwargs) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 242, in run_tests return pytest.main(args=args, plugins=plugins) File "C:\conda\envs\test\lib\site-packages\astropy\config\paths.py", line 182, in __exit__ shutil.rmtree(self._path) File "C:\conda\envs\test\lib\shutil.py", line 513, in rmtree return _rmtree_unsafe(path, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 397, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\conda\envs\test\lib\shutil.py", line 395, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 
'C:\\Users\\appveyor\\AppData\\Local\\Temp\\1\\tmppexf4t80astropy_cache\\astropy\\download\\py3\\2c9202ae878ecfcb60878ceb63837f5f' ```
1.0
Example breaks windows testing - #263 has enable doctests and as a result the appveyor build is complaining file access error as it's being used by another process. (The weird thing is that all tests pass, so it would be nice to be able to turn this strictness off). The workaround I ended up with is to allow the failures of the appveyor jobs to let the release procedure for 0.4 proceed. cc @pllim @astrofrog @saimn as you are the ones who solved similar issues in astropy before. ``` ============= 955 passed, 8 skipped, 13 xfailed in 35.62 seconds ============== Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\conda\envs\test\lib\site-packages\astropy\utils\decorators.py", line 860, in test func = make_function_with_signature(func, name=name, **wrapped_args) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 260, in test return runner.run_tests(**kwargs) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 605, in run_tests return super().run_tests(**kwargs) File "C:\conda\envs\test\lib\site-packages\astropy\tests\runner.py", line 242, in run_tests return pytest.main(args=args, plugins=plugins) File "C:\conda\envs\test\lib\site-packages\astropy\config\paths.py", line 182, in __exit__ shutil.rmtree(self._path) File "C:\conda\envs\test\lib\shutil.py", line 513, in rmtree return _rmtree_unsafe(path, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 392, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\conda\envs\test\lib\shutil.py", line 397, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\conda\envs\test\lib\shutil.py", line 395, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being 
used by another process: 'C:\\Users\\appveyor\\AppData\\Local\\Temp\\1\\tmppexf4t80astropy_cache\\astropy\\download\\py3\\2c9202ae878ecfcb60878ceb63837f5f' ```
infrastructure
example breaks windows testing has enable doctests and as a result the appveyor build is complaining file access error as it s being used by another process the weird thing is that all tests pass so it would be nice to be able to turn this strictness off the workaround i ended up with is to allow the failures of the appveyor jobs to let the release procedure for proceed cc pllim astrofrog saimn as you are the ones who solved similar issues in astropy before passed skipped xfailed in seconds traceback most recent call last file line in file c conda envs test lib site packages astropy utils decorators py line in test func make function with signature func name name wrapped args file c conda envs test lib site packages astropy tests runner py line in test return runner run tests kwargs file c conda envs test lib site packages astropy tests runner py line in run tests return super run tests kwargs file c conda envs test lib site packages astropy tests runner py line in run tests return pytest main args args plugins plugins file c conda envs test lib site packages astropy config paths py line in exit shutil rmtree self path file c conda envs test lib shutil py line in rmtree return rmtree unsafe path onerror file c conda envs test lib shutil py line in rmtree unsafe rmtree unsafe fullname onerror file c conda envs test lib shutil py line in rmtree unsafe rmtree unsafe fullname onerror file c conda envs test lib shutil py line in rmtree unsafe rmtree unsafe fullname onerror file c conda envs test lib shutil py line in rmtree unsafe onerror os unlink fullname sys exc info file c conda envs test lib shutil py line in rmtree unsafe os unlink fullname permissionerror the process cannot access the file because it is being used by another process c users appveyor appdata local temp cache astropy download
1
22,499
15,224,201,844
IssuesEvent
2021-02-18 04:38:16
hyphacoop/organizing
https://api.github.com/repos/hyphacoop/organizing
opened
Identify existing implicit roles
wg:business-planning wg:finance wg:governance wg:infrastructure wg:operations
<sup>_This initial comment is collaborative and open to modification by all._</sup> 📅 **Due date:** March 1, 2021 ## Task Summary Each WG to discuss and identify any implicit roles currently operating in the WG; they can list them here in a comment or add them directly to the miro board: https://miro.com/app/board/o9J_lVt9EFQ=/. Associated with Call me Chrysalis initiative. ## To Do for each WG, check off when complete - [ ] bizdev - [ ] gov - [ ] ops - [ ] infra - [ ] finance - [ ] cmc: add to miro board
1.0
Identify existing implicit roles - <sup>_This initial comment is collaborative and open to modification by all._</sup> 📅 **Due date:** March 1, 2021 ## Task Summary Each WG to discuss and identify any implicit roles currently operating in the WG; they can list them here in a comment or add them directly to the miro board: https://miro.com/app/board/o9J_lVt9EFQ=/. Associated with Call me Chrysalis initiative. ## To Do for each WG, check off when complete - [ ] bizdev - [ ] gov - [ ] ops - [ ] infra - [ ] finance - [ ] cmc: add to miro board
infrastructure
identify existing implicit roles this initial comment is collaborative and open to modification by all 📅 due date march task summary each wg to discuss and identify any implicit roles currently operating in the wg the can list them here in a comment or add them directly to the miro board associated with call me chrysalis initiative to do for each wg check off when complete bizdev gov ops infra finance cmc add to miro board
1
26,108
19,668,650,882
IssuesEvent
2022-01-11 03:08:30
APSIMInitiative/ApsimX
https://api.github.com/repos/APSIMInitiative/ApsimX
closed
Apsim should be able to download soil descriptions from web services
newfeature interface/infrastructure
ASRIS and ISRIC provide web services which yield soil descriptions for any specified location. Apsim should be able to access these web services to populate its soil descriptions.
1.0
Apsim should be able to download soil descriptions from web services - ASRIS and ISRIC provide web services which yield soil descriptions for any specified location. Apsim should be able to access these web services to populate its soil descriptions.
infrastructure
apsim should be able to download soil descriptions from web services asris and isric provide web services which yield soil descriptions for any specified location apsim should be able to access these web services to populate its soil descriptions
1
344
2,652,902,416
IssuesEvent
2015-03-16 19:58:04
mroth/emojitrack
https://api.github.com/repos/mroth/emojitrack
closed
admin pages bootstrap 3 transition
infrastructure
and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively
1.0
admin pages bootstrap 3 transition - and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively
infrastructure
admin pages bootstrap transition and redesign a little to be more legible on mobile so i can check up on things remotely more effectively
1
476,368
13,737,377,410
IssuesEvent
2020-10-05 13:06:46
root-project/root
https://api.github.com/repos/root-project/root
opened
[TMVA] Provide support in MethodPyKeras for tensorflow.keras
affects:6.22 affects:master in:TMVA new feature priority:critical
Currently in ROOT version 6.22 Tensorflow is supported but with a standalone keras (with Keras version >= 2.3) The keras shipped with tensorflow (tf.keras) is instead not supported. It is now needed since LCG software distributions (from LCG 98) do not have anymore Keras
1.0
[TMVA] Provide support in MethodPyKeras for tensorflow.keras - Currently in ROOT version 6.22 Tensorflow is supported but with a standalone keras (with Keras version >= 2.3) The keras shipped with tensorflow (tf.keras) is instead not supported. It is now needed since LCG software distributions (from LCG 98) do not have anymore Keras
non_infrastructure
provide support in methodpykeras for tensorflow keras currently in root version tensorflow is supported but with a standalone keras with keras version the keras shipped with tensorflow tf keras is instead not supported it is now needed since lcg software distributions from lcg do not have anymore keras
0
11,421
9,187,415,754
IssuesEvent
2019-03-06 02:47:31
ansible/ansible
https://api.github.com/repos/ansible/ansible
closed
module gunicorn hardcodes path to temp directory
affects_2.3 bug module support:community web_infrastructure
<!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and devel branch are affected too. --> ##### ISSUE TYPE <!--- Pick one below and delete the rest --> - Bug Report ##### COMPONENT NAME <!--- Name of the module, plugin, task or feature Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path --> gunicorn ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes below --> ``` ansible 2.3.1.0 config file = configured module search path = Default w/o overrides python version = 2.7.13 (default, Feb 1 2017, 13:04:42) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] ``` ##### CONFIGURATION <!--- If using Ansible 2.4 or above, paste the results of "ansible-config dump --only-changed" Otherwise, mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say "N/A" for anything that is not platform-specific. Also mention the specific version of what you are trying to control, e.g. if this is a network bug the version of firmware on the network device. --> Mac OS X 10.12.6 ##### SUMMARY <!--- Explain the problem briefly --> The gunicorn module is hardcoding references to a '/tmp' directory instead of using Python's tempfile. It assumes that /tmp is the always the TMP directory, which is not the case. See https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/web_infrastructure/gunicorn.py#L134 ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. 
--> N/A - found using code inspection <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` ```
1.0
module gunicorn hardcodes path to temp directory - <!--- Verify first that your issue/request is not already reported on GitHub. Also test if the latest release, and devel branch are affected too. --> ##### ISSUE TYPE <!--- Pick one below and delete the rest --> - Bug Report ##### COMPONENT NAME <!--- Name of the module, plugin, task or feature Do not include extra details here, e.g. "vyos_command" not "the network module vyos_command" or the full path --> gunicorn ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes below --> ``` ansible 2.3.1.0 config file = configured module search path = Default w/o overrides python version = 2.7.13 (default, Feb 1 2017, 13:04:42) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] ``` ##### CONFIGURATION <!--- If using Ansible 2.4 or above, paste the results of "ansible-config dump --only-changed" Otherwise, mention any settings you have changed/added/removed in ansible.cfg (or using the ANSIBLE_* environment variables). --> ##### OS / ENVIRONMENT <!--- Mention the OS you are running Ansible from, and the OS you are managing, or say "N/A" for anything that is not platform-specific. Also mention the specific version of what you are trying to control, e.g. if this is a network bug the version of firmware on the network device. --> Mac OS X 10.12.6 ##### SUMMARY <!--- Explain the problem briefly --> The gunicorn module is hardcoding references to a '/tmp' directory instead of using Python's tempfile. It assumes that /tmp is the always the TMP directory, which is not the case. See https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/web_infrastructure/gunicorn.py#L134 ##### STEPS TO REPRODUCE <!--- For bugs, show exactly how to reproduce the problem, using a minimal test-case. For new features, show how the feature would be used. 
--> N/A - found using code inspection <!--- Paste example playbooks or commands between quotes below --> ```yaml ``` <!--- You can also paste gist.github.com links for larger files --> ##### EXPECTED RESULTS <!--- What did you expect to happen when running the steps above? --> ##### ACTUAL RESULTS <!--- What actually happened? If possible run with extra verbosity (-vvvv) --> <!--- Paste verbatim command output between quotes below --> ``` ```
infrastructure
module gunicorn hardcodes path to temp directory verify first that your issue request is not already reported on github also test if the latest release and devel branch are affected too issue type bug report component name name of the module plugin task or feature do not include extra details here e g vyos command not the network module vyos command or the full path gunicorn ansible version ansible config file configured module search path default w o overrides python version default feb configuration if using ansible or above paste the results of ansible config dump only changed otherwise mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say n a for anything that is not platform specific also mention the specific version of what you are trying to control e g if this is a network bug the version of firmware on the network device mac os x summary the gunicorn module is hardcoding references to a tmp directory instead of using python s tempfile it assumes that tmp is the always the tmp directory which is not the case see steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used n a found using code inspection yaml expected results actual results
1
2,322
3,618,426,913
IssuesEvent
2016-02-08 11:31:55
jaagameister/learning
https://api.github.com/repos/jaagameister/learning
opened
application logging
infrastructure
basically every user input and state change should be logged with a time stamp - new skill - new mission
1.0
application logging - basically every user input and state change should be logged with a time stamp - new skill - new mission
infrastructure
application logging basically every user input and state change should be logged with a time stamp new skill new mission
1
31,385
25,602,216,518
IssuesEvent
2022-12-01 21:19:49
iree-org/iree
https://api.github.com/repos/iree-org/iree
closed
Migrate e2e benchmark module tests to e2e test framework
infrastructure
The follow-up of #10442 to migrate e2e module tests from the current benchmark suites to e2e test framework.
1.0
Migrate e2e benchmark module tests to e2e test framework - The follow-up of #10442 to migrate e2e module tests from the current benchmark suites to e2e test framework.
infrastructure
migrate benchmark module tests to test framework the follow up of to migrate module tests from the current benchmark suites to test framework
1
23,871
16,657,236,692
IssuesEvent
2021-06-05 18:56:40
toccatina/altoclef
https://api.github.com/repos/toccatina/altoclef
closed
Move TaskCatalogue to file where users can add/edit recipes if they want
duplicate enhancement infrastructure
Create a file format that handles recipes, so that people can add custom recipes easily. Examples: ``` mine dirt from [dirt, grass_block, grass_path] mine log[acacia_log, birch_log, crimson_log, dark_oak_log, oak_log, jungle_log, spruce_log, warped_log] from [acacia_log, birch_log, crimson_log, dark_oak_log, oak_log, jungle_log, spruce_log, warped_log]; any dimension mine cobblestone from [stone, cobblestone] with wooden_pickaxe mine netherrack with wooden_pickaxe; nether simple planks from CollectPlanksTask; dont automine simple cobblestone from CollectCobblestoneTask; dont automine mob gunpowder CreeperEntity smelt stone from cobblestone; dont automine recipe2x2 crafting_table from [planks, planks, planks, planks] recipe2x2 torch from [coal, , stick, ] recipeSlab cobblestone_slab using cobblestone recipeStairs cobblestone_stairs using cobblestone recipeWall cobblestone_wall using cobblestone tools wooden from planks alias wooden_pick from wooden_pickaxe mobCook porkchop PigEntity.class crop carrot break carrot plant carrot crop beetroot break beetroots plant beetroot_seeds crop beetroot_seeds break beetroots plant beetroot_seeds ``` PROBLEM: How to deal with stuff like this? ```java woodTasks("log", wood -> wood.log, (wood, count) -> new MineAndCollectTask(wood.log, count, new Block[]{Block.getBlockFromItem(wood.log)}, MiningRequirement.HAND)); ``` I could manually write those out. For example: ``` mine acacia_log mine birch_log etc... ``` I think since this is a text based file it wouldn't hurt to add some extra lines and a comment system to make things more legible. We'll stick with that for now.
1.0
Move TaskCatalogue to file where users can add/edit recipes if they want - Create a file format that handles recipes, so that people can add custom recipes easily. Examples: ``` mine dirt from [dirt, grass_block, grass_path] mine log[acacia_log, birch_log, crimson_log, dark_oak_log, oak_log, jungle_log, spruce_log, warped_log] from [acacia_log, birch_log, crimson_log, dark_oak_log, oak_log, jungle_log, spruce_log, warped_log]; any dimension mine cobblestone from [stone, cobblestone] with wooden_pickaxe mine netherrack with wooden_pickaxe; nether simple planks from CollectPlanksTask; dont automine simple cobblestone from CollectCobblestoneTask; dont automine mob gunpowder CreeperEntity smelt stone from cobblestone; dont automine recipe2x2 crafting_table from [planks, planks, planks, planks] recipe2x2 torch from [coal, , stick, ] recipeSlab cobblestone_slab using cobblestone recipeStairs cobblestone_stairs using cobblestone recipeWall cobblestone_wall using cobblestone tools wooden from planks alias wooden_pick from wooden_pickaxe mobCook porkchop PigEntity.class crop carrot break carrot plant carrot crop beetroot break beetroots plant beetroot_seeds crop beetroot_seeds break beetroots plant beetroot_seeds ``` PROBLEM: How to deal with stuff like this? ```java woodTasks("log", wood -> wood.log, (wood, count) -> new MineAndCollectTask(wood.log, count, new Block[]{Block.getBlockFromItem(wood.log)}, MiningRequirement.HAND)); ``` I could manually write those out. For example: ``` mine acacia_log mine birch_log etc... ``` I think since this is a text based file it wouldn't hurt to add some extra lines and a comment system to make things more legible. We'll stick with that for now.
infrastructure
move taskcatalogue to file where users can add edit recipies if they want create a file format that handles recipes so that people can add custom recipes easily examples mine dirt from mine log from any dimension mine cobblestone from with wooden pickaxe mine netherrack with wooden pickaxe nether simple planks from collectplankstask dont automine simple cobblestone from collectcobblestonetask dont automine mob gunpowder creeperentity smelt stone from cobblestone dont automine crafting table from torch from recipeslab cobblestone slab using cobblestone recipestairs cobblestone stairs using cobblestone recipewall cobblestone wall using cobblestone tools wooden from planks alias wooden pick from wooden pickaxe mobcook porkchop pigentity class crop carrot break carrot plant carrot crop beetroot break beetroots plant beetroot seeds crop beetroot seeds break beetroots plant beetroot seeds problem how to deal with stuff like this java woodtasks log wood wood log wood count new mineandcollecttask wood log count new block block getblockfromitem wood log miningrequirement hand i could manually write those out for example mine acacia log mine birch log etc i think since this is a text based file it wouldn t hurt to add some extra lines and a comment system to make things more legible we ll stick with that for now
1
35,362
31,076,399,398
IssuesEvent
2023-08-12 14:37:01
softeerbootcamp-2nd/H2-O
https://api.github.com/repos/softeerbootcamp-2nd/H2-O
opened
bug: fix schema and DDL
:bug: bug :leaves: backend :globe_with_meridians: infrastructure 📂 database
### 📘 수정상황 - 옵션 및 패키지의 카테고리를 별개의 테이블로 유지하지 않고 옵션과 패키지의 컬럼으로 추가 - 추가 옵션의 경우 : [상세품목, 악세사리, 휠] 중 하나 - 기본 옵션의 경우 : [파워트레인/성능, 지능형 안전기술, 안전, 외관, 내장, 시트, 편의, 멀티미디어] 중 하나 - `category` 테이블 제거 - `options_category` 테이블 제거 - `package_category` 테이블 제거 ### 📗 세부사항 - ERD 수정 - DDL 작성 자동화 스크립트 수정 - 운영 DB에 반영
1.0
bug: fix schema and DDL - ### 📘 Changes - Instead of keeping the option and package categories in a separate table, add them as columns on the option and package tables - For additional options: one of [detailed item, accessory, wheel] - For default options: one of [powertrain/performance, intelligent safety technology, safety, exterior, interior, seats, convenience, multimedia] - Remove the `category` table - Remove the `options_category` table - Remove the `package_category` table ### 📗 Details - Update the ERD - Update the DDL generation automation script - Apply to the production DB
infrastructure
bug fix schema and ddl 📘 changes instead of keeping the option and package categories in a separate table add them as columns on the option and package tables for additional options one of for default options one of remove the category table remove the options category table remove the package category table 📗 details update the erd update the ddl generation automation script apply to the production db
1
157,025
12,343,201,705
IssuesEvent
2020-05-15 03:14:01
google/knative-gcp
https://api.github.com/repos/google/knative-gcp
closed
Investigate whether it's possible to dump the logs of relevant resources when E2E test fails
area/test-and-release kind/feature-request
**Problem** Currently it's quite difficult to debug E2E tests when E2E tests fails. For example, https://prow.knative.dev/view/gcs/knative-prow/pr-logs/pull/google_knative-gcp/1028/pull-google-knative-gcp-wi-tests/1258956641588482049. If you just take a look at this prow job link, it's hard to figure out the root cause of the E2E test failure. Devs need to try to reproduce the same failure in local and try to get the logs of the exact pod which goes wrong. This can be quite tricky because devs need to catch the logs before the E2E tests fail and the test namespace gets terminated. For example, for some E2E tests running with WI, sometimes it fail because of the error of cre-pull as follows: `"error":"unable to create sunscription \"cre-pull-99b0ac99-9d81-44a9-b340-236b4c917490\", rpc error: code = Unauthenticated desc = transport: compute: Received 403 \nUnable to generate token; IAM returned 403 Forbidden: The caller does not have permission\n\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\n\nYou can create the necessary policy binding with:\n\n gcloud iam service-accounts add-iam-policy-binding \\\n --role=roles/iam.workloadIdentityUser \\\n --member=\"serviceAccount:xiyue-opensource-project-wi.svc.id.goog[test-cloud-storage-source-with-g-c-p-broker-s7xxf/cre-pubsub]\" \\\n cre-pubsub@xiyue-opensource-project-wi.iam.gserviceaccount.com\n\nFor more information, refer to the Workload Identity documentation:\n\n https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity\n\n","stacktrace":"main.main\n\tgithub.com/google/knative-gcp/cmd/pubsub/receive_adapter/main.go:93\nruntime.main\n\truntime/proc.go:203"}`. However, such error logs never appear in the logs of prow job. In addition, currently we have some flaky tests. However, when you run it individually with `go test`, they always pass. 
For example, like `CloudStorageSourceBrokerWithPubSubChannel` and `CloudAuditLogsSourceWithGCPBroker`, I ran them 10 times in local, they always pass. They are just flaky when we run all E2E tests together. Without the dumping of logs of relevant sources, it's difficult for us to find the root cause of these flaky tests. **Exit Criteria** When a E2E test fails, the logs of all relevant resources in the test namespace should get dumped. **Time Estimate (optional):** How many developer-days do you think this may take to resolve? **Additional context (optional)** Add any other context about the feature request here.
1.0
Investigate whether it's possible to dump the logs of relevant resources when E2E test fails - **Problem** Currently it's quite difficult to debug E2E tests when E2E tests fails. For example, https://prow.knative.dev/view/gcs/knative-prow/pr-logs/pull/google_knative-gcp/1028/pull-google-knative-gcp-wi-tests/1258956641588482049. If you just take a look at this prow job link, it's hard to figure out the root cause of the E2E test failure. Devs need to try to reproduce the same failure in local and try to get the logs of the exact pod which goes wrong. This can be quite tricky because devs need to catch the logs before the E2E tests fail and the test namespace gets terminated. For example, for some E2E tests running with WI, sometimes it fail because of the error of cre-pull as follows: `"error":"unable to create sunscription \"cre-pull-99b0ac99-9d81-44a9-b340-236b4c917490\", rpc error: code = Unauthenticated desc = transport: compute: Received 403 \nUnable to generate token; IAM returned 403 Forbidden: The caller does not have permission\n\nThis error could be caused by a missing IAM policy binding on the target IAM service account.\n\nYou can create the necessary policy binding with:\n\n gcloud iam service-accounts add-iam-policy-binding \\\n --role=roles/iam.workloadIdentityUser \\\n --member=\"serviceAccount:xiyue-opensource-project-wi.svc.id.goog[test-cloud-storage-source-with-g-c-p-broker-s7xxf/cre-pubsub]\" \\\n cre-pubsub@xiyue-opensource-project-wi.iam.gserviceaccount.com\n\nFor more information, refer to the Workload Identity documentation:\n\n https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity\n\n","stacktrace":"main.main\n\tgithub.com/google/knative-gcp/cmd/pubsub/receive_adapter/main.go:93\nruntime.main\n\truntime/proc.go:203"}`. However, such error logs never appear in the logs of prow job. In addition, currently we have some flaky tests. However, when you run it individually with `go test`, they always pass. 
For example, like `CloudStorageSourceBrokerWithPubSubChannel` and `CloudAuditLogsSourceWithGCPBroker`, I ran them 10 times in local, they always pass. They are just flaky when we run all E2E tests together. Without the dumping of logs of relevant sources, it's difficult for us to find the root cause of these flaky tests. **Exit Criteria** When a E2E test fails, the logs of all relevant resources in the test namespace should get dumped. **Time Estimate (optional):** How many developer-days do you think this may take to resolve? **Additional context (optional)** Add any other context about the feature request here.
non_infrastructure
investigate whether it s possible to dump the logs of relevant resources when test fails problem currently it s quite difficult to debug tests when tests fails for example if you just take a look at this prow job link it s hard to figure out the root cause of the test failure devs need to try to reproduce the same failure in local and try to get the logs of the exact pod which goes wrong this can be quite tricky because devs need to catch the logs before the tests fail and the test namespace gets terminated for example for some tests running with wi sometimes it fail because of the error of cre pull as follows error unable to create sunscription cre pull rpc error code unauthenticated desc transport compute received nunable to generate token iam returned forbidden the caller does not have permission n nthis error could be caused by a missing iam policy binding on the target iam service account n nyou can create the necessary policy binding with n n gcloud iam service accounts add iam policy binding n role roles iam workloadidentityuser n member serviceaccount xiyue opensource project wi svc id goog n cre pubsub xiyue opensource project wi iam gserviceaccount com n nfor more information refer to the workload identity documentation n n however such error logs never appear in the logs of prow job in addition currently we have some flaky tests however when you run it individually with go test they always pass for example like cloudstoragesourcebrokerwithpubsubchannel and cloudauditlogssourcewithgcpbroker i ran them times in local they always pass they are just flaky when we run all tests together without the dumping of logs of relevant sources it s difficult for us to find the root cause of these flaky tests exit criteria when a test fails the logs of all relevant resources in the test namespace should get dumped time estimate optional how many developer days do you think this may take to resolve additional context optional add any other context about the feature 
request here
0
124,065
10,292,490,675
IssuesEvent
2019-08-27 14:33:27
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
teamcity: failed test: _escapes_direct=false
C-test-failure O-robot
The following tests appear to have failed on master (testrace): _escapes_direct=false You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_escapes_direct=false). [#1451983](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451983): ``` _escapes_direct=false ... in panic. ------- Stdout: ------- I190823 22:18:12.497394 861 sql/event_log.go:130 [n1,client=127.0.0.1:46736,user=root] Event: "create_database", target: 101, info: {DatabaseName:d24 Statement:CREATE DATABASE d24 User:root} I190823 22:18:12.665744 12674 storage/replica_command.go:598 [n1,merge,s1,r50/1:/Table/78{-/1}] initiating a merge of r49:/Table/{78/1-80} [(n1,s1):1, next=2, gen=18] into this range (lhs+rhs has (size=0 B+64 KiB qps=0.00+1.87 --> 1.87qps) below threshold (size=64 KiB, qps=1.87)) I190823 22:18:12.731634 215 storage/store.go:2593 [n1,s1,r50/1:/Table/78{-/1}] removing replica r49/1 W190823 22:18:12.811612 141 sql/schema_changer.go:949 [n1,scExec] waiting to update leases: error with attached stack trace: github.com/cockroachdb/cockroach/pkg/sql.LeaseStore.WaitForOneVersion /go/src/github.com/cockroachdb/cockroach/pkg/sql/lease.go:314 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).waitToUpdateLeases /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1201 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:948 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:964 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1961 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2226 github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1 
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1337 - error with embedded safe details: ID %d is not a table -- arg 1: <sqlbase.ID> - ID 100 is not a table I190823 22:18:12.877478 12688 storage/replica_command.go:284 [n1,s1,r73/1:/{Table/100-Max}] initiating a split of this range at key /Table/102/1 [r74] (manual) I190823 22:18:12.939143 12687 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I190823 22:18:12.970268 12648 storage/replica_command.go:284 [n1,split,s1,r73/1:/{Table/100-Max}] initiating a split of this range at key /Table/102 [r75] (zone config) I190823 22:18:13.011894 12648 storage/split_queue.go:144 [n1,split,s1,r73/1:/Table/10{0-2/1}] split saw concurrent descriptor modification; maybe retrying I190823 22:18:13.013289 12824 storage/replica_command.go:284 [n1,split,s1,r73/1:/Table/10{0-2/1}] initiating a split of this range at key /Table/102 [r76] (zone config) W190823 22:18:13.097421 12688 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190823 22:18:13.109319 214 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190823 22:18:13.109917 214 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190823 22:18:13.424441 861 sql/sqlbase/structured.go:1511 [n1,client=127.0.0.1:46736,user=root] publish: descID=102 (t) version=3 mtime=2019-08-23 22:18:13.272971305 +0000 UTC I190823 22:18:13.587807 861 sql/event_log.go:130 [n1,client=127.0.0.1:46736,user=root] Event: "drop_database", target: 101, info: {DatabaseName:d24 Statement:DROP DATABASE d24 User:root DroppedSchemaObjects:[d24.public.t]} I190823 22:18:13.661696 861 sql/sqlbase/structured.go:1511 [n1,client=127.0.0.1:46736,user=root,scExec] publish: descID=102 (t) version=4 mtime=2019-08-23 22:18:13.660206068 +0000 UTC ``` Please assign, take a look and update the issue accordingly.
1.0
teamcity: failed test: _escapes_direct=false - The following tests appear to have failed on master (testrace): _escapes_direct=false You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+_escapes_direct=false). [#1451983](https://teamcity.cockroachdb.com/viewLog.html?buildId=1451983): ``` _escapes_direct=false ... in panic. ------- Stdout: ------- I190823 22:18:12.497394 861 sql/event_log.go:130 [n1,client=127.0.0.1:46736,user=root] Event: "create_database", target: 101, info: {DatabaseName:d24 Statement:CREATE DATABASE d24 User:root} I190823 22:18:12.665744 12674 storage/replica_command.go:598 [n1,merge,s1,r50/1:/Table/78{-/1}] initiating a merge of r49:/Table/{78/1-80} [(n1,s1):1, next=2, gen=18] into this range (lhs+rhs has (size=0 B+64 KiB qps=0.00+1.87 --> 1.87qps) below threshold (size=64 KiB, qps=1.87)) I190823 22:18:12.731634 215 storage/store.go:2593 [n1,s1,r50/1:/Table/78{-/1}] removing replica r49/1 W190823 22:18:12.811612 141 sql/schema_changer.go:949 [n1,scExec] waiting to update leases: error with attached stack trace: github.com/cockroachdb/cockroach/pkg/sql.LeaseStore.WaitForOneVersion /go/src/github.com/cockroachdb/cockroach/pkg/sql/lease.go:314 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).waitToUpdateLeases /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1201 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:948 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChanger).exec /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:964 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1.1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:1961 github.com/cockroachdb/cockroach/pkg/sql.(*SchemaChangeManager).Start.func1 /go/src/github.com/cockroachdb/cockroach/pkg/sql/schema_changer.go:2226 
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1 /go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1337 - error with embedded safe details: ID %d is not a table -- arg 1: <sqlbase.ID> - ID 100 is not a table I190823 22:18:12.877478 12688 storage/replica_command.go:284 [n1,s1,r73/1:/{Table/100-Max}] initiating a split of this range at key /Table/102/1 [r74] (manual) I190823 22:18:12.939143 12687 ccl/importccl/read_import_proc.go:83 [n1,import-distsql-ingest] could not fetch file size; falling back to per-file progress: bad ContentLength: -1 I190823 22:18:12.970268 12648 storage/replica_command.go:284 [n1,split,s1,r73/1:/{Table/100-Max}] initiating a split of this range at key /Table/102 [r75] (zone config) I190823 22:18:13.011894 12648 storage/split_queue.go:144 [n1,split,s1,r73/1:/Table/10{0-2/1}] split saw concurrent descriptor modification; maybe retrying I190823 22:18:13.013289 12824 storage/replica_command.go:284 [n1,split,s1,r73/1:/Table/10{0-2/1}] initiating a split of this range at key /Table/102 [r76] (zone config) W190823 22:18:13.097421 12688 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190823 22:18:13.109319 214 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. W190823 22:18:13.109917 214 storage/engine/rocksdb.go:116 [rocksdb] [db/version_set.cc:3086] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
I190823 22:18:13.424441 861 sql/sqlbase/structured.go:1511 [n1,client=127.0.0.1:46736,user=root] publish: descID=102 (t) version=3 mtime=2019-08-23 22:18:13.272971305 +0000 UTC I190823 22:18:13.587807 861 sql/event_log.go:130 [n1,client=127.0.0.1:46736,user=root] Event: "drop_database", target: 101, info: {DatabaseName:d24 Statement:DROP DATABASE d24 User:root DroppedSchemaObjects:[d24.public.t]} I190823 22:18:13.661696 861 sql/sqlbase/structured.go:1511 [n1,client=127.0.0.1:46736,user=root,scExec] publish: descID=102 (t) version=4 mtime=2019-08-23 22:18:13.660206068 +0000 UTC ``` Please assign, take a look and update the issue accordingly.
non_infrastructure
teamcity failed test escapes direct false the following tests appear to have failed on master testrace escapes direct false you may want to check escapes direct false in panic stdout sql event log go event create database target info databasename statement create database user root storage replica command go initiating a merge of table into this range lhs rhs has size b kib qps below threshold size kib qps storage store go removing replica sql schema changer go waiting to update leases error with attached stack trace github com cockroachdb cockroach pkg sql leasestore waitforoneversion go src github com cockroachdb cockroach pkg sql lease go github com cockroachdb cockroach pkg sql schemachanger waittoupdateleases go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachanger exec go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachanger exec go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg sql schemachangemanager start go src github com cockroachdb cockroach pkg sql schema changer go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go runtime goexit usr local go src runtime asm s error with embedded safe details id d is not a table arg id is not a table storage replica command go initiating a split of this range at key table manual ccl importccl read import proc go could not fetch file size falling back to per file progress bad contentlength storage replica command go initiating a split of this range at key table zone config storage split queue go split saw concurrent descriptor modification maybe retrying storage replica command go initiating a split of this range at key table zone 
config storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed storage engine rocksdb go more existing levels in db than needed max bytes for level multiplier may not be guaranteed sql sqlbase structured go publish descid t version mtime utc sql event log go event drop database target info databasename statement drop database user root droppedschemaobjects sql sqlbase structured go publish descid t version mtime utc please assign take a look and update the issue accordingly
0
431,122
12,475,478,195
IssuesEvent
2020-05-29 11:41:13
hotosm/tasking-manager
https://api.github.com/repos/hotosm/tasking-manager
closed
Question and Comments - Notification
Priority: Low Type: Enhancement
Q/C section is great, but if you're like me and have hundreds of active projects; it's just not feasible to check them all regularly. Ideally it would be nice to have a 'separate' notification showing any 'unread' Q/C posts on projects or a simpler solution would be that the PM is just auto-included in any posts there vs. the mapper needing to hit the message PM link.
1.0
Question and Comments - Notification - Q/C section is great, but if you're like me and have hundreds of active projects; it's just not feasible to check them all regularly. Ideally it would be nice to have a 'separate' notification showing any 'unread' Q/C posts on projects or a simpler solution would be that the PM is just auto-included in any posts there vs. the mapper needing to hit the message PM link.
non_infrastructure
question and comments notification q c section is great but if you re like me and have hundreds of active projects it s just not feasible to check them all regularly ideally it would be nice to have a separate notification showing any unread q c posts on projects or a simpler solution would be that the pm is just auto included in any posts there vs the mapper needing to hit the message pm link
0
258,675
22,338,646,505
IssuesEvent
2022-06-14 21:16:33
archesproject/arches
https://api.github.com/repos/archesproject/arches
closed
Two-factor authentication unit tests
Subject: Testing
We should create unit tests that capture the two-factor authentication creation / login workflows.
1.0
Two-factor authentication unit tests - We should create unit tests that capture the two-factor authentication creation / login workflows.
non_infrastructure
two factor authentication unit tests we should create unit tests that capture the two factor authentication creation login workflows
0
66,069
16,532,939,292
IssuesEvent
2021-05-27 08:27:06
xamarin/xamarin-android
https://api.github.com/repos/xamarin/xamarin-android
closed
LibraryProjectZip build action is not working after migrating binding project to .NET6
Area: App+Library Build
<!-- Documentation for how to troubleshoot issues with binding projects, as well as common issues and how to solve them, is available here: https://github.com/xamarin/java.interop/wiki/Troubleshooting-Android-Bindings-Issues --> ### Error Message or Issue I have migrated a mono.android binding project to .NET 6. After the migration the LibraryProjectZip build action set for the .aar file does not work. After decompiling the binary file inside the Resources folder there isn't any "**\_\_AndroidNativeLibraries\_\_.zip"**. I have tried with AndroidLibrary build action as well - the result was the same. ### Version Information My environment is **MacOS Big Sur 11.3.1** with **dotnet version - 6.0.100-preview.4.21255.9** ### Log File [binlog.zip](https://github.com/xamarin/xamarin-android/files/6538366/binlog.zip) ### Other Helpful Info Here is a sample project where the issue can be reproduced - the sample contains aar file as well. [AndroidBindingNet6.zip](https://github.com/xamarin/xamarin-android/files/6538406/AndroidBindingNet6.zip) 1. Open terminal. 2. Run the following command: **dotnet build AndroidBindingNet6.sln -c Release** 3. Decompile the binary and notice how Resources is empty. <img width="350" alt="Screenshot 2021-05-25 at 12 43 14" src="https://user-images.githubusercontent.com/9336157/119480090-fbd96700-bd59-11eb-8dbf-0e07387cc6f4.png">
1.0
LibraryProjectZip build action is not working after migrating binding project to .NET6 - <!-- Documentation for how to troubleshoot issues with binding projects, as well as common issues and how to solve them, is available here: https://github.com/xamarin/java.interop/wiki/Troubleshooting-Android-Bindings-Issues --> ### Error Message or Issue I have migrated a mono.android binding project to .NET 6. After the migration the LibraryProjectZip build action set for the .aar file does not work. After decompiling the binary file inside the Resources folder there isn't any "**\_\_AndroidNativeLibraries\_\_.zip"**. I have tried with AndroidLibrary build action as well - the result was the same. ### Version Information My environment is **MacOS Big Sur 11.3.1** with **dotnet version - 6.0.100-preview.4.21255.9** ### Log File [binlog.zip](https://github.com/xamarin/xamarin-android/files/6538366/binlog.zip) ### Other Helpful Info Here is a sample project where the issue can be reproduced - the sample contains aar file as well. [AndroidBindingNet6.zip](https://github.com/xamarin/xamarin-android/files/6538406/AndroidBindingNet6.zip) 1. Open terminal. 2. Run the following command: **dotnet build AndroidBindingNet6.sln -c Release** 3. Decompile the binary and notice how Resources is empty. <img width="350" alt="Screenshot 2021-05-25 at 12 43 14" src="https://user-images.githubusercontent.com/9336157/119480090-fbd96700-bd59-11eb-8dbf-0e07387cc6f4.png">
non_infrastructure
libraryprojectzip build action is not working after migrating binding project to documentation for how to troubleshoot issues with binding projects as well as common issues and how to solve them is available here error message or issue i have migrated a mono android binding project to net after the migration the libraryprojectzip build action set for the aar file does not work after decompiling the binary file inside the resources folder there isn t any androidnativelibraries zip i have tried with androidlibrary build action as well the result was the same version information my environment is macos big sur with dotnet version preview log file other helpful info here is a sample project where the issue can be reproduced the sample contains aar file as well open terminal run the following command dotnet build sln c release decompile the binary and notice how resources is empty img width alt screenshot at src
0
25,672
18,957,845,214
IssuesEvent
2021-11-18 22:46:31
E3SM-Project/scream
https://api.github.com/repos/E3SM-Project/scream
opened
Improve register_physics.hpp logic
enhancement infrastructure Atmosphere Driver
Right now, the header pulls in all the physics packages. So far, except for unit testing, it's fine, since we don't have multiple choices for each parametrization. But if/when the SCREAM AD becomes the main atm driver for e3sm, this might make `register_physics.hpp` unusable. **A possible approach** First, rename all files avoiding the generic prefix `atmosphere`. E.g., `atmosphere_macrophysics.*pp` -> `shoc_atm_process.*pp`. Then, make the inclusion and registration of processes guarded by ifdefs: ``` #ifdef SCREAM_HAS_SHOC #include "physics/shoc/shoc_atm_process.hpp #endif #ifdef SCREAM_HAS_CLUBB #include "physics/clubb/clubb_atm_process.hpp #endif #ifdef SCREAM_HAS_P3 #include "physics/p3/p3_atm_process.hpp #endif ... void register_physics () { auto& proc_factory = AtmosphereProcessFactory::instance(); #ifdef SCREAM_HAS_SHOC proc_factory.register_product("shoc",&create_atmosphere_process<SHOCMacrophysics>); #endif #ifdef SCREAM_HAS_CLUBB proc_factory.register_product("clubb",&create_atmosphere_process<CLUBBMacrophysics>); #endif ... } ``` Now, in order to be able to call `register_physics()` from the test setup, we need to forward the macros we want, so that they are defined when `register_physics.hpp` is parsed. The naive approach would be to add the compile definitions when building the tests. A much more powerful (and less intrusive) method is to have the libraries define the macro. So Inside p3's CMakeLists.txt: ``` add_library(p3 ...) target_compile_definitions (p3 PUBLIC SCREAM_HAS_P3) ``` Inside the test (or lib) CMakeLists.txt: ``` # use scream's macro to create test CreateUnitTest (my_test ... LIBS "p3;share;...") # create a library for mct add_library (atm ...) target_link_libraries (atm PUBLIC p3) ``` Done. Nothing else is required, since `target_link_libraries(<target> PUBLIC p3)` already propagates the PUBLIC (and INTERFACE) compile definition (and more) of the `p3` library target to the exec/lib currently built. 
So when building the `atm` library, `SCREAM_HAS_P3` _will_ be defined, so that `register_physics.hpp` is registering P3 in the factory. This approach has the advantage of leaving the downstream code as lightweight as possible. Moreover, in CIME optic, one could change a parametrization name in the case, and have SCREAM's cmake machinery do everything. E.g., in the mct_coupling folder, we might have ``` if ("${SCREAM_MACROPHYSICS_PKG}" STREQUAL "SHOC") target_link_libraries (atm PUBLIC shoc) elseif ("${SCREAM_MACROPHYSICS_PKG}" STREQUAL "CLUBB") target_link_libraries (atm PUBLIC clubb) endif() ``` or even ``` target_link_libraries (atm PUBLIC ${SCREAM_MACROPHYSICS_PKG}) ``` (assuming we know (or already checked) that it expands to a valid/supported macrophysics pkg name), without having to change implementation of the `atm` lib source code.
1.0
Improve register_physics.hpp logic - Right now, the header pulls in all the physics packages. So far, except for unit testing, it's fine, since we don't have multiple choices for each parametrization. But if/when the SCREAM AD becomes the main atm driver for e3sm, this might make `register_physics.hpp` unusable. **A possible approach** First, rename all files avoiding the generic prefix `atmosphere`. E.g., `atmosphere_macrophysics.*pp` -> `shoc_atm_process.*pp`. Then, make the inclusion and registration of processes guarded by ifdefs: ``` #ifdef SCREAM_HAS_SHOC #include "physics/shoc/shoc_atm_process.hpp #endif #ifdef SCREAM_HAS_CLUBB #include "physics/clubb/clubb_atm_process.hpp #endif #ifdef SCREAM_HAS_P3 #include "physics/p3/p3_atm_process.hpp #endif ... void register_physics () { auto& proc_factory = AtmosphereProcessFactory::instance(); #ifdef SCREAM_HAS_SHOC proc_factory.register_product("shoc",&create_atmosphere_process<SHOCMacrophysics>); #endif #ifdef SCREAM_HAS_CLUBB proc_factory.register_product("clubb",&create_atmosphere_process<CLUBBMacrophysics>); #endif ... } ``` Now, in order to be able to call `register_physics()` from the test setup, we need to forward the macros we want, so that they are defined when `register_physics.hpp` is parsed. The naive approach would be to add the compile definitions when building the tests. A much more powerful (and less intrusive) method is to have the libraries define the macro. So Inside p3's CMakeLists.txt: ``` add_library(p3 ...) target_compile_definitions (p3 PUBLIC SCREAM_HAS_P3) ``` Inside the test (or lib) CMakeLists.txt: ``` # use scream's macro to create test CreateUnitTest (my_test ... LIBS "p3;share;...") # create a library for mct add_library (atm ...) target_link_libraries (atm PUBLIC p3) ``` Done. Nothing else is required, since `target_link_libraries(<target> PUBLIC p3)` already propagates the PUBLIC (and INTERFACE) compile definition (and more) of the `p3` library target to the exec/lib currently built. 
So when building the `atm` library, `SCREAM_HAS_P3` _will_ be defined, so that `register_physics.hpp` is registering P3 in the factory. This approach has the advantage of leaving the downstream code as lightweight as possible. Moreover, in CIME optic, one could change a parametrization name in the case, and have SCREAM's cmake machinery do everything. E.g., in the mct_coupling folder, we might have ``` if ("${SCREAM_MACROPHYSICS_PKG}" STREQUAL "SHOC") target_link_libraries (atm PUBLIC shoc) elseif ("${SCREAM_MACROPHYSICS_PKG}" STREQUAL "CLUBB") target_link_libraries (atm PUBLIC clubb) endif() ``` or even ``` target_link_libraries (atm PUBLIC ${SCREAM_MACROPHYSICS_PKG}) ``` (assuming we know (or already checked) that it expands to a valid/supported macrophysics pkg name), without having to change implementation of the `atm` lib source code.
infrastructure
improve register physics hpp logic right now the header pulls in all the physics packages so far except for unit testing it s fine since we don t have multiple choices for each parametrization but if when the scream ad becomes the main atm driver for this might make register physics hpp unusable a possible approach first rename all files avoiding the generic prefix atmosphere e g atmosphere macrophysics pp shoc atm process pp then make the inclusion and registration of processes guarded by ifdefs ifdef scream has shoc include physics shoc shoc atm process hpp endif ifdef scream has clubb include physics clubb clubb atm process hpp endif ifdef scream has include physics atm process hpp endif void register physics auto proc factory atmosphereprocessfactory instance ifdef scream has shoc proc factory register product shoc create atmosphere process endif ifdef scream has clubb proc factory register product clubb create atmosphere process endif now in order to be able to call register physics from the test setup we need to forward the macros we want so that they are defined when register physics hpp is parsed the naive approach would be to add the compile definitions when building the tests a much more powerful and less intrusive method is to have the libraries define the macro so inside s cmakelists txt add library target compile definitions public scream has inside the test or lib cmakelists txt use scream s macro to create test createunittest my test libs share create a library for mct add library atm target link libraries atm public done nothing else is required since target link libraries public already propagates the public and interface compile definition and more of the library target to the exec lib currently built so when building the atm library scream has will be defined so that register physics hpp is registering in the factory this approach has the advantage of leaving the downstream code as lightweight as possible moreover in cime optic one could change a 
parametrization name in the case and have scream s cmake machinery do everything e g in the mct coupling folder we might have if scream macrophysics pkg strequal shoc target link libraries atm public shoc elseif scream macrophysics pkg strequal clubb target link libraries atm public clubb endif or even target link libraries atm public scream macrophysics pkg assuming we know or already checked that it expands to a valid supported macrophysics pkg name without having to change implementation of the atm lib source code
1
28,418
23,241,551,274
IssuesEvent
2022-08-03 16:00:19
culibraries/folio
https://api.github.com/repos/culibraries/folio
closed
Configure auto scaling groups
Pod - SysOps enhancement Infrastructure wontfix-hosted
#106 may be relevant here. This explains ASG and how they work in the context of k8s clusters and AZs. https://aws.amazon.com/blogs/containers/amazon-eks-cluster-multi-zone-auto-scaling-groups/
1.0
Configure auto scaling groups - #106 may be relevant here. This explains ASG and how they work in the context of k8s clusters and AZs. https://aws.amazon.com/blogs/containers/amazon-eks-cluster-multi-zone-auto-scaling-groups/
infrastructure
configure auto scaling groups may be relevant here this explains asg and how they work in the context of clusters and azs
1
7,916
7,134,030,673
IssuesEvent
2018-01-22 19:27:10
kaitai-io/kaitai_struct
https://api.github.com/repos/kaitai-io/kaitai_struct
opened
adding people to developers team
infrastructure
For your discretion @GreyCat, you may consider adding @Arlorean to developers team. I have no idea what one needs to do or how long to be part of this project to get listed there, but I guess its more like a mailinglist subsciption rather than a hall of fame? :) EDIT: I already got there yay! https://github.com/orgs/kaitai-io/teams
1.0
adding people to developers team - For your discretion @GreyCat, you may consider adding @Arlorean to developers team. I have no idea what one needs to do or how long to be part of this project to get listed there, but I guess its more like a mailinglist subsciption rather than a hall of fame? :) EDIT: I already got there yay! https://github.com/orgs/kaitai-io/teams
infrastructure
adding people to developers team for your discretion greycat you may consider adding arlorean to developers team i have no idea what one needs to do or how long to be part of this project to get listed there but i guess its more like a mailinglist subsciption rather than a hall of fame edit i already got there yay
1
21,040
14,287,188,075
IssuesEvent
2020-11-23 16:03:24
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Managing native dependencies for .NET components
area-Infrastructure-coreclr
# Managing native dependencies for .NET components ## Problem statement The .NET product relies on the native libraries within the operating system for critical functionality. Each of the three supported operating systems (Linux, macOS, Windows) has its own set of dependencies required by .NET, as well as optional dependencies for other scenarios such as System.Drawing.Common's dependency on libgdiplus in Linux. Managing .NET's native dependencies is currently done in an ad hoc fashion with little to no formal process. This has lead to a variety of issues outlined below. ### Community usage While the primary set of native dependencies are [documented](https://docs.microsoft.com/en-us/dotnet/core/install/), it doesn't provide a full accounting of all dependencies. Without that information, .NET developers are currently left to discover what dependencies are required by running a scenario and seeing what breaks. A few examples of this: * https://github.com/dotnet/runtime/issues/36888#issuecomment-633220620 * https://github.com/dotnet/dotnet-docker/issues/1767 These developers would benefit by being able to determine up front which native dependencies an application requires without actually running it. ### Product consistency Maintaining product consistency has become a challenge due to the breadth of product assets that need to keep track of native dependencies. As the product evolves, new dependencies are added, old dependencies may be removed, and existing dependencies may be modified to target a different version. This requires updates to the various product assets to reflect these changes. There isn't a defined process for this and no central location for tracking these changes. If a dependency change does occur, it requires product contributors to make a series of changes in disparate dotnet repos to update the affected assets. 
These are often overlooked, leading to out-of date documentation (see https://github.com/dotnet/docs/pull/18989) or other missing dependencies in downstream assets (see https://github.com/dotnet/dotnet-docker/issues/1946). In addition, not all dependency changes necessitate the same asset updates. There may be a native dependency added for functionality not commonly used in which case it wouldn't be considered required or even recommended. But even in those cases, it's important for contributors to at least be aware of such changes so they can make the proper evaluation of its need. ### Accounting Without a holistic view of the .NET product's native dependency profile, there's not an efficient process to actually get that information. If a security incident were to occur for some Linux package, how easy would it be to know whether .NET assets were affected or that products built on .NET might be affected due to their usage of a specific .NET library? By having a full accounting of these dependencies, it can help eliminate guesswork and scrounging through code for such situations. ## Affected product assets These are the product assets — packages, Docker images, documentation — that are potentially affected by any dependency change: * [deb packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/deb) * [RPM packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/rpm) * [Snap packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/snaps) * [Docker images](https://github.com/dotnet/dotnet-docker) * [dotnet/core Linux pre-reqs doc](https://github.com/dotnet/core/blob/master/Documentation/linux-prereqs.md) * [.NET Installation guide docs](https://docs.microsoft.com/en-us/dotnet/core/install/) It is important that these assets be kept up-to-date as product changes are made. You'll notice that these are all downstream assets from the core product components. 
This requires a well-defined process in order to communicate dependency updates. ## Scope ### Operating system Due to the pervasiveness of package managers in Linux distros, this need is most relevant for Linux packages but also applies more broadly to other operating systems like Windows. Windows has several SKUs — Windows Server, Windows Server Core, Nano Server, Windows IoT — each having varying levels of functionality, not necessarily encompassing all the native dependencies .NET may require. Here are several examples where Nano Server doesn't provide the set of native dependencies required by .NET Core that is provided by Windows Server Core: * https://github.com/dotnet/dotnet-docker/issues/1767 * https://github.com/dotnet/dotnet-docker/issues/1098 * https://github.com/aspnet/aspnet-docker/issues/398 Ideally, a solution that describes .NET's dependencies would support descriptions across all supported operating systems: Linux, macOS, and Windows. ### Servicing .NET 5.0 is not the only version undergoing change with respect to dependencies. As long as 2.1 and 3.1 are supported, the package dependencies they have are an ever-changing landscape as new distro versions are released. This is specifically the case for packages like the ICU packages where an older version gets replaced by a newer one with a different package name (e.g. https://github.com/dotnet/dotnet-docker/pull/614). ## Potential solutions What follows are a set of proposals that describe, at a high-level, approaches for solving the problems stated above. A more detailed design would need to be developed. The central idea is to define a machine-readable metadata file that describes the set of native dependencies contained within each .NET component. How this content gets defined is open for debate. It could be manually managed, requiring that product contributors be diligent in maintaining it. Or perhaps it could be automatically generated through static analysis tools. 
As part of the .NET engineering processes, transform logic could then be applied to this file to generate updated content for the various downstream assets (packages, Docker images, documentation). At the very least, having a single source of truth for dependencies within source control allows for a mechanism for affected parties to be notified of changes. Potentially, this transformation logic could even be automated to ensure all assets are automatically maintained. From a customer standpoint, the minimum bar is having up-to-date and comprehensive documentation that describes the native dependencies. Having this generated directly from an authoritative source rather through ad hoc means would be beneficial in achieving that goal.
1.0
Managing native dependencies for .NET components - # Managing native dependencies for .NET components ## Problem statement The .NET product relies on the native libraries within the operating system for critical functionality. Each of the three supported operating systems (Linux, macOS, Windows) has its own set of dependencies required by .NET, as well as optional dependencies for other scenarios such as System.Drawing.Common's dependency on libgdiplus in Linux. Managing .NET's native dependencies is currently done in an ad hoc fashion with little to no formal process. This has lead to a variety of issues outlined below. ### Community usage While the primary set of native dependencies are [documented](https://docs.microsoft.com/en-us/dotnet/core/install/), it doesn't provide a full accounting of all dependencies. Without that information, .NET developers are currently left to discover what dependencies are required by running a scenario and seeing what breaks. A few examples of this: * https://github.com/dotnet/runtime/issues/36888#issuecomment-633220620 * https://github.com/dotnet/dotnet-docker/issues/1767 These developers would benefit by being able to determine up front which native dependencies an application requires without actually running it. ### Product consistency Maintaining product consistency has become a challenge due to the breadth of product assets that need to keep track of native dependencies. As the product evolves, new dependencies are added, old dependencies may be removed, and existing dependencies may be modified to target a different version. This requires updates to the various product assets to reflect these changes. There isn't a defined process for this and no central location for tracking these changes. If a dependency change does occur, it requires product contributors to make a series of changes in disparate dotnet repos to update the affected assets. 
These are often overlooked, leading to out-of date documentation (see https://github.com/dotnet/docs/pull/18989) or other missing dependencies in downstream assets (see https://github.com/dotnet/dotnet-docker/issues/1946). In addition, not all dependency changes necessitate the same asset updates. There may be a native dependency added for functionality not commonly used in which case it wouldn't be considered required or even recommended. But even in those cases, it's important for contributors to at least be aware of such changes so they can make the proper evaluation of its need. ### Accounting Without a holistic view of the .NET product's native dependency profile, there's not an efficient process to actually get that information. If a security incident were to occur for some Linux package, how easy would it be to know whether .NET assets were affected or that products built on .NET might be affected due to their usage of a specific .NET library? By having a full accounting of these dependencies, it can help eliminate guesswork and scrounging through code for such situations. ## Affected product assets These are the product assets — packages, Docker images, documentation — that are potentially affected by any dependency change: * [deb packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/deb) * [RPM packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/rpm) * [Snap packages](https://github.com/dotnet/runtime/tree/master/src/installer/pkg/packaging/snaps) * [Docker images](https://github.com/dotnet/dotnet-docker) * [dotnet/core Linux pre-reqs doc](https://github.com/dotnet/core/blob/master/Documentation/linux-prereqs.md) * [.NET Installation guide docs](https://docs.microsoft.com/en-us/dotnet/core/install/) It is important that these assets be kept up-to-date as product changes are made. You'll notice that these are all downstream assets from the core product components. 
This requires a well-defined process in order to communicate dependency updates. ## Scope ### Operating system Due to the pervasiveness of package managers in Linux distros, this need is most relevant for Linux packages but also applies more broadly to other operating systems like Windows. Windows has several SKUs — Windows Server, Windows Server Core, Nano Server, Windows IoT — each having varying levels of functionality, not necessarily encompassing all the native dependencies .NET may require. Here are several examples where Nano Server doesn't provide the set of native dependencies required by .NET Core that is provided by Windows Server Core: * https://github.com/dotnet/dotnet-docker/issues/1767 * https://github.com/dotnet/dotnet-docker/issues/1098 * https://github.com/aspnet/aspnet-docker/issues/398 Ideally, a solution that describes .NET's dependencies would support descriptions across all supported operating systems: Linux, macOS, and Windows. ### Servicing .NET 5.0 is not the only version undergoing change with respect to dependencies. As long as 2.1 and 3.1 are supported, the package dependencies they have are an ever-changing landscape as new distro versions are released. This is specifically the case for packages like the ICU packages where an older version gets replaced by a newer one with a different package name (e.g. https://github.com/dotnet/dotnet-docker/pull/614). ## Potential solutions What follows are a set of proposals that describe, at a high-level, approaches for solving the problems stated above. A more detailed design would need to be developed. The central idea is to define a machine-readable metadata file that describes the set of native dependencies contained within each .NET component. How this content gets defined is open for debate. It could be manually managed, requiring that product contributors be diligent in maintaining it. Or perhaps it could be automatically generated through static analysis tools. 
As part of the .NET engineering processes, transform logic could then be applied to this file to generate updated content for the various downstream assets (packages, Docker images, documentation). At the very least, having a single source of truth for dependencies within source control allows for a mechanism for affected parties to be notified of changes. Potentially, this transformation logic could even be automated to ensure all assets are automatically maintained. From a customer standpoint, the minimum bar is having up-to-date and comprehensive documentation that describes the native dependencies. Having this generated directly from an authoritative source rather than through ad hoc means would be beneficial in achieving that goal.
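The manifest-plus-transform idea above can be sketched concretely. The following is a minimal, hypothetical illustration only: the manifest shape, component names, and package entries are assumptions for the sake of example (not the actual .NET dependency data or any proposed schema), and the transform shows how a documentation fragment could be generated from a single source of truth.

```python
# Hypothetical sketch only: a machine-readable dependency manifest and a small
# transform that renders a documentation fragment from it. The component and
# package names below are illustrative assumptions, not real .NET data.

deps_manifest = {
    "System.Globalization": {
        "linux": [{"package": "libicu", "required": True}],
    },
    "System.Drawing.Common": {
        "linux": [{"package": "libgdiplus", "required": False}],
    },
}

def render_prereqs_doc(manifest):
    """Render a Markdown pre-reqs fragment from the manifest."""
    lines = ["# Linux package prerequisites", ""]
    for component, platforms in sorted(manifest.items()):
        for dep in platforms.get("linux", []):
            status = "required" if dep["required"] else "optional"
            lines.append(f"- `{dep['package']}` ({status}), used by {component}")
    return "\n".join(lines)

print(render_prereqs_doc(deps_manifest))
```

The same manifest could feed other transforms (Docker image package lists, deb/rpm dependency stanzas), which is what would keep the downstream assets consistent with one another and make dependency changes reviewable in a single place.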
infrastructure
managing native dependencies for net components managing native dependencies for net components problem statement the net product relies on the native libraries within the operating system for critical functionality each of the three supported operating systems linux macos windows has its own set of dependencies required by net as well as optional dependencies for other scenarios such as system drawing common s dependency on libgdiplus in linux managing net s native dependencies is currently done in an ad hoc fashion with little to no formal process this has lead to a variety of issues outlined below community usage while the primary set of native dependencies are it doesn t provide a full accounting of all dependencies without that information net developers are currently left to discover what dependencies are required by running a scenario and seeing what breaks a few examples of this these developers would benefit by being able to determine up front which native dependencies an application requires without actually running it product consistency maintaining product consistency has become a challenge due to the breadth of product assets that need to keep track of native dependencies as the product evolves new dependencies are added old dependencies may be removed and existing dependencies may be modified to target a different version this requires updates to the various product assets to reflect these changes there isn t a defined process for this and no central location for tracking these changes if a dependency change does occur it requires product contributors to make a series of changes in disparate dotnet repos to update the affected assets these are often overlooked leading to out of date documentation see or other missing dependencies in downstream assets see in addition not all dependency changes necessitate the same asset updates there may be a native dependency added for functionality not commonly used in which case it wouldn t be considered required or 
even recommended but even in those cases it s important for contributors to at least be aware of such changes so they can make the proper evaluation of its need accounting without a holistic view of the net product s native dependency profile there s not an efficient process to actually get that information if a security incident were to occur for some linux package how easy would it be to know whether net assets were affected or that products built on net might be affected due to their usage of a specific net library by having a full accounting of these dependencies it can help eliminate guesswork and scrounging through code for such situations affected product assets these are the product assets — packages docker images documentation — that are potentially affected by any dependency change it is important that these assets be kept up to date as product changes are made you ll notice that these are all downstream assets from the core product components this requires a well defined process in order to communicate dependency updates scope operating system due to the pervasiveness of package managers in linux distros this need is most relevant for linux packages but also applies more broadly to other operating systems like windows windows has several skus — windows server windows server core nano server windows iot — each having varying levels of functionality not necessarily encompassing all the native dependencies net may require here are several examples where nano server doesn t provide the set of native dependencies required by net core that is provided by windows server core ideally a solution that describes net s dependencies would support descriptions across all supported operating systems linux macos and windows servicing net is not the only version undergoing change with respect to dependencies as long as and are supported the package dependencies they have are an ever changing landscape as new distro versions are released this is specifically the case for 
packages like the icu packages where an older version gets replaced by a newer one with a different package name e g potential solutions what follows are a set of proposals that describe at a high level approaches for solving the problems stated above a more detailed design would need to be developed the central idea is to define a machine readable metadata file that describes the set of native dependencies contained within each net component how this content gets defined is open for debate it could be manually managed requiring that product contributors be diligent in maintaining it or perhaps it could be automatically generated through static analysis tools as part of the net engineering processes transform logic could then be applied to this file to generate updated content for the various downstream assets packages docker images documentation at the very least having a single source of truth for dependencies within source control allows for a mechanism for affected parties to be notified of changes potentially this transformation logic could even be automated to ensure all assets are automatically maintained from a customer standpoint the minimum bar is having up to date and comprehensive documentation that describes the native dependencies having this generated directly from an authoritative source rather through ad hoc means would be beneficial in achieving that goal
1
375,475
11,104,673,304
IssuesEvent
2019-12-17 08:11:25
wso2/ballerina-message-broker
https://api.github.com/repos/wso2/ballerina-message-broker
closed
Update commands in CLI
Complexity/Moderate Module/cli-client Priority/High Severity/Major Type/Improvement
**Description:** CLI Client uses following commands at the moment, queue exchange etc. But in REST API it uses plural nouns. Therefore its better to make the commands in CLI into queus, exchanges etc. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, DB, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
1.0
Update commands in CLI - **Description:** CLI Client uses following commands at the moment, queue exchange etc. But in REST API it uses plural nouns. Therefore its better to make the commands in CLI into queus, exchanges etc. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, DB, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
non_infrastructure
update commands in cli description cli client uses following commands at the moment queue exchange etc but in rest api it uses plural nouns therefore its better to make the commands in cli into queus exchanges etc suggested labels suggested assignees affected product version os db other environment details and versions steps to reproduce related issues
0
9,594
8,042,419,910
IssuesEvent
2018-07-31 08:05:25
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
closed
Restructure directory hierarchy based on process
hackathon infrastructure refactoring stale
Ideally there would be: - browser: For the main process code (Most of this code is inside ./app right now) - stores - ... - renderer: For the renderer process code (Most of this code is inside of ./js right now) - components - stores - ... - common: Code used by both processes - constants - ...
1.0
Restructure directory hierarchy based on process - Ideally there would be: - browser: For the main process code (Most of this code is inside ./app right now) - stores - ... - renderer: For the renderer process code (Most of this code is inside of ./js right now) - components - stores - ... - common: Code used by both processes - constants - ...
infrastructure
restructure directory hierarchy based on process ideally there would be browser for the main process code most of this code is inside app right now stores renderer for the renderer process code most of this code is inside of js right now components stores common code used by both processes constants
1
19,508
27,089,532,267
IssuesEvent
2023-02-14 19:46:25
dotnet/docs
https://api.github.com/repos/dotnet/docs
closed
[Breaking change]: ExceptionCollection would now throw 'ArgumentException` if input is not of Exception type.
doc-idea breaking-change Pri1 binary incompatible in-pr :checkered_flag: Release: .NET 8 :pushpin: seQUESTered
### Description Similar to https://github.com/dotnet/docs/issues/33694, We are now enforcing a specific type when creating an `ExceptionCollection`. The type being passed must be of the `exception `type, otherwise an `ArgumentException `will be thrown. ### Version .NET 8 Preview 1 ### Previous behavior Previously, the creation of an `ExceptionCollection `object did not check for the type passed in and could potentially delay failure until later in the process. No exceptions were thrown during the object creation. ### New behavior the creation of an `ExceptionCollection `object will throw an `ArgumentException ` if the type passed in is of not `exception` type. ### Type of breaking change - [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load or execute, and if so, require recompilation. - [ ] **Source incompatible**: When recompiled using the new SDK or component or to target the new runtime, existing source code may require source changes to compile successfully. - [X] **Behavioral change**: Existing binaries may behave differently at run time. ### Reason for change Existing documentation clearly notes that type should be of type `exception`. Making exceptions consistent across codebase. ### Recommended action This change should not have a significant impact on most scenarios. However, if users were previously not handling exceptions will now need to make changes to handle the ArgumentException. ### Feature area Windows Forms ### Affected APIs Creation of ExceptionCollection object. ExceptionCollection() --- [Associated WorkItem - 62394](https://dev.azure.com/msft-skilling/Content/_workitems/edit/62394)
True
[Breaking change]: ExceptionCollection would now throw 'ArgumentException` if input is not of Exception type. - ### Description Similar to https://github.com/dotnet/docs/issues/33694, We are now enforcing a specific type when creating an `ExceptionCollection`. The type being passed must be of the `exception `type, otherwise an `ArgumentException `will be thrown. ### Version .NET 8 Preview 1 ### Previous behavior Previously, the creation of an `ExceptionCollection `object did not check for the type passed in and could potentially delay failure until later in the process. No exceptions were thrown during the object creation. ### New behavior the creation of an `ExceptionCollection `object will throw an `ArgumentException ` if the type passed in is of not `exception` type. ### Type of breaking change - [X] **Binary incompatible**: Existing binaries may encounter a breaking change in behavior, such as failure to load or execute, and if so, require recompilation. - [ ] **Source incompatible**: When recompiled using the new SDK or component or to target the new runtime, existing source code may require source changes to compile successfully. - [X] **Behavioral change**: Existing binaries may behave differently at run time. ### Reason for change Existing documentation clearly notes that type should be of type `exception`. Making exceptions consistent across codebase. ### Recommended action This change should not have a significant impact on most scenarios. However, if users were previously not handling exceptions will now need to make changes to handle the ArgumentException. ### Feature area Windows Forms ### Affected APIs Creation of ExceptionCollection object. ExceptionCollection() --- [Associated WorkItem - 62394](https://dev.azure.com/msft-skilling/Content/_workitems/edit/62394)
non_infrastructure
exceptioncollection would now throw argumentexception if input is not of exception type description similar to we are now enforcing a specific type when creating an exceptioncollection the type being passed must be of the exception type otherwise an argumentexception will be thrown version net preview previous behavior previously the creation of an exceptioncollection object did not check for the type passed in and could potentially delay failure until later in the process no exceptions were thrown during the object creation new behavior the creation of an exceptioncollection object will throw an argumentexception if the type passed in is of not exception type type of breaking change binary incompatible existing binaries may encounter a breaking change in behavior such as failure to load or execute and if so require recompilation source incompatible when recompiled using the new sdk or component or to target the new runtime existing source code may require source changes to compile successfully behavioral change existing binaries may behave differently at run time reason for change existing documentation clearly notes that type should be of type exception making exceptions consistent across codebase recommended action this change should not have a significant impact on most scenarios however if users were previously not handling exceptions will now need to make changes to handle the argumentexception feature area windows forms affected apis creation of exceptioncollection object exceptioncollection
0
10,538
3,122,943,799
IssuesEvent
2015-09-07 00:28:07
leafo/scssphp
https://api.github.com/repos/leafo/scssphp
closed
using @keyframes causes exception when using @extend
needs-test
using the following declaration ```css @keyframes anim-rotate { 0% { transform: rotate(0); } 100% { transform: rotate(360deg); } } ``` causes an exception when trying to use `@extend` afterwards (the exception is detailed in #322)
1.0
using @keyframes causes exception when using @extend - using the following declaration ```css @keyframes anim-rotate { 0% { transform: rotate(0); } 100% { transform: rotate(360deg); } } ``` causes an exception when trying to use `@extend` afterwards (the exception is detailed in #322)
non_infrastructure
using keyframes causes exception when using extend using the following declaration css keyframes anim rotate transform rotate transform rotate causes an exception when trying to use extend afterwards the exception is detailed in
0
32,438
26,698,739,363
IssuesEvent
2023-01-27 12:41:25
SonarSource/sonarlint-visualstudio
https://api.github.com/repos/SonarSource/sonarlint-visualstudio
opened
[Infra] Embedded resources are not included in Rules.csproj in clean command line builds
Area: VS2019 Infrastructure
### Description The `ProcessPluginJars` project extracts rule description files and adds them to a folder in `Rules.csproj`, which should then embed them as resource files. This works when building inside VS. However, the embedded files are not included when building from the command line. It is only a problem with clean builds i.e. when the `ProcessPluginJars` project is being built for the first time. Unfortunately, this means it is a problem on the CI machine - the pipeline yaml has a hack work round the problem i.e. it explicitly builds the `ProcessPluginJars` and its dependency first.
1.0
[Infra] Embedded resources are not included in Rules.csproj in clean command line builds - ### Description The `ProcessPluginJars` project extracts rule description files and adds them to a folder in `Rules.csproj`, which should then embed them as resource files. This works when building inside VS. However, the embedded files are not included when building from the command line. It is only a problem with clean builds i.e. when the `ProcessPluginJars` project is being built for the first time. Unfortunately, this means it is a problem on the CI machine - the pipeline yaml has a hack work round the problem i.e. it explicitly builds the `ProcessPluginJars` and its dependency first.
infrastructure
embedded resources are not included in rules csproj in clean command line builds description the processpluginjars project extracts rule description files and adds them to a folder in rules csproj which should then embed them as resource files this works when building inside vs however the embedded files are not included when building from the command line it is only a problem with clean builds i e when the processpluginjars project is being built for the first time unfortunately this means it is a problem on the ci machine the pipeline yaml has a hack work round the problem i e it explicitly builds the processpluginjars and its dependency first
1
29,564
24,062,114,390
IssuesEvent
2022-09-17 01:21:45
oppia/oppia-android
https://api.github.com/repos/oppia/oppia-android
closed
Introduce Fake Audio on Espresso
issue_type_infrastructure
**Describe the bug** Testing Audio on Espresso is a bit tricky. We cannot test audio when the audio is playing in a real-time because that's the idle state in espresso and nothing can be a check on idle state. **Expected behaviour** `ExplorationActivityTest.kt` - Check `@Ignore` tests 1. Test is only passing on Robolectric as for Espresso we need to check the pause button is visible, but the pause button is visible only in the idle state. So, we need a Fake Library here to help us with a different state of audio. 2. 1 test is ignored, with the same reason above and it is also not passing on robolectric. 3. Remove `waitForTheView` **Additional context** Reference - https://github.com/oppia/oppia-android/pull/2216#discussion_r538027676
1.0
Introduce Fake Audio on Espresso - **Describe the bug** Testing Audio on Espresso is a bit tricky. We cannot test audio when the audio is playing in a real-time because that's the idle state in espresso and nothing can be a check on idle state. **Expected behaviour** `ExplorationActivityTest.kt` - Check `@Ignore` tests 1. Test is only passing on Robolectric as for Espresso we need to check the pause button is visible, but the pause button is visible only in the idle state. So, we need a Fake Library here to help us with a different state of audio. 2. 1 test is ignored, with the same reason above and it is also not passing on robolectric. 3. Remove `waitForTheView` **Additional context** Reference - https://github.com/oppia/oppia-android/pull/2216#discussion_r538027676
infrastructure
introduce fake audio on espresso describe the bug testing audio on espresso is a bit tricky we cannot test audio when the audio is playing in a real time because that s the idle state in espresso and nothing can be a check on idle state expected behaviour explorationactivitytest kt check ignore tests test is only passing on robolectric as for espresso we need to check the pause button is visible but the pause button is visible only in the idle state so we need a fake library here to help us with a different state of audio test is ignored with the same reason above and it is also not passing on robolectric remove waitfortheview additional context reference
1
13,612
10,347,155,967
IssuesEvent
2019-09-04 16:43:53
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
opened
The build step is currently overriding the restore binlog
area-Infrastructure
We are invoking build.cmd/sh twice, once for restore and later for the actual build and by that the produced binlog is overridden. We want to fix that by allowing to specify the binlog's name in the build scripts in arcade.
1.0
The build step is currently overriding the restore binlog - We are invoking build.cmd/sh twice, once for restore and later for the actual build and by that the produced binlog is overridden. We want to fix that by allowing to specify the binlog's name in the build scripts in arcade.
infrastructure
the build step is currently overriding the restore binlog we are invoking build cmd sh twice once for restore and later for the actual build and by that the produced binlog is overridden we want to fix that by allowing to specify the binlog s name in the build scripts in arcade
1
8,268
7,313,223,752
IssuesEvent
2018-03-01 00:00:50
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
opened
GN: Many targets fail to use public_deps where appropriate
Type: bug area-infrastructure
This results in many errors when running gn with --check.
1.0
GN: Many targets fail to use public_deps where appropriate - This results in many errors when running gn with --check.
infrastructure
gn many targets fail to use public deps where appropriate this results in many errors when running gn with check
1
25,011
18,041,023,104
IssuesEvent
2021-09-18 03:31:00
ProjectPythia/pythia-foundations
https://api.github.com/repos/ProjectPythia/pythia-foundations
reopened
Specify customized directory in pythia_datasets
infrastructure
Datasets that are requested from pythia_datasets get stored in the user's .cache directory. Could we do something akin to what Cartopy does in terms of Natural Earth shapefiles, where one could define a different, networked-served directory? The motivator here is for users who are running scripts or notebooks that use pythia_datasets but who may be logging into a networked-served Linux directory environment (such as in my department at UAlbany). Users' home directories, which is where the .cache directory sits, are limited to a mere 5GB of space.
1.0
Specify customized directory in pythia_datasets - Datasets that are requested from pythia_datasets get stored in the user's .cache directory. Could we do something akin to what Cartopy does in terms of Natural Earth shapefiles, where one could define a different, networked-served directory? The motivator here is for users who are running scripts or notebooks that use pythia_datasets but who may be logging into a networked-served Linux directory environment (such as in my department at UAlbany). Users' home directories, which is where the .cache directory sits, are limited to a mere 5GB of space.
infrastructure
specify customized directory in pythia datasets datasets that are requested from pythia datasets get stored in the user s cache directory could we do something akin to what cartopy does in terms of natural earth shapefiles where one could define a different networked served directory the motivator here is for users who are running scripts or notebooks that use pythia datasets but who may be logging into a networked served linux directory environment such as in my department at ualbany users home directories which is where the cache directory sits are limited to a mere of space
1
161,125
12,531,579,978
IssuesEvent
2020-06-04 14:43:40
ForgottenGlory/Living-Skyrim-2
https://api.github.com/repos/ForgottenGlory/Living-Skyrim-2
closed
Vanilla textures at sky haven temple.
bug need testers
**If you are reporting a crash to desktop, please attach your NET Script Framework crash log. This can be found in MO2's Overwrite folder. If possible, please also attach a copy of your most recent save before the issue occurred.** **LS Version** 2.0.0 beta 1 **Describe the bug** Vanilla textures at the blood sealed doorway **To Reproduce** go to skyhaven temple and look at doorway **Expected behavior** no vanilla textures **Screenshots** ![ScreenShot259](https://user-images.githubusercontent.com/62785432/83698500-384f0880-a5c7-11ea-966a-e98fe78a0a47.png) ![ScreenShot260](https://user-images.githubusercontent.com/62785432/83698506-3b49f900-a5c7-11ea-9387-d4ec5b9eb1f5.png) ![ScreenShot261](https://user-images.githubusercontent.com/62785432/83698510-3c7b2600-a5c7-11ea-882f-1ec99222f07f.png) ![ScreenShot258](https://user-images.githubusercontent.com/62785432/83698513-3dac5300-a5c7-11ea-89da-0c645999ca60.png) **Additional context** Call me Mr. Bug Finder
1.0
Vanilla textures at sky haven temple. - **If you are reporting a crash to desktop, please attach your NET Script Framework crash log. This can be found in MO2's Overwrite folder. If possible, please also attach a copy of your most recent save before the issue occurred.** **LS Version** 2.0.0 beta 1 **Describe the bug** Vanilla textures at the blood sealed doorway **To Reproduce** go to skyhaven temple and look at doorway **Expected behavior** no vanilla textures **Screenshots** ![ScreenShot259](https://user-images.githubusercontent.com/62785432/83698500-384f0880-a5c7-11ea-966a-e98fe78a0a47.png) ![ScreenShot260](https://user-images.githubusercontent.com/62785432/83698506-3b49f900-a5c7-11ea-9387-d4ec5b9eb1f5.png) ![ScreenShot261](https://user-images.githubusercontent.com/62785432/83698510-3c7b2600-a5c7-11ea-882f-1ec99222f07f.png) ![ScreenShot258](https://user-images.githubusercontent.com/62785432/83698513-3dac5300-a5c7-11ea-89da-0c645999ca60.png) **Additional context** Call me Mr. Bug Finder
non_infrastructure
vanilla textures at sky haven temple if you are reporting a crash to desktop please attach your net script framework crash log this can be found in s overwrite folder if possible please also attach a copy of your most recent save before the issue occurred ls version beta describe the bug vanilla textures at the blood sealed doorway to reproduce go to skyhaven temple and look at doorway expected behavior no vanilla textures screenshots additional context call me mr bug finder
0
11,450
9,200,920,440
IssuesEvent
2019-03-07 18:15:55
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
can't run restore.cmd
Area-Infrastructure Bug Contributor Pain
Can someone help? I apologize in advance if I'm missing something very obvious. **Version Used**: latest `master`. I have powershell 3.0 **Steps to Reproduce**: 1. run `restore.cmd` **Expected Behavior**: no error **Actual Behavior**: I get an error saying ``` Method invocation failed because [System.Version] doesn't contain a method named 'new'. ```
1.0
can't run restore.cmd - Can someone help? I apologize in advance if I'm missing something very obvious. **Version Used**: latest `master`. I have powershell 3.0 **Steps to Reproduce**: 1. run `restore.cmd` **Expected Behavior**: no error **Actual Behavior**: I get an error saying ``` Method invocation failed because [System.Version] doesn't contain a method named 'new'. ```
infrastructure
can t run restore cmd can someone help i apologize in advance if i m missing something very obvious version used latest master i have powershell steps to reproduce run restore cmd expected behavior no error actual behavior i get an error saying method invocation failed because doesn t contain a method named new
1
81,654
15,785,023,710
IssuesEvent
2021-04-01 15:49:27
MicrosoftDocs/intellicode
https://api.github.com/repos/MicrosoftDocs/intellicode
closed
Option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection
product-feedback vscode
**Feature Request for VS Code** It would be great if there was an option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection. Something similar to the `remote.SSH.allowLocalServerDownload` or the `remote.downloadExtensionsLocally` options that are available in the [vscode-remote extension](https://github.com/microsoft/vscode-remote-release). A lot of us use VS Code in corporate environments with restricted or no access to the Internet.
1.0
Option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection - **Feature Request for VS Code** It would be great if there was an option to download the model locally and transfer it to the remote instance running VS Code on a host without Internet access over the SSH connection. Something similar to the `remote.SSH.allowLocalServerDownload` or the `remote.downloadExtensionsLocally` options that are available in the [vscode-remote extension](https://github.com/microsoft/vscode-remote-release). A lot of us use VS Code in corporate environments with restricted or no access to the Internet.
non_infrastructure
option to download the model locally and transfer it to the remote instance running vs code on a host without internet access over the ssh connection feature request for vs code it would be great if there was an option to download the model locally and transfer it to the remote instance running vs code on a host without internet access over the ssh connection something similar to the remote ssh allowlocalserverdownload or the remote downloadextensionslocally options that are available in the a lot of us use vs code in corporate environments with restricted or no access to the internet
0
302,240
9,256,345,098
IssuesEvent
2019-03-16 18:15:15
bocadilloproject/bocadillo
https://api.github.com/repos/bocadilloproject/bocadillo
closed
Algolia search is broken
bug priority
**Expected behavior** Using the search box should show up the Algolia search dialog. **Actual behavior** Dialog does not show up. **To Reproduce** <!-- Steps to reproduce the behavior. For example, a minimal application script exhibiting the bug. --> https://bocadilloproject.github.io and type something in the search field. **Screenshots/Traceback** ``` Unhandled Promise Rejection: TypeError: undefined is not a function (near '...n...') ``` ```js n(Object.assign({}, t, { inputSelector: "#algolia-search-input", algoliaOptions: Object.assign({ facetFilters: ["lang:".concat(e)].concat(o.facetFilters || []) }, o) })) ``` **Possible solutions** <!-- Any clues you might have on how to fix this bug. --> **Additional context** <!-- Add any other context about the problem here. --> https://twitter.com/grassfedcode/status/1106935181611458561
1.0
Algolia search is broken - **Expected behavior** Using the search box should show up the Algolia search dialog. **Actual behavior** Dialog does not show up. **To Reproduce** <!-- Steps to reproduce the behavior. For example, a minimal application script exhibiting the bug. --> https://bocadilloproject.github.io and type something in the search field. **Screenshots/Traceback** ``` Unhandled Promise Rejection: TypeError: undefined is not a function (near '...n...') ``` ```js n(Object.assign({}, t, { inputSelector: "#algolia-search-input", algoliaOptions: Object.assign({ facetFilters: ["lang:".concat(e)].concat(o.facetFilters || []) }, o) })) ``` **Possible solutions** <!-- Any clues you might have on how to fix this bug. --> **Additional context** <!-- Add any other context about the problem here. --> https://twitter.com/grassfedcode/status/1106935181611458561
non_infrastructure
algolia search is broken expected behavior using the search box should show up the algolia search dialog actual behavior dialog does not show up to reproduce and type something in the search field screenshots traceback unhandled promise rejection typeerror undefined is not a function near n js n object assign t inputselector algolia search input algoliaoptions object assign facetfilters concat o facetfilters o possible solutions additional context
0
6,968
6,688,475,252
IssuesEvent
2017-10-08 15:11:59
taiyun/corrplot
https://api.github.com/repos/taiyun/corrplot
closed
please enable dependency-ci for corrplot
infrastructure
go to this page and enable it: https://dependencyci.com/github/taiyun/corrplot (The badge is already in README)
1.0
please enable dependency-ci for corrplot - go to this page and enable it: https://dependencyci.com/github/taiyun/corrplot (The badge is already in README)
infrastructure
please enable dependency ci for corrplot go to this page and enable it the badge is already in readme
1
11,235
7,472,156,323
IssuesEvent
2018-04-03 11:43:46
symfony/symfony
https://api.github.com/repos/symfony/symfony
closed
Add Symfony 4 to kenjis/php-framework-benchmark
Performance help wanted
| Q | A | ---------------- | ----- | Bug report? | no | Feature request? | no | BC Break report? | no | RFC? | no | Symfony version | 4.0 This benchmark contains old versions of Symfony: https://github.com/kenjis/php-framework-benchmark Anyone willing to send an update for Symfony 4?
True
Add Symfony 4 to kenjis/php-framework-benchmark - | Q | A | ---------------- | ----- | Bug report? | no | Feature request? | no | BC Break report? | no | RFC? | no | Symfony version | 4.0 This benchmark contains old versions of Symfony: https://github.com/kenjis/php-framework-benchmark Anyone willing to send an update for Symfony 4?
non_infrastructure
add symfony to kenjis php framework benchmark q a bug report no feature request no bc break report no rfc no symfony version this benchmark contains old versions of symfony anyone willing to send an update for symfony
0
3,578
4,417,039,966
IssuesEvent
2016-08-15 01:36:54
arm-hpc/ohpc
https://api.github.com/repos/arm-hpc/ohpc
closed
Script to adjust _service files
infrastructure
_service files have hardcoded branch and github URLs in them for each component. We need a easy mechanism to update these to point at our github fork and appropriate branch files so we can make changes.
1.0
Script to adjust _service files - _service files have hardcoded branch and github URLs in them for each component. We need a easy mechanism to update these to point at our github fork and appropriate branch files so we can make changes.
infrastructure
script to adjust service files service files have hardcoded branch and github urls in them for each component we need a easy mechanism to update these to point at our github fork and appropriate branch files so we can make changes
1
3,271
4,175,348,960
IssuesEvent
2016-06-21 16:35:06
jlongster/debugger.html
https://api.github.com/repos/jlongster/debugger.html
closed
Integration tests should test chrome as well
infrastructure
We've recently added support for chrome debugging, which should also be tested with our integration tests so that we don't have regressions there.
1.0
Integration tests should test chrome as well - We've recently added support for chrome debugging, which should also be tested with our integration tests so that we don't have regressions there.
infrastructure
integration tests should test chrome as well we ve recently added support for chrome debugging which should also be tested with our integration tests so that we don t have regressions there
1
366,867
10,831,672,947
IssuesEvent
2019-11-11 08:54:10
unitystation/unitystation
https://api.github.com/repos/unitystation/unitystation
closed
Current Performance Bottlenecks B: 70 x 5 (Total B350)
Bounty Bug High Priority In Progress Performance
# Description Currently there are 5 performance bottle necks that are affecting the game. ![image](https://user-images.githubusercontent.com/20813925/51219354-44a7d680-197c-11e9-8976-d69e1ef71805.png) * GC = Garbage Collection. Anything over 1KB is usually bad ## TODO: - IMPORTANT: You need to check out the atmos/work02 branch, this is the most up to date branch (not the develop one) so run your diagnostic tests on there and if you provide a solution please PR to it instead of develop - Diagnose problem areas related to the bottle necks above and provide a solution This is a seek and destroy job. You need to use the profiler heavily to determine problem areas. If you can, please provide a before and after screenshot of your solution. ### To use the profiler: To use the profiler you can run it in the editor by going to Window --> Analysis. You can profile a game running in the editor. Down the bottom left change timeline to heirarchy so your bottom window looks like this: ![image](https://user-images.githubusercontent.com/20813925/51219485-e16a7400-197c-11e9-9882-2a062b66dc33.png) Use the Profiler.BeginSample Api call to create samples around blocks of code that you think might be causing the problem (the name of the sample will show up in the hierarchy of the profiler and if you see your GC values or high CPU times there, then that confirms that that is the part of the code causing the problem) https://docs.unity3d.com/ScriptReference/Profiling.Profiler.BeginSample.html <--- to drill down further on problem areas you think might be the cause. Values to look out for are high cpu time and GC amounts A bounty of B70 will be paid out for each performance bottle neck problem. If you discover a problem but don't have time to submit a PR then let one of us know and we will try the solution you recommend. If it works we will award you the bounty. 
There are 5 x B70 bounties all up (check the screen shot above, if a bottle neck has GC and CPU issues that is two separate bounties you can claim for it).
1.0
Current Performance Bottlenecks B: 70 x 5 (Total B350) - # Description Currently there are 5 performance bottle necks that are affecting the game. ![image](https://user-images.githubusercontent.com/20813925/51219354-44a7d680-197c-11e9-8976-d69e1ef71805.png) * GC = Garbage Collection. Anything over 1KB is usually bad ## TODO: - IMPORTANT: You need to check out the atmos/work02 branch, this is the most up to date branch (not the develop one) so run your diagnostic tests on there and if you provide a solution please PR to it instead of develop - Diagnose problem areas related to the bottle necks above and provide a solution This is a seek and destroy job. You need to use the profiler heavily to determine problem areas. If you can, please provide a before and after screenshot of your solution. ### To use the profiler: To use the profiler you can run it in the editor by going to Window --> Analysis. You can profile a game running in the editor. Down the bottom left change timeline to heirarchy so your bottom window looks like this: ![image](https://user-images.githubusercontent.com/20813925/51219485-e16a7400-197c-11e9-9882-2a062b66dc33.png) Use the Profiler.BeginSample Api call to create samples around blocks of code that you think might be causing the problem (the name of the sample will show up in the hierarchy of the profiler and if you see your GC values or high CPU times there, then that confirms that that is the part of the code causing the problem) https://docs.unity3d.com/ScriptReference/Profiling.Profiler.BeginSample.html <--- to drill down further on problem areas you think might be the cause. Values to look out for are high cpu time and GC amounts A bounty of B70 will be paid out for each performance bottle neck problem. If you discover a problem but don't have time to submit a PR then let one of us know and we will try the solution you recommend. If it works we will award you the bounty. 
There are 5 x B70 bounties all up (check the screen shot above, if a bottle neck has GC and CPU issues that is two separate bounties you can claim for it).
non_infrastructure
current performance bottlenecks b x total description currently there are performance bottle necks that are affecting the game gc garbage collection anything over is usually bad todo important you need to check out the atmos branch this is the most up to date branch not the develop one so run your diagnostic tests on there and if you provide a solution please pr to it instead of develop diagnose problem areas related to the bottle necks above and provide a solution this is a seek and destroy job you need to use the profiler heavily to determine problem areas if you can please provide a before and after screenshot of your solution to use the profiler to use the profiler you can run it in the editor by going to window analysis you can profile a game running in the editor down the bottom left change timeline to heirarchy so your bottom window looks like this use the profiler beginsample api call to create samples around blocks of code that you think might be causing the problem the name of the sample will show up in the hierarchy of the profiler and if you see your gc values or high cpu times there then that confirms that that is the part of the code causing the problem to drill down further on problem areas you think might be the cause values to look out for are high cpu time and gc amounts a bounty of will be paid out for each performance bottle neck problem if you discover a problem but don t have time to submit a pr then let one of us know and we will try the solution you recommend if it works we will award you the bounty there are x bounties all up check the screen shot above if a bottle neck has gc and cpu issues that is two separate bounties you can claim for it
0
238,366
18,239,490,488
IssuesEvent
2021-10-01 11:07:25
obophenotype/uberon
https://api.github.com/repos/obophenotype/uberon
closed
Broken Links on UBERON Page
documentation issue
I am using Ontobee occasionally, and I noticed that the link below is broken; it appears that all/most links on the page result in 404 error. Could someone look into this? Thanks and Happy New Year, Sam Smith – Michigan (retired volunteer with Dr. Oliver He’s lab at U of M) http://uberon.github.io/browse/ontobee.html
1.0
Broken Links on UBERON Page - I am using Ontobee occasionally, and I noticed that the link below is broken; it appears that all/most links on the page result in 404 error. Could someone look into this? Thanks and Happy New Year, Sam Smith – Michigan (retired volunteer with Dr. Oliver He’s lab at U of M) http://uberon.github.io/browse/ontobee.html
non_infrastructure
broken links on uberon page i am using ontobee occasionally and i noticed that the link below is broken it appears that all most links on the page result in error could someone look into this thanks and happy new year sam smith – michigan retired volunteer with dr oliver he’s lab at u of m
0
1,741
3,356,556,848
IssuesEvent
2015-11-18 21:01:06
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
Support -trait for RunTest.exe
Area-Infrastructure Feature Request Grabbed By Community Up for Grabs
[Roslyn.Services.Editor.(\w+).UnitTests are known slow](https://github.com/dotnet/roslyn/commit/c3e4799eb2a5fb5fb84ff9a938868bfd177b8fd3#diff-0102553256b475e6f1e0f2663dedf27bL93), it would be nice that `-trait` can be specified to shorten the test time when running them locally, as running test is the easy way for external contributors to verify editor features currently, or I have to use the long xunit runner command lines and if failure encountered, have to open the browser myself after finding the report file.
1.0
Support -trait for RunTest.exe - [Roslyn.Services.Editor.(\w+).UnitTests are known slow](https://github.com/dotnet/roslyn/commit/c3e4799eb2a5fb5fb84ff9a938868bfd177b8fd3#diff-0102553256b475e6f1e0f2663dedf27bL93), it would be nice that `-trait` can be specified to shorten the test time when running them locally, as running test is the easy way for external contributors to verify editor features currently, or I have to use the long xunit runner command lines and if failure encountered, have to open the browser myself after finding the report file.
infrastructure
support trait for runtest exe it would be nice that trait can be specified to shorten the test time when running them locally as running test is the easy way for external contributors to verify editor features currently or i have to use the long xunit runner command lines and if failure encountered have to open the browser myself after finding the report file
1
293,931
25,333,895,047
IssuesEvent
2022-11-18 15:18:24
cc65/cc65
https://api.github.com/repos/cc65/cc65
closed
tests that compare stdout and stderr with a reference may fail
bug Testbench
Some tests pipe the stdout and stderr of the compiler or assembler into a file and compare the result with a reference. This has the subtle problem that both stdout and stderr and buffered i/o streams that may or may not work the same on different OSs or even shells. That means the order of the lines in the resulting file is not guaranteed. In practise it is "mostly" not an issue, but "we" still stumbled about one test where different behaviour can be triggered depending on running it in cmd.exe or bash (msys): ```test/asm/listing``` output of ```010-paramcount.bin``` differs from the reference output when running the test from cmd.exe. The solution is most likely to have two reference files and not redirect into one file. Fixing all the tests seems like a huge effort for little gain right now - but fixing enough that at least this one failing one works again on both would be nice - @spiro-trikaliotis perhaps you can have a look?
1.0
tests that compare stdout and stderr with a reference may fail - Some tests pipe the stdout and stderr of the compiler or assembler into a file and compare the result with a reference. This has the subtle problem that both stdout and stderr and buffered i/o streams that may or may not work the same on different OSs or even shells. That means the order of the lines in the resulting file is not guaranteed. In practise it is "mostly" not an issue, but "we" still stumbled about one test where different behaviour can be triggered depending on running it in cmd.exe or bash (msys): ```test/asm/listing``` output of ```010-paramcount.bin``` differs from the reference output when running the test from cmd.exe. The solution is most likely to have two reference files and not redirect into one file. Fixing all the tests seems like a huge effort for little gain right now - but fixing enough that at least this one failing one works again on both would be nice - @spiro-trikaliotis perhaps you can have a look?
non_infrastructure
tests that compare stdout and stderr with a reference may fail some tests pipe the stdout and stderr of the compiler or assembler into a file and compare the result with a reference this has the subtle problem that both stdout and stderr and buffered i o streams that may or may not work the same on different oss or even shells that means the order of the lines in the resulting file is not guaranteed in practise it is mostly not an issue but we still stumbled about one test where different behaviour can be triggered depending on running it in cmd exe or bash msys test asm listing output of paramcount bin differs from the reference output when running the test from cmd exe the solution is most likely to have two reference files and not redirect into one file fixing all the tests seems like a huge effort for little gain right now but fixing enough that at least this one failing one works again on both would be nice spiro trikaliotis perhaps you can have a look
0
7,564
2,911,047,147
IssuesEvent
2015-06-22 06:15:05
piwik/piwik
https://api.github.com/repos/piwik/piwik
closed
New automated test to detect when the Piwik files become too big
c: Tests & QA
It's happened a few times before that Piwik ZIP package grows by Megabytes, eg. Piwik 2.14.0-b4 package is 30Mb big instead of 14Mb #8144 The goal of this issue is to a new automated test that will count the total size of all files in Piwik codebase (including `vendors/` composer packages), and fail when the total size is greater than what we reasonably expect (eg. current total size + 10% growth margin) This is important because we want all Piwik users to upgrade to latest version and many users have slow connection between their servers and `builds.piwik.org` eg #7280. Therefore each Mb in the ZIP package can add minutes of waiting time for users to wait for Piwik to auto upgrade. Also Mb saved on package size means more people can actually successfully upgrade Piwik as they're less likely to reach network or other timeouts.
1.0
New automated test to detect when the Piwik files become too big - It's happened a few times before that Piwik ZIP package grows by Megabytes, eg. Piwik 2.14.0-b4 package is 30Mb big instead of 14Mb #8144 The goal of this issue is to a new automated test that will count the total size of all files in Piwik codebase (including `vendors/` composer packages), and fail when the total size is greater than what we reasonably expect (eg. current total size + 10% growth margin) This is important because we want all Piwik users to upgrade to latest version and many users have slow connection between their servers and `builds.piwik.org` eg #7280. Therefore each Mb in the ZIP package can add minutes of waiting time for users to wait for Piwik to auto upgrade. Also Mb saved on package size means more people can actually successfully upgrade Piwik as they're less likely to reach network or other timeouts.
non_infrastructure
new automated test to detect when the piwik files become too big it s happened a few times before that piwik zip package grows by megabytes eg piwik package is big instead of the goal of this issue is to a new automated test that will count the total size of all files in piwik codebase including vendors composer packages and fail when the total size is greater than what we reasonably expect eg current total size growth margin this is important because we want all piwik users to upgrade to latest version and many users have slow connection between their servers and builds piwik org eg therefore each mb in the zip package can add minutes of waiting time for users to wait for piwik to auto upgrade also mb saved on package size means more people can actually successfully upgrade piwik as they re less likely to reach network or other timeouts
0
5,897
6,028,801,476
IssuesEvent
2017-06-08 16:30:17
devtools-html/debugger.html
https://api.github.com/repos/devtools-html/debugger.html
closed
Convert to es modules
available difficulty: medium infrastructure
### We are switching to es6 modules for several reasons: * flow uses es6 modules * it's more expressive * we want to use es6 classes [cjs-to-es6](https://github.com/nolanlawson/cjs-to-es6) is a great tool for converting a module ### Steps to convert a module * `cjs-to-es6 src/components/SecondaryPanes/index.js` * update import sites to use `default` ### gotchas #### nested destructuring. watch out for `cjs-to-es6` missing this ```js const { Services: { appinfo }} = require("devtools-modules"); ``` ```js import { Services } from "devtools-module"; const { appinfo } = Services; ``` #### mocking modules The unit tests mock some modules like [`devtools-modules`](https://github.com/devtools-html/debugger.html/blob/master/src/test/node-unit-tests.js#L15). `mock-require` doesn't work with es modules so we should find a work around. #### react-dom For some reason `import dom from "react-dom"` does not work, so just leave that one as is. Ask all questions in our **slack** room. This can potentialy be tricky with edge cases. 
### Progress converting the codebase - [x] actions - [x] breakpoints.js - [x] coverage.js - [x] event-listeners.js - [x] expressions.js - [x] index.js - [x] navigation.js - [x] pause.js - [x] sources.js - [ ] components - [x] App.js - [ ] Editor - [x] Breakpoint.js - [ ] ConditionalPanel.js - [x] Footer.js - [ ] HitMarker.js - [ ] SearchBar.js - [x] Tabs.js - [ ] index.js - [x] Preview.js - [ ] SecondaryPanes - [x] Breakpoints.js - [x] ChromeScopes.js - [x] CommandBar.js - [x] EventListeners.js - [x] Expressions.js - [x] Frames.js - [x] Scopes.js - [x] WhyPaused.js - [ ] index.js - [x] SourceSearch.js - [x] Sources.js - [x] SourcesTree.js - [x] WelcomeBox.js - [ ] shared - [x] Accordion.js - [ ] Autocomplete.js - [x] Button - [x] Close.js - [x] PaneToggle.js - [x] Dropdown.js - [x] ManagedTree.js - [x] ObjectInspector.js - [x] Rep.js - [x] Svg.js - [x] menu.js - [ ] constants.js - [x] feature.js - [ ] global-types.js - [ ] main.js - [ ] panel.js - [x] reducers - [x] async-requests.js - [x] breakpoints.js - [x] coverage.js - [x] event-listeners.js - [x] expressions.js - [x] index.js - [x] pause.js - [x] sources.js - utils - editor - [x] build-query.js - [x] expression.js - [ ] index.js - [x] source-documents.js - [x] source-search.js - redux - middleware - [x] history.js - [x] log.js - [x] promise.js - [x] thunk.js - [x] wait-service.js - pretty-print - [x] worker.js - [x] index.js - [x] DevToolsUtils.js - [x] assert.js - [x] client.js - [x] create-store.js - [x] defer.js - [x] dehydrate-state.js - [x] editor.js - [x] fromJS.js - [x] log.js - [x] makeRecord.js - [x] path.js - [x] pause.js - [x] prefs.js - [x] test-head.js - [x] utils.js
1.0
Convert to es modules - ### We are switching to es6 modules for several reasons: * flow uses es6 modules * it's more expressive * we want to use es6 classes [cjs-to-es6](https://github.com/nolanlawson/cjs-to-es6) is a great tool for converting a module ### Steps to convert a module * `cjs-to-es6 src/components/SecondaryPanes/index.js` * update import sites to use `default` ### gotchas #### nested destructuring. watch out for `cjs-to-es6` missing this ```js const { Services: { appinfo }} = require("devtools-modules"); ``` ```js import { Services } from "devtools-module"; const { appinfo } = Services; ``` #### mocking modules The unit tests mock some modules like [`devtools-modules`](https://github.com/devtools-html/debugger.html/blob/master/src/test/node-unit-tests.js#L15). `mock-require` doesn't work with es modules so we should find a work around. #### react-dom For some reason `import dom from "react-dom"` does not work, so just leave that one as is. Ask all questions in our **slack** room. This can potentialy be tricky with edge cases. 
### Progress converting the codebase - [x] actions - [x] breakpoints.js - [x] coverage.js - [x] event-listeners.js - [x] expressions.js - [x] index.js - [x] navigation.js - [x] pause.js - [x] sources.js - [ ] components - [x] App.js - [ ] Editor - [x] Breakpoint.js - [ ] ConditionalPanel.js - [x] Footer.js - [ ] HitMarker.js - [ ] SearchBar.js - [x] Tabs.js - [ ] index.js - [x] Preview.js - [ ] SecondaryPanes - [x] Breakpoints.js - [x] ChromeScopes.js - [x] CommandBar.js - [x] EventListeners.js - [x] Expressions.js - [x] Frames.js - [x] Scopes.js - [x] WhyPaused.js - [ ] index.js - [x] SourceSearch.js - [x] Sources.js - [x] SourcesTree.js - [x] WelcomeBox.js - [ ] shared - [x] Accordion.js - [ ] Autocomplete.js - [x] Button - [x] Close.js - [x] PaneToggle.js - [x] Dropdown.js - [x] ManagedTree.js - [x] ObjectInspector.js - [x] Rep.js - [x] Svg.js - [x] menu.js - [ ] constants.js - [x] feature.js - [ ] global-types.js - [ ] main.js - [ ] panel.js - [x] reducers - [x] async-requests.js - [x] breakpoints.js - [x] coverage.js - [x] event-listeners.js - [x] expressions.js - [x] index.js - [x] pause.js - [x] sources.js - utils - editor - [x] build-query.js - [x] expression.js - [ ] index.js - [x] source-documents.js - [x] source-search.js - redux - middleware - [x] history.js - [x] log.js - [x] promise.js - [x] thunk.js - [x] wait-service.js - pretty-print - [x] worker.js - [x] index.js - [x] DevToolsUtils.js - [x] assert.js - [x] client.js - [x] create-store.js - [x] defer.js - [x] dehydrate-state.js - [x] editor.js - [x] fromJS.js - [x] log.js - [x] makeRecord.js - [x] path.js - [x] pause.js - [x] prefs.js - [x] test-head.js - [x] utils.js
infrastructure
convert to es modules we are switching to modules for several reasons flow uses modules it s more expressive we want to use classes is a great tool for converting a module steps to convert a module cjs to src components secondarypanes index js update import sites to use default gotchas nested destructuring watch out for cjs to missing this js const services appinfo require devtools modules js import services from devtools module const appinfo services mocking modules the unit tests mock some modules like mock require doesn t work with es modules so we should find a work around react dom for some reason import dom from react dom does not work so just leave that one as is ask all questions in our slack room this can potentialy be tricky with edge cases progress converting the codebase actions breakpoints js coverage js event listeners js expressions js index js navigation js pause js sources js components app js editor breakpoint js conditionalpanel js footer js hitmarker js searchbar js tabs js index js preview js secondarypanes breakpoints js chromescopes js commandbar js eventlisteners js expressions js frames js scopes js whypaused js index js sourcesearch js sources js sourcestree js welcomebox js shared accordion js autocomplete js button close js panetoggle js dropdown js managedtree js objectinspector js rep js svg js menu js constants js feature js global types js main js panel js reducers async requests js breakpoints js coverage js event listeners js expressions js index js pause js sources js utils editor build query js expression js index js source documents js source search js redux middleware history js log js promise js thunk js wait service js pretty print worker js index js devtoolsutils js assert js client js create store js defer js dehydrate state js editor js fromjs js log js makerecord js path js pause js prefs js test head js utils js
1
31,618
25,941,107,465
IssuesEvent
2022-12-16 18:29:06
SimpleITK/SimpleITK
https://api.github.com/repos/SimpleITK/SimpleITK
closed
problem installing latest development wheels directly from releases page
Infrastructure
The links to the Python wheel artifacts are no longer part of the web page, so the `pip install --find-links...` doesn't work. The solution is to explicitly add invisible links to the page. The only constraint is that there are no line-breaks between the links. When there are, it results in a significant amount of empty lines. The template for the invisible links is: ``` <a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/WHL_FILE_NAME"></a> ``` The end result should be a single line inserted under the `pip install...` instructions: ``` <a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-macosx_10_9_x86_64.whl"></a><a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"></a><a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-win_amd64.whl"></a>... ```
1.0
problem installing latest development wheels directly from releases page - The links to the Python wheel artifacts are no longer part of the web page, so the `pip install --find-links...` doesn't work. The solution is to explicitly add invisible links to the page. The only constraint is that there are no line-breaks between the links. When there are, it results in a significant amount of empty lines. The template for the invisible links is: ``` <a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/WHL_FILE_NAME"></a> ``` The end result should be a single line inserted under the `pip install...` instructions: ``` <a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-macosx_10_9_x86_64.whl"></a><a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl"></a><a href="https://github.com/SimpleITK/SimpleITK/releases/download/latest/SimpleITK-2.3.0.dev20-cp310-cp310-win_amd64.whl"></a>... ```
infrastructure
problem installing latest development wheels directly from releases page the links to the python wheel artifacts are no longer part of the web page so the pip install find links doesn t work the solution is to explicitly add invisible links to the page the only constraint is that there are no line breaks between the links when there are it results in a significant amount of empty lines the template for the invisible links is a href the end result should be a single line inserted under the pip install instructions a href href href
1
360,145
10,684,724,096
IssuesEvent
2019-10-22 11:08:09
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
developer.mozilla.org - site is not usable
browser-firefox engine-gecko priority-important
<!-- @browser: Firefox Klar (Focus) 8.0.23 (Build #332871437 Gecko 71.0a1-20191003093956) --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 --> <!-- @reported_with: --> **URL**: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select **Browser / Version**: Firefox Klar (Focus) 8.0.23 (Build #332871437 Gecko 71.0a1-20191003093956) **Operating System**: Android 9 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: <select> element doesn't work **Steps to Reproduce**: The HTML Demo of the <select> element doesn't fully work: When I tap it, a list of selectable items should be shown, but this doesn't happen. Instead, only a dotted border appears around the element's current value (to indicate that the element has been interacted with). <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
developer.mozilla.org - site is not usable - <!-- @browser: Firefox Klar (Focus) 8.0.23 (Build #332871437 Gecko 71.0a1-20191003093956) --> <!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:71.0) Gecko/71.0 Firefox/71.0 --> <!-- @reported_with: --> **URL**: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select **Browser / Version**: Firefox Klar (Focus) 8.0.23 (Build #332871437 Gecko 71.0a1-20191003093956) **Operating System**: Android 9 **Tested Another Browser**: Yes **Problem type**: Site is not usable **Description**: <select> element doesn't work **Steps to Reproduce**: The HTML Demo of the <select> element doesn't fully work: When I tap it, a list of selectable items should be shown, but this doesn't happen. Instead, only a dotted border appears around the element's current value (to indicate that the element has been interacted with). <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_infrastructure
developer mozilla org site is not usable url browser version firefox klar focus build gecko operating system android tested another browser yes problem type site is not usable description element doesn t work steps to reproduce the html demo of the element doesn t fully work when i tap it a list of selectable items should be shown but this doesn t happen instead only a dotted border appears around the element s current value to indicate that the element has been interacted with browser configuration none from with ❤️
0
28,680
23,439,678,225
IssuesEvent
2022-08-15 13:46:07
cylc/cylc-doc
https://api.github.com/repos/cylc/cylc-doc
closed
Add 7.x and 8.x version symlinks to build?
infrastructure
We use symlinks in the build process to allow navigation to https://cylc.github.io/cylc-doc/latest and https://cylc.github.io/cylc-doc/stable. Currently these links are `latest -> 8.0rc1` and `stable -> 7.9.3`. But when Cylc 8.0.0 is released, we'll want `stable -> 8.0.0`. So, to maintain easy access to the latest Cylc 7 docs, we should consider adding new symlinks `8.x` and `7.x`. --- Note however that the latest Cylc 7 version is actually 7.9.6 - we haven't deployed docs for the last few maintenance releases either because we forgot or nothing has changed in the docs.
1.0
Add 7.x and 8.x version symlinks to build? - We use symlinks in the build process to allow navigation to https://cylc.github.io/cylc-doc/latest and https://cylc.github.io/cylc-doc/stable. Currently these links are `latest -> 8.0rc1` and `stable -> 7.9.3`. But when Cylc 8.0.0 is released, we'll want `stable -> 8.0.0`. So, to maintain easy access to the latest Cylc 7 docs, we should consider adding new symlinks `8.x` and `7.x`. --- Note however that the latest Cylc 7 version is actually 7.9.6 - we haven't deployed docs for the last few maintenance releases either because we forgot or nothing has changed in the docs.
infrastructure
add x and x version symlinks to build we use symlinks in the build process to allow navigation to and currently these links are latest and stable but when cylc is released we ll want stable so to maintain easy access to the latest cylc docs we should consider adding new symlinks x and x note however that the latest cylc version is actually we haven t deployed docs for the last few maintenance releases either because we forgot or nothing has changed in the docs
1
53,108
27,974,074,569
IssuesEvent
2023-03-25 11:11:17
Blinue/Magpie
https://api.github.com/repos/Blinue/Magpie
closed
打开游戏内显示时帧数异常
area: performance
### Magpie version 程序版本 0.10.0-preview2 ### Windows version 系统版本 win11 22621.1344 ### Related screenshot (optional) 相关截图(可选) ![20230303_000330](https://user-images.githubusercontent.com/93318731/222485900-82994f4d-8dce-4564-9ab0-a3b0b89cb388.gif) ![20230303_000407](https://user-images.githubusercontent.com/93318731/222486696-cd528b9f-ec8b-4c98-adda-52acd1a332d4.gif) 如图所示,第一张为关闭垂直同步时,正常状态下为90fps左右,打开游戏内覆盖后降至60fps,显卡占用率明显下降;第二张为打开垂直同步时(显示器刷新率为75hz),正常状态下为50-60fps,显卡占用率低(这里有第二个问题,实际渲染帧数可以超过刷新率,但开启垂直同步后帧率异常的低),打开游戏内覆盖后反而升至75hz ### Reproduction steps 复现步骤 如图所示 ### Log files 日志文件 [magpie.log](https://github.com/Blinue/Magpie/files/10873387/magpie.log) [magpie.1.log](https://github.com/Blinue/Magpie/files/10873388/magpie.1.log) [magpie.2.log](https://github.com/Blinue/Magpie/files/10873389/magpie.2.log)
True
打开游戏内显示时帧数异常 - ### Magpie version 程序版本 0.10.0-preview2 ### Windows version 系统版本 win11 22621.1344 ### Related screenshot (optional) 相关截图(可选) ![20230303_000330](https://user-images.githubusercontent.com/93318731/222485900-82994f4d-8dce-4564-9ab0-a3b0b89cb388.gif) ![20230303_000407](https://user-images.githubusercontent.com/93318731/222486696-cd528b9f-ec8b-4c98-adda-52acd1a332d4.gif) 如图所示,第一张为关闭垂直同步时,正常状态下为90fps左右,打开游戏内覆盖后降至60fps,显卡占用率明显下降;第二张为打开垂直同步时(显示器刷新率为75hz),正常状态下为50-60fps,显卡占用率低(这里有第二个问题,实际渲染帧数可以超过刷新率,但开启垂直同步后帧率异常的低),打开游戏内覆盖后反而升至75hz ### Reproduction steps 复现步骤 如图所示 ### Log files 日志文件 [magpie.log](https://github.com/Blinue/Magpie/files/10873387/magpie.log) [magpie.1.log](https://github.com/Blinue/Magpie/files/10873388/magpie.1.log) [magpie.2.log](https://github.com/Blinue/Magpie/files/10873389/magpie.2.log)
non_infrastructure
打开游戏内显示时帧数异常 magpie version 程序版本 windows version 系统版本 related screenshot optional 相关截图(可选) 如图所示,第一张为关闭垂直同步时, , ,显卡占用率明显下降;第二张为打开垂直同步时( ), ,显卡占用率低(这里有第二个问题,实际渲染帧数可以超过刷新率,但开启垂直同步后帧率异常的低), reproduction steps 复现步骤 如图所示 log files 日志文件
0
17,875
12,678,643,006
IssuesEvent
2020-06-19 10:08:50
libero/reviewer
https://api.github.com/repos/libero/reviewer
closed
Define Release Strategy for Production
Infrastructure discussion
Definition of Done: - [x] how and when do we release to production Might require spin off of new issue to set up automation for this. Probably depends on https://github.com/elifesciences/issues/issues/5519
1.0
Define Release Strategy for Production - Definition of Done: - [x] how and when do we release to production Might require spin off of new issue to set up automation for this. Probably depends on https://github.com/elifesciences/issues/issues/5519
infrastructure
define release strategy for production definition of done how and when do we release to production might require spin off of new issue to set up automation for this probably depends on
1
93,763
10,774,936,083
IssuesEvent
2019-11-03 10:39:27
PistonDevelopers/conrod
https://api.github.com/repos/PistonDevelopers/conrod
closed
Where is the macro hiding? (image_map!)
documentation
Hello! On this page: https://docs.rs/conrod/0.61.1/conrod/image/struct.Map.html it says there is a macro called `image_map!` I couldn't find it in all of conrods source code. Am I searching wrong or is the doc outdated and misleading?
1.0
Where is the macro hiding? (image_map!) - Hello! On this page: https://docs.rs/conrod/0.61.1/conrod/image/struct.Map.html it says there is a macro called `image_map!` I couldn't find it in all of conrods source code. Am I searching wrong or is the doc outdated and misleading?
non_infrastructure
where is the macro hiding image map hello on this page it says there is a macro called image map i couldn t find it in all of conrods source code am i searching wrong or is the doc outdated and misleading
0
8,746
7,606,715,029
IssuesEvent
2018-04-30 14:16:38
servo/servo
https://api.github.com/repos/servo/servo
opened
Add more information to automated WPT sync PR
A-infrastructure A-testing
PRs like https://github.com/servo/servo/pull/20715 are easiest to understand if the reviewer has a lot of prior context. It would be great to create PRs with information like this: ``` New passing tests: - foo/some-test.html - bar/some-other-test.html New tests with unexpected results: - foo/some-test-that-fails-in-servo.html - bar/some-other-test-that-fails-in-servo.html Existing tests that now pass: - foo/some-test-that-was-updated-that-now-passes.html Existing tests with new unexpected results: - bar/some-test-that-was-updated-that-now-fails-in-servo.html Tests that were removed: - foo/some-removed-test.html - bar/some-other-removed-test.html Known intermittent test results: - foo/some-intermittent-test.html.ini - bar/some-other-intermittent-test.html.ini ``` This information could be derived from looking at the git commit and performing the following filtering: * for each ini file in the commit, does the test [match a known intermittent failure](https://github.com/servo/servo/blob/bf667677f75cd3f56fff3a91f73c21ba1e4705af/python/servo/testing_commands.py#L495-L504)? if so, mark it as a known intermittent result. otherwise, if the ini file is being removed, mark it as "existing test that now passes". otherwise, mark it as "existing test with new unexpected results". * for each new test file created, is there a corresponding ini file created? if so, mark it as "new test with unexpected results". otherwise, mark it as "new passing test". * for each test file removed, mark it as "test that was removed".
1.0
Add more information to automated WPT sync PR - PRs like https://github.com/servo/servo/pull/20715 are easiest to understand if the reviewer has a lot of prior context. It would be great to create PRs with information like this: ``` New passing tests: - foo/some-test.html - bar/some-other-test.html New tests with unexpected results: - foo/some-test-that-fails-in-servo.html - bar/some-other-test-that-fails-in-servo.html Existing tests that now pass: - foo/some-test-that-was-updated-that-now-passes.html Existing tests with new unexpected results: - bar/some-test-that-was-updated-that-now-fails-in-servo.html Tests that were removed: - foo/some-removed-test.html - bar/some-other-removed-test.html Known intermittent test results: - foo/some-intermittent-test.html.ini - bar/some-other-intermittent-test.html.ini ``` This information could be derived from looking at the git commit and performing the following filtering: * for each ini file in the commit, does the test [match a known intermittent failure](https://github.com/servo/servo/blob/bf667677f75cd3f56fff3a91f73c21ba1e4705af/python/servo/testing_commands.py#L495-L504)? if so, mark it as a known intermittent result. otherwise, if the ini file is being removed, mark it as "existing test that now passes". otherwise, mark it as "existing test with new unexpected results". * for each new test file created, is there a corresponding ini file created? if so, mark it as "new test with unexpected results". otherwise, mark it as "new passing test". * for each test file removed, mark it as "test that was removed".
infrastructure
add more information to automated wpt sync pr prs like are easiest to understand if the reviewer has a lot of prior context it would be great to create prs with information like this new passing tests foo some test html bar some other test html new tests with unexpected results foo some test that fails in servo html bar some other test that fails in servo html existing tests that now pass foo some test that was updated that now passes html existing tests with new unexpected results bar some test that was updated that now fails in servo html tests that were removed foo some removed test html bar some other removed test html known intermittent test results foo some intermittent test html ini bar some other intermittent test html ini this information could be derived from looking at the git commit and performing the following filtering for each ini file in the commit does the test if so mark it as a known intermittent result otherwise if the ini file is being removed mark it as existing test that now passes otherwise mark it as existing test with new unexpected results for each new test file created is there a corresponding ini file created if so mark it as new test with unexpected results otherwise mark it as new passing test for each test file removed mark it as test that was removed
1
16,517
21,527,202,109
IssuesEvent
2022-04-28 19:44:36
googleapis/google-cloud-php-eventarc-publishing
https://api.github.com/repos/googleapis/google-cloud-php-eventarc-publishing
closed
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * must have required property 'library_type' in .repo-metadata.json * client_documentation must match pattern "^https://.*" in .repo-metadata.json * release_level must be equal to one of the allowed values in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * must have required property 'library_type' in .repo-metadata.json * client_documentation must match pattern "^https://.*" in .repo-metadata.json * release_level must be equal to one of the allowed values in .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
non_infrastructure
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json client documentation must match pattern in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
0
26,955
20,958,653,204
IssuesEvent
2022-03-27 13:18:46
BibleBot/BibleBot
https://api.github.com/repos/BibleBot/BibleBot
closed
Automatic daily verse is duplicated from load balancing backend.
bug priority::high service service::infrastructure
**The bot sends the Daily Verse twice** I am not repeating myself a third time **To Reproduce** Steps to reproduce the behavior: 1. Do "+dailyverse set [time two minutes from the actual time]" 2. See error **Expected behavior** A single daily verse **Screenshots** https://cdn.discordapp.com/attachments/400855639808540673/874266387416240139/image0.jpg
1.0
Automatic daily verse is duplicated from load balancing backend. - **The bot sends the Daily Verse twice** I am not repeating myself a third time **To Reproduce** Steps to reproduce the behavior: 1. Do "+dailyverse set [time two minutes from the actual time]" 2. See error **Expected behavior** A single daily verse **Screenshots** https://cdn.discordapp.com/attachments/400855639808540673/874266387416240139/image0.jpg
infrastructure
automatic daily verse is duplicated from load balancing backend the bot sends the daily verse twice i am not repeating myself a third time to reproduce steps to reproduce the behavior do dailyverse set see error expected behavior a single daily verse screenshots
1
710,224
24,411,352,240
IssuesEvent
2022-10-05 12:39:35
OpenQDev/OpenQ-Frontend
https://api.github.com/repos/OpenQDev/OpenQ-Frontend
closed
[Mobile UI] Label list cut
LOW PRIORITY
When adding labels to search, when the field is too small for all the labels, it doesn't show the latest addition(s) I suggest to either wrap the search field to show all search labels etc. OR move the cursor of the field to the end, so that it always shows the latest addition. ![image](https://user-images.githubusercontent.com/75732239/193791250-29617be6-93bd-46c3-a1ef-eda9d81e2cf6.png)
1.0
[Mobile UI] Label list cut - When adding labels to search, when the field is too small for all the labels, it doesn't show the latest addition(s) I suggest to either wrap the search field to show all search labels etc. OR move the cursor of the field to the end, so that it always shows the latest addition. ![image](https://user-images.githubusercontent.com/75732239/193791250-29617be6-93bd-46c3-a1ef-eda9d81e2cf6.png)
non_infrastructure
label list cut when adding labels to search when the field is too small for all the labels it doesn t show the latest addition s i suggest to either wrap the search field to show all search labels etc or move the cursor of the field to the end so that it always shows the latest addition
0
19,340
13,220,178,363
IssuesEvent
2020-08-17 11:56:00
gnosis/safe-ios
https://api.github.com/repos/gnosis/safe-ios
closed
[iOS] Freshly cloned repository can't be built due to Crashlytics script error
bug infrastructure
Crashlytics needs configuration file which doesn't exist. To fix, we need to skip crashlytics build phase when in Debug configuration or when the Firebase configuration file does not exist # How to test Clone the repository in a fresh location, run the config script, then try to build the app (see README https://github.com/gnosis/safe-ios)
1.0
[iOS] Freshly cloned repository can't be built due to Crashlytics script error - Crashlytics needs configuration file which doesn't exist. To fix, we need to skip crashlytics build phase when in Debug configuration or when the Firebase configuration file does not exist # How to test Clone the repository in a fresh location, run the config script, then try to build the app (see README https://github.com/gnosis/safe-ios)
infrastructure
freshly cloned repository can t be built due to crashlytics script error crashlytics needs configuration file which doesn t exist to fix we need to skip crashlytics build phase when in debug configuration or when the firebase configuration file does not exist how to test clone the repository in a fresh location run the config script then try to build the app see readme
1
22,490
15,219,039,925
IssuesEvent
2021-02-17 18:38:52
TheIOFoundation/TIOF
https://api.github.com/repos/TheIOFoundation/TIOF
opened
[ADM] Looking for: New Short URL platform.
Project: TIOF ⚙ Team: Infrastructure ✔ Stage: Ready 💧 Priority: Medium 🛠 Need: Tool
<a id="top"></a> ![logo](https://user-images.githubusercontent.com/9198668/103214045-6c668e00-494a-11eb-94bb-4246857b8380.png) # INSTRUCTIONS - Fill up this template (be as accurate as possible) - Review Labels. You should at least have the following: -- Need: Tool [MANDATORY] -- Stage: Assign the corresponding one [MANDATORY] -- Flag: Good First Issue [OPTIONAL] -- Keyword: Assign the corresponding ones [OPTIONAL] -- Module: Assign the corresponding ones [MANDATORY] -- Priority: Assign the corresponding one [MANDATORY] -- Project: TIOF [MANDATORY] -- Module: Assign the corresponding one [MANDATORY] -- Team: Assign the corresponding ones [MANDATORY] -- Assignees: Assign the corresponding ones [OPTIONAL] Once the Task is filled up PLEASE DELETE THIS INSTRUCTIONS BLOCK --- **Problem** The current implementation (YOURLS - self hosted) is not scalable and the admin interface is not really user friendly. We have experienced issues on features (such as case sensitivity or QS params not working or fallback URL not working) and need more control from an API perspective for instance. Plugins are not always working and it takes just too long to handle it properly. **Objectives** Migrate to another platform to have better control on the Short URLs. Migrate: - [ ] TIOF.Click - [ ] DoThe.Click **Requirements** - Must have -- API -- QR Code generator -- Multidomain -- Stats -- Custom slugs -- Able to redirect parameters -- Reports - Should have -- - Could have -- Freeware / Open Source -- Usage limit by parameters -- Plugins -- Fallback URL -- Case insensitive -- Being able to track with Matomo -- Tags on links - Won't have -- Nil **Resources** Interesting options at the moment: - https://kutt.it/ - https://shlink.io/ **Related Issues**
1.0
[ADM] Looking for: New Short URL platform. - <a id="top"></a> ![logo](https://user-images.githubusercontent.com/9198668/103214045-6c668e00-494a-11eb-94bb-4246857b8380.png) # INSTRUCTIONS - Fill up this template (be as accurate as possible) - Review Labels. You should at least have the following: -- Need: Tool [MANDATORY] -- Stage: Assign the corresponding one [MANDATORY] -- Flag: Good First Issue [OPTIONAL] -- Keyword: Assign the corresponding ones [OPTIONAL] -- Module: Assign the corresponding ones [MANDATORY] -- Priority: Assign the corresponding one [MANDATORY] -- Project: TIOF [MANDATORY] -- Module: Assign the corresponding one [MANDATORY] -- Team: Assign the corresponding ones [MANDATORY] -- Assignees: Assign the corresponding ones [OPTIONAL] Once the Task is filled up PLEASE DELETE THIS INSTRUCTIONS BLOCK --- **Problem** The current implementation (YOURLS - self hosted) is not scalable and the admin interface is not really user friendly. We have experienced issues on features (such as case sensitivity or QS params not working or fallback URL not working) and need more control from an API perspective for instance. Plugins are not always working and it takes just too long to handle it properly. **Objectives** Migrate to another platform to have better control on the Short URLs. Migrate: - [ ] TIOF.Click - [ ] DoThe.Click **Requirements** - Must have -- API -- QR Code generator -- Multidomain -- Stats -- Custom slugs -- Able to redirect parameters -- Reports - Should have -- - Could have -- Freeware / Open Source -- Usage limit by parameters -- Plugins -- Fallback URL -- Case insensitive -- Being able to track with Matomo -- Tags on links - Won't have -- Nil **Resources** Interesting options at the moment: - https://kutt.it/ - https://shlink.io/ **Related Issues**
infrastructure
looking for new short url platform instructions fill up this template be as accurate as possible review labels you should at least have the following need tool stage assign the corresponding one flag good first issue keyword assign the corresponding ones module assign the corresponding ones priority assign the corresponding one project tiof module assign the corresponding one team assign the corresponding ones assignees assign the corresponding ones once the task is filled up please delete this instructions block problem the current implementation yourls self hosted is not scalable and the admin interface is not really user friendly we have experienced issues on features such as case sensitivity or qs params not working or fallback url not working and need more control from an api perspective for instance plugins are not always working and it takes just too long to handle it properly objectives migrate to another platform to have better control on the short urls migrate tiof click dothe click requirements must have api qr code generator multidomain stats custom slugs able to redirect parameters reports should have could have freeware open source usage limit by parameters plugins fallback url case insensitive being able to track with matomo tags on links won t have nil resources interesting options at the moment related issues
1
24,666
17,581,560,293
IssuesEvent
2021-08-16 08:09:59
Spine-project/Spine-Toolbox
https://api.github.com/repos/Spine-project/Spine-Toolbox
closed
Support passing data between project items
Feature Infrastructure Project items
In GitLab by @ererkka on Sep 30, 2019, 10:14 Project items should be able to pass objects in memory to descendant items. * [x] Create example project with two tools * [ ] Validate connections between project items Related to #323
1.0
Support passing data between project items - In GitLab by @ererkka on Sep 30, 2019, 10:14 Project items should be able to pass objects in memory to descendant items. * [x] Create example project with two tools * [ ] Validate connections between project items Related to #323
infrastructure
support passing data between project items in gitlab by ererkka on sep project items should be able to pass objects in memory to descendant items create example project with two tools validate connections between project items related to
1
75,638
9,305,250,008
IssuesEvent
2019-03-25 05:37:50
processing/processing-pi-website
https://api.github.com/repos/processing/processing-pi-website
closed
Design: Article layout
design
The different sections would benefit from some breathing room before/after. It might also be worth limiting the (maximum-) width of the text in paragraphs. Right now the text will go as wide as the container is, but I believe it's best not to go above 90 characters per line for readability. <hr> <img width="966" alt="screen shot 2018-05-30 at 10 36 12 am" src="https://user-images.githubusercontent.com/4945451/40737387-71754ebe-63f5-11e8-8c53-471b5e14216b.png">
1.0
Design: Article layout - The different sections would benefit from some breathing room before/after. It might also be worth limiting the (maximum-) width of the text in paragraphs. Right now the text will go as wide as the container is, but I believe it's best not to go above 90 characters per line for readability. <hr> <img width="966" alt="screen shot 2018-05-30 at 10 36 12 am" src="https://user-images.githubusercontent.com/4945451/40737387-71754ebe-63f5-11e8-8c53-471b5e14216b.png">
non_infrastructure
design article layout the different sections would benefit from some breathing room before after it might also be worth limiting the maximum width of the text in paragraphs right now the text will go as wide as the container is but i believe it s best not to go above characters per line for readability img width alt screen shot at am src
0
93,928
8,454,327,827
IssuesEvent
2018-10-21 01:31:51
statsmodels/statsmodels
https://api.github.com/repos/statsmodels/statsmodels
opened
TST: missing seed and test failure in qsturng
comp-stats type-test
try except leaves a variable undefined test error in https://travis-ci.org/statsmodels/statsmodels/jobs/444183621 ``` statsmodels/stats/libqsturng/qsturng_.py:731: in _qsturng r0_sq = _interpolate_p(p, r, v0)**2 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ p = 0.902896898398093, r = 9, v = 1 def _interpolate_p(p, r, v): """ interpolates p based on the values in the A table for the scalar value of r and the scalar value of v """ # interpolate p (v should be in table) # if .5 < p < .75 use linear interpolation in q # if p > .75 use quadratic interpolation in log(y + r/v) # by -1. / (1. + 1.5 * _phi((1. + p)/2.)) # find the 3 closest v values p0, p1, p2 = _select_ps(p) try: y0 = _func(A[(p0, v)], p0, r, v) + 1. except: print(p,r,v) y1 = _func(A[(p1, v)], p1, r, v) + 1. y2 = _func(A[(p2, v)], p2, r, v) + 1. > y_log0 = math.log(y0 + float(r)/float(v)) E UnboundLocalError: local variable 'y0' referenced before assignment ```
1.0
TST: missing seed and test failure in qsturng - try except leaves a variable undefined test error in https://travis-ci.org/statsmodels/statsmodels/jobs/444183621 ``` statsmodels/stats/libqsturng/qsturng_.py:731: in _qsturng r0_sq = _interpolate_p(p, r, v0)**2 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ p = 0.902896898398093, r = 9, v = 1 def _interpolate_p(p, r, v): """ interpolates p based on the values in the A table for the scalar value of r and the scalar value of v """ # interpolate p (v should be in table) # if .5 < p < .75 use linear interpolation in q # if p > .75 use quadratic interpolation in log(y + r/v) # by -1. / (1. + 1.5 * _phi((1. + p)/2.)) # find the 3 closest v values p0, p1, p2 = _select_ps(p) try: y0 = _func(A[(p0, v)], p0, r, v) + 1. except: print(p,r,v) y1 = _func(A[(p1, v)], p1, r, v) + 1. y2 = _func(A[(p2, v)], p2, r, v) + 1. > y_log0 = math.log(y0 + float(r)/float(v)) E UnboundLocalError: local variable 'y0' referenced before assignment ```
non_infrastructure
tst missing seed and test failure in qsturng try except leaves a variable undefined test error in statsmodels stats libqsturng qsturng py in qsturng sq interpolate p p r p r v def interpolate p p r v interpolates p based on the values in the a table for the scalar value of r and the scalar value of v interpolate p v should be in table if p use linear interpolation in q if p use quadratic interpolation in log y r v by phi p find the closest v values select ps p try func a r v except print p r v func a r v func a r v y math log float r float v e unboundlocalerror local variable referenced before assignment
0
16,629
12,069,558,989
IssuesEvent
2020-04-16 16:13:48
lampepfl/dotty
https://api.github.com/repos/lampepfl/dotty
opened
The CI should automatically open an issue in case of failure
area:infrastructure itype:enhancement
We keep having our nightly builds fail for multiple days until someone realizes that they're broken (latest example: https://gitter.im/lampepfl/dotty?at=5e9880a963e7b73a5fdd7770). Can we set up github actions to automatically open an issue when that happen so we can be faster at fixing these things? A quick google leads me to https://github.com/JasonEtco/create-an-issue.
1.0
The CI should automatically open an issue in case of failure - We keep having our nightly builds fail for multiple days until someone realizes that they're broken (latest example: https://gitter.im/lampepfl/dotty?at=5e9880a963e7b73a5fdd7770). Can we set up github actions to automatically open an issue when that happen so we can be faster at fixing these things? A quick google leads me to https://github.com/JasonEtco/create-an-issue.
infrastructure
the ci should automatically open an issue in case of failure we keep having our nightly builds fail for multiple days until someone realizes that they re broken latest example can we set up github actions to automatically open an issue when that happen so we can be faster at fixing these things a quick google leads me to
1
9,570
8,034,836,978
IssuesEvent
2018-07-29 23:36:48
APSIMInitiative/ApsimX
https://api.github.com/repos/APSIMInitiative/ApsimX
closed
R component in APSIM
interface/infrastructure newfeature
We need the ability to run arbitrary R code from APSIM. Users without an existing R installation will be able to run APSIM as usual, but will not be able to use this feature.
1.0
R component in APSIM - We need the ability to run arbitrary R code from APSIM. Users without an existing R installation will be able to run APSIM as usual, but will not be able to use this feature.
infrastructure
r component in apsim we need the ability to run arbitrary r code from apsim users without an existing r installation will be able to run apsim as usual but will not be able to use this feature
1
13,701
10,427,357,181
IssuesEvent
2019-09-16 19:44:56
dotnet/core-setup
https://api.github.com/repos/dotnet/core-setup
closed
Build intermediary static libraries to speed up compilation
area-Infrastructure enhancement up for grabs
Some files are used by multiple targets in corehost. Mainly: ``` roll_forward_option.cpp runtime_config.cpp json/casablanca/src/json/json.cpp json/casablanca/src/json/json_parsing.cpp json/casablanca/src/json/json_serialization.cpp json/casablanca/src/utilities/asyncrt_utils.cpp fxr/fx_ver.cpp host_startup_info.cpp deps_format.cpp deps_entry.cpp fx_definition.cpp fx_reference.cpp version.cpp version_compatibility_range.cpp ``` (Other candidates include `trace.cpp`, `utils.cpp`, and the PAL.) Those are built twice, when building `hostpolicy` and `hostfxr`. It might be a better idea to create a `common` static library that's linked against those targets.
1.0
Build intermediary static libraries to speed up compilation - Some files are used by multiple targets in corehost. Mainly: ``` roll_forward_option.cpp runtime_config.cpp json/casablanca/src/json/json.cpp json/casablanca/src/json/json_parsing.cpp json/casablanca/src/json/json_serialization.cpp json/casablanca/src/utilities/asyncrt_utils.cpp fxr/fx_ver.cpp host_startup_info.cpp deps_format.cpp deps_entry.cpp fx_definition.cpp fx_reference.cpp version.cpp version_compatibility_range.cpp ``` (Other candidates include `trace.cpp`, `utils.cpp`, and the PAL.) Those are built twice, when building `hostpolicy` and `hostfxr`. It might be a better idea to create a `common` static library that's linked against those targets.
infrastructure
build intermediary static libraries to speed up compilation some files are used by multiple targets in corehost mainly roll forward option cpp runtime config cpp json casablanca src json json cpp json casablanca src json json parsing cpp json casablanca src json json serialization cpp json casablanca src utilities asyncrt utils cpp fxr fx ver cpp host startup info cpp deps format cpp deps entry cpp fx definition cpp fx reference cpp version cpp version compatibility range cpp other candidates include trace cpp utils cpp and the pal those are built twice when building hostpolicy and hostfxr it might be a better idea to create a common static library that s linked against those targets
1
26,749
20,628,167,926
IssuesEvent
2022-03-08 01:58:51
zulip/zulip-terminal
https://api.github.com/repos/zulip/zulip-terminal
closed
Installing zulip-term in Python 3.10 fallbacks to zulip-term 0.2.1
bug area: infrastructure
As previously mentioned in #1143, newer versions of zulip installed through `pip` fail to start due to the following error: ``` Traceback (most recent call last): return self.do_api_query(marshalled_request, versioned_url, method=method, File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 501, in do_api_query File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/cli/run.py", line 123, in main Controller(zuliprc_path, zterm['theme']).main() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 618, in call_endpoint File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/core.py", line 30, in __init__ self.model = Model(self) File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/model.py", line 40, in __init__ self._update_initial_data() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session session.headers.update({"User-agent": self.get_user_agent()}) self.ensure_session() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session return self.do_api_query(marshalled_request, versioned_url, method=method, File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/model.py", line 159, in _update_initial_data raise urwid.ExitMainLoop() session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 501, in do_api_query File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent vendor, vendor_version, dummy = platform.linux_distribution() vendor, vendor_version, dummy = platform.linux_distribution() 
AttributeError: module 'platform' has no attribute 'linux_distribution' urwid.main_loop.ExitMainLoop AttributeError: module 'platform' has no attribute 'linux_distribution' self.ensure_session() vendor, vendor_version, dummy = platform.linux_distribution() Zulip Terminal has crashed! File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session AttributeError: module 'platform' has no attribute 'linux_distribution' You can ask for help at: session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent https://chat.zulip.org/#narrow/stream/206-zulip-terminal ``` --- ## Some debugging Looking at the end of installation log, zulip-term-0.2.1 is installed instead of 0.6.0: ```console $ pip install zulip-term ... Installing collected packages: urwid, typing, zulip, urwid-readline, emoji, zulip-term Successfully installed emoji-0.5.0 typing-3.6.4 urwid-2.0.1 urwid-readline-0.7 zulip-0.4.7 zulip-term-0.2.1 ``` Getting the 0.6.0 .whl file and building it manually shows that 0.6.0 is incompatible with Python 3.10: ```console $ pip wheel zulip_term-0.6.0+git-py3-none-any.whl Processing ./zulip_term-0.6.0+git-py3-none-any.whl ... ERROR: Package 'zulip-term' requires a different Python: 3.10.1 not in '<3.10,>=3.6' ``` Which might be the cause, maybe it is falling back to a previous version which apparently didn't had any Python restrictions (at least checking [0.2.0 tagged commit's file tree](https://github.com/zulip/zulip-terminal/tree/0.2.0)).
1.0
Installing zulip-term in Python 3.10 fallbacks to zulip-term 0.2.1 - As previously mentioned in #1143, newer versions of zulip installed through `pip` fail to start due to the following error: ``` Traceback (most recent call last): return self.do_api_query(marshalled_request, versioned_url, method=method, File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 501, in do_api_query File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/cli/run.py", line 123, in main Controller(zuliprc_path, zterm['theme']).main() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 618, in call_endpoint File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/core.py", line 30, in __init__ self.model = Model(self) File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/model.py", line 40, in __init__ self._update_initial_data() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session session.headers.update({"User-agent": self.get_user_agent()}) self.ensure_session() File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session return self.do_api_query(marshalled_request, versioned_url, method=method, File "/home/jptiz/.local/lib/python3.10/site-packages/zulipterminal/model.py", line 159, in _update_initial_data raise urwid.ExitMainLoop() session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 501, in do_api_query File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent vendor, vendor_version, dummy = 
platform.linux_distribution() vendor, vendor_version, dummy = platform.linux_distribution() AttributeError: module 'platform' has no attribute 'linux_distribution' urwid.main_loop.ExitMainLoop AttributeError: module 'platform' has no attribute 'linux_distribution' self.ensure_session() vendor, vendor_version, dummy = platform.linux_distribution() Zulip Terminal has crashed! File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 447, in ensure_session AttributeError: module 'platform' has no attribute 'linux_distribution' You can ask for help at: session.headers.update({"User-agent": self.get_user_agent()}) File "/home/jptiz/.local/lib/python3.10/site-packages/zulip/__init__.py", line 463, in get_user_agent https://chat.zulip.org/#narrow/stream/206-zulip-terminal ``` --- ## Some debugging Looking at the end of installation log, zulip-term-0.2.1 is installed instead of 0.6.0: ```console $ pip install zulip-term ... Installing collected packages: urwid, typing, zulip, urwid-readline, emoji, zulip-term Successfully installed emoji-0.5.0 typing-3.6.4 urwid-2.0.1 urwid-readline-0.7 zulip-0.4.7 zulip-term-0.2.1 ``` Getting the 0.6.0 .whl file and building it manually shows that 0.6.0 is incompatible with Python 3.10: ```console $ pip wheel zulip_term-0.6.0+git-py3-none-any.whl Processing ./zulip_term-0.6.0+git-py3-none-any.whl ... ERROR: Package 'zulip-term' requires a different Python: 3.10.1 not in '<3.10,>=3.6' ``` Which might be the cause, maybe it is falling back to a previous version which apparently didn't had any Python restrictions (at least checking [0.2.0 tagged commit's file tree](https://github.com/zulip/zulip-terminal/tree/0.2.0)).
infrastructure
installing zulip term in python fallbacks to zulip term as previously mentioned in newer versions of zulip installed through pip fail to start due to the following error traceback most recent call last return self do api query marshalled request versioned url method method file home jptiz local lib site packages zulip init py line in do api query file home jptiz local lib site packages zulipterminal cli run py line in main controller zuliprc path zterm main file home jptiz local lib site packages zulip init py line in call endpoint file home jptiz local lib site packages zulipterminal core py line in init self model model self file home jptiz local lib site packages zulipterminal model py line in init self update initial data file home jptiz local lib site packages zulip init py line in ensure session session headers update user agent self get user agent self ensure session file home jptiz local lib site packages zulip init py line in ensure session return self do api query marshalled request versioned url method method file home jptiz local lib site packages zulipterminal model py line in update initial data raise urwid exitmainloop session headers update user agent self get user agent file home jptiz local lib site packages zulip init py line in get user agent session headers update user agent self get user agent file home jptiz local lib site packages zulip init py line in get user agent file home jptiz local lib site packages zulip init py line in do api query file home jptiz local lib site packages zulip init py line in get user agent vendor vendor version dummy platform linux distribution vendor vendor version dummy platform linux distribution attributeerror module platform has no attribute linux distribution urwid main loop exitmainloop attributeerror module platform has no attribute linux distribution self ensure session vendor vendor version dummy platform linux distribution zulip terminal has crashed file home jptiz local lib site packages zulip init py 
line in ensure session attributeerror module platform has no attribute linux distribution you can ask for help at session headers update user agent self get user agent file home jptiz local lib site packages zulip init py line in get user agent some debugging looking at the end of installation log zulip term is installed instead of console pip install zulip term installing collected packages urwid typing zulip urwid readline emoji zulip term successfully installed emoji typing urwid urwid readline zulip zulip term getting the whl file and building it manually shows that is incompatible with python console pip wheel zulip term git none any whl processing zulip term git none any whl error package zulip term requires a different python not in which might be the cause maybe it is falling back to a previous version which apparently didn t had any python restrictions at least checking
1
263,504
8,290,413,135
IssuesEvent
2018-09-19 17:17:26
fac-14/Vent-Bot
https://api.github.com/repos/fac-14/Vent-Bot
closed
deployment on heroku
bug priority-2
- [x] fix deployment - [x] change the API response from a promise and add in required credentials to heroku
1.0
deployment on heroku - - [x] fix deployment - [x] change the API response from a promise and add in required credentials to heroku
non_infrastructure
deployment on heroku fix deployment change the api response from a promise and add in required credentials to heroku
0
31,610
25,934,137,857
IssuesEvent
2022-12-16 12:40:31
maciejwalkowiak/just
https://api.github.com/repos/maciejwalkowiak/just
closed
Starting infrastructure services fails if Docker image has not been already downloaded
bug command:just-run feat:infrastructure-services
First of all, this is an awesome utility really liked the idea behind it. I gave it a try and tried to run `just run` but I am seeing multiple exception related to `testcontainers` ``` ❯ just run ██╗██╗ ██╗███████╗████████╗ ██║██║ ██║██╔════╝╚══██╔══╝ ██║██║ ██║███████╗ ██║ ██ ██║██║ ██║╚════██║ ██║ ╚█████╔╝╚██████╔╝███████║ ██║ ╚════╝ ╚═════╝ ╚══════╝ ╚═╝ just | Just 0.11.1 just | ✅ License valid until 2023-01-31 just | ✅ Build tool: Gradle just | 📴 Docker Compose file: docker-compose.yml not found just | ✅ Live Reload support is enabled just | ✅ Zero Config Infrastructure Services support is enabled just | Executing command: ./gradlew bootRun --args=--just.port=55762 --init-script /var/folders/fq/v75k24091j3dtgj1ttmh88680000gr/T/just.init2537262509551420474gradle --console=plain .... .... just | Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.testcontainers.containers.ContainerLaunchException: Container startup failed] with root cause org.testcontainers.shaded.com.fasterxml.jackson.databind.JsonMappingException: Can not construct instance of com.github.dockerjava.api.model.PullResponseItem: no suitable constructor found, can not deserialize from Object value (missing default constructor or creator, or perhaps need to add/enable type information?) 
at [Source: N/A; line: -1, column: -1] at org.testcontainers.shaded.com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:1456) at org.testcontainers.shaded.com.fasterxml.jackson.databind.DeserializationContext.handleMissingInstantiator(DeserializationContext.java:1012) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1206) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:314) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:148) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3770) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2099) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2596) at org.testcontainers.shaded.com.github.dockerjava.core.DockerObjectDeserializer.deserialize(DockerClientConfig.java:132) at org.testcontainers.shaded.com.fasterxml.jackson.databind.MappingIterator.nextValue(MappingIterator.java:277) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:315) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275) at java.base@17.0.5/java.lang.Thread.run(Thread.java:833) at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:775) at 
org.graalvm.nativeimage.builder/com.oracle.svm.core.posix.thread.PosixPlatformThreads.pthreadStartRoutine(PosixPlatformThreads.java:203) spring | 2022-12-14 20:43:27.269 ERROR [asset-group-service,,] 1446 --- [ restartedMain] c.m.j.spring.boot.DevcontainersClient : Failed to retrieve configuration properties for devcontainer [postgres]. Response: {"timestamp":"2022-12-14T15:13:27.267+00:00","status":500,"error":"Internal Server Error","path":"/postgres"} ```
1.0
Starting infrastructure services fails if Docker image has not been already downloaded - First of all, this is an awesome utility really liked the idea behind it. I gave it a try and tried to run `just run` but I am seeing multiple exception related to `testcontainers` ``` ❯ just run ██╗██╗ ██╗███████╗████████╗ ██║██║ ██║██╔════╝╚══██╔══╝ ██║██║ ██║███████╗ ██║ ██ ██║██║ ██║╚════██║ ██║ ╚█████╔╝╚██████╔╝███████║ ██║ ╚════╝ ╚═════╝ ╚══════╝ ╚═╝ just | Just 0.11.1 just | ✅ License valid until 2023-01-31 just | ✅ Build tool: Gradle just | 📴 Docker Compose file: docker-compose.yml not found just | ✅ Live Reload support is enabled just | ✅ Zero Config Infrastructure Services support is enabled just | Executing command: ./gradlew bootRun --args=--just.port=55762 --init-script /var/folders/fq/v75k24091j3dtgj1ttmh88680000gr/T/just.init2537262509551420474gradle --console=plain .... .... just | Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.testcontainers.containers.ContainerLaunchException: Container startup failed] with root cause org.testcontainers.shaded.com.fasterxml.jackson.databind.JsonMappingException: Can not construct instance of com.github.dockerjava.api.model.PullResponseItem: no suitable constructor found, can not deserialize from Object value (missing default constructor or creator, or perhaps need to add/enable type information?) 
at [Source: N/A; line: -1, column: -1] at org.testcontainers.shaded.com.fasterxml.jackson.databind.DeserializationContext.instantiationException(DeserializationContext.java:1456) at org.testcontainers.shaded.com.fasterxml.jackson.databind.DeserializationContext.handleMissingInstantiator(DeserializationContext.java:1012) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1206) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:314) at org.testcontainers.shaded.com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:148) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3770) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2099) at org.testcontainers.shaded.com.fasterxml.jackson.databind.ObjectMapper.treeToValue(ObjectMapper.java:2596) at org.testcontainers.shaded.com.github.dockerjava.core.DockerObjectDeserializer.deserialize(DockerClientConfig.java:132) at org.testcontainers.shaded.com.fasterxml.jackson.databind.MappingIterator.nextValue(MappingIterator.java:277) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:315) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298) at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275) at java.base@17.0.5/java.lang.Thread.run(Thread.java:833) at org.graalvm.nativeimage.builder/com.oracle.svm.core.thread.PlatformThreads.threadStartRoutine(PlatformThreads.java:775) at 
org.graalvm.nativeimage.builder/com.oracle.svm.core.posix.thread.PosixPlatformThreads.pthreadStartRoutine(PosixPlatformThreads.java:203) spring | 2022-12-14 20:43:27.269 ERROR [asset-group-service,,] 1446 --- [ restartedMain] c.m.j.spring.boot.DevcontainersClient : Failed to retrieve configuration properties for devcontainer [postgres]. Response: {"timestamp":"2022-12-14T15:13:27.267+00:00","status":500,"error":"Internal Server Error","path":"/postgres"} ```
infrastructure
starting infrastructure services fails if docker image has not been already downloaded first of all this is an awesome utility really liked the idea behind it i gave it a try and tried to run just run but i am seeing multiple exception related to testcontainers ❯ just run ██╗██╗ ██╗███████╗████████╗ ██║██║ ██║██╔════╝╚══██╔══╝ ██║██║ ██║███████╗ ██║ ██ ██║██║ ██║╚════██║ ██║ ╚█████╔╝╚██████╔╝███████║ ██║ ╚════╝ ╚═════╝ ╚══════╝ ╚═╝ just just just ✅ license valid until just ✅ build tool gradle just 📴 docker compose file docker compose yml not found just ✅ live reload support is enabled just ✅ zero config infrastructure services support is enabled just executing command gradlew bootrun args just port init script var folders fq t just console plain just servlet service for servlet in context with path threw exception with root cause org testcontainers shaded com fasterxml jackson databind jsonmappingexception can not construct instance of com github dockerjava api model pullresponseitem no suitable constructor found can not deserialize from object value missing default constructor or creator or perhaps need to add enable type information at at org testcontainers shaded com fasterxml jackson databind deserializationcontext instantiationexception deserializationcontext java at org testcontainers shaded com fasterxml jackson databind deserializationcontext handlemissinginstantiator deserializationcontext java at org testcontainers shaded com fasterxml jackson databind deser beandeserializerbase deserializefromobjectusingnondefault beandeserializerbase java at org testcontainers shaded com fasterxml jackson databind deser beandeserializer deserializefromobject beandeserializer java at org testcontainers shaded com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at org testcontainers shaded com fasterxml jackson databind objectmapper readvalue objectmapper java at org testcontainers shaded com fasterxml jackson databind objectmapper 
readvalue objectmapper java at org testcontainers shaded com fasterxml jackson databind objectmapper treetovalue objectmapper java at org testcontainers shaded com github dockerjava core dockerobjectdeserializer deserialize dockerclientconfig java at org testcontainers shaded com fasterxml jackson databind mappingiterator nextvalue mappingiterator java at org testcontainers shaded com github dockerjava core defaultinvocationbuilder jsonsink accept defaultinvocationbuilder java at org testcontainers shaded com github dockerjava core defaultinvocationbuilder jsonsink accept defaultinvocationbuilder java at org testcontainers shaded com github dockerjava core defaultinvocationbuilder lambda executeandstream defaultinvocationbuilder java at java base java lang thread run thread java at org graalvm nativeimage builder com oracle svm core thread platformthreads threadstartroutine platformthreads java at org graalvm nativeimage builder com oracle svm core posix thread posixplatformthreads pthreadstartroutine posixplatformthreads java spring error c m j spring boot devcontainersclient failed to retrieve configuration properties for devcontainer response timestamp status error internal server error path postgres
1
33,887
27,972,613,549
IssuesEvent
2023-03-25 07:17:39
Linzell/kiro
https://api.github.com/repos/Linzell/kiro
opened
Create Fireproof store and use in app
🚠 infrastructure
**Is your feature request related to a problem? Please describe.** Need to implement Fireproof in app, to get and set data. **Describe the solution you'd like** Actually use Redux for the store, need to replace this to Fireproof ... or use call in Fireproof ? (Redux not implement in Fireproof, actually). **Describe alternatives you've considered** Use File storage ?
1.0
Create Fireproof store and use in app - **Is your feature request related to a problem? Please describe.** Need to implement Fireproof in app, to get and set data. **Describe the solution you'd like** Actually use Redux for the store, need to replace this to Fireproof ... or use call in Fireproof ? (Redux not implement in Fireproof, actually). **Describe alternatives you've considered** Use File storage ?
infrastructure
create fireproof store and use in app is your feature request related to a problem please describe need to implement fireproof in app to get and set data describe the solution you d like actually use redux for the store need to replace this to fireproof or use call in fireproof redux not implement in fireproof actually describe alternatives you ve considered use file storage
1
24,777
4,108,762,060
IssuesEvent
2016-06-06 17:11:37
albaizq/NBAMovements2
https://api.github.com/repos/albaizq/NBAMovements2
closed
Acceptance test notification
Acceptance test bug
The ontology created has not passed the acceptance test: 1. Error with the requirement with ID 2. Priority of the requirement: 1. - The ontology did not return the results that the user expected. Expected: [http://www.w3.org/2001/XMLSchema#anyURI, xsd:anyURI, xsd:anyURI, xsd:anyURI] in the list of results. 2. Error with the requirement with ID 4. - The ontology did not return the results that the user expected. Expected: [itinerario, http://schema.org/TouristAttraction] in the list of results. 3. Error with the requirement with ID 1. - The ontology did not return the results that the user expected. Expected: [true] in the list of results. 4. Error with the requirement with ID 3. Priority of the requirement: 1. - The ontology did not return the results that the user expected. Expected: [poblacion] in the list of results.
1.0
Acceptance test notification - The ontology created has not passed the acceptance test: 1. Error with the requirement with ID 2. Priority of the requirement: 1. - The ontology did not return the results that the user expected. Expected: [http://www.w3.org/2001/XMLSchema#anyURI, xsd:anyURI, xsd:anyURI, xsd:anyURI] in the list of results. 2. Error with the requirement with ID 4. - The ontology did not return the results that the user expected. Expected: [itinerario, http://schema.org/TouristAttraction] in the list of results. 3. Error with the requirement with ID 1. - The ontology did not return the results that the user expected. Expected: [true] in the list of results. 4. Error with the requirement with ID 3. Priority of the requirement: 1. - The ontology did not return the results that the user expected. Expected: [poblacion] in the list of results.
non_infrastructure
acceptance test notification the ontology created has not passed the acceptance test error with the requirement with id priority of the requirement the ontology did not return the results that the user expected expected in the list of results error with the requirement with id the ontology did not return the results that the user expected expected in the list of results error with the requirement with id the ontology did not return the results that the user expected expected in the list of results error with the requirement with id priority of the requirement the ontology did not return the results that the user expected expected in the list of results
0
2,969
30,686,253,706
IssuesEvent
2023-07-26 12:33:46
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
closed
Excessive disk usage from benchmark CW30-mixed
kind/bug severity/high area/reliability
<!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answer to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> **Description** The disk usage of the mixed benchmark of CW30 is steadily increasing. It's a matter of time before it runs out of disk space: <img width="1009" alt="image" src="https://github.com/camunda/zeebe/assets/5787702/1d7fef20-3083-4fe7-9bf5-e92dc6c07c3e"> If we compare this to CW29 we can see it remains stable there: <img width="993" alt="image" src="https://github.com/camunda/zeebe/assets/5787702/a428c87c-dc48-4d89-8e0e-66b6d859f2b1"> We more than likely merged something in the last week that is causing this issue.
True
Excessive disk usage from benchmark CW30-mixed - <!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answer to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> **Description** The disk usage of the mixed benchmark of CW30 is steadily increasing. It's a matter of time before it runs out of disk space: <img width="1009" alt="image" src="https://github.com/camunda/zeebe/assets/5787702/1d7fef20-3083-4fe7-9bf5-e92dc6c07c3e"> If we compare this to CW29 we can see it remains stable there: <img width="993" alt="image" src="https://github.com/camunda/zeebe/assets/5787702/a428c87c-dc48-4d89-8e0e-66b6d859f2b1"> We more than likely merged something in the last week that is causing this issue.
non_infrastructure
excessive disk usage from benchmark mixed in case you have questions about our software we encourage everyone to participate in our community via the camunda platform community forum or slack for invite there you can exchange ideas with other zeebe and camunda platform users as well as the product developers and use the search to find answer to similar questions this issue template is used by the zeebe engineers to create general tasks description the disk usage of the mixed benchmark of is steadily increasing it s a matter of time before it runs out of disk space img width alt image src if we compare this to we can see it remains stable there img width alt image src we more than likely merged something in the last week that is causing this issue
0
88,202
8,134,936,794
IssuesEvent
2018-08-19 21:43:34
TryGhost/Ghost
https://api.github.com/repos/TryGhost/Ghost
closed
Tidy up importer tests and write some more tests for 1.0 imports
help wanted importer refactoring/cleanup server tests
The importer tests are still a bit messy. We would like to have a clean separation between 1.0 and LTS tests. Furthermore, we would like to add more tests and export examples for 1.0.
1.0
Tidy up importer tests and write some more tests for 1.0 imports - The importer tests are still a bit messy. We would like to have a clean separation between 1.0 and LTS tests. Furthermore, we would like to add more tests and export examples for 1.0.
non_infrastructure
tidy up importer tests and write some more tests for imports the importer tests are still a bit messy we would like to have a clean separation between and lts tests furthermore we would like to add more tests and export examples for
0
27,553
21,916,627,790
IssuesEvent
2022-05-21 23:24:38
Unidata/MetPy
https://api.github.com/repos/Unidata/MetPy
opened
Consider dropping codecov
Area: Infrastructure Type: Enhancement
Codecov is occasionally a source of weird CI failures due to dropped reports. We could accomplish the same thing by combining all our testing into a single workflow with a single end job that combines the reports and checks the coverage. (Unsure if/how we could check 100% running tests.) We could also have it upload the HTML report for coverage as an artifact. Inspired by this [blog post](https://hynek.me/articles/ditch-codecov-python/) and [workflow](https://github.com/cjolowicz/cookiecutter-hypermodern-python/blob/main/%7B%7Bcookiecutter.project_name%7D%7D/.github/workflows/tests.yml).
1.0
Consider dropping codecov - Codecov is occasionally a source of weird CI failures due to dropped reports. We could accomplish the same thing by combining all our testing into a single workflow with a single end job that combines the reports and checks the coverage. (Unsure if/how we could check 100% running tests.) We could also have it upload the HTML report for coverage as an artifact. Inspired by this [blog post](https://hynek.me/articles/ditch-codecov-python/) and [workflow](https://github.com/cjolowicz/cookiecutter-hypermodern-python/blob/main/%7B%7Bcookiecutter.project_name%7D%7D/.github/workflows/tests.yml).
infrastructure
consider dropping codecov codecov is occasionally a source of weird ci failures due to dropped reports we could accomplish the same thing by combining all our testing into a single workflow with a single end job that combines the reports and checks the coverage unsure if how we could check running tests we could also have it upload the html report for coverage as an artifact inspired by this and
1
237,058
26,078,769,990
IssuesEvent
2022-12-25 01:09:15
mkevenaar/OctoPrint-Slack
https://api.github.com/repos/mkevenaar/OctoPrint-Slack
opened
CVE-2022-40899 (Medium) detected in future-0.18.2.tar.gz
security vulnerability
## CVE-2022-40899 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>future-0.18.2.tar.gz</b></p></summary> <p>Clean single-source support for Python 3 and 2</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz">https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/tmp/ws-scm/OctoPrint-Slack,/requirements.txt</p> <p> Dependency Hierarchy: - :x: **future-0.18.2.tar.gz** (Vulnerable Library) <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue discovered in Python Charmers Future 0.18.2 and earlier allows remote attackers to cause a denial of service via crafted Set-Cookie header from malicious web server. <p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40899>CVE-2022-40899</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-40899 (Medium) detected in future-0.18.2.tar.gz - ## CVE-2022-40899 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>future-0.18.2.tar.gz</b></p></summary> <p>Clean single-source support for Python 3 and 2</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz">https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/future-0.18.2.tar.gz</a></p> <p>Path to dependency file: /requirements.txt</p> <p>Path to vulnerable library: /requirements.txt,/tmp/ws-scm/OctoPrint-Slack,/requirements.txt</p> <p> Dependency Hierarchy: - :x: **future-0.18.2.tar.gz** (Vulnerable Library) <p>Found in base branch: <b>develop</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue discovered in Python Charmers Future 0.18.2 and earlier allows remote attackers to cause a denial of service via crafted Set-Cookie header from malicious web server. <p>Publish Date: 2022-12-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-40899>CVE-2022-40899</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_infrastructure
cve medium detected in future tar gz cve medium severity vulnerability vulnerable library future tar gz clean single source support for python and library home page a href path to dependency file requirements txt path to vulnerable library requirements txt tmp ws scm octoprint slack requirements txt dependency hierarchy x future tar gz vulnerable library found in base branch develop vulnerability details an issue discovered in python charmers future and earlier allows remote attackers to cause a denial of service via crafted set cookie header from malicious web server publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
137,203
5,299,694,535
IssuesEvent
2017-02-10 01:09:39
atilatosta/dotnet-standard-sdk
https://api.github.com/repos/atilatosta/dotnet-standard-sdk
closed
[documentation] Conversation example
high-priority
Create an example for Conversation service and add code sample to readme.
1.0
[documentation] Conversation example - Create an example for Conversation service and add code sample to readme.
non_infrastructure
conversation example create an example for conversation service and add code sample to readme
0
504,597
14,620,241,743
IssuesEvent
2020-12-22 19:21:10
googleapis/elixir-google-api
https://api.github.com/repos/googleapis/elixir-google-api
closed
Synthesis failed for CloudRun
autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate CloudRun. :broken_heart: Here's the output from running `synth.py`: ``` run/lib/google_api/cloud_run/v1alpha1/model/list_configurations_response.ex. Writing ListDomainMappingsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_domain_mappings_response.ex. Writing ListLocationsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_locations_response.ex. Writing ListMeta to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_meta.ex. Writing ListRevisionsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_revisions_response.ex. Writing ListRoutesResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_routes_response.ex. Writing ListServicesResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_services_response.ex. Writing ListTriggersResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_triggers_response.ex. Writing LocalObjectReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/local_object_reference.ex. Writing Location to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/location.ex. Writing ObjectMeta to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/object_meta.ex. Writing ObjectReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/object_reference.ex. Writing OwnerReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/owner_reference.ex. Writing Policy to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/policy.ex. Writing Probe to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/probe.ex. Writing Quantity to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/quantity.ex. Writing ResourceRecord to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/resource_record.ex. Writing ResourceRequirements to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/resource_requirements.ex. 
Writing Revision to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision.ex. Writing RevisionCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_condition.ex. Writing RevisionSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_spec.ex. Writing RevisionStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_status.ex. Writing RevisionTemplate to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_template.ex. Writing Route to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route.ex. Writing RouteCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_condition.ex. Writing RouteSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_spec.ex. Writing RouteStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_status.ex. Writing SELinuxOptions to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/se_linux_options.ex. Writing SecretEnvSource to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_env_source.ex. Writing SecretKeySelector to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_key_selector.ex. Writing SecretVolumeSource to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_volume_source.ex. Writing SecurityContext to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/security_context.ex. Writing Service to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service.ex. Writing ServiceCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_condition.ex. Writing ServiceSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec.ex. Writing ServiceSpecManualType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_manual_type.ex. Writing ServiceSpecPinnedType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_pinned_type.ex. 
Writing ServiceSpecReleaseType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_release_type.ex. Writing ServiceSpecRunLatest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_run_latest.ex. Writing ServiceStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_status.ex. Writing SetIamPolicyRequest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/set_iam_policy_request.ex. Writing TCPSocketAction to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/tcp_socket_action.ex. Writing TestIamPermissionsRequest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/test_iam_permissions_request.ex. Writing TestIamPermissionsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/test_iam_permissions_response.ex. Writing TrafficTarget to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/traffic_target.ex. Writing Trigger to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger.ex. Writing TriggerCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_condition.ex. Writing TriggerFilter to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_filter.ex. Writing TriggerSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_spec.ex. Writing TriggerStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_status.ex. Writing Volume to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume.ex. Writing VolumeDevice to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume_device.ex. Writing VolumeMount to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume_mount.ex. Writing Namespaces to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/api/namespaces.ex. Writing Projects to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/api/projects.ex. Writing connection.ex. Writing metadata.ex. 
Writing mix.exs Writing README.md Writing LICENSE Writing .gitignore Writing config/config.exs Writing test/test_helper.exs 13:36:25.276 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR. fixing file permissions Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 252, in __exit__ self.observer.stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 361, in on_thread_stop self.unschedule_all() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 357, in unschedule_all self._clear_emitters() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 231, in _clear_emitters emitter.stop() File 
"/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_stop self._inotify.close() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 50, in close self.stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 46, in on_thread_stop self._inotify.close() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 277, in close os.close(self._inotify_fd) OSError: [Errno 9] Bad file descriptor 2020-12-18 05:36:28,397 autosynth [ERROR] > Synthesis failed 2020-12-18 05:36:28,397 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 291, in _inner_main ).synthesize(synth_log_path / "sponge_log.log") File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. 
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/cloud_run/synth.metadata', 'synth.py', '--', 'CloudRun']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/3a455424-7540-46a9-bb77-8265c8f04c06/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
1.0
Synthesis failed for CloudRun - Hello! Autosynth couldn't regenerate CloudRun. :broken_heart: Here's the output from running `synth.py`: ``` run/lib/google_api/cloud_run/v1alpha1/model/list_configurations_response.ex. Writing ListDomainMappingsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_domain_mappings_response.ex. Writing ListLocationsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_locations_response.ex. Writing ListMeta to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_meta.ex. Writing ListRevisionsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_revisions_response.ex. Writing ListRoutesResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_routes_response.ex. Writing ListServicesResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_services_response.ex. Writing ListTriggersResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/list_triggers_response.ex. Writing LocalObjectReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/local_object_reference.ex. Writing Location to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/location.ex. Writing ObjectMeta to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/object_meta.ex. Writing ObjectReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/object_reference.ex. Writing OwnerReference to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/owner_reference.ex. Writing Policy to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/policy.ex. Writing Probe to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/probe.ex. Writing Quantity to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/quantity.ex. Writing ResourceRecord to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/resource_record.ex. 
Writing ResourceRequirements to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/resource_requirements.ex. Writing Revision to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision.ex. Writing RevisionCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_condition.ex. Writing RevisionSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_spec.ex. Writing RevisionStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_status.ex. Writing RevisionTemplate to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/revision_template.ex. Writing Route to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route.ex. Writing RouteCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_condition.ex. Writing RouteSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_spec.ex. Writing RouteStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/route_status.ex. Writing SELinuxOptions to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/se_linux_options.ex. Writing SecretEnvSource to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_env_source.ex. Writing SecretKeySelector to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_key_selector.ex. Writing SecretVolumeSource to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/secret_volume_source.ex. Writing SecurityContext to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/security_context.ex. Writing Service to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service.ex. Writing ServiceCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_condition.ex. Writing ServiceSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec.ex. Writing ServiceSpecManualType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_manual_type.ex. 
Writing ServiceSpecPinnedType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_pinned_type.ex. Writing ServiceSpecReleaseType to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_release_type.ex. Writing ServiceSpecRunLatest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_spec_run_latest.ex. Writing ServiceStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/service_status.ex. Writing SetIamPolicyRequest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/set_iam_policy_request.ex. Writing TCPSocketAction to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/tcp_socket_action.ex. Writing TestIamPermissionsRequest to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/test_iam_permissions_request.ex. Writing TestIamPermissionsResponse to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/test_iam_permissions_response.ex. Writing TrafficTarget to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/traffic_target.ex. Writing Trigger to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger.ex. Writing TriggerCondition to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_condition.ex. Writing TriggerFilter to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_filter.ex. Writing TriggerSpec to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_spec.ex. Writing TriggerStatus to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/trigger_status.ex. Writing Volume to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume.ex. Writing VolumeDevice to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume_device.ex. Writing VolumeMount to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/model/volume_mount.ex. Writing Namespaces to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/api/namespaces.ex. 
Writing Projects to clients/cloud_run/lib/google_api/cloud_run/v1alpha1/api/projects.ex. Writing connection.ex. Writing metadata.ex. Writing mix.exs Writing README.md Writing LICENSE Writing .gitignore Writing config/config.exs Writing test/test_helper.exs 13:36:25.276 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR. fixing file permissions Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module> main() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main rv = self.invoke(ctx) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return ctx.invoke(self.callback, **ctx.params) File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke return callback(*args, **kwargs) File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main spec.loader.exec_module(synth_module) # type: ignore File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 252, in __exit__ self.observer.stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 361, in on_thread_stop self.unschedule_all() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 357, in unschedule_all self._clear_emitters() File 
"/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 231, in _clear_emitters emitter.stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_stop self._inotify.close() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 50, in close self.stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop self.on_thread_stop() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 46, in on_thread_stop self._inotify.close() File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 277, in close os.close(self._inotify_fd) OSError: [Errno 9] Bad file descriptor 2020-12-18 05:36:28,397 autosynth [ERROR] > Synthesis failed 2020-12-18 05:36:28,397 autosynth [DEBUG] > Running: git clean -fdx Removing __pycache__/ Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module> main() File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main return _inner_main(temp_dir) File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 291, in _inner_main ).synthesize(synth_log_path / "sponge_log.log") File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize synth_proc.check_returncode() # Raise an exception. 
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode self.stderr) subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/cloud_run/synth.metadata', 'synth.py', '--', 'CloudRun']' returned non-zero exit status 1. ``` Google internal developers can see the full log [here](http://sponge2/results/invocations/3a455424-7540-46a9-bb77-8265c8f04c06/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
non_infrastructure
synthesis failed for cloudrun hello autosynth couldn t regenerate cloudrun broken heart here s the output from running synth py run lib google api cloud run model list configurations response ex writing listdomainmappingsresponse to clients cloud run lib google api cloud run model list domain mappings response ex writing listlocationsresponse to clients cloud run lib google api cloud run model list locations response ex writing listmeta to clients cloud run lib google api cloud run model list meta ex writing listrevisionsresponse to clients cloud run lib google api cloud run model list revisions response ex writing listroutesresponse to clients cloud run lib google api cloud run model list routes response ex writing listservicesresponse to clients cloud run lib google api cloud run model list services response ex writing listtriggersresponse to clients cloud run lib google api cloud run model list triggers response ex writing localobjectreference to clients cloud run lib google api cloud run model local object reference ex writing location to clients cloud run lib google api cloud run model location ex writing objectmeta to clients cloud run lib google api cloud run model object meta ex writing objectreference to clients cloud run lib google api cloud run model object reference ex writing ownerreference to clients cloud run lib google api cloud run model owner reference ex writing policy to clients cloud run lib google api cloud run model policy ex writing probe to clients cloud run lib google api cloud run model probe ex writing quantity to clients cloud run lib google api cloud run model quantity ex writing resourcerecord to clients cloud run lib google api cloud run model resource record ex writing resourcerequirements to clients cloud run lib google api cloud run model resource requirements ex writing revision to clients cloud run lib google api cloud run model revision ex writing revisioncondition to clients cloud run lib google api cloud run model revision 
condition ex writing revisionspec to clients cloud run lib google api cloud run model revision spec ex writing revisionstatus to clients cloud run lib google api cloud run model revision status ex writing revisiontemplate to clients cloud run lib google api cloud run model revision template ex writing route to clients cloud run lib google api cloud run model route ex writing routecondition to clients cloud run lib google api cloud run model route condition ex writing routespec to clients cloud run lib google api cloud run model route spec ex writing routestatus to clients cloud run lib google api cloud run model route status ex writing selinuxoptions to clients cloud run lib google api cloud run model se linux options ex writing secretenvsource to clients cloud run lib google api cloud run model secret env source ex writing secretkeyselector to clients cloud run lib google api cloud run model secret key selector ex writing secretvolumesource to clients cloud run lib google api cloud run model secret volume source ex writing securitycontext to clients cloud run lib google api cloud run model security context ex writing service to clients cloud run lib google api cloud run model service ex writing servicecondition to clients cloud run lib google api cloud run model service condition ex writing servicespec to clients cloud run lib google api cloud run model service spec ex writing servicespecmanualtype to clients cloud run lib google api cloud run model service spec manual type ex writing servicespecpinnedtype to clients cloud run lib google api cloud run model service spec pinned type ex writing servicespecreleasetype to clients cloud run lib google api cloud run model service spec release type ex writing servicespecrunlatest to clients cloud run lib google api cloud run model service spec run latest ex writing servicestatus to clients cloud run lib google api cloud run model service status ex writing setiampolicyrequest to clients cloud run lib google api cloud run 
model set iam policy request ex writing tcpsocketaction to clients cloud run lib google api cloud run model tcp socket action ex writing testiampermissionsrequest to clients cloud run lib google api cloud run model test iam permissions request ex writing testiampermissionsresponse to clients cloud run lib google api cloud run model test iam permissions response ex writing traffictarget to clients cloud run lib google api cloud run model traffic target ex writing trigger to clients cloud run lib google api cloud run model trigger ex writing triggercondition to clients cloud run lib google api cloud run model trigger condition ex writing triggerfilter to clients cloud run lib google api cloud run model trigger filter ex writing triggerspec to clients cloud run lib google api cloud run model trigger spec ex writing triggerstatus to clients cloud run lib google api cloud run model trigger status ex writing volume to clients cloud run lib google api cloud run model volume ex writing volumedevice to clients cloud run lib google api cloud run model volume device ex writing volumemount to clients cloud run lib google api cloud run model volume mount ex writing namespaces to clients cloud run lib google api cloud run api namespaces ex writing projects to clients cloud run lib google api cloud run api projects ex writing connection ex writing metadata ex writing mix exs writing readme md writing license writing gitignore writing config config exs writing test test helper exs found only discovery revision and or formatting changes not significant enough for a pr fixing file permissions traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src 
github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit self observer stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers api py line in on thread stop self unschedule all file tmpfs src github synthtool env lib site packages watchdog observers api py line in unschedule all self clear emitters file tmpfs src github synthtool env lib site packages watchdog observers api py line in clear emitters emitter stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers inotify py line in on thread stop self inotify close file tmpfs src github synthtool env lib site packages watchdog observers inotify buffer py line in close self stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers inotify buffer py line in on thread stop self inotify close file tmpfs src github synthtool env lib site packages watchdog observers inotify c py line in close os close self inotify fd oserror bad file descriptor autosynth synthesis failed autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code 
exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize synth log path sponge log log file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
0
6,744
2,610,275,206
IssuesEvent
2015-02-26 19:28:05
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
closed
TUMBLR
auto-migrated Type-Defect
``` What new feature do you want? I CANNOT SEE THE LIST OF ALREADY POSTED ENTRIES AND CATEGORIES IN MY TUMBLR BLOGS; BUT I CAN IN MY BLOGSPOT ACCOUNT. IS THIS AN ISSUE OR NORMAL BEHAVIOR ? I CANT FIND AND OPTION FOR 11 PT FONT SIZE. I USE IT AS DEFAULT IN ONE OF MY BLOGS. THANK YOU IN ADVANCE. CONGRATULATIONS; GREAT AND USEFUL EXTENSION !!! ``` ----- Original issue reported on code.google.com by `ojoconlo...@gmail.com` on 18 Mar 2011 at 3:53
1.0
TUMBLR - ``` What new feature do you want? I CANNOT SEE THE LIST OF ALREADY POSTED ENTRIES AND CATEGORIES IN MY TUMBLR BLOGS; BUT I CAN IN MY BLOGSPOT ACCOUNT. IS THIS AN ISSUE OR NORMAL BEHAVIOR ? I CANT FIND AND OPTION FOR 11 PT FONT SIZE. I USE IT AS DEFAULT IN ONE OF MY BLOGS. THANK YOU IN ADVANCE. CONGRATULATIONS; GREAT AND USEFUL EXTENSION !!! ``` ----- Original issue reported on code.google.com by `ojoconlo...@gmail.com` on 18 Mar 2011 at 3:53
non_infrastructure
tumblr what new feature do you want i cannot see the list of already posted entries and categories in my tumblr blogs but i can in my blogspot account is this an issue or normal behavior i cant find and option for pt font size i use it as default in one of my blogs thank you in advance congratulations great and useful extension original issue reported on code google com by ojoconlo gmail com on mar at
0
136,453
19,807,362,080
IssuesEvent
2022-01-19 08:32:37
TeamHavit/Havit-iOS
https://api.github.com/repos/TeamHavit/Havit-iOS
opened
[ADD] Create the to-watch content view
🗂 Yoona 🟣 Main 🖍 Design
## 💡 Issue <!-- Describe the issue. --> Create the to-watch content view and connect the coordinator ## 📝 todo - [ ] Create the to-watch content view - [ ] Create the internal navigationController - [ ] Create the TableView - [ ] Create the EmptyView - [ ] Add cells
1.0
[ADD] Create the to-watch content view - ## 💡 Issue <!-- Describe the issue. --> Create the to-watch content view and connect the coordinator ## 📝 todo - [ ] Create the to-watch content view - [ ] Create the internal navigationController - [ ] Create the TableView - [ ] Create the EmptyView - [ ] Add cells
non_infrastructure
create the to watch content view 💡 issue create the to watch content view and connect the coordinator 📝 todo create the to watch content view create the internal navigationcontroller create the tableview create the emptyview add cells
0
166,210
12,907,563,638
IssuesEvent
2020-07-15 05:20:56
haskell/containers
https://api.github.com/repos/haskell/containers
closed
Cannot configure containers with tests and benchmarks enabled
docs testing
I'm not sure if this is a containers problem or cabal problem, but `cabal configure --enable-tests --enable-benchmarks` is unable to solve the constraints: ``` $ cabal configure --enable-tests --enable-benchmarks Resolving dependencies... Warning: solver failed to find a solution: Could not resolve dependencies: rejecting: containers-0.5.10.2:!bench (constraint from config file, command line flag, or user target requires opposite flag selection) trying: containers-0.5.10.2:*bench unknown package: random (dependency of containers-0.5.10.2:*bench) Dependency tree exhaustively searched. Trying configure anyway. Configuring containers-0.5.10.2... cabal: Encountered missing dependencies: ChasingBottoms -any, HUnit -any, QuickCheck >=2.7.1, criterion >=0.4.0 && <1.3, random <1.2, test-framework >=0.3.3, test-framework-hunit -any, test-framework-quickcheck2 >=0.2.9 ``` This causes our instructions in `CONTRIBUTING.md` to be more complex than they should be.
1.0
Cannot configure containers with tests and benchmarks enabled - I'm not sure if this is a containers problem or cabal problem, but `cabal configure --enable-tests --enable-benchmarks` is unable to solve the constraints: ``` $ cabal configure --enable-tests --enable-benchmarks Resolving dependencies... Warning: solver failed to find a solution: Could not resolve dependencies: rejecting: containers-0.5.10.2:!bench (constraint from config file, command line flag, or user target requires opposite flag selection) trying: containers-0.5.10.2:*bench unknown package: random (dependency of containers-0.5.10.2:*bench) Dependency tree exhaustively searched. Trying configure anyway. Configuring containers-0.5.10.2... cabal: Encountered missing dependencies: ChasingBottoms -any, HUnit -any, QuickCheck >=2.7.1, criterion >=0.4.0 && <1.3, random <1.2, test-framework >=0.3.3, test-framework-hunit -any, test-framework-quickcheck2 >=0.2.9 ``` This causes our instructions in `CONTRIBUTING.md` to be more complex than they should be.
non_infrastructure
cannot configure containers with tests and benchmarks enabled i m not sure if this is a containers problem or cabal problem but cabal configure enable tests enable benchmarks is unable to solve the constraints cabal configure enable tests enable benchmarks resolving dependencies warning solver failed to find a solution could not resolve dependencies rejecting containers bench constraint from config file command line flag or user target requires opposite flag selection trying containers bench unknown package random dependency of containers bench dependency tree exhaustively searched trying configure anyway configuring containers cabal encountered missing dependencies chasingbottoms any hunit any quickcheck criterion random test framework test framework hunit any test framework this causes our instructions in contributing md to be more complex than they should be
0
506,881
14,675,131,502
IssuesEvent
2020-12-30 16:51:45
luna-rs/luna
https://api.github.com/repos/luna-rs/luna
closed
Reduce coupling to LunaContext
discussion medium priority
The coupling of the `LunaContext` onto unrelated objects such as `PluginManager`, `Entity`, and `LunaChannelInitializer` makes writing unit tests tedious and messy.
1.0
Reduce coupling to LunaContext - The coupling of the `LunaContext` onto unrelated objects such as `PluginManager`, `Entity`, and `LunaChannelInitializer` makes writing unit tests tedious and messy.
non_infrastructure
reduce coupling to lunacontext the coupling of the lunacontext onto unrelated objects such as pluginmanager entity and lunachannelinitializer makes writing unit tests tedious and messy
0
28,342
23,166,246,733
IssuesEvent
2022-07-30 02:16:28
cloud-native-toolkit/software-everywhere
https://api.github.com/repos/cloud-native-toolkit/software-everywhere
closed
Request for New Cluster for CP4BA with 8 worker node and 16X32
category:infrastructure cp4ba
@triceam Request to you please create a new cluster for CP4BA. As per IKC (IBM Knowledge Center), we need a **minimum of 8 worker nodes with a 16x32 system required for CP4BA**. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/21.0.3?topic=ppd-system-requirements <img width="1089" alt="Screenshot 2022-07-26 at 6 35 17 PM" src="https://user-images.githubusercontent.com/103416270/181012950-4d12a61b-fc38-4f18-b48f-bce23b40b04a.png"> Thanks & Regards Brahm Singh
1.0
Request for New Cluster for CP4BA with 8 worker node and 16X32 - @triceam Request to you please create a new cluster for CP4BA. As per IKC (IBM Knowledge Center), we need a **minimum of 8 worker nodes with a 16x32 system required for CP4BA**. https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/21.0.3?topic=ppd-system-requirements <img width="1089" alt="Screenshot 2022-07-26 at 6 35 17 PM" src="https://user-images.githubusercontent.com/103416270/181012950-4d12a61b-fc38-4f18-b48f-bce23b40b04a.png"> Thanks & Regards Brahm Singh
infrastructure
request for new cluster for with worker node and triceam request to you please create a new cluster for as per ikc ibm knowledge center we need a minimum of worker nodes with a system required for img width alt screenshot at pm src thanks regards brahm singh
1
768,596
26,971,891,833
IssuesEvent
2023-02-09 06:01:00
bryntum/support
https://api.github.com/repos/bryntum/support
closed
clearChanges method stopped working on 5.2.1
bug info requested regression high-priority forum
[Forum post](https://forum.bryntum.com/viewtopic.php?f=52&t=22799&p=112811#p112811) Check the example described by the user [here](https://forum.bryntum.com/viewtopic.php?p=112798#p112798) > Add an item at the bottom using the text box. Check the BaseGridApp.Controls.Gantt.project.changes object see that there is an item with a phantom id press save in top left now check BaseGridApp.Controls.Gantt.project.changes the added task is still there but the Id has changed. you can also try BaseGridApp.Controls.Gantt.project.clearChanges() which doesnt clear the tasks.
1.0
clearChanges method stopped working on 5.2.1 - [Forum post](https://forum.bryntum.com/viewtopic.php?f=52&t=22799&p=112811#p112811) Check the example described by the user [here](https://forum.bryntum.com/viewtopic.php?p=112798#p112798) > Add an item at the bottom using the text box. Check the BaseGridApp.Controls.Gantt.project.changes object see that there is an item with a phantom id press save in top left now check BaseGridApp.Controls.Gantt.project.changes the added task is still there but the Id has changed. you can also try BaseGridApp.Controls.Gantt.project.clearChanges() which doesnt clear the tasks.
non_infrastructure
clearchanges method stopped working on check the example described by the user add an item at the bottom using the text box check the basegridapp controls gantt project changes object see that there is an item with a phantom id press save in top left now check basegridapp controls gantt project changes the added task is still there but the id has changed you can also try basegridapp controls gantt project clearchanges which doesnt clear the tasks
0
26,391
20,052,490,102
IssuesEvent
2022-02-03 08:31:00
deckhouse/deckhouse
https://api.github.com/repos/deckhouse/deckhouse
closed
"remove_csi_taints" did not execute even though it ought to
type/bug area/cluster-and-infrastructure status/rotten
A user has created a new NG; new Nodes joined with a CSI taint, CSINode objects corresponding to these Nodes got created, deckhouse did not remove the taint from Nodes, there were zero tasks in the queue. After deleting the deckhouse Pod, the problem went away. [Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters.
1.0
"remove_csi_taints" did not execute even though it ought to - A user has created a new NG; new Nodes joined with a CSI taint, CSINode objects corresponding to these Nodes got created, deckhouse did not remove the taint from Nodes, there were zero tasks in the queue. After deleting the deckhouse Pod, the problem went away. [Hook](https://github.com/deckhouse/deckhouse/blob/main/modules/040-node-manager/hooks/remove_csi_taints.go) did not execute. It could be a problem with filtering or with ExecuteHookOnEvents/Execution parameters.
infrastructure
remove csi taints did not execute even though it ought to a user has created a new ng new nodes joined with a csi taint csinode objects corresponding to these nodes got created deckhouse did not remove the taint from nodes there were zero tasks in the queue after deleting the deckhouse pod the problem went away did not execute it could be a problem with filtering or with executehookonevents execution parameters
1
18,183
10,217,699,445
IssuesEvent
2019-08-15 14:16:46
whitesource-yossi/npm-plugin3
https://api.github.com/repos/whitesource-yossi/npm-plugin3
opened
CVE-2010-0205 (High) detected in libpng-v1.2.2
security vulnerability
## CVE-2010-0205 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary> <p> <p>mirror of git://git.code.sf.net/p/libpng/code (mirror of the official repository)</p> <p>Library home page: <a href=https://api.github.com/repos/miningathome/libpng>https://api.github.com/repos/miningathome/libpng</a></p> <p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/npm-plugin3/commit/17c4f5082db21ae062ab0ca04afea5c034c60c6b">17c4f5082db21ae062ab0ca04afea5c034c60c6b</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (1)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /npm-plugin3/pngrutil.c </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The png_decompress_chunk function in pngrutil.c in libpng 1.0.x before 1.0.53, 1.2.x before 1.2.43, and 1.4.x before 1.4.1 does not properly handle compressed ancillary-chunk data that has a disproportionately large uncompressed representation, which allows remote attackers to cause a denial of service (memory and CPU consumption, and application hang) via a crafted PNG file, as demonstrated by use of the deflate compression method on data composed of many occurrences of the same character, related to a "decompression bomb" attack. 
<p>Publish Date: 2010-03-03 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-0205>CVE-2010-0205</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2010-0205">https://nvd.nist.gov/vuln/detail/CVE-2010-0205</a></p> <p>Release Date: 2010-03-03</p> <p>Fix Resolution: 1.0.53,1.2.43,1.4.1</p> </p> </details> <p></p>
True
CVE-2010-0205 (High) detected in libpng-v1.2.2 - ## CVE-2010-0205 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libpngv1.2.2</b></p></summary> <p> <p>mirror of git://git.code.sf.net/p/libpng/code (mirror of the official repository)</p> <p>Library home page: <a href=https://api.github.com/repos/miningathome/libpng>https://api.github.com/repos/miningathome/libpng</a></p> <p>Found in HEAD commit: <a href="https://github.com/whitesource-yossi/npm-plugin3/commit/17c4f5082db21ae062ab0ca04afea5c034c60c6b">17c4f5082db21ae062ab0ca04afea5c034c60c6b</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (1)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /npm-plugin3/pngrutil.c </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The png_decompress_chunk function in pngrutil.c in libpng 1.0.x before 1.0.53, 1.2.x before 1.2.43, and 1.4.x before 1.4.1 does not properly handle compressed ancillary-chunk data that has a disproportionately large uncompressed representation, which allows remote attackers to cause a denial of service (memory and CPU consumption, and application hang) via a crafted PNG file, as demonstrated by use of the deflate compression method on data composed of many occurrences of the same character, related to a "decompression bomb" attack. 
<p>Publish Date: 2010-03-03 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2010-0205>CVE-2010-0205</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2010-0205">https://nvd.nist.gov/vuln/detail/CVE-2010-0205</a></p> <p>Release Date: 2010-03-03</p> <p>Fix Resolution: 1.0.53,1.2.43,1.4.1</p> </p> </details> <p></p>
non_infrastructure
cve high detected in libpng cve high severity vulnerability vulnerable library mirror of git git code sf net p libpng code mirror of the official repository library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries npm pngrutil c vulnerability details the png decompress chunk function in pngrutil c in libpng x before x before and x before does not properly handle compressed ancillary chunk data that has a disproportionately large uncompressed representation which allows remote attackers to cause a denial of service memory and cpu consumption and application hang via a crafted png file as demonstrated by use of the deflate compression method on data composed of many occurrences of the same character related to a decompression bomb attack publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution
0
7,170
6,801,578,861
IssuesEvent
2017-11-02 17:14:59
Narvalex/Agrobook
https://api.github.com/repos/Narvalex/Agrobook
closed
Improve the event metadata
enhancement Infrastructure
It should be a simple JSON that shows: the id of the event, the commit id of the events
1.0
Improve the event metadata - It should be a simple JSON that shows: the id of the event, the commit id of the events
infrastructure
improve the event metadata it should be a simple json that shows the id of the event the commit id of the events
1
8,271
7,314,362,919
IssuesEvent
2018-03-01 06:46:32
sonarwhal/sonarwhal
https://api.github.com/repos/sonarwhal/sonarwhal
closed
New: Configuration package
area:infrastructure priority:high type:new-feature
This is somehow similar to #51 and if implemented it should be closed. The idea is to create a new type of resource `configuration` (or any better name). The `configuration` resources will be published as independent packages. E.g.: `@sonarwhal/configuration-recommended`. These packages will have in their `package.json` the list of all the dependencies. It could be rules, parsers, formatters or any combination of the above. The main file will expose the `configuration` which is basically a `.sonarwhalrc` json object with the configuration for each of the packages. The way someone will use a configuration is via the `.sonarwhalrc` file: ```json { "extends": ["config1", "config2"] } ``` A user can extend from multiple configurations. The priority goes from left to right, anything declared explicitly in the `.sonarwhalrc` file takes precedence. Doing this will allow us to have packages for: * TypeScript rules that also install the TypeScript parser (and similar) (#802) * Group rules such as `manifest` rules * Recommended configuration we had prior to 0.26 (rules + formatters) * Configuration to ignore domains that usually fail (e.g.: google analytics) We could even separate all the connectors using this approach and pull them via the `recommended` configuration. The `--init` option of the command line will have to be modified. Right now it has 2 options: * List all the official rules and select which ones to install. This will be a problem because we are just going to keep adding more * Install the recommended ones. This will also be a problem. Right now we use the `sonarwhal-recommended` keyword to search in npm, but what will we do once we have rules for developer environments? And if we keep adding rules, searching in npm might become a problem (#814). With this new concept, `--init` will list all the official configuration files found in npm and prompt the user which one(s) it should install (maybe it should be renamed to `--configure`?). 
**Possible problems**: When installing a config package globally, the rules, parsers, etc. will be in the `node_modules` folder of the config. We need to find a way to make `sonarwhal` find those resources @sonarwhal/contributors thoughts?
1.0
New: Configuration package - This is somehow similar to #51 and if implemented it should be closed. The idea is to create a new type of resource `configuration` (or any better name). The `configuration` resources will be published as independent packages. E.g.: `@sonarwhal/configuration-recommended`. These packages will have in their `package.json` the list of all the dependencies. It could be rules, parsers, formatters or any combination of the above. The main file will expose the `configuration` which is basically a `.sonarwhalrc` json object with the configuration for each of the packages. The way someone will use a configuration is via the `.sonarwhalrc` file: ```json { "extends": ["config1", "config2"] } ``` A user can extend from multiple configurations. The priority goes from left to right, anything declared explicitly in the `.sonarwhalrc` file takes precedence. Doing this will allow us to have packages for: * TypeScript rules that also install the TypeScript parser (and similar) (#802) * Group rules such as `manifest` rules * Recommended configuration we had prior to 0.26 (rules + formatters) * Configuration to ignore domains that usually fail (e.g.: google analytics) We could even separate all the connectors using this approach and pull them via the `recommended` configuration. The `--init` option of the command line will have to be modified. Right now it has 2 options: * List all the official rules and select which ones to install. This will be a problem because we are just going to keep adding more * Install the recommended ones. This will also be a problem. Right now we use the `sonarwhal-recommended` keyword to search in npm, but what will we do once we have rules for developer environments? And if we keep adding rules, searching in npm might become a problem (#814). 
With this new concept, `--init` will list all the official configuration files found in npm and prompt the user which one(s) it should install (maybe it should be renamed to `--configure`?). **Possible problems**: When installing a config package globally, the rules, parsers, etc. will be in the `node_modules` folder of the config. We need to find a way to make `sonarwhal` find those resources @sonarwhal/contributors thoughts?
infrastructure
new configuration package this is somehow similar to and if implemented it should be closed the idea is to create a new type of resource configuration or any better name the configuration resources will be published as independent packages e g sonarwhal configuration recommended these packages will have in their package json the list of all the dependencies it could be rules parsers formatters or any combination of the above the main file will expose the configuration which is basically a sonarwhalrc json object with the configuration for each of the packages the way someone will use a configuration is via the sonarwhalrc file json extends a user can extend from multiple configurations the priority goes from left to right anything declared explicitly in the sonarwhalrc file takes precedence doing this will allow us to have packages for typescript rules that also install the typescript parser and similar group rules such as manifest rules recommended configuration we had prior to rules formatters configuration to ignore domains that usually fail e g google analytics we could even separate all the connectors using this approach and pull them via the recommended configuration the init option of the command line will have to be modified right now it has options list all the official rules and select which ones to install this will be a problem because we are just going to keep adding more install the recommended ones this will also be a problem right now we use the sonarwhal recommended keyword to search in npm but what will we do once we have rules for developer environments and if we keep adding rules searching in npm might become a problem with this new concept init will list all the official configuration files found in npm and prompt the user which one s it should install maybe it should be renamed to configure possible problems when installing a config package globally the rules parsers etc will be in the node modules folder of the config we need to find a way to 
make sonarwhal find those resources sonarwhal contributors thoughts
1
779,956
27,373,628,835
IssuesEvent
2023-02-28 02:51:29
etternagame/etterna
https://api.github.com/repos/etternagame/etterna
closed
[Feature Request]: Associate a profile to a gamemode
Type: Enhancement Priority: Very Low
### Is there an existing issue for the feature? - [X] I have searched the existing feature requests ### Describe the Feature Allow a player to have a default gamemode tied to a given profile Example of what this might look like: ![image](https://user-images.githubusercontent.com/39735358/158274954-79952516-7bfd-4729-828b-1cee3a127ed3.png) ### How Does The Feature Add To The Game? People who play on multiple different gamemodes usually do so with separate profiles for each gamemode, this would allow you to just select the profile that you want to play with, instead of swapping gamemodes and then selecting that profile. ### Additional Context _No response_
1.0
[Feature Request]: Associate a profile to a gamemode - ### Is there an existing issue for the feature? - [X] I have searched the existing feature requests ### Describe the Feature Allow a player to have a default gamemode tied to a given profile Example of what this might look like: ![image](https://user-images.githubusercontent.com/39735358/158274954-79952516-7bfd-4729-828b-1cee3a127ed3.png) ### How Does The Feature Add To The Game? People who play on multiple different gamemodes usually do so with separate profiles for each gamemode, this would allow you to just select the profile that you want to play with, instead of swapping gamemodes and then selecting that profile. ### Additional Context _No response_
non_infrastructure
associate a profile to a gamemode is there an existing issue for the feature i have searched the existing feature requests describe the feature allow a player to have a default gamemode tied to a given profile example of what this might look like how does the feature add to the game people who play on multiple different gamemodes usually do so with separate profiles for each gamemode this would allow you to just select the profile that you want to play with instead of swapping gamemodes and then selecting that profile additional context no response
0