Unnamed: 0
int64 1
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 3
438
| labels
stringlengths 4
308
| body
stringlengths 7
254k
| index
stringclasses 7
values | text_combine
stringlengths 96
254k
| label
stringclasses 2
values | text
stringlengths 96
246k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
432,647
| 30,290,911,394
|
IssuesEvent
|
2023-07-09 09:12:59
|
apache/arrow
|
https://api.github.com/repos/apache/arrow
|
closed
|
[CI][Docs] test-ubuntu-default-docs is failing with no space left on device
|
Type: bug Component: Documentation Component: Continuous Integration Priority: Blocker
|
### Describe the bug, including details regarding any error messages, version, and platform.
The nightly build for [test-ubuntu-default-docs](https://github.com/ursacomputing/crossbow/runs/14845214272) has failed to build with:
```
error: could not write to 'build/bdist.linux-x86_64/wheel/pyarrow/libarrow_python.so': No space left on device
```
I've retried and the issue has been reproduced again.
We are currently running the nightly job on azure-linux but the PR one that uses the same container `ubuntu-docs` on GitHub actions, we probably could move it to GitHub actions to have a common way of doing it.
This is a must fix for the release as this is run as part of the packaging jobs in order to build the documentation for the release.
### Component(s)
Continuous Integration, Documentation
|
1.0
|
[CI][Docs] test-ubuntu-default-docs is failing with no space left on device - ### Describe the bug, including details regarding any error messages, version, and platform.
The nightly build for [test-ubuntu-default-docs](https://github.com/ursacomputing/crossbow/runs/14845214272) has failed to build with:
```
error: could not write to 'build/bdist.linux-x86_64/wheel/pyarrow/libarrow_python.so': No space left on device
```
I've retried and the issue has been reproduced again.
We are currently running the nightly job on azure-linux but the PR one that uses the same container `ubuntu-docs` on GitHub actions, we probably could move it to GitHub actions to have a common way of doing it.
This is a must fix for the release as this is run as part of the packaging jobs in order to build the documentation for the release.
### Component(s)
Continuous Integration, Documentation
|
non_main
|
test ubuntu default docs is failing with no space left on device describe the bug including details regarding any error messages version and platform the nightly build for has failed to build with error could not write to build bdist linux wheel pyarrow libarrow python so no space left on device i ve retried and the issue has been reproduced again we are currently running the nightly job on azure linux but the pr one that uses the same container ubuntu docs on github actions we probably could move it to github actions to have a common way of doing it this is a must fix for the release as this is run as part of the packaging jobs in order to build the documentation for the release component s continuous integration documentation
| 0
|
1,076
| 4,892,785,903
|
IssuesEvent
|
2016-11-18 20:51:13
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
system/mount.py
|
affects_2.3 bug_report waiting_on_maintainer
|
##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
mount module
##### ANSIBLE VERSION
N/A
##### SUMMARY
This ticket is opened because #8377 on ansible core is closed (and this is a more appropriate repo)
This issue is still present as recently as 1.9.4 (cannot test on newer versions).
It looks as though the mount module calls -o remount blindly instead of calling umount followed by mount.
Example:
```
- name: mount a mount
mount:
state: "mounted"
fstype: "nfs"
name: "/mnt/foo"
src: "fizz:/buzz"
opts: "rsize=8192,wsize=8192,timeo=14,intr,nosuid"
```
Will happily mount your mount, but when you run with
```
- name: mount a mount
mount:
state: "mounted"
fstype: "nfs"
name: "/mnt/foo"
src: "fizz:/buzz"
opts: "timeo=14,intr,nosuid"
```
It will fail with an unsupported mount option error, because remount is not a supported mount option for mount.nfs
|
True
|
system/mount.py - ##### ISSUE TYPE
Bug Report
##### COMPONENT NAME
mount module
##### ANSIBLE VERSION
N/A
##### SUMMARY
This ticket is opened because #8377 on ansible core is closed (and this is a more appropriate repo)
This issue is still present as recently as 1.9.4 (cannot test on newer versions).
It looks as though the mount module calls -o remount blindly instead of calling umount followed by mount.
Example:
```
- name: mount a mount
mount:
state: "mounted"
fstype: "nfs"
name: "/mnt/foo"
src: "fizz:/buzz"
opts: "rsize=8192,wsize=8192,timeo=14,intr,nosuid"
```
Will happily mount your mount, but when you run with
```
- name: mount a mount
mount:
state: "mounted"
fstype: "nfs"
name: "/mnt/foo"
src: "fizz:/buzz"
opts: "timeo=14,intr,nosuid"
```
It will fail with an unsupported mount option error, because remount is not a supported mount option for mount.nfs
|
main
|
system mount py issue type bug report component name mount module ansible version n a summary this ticket is opened because on ansible core is closed and this is a more appropriate repo this issue is still present as recently as cannot test on newer versions it looks as though the mount module calls o remount blindly instead of calling umount followed by mount example name mount a mount mount state mounted fstype nfs name mnt foo src fizz buzz opts rsize wsize timeo intr nosuid will happily mount your mount but when you run with name mount a mount mount state mounted fstype nfs name mnt foo src fizz buzz opts timeo intr nosuid it will fail with an unsupported mount option error because remount is not a supported mount option for mount nfs
| 1
|
4,138
| 19,663,538,793
|
IssuesEvent
|
2022-01-10 19:39:56
|
VA-Explorer/va_explorer
|
https://api.github.com/repos/VA-Explorer/va_explorer
|
opened
|
Consider ways to prevent unnecessary API calls via clientside callbacks
|
Type: Maintainance
|
**What is the expected state?**
Clientside callbacks are utilized in the context described in #173
**What is the actual state?**
No optimization
**Relevant context**
`va_explorer/va_analytics/dash_apps/va_dashboard.py`
|
True
|
Consider ways to prevent unnecessary API calls via clientside callbacks - **What is the expected state?**
Clientside callbacks are utilized in the context described in #173
**What is the actual state?**
No optimization
**Relevant context**
`va_explorer/va_analytics/dash_apps/va_dashboard.py`
|
main
|
consider ways to prevent unnecessary api calls via clientside callbacks what is the expected state clientside callbacks are utilized in the context described in what is the actual state no optimization relevant context va explorer va analytics dash apps va dashboard py
| 1
|
4,148
| 19,750,670,102
|
IssuesEvent
|
2022-01-15 03:22:07
|
tModLoader/tModLoader
|
https://api.github.com/repos/tModLoader/tModLoader
|
closed
|
Combine client and server - post .NET Core
|
Requestor-TML Maintainers Type: Change/Feature Request
|
### Description
We could simplify distribution size significantly by having a single dll. With the .NET Core release, we no longer have the .FNA build for compiling mods either. Windows build could use a separate project for the server with output type <Exe> which launches the console window. Other platforms just need a custom launch script.
This would make downloading builds from the CI easier.
|
True
|
Combine client and server - post .NET Core - ### Description
We could simplify distribution size significantly by having a single dll. With the .NET Core release, we no longer have the .FNA build for compiling mods either. Windows build could use a separate project for the server with output type <Exe> which launches the console window. Other platforms just need a custom launch script.
This would make downloading builds from the CI easier.
|
main
|
combine client and server post net core description we could simplify distribution size significantly by having a single dll with the net core release we no longer have the fna build for compiling mods either windows build could use a separate project for the server with output type which launches the console window other platforms just need a custom launch script this would make downloading builds from the ci easier
| 1
|
291,336
| 25,138,559,274
|
IssuesEvent
|
2022-11-09 20:50:20
|
istio/ztunnel
|
https://api.github.com/repos/istio/ztunnel
|
opened
|
istio/istio is tested with new zTunnel
|
area/testing P0 size/TBD
|
The istio/istio repo should have a blocking prow job that runs our ambient integration tests using the new zTunnel. This may replace or temporarily run alongside a job that tests the original Envoy implementation.
|
1.0
|
istio/istio is tested with new zTunnel - The istio/istio repo should have a blocking prow job that runs our ambient integration tests using the new zTunnel. This may replace or temporarily run alongside a job that tests the original Envoy implementation.
|
non_main
|
istio istio is tested with new ztunnel the istio istio repo should have a blocking prow job that runs our ambient integration tests using the new ztunnel this may replace or temporarily run alongside a job that tests the original envoy implementation
| 0
|
3,369
| 13,041,628,752
|
IssuesEvent
|
2020-07-28 20:45:28
|
laminas/automatic-releases
|
https://api.github.com/repos/laminas/automatic-releases
|
closed
|
Set up `ORGANIZATION_ADMIN_TOKEN` to allow for automatic branch switching
|
Awaiting Maintainer Response Feature Request Help Wanted
|
Self-release partially succeeded, but we had a failure later in actions run:
```
/usr/bin/docker run --name c20181c155a4ed455de0bcfd869a25c9bb5f_021211 --label 87c201 --workdir /github/workspace --rm -e GITHUB_TOKEN -e SIGNING_SECRET_KEY -e GIT_AUTHOR_NAME -e GIT_AUTHOR_EMAIL -e INPUT_COMMAND-NAME -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/automatic-releases/automatic-releases":"/github/workspace" 87c201:81c155a4ed455de0bcfd869a25c9bb5f "laminas:automatic-releases:switch-default-branch-to-next-minor"
```
Caused
```
Fatal error: Uncaught InvalidArgumentException: Could not find a value for environment variable "GITHUB_TOKEN" in /app/vendor/webmozart/assert/src/Assert.php:2042
Stack trace:
#0 /app/vendor/webmozart/assert/src/Assert.php(779): Webmozart\Assert\Assert::reportInvalidArgument('Could not find ...')
#1 /app/vendor/webmozart/assert/src/Assert.php(69): Webmozart\Assert\Assert::notEq('', '', 'Could not find ...')
#2 /app/src/Environment/EnvironmentVariables.php(72): Webmozart\Assert\Assert::stringNotEmpty('', 'Could not find ...')
#3 /app/src/Environment/EnvironmentVariables.php(55): Laminas\AutomaticReleases\Environment\EnvironmentVariables::getenv('GITHUB_TOKEN')
#4 /app/bin/console.php(47): Laminas\AutomaticReleases\Environment\EnvironmentVariables::fromEnvironment(Object(Laminas\AutomaticReleases\Gpg\ImportGpgKeyFromStringViaTemporaryFile))
#5 /app/bin/console.php(113): Laminas\AutomaticReleases\WebApplication\{closure}()
#6 {main}
thrown in /app/vendor/webmozart/assert/src/Assert.php on line 2042
```
That's because `ORGANIZATION_ADMIN_TOKEN` is not set.
It's obviously a bit risky to add such a variable to the environment, but it should be fine if it's marked as protected (only direct pushes to repository branches can access it).
|
True
|
Set up `ORGANIZATION_ADMIN_TOKEN` to allow for automatic branch switching - Self-release partially succeeded, but we had a failure later in actions run:
```
/usr/bin/docker run --name c20181c155a4ed455de0bcfd869a25c9bb5f_021211 --label 87c201 --workdir /github/workspace --rm -e GITHUB_TOKEN -e SIGNING_SECRET_KEY -e GIT_AUTHOR_NAME -e GIT_AUTHOR_EMAIL -e INPUT_COMMAND-NAME -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/automatic-releases/automatic-releases":"/github/workspace" 87c201:81c155a4ed455de0bcfd869a25c9bb5f "laminas:automatic-releases:switch-default-branch-to-next-minor"
```
Caused
```
Fatal error: Uncaught InvalidArgumentException: Could not find a value for environment variable "GITHUB_TOKEN" in /app/vendor/webmozart/assert/src/Assert.php:2042
Stack trace:
#0 /app/vendor/webmozart/assert/src/Assert.php(779): Webmozart\Assert\Assert::reportInvalidArgument('Could not find ...')
#1 /app/vendor/webmozart/assert/src/Assert.php(69): Webmozart\Assert\Assert::notEq('', '', 'Could not find ...')
#2 /app/src/Environment/EnvironmentVariables.php(72): Webmozart\Assert\Assert::stringNotEmpty('', 'Could not find ...')
#3 /app/src/Environment/EnvironmentVariables.php(55): Laminas\AutomaticReleases\Environment\EnvironmentVariables::getenv('GITHUB_TOKEN')
#4 /app/bin/console.php(47): Laminas\AutomaticReleases\Environment\EnvironmentVariables::fromEnvironment(Object(Laminas\AutomaticReleases\Gpg\ImportGpgKeyFromStringViaTemporaryFile))
#5 /app/bin/console.php(113): Laminas\AutomaticReleases\WebApplication\{closure}()
#6 {main}
thrown in /app/vendor/webmozart/assert/src/Assert.php on line 2042
```
That's because `ORGANIZATION_ADMIN_TOKEN` is not set.
It's obviously a bit risky to add such a variable to the environment, but it should be fine if it's marked as protected (only direct pushes to repository branches can access it).
|
main
|
set up organization admin token to allow for automatic branch switching self release partially succeeded but we had a failure later in actions run usr bin docker run name label workdir github workspace rm e github token e signing secret key e git author name e git author email e input command name e home e github job e github ref e github sha e github repository e github repository owner e github run id e github run number e github actor e github workflow e github head ref e github base ref e github event name e github server url e github api url e github graphql url e github workspace e github action e github event path e runner os e runner tool cache e runner temp e runner workspace e actions runtime url e actions runtime token e actions cache url e github actions true e ci true v var run docker sock var run docker sock v home runner work temp github home github home v home runner work temp github workflow github workflow v home runner work automatic releases automatic releases github workspace laminas automatic releases switch default branch to next minor caused fatal error uncaught invalidargumentexception could not find a value for environment variable github token in app vendor webmozart assert src assert php stack trace app vendor webmozart assert src assert php webmozart assert assert reportinvalidargument could not find app vendor webmozart assert src assert php webmozart assert assert noteq could not find app src environment environmentvariables php webmozart assert assert stringnotempty could not find app src environment environmentvariables php laminas automaticreleases environment environmentvariables getenv github token app bin console php laminas automaticreleases environment environmentvariables fromenvironment object laminas automaticreleases gpg importgpgkeyfromstringviatemporaryfile app bin console php laminas automaticreleases webapplication closure main thrown in app vendor webmozart assert src assert php on line that s because organization admin token is not set it s obviously a bit risky to add such a variable to the environment but it should be fine if it s marked as protected only direct pushes to repository branches can access it
| 1
|
22,453
| 31,224,333,295
|
IssuesEvent
|
2023-08-19 00:07:18
|
googleapis/google-cloud-node
|
https://api.github.com/repos/googleapis/google-cloud-node
|
closed
|
Warning: a recent release failed
|
type: process
|
The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log.
|
1.0
|
Warning: a recent release failed - The following release PRs may have failed:
* #4497 - The release job failed -- check the build log.
* #4467 - The release job failed -- check the build log.
|
non_main
|
warning a recent release failed the following release prs may have failed the release job failed check the build log the release job failed check the build log
| 0
|
5,461
| 27,313,313,247
|
IssuesEvent
|
2023-02-24 13:56:25
|
centerofci/mathesar
|
https://api.github.com/repos/centerofci/mathesar
|
closed
|
Figure out how to navigate from the live demo back to Mathesar.org
|
type: enhancement work: frontend status: ready restricted: maintainers
|
In live demo mode, we should:
- Add a link back to Mathesar in the live demo banner.
- Add a link to Mathesar.org in the top corner menu.
|
True
|
Figure out how to navigate from the live demo back to Mathesar.org - In live demo mode, we should:
- Add a link back to Mathesar in the live demo banner.
- Add a link to Mathesar.org in the top corner menu.
|
main
|
figure out how to navigate from the live demo back to mathesar org in live demo mode we should add a link back to mathesar in the live demo banner add a link to mathesar org in the top corner menu
| 1
|
399,411
| 27,239,924,340
|
IssuesEvent
|
2023-02-21 19:27:55
|
VeryGoodOpenSource/dart_frog
|
https://api.github.com/repos/VeryGoodOpenSource/dart_frog
|
closed
|
fix: Deploying with Elastic Beanstalk
|
bug documentation needs triage
|
**Description**
As I'm new on this backend world, I was discouraged to follow the documentation to deploy using those 3 recommendations (AWS App Runner, Google Cloud Platform and Digital Ocean App Platform) due to it's price. It's unpraticable to deploy with an application 24/7.
So, because of this, I'm trying to deploy it on EC2 instance, using the Elastic Beanstalk to admistrate the contained app using this build folder .zip generated. But, I didn't get success on it. Could you help on it?
**Steps To Reproduce**
1. Build a new application and .zip the content under build folder
2. Upload this .zip file into the Elastic Beanstalk environment with a docker app.
**Expected Behavior**
Application success deployed.
**Screenshots**

Reading the log error, it seems like not founding the Dockerfile archive.
`Instance deployment: Both 'Dockerfile' and 'Dockerrun.aws.json' are missing in your source bundle. Include at least one of them. The deployment failed.`
But it's there:

**Additional Context**
Docker running on 64bit Amazon Linux 2 - WebServer
t2.micro instance
|
1.0
|
fix: Deploying with Elastic Beanstalk - **Description**
As I'm new on this backend world, I was discouraged to follow the documentation to deploy using those 3 recommendations (AWS App Runner, Google Cloud Platform and Digital Ocean App Platform) due to it's price. It's unpraticable to deploy with an application 24/7.
So, because of this, I'm trying to deploy it on EC2 instance, using the Elastic Beanstalk to admistrate the contained app using this build folder .zip generated. But, I didn't get success on it. Could you help on it?
**Steps To Reproduce**
1. Build a new application and .zip the content under build folder
2. Upload this .zip file into the Elastic Beanstalk environment with a docker app.
**Expected Behavior**
Application success deployed.
**Screenshots**

Reading the log error, it seems like not founding the Dockerfile archive.
`Instance deployment: Both 'Dockerfile' and 'Dockerrun.aws.json' are missing in your source bundle. Include at least one of them. The deployment failed.`
But it's there:

**Additional Context**
Docker running on 64bit Amazon Linux 2 - WebServer
t2.micro instance
|
non_main
|
fix deploying with elastic beanstalk description as i m new on this backend world i was discouraged to follow the documentation to deploy using those recommendations aws app runner google cloud platform and digital ocean app platform due to it s price it s unpraticable to deploy with an application so because of this i m trying to deploy it on instance using the elastic beanstalk to admistrate the contained app using this build folder zip generated but i didn t get success on it could you help on it steps to reproduce build a new application and zip the content under build folder upload this zip file into the elastic beanstalk environment with a docker app expected behavior application success deployed screenshots reading the log error it seems like not founding the dockerfile archive instance deployment both dockerfile and dockerrun aws json are missing in your source bundle include at least one of them the deployment failed but it s there additional context docker running on amazon linux webserver micro instance
| 0
|
2,035
| 6,847,219,062
|
IssuesEvent
|
2017-11-13 14:48:07
|
ansible/ansible
|
https://api.github.com/repos/ansible/ansible
|
closed
|
mtu and interface parameter not available on nxos_system
|
affects_2.3 feature_idea module needs_maintainer networking nxos support:core
|
Since the nxos_mtu module has been deprecated and according to the module documentation it will be replace by mtu parameter for nxos_system module. I've checked the nxos_system module but it doesn't have the mtu parameter only system_mtu. I couldn't see interface parameter as well on the nxos_system removing the capability to set mtu on per interface basis.
##### ISSUE TYPE
- setting mtu on per interface basis is not available on nxos_system moduel
##### COMPONENT NAME
/usr/lib/python2.7/site-packages/ansible/modules/network/nxos/nxos_system.py
##### ANSIBLE VERSION
2.3
```
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.
-->
##### SUMMARY
Since the DEPRECATION of nxos_mtu module the capability to set mtu on per interface basis is gone because it isn't supported on the nxos_system module. Because the nxos_system module doesn't have the mtu and interface parameter.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
|
True
|
mtu and interface parameter not available on nxos_system -
Since the nxos_mtu module has been deprecated and according to the module documentation it will be replace by mtu parameter for nxos_system module. I've checked the nxos_system module but it doesn't have the mtu parameter only system_mtu. I couldn't see interface parameter as well on the nxos_system removing the capability to set mtu on per interface basis.
##### ISSUE TYPE
- setting mtu on per interface basis is not available on nxos_system moduel
##### COMPONENT NAME
/usr/lib/python2.7/site-packages/ansible/modules/network/nxos/nxos_system.py
##### ANSIBLE VERSION
2.3
```
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.
-->
##### SUMMARY
Since the DEPRECATION of nxos_mtu module the capability to set mtu on per interface basis is gone because it isn't supported on the nxos_system module. Because the nxos_system module doesn't have the mtu and interface parameter.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
```
|
main
|
mtu and interface parameter not available on nxos system since the nxos mtu module has been deprecated and according to the module documentation it will be replace by mtu parameter for nxos system module i ve checked the nxos system module but it doesn t have the mtu parameter only system mtu i couldn t see interface parameter as well on the nxos system removing the capability to set mtu on per interface basis issue type setting mtu on per interface basis is not available on nxos system moduel component name usr lib site packages ansible modules network nxos nxos system py ansible version configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific also mention the specific version of what you are trying to control e g if this is a network bug the version of firmware on the network device summary since the deprecation of nxos mtu module the capability to set mtu on per interface basis is gone because it isn t supported on the nxos system module because the nxos system module doesn t have the mtu and interface parameter steps to reproduce for bugs show exactly how to reproduce the problem using a minimal test case for new features show how the feature would be used yaml expected results actual results
| 1
|
5,061
| 25,922,574,145
|
IssuesEvent
|
2022-12-15 23:46:05
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
reopened
|
[Feature Request]: Datepicker - better documentation and example
|
type: enhancement 💡 status: needs triage 🕵️♀️ status: waiting for maintainer response 💬
|
### The problem
Hi, the datepicker has a bad documentation and examples. There is no examples of ow to read the values in a forms with a validator like Formik.
### The solution
<DatePicker
dateFormat="d/m/Y"
datePickerType="single"
size="md"
locale="it"
value={this.state.datacollaudo5anni}
onChange={(e) => {
values.datacollaudo5anni = e;
}}
id="datacollaudo5annifield"
>
<DatePickerInput
labelText="Data Collaudo 5 Anni"
placeholder="dd/mm/yyyy"
size="md"
id="datacollaudo5anni"
name="datacollaudo5anni"
/>
### Examples
_No response_
### Application/PAL
_No response_
### Business priority
None
### Available extra resources
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
|
True
|
[Feature Request]: Datepicker - better documentation and example - ### The problem
Hi, the datepicker has a bad documentation and examples. There is no examples of ow to read the values in a forms with a validator like Formik.
### The solution
<DatePicker
dateFormat="d/m/Y"
datePickerType="single"
size="md"
locale="it"
value={this.state.datacollaudo5anni}
onChange={(e) => {
values.datacollaudo5anni = e;
}}
id="datacollaudo5annifield"
>
<DatePickerInput
labelText="Data Collaudo 5 Anni"
placeholder="dd/mm/yyyy"
size="md"
id="datacollaudo5anni"
name="datacollaudo5anni"
/>
### Examples
_No response_
### Application/PAL
_No response_
### Business priority
None
### Available extra resources
_No response_
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
|
main
|
datepicker better documentation and example the problem hi the datepicker has a bad documentation and examples there is no examples of ow to read the values in a forms with a validator like formik the solution datepicker dateformat d m y datepickertype single size md locale it value this state onchange e values e id datepickerinput labeltext data collaudo anni placeholder dd mm yyyy size md id name examples no response application pal no response business priority none available extra resources no response code of conduct i agree to follow this project s
| 1
|
1,446
| 6,283,055,729
|
IssuesEvent
|
2017-07-19 01:42:10
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
closed
|
Unused files?
|
General_Discussion Maintainability Type_Other
|
I am investigating unused files in the source tree.
- [x] images/cacheatr/ - apparently referenced only in /tpl/stdstyle/cache-atr.tpl.php (which seems to be unused)
- [x] images/cachemaps - apparently not referenced
- [x] util/google-earth/\* and images/ge - apparently not referenced
- [x] images/header/ and util.sec/make_logo.php - apparently not referenced
- [x] images/thumbs - apparently not referenced
- [x] images/amnesius\*
- [x] images/*.swf
- [x] images/diff-*
- [x] images/terr-*
- [x] images/lightbox/\* seem to be a duplicate. It is referenced, but /tpl/stdstyle/images/lightbox is actually used
- [x] various files under /images (especially third party logos)
- [x] lib/tinymce (not 4) seems to be a remnant from a previous tinymce instance ?
- [x] various .js files are spread around lib/ rather then being under lib/js/
- [x] SVG and graphic resource files under logbook/ should be moved as templates under docs/ and the files actually used to be published by each node via wiki, based on these templates
- [x] util/ and Utils/ should be merged. (no reason to have many utilities directories)
- [x] there are at least 3 versions on jQuery under tpl/stdstyle/js/ ; These should be unified to a single deployment.
- [x] tpl/stdstyle/images/log/ and tpl/stdstyle/images/log2 ; These should be merged or cleaned up
- [x] tpl/stdstyle/images/profile
- [x] tpl/stdstyle/images/progressbar
- [x] tpl/stdstyle/images/thumb - at least some are unused (images with german text)
- [x] tpl/stdstyle/images/toltipsImages - should be added to more pages that need such icons (and directory renamed)
These are the ones I was able to identify today.
If any/some/all of these are not used anymore, I suggest removal, they only add clutter.
|
True
|
Unused files? - I am investigating unused files in the source tree.
- [x] images/cacheatr/ - apparently referenced only in /tpl/stdstyle/cache-atr.tpl.php (which seems to be unused)
- [x] images/cachemaps - apparently not referenced
- [x] util/google-earth/\* and images/ge - apparently not referenced
- [x] images/header/ and util.sec/make_logo.php - apparently not referenced
- [x] images/thumbs - apparently not referenced
- [x] images/amnesius\*
- [x] images/*.swf
- [x] images/diff-*
- [x] images/terr-*
- [x] images/lightbox/\* seem to be a duplicate. It is referenced, but /tpl/stdstyle/images/lightbox is actually used
- [x] various files under /images (especially third party logos)
- [x] lib/tinymce (not 4) seems to be a remnant from a previous tinymce instance ?
- [x] various .js files are spread around lib/ rather then being under lib/js/
- [x] SVG and graphic resource files under logbook/ should be moved as templates under docs/ and the files actually used to be published by each node via wiki, based on these templates
- [x] util/ and Utils/ should be merged. (no reason to have many utilities directories)
- [x] there are at least 3 versions on jQuery under tpl/stdstyle/js/ ; These should be unified to a single deployment.
- [x] tpl/stdstyle/images/log/ and tpl/stdstyle/images/log2 ; These should be merged or cleaned up
- [x] tpl/stdstyle/images/profile
- [x] tpl/stdstyle/images/progressbar
- [x] tpl/stdstyle/images/thumb - at least some are unused (images with german text)
- [x] tpl/stdstyle/images/toltipsImages - should be added to more pages that need such icons (and directory renamed)
These are the ones I was able to identify today.
If any/some/all of these are not used anymore, I suggest removal, they only add clutter.
|
main
|
unused files i am investigating unused files in the source tree images cacheatr apparently referenced only in tpl stdstyle cache atr tpl php which seems to be unused images cachemaps apparently not referenced util google earth and images ge apparently not referenced images header and util sec make logo php apparently not referenced images thumbs apparently not referenced images amnesius images swf images diff images terr images lightbox seem to be a duplicate it is referenced but tpl stdstyle images lightbox is actually used various files under images especially third party logos lib tinymce not seems to be a remnant from a previous tinymce instance various js files are spread around lib rather then being under lib js svg and graphic resource files under logbook should be moved as templates under docs and the files actually used to be published by each node via wiki based on these templates util and utils should be merged no reason to have many utilities directories there are at least versions on jquery under tpl stdstyle js these should be unified to a single deployment tpl stdstyle images log and tpl stdstyle images these should be merged or cleaned up tpl stdstyle images profile tpl stdstyle images progressbar tpl stdstyle images thumb at least some are unused images with german text tpl stdstyle images toltipsimages should be added to more pages that need such icons and directory renamed these are the ones i was able to identify today if any some all of these are not used anymore i suggest removal they only add clutter
| 1
|
782,344
| 27,493,863,901
|
IssuesEvent
|
2023-03-04 23:39:50
|
netzo/netzo
|
https://api.github.com/repos/netzo/netzo
|
closed
|
[ci] get npm-publish workflow working
|
priority: high type: bug
|
It's currently failing. This should, on each release, build the Deno module into an npm package and auto-deploy it to npm.
|
1.0
|
[ci] get npm-publish workflow working - It's currently failing. This should, on each release, build the Deno module into an npm package and auto-deploy it to npm.
|
non_main
|
get npm publish workflow working it s currently failing this should on each release build the deno module into an npm package and auto deploy it to npm
| 0
|
157,690
| 13,711,165,521
|
IssuesEvent
|
2020-10-02 03:31:10
|
CS3219-SE-Principles-and-Patterns/cs3219-ay2021-s1-project-2020-s1-g11
|
https://api.github.com/repos/CS3219-SE-Principles-and-Patterns/cs3219-ay2021-s1-project-2020-s1-g11
|
closed
|
Requirement Specification
|
documentation
|
You will be required to present your requirements and preliminary design of the project you want to do, to your tutor(s) in a meeting. This is a mandatory activity to attend but not formally graded. All team members are expected to attend the meeting. Your effort will be noted and may have an impact on overall project grade.
Come up with a prioritized list of requirements, and a provisional design (design diagram(s) of existing/new app. You can use any familiar notation(s).
|
1.0
|
Requirement Specification - You will be required to present your requirements and preliminary design of the project you want to do, to your tutor(s) in a meeting. This is a mandatory activity to attend but not formally graded. All team members are expected to attend the meeting. Your effort will be noted and may have an impact on overall project grade.
Come up with a prioritized list of requirements, and a provisional design (design diagram(s) of existing/new app. You can use any familiar notation(s).
|
non_main
|
requirement specification you will be required to present your requirements and preliminary design of the project you want to do to your tutor s in a meeting this is a mandatory activity to attend but not formally graded all team members are expected to attend the meeting your effort will be noted and may have an impact on overall project grade come up with a prioritized list of requirements and a provisional design design diagram s of existing new app you can use any familiar notation s
| 0
|
4,208
| 20,739,170,815
|
IssuesEvent
|
2022-03-14 16:08:23
|
backdrop-ops/contrib
|
https://api.github.com/repos/backdrop-ops/contrib
|
closed
|
Maintainer change request: insert module
|
Maintainer change request
|
**Thank you for supporting the Backdrop community!**
Please note the procedure to add a new maintainer to a project:
1. Please join the Backdrop Contrib group (if you have not already) by
submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A).
2. File an issue in the current project's issue queue offering to help maintain
that project.
3. Create a PR for that project that adds your name to the README.md file in
the list of maintainers. <!-- The project maintainer, or a backdrop-contrib
administrator, will merge this PR to accept your offer of help. -->
4. If the project does not have a listed maintainer, or if a current maintainer
does not respond within 2 weeks, create *this issue* to take over the project.
**Please include a link to the issue you filed for the project.**
https://github.com/backdrop-contrib/insert/issues/17
**Please include a link to the PR that adds your name to the README.md file.**
https://github.com/backdrop-contrib/insert/pull/18
<!-- After confirming the project has been abandoned for a period of 2 weeks or
more, a Backdrop Contrib administrator will add your name to the list of
maintainers in that project's README.md file, and grant you admin access to the
project. -->
|
True
|
Maintainer change request: insert module - **Thank you for supporting the Backdrop community!**
Please note the procedure to add a new maintainer to a project:
1. Please join the Backdrop Contrib group (if you have not already) by
submitting [an application](https://github.com/backdrop-ops/contrib/issues/new?assignees=klonos&labels=Maintainer+application&template=application-to-join-the-contrib-group.md&title=Application+to+join+the+Contrib+Group%3A).
2. File an issue in the current project's issue queue offering to help maintain
that project.
3. Create a PR for that project that adds your name to the README.md file in
the list of maintainers. <!-- The project maintainer, or a backdrop-contrib
administrator, will merge this PR to accept your offer of help. -->
4. If the project does not have a listed maintainer, or if a current maintainer
does not respond within 2 weeks, create *this issue* to take over the project.
**Please include a link to the issue you filed for the project.**
https://github.com/backdrop-contrib/insert/issues/17
**Please include a link to the PR that adds your name to the README.md file.**
https://github.com/backdrop-contrib/insert/pull/18
<!-- After confirming the project has been abandoned for a period of 2 weeks or
more, a Backdrop Contrib administrator will add your name to the list of
maintainers in that project's README.md file, and grant you admin access to the
project. -->
|
main
|
maintainer change request insert module thank you for supporting the backdrop community please note the procedure to add a new maintainer to a project please join the backdrop contrib group if you have not already by submitting file an issue in the current project s issue queue offering to help maintain that project create a pr for that project that adds your name to the readme md file in the list of maintainers the project maintainer or a backdrop contrib administrator will merge this pr to accept your offer of help if the project does not have a listed maintainer or if a current maintainer does not respond within weeks create this issue to take over the project please include a link to the issue you filed for the project please include a link to the pr that adds your name to the readme md file after confirming the project has been abandoned for a period of weeks or more a backdrop contrib administrator will add your name to the list of maintainers in that project s readme md file and grant you admin access to the project
| 1
|
3,908
| 17,380,166,845
|
IssuesEvent
|
2021-07-31 14:40:27
|
Catalyst-Swarm/Catalyst-Circle-Co-ordination
|
https://api.github.com/repos/Catalyst-Swarm/Catalyst-Circle-Co-ordination
|
closed
|
Circle Meeting 1 - Toolmaker and Maintainer Circle - Tracking
|
Catalyst-Circle Tracking Meeting #1 Toolmaker and Maintainer Tracking
|
# Circle Meeting 1 - Toolmaker and Maintainer Circle - Tracking
## The Catalyst Alliance Meeting - 17/07/2021 - 16:00
- [x] @FelixfromSwarm - The Catalyst Alliance Meeting - 17/07/2021 - 16:00
## 1) Saturday Swarm Session - 17/07/2021 - 18:00
- [x] @FelixfromSwarm - Saturday Swarm Session - 17/07/2021 - 18:00

### Discord context
https://discord.com/channels/756943420660121600/864481683621543947/866618990242955264
### Recording from Saturday T&M problem sensing
https://drive.google.com/file/d/1g8DnVHvrIQ_ZyFmEU0C-7lJKQQLNcuY7/view?usp=sharing
### Miro Board link
https://miro.com/app/board/o9J_l6MT8rY=/?moveToWidget=3074457361524586504&cot=14
## 2) After Town Hall - Wednesday, 21st, July 2021
- [x] @FelixfromSwarm - After Townhall - 21/07/2021 - 20:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-2

## 3) Saturday Swarm Session - 24th July 2021
- [x] @FelixfromSwarm - Saturday Swarm Session - 24/07/2021 - 18:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-3

## 4) After Town Hall, 28th July 2021 - Final Form of T&M Problem Statement
- [x] @FelixfromSwarm - After Town Hall, 28th July 2021 - 24/07/2021 - 20:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-4

|
True
|
Circle Meeting 1 - Toolmaker and Maintainer Circle - Tracking - # Circle Meeting 1 - Toolmaker and Maintainer Circle - Tracking
## The Catalyst Alliance Meeting - 17/07/2021 - 16:00
- [x] @FelixfromSwarm - The Catalyst Alliance Meeting - 17/07/2021 - 16:00
## 1) Saturday Swarm Session - 17/07/2021 - 18:00
- [x] @FelixfromSwarm - Saturday Swarm Session - 17/07/2021 - 18:00

### Discord context
https://discord.com/channels/756943420660121600/864481683621543947/866618990242955264
### Recording from Saturday T&M problem sensing
https://drive.google.com/file/d/1g8DnVHvrIQ_ZyFmEU0C-7lJKQQLNcuY7/view?usp=sharing
### Miro Board link
https://miro.com/app/board/o9J_l6MT8rY=/?moveToWidget=3074457361524586504&cot=14
## 2) After Town Hall - Wednesday, 21st, July 2021
- [x] @FelixfromSwarm - After Townhall - 21/07/2021 - 20:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-2

## 3) Saturday Swarm Session - 24th July 2021
- [x] @FelixfromSwarm - Saturday Swarm Session - 24/07/2021 - 18:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-3

## 4) After Town Hall, 28th July 2021 - Final Form of T&M Problem Statement
- [x] @FelixfromSwarm - After Town Hall, 28th July 2021 - 24/07/2021 - 20:00
https://catalyst-swarm.gitbook.io/catalyst-circle/toolmakers-and-maintainers/activity#t-and-m-problem-sensing-4

|
main
|
circle meeting toolmaker and maintainer circle tracking circle meeting toolmaker and maintainer circle tracking the catalyst alliance meeting felixfromswarm the catalyst alliance meeting saturday swarm session felixfromswarm saturday swarm session discord context recording from saturday t m problem sensing miro board link after town hall wednesday july felixfromswarm after townhall saturday swarm session july felixfromswarm saturday swarm session after town hall july final form of t m problem statement felixfromswarm after town hall july
| 1
|
183
| 2,795,060,031
|
IssuesEvent
|
2015-05-11 19:59:21
|
acl2/acl2
|
https://api.github.com/repos/acl2/acl2
|
opened
|
books/centaur/quicklisp/clean.sh dependence on git foils release
|
Maintainability
|
There are git commands in books/centaur/quicklisp/clean.sh,
but the ACL2 release process involves a snapshot that avoids
the .git directory. Jared suggested that I create this Issue, for
him to address at his leisure (if at all) -- I can easily work around
this at release time, simply by temporarily commenting out the
git-related parts of that file (which Jared suggests is probably
good enough).
|
True
|
books/centaur/quicklisp/clean.sh dependence on git foils release - There are git commands in books/centaur/quicklisp/clean.sh,
but the ACL2 release process involves a snapshot that avoids
the .git directory. Jared suggested that I create this Issue, for
him to address at his leisure (if at all) -- I can easily work around
this at release time, simply by temporarily commenting out the
git-related parts of that file (which Jared suggests is probably
good enough).
|
main
|
books centaur quicklisp clean sh dependence on git foils release there are git commands in books centaur quicklisp clean sh but the release process involves a snapshot that avoids the git directory jared suggested that i create this issue for him to address at his leisure if at all i can easily work around this at release time simply by temporarily commenting out the git related parts of that file which jared suggests is probably good enough
| 1
|
219,531
| 16,835,538,400
|
IssuesEvent
|
2021-06-18 11:31:36
|
Code-Sauce-Official/FitMate
|
https://api.github.com/repos/Code-Sauce-Official/FitMate
|
opened
|
Feat: Format tests
|
Level2 documentation good first issue
|
**What is your issue related to ?**
- [ ] Code
- [ ] Designing
- [x] Documentation
- [ ] Testing/Research
- [x] Format
**Description**
Format sign up/sign in test document
Change the format of all tests similar to test 1
**Describe the solution you'd like**
| After CHANGING | ORIGINAL |
:-------------------------:|:-------------------------:
|<img src = "https://user-images.githubusercontent.com/53532851/122553430-48f8e200-d055-11eb-9408-6365c295cde7.png" width="400px">|<img src = "https://user-images.githubusercontent.com/53532851/122553460-52824a00-d055-11eb-9bd8-19d4c8f15542.png" width="400px">|
| TARGET | ORIGINAL |
:-------------------------:|:-------------------------:
|<img src = "https://user-images.githubusercontent.com/53532851/122554340-78f4b500-d056-11eb-9aaa-54541773d7e5.png" >|<img src = "https://user-images.githubusercontent.com/53532851/122554354-7eea9600-d056-11eb-9854-615defb73b6b.png" >|
### You have to format every test like this from test-2 to test-15
[File](https://github.com/Code-Sauce-Official/FitMate/blob/develop/Documentation/Testing/SignUpTesting.md)
|
1.0
|
Feat: Format tests - **What is your issue related to ?**
- [ ] Code
- [ ] Designing
- [x] Documentation
- [ ] Testing/Research
- [x] Format
**Description**
Format sign up/sign in test document
Change the format of all tests similar to test 1
**Describe the solution you'd like**
| After CHANGING | ORIGINAL |
:-------------------------:|:-------------------------:
|<img src = "https://user-images.githubusercontent.com/53532851/122553430-48f8e200-d055-11eb-9408-6365c295cde7.png" width="400px">|<img src = "https://user-images.githubusercontent.com/53532851/122553460-52824a00-d055-11eb-9bd8-19d4c8f15542.png" width="400px">|
| TARGET | ORIGINAL |
:-------------------------:|:-------------------------:
|<img src = "https://user-images.githubusercontent.com/53532851/122554340-78f4b500-d056-11eb-9aaa-54541773d7e5.png" >|<img src = "https://user-images.githubusercontent.com/53532851/122554354-7eea9600-d056-11eb-9854-615defb73b6b.png" >|
### You have to format every test like this from test-2 to test-15
[File](https://github.com/Code-Sauce-Official/FitMate/blob/develop/Documentation/Testing/SignUpTesting.md)
|
non_main
|
feat format tests what is your issue related to code designing documentation testing research format description format sign up sign in test document change the format of all tests similar to test describe the solution you d like after changing original target original you have to format every test like this from test to test
| 0
|
2,843
| 10,217,675,736
|
IssuesEvent
|
2019-08-15 14:13:27
|
precice/tutorials
|
https://api.github.com/repos/precice/tutorials
|
opened
|
Some FSI tutorials are not working with OpenFOAM6
|
enhancement maintainability
|
### What's the problem?
I am using OpenFOAM6 on my machine and the tutorial cases `tutorials/FSI/cylinderFlap/*` do not work. I explicitly tested the case `tutorials/FSI/cylinderFlap/OpenFOAM-FEniCS` while checking #38. For the other cases I tried executing `runFluid` and it always resulted in an error thrown by OpenFOAM (OF), even before the coupling was initialized.
### My proposed workaround
Commenting out some lines in `Fluid/system/fvSolution` does the trick and it works:
```
PIMPLE
{
nCorrectors 2;
nNonOrthogonalCorrectors 0;
// tolerance 1.0e-14;
// relTol 5e-3;
// pisoTol 1e-6;
consistent true;
nOuterCorrectors 50;
// residualControl
// {
// U
// {
// tolerance 5e-5;
// relTol 0;
// }
// p
// {
// tolerance 5e-4;
// relTol 0;
// }
// }
}
```
Note again: I only explicitly tested the case with OpenFOAM and FEniCS. For OpenFOAM and deal.II, OpenFOAM and CalculiX respectively, I only ran `runFluid` and with the changes mentioned above OpenFOAM does not exit with an error, but executes until coupling is initialized and it is waiting for the second participant.
### What I already did (as soon as #38 is merged)
I created a branch https://github.com/precice/tutorials/tree/OpenFOAM6 with the modified files. Here, I am following the convention established in https://github.com/precice/openfoam-adapter.
* The FEniCS test cases are working (until the very end) without changes on my machine with OF6.
* The other cases (CalculiX, deal.II) are working without changes, but I only validated until the coupling is initialized and OpenFOAM is waiting for the Solid solver.
### What should we do now?
The fix I am proposing above seems to work. However, there are some more things we should do in order to improve the compatibility of the tutorials with different version of OpenFOAM (**bold** for important ones and *italics* for less important):
- [ ] **Check whether the proposed fix worsens the performance** Especially the first few iterations are very expensive. This might be a property of the test case or the performance has been worsened by removing the `residualControl`. I am not an OpenFOAM expert and I did not compare with the performance using a different OpenFOAM version that accepts `residualControl`.
- [ ] **Test tutorials and OF-adapter for different versions of OF.** I think we should open this issue in https://github.com/precice/systemtests. Currently, we only test with OF4 (see [here](https://github.com/precice/systemtests/blob/master/Dockerfile.openfoam-adapter#L18)).
- [ ] **Test cylinderFlap.** Again, this is an issue for https://github.com/precice/systemtests. Currently, we only test `flap_perp`. Here, `residualControl` is not provided in `fvSolution` (see [here](https://github.com/precice/tutorials/blob/master/FSI/flap_perp/OpenFOAM-deal.II/Fluid/system/fvSolution)). Therefore, I also did not observe any problems when running `flap_perp` under OF6 and I did not expect any problems to show up running `cylinderFlap`.
- [ ] *follow up:* there are quite some differences in the tolerances provided in `fvSolution` of the both cases. Why?
- [ ] *automatically choose `pimpleFoam` or `pimpleDyMFoam`* In `Fluid/system/controlDict` one has to manually choose the fitting solver. This straightforward, but still has to be done manually.
|
True
|
Some FSI tutorials are not working with OpenFOAM6 - ### What's the problem?
I am using OpenFOAM6 on my machine and the tutorial cases `tutorials/FSI/cylinderFlap/*` do not work. I explicitly tested the case `tutorials/FSI/cylinderFlap/OpenFOAM-FEniCS` while checking #38. For the other cases I tried executing `runFluid` and it always resulted in an error thrown by OpenFOAM (OF), even before the coupling was initialized.
### My proposed workaround
Commenting out some lines in `Fluid/system/fvSolution` does the trick and it works:
```
PIMPLE
{
nCorrectors 2;
nNonOrthogonalCorrectors 0;
// tolerance 1.0e-14;
// relTol 5e-3;
// pisoTol 1e-6;
consistent true;
nOuterCorrectors 50;
// residualControl
// {
// U
// {
// tolerance 5e-5;
// relTol 0;
// }
// p
// {
// tolerance 5e-4;
// relTol 0;
// }
// }
}
```
Note again: I only explicitly tested the case with OpenFOAM and FEniCS. For OpenFOAM and deal.II, OpenFOAM and CalculiX respectively, I only ran `runFluid` and with the changes mentioned above OpenFOAM does not exit with an error, but executes until coupling is initialized and it is waiting for the second participant.
### What I already did (as soon as #38 is merged)
I created a branch https://github.com/precice/tutorials/tree/OpenFOAM6 with the modified files. Here, I am following the convention established in https://github.com/precice/openfoam-adapter.
* The FEniCS test cases are working (until the very end) without changes on my machine with OF6.
* The other cases (CalculiX, deal.II) are working without changes, but I only validated until the coupling is initialized and OpenFOAM is waiting for the Solid solver.
### What should we do now?
The fix I am proposing above seems to work. However, there are some more things we should do in order to improve the compatibility of the tutorials with different version of OpenFOAM (**bold** for important ones and *italics* for less important):
- [ ] **Check whether the proposed fix worsens the performance** Especially the first few iterations are very expensive. This might be a property of the test case or the performance has been worsened by removing the `residualControl`. I am not an OpenFOAM expert and I did not compare with the performance using a different OpenFOAM version that accepts `residualControl`.
- [ ] **Test tutorials and OF-adapter for different versions of OF.** I think we should open this issue in https://github.com/precice/systemtests. Currently, we only test with OF4 (see [here](https://github.com/precice/systemtests/blob/master/Dockerfile.openfoam-adapter#L18)).
- [ ] **Test cylinderFlap.** Again, this is an issue for https://github.com/precice/systemtests. Currently, we only test `flap_perp`. Here, `residualControl` is not provided in `fvSolution` (see [here](https://github.com/precice/tutorials/blob/master/FSI/flap_perp/OpenFOAM-deal.II/Fluid/system/fvSolution)). Therefore, I also did not observe any problems when running `flap_perp` under OF6 and I did not expect any problems to show up running `cylinderFlap`.
- [ ] *follow up:* there are quite some differences in the tolerances provided in `fvSolution` of the both cases. Why?
- [ ] *automatically choose `pimpleFoam` or `pimpleDyMFoam`* In `Fluid/system/controlDict` one has to manually choose the fitting solver. This straightforward, but still has to be done manually.
|
main
|
some fsi tutorials are not working with what s the problem i am using on my machine and the tutorial cases tutorials fsi cylinderflap do not work i explicitly tested the case tutorials fsi cylinderflap openfoam fenics while checking for the other cases i tried executing runfluid and it always resulted in an error thrown by openfoam of even before the coupling was initialized my proposed workaround commenting out some lines in fluid system fvsolution does the trick and it works pimple ncorrectors nnonorthogonalcorrectors tolerance reltol pisotol consistent true noutercorrectors residualcontrol u tolerance reltol p tolerance reltol note again i only explicitly tested the case with openfoam and fenics for openfoam and deal ii openfoam and calculix respectively i only ran runfluid and with the changes mentioned above openfoam does not exit with an error but executes until coupling is initialized and it is waiting for the second participant what i already did as soon as is merged i created a branch with the modified files here i am following the convention established in the fenics test cases are working until the very end without changes on my machine with the other cases calculix deal ii are working without changes but i only validated until the coupling is initialized and openfoam is waiting for the solid solver what should we do now the fix i am proposing above seems to work however there are some more things we should do in order to improve the compatibility of the tutorials with different version of openfoam bold for important ones and italics for less important check whether the proposed fix worsens the performance especially the first few iterations are very expensive this might be a property of the test case or the performance has been worsened by removing the residualcontrol i am not an openfoam expert and i did not compare with the performance using a different openfoam version that accepts residualcontrol test tutorials and of adapter for different versions of of i think we should open this issue in currently we only test with see test cylinderflap again this is an issue for currently we only test flap perp here residualcontrol is not provided in fvsolution see therefore i also did not observe any problems when running flap perp under and i did not expect any problems to show up running cylinderflap follow up there are quite some differences in the tolerances provided in fvsolution of the both cases why automatically choose pimplefoam or pimpledymfoam in fluid system controldict one has to manually choose the fitting solver this straightforward but still has to be done manually
| 1
|
4,133
| 19,601,892,769
|
IssuesEvent
|
2022-01-06 02:56:39
|
BioArchLinux/Packages
|
https://api.github.com/repos/BioArchLinux/Packages
|
reopened
|
[MAINTAIN] Phyx
|
maintain
|
<!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
building pxstrec
g++ -o "pxstrec" -O3 -std=c++14 -fopenmp -Wall -DOMP -ffast-math -ftree-vectorize main_strec.o ./utils.o ./citations.o ./log.o ./superdouble.o ./timer.o ./sequence.o ./seq_reader.o ./seq_utils.o ./seq_models.o ./pairwise_alignment.o ./node.o ./tree.o ./tree_reader.o ./tree_utils.o ./rate_model.o ./state_reconstructor.o ./optimize_state_reconstructor_nlopt.o ./optimize_state_reconstructor_periods_nlopt.o ./branch_segment.o ./cont_models.o ./seq_gen.o -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -llapack -lblas -lpthread -lm -lnlopt -larmadillo # -lgfortran
cat man/pxstrec.1.in > man/pxstrec.1
[1m[32m==>(B[m[1m Starting check()...(B[m
python3 run_tests.py
make: python3: No such file or directory
make: *** [Makefile:655: check] Error 127
[1m[31m==> ERROR:(B[m[1m A failure occurred in check().(B[m
[1m Aborting...(B[m
```
</details>
**Packages (please complete the following information):**
- Package Name: Phyx
**Description**
Add any other context about the problem here.
|
True
|
[MAINTAIN] Phyx - <!--
Please report the error of one package in one issue! Use multi issues to report multi bugs.
Thanks!
-->
**Log of the bug**
<details>
```
building pxstrec
g++ -o "pxstrec" -O3 -std=c++14 -fopenmp -Wall -DOMP -ffast-math -ftree-vectorize main_strec.o ./utils.o ./citations.o ./log.o ./superdouble.o ./timer.o ./sequence.o ./seq_reader.o ./seq_utils.o ./seq_models.o ./pairwise_alignment.o ./node.o ./tree.o ./tree_reader.o ./tree_utils.o ./rate_model.o ./state_reconstructor.o ./optimize_state_reconstructor_nlopt.o ./optimize_state_reconstructor_periods_nlopt.o ./branch_segment.o ./cont_models.o ./seq_gen.o -Wl,-O1,--sort-common,--as-needed,-z,relro,-z,now -llapack -lblas -lpthread -lm -lnlopt -larmadillo # -lgfortran
cat man/pxstrec.1.in > man/pxstrec.1
[1m[32m==>(B[m[1m Starting check()...(B[m
python3 run_tests.py
make: python3: No such file or directory
make: *** [Makefile:655: check] Error 127
[1m[31m==> ERROR:(B[m[1m A failure occurred in check().(B[m
[1m Aborting...(B[m
```
</details>
**Packages (please complete the following information):**
- Package Name: Phyx
**Description**
Add any other context about the problem here.
|
main
|
phyx please report the error of one package in one issue use multi issues to report multi bugs thanks log of the bug building pxstrec g o pxstrec std c fopenmp wall domp ffast math ftree vectorize main strec o utils o citations o log o superdouble o timer o sequence o seq reader o seq utils o seq models o pairwise alignment o node o tree o tree reader o tree utils o rate model o state reconstructor o optimize state reconstructor nlopt o optimize state reconstructor periods nlopt o branch segment o cont models o seq gen o wl sort common as needed z relro z now llapack lblas lpthread lm lnlopt larmadillo lgfortran cat man pxstrec in man pxstrec b m starting check b m run tests py make no such file or directory make error error b m a failure occurred in check b m aborting b m packages please complete the following information package name phyx description add any other context about the problem here
| 1
|
38,191
| 5,168,666,882
|
IssuesEvent
|
2017-01-17 22:12:45
|
mobdata/replication
|
https://api.github.com/repos/mobdata/replication
|
closed
|
Parent task: Integrate DSL rules into integration tests
|
component : DSL component : testing Parent task ready
|
The rules generated by our DSL parser need to be loaded into couchdb in order to run integration tests against the rules.
Subtasks: #113, #122, #123, #124, #125
|
1.0
|
Parent task: Integrate DSL rules into integration tests - The rules generated by our DSL parser need to be loaded into couchdb in order to run integration tests against the rules.
Subtasks: #113, #122, #123, #124, #125
|
non_main
|
parent task integrate dsl rules into integration tests the rules generated by our dsl parser need to be loaded into couchdb in order to run integration tests against the rules subtasks
| 0
|
106,397
| 13,283,102,049
|
IssuesEvent
|
2020-08-24 02:02:31
|
woowa-techcamp-2020/bmart-5
|
https://api.github.com/repos/woowa-techcamp-2020/bmart-5
|
closed
|
[style] Header, CategoryHeader, ProductCard(Like Icon & Text)의 크기가 작게보이는 문제
|
design style
|
## 이슈 요약
> 해당 이슈가 어떤 이슈인지에 대한 간략한 설명을 추가합니다.
전체적인 요소의 크기가 너무 작은 문제가 있습니다.
## 기타 (스크린샷 등)
> 이슈에 대한 이해를 돕기 위한 자료가 있다면 추가합니다.

|
1.0
|
[style] Header, CategoryHeader, ProductCard(Like Icon & Text)의 크기가 작게보이는 문제 - ## 이슈 요약
> 해당 이슈가 어떤 이슈인지에 대한 간략한 설명을 추가합니다.
전체적인 요소의 크기가 너무 작은 문제가 있습니다.
## 기타 (스크린샷 등)
> 이슈에 대한 이해를 돕기 위한 자료가 있다면 추가합니다.

|
non_main
|
header categoryheader productcard like icon text 의 크기가 작게보이는 문제 이슈 요약 해당 이슈가 어떤 이슈인지에 대한 간략한 설명을 추가합니다 전체적인 요소의 크기가 너무 작은 문제가 있습니다 기타 스크린샷 등 이슈에 대한 이해를 돕기 위한 자료가 있다면 추가합니다
| 0
|
45,670
| 5,950,360,416
|
IssuesEvent
|
2017-05-26 16:30:42
|
JoshuaBThompson/JEM
|
https://api.github.com/repos/JoshuaBThompson/JEM
|
opened
|
Design: User experience
|
design
|
## JEM-1 User experience concepts
### Setting up JEM for First Time
- User will make sure the JEM is powered on via USB or Battery and that power led is ON
- User will make sure the JEM and the Mobile device are paired over BLE by opening the JEM Control App and pairing
- User will verify that the JEM version, name and other hardware info is displayed upon pairing
- JEM-1 comes pre-loaded with example application to toggle LEDs, read sensors and read / write to GPIO
+ User will verify that the example JEM-1 App is loaded by confirming that there is a Application ID and name displayed
|
1.0
|
Design: User experience - ## JEM-1 User experience concepts
### Setting up JEM for First Time
- User will make sure the JEM is powered on via USB or Battery and that power led is ON
- User will make sure the JEM and the Mobile device are paired over BLE by opening the JEM Control App and pairing
- User will verify that the JEM version, name and other hardware info is displayed upon pairing
- JEM-1 comes pre-loaded with example application to toggle LEDs, read sensors and read / write to GPIO
+ User will verify that the example JEM-1 App is loaded by confirming that there is a Application ID and name displayed
|
non_main
|
design user experience jem user experience concepts setting up jem for first time user will make sure the jem is powered on via usb or battery and that power led is on user will make sure the jem and the mobile device are paired over ble by opening the jem control app and pairing user will verify that the jem version name and other hardware info is displayed upon pairing jem comes pre loaded with example application to toggle leds read sensors and read write to gpio user will verify that the example jem app is loaded by confirming that there is a application id and name displayed
| 0
|
784
| 4,387,511,460
|
IssuesEvent
|
2016-08-08 15:59:26
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
Synchronize: a problem with a non-standard ssh port
|
bug_report in progress waiting_on_maintainer
|
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
synchronize
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from “ansible --version” between quotes -->
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
[defaults]
vault_password_file=.vault_pass.txt
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
I have the following option in my /etc/ssh/ssh_config :
```
Port 2222
```
I want to sync some directory to a machine that has this address: 10.0.3.188:22.
I get an error
```
ssh: connect to host 10.0.3.188 port 2222: Connection refused
```
This [code](https://github.com/ansible/ansible-modules-core/blob/c52f475c64372042daab4ebc0660a2782b71d10d/files/synchronize.py#L414-L417) is responsible for this behavior. Why don't set port explicitly whatever its value is?
|
True
|
Synchronize: a problem with a non-standard ssh port - ##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the plugin/module/task -->
synchronize
##### ANSIBLE VERSION
```
<!--- Paste verbatim output from “ansible --version” between quotes -->
ansible 2.1.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
[defaults]
vault_password_file=.vault_pass.txt
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Ubuntu 16.04
##### SUMMARY
<!--- Explain the problem briefly -->
I have the following option in my /etc/ssh/ssh_config :
```
Port 2222
```
I want to sync some directory to a machine that has this address: 10.0.3.188:22.
I get an error
```
ssh: connect to host 10.0.3.188 port 2222: Connection refused
```
This [code](https://github.com/ansible/ansible-modules-core/blob/c52f475c64372042daab4ebc0660a2782b71d10d/files/synchronize.py#L414-L417) is responsible for this behavior. Why don't set port explicitly whatever its value is?
|
main
|
synchronize a problem with a non standard ssh port issue type bug report component name synchronize ansible version ansible config file etc ansible ansible cfg configured module search path default w o overrides configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables vault password file vault pass txt os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific ubuntu summary i have the following option in my etc ssh ssh config port i want to sync some directory to a machine that has this address i get an error ssh connect to host port connection refused this is responsible for this behavior why don t set port explicitly whatever its value is
| 1
|
1,840
| 6,577,374,169
|
IssuesEvent
|
2017-09-12 00:27:56
|
ansible/ansible-modules-core
|
https://api.github.com/repos/ansible/ansible-modules-core
|
closed
|
nxos_config issue with creating L2 vlans using transport cli
|
affects_2.2 bug_report networking P2 waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
module
- nxos_config
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 1861151fa4) last updated 2016/05/20 16:06:57 (GMT +000)
lib/ansible/modules/core: (detached HEAD d3097bf580) last updated 2016/05/20 16:12:00 (GMT +000)
lib/ansible/modules/extras: (detached HEAD ce5a9b6c5f) last updated 2016/05/20 16:12:01 (GMT +000)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Cisco NXOS - n3000-uk9.6.0.2.U5.2.bin
##### SUMMARY
<!--- Explain the problem briefly -->
When running nxos_config with transport cli L2 vlans are created, however, name is ignored.
Currently, I believe this is due to the need to create the parent, but when no name is supplied the vlan is not in the running-configuration.
```
show run | i vlan
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: Setup Bridging (Vlans)
nxos_config:
lines:
- 'name {{ item.name }}'
parents:
- 'vlan {{ item.id }}'
host: "{{ inventory_hostname }}"
username: "{{ cisco.nexus.username }}"
password: "{{ cisco.nexus.password }}"
transport: cli
use_ssl: yes
validate_certs: false
when: vlans is defined
with_items:
- "{{ vlans }}"
```
group_vars or host_vars
```
---
vlans:
- id: 10
name: Ansible
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
TASK [Setup Bridging (Vlans)] **************************************************
changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'})
```
On NXOS:
show run vlan
```
vlan 10
name Ansible
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
TASK [Setup Bridging (Vlans)] **************************************************
changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'})
```
On NXOS:
show run vlan
```
vlan 10
```
|
True
|
nxos_config issue with creating L2 vlans using transport cli - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
module
- nxos_config
##### ANSIBLE VERSION
```
ansible 2.2.0 (devel 1861151fa4) last updated 2016/05/20 16:06:57 (GMT +000)
lib/ansible/modules/core: (detached HEAD d3097bf580) last updated 2016/05/20 16:12:00 (GMT +000)
lib/ansible/modules/extras: (detached HEAD ce5a9b6c5f) last updated 2016/05/20 16:12:01 (GMT +000)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/']
```
##### CONFIGURATION
<!---
Mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say “N/A” for anything that is not platform-specific.
-->
Cisco NXOS - n3000-uk9.6.0.2.U5.2.bin
##### SUMMARY
<!--- Explain the problem briefly -->
When running nxos_config with transport cli L2 vlans are created, however, name is ignored.
Currently, I believe this is due to the need to create the parent, but when no name is supplied the vlan is not in the running-configuration.
```
show run | i vlan
```
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem.
For new features, show how the feature would be used.
-->
```
- name: Setup Bridging (Vlans)
nxos_config:
lines:
- 'name {{ item.name }}'
parents:
- 'vlan {{ item.id }}'
host: "{{ inventory_hostname }}"
username: "{{ cisco.nexus.username }}"
password: "{{ cisco.nexus.password }}"
transport: cli
use_ssl: yes
validate_certs: false
when: vlans is defined
with_items:
- "{{ vlans }}"
```
group_vars or host_vars
```
---
vlans:
- id: 10
name: Ansible
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
TASK [Setup Bridging (Vlans)] **************************************************
changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'})
```
On NXOS:
show run vlan
```
vlan 10
name Ansible
```
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with high verbosity (-vvvv) -->
```
TASK [Setup Bridging (Vlans)] **************************************************
changed: [nxos1] => (item={u'id': 10, u'name': u'Ansible'})
```
On NXOS:
show run vlan
```
vlan 10
```
|
main
|
nxos config issue with creating vlans using transport cli issue type bug report component name module nxos config ansible version ansible devel last updated gmt lib ansible modules core detached head last updated gmt lib ansible modules extras detached head last updated gmt config file etc ansible ansible cfg configured module search path configuration mention any settings you have changed added removed in ansible cfg or using the ansible environment variables os environment mention the os you are running ansible from and the os you are managing or say “n a” for anything that is not platform specific cisco nxos bin summary when running nxos config with transport cli vlans are created however name is ignored currently i believe this is due to the need to create the parent but when no name is supplied the vlan is not in the running configuration show run i vlan steps to reproduce for bugs show exactly how to reproduce the problem for new features show how the feature would be used name setup bridging vlans nxos config lines name item name parents vlan item id host inventory hostname username cisco nexus username password cisco nexus password transport cli use ssl yes validate certs false when vlans is defined with items vlans group vars or host vars vlans id name ansible expected results task changed item u id u name u ansible on nxos show run vlan vlan name ansible actual results task changed item u id u name u ansible on nxos show run vlan vlan
| 1
|
216,415
| 16,761,004,706
|
IssuesEvent
|
2021-06-13 19:38:23
|
cseelhoff/RimThreaded
|
https://api.github.com/repos/cseelhoff/RimThreaded
|
closed
|
"Standalone Hot Spring [BAL]" Crash to Desktop with Mod Standalone Hot Springs
|
1.3.1 - 1.3.2 Accepted For Testing Bug Confirmed Game Breaking Mod Incompatibility Reproducible
|
**BUG DESCRIPTION**
A crash happens when two colonists try to use the hot spring at the same time, or when one gets out.
**To Reproduce**
Steps to reproduce the behavior:
1. Open up provided save
2. Run Game
3. Observe crash, or force crash by manually asking two or more colonists to bathe in the spring.
**Player.Log**
[Player.log](https://github.com/cseelhoff/RimThreaded/files/6022358/Player.log)
**Crash.log**
[error.log](https://github.com/cseelhoff/RimThreaded/files/6022363/error.log)
**Mod List**
Harmony
Core
Royalty
HugsLib
Standalone Hot Springs [BAL] https://steamcommunity.com/sharedfiles/filedetails/?id=2205980094&searchtext=Standalone+Hot+Springs
RimThreaded[1.3.1.3]
**Save File**
https://drive.google.com/file/d/1PP6UucjamLgzrIQgwna0N9jsim4gAz3-/view?usp=sharing
|
1.0
|
"Standalone Hot Spring [BAL]" Crash to Desktop with Mod Standalone Hot Springs - **BUG DESCRIPTION**
A crash happens when two colonists try to use the hot spring at the same time, or when one gets out.
**To Reproduce**
Steps to reproduce the behavior:
1. Open up provided save
2. Run Game
3. Observe crash, or force crash by manually asking two or more colonists to bathe in the spring.
**Player.Log**
[Player.log](https://github.com/cseelhoff/RimThreaded/files/6022358/Player.log)
**Crash.log**
[error.log](https://github.com/cseelhoff/RimThreaded/files/6022363/error.log)
**Mod List**
Harmony
Core
Royalty
HugsLib
Standalone Hot Springs [BAL] https://steamcommunity.com/sharedfiles/filedetails/?id=2205980094&searchtext=Standalone+Hot+Springs
RimThreaded[1.3.1.3]
**Save File**
https://drive.google.com/file/d/1PP6UucjamLgzrIQgwna0N9jsim4gAz3-/view?usp=sharing
|
non_main
|
standalone hot spring crash to desktop with mod standalone hot springs bug description a crash happens when two colonists try to use the hot spring at the same time or when one gets out to reproduce steps to reproduce the behavior open up provided save run game observe crash or force crash by manually asking two or more colonists to bathe in the spring player log crash log mod list harmony core royalty hugslib standalone hot springs rimthreaded save file
| 0
|
139,131
| 11,252,455,305
|
IssuesEvent
|
2020-01-11 08:48:06
|
one-lightning/TestingUp
|
https://api.github.com/repos/one-lightning/TestingUp
|
closed
|
[TestRun] Verify if text information is displayed [T204]
|
field1 testcase testing testrun testsuites
|
**Type**
Functional
**Priority**
Medium
**References**
SKIL-57
**Automation Type**
To_analize
**Preconditions**
_Enabled order info_
**1.** Company
**2.** Order Number
**3.** Payment Summary
**Steps:**
**1.** Navigate to web site
**2.** Open Payment Section
**3.** Observe the information
**Expected Result:**
Static labels can not be changed

|
4.0
|
[TestRun] Verify if text information is displayed [T204] - **Type**
Functional
**Priority**
Medium
**References**
SKIL-57
**Automation Type**
To_analize
**Preconditions**
_Enabled order info_
**1.** Company
**2.** Order Number
**3.** Payment Summary
**Steps:**
**1.** Navigate to web site
**2.** Open Payment Section
**3.** Observe the information
**Expected Result:**
Static labels can not be changed

|
non_main
|
verify if text information is displayed type functional priority medium references skil automation type to analize preconditions enabled order info company order number payment summary steps navigate to web site open payment section observe the information expected result static labels can not be changed
| 0
|
5,665
| 29,477,401,678
|
IssuesEvent
|
2023-06-02 00:17:29
|
ipfs/ipfs-companion
|
https://api.github.com/repos/ipfs/ipfs-companion
|
closed
|
IPFS Companion cannot handle a ? in the title of a document.
|
kind/bug status/blocked/upstream-bug need/analysis need/maintainer-input
|
**Describe the bug**
* Files such as e.g. https://mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf result in [the following erroneous HTTP session](https://github.com/ipfs/ipfs-companion/files/11497121/mindwar.net.ipns.localhost.har.zip):
```yaml
301 GET https://mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf
Request headers
Response headers (status = 301)
Accept-Ranges: bytes
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 180941
Content-Type: application/pdf
Date: Wed, 17 May 2023 10:18:16 GMT
Etag: "QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3"
Last-Modified: Wed, 17 May 2023 10:18:16 GMT
Location: http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
X-DNS-Prefetch-Control: off
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon â Can you spot the difference?.pdf
X-Ipfs-Roots: QmV7UhgGn2qULg9Wwz76if557XrwDbrSkxQGmhtya77a7R,QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
404 GET http://localhost:8080/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf
Request headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q
=0.7
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,es-419;q=0.8,es;q=0.7
Connection: keep-alive
DNT: 1
Host: mindwar.net.ipns.localhost:8080
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
sec-ch-ua: "Not:A-Brand";v="99", "Chromium";v="112"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Response headers (status = 404)
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 221
Content-Type: text/plain; charset=utf-8
Date: Wed, 17 May 2023 10:38:01 GMT
X-Content-Type-Options: nosniff
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon â Can you spot the difference
404 GET http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
Request headers
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
sec-ch-ua: "Not:A-Brand";v="99", "Chromium";v="112"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Response headers (status = 404)
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 221
Content-Type: text/plain; charset=utf-8
Date: Wed, 17 May 2023 10:38:01 GMT
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon — Can you spot the difference
```
IPFS Companion incorrectly converts `%3F` to `?`.
**To Reproduce**
Steps to reproduce the behavior:
1. Self evident.
**Expected behavior**
Self-evident.
kubo itself is fully capable of resolving `/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf` to QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3.
```
$ ipfs get --progress=0 "$(ipfs resolve /ipns/mindwar.net/'Jim Stewartson v. QAnon — Can you spot the difference?.pdf')"
Saving file(s) to QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
```
But perhaps is the cause of the bug because note its Location return here:
```bash
$ curl --head 'localhost:8080/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf'
```
```http
HTTP/1.1 301 Moved Permanently
Accept-Ranges: bytes
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 180941
Content-Type: application/pdf
Etag: "QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3"
Last-Modified: Wed, 17 May 2023 11:00:40 GMT
Location: http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon — Can you spot the difference?.pdf
X-Ipfs-Roots: QmV7UhgGn2qULg9Wwz76if557XrwDbrSkxQGmhtya77a7R,QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
Date: Wed, 17 May 2023 11:00:40 GMT
```
I'll report up there too.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux debutanuki 6.3.2-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:19 +0000 x86_64 GNU/Linux
- Browser: ungoogled-chromium-bin 112.0.5615.165-1
- Version: 2.22.1
**Smartphone (please complete the following information):**
N/A.
|
True
|
IPFS Companion cannot handle a ? in the title of a document. - **Describe the bug**
* Files such as e.g. https://mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf result in [the following erroneous HTTP session](https://github.com/ipfs/ipfs-companion/files/11497121/mindwar.net.ipns.localhost.har.zip):
```yaml
301 GET https://mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf
Request headers
Response headers (status = 301)
Accept-Ranges: bytes
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 180941
Content-Type: application/pdf
Date: Wed, 17 May 2023 10:18:16 GMT
Etag: "QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3"
Last-Modified: Wed, 17 May 2023 10:18:16 GMT
Location: http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
X-DNS-Prefetch-Control: off
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon â Can you spot the difference?.pdf
X-Ipfs-Roots: QmV7UhgGn2qULg9Wwz76if557XrwDbrSkxQGmhtya77a7R,QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
404 GET http://localhost:8080/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf
Request headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q
=0.7
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9,es-419;q=0.8,es;q=0.7
Connection: keep-alive
DNT: 1
Host: mindwar.net.ipns.localhost:8080
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
sec-ch-ua: "Not:A-Brand";v="99", "Chromium";v="112"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Response headers (status = 404)
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 221
Content-Type: text/plain; charset=utf-8
Date: Wed, 17 May 2023 10:38:01 GMT
X-Content-Type-Options: nosniff
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon â Can you spot the difference
404 GET http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
Request headers
DNT: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36
sec-ch-ua: "Not:A-Brand";v="99", "Chromium";v="112"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Linux"
Response headers (status = 404)
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 221
Content-Type: text/plain; charset=utf-8
Date: Wed, 17 May 2023 10:38:01 GMT
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon — Can you spot the difference
```
IPFS Companion incorrectly converts `%3F` to `?`.
**To Reproduce**
Steps to reproduce the behavior:
1. Self evident.
**Expected behavior**
Self-evident.
kubo itself is fully capable of resolving `/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf` to QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3.
```
$ ipfs get --progress=0 "$(ipfs resolve /ipns/mindwar.net/'Jim Stewartson v. QAnon — Can you spot the difference?.pdf')"
Saving file(s) to QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
```
But perhaps is the cause of the bug because note its Location return here:
```bash
$ curl --head 'localhost:8080/ipns/mindwar.net/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference%3F.pdf'
```
```http
HTTP/1.1 301 Moved Permanently
Accept-Ranges: bytes
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Headers: Range
Access-Control-Allow-Headers: User-Agent
Access-Control-Allow-Headers: X-Requested-With
Access-Control-Allow-Methods: GET
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Length
Access-Control-Expose-Headers: Content-Range
Access-Control-Expose-Headers: X-Chunked-Output
Access-Control-Expose-Headers: X-Ipfs-Path
Access-Control-Expose-Headers: X-Ipfs-Roots
Access-Control-Expose-Headers: X-Stream-Output
Content-Length: 180941
Content-Type: application/pdf
Etag: "QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3"
Last-Modified: Wed, 17 May 2023 11:00:40 GMT
Location: http://mindwar.net.ipns.localhost:8080/Jim%20Stewartson%20v.%20QAnon%20%E2%80%94%20Can%20you%20spot%20the%20difference?.pdf
X-Ipfs-Path: /ipns/mindwar.net/Jim Stewartson v. QAnon — Can you spot the difference?.pdf
X-Ipfs-Roots: QmV7UhgGn2qULg9Wwz76if557XrwDbrSkxQGmhtya77a7R,QmXZdegY4BBHdjMeCadhCbciziQTFynkApbgtDGbodhsz3
Date: Wed, 17 May 2023 11:00:40 GMT
```
I'll report up there too.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux debutanuki 6.3.2-zen1-1-zen #1 ZEN SMP PREEMPT_DYNAMIC Thu, 11 May 2023 16:40:19 +0000 x86_64 GNU/Linux
- Browser: ungoogled-chromium-bin 112.0.5615.165-1
- Version: 2.22.1
**Smartphone (please complete the following information):**
N/A.
|
main
|
ipfs companion cannot handle a in the title of a document describe the bug files such as e g result in yaml get request headers response headers status accept ranges bytes access control allow headers content type access control allow headers range access control allow headers user agent access control allow headers x requested with access control allow methods get access control allow origin access control expose headers content length access control expose headers content range access control expose headers x chunked output access control expose headers x ipfs path access control expose headers x ipfs roots access control expose headers x stream output content length content type application pdf date wed may gmt etag last modified wed may gmt location x dns prefetch control off x ipfs path ipns mindwar net jim stewartson v qanon â can you spot the difference pdf x ipfs roots get request headers accept text html application xhtml xml application xml q image avif image webp image apng q application signed exchange v q accept encoding gzip deflate br accept language en us en q es q es q connection keep alive dnt host mindwar net ipns localhost sec fetch dest document sec fetch mode navigate sec fetch site none sec fetch user upgrade insecure requests user agent mozilla linux applewebkit khtml like gecko chrome safari sec ch ua not a brand v chromium v sec ch ua mobile sec ch ua platform linux response headers status access control allow headers content type access control allow headers range access control allow headers user agent access control allow headers x requested with access control allow methods get access control allow origin access control expose headers content length access control expose headers content range access control expose headers x chunked output access control expose headers x ipfs path access control expose headers x ipfs roots access control expose headers x stream output content length content type text plain charset utf date wed may gmt x content type options nosniff x ipfs path ipns mindwar net jim stewartson v qanon â can you spot the difference get request headers dnt upgrade insecure requests user agent mozilla linux applewebkit khtml like gecko chrome safari sec ch ua not a brand v chromium v sec ch ua mobile sec ch ua platform linux response headers status access control allow headers content type access control allow headers range access control allow headers user agent access control allow headers x requested with access control allow methods get access control allow origin access control expose headers content length access control expose headers content range access control expose headers x chunked output access control expose headers x ipfs path access control expose headers x ipfs roots access control expose headers x stream output content length content type text plain charset utf date wed may gmt x content type options nosniff x dns prefetch control off x ipfs path ipns mindwar net jim stewartson v qanon — can you spot the difference ipfs companion incorrectly converts to to reproduce steps to reproduce the behavior self evident expected behavior self evident kubo itself is fully capable of resolving ipns mindwar net jim pdf to ipfs get progress ipfs resolve ipns mindwar net jim stewartson v qanon — can you spot the difference pdf saving file s to but perhaps is the cause of the bug because note its location return here bash curl head localhost ipns mindwar net jim pdf http http moved permanently accept ranges bytes access control allow headers content type access control allow headers range access control allow headers user agent access control allow headers x requested with access control allow methods get access control allow origin access control expose headers content length access control expose headers content range access control expose headers x chunked output access control expose headers x ipfs path access control expose headers x ipfs roots access control expose headers x stream output content length content type application pdf etag last modified wed may gmt location x ipfs path ipns mindwar net jim stewartson v qanon — can you spot the difference pdf x ipfs roots date wed may gmt i ll report up there too screenshots desktop please complete the following information os linux debutanuki zen zen smp preempt dynamic thu may gnu linux browser ungoogled chromium bin version smartphone please complete the following information n a
| 1
|
4,699
| 24,256,567,629
|
IssuesEvent
|
2022-09-27 18:19:23
|
aws/aws-sam-cli
|
https://api.github.com/repos/aws/aws-sam-cli
|
closed
|
Locally Invoking Lambda function on Apple M1 - Strange Behaviour
|
blocked/more-info-needed area/local/invoke stage/bug-repro maintainer/need-followup platform/mac/arm
|
### **Description:**
I am trying to locally invoke a Lambda function which uses a HuggingFace Summarization model (ref: https://huggingface.co/sshleifer/distilbart-cnn-6-6).
### Expected Result:
Once the Lambda is locally invoked, it should ideally execute the lambda code.
### Observed Result:
I am able to invoke `sam build`, however, when I locally invoke the Lambda function, I observe the following issue:

### Steps to Reproduce:
1. Create a YAML template file which declares the Lambda PackageType as an Image.
2. Create a DockerFile similar to:

3. Locally invoke the Lambda function.
### Environment Details:
- OS: MacOS - Big Sur (v11.4)
- Chip: Apple M1
- AWS SAM CLI Version: 1.33.0
- AWS Region: eu-west-2
I have tried many fixes such as clean installing Docker (https://docs.docker.com/desktop/mac/apple-silicon/) and AWS SAM CLI (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-mac.html), using previous versions of both, pruning images and so on.
Is there anything I am overlooking/missing?
|
True
|
Locally Invoking Lambda function on Apple M1 - Strange Behaviour - ### **Description:**
I am trying to locally invoke a Lambda function which uses a HuggingFace Summarization model (ref: https://huggingface.co/sshleifer/distilbart-cnn-6-6).
### Expected Result:
Once the Lambda is locally invoked, it should ideally execute the lambda code.
### Observed Result:
I am able to invoke `sam build`, however, when I locally invoke the Lambda function, I observe the following issue:

### Steps to Reproduce:
1. Create a YAML template file which declares the Lambda PackageType as an Image.
2. Create a DockerFile similar to:

3. Locally invoke the Lambda function.
### Environment Details:
- OS: MacOS - Big Sur (v11.4)
- Chip: Apple M1
- AWS SAM CLI Version: 1.33.0
- AWS Region: eu-west-2
I have tried many fixes such as clean installing Docker (https://docs.docker.com/desktop/mac/apple-silicon/) and AWS SAM CLI (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-mac.html), using previous versions of both, pruning images and so on.
Is there anything I am overlooking/missing?
|
main
|
locally invoking lambda function on apple strange behaviour description i am trying to locally invoke a lambda function which uses a huggingface summarization model ref expected result once the lambda is locally invoked it should ideally execute the lambda code observed result i am able to invoke sam build however when i locally invoke the lambda function i observe the following issue steps to reproduce create a yaml template file which declares the lambda packagetype as an image create a dockerfile similar to locally invoke the lambda function environment details os macos big sur chip apple aws sam cli version aws region eu west i have tried many fixes such as clean installing docker and aws sam cli using previous versions of both pruning images and so on is there anything i am overlooking missing
| 1
|
2,189
| 7,733,227,487
|
IssuesEvent
|
2018-05-26 08:54:24
|
exercism/python
|
https://api.github.com/repos/exercism/python
|
closed
|
Python 3.3.x EOL
|
discussion maintainer action required
|
[As of Sep. 29, 2017, Python 3.3.x has reached end of life.](https://www.python.org/dev/peps/pep-0398/#x-end-of-life) As 3.3.x occasionally trips up Travis-CI builds for various reasons, I advocate that we discontinue supporting Python 3.3 in this repository. However, I will not take action on this without input from the other maintainers.
@Dog @behrtam @N-Parsons @m-a-ge Thoughts?
Input from other contributors to the repository and other Exercism members is also welcome.
|
True
|
Python 3.3.x EOL - [As of Sep. 29, 2017, Python 3.3.x has reached end of life.](https://www.python.org/dev/peps/pep-0398/#x-end-of-life) As 3.3.x occasionally trips up Travis-CI builds for various reasons, I advocate that we discontinue supporting Python 3.3 in this repository. However, I will not take action on this without input from the other maintainers.
@Dog @behrtam @N-Parsons @m-a-ge Thoughts?
Input from other contributors to the repository and other Exercism members is also welcome.
|
main
|
python x eol as x occasionally trips up travis ci builds for various reasons i advocate that we discontinue supporting python in this repository however i will not take action on this without input from the other maintainers dog behrtam n parsons m a ge thoughts input from other contributors to the repository and other exercism members is also welcome
| 1
|
5,620
| 28,117,776,843
|
IssuesEvent
|
2023-03-31 12:08:47
|
google/wasefire
|
https://api.github.com/repos/google/wasefire
|
closed
|
Add a test that all Fn::link in api-desc are unique
|
good first issue needs:implementation for:maintainability crate:api
|
Somehow this doesn't get caught and end up compiling an invalid wasm module at the end.
|
True
|
Add a test that all Fn::link in api-desc are unique - Somehow this doesn't get caught and end up compiling an invalid wasm module at the end.
|
main
|
add a test that all fn link in api desc are unique somehow this doesn t get caught and end up compiling an invalid wasm module at the end
| 1
|
3,030
| 11,209,934,700
|
IssuesEvent
|
2020-01-06 11:49:57
|
Kristinita/Erics-Green-Room
|
https://api.github.com/repos/Kristinita/Erics-Green-Room
|
opened
|
feat(justice): ЭЛО
|
multiplayer need-maintainer tournaments
|
### 1. Запрос
В Комнатах Эрика равно как и любых Вс_Поллардах неплохо было бы иметь справедливую систему оценки силы игроков. Хорошим решением мне видится [**рейтинг ЭЛО**](https://chess-land.com/modules.php?name=Encyclopedia&op=list_content&eid=3), используемый в соревнованиях по шахматам, шашкам, го, сёги, [**с недавнего времени футболу**](https://www.eurosport.ru/football/world-cup/2018/story_sto6851126.shtml) и другим видам спорта.
Не отрицаю, что применительно к Вс_Поллардам можно придумать более совершенную систему рейтинга или же модифицировать ЭЛО; это обсуждаемо. Однако считаю, что ЭЛО — намного лучше чем ничего, как сейчас.
[**Здесь**](http://sportfiction.ru/books/reyting-v-sporte-vchera-segodnya-zavtra/?bookpart=190096) можно подробно почитать о существующих разновидностях рейтингов, их достоинствах и недостатках, а также деталях.
### 2. Детали
Следующие рейтинги, по моему мнению, не должны смешиваться:
1. В блицах/дуэлях (этот тип соревнования по-разному называется; мне оба термина представляются плохими) регулярного чемпионата;
1. Турнирный.
Приведу пример из собственной практики (все приведённые в разделе данные можете проверить в статистике Филимании, как это сделал я, когда писал данный раздел). В 2015 мне удалось победить в 8 турнирах на новых вопросах. В то же время по общей сумме очков в том году только однажды, весной, вошёл в топ-50, заняв 36-е место; летом, осенью и весной пролетел мимо топ-50, что не мешало регулярно побеждать в турнирах.
В то же время, игроки, входившие в тройку лидеров сезона совсем не проявляли себя на турнирах либо же не заходили туда.
### 3. Аргументация
Кому будет выгодно внедрение рейтинга.
#### 3.1. Джинджерины в целом
1. Наконец, появится более или менее объективная система оценки игроков.
1. (Можете считать, что ИМХО) Перенимая прогрессивные явления из других интеллектуальных игр, джинджерины сделают шаг на пути из игрушки пресыщенных интеллигентов к полноценному виду спорта.
#### 3.2. Владельцы ресурсов
Для владельцев важность представляет количество игроков и их вовлечённость в игру. Введение рейтингов в сочетании с другими факторами (например, разнообразными поощрениями игроков, статистикой) поспособствуют повышению данных показателей. Вот что научные данные говорят:
+ [**Stefan Stieglitz, «Gamification. Using Game Elements in Serious Contexts»**](https://b-ok.cc/book/2868328/47bacc):
> Game Mechanics describe the particular components of the game, at the level of data representation and algorithms (Hunicke et al. 2004). Game mechanics may strongly influence the user's motivation and engagement. Despite being interrelated, it is important to mention that game mechanics differ from game rules. The latter determine the endorsed behaviours that are pursued when implementing the corresponding mechanics. For example, implementing game levels (see below) is a game mechanic that basically allows users to level-up (e.g., upgrading the character's status) and/or level-down (e.g., downgrading Elo-rating when losing in a chess game) within a system. The behaviours/actions that cause the users to level-up or down are defined in the game rules.
Подробности можно узнать в этой или других книгах о геймификации.
---
Могу здесь аргументировать не только книгами, но и личным опытом — 3-мя годами, проведёнными в КонКВИЗтадоре, где как раз была куча рейтингов. Игроков этим удалось завлечь; они постоянно заводили разговоры об ОСИ, ОСО (понятия из тамошней системы рейтингов; подробнее см. в разделе 6.5). Причём интересно, что статистикой интересовались не только самые сильные игроки, но и весьма средние; и они стремились повышать свои показатели, а не по-страусиному скрывать их.
Как результат, игроков в лучшие годы там было в разы больше, чем в любом из Вс_Поллардов. Вот сколько, к примеру, было зарегистрировано на мой турнир:

#### 3.3. Сильные игроки
1. Игрокам, стремящимся играть лучше, будет удобнее оценивать свой прогресс/регресс, насколько быстро идёт повышение уровня.
#### 3.4. Слабые игроки
(Прошу прощения, если кому-то определения «слабый игрок», «мешок» покажутся неполиткорректным)
1. Есть такая проблема. Сейчас во всех Вс_Поллардах сильные и слабые игроки заходят в один и тот же блиц (дуэль). Слабые обычно проигрывают, и некоторые со временем вообще перестают заходить — а какой смысл бороться, если, скорее всего, проиграешь. Всё равно что идти на убой. Последующая реакция слабого игрока может напоминать [**формы психологической защиты**](https://psyfactor.org/lib/zelinski2-06.htm). С введением же рейтинга, по моему мнению, в блицы к сильным будут заходить намного чаще, и вот почему:
1. В турниры и особенно блицы можно будет заходить, решая локальные задачи. Если сразу всех монстров вынести не получается; можно поставить цель сначала обойти в рейтинге тех, кто тебе по силам на данный момент, а уже потом, спустя какое-то время браться за соперников посерьёзнее.
1. Поражения намного превосходящим в классе соперникам никак не скажутся на рейтинге. Аналогия: Каспаров может сколько угодно обыгрывать меня на официальных шахматных турнирах, но никаких прибавок в рейтинге это ему не принесёт; чтобы повысить свой ЭЛО, ему нужно играть с гораздо более сильными оппонентами. Верно и обратное: я никак не рискую своим рейтингом, играя с ним.
1. В то же время, если слабый игрок победит заведомо более сильного; рейтинг слабого значительно возрастёт. Фактически, слабый практически ничем не рискует в случае поражения, а при победе же срывает большой куш.
### 4. Техническая реализация
Положим, в блиц сыграли четверо. A — выиграл, B и C набрали поровну очков, D — меньше всех. Конкретное число очков, скорость ответов и прочие показатели при подсчёте рейтинга не учитываются: важно только то, как ты сыграл относительно других. Набрал больше очков, чем другой участник — это считается, что ты победил его; набрали поровну — вы «сыграли вничью». Таблицу теперь можно представить следующим образом:

Теперь берём какую-нибудь программу, подсчитывающую ЭЛО. Для Node.js это [**elo-rating**](https://www.npmjs.com/package/elo-rating) или [**альтернативы**](https://www.npmjs.com/package/elo-rating#similar-modules). Она будет считать, как в шахматах, сколько прибавлять/отнимать очков рейтинга за победу над определённым соперником. По окончании турнира/блица программа сложит/вычтет изменения рейтинга за поражения/ничьи/победы → у игрока появится новый ЭЛО.
### 5. Возможные проблемы
#### 5.1. Клоноводство
Вероятно, при введении рейтинга можно будет столкнуться с проблемой, когда игроки будут создавать новых клонов (виртуалов), портя рейтинг другим. Что может помочь в её разрешении:
1. Начисление рейтинга новому участнику не сразу, а после некоторого числа блицев/турниров. Клоноводу нужно будет прокачать свой новый никнейм; прежде чем этот никнейм будет иметь рейтинг. Прокачка каждого нового клона потребует времени, и охота создавать их ещё может отбиться.
1. Ну а лучшее, на мой взгляд, решение — запретить одному человеку играть на рейтинг множеством ников под угрозой бана. В случае подозрений, что какой-то игрок является виртуалом, другие игроки могут инициировать проверку на совпадение по IP с уже имеющимися участниками (наподобие [**ВП:ЧЮ**](https://ru.wikipedia.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D0%BF%D0%B5%D0%B4%D0%B8%D1%8F:%D0%9F%D1%80%D0%BE%D0%B2%D0%B5%D1%80%D0%BA%D0%B0_%D1%83%D1%87%D0%B0%D1%81%D1%82%D0%BD%D0%B8%D0%BA%D0%BE%D0%B2) в Википедии).
#### 5.2. Набивка рейтинга
Возможна такая проблема, которая присутствовала в КонКВИЗтадоре. Игрок может играть только с определёнными соперниками, чтобы набить себе рейтинг. 2 разновидности:
1. Игры только со слабыми оппонентами. Рейтинг хоть и слабенько, но повышается.
1. Игры с определёнными соперниками. С тобой в игру заходят друзья/знакомые с неплохим рейтингом, согласные слить бой.
Во Вс_Поллардах не думаю, что проблема будет иметь столь же масштабный характер, как в КонКВИЗтадоре (разве что по ночам, когда нет, может, кому-то захочется набивать стату), по следующим причинам:
1. Что происходит в чате, видят все присутствующие. В КонКВИЗтадоре же конкретную игру не видел никто, кроме трёх её участников.
1. Отсутствие ограничений на вход в турниры/блицы. Заходить может любой, а не только «нужные» игроки.
Однако в качестве дополнительной меры, на мой взгляд, неплохо было бы автоматически записывать в статистику результаты каждого блица/турнира с составом участников. Если появятся подозрения насчёт набивки, можно будет осуществить проверку игр участника и по её результатам решать, применять или нет к нему какие-то санкции.
#### 5.3. Читерство
Это проблема, которая имеет место быть и без всяких рейтингов, а не какая-то новая, которая появится при введении ЭЛО. Потому не буду здесь подробно на ней останавливаться. Отмечу лишь, что, возможно, при введении рейтинга её масштабы увеличатся, так как появится больше целей (лучше сказать — квазипотребностей), ради которых можно будет прибегать к читерству.
### 6. Не рекомендуемые системы оценки силы игроков
#### 6.1. Общее количество набранных очков
Общее количество очков за сезон в регулярной викторине **вообще никак не говорит о силе игрока**. Этот показатель говорит только о том, что ты провёл за викториной очень много времени. Как [**сказал Стив Джобс**](https://ria.ru/20131014/968716365.html), работать нужно не 12 часов, а головой. К сожалению, владельцы ресурсов Вс_Поллардов ставят данный показатель «главным», на первое место, поощряя тех, кто играет больше, а не сильнее.
Опять же личный пример (не потому что я какой-то там великий игрок, а потому что своя статистика лучше помнится). Ни в одной викторине я никогда и близко не подходил к тройке лидеров сезона по общему числу набранных очков, однако постоянно был победителем/призёром турниров и (правда, с меньшим успехом) находился в лидерах по показателям Филимании, хоть как характеризующим силу игрока в регулярной викторине (эффективность, максимальное количество очков в час, победы в блицах над топами).
#### 6.2. Процент побед
Победы могут совершаться над игроками разной силы. Можно всё время играть с мешками, и набить 100% побед; но не факт, что такой игрок играет сильнее того, который соревнуется только с монстрами, проигрывая им.
С введением же ЭЛО, играющий со слабаками будет продвигаться вперёд черепашьими шагами. Победа же над сильным будет цениться гораздо больше.
#### 6.3. Процент отвеченных вопросов
Неидеальный вариант. Когда народу мало, и среди них сплошь игроки невысокого уровня; времени на ответ и подсказок становится больше → шансы на большой процент ответов и длинные цепочки высоки. Когда же среди твоих противников множество топовых игроков, процент упадёт.
Тем не менее, будь моя воля, я бы ввёл и этот показатель для дополнительной статистики. Можно было бы изучать, как он коррелирует с рейтингом.
#### 6.4. Очки/ответы в час (для регулярной викторины)
То есть, сила игроков определяется их лучшими результатами за час. Данный показатель используется в Филимании. Мне он видится неплохим; его наличие лучше, чем отсутствие; однако:
1. См. предыдущий подраздел; результат будет зависеть он количества/уровня соперников.
1. Зависимость от вопросов: на протяжении часа могут идти как удобные для себя вопросы, так и те, на которые ты не в состоянии отвечать. Да, и с ЭЛО в блицах/турнирах будут попадаться как удобные вопросы, так и нет. Но ЭЛО показывает средний результат, а не лучший. В среднем игроки могут играть одинаково, но одному из них может однажды повезти с вопросами за час, другому — нет.
1. Нужно весь час проводить на пределе возможностей. На полную я играю только в блицах/турнирах. В остальное время же выписываю понятия, на которые не удалось ответить, гуглю информацию о них, бывает, отвлекаюсь, разговариваю. Всего этого не сделаешь, если хочешь набрать высокий результат за час. Причём по-максимуму выкладываться потребуется всегда: ведь заранее не предугадаешь, когда тебе попадутся оптимальные вопросы и состав соперников.
#### 6.5. КонКВИЗтадор
Рейтинг (**ОСИ** — Очки Силы Ответа) там начислялся по формуле `ОСИ = (ПП+ОСО+ССП)/3`, где:
1. **ПП** — процент побед (во Вс_Поллардах можно посчитать его, используя метод, расписанный мной в разделе 4);
1. **ОСО** — очки силы ответа (во Вс_Поллардах можно использовать процент правильных ответов);
1. **ССП** — средняя сила (средние ОСИ) противников.
Чем мне не нравится данная система применительно ко Вс_Поллардам:
1. Не могу придумать, чем адекватно заменить ССП. Если использовать этот показатель так, как это было в КонКВИЗтадоре, то рейтинг игрока может упасть даже при победах над слабыми соперниками → может случиться так, что сильные будут избегать заходить играть со слабыми. ЭЛО же, если ты выиграл, никогда не упадёт, пусть твой соперник имеет худший рейтинг на планете (но ничего за победу не зачислится тоже).
1. В КонКВИЗтадоре приходилось видеть, как некоторые игроки подбирали себе противников в играх на рейтинг. С одной стороны не заходили рано утром, когда людей мало → можно было попасть на очень слабого и испортить свой ССП; с другой — опасались заходить, когда видели в чате монстров (чтобы не понизить ПП). Возможно, нечто подобное будет происходить и во Вс_Поллардах.
1. Зачем вообще такая сложная система, когда в ЭЛО каких-то критических недочётов не вижу.
Спасибо.
|
True
|
feat(justice): ЭЛО - ### 1. Запрос
В Комнатах Эрика равно как и любых Вс_Поллардах неплохо было бы иметь справедливую систему оценки силы игроков. Хорошим решением мне видится [**рейтинг ЭЛО**](https://chess-land.com/modules.php?name=Encyclopedia&op=list_content&eid=3), используемый в соревнованиях по шахматам, шашкам, го, сёги, [**с недавнего времени футболу**](https://www.eurosport.ru/football/world-cup/2018/story_sto6851126.shtml) и другим видам спорта.
Не отрицаю, что применительно к Вс_Поллардам можно придумать более совершенную систему рейтинга или же модифицировать ЭЛО; это обсуждаемо. Однако считаю, что ЭЛО — намного лучше чем ничего, как сейчас.
[**Здесь**](http://sportfiction.ru/books/reyting-v-sporte-vchera-segodnya-zavtra/?bookpart=190096) можно подробно почитать о существующих разновидностях рейтингов, их достоинствах и недостатках, а также деталях.
### 2. Детали
Следующие рейтинги, по моему мнению, не должны смешиваться:
1. В блицах/дуэлях (этот тип соревнования по-разному называется; мне оба термина представляются плохими) регулярного чемпионата;
1. Турнирный.
Приведу пример из собственной практики (все приведённые в разделе данные можете проверить в статистике Филимании, как это сделал я, когда писал данный раздел). В 2015 мне удалось победить в 8 турнирах на новых вопросах. В то же время по общей сумме очков в том году только однажды, весной, вошёл в топ-50, заняв 36-е место; летом, осенью и весной пролетел мимо топ-50, что не мешало регулярно побеждать в турнирах.
В то же время, игроки, входившие в тройку лидеров сезона совсем не проявляли себя на турнирах либо же не заходили туда.
### 3. Аргументация
Кому будет выгодно внедрение рейтинга.
#### 3.1. Джинджерины в целом
1. Наконец, появится более или менее объективная система оценки игроков.
1. (Можете считать, что ИМХО) Перенимая прогрессивные явления из других интеллектуальных игр, джинджерины сделают шаг на пути из игрушки пресыщенных интеллигентов к полноценному виду спорта.
#### 3.2. Владельцы ресурсов
Для владельцев важность представляет количество игроков и их вовлечённость в игру. Введение рейтингов в сочетании с другими факторами (например, разнообразными поощрениями игроков, статистикой) поспособствуют повышению данных показателей. Вот что научные данные говорят:
+ [**Stefan Stieglitz, «Gamification. Using Game Elements in Serious Contexts»**](https://b-ok.cc/book/2868328/47bacc):
> Game Mechanics describe the particular components of the game, at the level of data representation and algorithms (Hunicke et al. 2004). Game mechanics may strongly influence the user's motivation and engagement. Despite being interrelated, it is important to mention that game mechanics differ from game rules. The latter determine the endorsed behaviours that are pursued when implementing the corresponding mechanics. For example, implementing game levels (see below) is a game mechanic that basically allows users to level-up (e.g., upgrading the character's status) and/or level-down (e.g., downgrading Elo-rating when losing in a chess game) within a system. The behaviours/actions that cause the users to level-up or down are defined in the game rules.
Подробности можно узнать в этой или других книгах о геймификации.
---
Могу здесь аргументировать не только книгами, но и личным опытом — 3-мя годами, проведёнными в КонКВИЗтадоре, где как раз была куча рейтингов. Игроков этим удалось завлечь; они постоянно заводили разговоры об ОСИ, ОСО (понятия из тамошней системы рейтингов; подробнее см. в разделе 6.5). Причём интересно, что статистикой интересовались не только самые сильные игроки, но и весьма средние; и они стремились повышать свои показатели, а не по-страусиному скрывать их.
Как результат, игроков в лучшие годы там было в разы больше, чем в любом из Вс_Поллардов. Вот сколько, к примеру, было зарегистрировано на мой турнир:

#### 3.3. Сильные игроки
1. Игрокам, стремящимся играть лучше, будет удобнее оценивать свой прогресс/регресс, насколько быстро идёт повышение уровня.
#### 3.4. Слабые игроки
(Прошу прощения, если кому-то определения «слабый игрок», «мешок» покажутся неполиткорректным)
1. Есть такая проблема. Сейчас во всех Вс_Поллардах сильные и слабые игроки заходят в один и тот же блиц (дуэль). Слабые обычно проигрывают, и некоторые со временем вообще перестают заходить — а какой смысл бороться, если, скорее всего, проиграешь. Всё равно что идти на убой. Последующая реакция слабого игрока может напоминать [**формы психологической защиты**](https://psyfactor.org/lib/zelinski2-06.htm). С введением же рейтинга, по моему мнению, в блицы к сильным будут заходить намного чаще, и вот почему:
1. В турниры и особенно блицы можно будет заходить, решая локальные задачи. Если сразу всех монстров вынести не получается; можно поставить цель сначала обойти в рейтинге тех, кто тебе по силам на данный момент, а уже потом, спустя какое-то время браться за соперников посерьёзнее.
1. Поражения намного превосходящим в классе соперникам никак не скажутся на рейтинге. Аналогия: Каспаров может сколько угодно обыгрывать меня на официальных шахматных турнирах, но никаких прибавок в рейтинге это ему не принесёт; чтобы повысить свой ЭЛО, ему нужно играть с гораздо более сильными оппонентами. Верно и обратное: я никак не рискую своим рейтингом, играя с ним.
1. В то же время, если слабый игрок победит заведомо более сильного; рейтинг слабого значительно возрастёт. Фактически, слабый практически ничем не рискует в случае поражения, а при победе же срывает большой куш.
### 4. Техническая реализация
Положим, в блиц сыграли четверо. A — выиграл, B и C набрали поровну очков, D — меньше всех. Конкретное число очков, скорость ответов и прочие показатели при подсчёте рейтинга не учитываются: важно только то, как ты сыграл относительно других. Набрал больше очков, чем другой участник — это считается, что ты победил его; набрали поровну — вы «сыграли вничью». Таблицу теперь можно представить следующим образом:

Теперь берём какую-нибудь программу, подсчитывающую ЭЛО. Для Node.js это [**elo-rating**](https://www.npmjs.com/package/elo-rating) или [**альтернативы**](https://www.npmjs.com/package/elo-rating#similar-modules). Она будет считать, как в шахматах, сколько прибавлять/отнимать очков рейтинга за победу над определённым соперником. По окончании турнира/блица программа сложит/вычтет изменения рейтинга за поражения/ничьи/победы → у игрока появится новый ЭЛО.
### 5. Возможные проблемы
#### 5.1. Клоноводство
Вероятно, при введении рейтинга можно будет столкнуться с проблемой, когда игроки будут создавать новых клонов (виртуалов), портя рейтинг другим. Что может помочь в её разрешении:
1. Начисление рейтинга новому участнику не сразу, а после некоторого числа блицев/турниров. Клоноводу нужно будет прокачать свой новый никнейм; прежде чем этот никнейм будет иметь рейтинг. Прокачка каждого нового клона потребует времени, и охота создавать их ещё может отбиться.
1. Ну а лучшее, на мой взгляд, решение — запретить одному человеку играть на рейтинг множеством ников под угрозой бана. В случае подозрений, что какой-то игрок является виртуалом, другие игроки могут инициировать проверку на совпадение по IP с уже имеющимися участниками (наподобие [**ВП:ЧЮ**](https://ru.wikipedia.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D0%BF%D0%B5%D0%B4%D0%B8%D1%8F:%D0%9F%D1%80%D0%BE%D0%B2%D0%B5%D1%80%D0%BA%D0%B0_%D1%83%D1%87%D0%B0%D1%81%D1%82%D0%BD%D0%B8%D0%BA%D0%BE%D0%B2) в Википедии).
#### 5.2. Набивка рейтинга
Возможна такая проблема, которая присутствовала в КонКВИЗтадоре. Игрок может играть только с определёнными соперниками, чтобы набить себе рейтинг. 2 разновидности:
1. Игры только со слабыми оппонентами. Рейтинг хоть и слабенько, но повышается.
1. Игры с определёнными соперниками. С тобой в игру заходят друзья/знакомые с неплохим рейтингом, согласные слить бой.
Во Вс_Поллардах не думаю, что проблема будет иметь столь же масштабный характер, как в КонКВИЗтадоре (разве что по ночам, когда нет, может, кому-то захочется набивать стату), по следующим причинам:
1. Что происходит в чате, видят все присутствующие. В КонКВИЗтадоре же конкретную игру не видел никто, кроме трёх её участников.
1. Отсутствие ограничений на вход в турниры/блицы. Заходить может любой, а не только «нужные» игроки.
Однако в качестве дополнительной меры, на мой взгляд, неплохо было бы автоматически записывать в статистику результаты каждого блица/турнира с составом участников. Если появятся подозрения насчёт набивки, можно будет осуществить проверку игр участника и по её результатам решать, применять или нет к нему какие-то санкции.
#### 5.3. Читерство
Это проблема, которая имеет место быть и без всяких рейтингов, а не какая-то новая, которая появится при введении ЭЛО. Потому не буду здесь подробно на ней останавливаться. Отмечу лишь, что, возможно, при введении рейтинга её масштабы увеличатся, так как появится больше целей (лучше сказать — квазипотребностей), ради которых можно будет прибегать к читерству.
### 6. Не рекомендуемые системы оценки силы игроков
#### 6.1. Общее количество набранных очков
Общее количество очков за сезон в регулярной викторине **вообще никак не говорит о силе игрока**. Этот показатель говорит только о том, что ты провёл за викториной очень много времени. Как [**сказал Стив Джобс**](https://ria.ru/20131014/968716365.html), работать нужно не 12 часов, а головой. К сожалению, владельцы ресурсов Вс_Поллардов ставят данный показатель «главным», на первое место, поощряя тех, кто играет больше, а не сильнее.
Опять же личный пример (не потому что я какой-то там великий игрок, а потому что своя статистика лучше помнится). Ни в одной викторине я никогда и близко не подходил к тройке лидеров сезона по общему числу набранных очков, однако постоянно был победителем/призёром турниров и (правда, с меньшим успехом) находился в лидерах по показателям Филимании, хоть как характеризующим силу игрока в регулярной викторине (эффективность, максимальное количество очков в час, победы в блицах над топами).
#### 6.2. Процент побед
Победы могут совершаться над игроками разной силы. Можно всё время играть с мешками, и набить 100% побед; но не факт, что такой игрок играет сильнее того, который соревнуется только с монстрами, проигрывая им.
С введением же ЭЛО, играющий со слабаками будет продвигаться вперёд черепашьими шагами. Победа же над сильным будет цениться гораздо больше.
#### 6.3. Процент отвеченных вопросов
Неидеальный вариант. Когда народу мало, и среди них сплошь игроки невысокого уровня; времени на ответ и подсказок становится больше → шансы на большой процент ответов и длинные цепочки высоки. Когда же среди твоих противников множество топовых игроков, процент упадёт.
Тем не менее, будь моя воля, я бы ввёл и этот показатель для дополнительной статистики. Можно было бы изучать, как он коррелирует с рейтингом.
#### 6.4. Очки/ответы в час (для регулярной викторины)
То есть, сила игроков определяется их лучшими результатами за час. Данный показатель используется в Филимании. Мне он видится неплохим; его наличие лучше, чем отсутствие; однако:
1. См. предыдущий подраздел; результат будет зависеть он количества/уровня соперников.
1. Зависимость от вопросов: на протяжении часа могут идти как удобные для себя вопросы, так и те, на которые ты не в состоянии отвечать. Да, и с ЭЛО в блицах/турнирах будут попадаться как удобные вопросы, так и нет. Но ЭЛО показывает средний результат, а не лучший. В среднем игроки могут играть одинаково, но одному из них может однажды повезти с вопросами за час, другому — нет.
1. Нужно весь час проводить на пределе возможностей. На полную я играю только в блицах/турнирах. В остальное время же выписываю понятия, на которые не удалось ответить, гуглю информацию о них, бывает, отвлекаюсь, разговариваю. Всего этого не сделаешь, если хочешь набрать высокий результат за час. Причём по-максимуму выкладываться потребуется всегда: ведь заранее не предугадаешь, когда тебе попадутся оптимальные вопросы и состав соперников.
#### 6.5. КонКВИЗтадор
Рейтинг (**ОСИ** — Очки Силы Ответа) там начислялся по формуле `ОСИ = (ПП+ОСО+ССП)/3`, где:
1. **ПП** — процент побед (во Вс_Поллардах можно посчитать его, используя метод, расписанный мной в разделе 4);
1. **ОСО** — очки силы ответа (во Вс_Поллардах можно использовать процент правильных ответов);
1. **ССП** — средняя сила (средние ОСИ) противников.
Чем мне не нравится данная система применительно ко Вс_Поллардам:
1. Не могу придумать, чем адекватно заменить ССП. Если использовать этот показатель так, как это было в КонКВИЗтадоре, то рейтинг игрока может упасть даже при победах над слабыми соперниками → может случиться так, что сильные будут избегать заходить играть со слабыми. ЭЛО же, если ты выиграл, никогда не упадёт, пусть твой соперник имеет худший рейтинг на планете (но ничего за победу не зачислится тоже).
1. В КонКВИЗтадоре приходилось видеть, как некоторые игроки подбирали себе противников в играх на рейтинг. С одной стороны не заходили рано утром, когда людей мало → можно было попасть на очень слабого и испортить свой ССП; с другой — опасались заходить, когда видели в чате монстров (чтобы не понизить ПП). Возможно, нечто подобное будет происходить и во Вс_Поллардах.
1. Зачем вообще такая сложная система, когда в ЭЛО каких-то критических недочётов не вижу.
Спасибо.
|
main
|
feat justice эло запрос в комнатах эрика равно как и любых вс поллардах неплохо было бы иметь справедливую систему оценки силы игроков хорошим решением мне видится используемый в соревнованиях по шахматам шашкам го сёги и другим видам спорта не отрицаю что применительно к вс поллардам можно придумать более совершенную систему рейтинга или же модифицировать эло это обсуждаемо однако считаю что эло — намного лучше чем ничего как сейчас можно подробно почитать о существующих разновидностях рейтингов их достоинствах и недостатках а также деталях детали следующие рейтинги по моему мнению не должны смешиваться в блицах дуэлях этот тип соревнования по разному называется мне оба термина представляются плохими регулярного чемпионата турнирный приведу пример из собственной практики все приведённые в разделе данные можете проверить в статистике филимании как это сделал я когда писал данный раздел в мне удалось победить в турнирах на новых вопросах в то же время по общей сумме очков в том году только однажды весной вошёл в топ заняв е место летом осенью и весной пролетел мимо топ что не мешало регулярно побеждать в турнирах в то же время игроки входившие в тройку лидеров сезона совсем не проявляли себя на турнирах либо же не заходили туда аргументация кому будет выгодно внедрение рейтинга джинджерины в целом наконец появится более или менее объективная система оценки игроков можете считать что имхо перенимая прогрессивные явления из других интеллектуальных игр джинджерины сделают шаг на пути из игрушки пресыщенных интеллигентов к полноценному виду спорта владельцы ресурсов для владельцев важность представляет количество игроков и их вовлечённость в игру введение рейтингов в сочетании с другими факторами например разнообразными поощрениями игроков статистикой поспособствуют повышению данных показателей вот что научные данные говорят game mechanics describe the particular components of the game at the level of data representation and algorithms hunicke et al game mechanics may strongly influence the user s motivation and engagement despite being interrelated it is important to mention that game mechanics differ from game rules the latter determine the endorsed behaviours that are pursued when implementing the corresponding mechanics for example implementing game levels see below is a game mechanic that basically allows users to level up e g upgrading the character s status and or level down e g downgrading elo rating when losing in a chess game within a system the behaviours actions that cause the users to level up or down are defined in the game rules подробности можно узнать в этой или других книгах о геймификации могу здесь аргументировать не только книгами но и личным опытом — мя годами проведёнными в конквизтадоре где как раз была куча рейтингов игроков этим удалось завлечь они постоянно заводили разговоры об оси осо понятия из тамошней системы рейтингов подробнее см в разделе причём интересно что статистикой интересовались не только самые сильные игроки но и весьма средние и они стремились повышать свои показатели а не по страусиному скрывать их как результат игроков в лучшие годы там было в разы больше чем в любом из вс поллардов вот сколько к примеру было зарегистрировано на мой турнир сильные игроки игрокам стремящимся играть лучше будет удобнее оценивать свой прогресс регресс насколько быстро идёт повышение уровня слабые игроки прошу прощения если кому то определения «слабый игрок» «мешок» покажутся неполиткорректным есть такая проблема сейчас во всех вс поллардах сильные и слабые игроки заходят в один и тот же блиц дуэль слабые обычно проигрывают и некоторые со временем вообще перестают заходить — а какой смысл бороться если скорее всего проиграешь всё равно что идти на убой последующая реакция слабого игрока может напоминать с введением же рейтинга по моему мнению в блицы к сильным будут заходить намного чаще и вот почему в турниры и особенно блицы можно будет заходить решая локальные задачи если сразу всех монстров вынести не получается можно поставить цель сначала обойти в рейтинге тех кто тебе по силам на данный момент а уже потом спустя какое то время браться за соперников посерьёзнее поражения намного превосходящим в классе соперникам никак не скажутся на рейтинге аналогия каспаров может сколько угодно обыгрывать меня на официальных шахматных турнирах но никаких прибавок в рейтинге это ему не принесёт чтобы повысить свой эло ему нужно играть с гораздо более сильными оппонентами верно и обратное я никак не рискую своим рейтингом играя с ним в то же время если слабый игрок победит заведомо более сильного рейтинг слабого значительно возрастёт фактически слабый практически ничем не рискует в случае поражения а при победе же срывает большой куш техническая реализация положим в блиц сыграли четверо a — выиграл b и c набрали поровну очков d — меньше всех конкретное число очков скорость ответов и прочие показатели при подсчёте рейтинга не учитываются важно только то как ты сыграл относительно других набрал больше очков чем другой участник — это считается что ты победил его набрали поровну — вы «сыграли вничью» таблицу теперь можно представить следующим образом теперь берём какую нибудь программу подсчитывающую эло для node js это или она будет считать как в шахматах сколько прибавлять отнимать очков рейтинга за победу над определённым соперником по окончании турнира блица программа сложит вычтет изменения рейтинга за поражения ничьи победы → у игрока появится новый эло возможные проблемы клоноводство вероятно при введении рейтинга можно будет столкнуться с проблемой когда игроки будут создавать новых клонов виртуалов портя рейтинг другим что может помочь в её разрешении начисление рейтинга новому участнику не сразу а после некоторого числа блицев турниров клоноводу нужно будет прокачать свой новый никнейм прежде чем этот никнейм будет иметь рейтинг прокачка каждого нового клона потребует времени и охота создавать их ещё может отбиться ну а лучшее на мой взгляд решение — запретить одному человеку играть на рейтинг множеством ников под угрозой бана в случае подозрений что какой то игрок является виртуалом другие игроки могут инициировать проверку на совпадение по ip с уже имеющимися участниками наподобие в википедии набивка рейтинга возможна такая проблема которая присутствовала в конквизтадоре игрок может играть только с определёнными соперниками чтобы набить себе рейтинг разновидности игры только со слабыми оппонентами рейтинг хоть и слабенько но повышается игры с определёнными соперниками с тобой в игру заходят друзья знакомые с неплохим рейтингом согласные слить бой во вс поллардах не думаю что проблема будет иметь столь же масштабный характер как в конквизтадоре разве что по ночам когда нет может кому то захочется набивать стату по следующим причинам что происходит в чате видят все присутствующие в конквизтадоре же конкретную игру не видел никто кроме трёх её участников отсутствие ограничений на вход в турниры блицы заходить может любой а не только «нужные» игроки однако в качестве дополнительной меры на мой взгляд неплохо было бы автоматически записывать в статистику результаты каждого блица турнира с составом участников если появятся подозрения насчёт набивки можно будет осуществить проверку игр участника и по её результатам решать применять или нет к нему какие то санкции читерство это проблема которая имеет место быть и без всяких рейтингов а не какая то новая которая появится при введении эло потому не буду здесь подробно на ней останавливаться отмечу лишь что возможно при введении рейтинга её масштабы увеличатся так как появится больше целей лучше сказать — квазипотребностей ради которых можно будет прибегать к читерству не рекомендуемые системы оценки силы игроков общее количество набранных очков общее количество очков за сезон в регулярной викторине вообще никак не говорит о силе игрока этот показатель говорит только о том что ты провёл за викториной очень много времени как работать нужно не часов а головой к сожалению владельцы ресурсов вс поллардов ставят данный показатель «главным» на первое место поощряя тех кто играет больше а не сильнее опять же личный пример не потому что я какой то там великий игрок а потому что своя статистика лучше помнится ни в одной викторине я никогда и близко не подходил к тройке лидеров сезона по общему числу набранных очков однако постоянно был победителем призёром турниров и правда с меньшим успехом находился в лидерах по показателям филимании хоть как характеризующим силу игрока в регулярной викторине эффективность максимальное количество очков в час победы в блицах над топами процент побед победы могут совершаться над игроками разной силы можно всё время играть с мешками и набить побед но не факт что такой игрок играет сильнее того который соревнуется только с монстрами проигрывая им с введением же эло играющий со слабаками будет продвигаться вперёд черепашьими шагами победа же над сильным будет цениться гораздо больше процент отвеченных вопросов неидеальный вариант когда народу мало и среди них сплошь игроки невысокого уровня времени на ответ и подсказок становится больше → шансы на большой процент ответов и длинные цепочки высоки когда же среди твоих противников множество топовых игроков процент упадёт тем не менее будь моя воля я бы ввёл и этот показатель для дополнительной статистики можно было бы изучать как он коррелирует с рейтингом очки ответы в час для регулярной викторины то есть сила игроков определяется их лучшими результатами за час данный показатель используется в филимании мне он видится неплохим его наличие лучше чем отсутствие однако см предыдущий подраздел результат будет зависеть он количества уровня соперников зависимость от вопросов на протяжении часа могут идти как удобные для себя вопросы так и те на которые ты не в состоянии отвечать да и с эло в блицах турнирах будут попадаться как удобные вопросы так и нет но эло показывает средний результат а не лучший в среднем игроки могут играть одинаково но одному из них может однажды повезти с вопросами за час другому — нет нужно весь час проводить на пределе возможностей на полную я играю только в блицах турнирах в остальное время же выписываю понятия на которые не удалось ответить гуглю информацию о них бывает отвлекаюсь разговариваю всего этого не сделаешь если хочешь набрать высокий результат за час причём по максимуму выкладываться потребуется всегда ведь заранее не предугадаешь когда тебе попадутся оптимальные вопросы и состав соперников конквизтадор рейтинг оси — очки силы ответа там начислялся по формуле оси пп осо ссп где пп — процент побед во вс поллардах можно посчитать его используя метод расписанный мной в разделе осо — очки силы ответа во вс поллардах можно использовать процент правильных ответов ссп — средняя сила средние оси противников чем мне не нравится данная система применительно ко вс поллардам не могу придумать чем адекватно заменить ссп если использовать этот показатель так как это было в конквизтадоре то рейтинг игрока может упасть даже при победах над слабыми соперниками → может случиться так что сильные будут избегать заходить играть со слабыми эло же если ты выиграл никогда не упадёт пусть твой соперник имеет худший рейтинг на планете но ничего за победу не зачислится тоже в конквизтадоре приходилось видеть как некоторые игроки подбирали себе противников в играх на рейтинг с одной стороны не заходили рано утром когда людей мало → можно было попасть на очень слабого и испортить свой ссп с другой — опасались заходить когда видели в чате монстров чтобы не понизить пп возможно нечто подобное будет происходить и во вс поллардах зачем вообще такая сложная система когда в эло каких то критических недочётов не вижу спасибо
| 1
|
3,818
| 16,607,016,940
|
IssuesEvent
|
2021-06-02 06:03:17
|
keptn/community
|
https://api.github.com/repos/keptn/community
|
closed
|
REQUEST: New membership for @warber @christian-kreuzberger-dtx @laneli @ermin-muratovic @bacherfl
|
membership:maintainer status:approved
|
### Multi-request for Keptn core developer team
Please note, that these people are already maintainers. This issue is created for bookkeeping purposes.
* Bernd Warmuth @warber
* Christian Kreuzberger @christian-kreuzberger-dtx
* Elisabeth Lang @laneli
* Ermin Muratovic @ermin-muratovic
* Florian Bacher @bacherfl
### Requirements
- [x] We have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] We have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] We have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] We are actively contributing to 1 or more Keptn subprojects
- [x] We have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository in the organization
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
<!-- Replace (at) with the `@` sign -->
- @jetzlstorfer
- @johannes-b
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
The aforementioned members have been actively contributing to the Keptn project (multiple repos) for several months, some of them even more than a year. They are also listed as top contributors: https://github.com/keptn/keptn/graphs/contributors
|
True
|
REQUEST: New membership for @warber @christian-kreuzberger-dtx @laneli @ermin-muratovic @bacherfl - ### Multi-request for Keptn core developer team
Please note, that these people are already maintainers. This issue is created for bookkeeping purposes.
* Bernd Warmuth @warber
* Christian Kreuzberger @christian-kreuzberger-dtx
* Elisabeth Lang @laneli
* Ermin Muratovic @ermin-muratovic
* Florian Bacher @bacherfl
### Requirements
- [x] We have reviewed the community membership guidelines (https://github.com/keptn/community/blob/master/COMMUNITY_MEMBERSHIP.md)
- [x] We have enabled 2FA on my GitHub account. See https://github.com/settings/security
- [x] We have subscribed to the [Keptn Slack channel](http://slack.keptn.sh/)
- [x] We are actively contributing to 1 or more Keptn subprojects
- [x] We have two sponsors that meet the sponsor requirements listed in the community membership guidelines. Among other requirements, sponsors must be approvers or maintainers of at least one repository in the organization
- [x] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
### Sponsors
<!-- Replace (at) with the `@` sign -->
- @jetzlstorfer
- @johannes-b
Each sponsor should reply to this issue with the comment "*I support*".
Please remember, it is an applicant's responsibility to get their sponsors' confirmation before submitting the request.
### List of contributions to the Keptn project
The aforementioned members have been actively contributing to the Keptn project (multiple repos) for several months, some of them even more than a year. They are also listed as top contributors: https://github.com/keptn/keptn/graphs/contributors
|
main
|
request new membership for warber christian kreuzberger dtx laneli ermin muratovic bacherfl multi request for keptn core developer team please note that these people are already maintainers this issue is created for bookkeeping purposes bernd warmuth warber christian kreuzberger christian kreuzberger dtx elisabeth lang laneli ermin muratovic ermin muratovic florian bacher bacherfl requirements we have reviewed the community membership guidelines we have enabled on my github account see we have subscribed to the we are actively contributing to or more keptn subprojects we have two sponsors that meet the sponsor requirements listed in the community membership guidelines among other requirements sponsors must be approvers or maintainers of at least one repository in the organization i have spoken to my sponsors ahead of this application and they have agreed to sponsor my application sponsors jetzlstorfer johannes b each sponsor should reply to this issue with the comment i support please remember it is an applicant s responsibility to get their sponsors confirmation before submitting the request list of contributions to the keptn project the aforementioned members have been actively contributing to the keptn project multiple repos for several months some of them even more than a year they are also listed as top contributors
| 1
|
10,542
| 8,974,715,739
|
IssuesEvent
|
2019-01-30 01:38:32
|
Azure/azure-sdk-for-js
|
https://api.github.com/repos/Azure/azure-sdk-for-js
|
closed
|
[Service Bus] Bug with SessionReceiver not closing properly when running tests
|
Client Service Bus bug
|
Encountered an issue while working with testUtils.ts where purge() is not working as expected.
|
1.0
|
[Service Bus] Bug with SessionReceiver not closing properly when running tests - Encountered an issue while working with testUtils.ts where purge() is not working as expected.
|
non_main
|
bug with sessionreceiver not closing properly when running tests encountered an issue while working with testutils ts where purge is not working as expected
| 0
|
3,314
| 12,831,174,378
|
IssuesEvent
|
2020-07-07 04:31:44
|
short-d/short
|
https://api.github.com/repos/short-d/short
|
closed
|
[Refactor] Disallow special URL characters as aliases
|
maintainability
|
**What is frustrating you?**
As observed in #743, '#' characters should not be allowed as a character to use within an alias. I believe it also doesn't make too much sense to allow other special URL characters such as '?', ',', '=', and ';' either. I created this issue to allow for discussion.
**Your solution**
Create a function that checks if any of these characters are present in custom alias (can perhaps extend `IsValid` method in `CustomAlias` validator). If they are, then throw error.
**Alternatives considered**
Only disallow '#' since it actually causes a problem with long link retrieval and keep allowing the other URL special characters.
**Additional context**

(note: everything at and after the '#' part of the aliased URL is discarded before sent to server, see #743)
|
True
|
[Refactor] Disallow special URL characters as aliases - **What is frustrating you?**
As observed in #743, '#' characters should not be allowed as a character to use within an alias. I believe it also doesn't make too much sense to allow other special URL characters such as '?', ',', '=', and ';' either. I created this issue to allow for discussion.
**Your solution**
Create a function that checks if any of these characters are present in custom alias (can perhaps extend `IsValid` method in `CustomAlias` validator). If they are, then throw error.
**Alternatives considered**
Only disallow '#' since it actually causes a problem with long link retrieval and keep allowing the other URL special characters.
**Additional context**

(note: everything at and after the '#' part of the aliased URL is discarded before sent to server, see #743)
|
main
|
disallow special url characters as aliases what is frustrating you as observed in characters should not be allowed as a character to use within an alias i believe it also doesn t make too much sense to allow other special url characters such as and either i created this issue to allow for discussion your solution create a function that checks if any of these characters are present in custom alias can perhaps extend isvalid method in customalias validator if they are then throw error alternatives considered only disallow since it actually causes a problem with long link retrieval and keep allowing the other url special characters additional context note everything at and after the part of the aliased url is discarded before sent to server see
| 1
|
3,643
| 14,750,463,750
|
IssuesEvent
|
2021-01-08 02:14:28
|
Homebrew/homebrew-cask
|
https://api.github.com/repos/Homebrew/homebrew-cask
|
closed
|
brew bump-cask-pr fails with fork not found
|
awaiting maintainer feedback
|
#### General troubleshooting steps
- [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [ ] I have retried my command with `--force`.
- [x] I ran `brew update-reset && brew update` and retried my command.
- [x] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [x] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [ ] I made doubly sure this is not a [checksum does not match](https://github.com/Homebrew/homebrew-cask/blob/master/doc/reporting_bugs/checksum_does_not_match_error.md) error.
#### Description of issue
Unable to bump-cask-pr a Cask
<!-- Please DO NOT delete the backticks. Only change the “{{replace this}}” text. -->
#### Command that failed
```
brew bump-cask-pr --version 0.12.3 boost-note
```
#### Output of command with `--verbose --debug`
```
/opt/homebrew/Library/Homebrew/shims/scm/git --version
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location https://api.github.com/search/issues\?q=boost-note\+repo\%3AHomebrew\%2Fhomebrew-cask\+state\%3Aopen\+in\%3Atitle\&per_page=100 --header Accept:\ application/vnd.github.v3\+json --write-out '
'\%\{http_code\} --header Accept:\ application/vnd.github.antiope-preview\+json --header Authorization:\ token\ ****** --dump-header /private/tmp/github_api_headers20210105-30092-1teb10y
==> Downloading https://github.com/BoostIO/BoostNote.next/releases/download/v0.12.3/boost-note-mac.dmg
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location --silent --head --request GET https://github.com/BoostIO/BoostNote.next/releases/download/v0.12.3/boost-note-mac.dmg
Already downloaded: /Users/markus/Library/Caches/Homebrew/downloads/08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg
==> Verifying checksum for '08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg'.
Warning: Cannot verify integrity of '08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg'.
No checksum was provided for this resource.
For your reference, the checksum is:
sha256 "cf3c4e41accf53726276ff9b803793b3e9043e08d12f11a8598db9327b9a0db7"
==> replace /version\s+["']0\.11\.6["']/m with "version \"0.12.3\""
==> replace /sha256\s+["']224df26e306b8c4594a40255fc5e01eeaa9a8fdd1df916eac0ef0cc9d4eb19bf["']/m with "sha256 \"cf3c4e41accf53726276ff9b803793b3e9043e08d12f11a8598db9327b9a0db7\""
/opt/homebrew/bin/brew audit --cask /opt/homebrew/Library/Taps/homebrew/homebrew-cask/Casks/boost-note.rb
audit for boost-note: passed
/opt/homebrew/bin/brew style --fix /opt/homebrew/Library/Taps/homebrew/homebrew-cask/Casks/boost-note.rb
1 file inspected, no offenses detected
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location https://api.github.com/repos/Homebrew/homebrew-cask/forks --header Accept:\ application/vnd.github.v3\+json --write-out '
'\%\{http_code\} --header Accept:\ application/vnd.github.antiope-preview\+json --header Authorization:\ token\ ****** --data @/private/tmp/github_api_post20210105-30092-o6fj19 --dump-header /private/tmp/github_api_headers20210105-30092-1x9n5zb
Error: Unable to fork: Not Found!
Error: Kernel.exit
/opt/homebrew/Library/Homebrew/utils.rb:159:in `exit'
/opt/homebrew/Library/Homebrew/utils.rb:159:in `odie'
/opt/homebrew/Library/Homebrew/utils/github.rb:709:in `rescue in block in create_bump_pr'
/opt/homebrew/Library/Homebrew/utils/github.rb:705:in `block in create_bump_pr'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `block in cd'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `chdir'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `cd'
/opt/homebrew/Library/Homebrew/utils/github.rb:682:in `create_bump_pr'
/opt/homebrew/Library/Homebrew/dev-cmd/bump-cask-pr.rb:191:in `bump_cask_pr'
/opt/homebrew/Library/Homebrew/brew.rb:124:in `<main>'
```
#### Output of `brew doctor --verbose`
```
/opt/homebrew/Library/Homebrew/shims/scm/git --version
==> Cask Environment Variables:
BUNDLE_PATH
CHRUBY_VERSION
GEM_HOME
GEM_PATH
HOMEBREW_CASK_OPTS
LC_ALL
PATH
RBENV_VERSION
RUBYLIB
RUBYOPT
RUBYPATH
SHELL
==> $LOAD_PATHS
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/tapioca-0.4.10/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/spoom-1.0.7/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/thor-1.0.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-runtime-stub-0.2.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ruby-macho-2.5.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-sorbet-0.5.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-rspec-2.1.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-rails-2.9.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-performance-1.9.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-1.7.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unicode-display_width-1.7.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ruby-progressbar-1.11.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-ast-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-wait-0.0.9/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-sorbet-1.8.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-0.5.6189/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-retry-0.6.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-its-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-github-2.3.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-mocks-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-expectations-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-core-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-support-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ronn-0.7.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rexml-3.2.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/regexp_parser-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rdiscount-2.2.0.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/rdiscount-2.2.0.2
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rack-2.2.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/pry-0.13.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/plist-3.6.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/patchelf-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parlour-4.0.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-runtime-0.5.6189/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rainbow-3.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parser-3.0.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parallel_tests-3.4.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parallel-1.20.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mustache-1.1.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/method_source-1.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mechanize-2.7.6/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/webrobots-0.1.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ntlm-http-0.1.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/nokogiri-1.10.10/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/nokogiri-1.10.10
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mini_portile2-2.4.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/net-http-persistent-4.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/net-http-digest_auth-1.4.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mime-types-3.3.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mime-types-data-3.2020.1104/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/http-cookie-1.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/hpricot-0.8.6/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/hpricot-0.8.6
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/elftools-1.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/domain_name-0.5.20190701/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unf-0.1.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unf_ext-0.0.7.7/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/unf_ext-0.0.7.7
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/diff-lcs-1.4.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/connection_pool-2.2.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/commander-4.5.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/highline-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/colorize-0.8.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/coderay-1.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/codecov-0.2.15/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov-0.20.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov_json_formatter-0.1.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov-html-0.12.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/docile-1.3.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/byebug-11.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/byebug-11.1.3
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/bindata-2.4.8/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ast-2.4.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/activesupport-6.1.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/zeitwerk-2.4.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/tzinfo-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/minitest-5.14.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/i18n-1.8.5/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/concurrent-ruby-1.1.7/lib/concurrent-ruby
/Library/Ruby/Site/2.6.0
/Library/Ruby/Site/2.6.0/universal-darwin20
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0/universal-darwin20
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/universal-darwin20
/opt/homebrew/Library/Homebrew
/usr/bin/xattr
/usr/bin/swift /opt/homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Homebrew Version
2.7.1-139-gd278e87
==> macOS
11.1
==> SIP
Enabled
/usr/libexec/java_home --xml --failfast
==> Java
N/A
==> Homebrew Cask Staging Location
/opt/homebrew/Caskroom
==> Homebrew Cask Taps:
/opt/homebrew/Library/Taps/homebrew/homebrew-cask (3773 casks)
/usr/bin/xattr
Your system is ready to brew.
```
#### Output of `brew tap`
```
buo/cask-upgrade
homebrew/cask
homebrew/core
```
**Sidenote**: I did everything mentioned in #93010 . But I can't post there because its locked.
I'm also using a Apple Silicon Mac, but I was able to cask-upgrade it a few days ago. Forgot to submit the PR and updated to Big Sur 11.1. Since then I'm unable to run the command.
I've also uninstalled homebrew and started from scratch.
|
True
|
brew bump-cask-pr fails with fork not found - #### General troubleshooting steps
- [x] I understand that [if I ignore these instructions, my issue may be closed without review](https://github.com/Homebrew/homebrew-cask/blob/master/doc/faq/closing_issues_without_review.md).
- [ ] I have retried my command with `--force`.
- [x] I ran `brew update-reset && brew update` and retried my command.
- [x] I have checked the instructions for [reporting bugs](https://github.com/Homebrew/homebrew-cask#reporting-bugs).
- [x] I ran `brew doctor`, fixed as many issues as possible and retried my command.
- [ ] I made doubly sure this is not a [checksum does not match](https://github.com/Homebrew/homebrew-cask/blob/master/doc/reporting_bugs/checksum_does_not_match_error.md) error.
#### Description of issue
Unable to bump-cask-pr a Cask
<!-- Please DO NOT delete the backticks. Only change the “{{replace this}}” text. -->
#### Command that failed
```
brew bump-cask-pr --version 0.12.3 boost-note
```
#### Output of command with `--verbose --debug`
```
/opt/homebrew/Library/Homebrew/shims/scm/git --version
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location https://api.github.com/search/issues\?q=boost-note\+repo\%3AHomebrew\%2Fhomebrew-cask\+state\%3Aopen\+in\%3Atitle\&per_page=100 --header Accept:\ application/vnd.github.v3\+json --write-out '
'\%\{http_code\} --header Accept:\ application/vnd.github.antiope-preview\+json --header Authorization:\ token\ ****** --dump-header /private/tmp/github_api_headers20210105-30092-1teb10y
==> Downloading https://github.com/BoostIO/BoostNote.next/releases/download/v0.12.3/boost-note-mac.dmg
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location --silent --head --request GET https://github.com/BoostIO/BoostNote.next/releases/download/v0.12.3/boost-note-mac.dmg
Already downloaded: /Users/markus/Library/Caches/Homebrew/downloads/08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg
==> Verifying checksum for '08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg'.
Warning: Cannot verify integrity of '08a0980c8e2ffec7f3de961793bc42fac149e6a0a72729eeb1ecc7404c17e715--boost-note-mac.dmg'.
No checksum was provided for this resource.
For your reference, the checksum is:
sha256 "cf3c4e41accf53726276ff9b803793b3e9043e08d12f11a8598db9327b9a0db7"
==> replace /version\s+["']0\.11\.6["']/m with "version \"0.12.3\""
==> replace /sha256\s+["']224df26e306b8c4594a40255fc5e01eeaa9a8fdd1df916eac0ef0cc9d4eb19bf["']/m with "sha256 \"cf3c4e41accf53726276ff9b803793b3e9043e08d12f11a8598db9327b9a0db7\""
/opt/homebrew/bin/brew audit --cask /opt/homebrew/Library/Taps/homebrew/homebrew-cask/Casks/boost-note.rb
audit for boost-note: passed
/opt/homebrew/bin/brew style --fix /opt/homebrew/Library/Taps/homebrew/homebrew-cask/Casks/boost-note.rb
1 file inspected, no offenses detected
/usr/bin/curl --disable --globoff --show-error --user-agent Homebrew/2.7.1-139-gd278e87\ \(Macintosh\;\ arm64\ Mac\ OS\ X\ 11.1\)\ curl/7.64.1 --header Accept-Language:\ en --retry 3 --location https://api.github.com/repos/Homebrew/homebrew-cask/forks --header Accept:\ application/vnd.github.v3\+json --write-out '
'\%\{http_code\} --header Accept:\ application/vnd.github.antiope-preview\+json --header Authorization:\ token\ ****** --data @/private/tmp/github_api_post20210105-30092-o6fj19 --dump-header /private/tmp/github_api_headers20210105-30092-1x9n5zb
Error: Unable to fork: Not Found!
Error: Kernel.exit
/opt/homebrew/Library/Homebrew/utils.rb:159:in `exit'
/opt/homebrew/Library/Homebrew/utils.rb:159:in `odie'
/opt/homebrew/Library/Homebrew/utils/github.rb:709:in `rescue in block in create_bump_pr'
/opt/homebrew/Library/Homebrew/utils/github.rb:705:in `block in create_bump_pr'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `block in cd'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `chdir'
/opt/homebrew/Library/Homebrew/extend/pathname.rb:318:in `cd'
/opt/homebrew/Library/Homebrew/utils/github.rb:682:in `create_bump_pr'
/opt/homebrew/Library/Homebrew/dev-cmd/bump-cask-pr.rb:191:in `bump_cask_pr'
/opt/homebrew/Library/Homebrew/brew.rb:124:in `<main>'
```
#### Output of `brew doctor --verbose`
```
/opt/homebrew/Library/Homebrew/shims/scm/git --version
==> Cask Environment Variables:
BUNDLE_PATH
CHRUBY_VERSION
GEM_HOME
GEM_PATH
HOMEBREW_CASK_OPTS
LC_ALL
PATH
RBENV_VERSION
RUBYLIB
RUBYOPT
RUBYPATH
SHELL
==> $LOAD_PATHS
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/tapioca-0.4.10/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/spoom-1.0.7/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/thor-1.0.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-runtime-stub-0.2.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ruby-macho-2.5.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-sorbet-0.5.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-rspec-2.1.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-rails-2.9.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-performance-1.9.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-1.7.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unicode-display_width-1.7.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ruby-progressbar-1.11.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rubocop-ast-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-wait-0.0.9/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-sorbet-1.8.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-0.5.6189/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-retry-0.6.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-its-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-github-2.3.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-mocks-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-expectations-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-core-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rspec-support-3.10.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ronn-0.7.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rexml-3.2.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/regexp_parser-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rdiscount-2.2.0.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/rdiscount-2.2.0.2
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rack-2.2.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/pry-0.13.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/plist-3.6.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/patchelf-1.3.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parlour-4.0.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/sorbet-runtime-0.5.6189/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/rainbow-3.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parser-3.0.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parallel_tests-3.4.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/parallel-1.20.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mustache-1.1.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/method_source-1.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mechanize-2.7.6/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/webrobots-0.1.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ntlm-http-0.1.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/nokogiri-1.10.10/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/nokogiri-1.10.10
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mini_portile2-2.4.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/net-http-persistent-4.0.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/net-http-digest_auth-1.4.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mime-types-3.3.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/mime-types-data-3.2020.1104/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/http-cookie-1.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/hpricot-0.8.6/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/hpricot-0.8.6
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/elftools-1.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/domain_name-0.5.20190701/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unf-0.1.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/unf_ext-0.0.7.7/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/unf_ext-0.0.7.7
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/diff-lcs-1.4.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/connection_pool-2.2.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/commander-4.5.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/highline-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/colorize-0.8.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/coderay-1.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/codecov-0.2.15/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov-0.20.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov_json_formatter-0.1.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/simplecov-html-0.12.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/docile-1.3.4/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/byebug-11.1.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/extensions/universal-darwin-20/2.6.0/byebug-11.1.3
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/bindata-2.4.8/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/ast-2.4.1/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/activesupport-6.1.0/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/zeitwerk-2.4.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/tzinfo-2.0.3/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/minitest-5.14.2/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/i18n-1.8.5/lib
/opt/homebrew/Library/Homebrew/vendor/bundle/bundler/../ruby/2.6.0/gems/concurrent-ruby-1.1.7/lib/concurrent-ruby
/Library/Ruby/Site/2.6.0
/Library/Ruby/Site/2.6.0/universal-darwin20
/Library/Ruby/Site
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby/2.6.0/universal-darwin20
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/vendor_ruby
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0
/System/Library/Frameworks/Ruby.framework/Versions/2.6/usr/lib/ruby/2.6.0/universal-darwin20
/opt/homebrew/Library/Homebrew
/usr/bin/xattr
/usr/bin/swift /opt/homebrew/Library/Homebrew/cask/utils/quarantine.swift
==> Homebrew Version
2.7.1-139-gd278e87
==> macOS
11.1
==> SIP
Enabled
/usr/libexec/java_home --xml --failfast
==> Java
N/A
==> Homebrew Cask Staging Location
/opt/homebrew/Caskroom
==> Homebrew Cask Taps:
/opt/homebrew/Library/Taps/homebrew/homebrew-cask (3773 casks)
/usr/bin/xattr
Your system is ready to brew.
```
#### Output of `brew tap`
```
buo/cask-upgrade
homebrew/cask
homebrew/core
```
**Sidenote**: I did everything mentioned in #93010 . But I can't post there because its locked.
I'm also using a Apple Silicon Mac, but I was able to cask-upgrade it a few days ago. Forgot to submit the PR and updated to Big Sur 11.1. Since then I'm unable to run the command.
I've also uninstalled homebrew and started from scratch.
|
main
|
brew bump cask pr fails with fork not found general troubleshooting steps i understand that i have retried my command with force i ran brew update reset brew update and retried my command i have checked the instructions for i ran brew doctor fixed as many issues as possible and retried my command i made doubly sure this is not a error description of issue unable to bump cask pr a cask command that failed brew bump cask pr version boost note output of command with verbose debug opt homebrew library homebrew shims scm git version usr bin curl disable globoff show error user agent homebrew macintosh mac os x curl header accept language en retry location header accept application vnd github json write out http code header accept application vnd github antiope preview json header authorization token dump header private tmp github api downloading usr bin curl disable globoff show error user agent homebrew macintosh mac os x curl header accept language en retry location silent head request get already downloaded users markus library caches homebrew downloads boost note mac dmg verifying checksum for boost note mac dmg warning cannot verify integrity of boost note mac dmg no checksum was provided for this resource for your reference the checksum is replace version s m with version replace s m with opt homebrew bin brew audit cask opt homebrew library taps homebrew homebrew cask casks boost note rb audit for boost note passed opt homebrew bin brew style fix opt homebrew library taps homebrew homebrew cask casks boost note rb file inspected no offenses detected usr bin curl disable globoff show error user agent homebrew macintosh mac os x curl header accept language en retry location header accept application vnd github json write out http code header accept application vnd github antiope preview json header authorization token data private tmp github api dump header private tmp github api error unable to fork not found error kernel exit opt homebrew library homebrew utils rb in exit opt homebrew library homebrew utils rb in odie opt homebrew library homebrew utils github rb in rescue in block in create bump pr opt homebrew library homebrew utils github rb in block in create bump pr opt homebrew library homebrew extend pathname rb in block in cd opt homebrew library homebrew extend pathname rb in chdir opt homebrew library homebrew extend pathname rb in cd opt homebrew library homebrew utils github rb in create bump pr opt homebrew library homebrew dev cmd bump cask pr rb in bump cask pr opt homebrew library homebrew brew rb in output of brew doctor verbose opt homebrew library homebrew shims scm git version cask environment variables bundle path chruby version gem home gem path homebrew cask opts lc all path rbenv version rubylib rubyopt rubypath shell load paths opt homebrew library homebrew vendor bundle bundler ruby gems tapioca lib opt homebrew library homebrew vendor bundle bundler ruby gems spoom lib opt homebrew library homebrew vendor bundle bundler ruby gems thor lib opt homebrew library homebrew vendor bundle bundler ruby gems sorbet runtime stub lib opt homebrew library homebrew vendor bundle bundler ruby gems ruby macho lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop sorbet lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop rspec lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop rails lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop performance lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop lib opt homebrew library homebrew vendor bundle bundler ruby gems unicode display width lib opt homebrew library homebrew vendor bundle bundler ruby gems ruby progressbar lib opt homebrew library homebrew vendor bundle bundler ruby gems rubocop ast lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec wait lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec sorbet lib opt homebrew library homebrew vendor bundle bundler ruby gems sorbet lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec retry lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec its lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec github lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec mocks lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec expectations lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec core lib opt homebrew library homebrew vendor bundle bundler ruby gems rspec support lib opt homebrew library homebrew vendor bundle bundler ruby gems ronn lib opt homebrew library homebrew vendor bundle bundler ruby gems rexml lib opt homebrew library homebrew vendor bundle bundler ruby gems regexp parser lib opt homebrew library homebrew vendor bundle bundler ruby gems rdiscount lib opt homebrew library homebrew vendor bundle bundler ruby extensions universal darwin rdiscount opt homebrew library homebrew vendor bundle bundler ruby gems rack lib opt homebrew library homebrew vendor bundle bundler ruby gems pry lib opt homebrew library homebrew vendor bundle bundler ruby gems plist lib opt homebrew library homebrew vendor bundle bundler ruby gems patchelf lib opt homebrew library homebrew vendor bundle bundler ruby gems parlour lib opt homebrew library homebrew vendor bundle bundler ruby gems sorbet runtime lib opt homebrew library homebrew vendor bundle bundler ruby gems rainbow lib opt homebrew library homebrew vendor bundle bundler ruby gems parser lib opt homebrew library homebrew vendor bundle bundler ruby gems parallel tests lib opt homebrew library homebrew vendor bundle bundler ruby gems parallel lib opt homebrew library homebrew vendor bundle bundler ruby gems mustache lib opt homebrew library homebrew vendor bundle bundler ruby gems method source lib opt homebrew library homebrew vendor bundle bundler ruby gems mechanize lib opt homebrew library homebrew vendor bundle bundler ruby gems webrobots lib opt homebrew library homebrew vendor bundle bundler ruby gems ntlm http lib opt homebrew library homebrew vendor bundle bundler ruby gems nokogiri lib opt homebrew library homebrew vendor bundle bundler ruby extensions universal darwin nokogiri opt homebrew library homebrew vendor bundle bundler ruby gems mini lib opt homebrew library homebrew vendor bundle bundler ruby gems net http persistent lib opt homebrew library homebrew vendor bundle bundler ruby gems net http digest auth lib opt homebrew library homebrew vendor bundle bundler ruby gems mime types lib opt homebrew library homebrew vendor bundle bundler ruby gems mime types data lib opt homebrew library homebrew vendor bundle bundler ruby gems http cookie lib opt homebrew library homebrew vendor bundle bundler ruby gems hpricot lib opt homebrew library homebrew vendor bundle bundler ruby extensions universal darwin hpricot opt homebrew library homebrew vendor bundle bundler ruby gems elftools lib opt homebrew library homebrew vendor bundle bundler ruby gems domain name lib opt homebrew library homebrew vendor bundle bundler ruby gems unf lib opt homebrew library homebrew vendor bundle bundler ruby gems unf ext lib opt homebrew library homebrew vendor bundle bundler ruby extensions universal darwin unf ext opt homebrew library homebrew vendor bundle bundler ruby gems diff lcs lib opt homebrew library homebrew vendor bundle bundler ruby gems connection pool lib opt homebrew library homebrew vendor bundle bundler ruby gems commander lib opt homebrew library homebrew vendor bundle bundler ruby gems highline lib opt homebrew library homebrew vendor bundle bundler ruby gems colorize lib opt homebrew library homebrew vendor bundle bundler ruby gems coderay lib opt homebrew library homebrew vendor bundle bundler ruby gems codecov lib opt homebrew library homebrew vendor bundle bundler ruby gems simplecov lib opt homebrew library homebrew vendor bundle bundler ruby gems simplecov json formatter lib opt homebrew library homebrew vendor bundle bundler ruby gems simplecov html lib opt homebrew library homebrew vendor bundle bundler ruby gems docile lib opt homebrew library homebrew vendor bundle bundler ruby gems byebug lib opt homebrew library homebrew vendor bundle bundler ruby extensions universal darwin byebug opt homebrew library homebrew vendor bundle bundler opt homebrew library homebrew vendor bundle bundler ruby gems bindata lib opt homebrew library homebrew vendor bundle bundler ruby gems ast lib opt homebrew library homebrew vendor bundle bundler ruby gems activesupport lib opt homebrew library homebrew vendor bundle bundler ruby gems zeitwerk lib opt homebrew library homebrew vendor bundle bundler ruby gems tzinfo lib opt homebrew library homebrew vendor bundle bundler ruby gems minitest lib opt homebrew library homebrew vendor bundle bundler ruby gems lib opt homebrew library homebrew vendor bundle bundler ruby gems concurrent ruby lib concurrent ruby library ruby site library ruby site universal library ruby site system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby vendor ruby universal system library frameworks ruby framework versions usr lib ruby vendor ruby system library frameworks ruby framework versions usr lib ruby system library frameworks ruby framework versions usr lib ruby universal opt homebrew library homebrew usr bin xattr usr bin swift opt homebrew library homebrew cask utils quarantine swift homebrew version macos sip enabled usr libexec java home xml failfast java n a homebrew cask staging location opt homebrew caskroom homebrew cask taps opt homebrew library taps homebrew homebrew cask casks usr bin xattr your system is ready to brew output of brew tap buo cask upgrade homebrew cask homebrew core sidenote i did everything mentioned in but i can t post there because its locked i m also using a apple silicon mac but i was able to cask upgrade it a few days ago forgot to submit the pr and updated to big sur since then i m unable to run the command i ve also uninstalled homebrew and started from scratch
| 1
|
333
| 3,135,218,147
|
IssuesEvent
|
2015-09-10 14:23:02
|
opencaching/opencaching-pl
|
https://api.github.com/repos/opencaching/opencaching-pl
|
closed
|
Deploying SQL changes
|
Component-DATABASE enhancement Maintainability
|
As the project advances, there are changes in the database structure from time to time.
Most of them are additions or improvements, which, when applied, do not affect existing user data.
Perhaps there would be a good idea to integrate database changes into the update process in a similar fashion to how OKAPI updates it's own database structure, or at least this is my understanding.
Example scenario:
- database changes that do not affect user data and can be processed automatically, without sysadmin intervention
- run cron job to update:
a) update site via GIT
b) call a script to consolidate changes, apply database changes
c) call OKAPI update script
Currently the update subsystem uses only a) and c).
Example scenario 2:
- database changes that require sysadmin review and possibly intervention
OR
OS needs changes that require sysadmin review and intervention
- before posting changes to master branch, send emails directly to each sysadmin, requesting database changes review. Allot a certain time for it. (ex. 3-7 days).
- if not all sysadmins reply in due time, contact those again, allow one more day. If no answer, consider that node as lagging behind and remove GIT update notification for that node.
- apply update on master branch (all active and up to date nodes have applied necessary changes).
Lagging nodes may come up to date at a later time and contact the developer team to rejoin the update stream. It would be a good idea to have a place where active and inactive nodes using this project's automatic update system are shown and kept up to date.
Thank you.
|
True
|
Deploying SQL changes - As the project advances, there are changes in the database structure from time to time.
Most of them are additions or improvements, which, when applied, do not affect existing user data.
Perhaps there would be a good idea to integrate database changes into the update process in a similar fashion to how OKAPI updates it's own database structure, or at least this is my understanding.
Example scenario:
- database changes that do not affect user data and can be processed automatically, without sysadmin intervention
- run cron job to update:
a) update site via GIT
b) call a script to consolidate changes, apply database changes
c) call OKAPI update script
Currently the update subsystem uses only a) and c).
Example scenario 2:
- database changes that require sysadmin review and possibly intervention
OR
OS needs changes that require sysadmin review and intervention
- before posting changes to master branch, send emails directly to each sysadmin, requesting database changes review. Allot a certain time for it. (ex. 3-7 days).
- if not all sysadmins reply in due time, contact those again, allow one more day. If no answer, consider that node as lagging behind and remove GIT update notification for that node.
- apply update on master branch (all active and up to date nodes have applied necessary changes).
Lagging nodes may come up to date at a later time and contact the developer team to rejoin the update stream. It would be a good idea to have a place where active and inactive nodes using this project's automatic update system are shown and kept up to date.
Thank you.
|
main
|
deploying sql changes as the project advances there are changes in the database structure from time to time most of them are additions or improvements which when applied do not affect existing user data perhaps there would be a good idea to integrate database changes into the update process in a similar fashion to how okapi updates it s own database structure or at least this is my understanding example scenario database changes that do not affect user data and can be processed automatically without sysadmin intervention run cron job to update a update site via git b call a script to consolidate changes apply database changes c call okapi update script currently the update subsystem uses only a and c example scenario database changes that require sysadmin review and possibly intervention or os needs changes that require sysadmin review and intervention before posting changes to master branch send emails directly to each sysadmin requesting database changes review allot a certain time for it ex days if not all sysadmins reply in due time contact those again allow one more day if no answer consider that node as lagging behind and remove git update notification for that node apply update on master branch all active and up to date nodes have applied necessary changes lagging nodes may come up to date at a later time and contact the developer team to rejoin the update stream it would be a good idea to have a place where active and inactive nodes using this project s automatic update system are shown and kept up to date thank you
| 1
|
2,398
| 8,518,442,973
|
IssuesEvent
|
2018-11-01 11:38:13
|
backdrop-ops/contrib
|
https://api.github.com/repos/backdrop-ops/contrib
|
closed
|
Application to join Backdrop contrib
|
Maintainer application
|
Here I am, Olaf Grabienski from Hamburg, Germany. I work with Backdrop for a while, mainly as a site architect, and I'd like to join the Backdrop contrib community.
I need a port of the Drupal *Footnotes* module and tried to do it by myself. I guess it worked quite well even if I'm not a coder, thanks to the help of other contributors. Here's the repository of the module: https://github.com/olafgrabienski/footnotes
(LICENSE.txt will be added soon, cf. https://github.com/olafgrabienski/footnotes/issues/27.)
I agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement).
|
True
|
Application to join Backdrop contrib - Here I am, Olaf Grabienski from Hamburg, Germany. I work with Backdrop for a while, mainly as a site architect, and I'd like to join the Backdrop contrib community.
I need a port of the Drupal *Footnotes* module and tried to do it by myself. I guess it worked quite well even if I'm not a coder, thanks to the help of other contributors. Here's the repository of the module: https://github.com/olafgrabienski/footnotes
(LICENSE.txt will be added soon, cf. https://github.com/olafgrabienski/footnotes/issues/27.)
I agree to the [Backdrop Contributed Project Agreement](https://github.com/backdrop-ops/contrib#backdrop-contributed-project-agreement).
|
main
|
application to join backdrop contrib here i am olaf grabienski from hamburg germany i work with backdrop for a while mainly as a site architect and i d like to join the backdrop contrib community i need a port of the drupal footnotes module and tried to do it by myself i guess it worked quite well even if i m not a coder thanks to the help of other contributors here s the repository of the module license txt will be added soon cf i agree to the
| 1
|
233,440
| 25,765,480,750
|
IssuesEvent
|
2022-12-09 01:14:10
|
dreamboy9/mongo
|
https://api.github.com/repos/dreamboy9/mongo
|
reopened
|
CVE-2021-32803 (High) detected in tar-6.1.0.tgz
|
security vulnerability
|
## CVE-2021-32803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /buildscripts/libdeps/graph_visualizer_web_stack/package.json</p>
<p>Path to vulnerable library: /buildscripts/libdeps/graph_visualizer_web_stack/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- canvas-2.8.0.tgz (Root Library)
- node-pre-gyp-1.0.5.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/mongo/commit/60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b">60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-32803>CVE-2021-32803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution (tar): 6.1.2</p>
<p>Direct dependency fix Resolution (canvas): 2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-32803 (High) detected in tar-6.1.0.tgz - ## CVE-2021-32803 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-6.1.0.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.1.0.tgz">https://registry.npmjs.org/tar/-/tar-6.1.0.tgz</a></p>
<p>Path to dependency file: /buildscripts/libdeps/graph_visualizer_web_stack/package.json</p>
<p>Path to vulnerable library: /buildscripts/libdeps/graph_visualizer_web_stack/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- canvas-2.8.0.tgz (Root Library)
- node-pre-gyp-1.0.5.tgz
- :x: **tar-6.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dreamboy9/mongo/commit/60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b">60ef70ebd8d46f4c893b3fb90ccf2616f8e21d2b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.
<p>Publish Date: 2021-08-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-32803>CVE-2021-32803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p>
<p>Release Date: 2021-08-03</p>
<p>Fix Resolution (tar): 6.1.2</p>
<p>Direct dependency fix Resolution (canvas): 2.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file buildscripts libdeps graph visualizer web stack package json path to vulnerable library buildscripts libdeps graph visualizer web stack node modules tar package json dependency hierarchy canvas tgz root library node pre gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite vulnerability via insufficient symlink protection node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory this order of operations resulted in the directory being created and added to the node tar directory cache when a directory is present in the directory cache subsequent calls to mkdir for that directory are skipped however this is also where node tar checks for symlinks occur by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite this issue was addressed in releases and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct dependency fix resolution canvas step up your open source security game with mend
| 0
|
116,740
| 4,705,949,200
|
IssuesEvent
|
2016-10-13 15:47:12
|
bedita/bedita
|
https://api.github.com/repos/bedita/bedita
|
opened
|
Validate input data in bedita setup shell script
|
Priority - Low Topic - Core Type - Enhancement
|
To mitigate potential risks in `bedita setup` script when writing database connection data in `config/app.php` we should validate the user input data.
Using the `Cake\Filesystem\File` class also helps writing file more safely.
See https://github.com/bedita/bedita/pull/1005#pullrequestreview-4083907
|
1.0
|
Validate input data in bedita setup shell script - To mitigate potential risks in `bedita setup` script when writing database connection data in `config/app.php` we should validate the user input data.
Using the `Cake\Filesystem\File` class also helps writing file more safely.
See https://github.com/bedita/bedita/pull/1005#pullrequestreview-4083907
|
non_main
|
validate input data in bedita setup shell script to mitigate potential risks in bedita setup script when writing database connection data in config app php we should validate the user input data using the cake filesystem file class also helps writing file more safely see
| 0
|
3,672
| 15,035,942,889
|
IssuesEvent
|
2021-02-02 14:42:55
|
IITIDIDX597/sp_2021_team1
|
https://api.github.com/repos/IITIDIDX597/sp_2021_team1
|
opened
|
Usage analytics
|
Epic: 1 Consuming Information Epic: 5 Maintaining the system Story Week 3
|
**Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. Clinicians will be able to view the most popular articles or research areas and areas that need further focus for the lab.
### **Story Details:**
As a: admin/analyst
I want: to see what articles clinicians are consuming
So that: I can get insight into what clinicians are consuming compared to the daily operations of the clinic
|
True
|
Usage analytics - **Project Goal:** S Lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way, while at the same time foster deeper learning experiences in order to deliver better AbilityLab Patient care.
**Hill Statement:** Individual Clinicians can reference relevant, continuously evolving information for their patient's therapy needs to self-manage their approach & patient care plan development in a single platform.
**Sub-Hill Statements:**
1. Clinicians will be able to view the most popular articles or research areas and areas that need further focus for the lab.
### **Story Details:**
As a: admin/analyst
I want: to see what articles clinicians are consuming
So that: I can get insight into what clinicians are consuming compared to the daily operations of the clinic
|
main
|
usage analytics project goal s lab is a tailored integrative learning and collaboration platform for clinicians that combines the latest research and tacit knowledge gained from experience in a practical way while at the same time foster deeper learning experiences in order to deliver better abilitylab patient care hill statement individual clinicians can reference relevant continuously evolving information for their patient s therapy needs to self manage their approach patient care plan development in a single platform sub hill statements clinicians will be able to view the most popular articles or research areas and areas that need further focus for the lab story details as a admin analyst i want to see what articles clinicians are consuming so that i can get insight into what clinicians are consuming compared to the daily operations of the clinic
| 1
|
183,190
| 21,716,140,498
|
IssuesEvent
|
2022-05-10 18:06:44
|
argoproj/argo-events
|
https://api.github.com/repos/argoproj/argo-events
|
opened
|
Unused function parameters
|
security
|
<meta charset="utf-8"><b style="font-weight:normal;" id="docs-internal-guid-86568da2-7fff-8ed4-ce08-74d0fc6fbd50"><div dir="ltr" style="margin-left:0pt;" align="left">
Severity | Informational
-- | --
Difficulty | High
Target |
</div><br /><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:11pt;font-family:Arial;color:#000000;background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;white-space:pre-wrap;">The following functions accept parameters that are not used in the functions body:</span></p><br /></b>
<meta charset="utf-8"><b style="font-weight:normal;" id="docs-internal-guid-223d8f79-7fff-d080-b3d5-98c74e4f0b6b"><div dir="ltr" style="margin-left:0pt;" align="left">
Function signature | File | Unused parameter
-- | -- | --
func (e *expr) evaluatePostfix(vars []string, set term, postfix []string) bool | https://github.com/argoproj/argo-events/blob/master/common/boolminifier.go | vars
func (r *reconciler) reconcile(ctx context.Context, eventSource *v1alpha1.EventSource) error | https://github.com/argoproj/argo-events/blob/master/controllers/eventsource/controller.go | ctx
func (el *EventListener) processMessage(ctx context.Context, message *sqslib.Message, dispatch func([]byte, ...eventsourcecommon.Option) error, ack func(), log *zap.SugaredLogger) | https://github.com/argoproj/argo-events/blob/master/eventsources/sources/awssqs/start.go | ctx
getHook := func(hooks []*gitlab.ProjectHook, url string, events []string) *gitlab.ProjectHook | https://github.com/argoproj/argo-events/blob/master/eventsources/sources/gitlab/start.go#L218 | events
func schema_argo_events_pkg_apis_common_Amount(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Int64OrString(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Metadata(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Resource(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_S3Bucket(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_S3Filter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPConsumeConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPExchangeDeclareConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPQueueBindConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_BitbucketServerRepository(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_CatchupConfiguration(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_ConfigMapPersistence(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_EventSourceFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_KafkaConsumerGroup(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_OwnedRepositories(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_Selector(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_StorageGridFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_WatchPathConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_ConditionsResetByTime(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_DataFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_EventDependencyTransformer(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_FileArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_GitRemoteConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_LogTrigger(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_PayloadField(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_RateLimit(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_StatusPolicy(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_TimeFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_TriggerParameterSource(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_URLArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
</div><br /><br /><br /><br /><br /><br /></b>
|
True
|
Unused function parameters - <meta charset="utf-8"><b style="font-weight:normal;" id="docs-internal-guid-86568da2-7fff-8ed4-ce08-74d0fc6fbd50"><div dir="ltr" style="margin-left:0pt;" align="left">
Severity | Informational
-- | --
Difficulty | High
Target |
</div><br /><p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:11pt;font-family:Arial;color:#000000;background-color:transparent;font-weight:400;font-style:normal;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre;white-space:pre-wrap;">The following functions accept parameters that are not used in the functions body:</span></p><br /></b>
<meta charset="utf-8"><b style="font-weight:normal;" id="docs-internal-guid-223d8f79-7fff-d080-b3d5-98c74e4f0b6b"><div dir="ltr" style="margin-left:0pt;" align="left">
Function signature | File | Unused parameter
-- | -- | --
func (e *expr) evaluatePostfix(vars []string, set term, postfix []string) bool | https://github.com/argoproj/argo-events/blob/master/common/boolminifier.go | vars
func (r *reconciler) reconcile(ctx context.Context, eventSource *v1alpha1.EventSource) error | https://github.com/argoproj/argo-events/blob/master/controllers/eventsource/controller.go | ctx
func (el *EventListener) processMessage(ctx context.Context, message *sqslib.Message, dispatch func([]byte, ...eventsourcecommon.Option) error, ack func(), log *zap.SugaredLogger) | https://github.com/argoproj/argo-events/blob/master/eventsources/sources/awssqs/start.go | ctx
getHook := func(hooks []*gitlab.ProjectHook, url string, events []string) *gitlab.ProjectHook | https://github.com/argoproj/argo-events/blob/master/eventsources/sources/gitlab/start.go#L218 | events
func schema_argo_events_pkg_apis_common_Amount(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Int64OrString(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Metadata(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_Resource(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_S3Bucket(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_argo_events_pkg_apis_common_S3Filter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/common/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPConsumeConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPExchangeDeclareConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_AMQPQueueBindConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_BitbucketServerRepository(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_CatchupConfiguration(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_ConfigMapPersistence(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_EventSourceFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_KafkaConsumerGroup(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_OwnedRepositories(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_Selector(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_StorageGridFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_eventsource_v1alpha1_WatchPathConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/eventsource/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_ConditionsResetByTime(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_DataFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_EventDependencyTransformer(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_FileArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_GitRemoteConfig(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_LogTrigger(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_PayloadField(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_RateLimit(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_StatusPolicy(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_TimeFilter(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_TriggerParameterSource(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
func schema_pkg_apis_sensor_v1alpha1_URLArtifact(ref common.ReferenceCallback) common.OpenAPIDefinition | https://github.com/argoproj/argo-events/blob/master/pkg/apis/sensor/v1alpha1/openapi_generated.go | ref
</div><br /><br /><br /><br /><br /><br /></b>
|
non_main
|
unused function parameters severity informational difficulty high target the following functions accept parameters that are not used in the functions body function signature file unused parameter func e expr evaluatepostfix vars string set term postfix string bool vars func r reconciler reconcile ctx context context eventsource eventsource error ctx func el eventlistener processmessage ctx context context message sqslib message dispatch func byte eventsourcecommon option error ack func log zap sugaredlogger ctx gethook func hooks gitlab projecthook url string events string gitlab projecthook events func schema argo events pkg apis common amount ref common referencecallback common openapidefinition ref func schema argo events pkg apis common ref common referencecallback common openapidefinition ref func schema argo events pkg apis common metadata ref common referencecallback common openapidefinition ref func schema argo events pkg apis common resource ref common referencecallback common openapidefinition ref func schema argo events pkg apis common ref common referencecallback common openapidefinition ref func schema argo events pkg apis common ref common referencecallback common openapidefinition ref func schema pkg apis eventsource amqpconsumeconfig ref common referencecallback common openapidefinition ref func schema pkg apis eventsource amqpexchangedeclareconfig ref common referencecallback common openapidefinition ref func schema pkg apis eventsource amqpqueuebindconfig ref common referencecallback common openapidefinition ref func schema pkg apis eventsource bitbucketserverrepository ref common referencecallback common openapidefinition ref func schema pkg apis eventsource catchupconfiguration ref common referencecallback common openapidefinition ref func schema pkg apis eventsource configmappersistence ref common referencecallback common openapidefinition ref func schema pkg apis eventsource eventsourcefilter ref common referencecallback common openapidefinition ref func schema pkg apis eventsource kafkaconsumergroup ref common referencecallback common openapidefinition ref func schema pkg apis eventsource ownedrepositories ref common referencecallback common openapidefinition ref func schema pkg apis eventsource selector ref common referencecallback common openapidefinition ref func schema pkg apis eventsource storagegridfilter ref common referencecallback common openapidefinition ref func schema pkg apis eventsource watchpathconfig ref common referencecallback common openapidefinition ref func schema pkg apis sensor conditionsresetbytime ref common referencecallback common openapidefinition ref func schema pkg apis sensor datafilter ref common referencecallback common openapidefinition ref func schema pkg apis sensor eventdependencytransformer ref common referencecallback common openapidefinition ref func schema pkg apis sensor fileartifact ref common referencecallback common openapidefinition ref func schema pkg apis sensor gitremoteconfig ref common referencecallback common openapidefinition ref func schema pkg apis sensor logtrigger ref common referencecallback common openapidefinition ref func schema pkg apis sensor payloadfield ref common referencecallback common openapidefinition ref func schema pkg apis sensor ratelimit ref common referencecallback common openapidefinition ref func schema pkg apis sensor statuspolicy ref common referencecallback common openapidefinition ref func schema pkg apis sensor timefilter ref common referencecallback common openapidefinition ref func schema pkg apis sensor triggerparametersource ref common referencecallback common openapidefinition ref func schema pkg apis sensor urlartifact ref common referencecallback common openapidefinition ref
| 0
|
3,639
| 14,713,738,721
|
IssuesEvent
|
2021-01-05 10:49:59
|
Twasi/websocket-obs-java
|
https://api.github.com/repos/Twasi/websocket-obs-java
|
opened
|
Centralize Gson config
|
maintainability
|
Generally in calls that serialize/deserialize objects, a new Gson instance is being created. [Gson instances are thread-safe](https://www.javadoc.io/doc/com.google.code.gson/gson/2.8.0/com/google/gson/Gson.html), so what we can do is initialize GSON in the contructor. This would enable us to centralize configuration, for example to add custom serializers/deserializers, or even a [RuntimeTypeAdapter](https://github.com/google/gson/blob/master/extras/src/main/java/com/google/gson/typeadapters/RuntimeTypeAdapterFactory.java) to better handle the polymorphism inherent in the Request/Response structure.
|
True
|
Centralize Gson config - Generally in calls that serialize/deserialize objects, a new Gson instance is being created. [Gson instances are thread-safe](https://www.javadoc.io/doc/com.google.code.gson/gson/2.8.0/com/google/gson/Gson.html), so what we can do is initialize GSON in the contructor. This would enable us to centralize configuration, for example to add custom serializers/deserializers, or even a [RuntimeTypeAdapter](https://github.com/google/gson/blob/master/extras/src/main/java/com/google/gson/typeadapters/RuntimeTypeAdapterFactory.java) to better handle the polymorphism inherent in the Request/Response structure.
|
main
|
centralize gson config generally in calls that serialize deserialize objects a new gson instance is being created so what we can do is initialize gson in the contructor this would enable us to centralize configuration for example to add custom serializers deserializers or even a to better handle the polymorphism inherent in the request response structure
| 1
|
283
| 3,054,874,950
|
IssuesEvent
|
2015-08-13 07:33:44
|
Homebrew/homebrew
|
https://api.github.com/repos/Homebrew/homebrew
|
closed
|
Install all packages from particular Tap.
|
features maintainer feedback
|
Two examples of lightweight taps that one may consider to install all packages: dupes and completions.
|
True
|
Install all packages from particular Tap. - Two examples of lightweight taps that one may consider to install all packages: dupes and completions.
|
main
|
install all packages from particular tap two examples of lightweight taps that one may consider to install all packages dupes and completions
| 1
|
390,854
| 11,564,762,340
|
IssuesEvent
|
2020-02-20 09:17:10
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
app.adjust.com - see bug description
|
browser-fenix engine-gecko priority-normal
|
<!-- @browser: Firefox Mobile 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:73.0) Gecko/73.0 Firefox/73.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://app.adjust.com/6ypk9q?deep_link=spotify://track/5gS8whHdcpbkdz0qonQZF8
**Browser / Version**: Firefox Mobile 73.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: link should open spotify but it doesnt
**Steps to Reproduce**:
Instead of opening spotify it saied address not found.
Not working only on firefox
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/06f14e6c-d06a-49c1-92f9-541b8fcd9471.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
app.adjust.com - see bug description - <!-- @browser: Firefox Mobile 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:73.0) Gecko/73.0 Firefox/73.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://app.adjust.com/6ypk9q?deep_link=spotify://track/5gS8whHdcpbkdz0qonQZF8
**Browser / Version**: Firefox Mobile 73.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: link should open spotify but it doesnt
**Steps to Reproduce**:
Instead of opening spotify it saied address not found.
Not working only on firefox
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/2/06f14e6c-d06a-49c1-92f9-541b8fcd9471.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_main
|
app adjust com see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description link should open spotify but it doesnt steps to reproduce instead of opening spotify it saied address not found not working only on firefox view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
7,320
| 2,893,298,111
|
IssuesEvent
|
2015-06-15 17:14:33
|
inbaz/ers
|
https://api.github.com/repos/inbaz/ers
|
closed
|
bug: blank page for admin/bankaccount/format/2
|
bug prio1 test
|
0.3.60
-> blank page after defining the format of the csv of credit card
|
1.0
|
bug: blank page for admin/bankaccount/format/2 - 0.3.60
-> blank page after defining the format of the csv of credit card
|
non_main
|
bug blank page for admin bankaccount format blank page after defining the format of the csv of credit card
| 0
|
38,829
| 15,801,816,273
|
IssuesEvent
|
2021-04-03 06:41:59
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
How to set Health Check URL for a webapp
|
Pri2 assigned-to-author doc-enhancement service-health/svc triaged
|
This document explains everything, but how to set HealthCheck, before setting up Alerts.
Can you please explain detail on how to set health check URL for a WebApp Resource, especially using Docker in ASE ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: aee61956-ad7e-4573-3bbb-461f3043bcb1
* Version Independent ID: cdf50dff-210a-54d9-cc3c-717dc690d4b3
* Content: [Azure Resource Health overview](https://docs.microsoft.com/en-us/azure/service-health/resource-health-overview)
* Content Source: [articles/service-health/resource-health-overview.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-health/resource-health-overview.md)
* Service: **service-health**
* GitHub Login: @stephbaron
* Microsoft Alias: **stbaron**
|
1.0
|
How to set Health Check URL for a webapp -
This document explains everything, but how to set HealthCheck, before setting up Alerts.
Can you please explain detail on how to set health check URL for a WebApp Resource, especially using Docker in ASE ?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: aee61956-ad7e-4573-3bbb-461f3043bcb1
* Version Independent ID: cdf50dff-210a-54d9-cc3c-717dc690d4b3
* Content: [Azure Resource Health overview](https://docs.microsoft.com/en-us/azure/service-health/resource-health-overview)
* Content Source: [articles/service-health/resource-health-overview.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-health/resource-health-overview.md)
* Service: **service-health**
* GitHub Login: @stephbaron
* Microsoft Alias: **stbaron**
|
non_main
|
how to set health check url for a webapp this document explains everything but how to set healthcheck before setting up alerts can you please explain detail on how to set health check url for a webapp resource especially using docker in ase document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service health github login stephbaron microsoft alias stbaron
| 0
|
111,440
| 11,732,603,753
|
IssuesEvent
|
2020-03-11 04:16:43
|
Students-of-the-city-of-Kostroma/trpo_automation
|
https://api.github.com/repos/Students-of-the-city-of-Kostroma/trpo_automation
|
closed
|
Описать и реализовать процесс валидации письма
|
Epic Sprint 1 Sprint 2 documentation realization
|
Водопад фильтров с чётким приоритетом
На вход подаются тестовые письма, сформированные командой тестирования, разделённые по разным категориям
[Ссылка](https://docs.google.com/document/d/1knlDwZ4lGp7NlXlYRQp_11sQAKZlG4qBdP2qyhC0TQY/edit) на письма
После проверки писем каждому письму назначается код. Ссылка на соответствие состояний и кодов [здесь](https://docs.google.com/document/d/12mSzNBvU_WRPhW6snqCZLgmx2ziJRMnahdztMTvb8wk/edit#heading=h.gjdgxs)
Результаты
- Реализация — класс на языке python, отвечающий за реализацию данного функционала
- Документация
- - Пояснительная записка, в которой будет прописан цикл алгоритма проверки
- - Либо диаграмма вызовов
|
1.0
|
Описать и реализовать процесс валидации письма - Водопад фильтров с чётким приоритетом
На вход подаются тестовые письма, сформированные командой тестирования, разделённые по разным категориям
[Ссылка](https://docs.google.com/document/d/1knlDwZ4lGp7NlXlYRQp_11sQAKZlG4qBdP2qyhC0TQY/edit) на письма
После проверки писем каждому письму назначается код. Ссылка на соответствие состояний и кодов [здесь](https://docs.google.com/document/d/12mSzNBvU_WRPhW6snqCZLgmx2ziJRMnahdztMTvb8wk/edit#heading=h.gjdgxs)
Результаты
- Реализация — класс на языке python, отвечающий за реализацию данного функционала
- Документация
- - Пояснительная записка, в которой будет прописан цикл алгоритма проверки
- - Либо диаграмма вызовов
|
non_main
|
описать и реализовать процесс валидации письма водопад фильтров с чётким приоритетом на вход подаются тестовые письма сформированные командой тестирования разделённые по разным категориям на письма после проверки писем каждому письму назначается код ссылка на соответствие состояний и кодов результаты реализация — класс на языке python отвечающий за реализацию данного функционала документация пояснительная записка в которой будет прописан цикл алгоритма проверки либо диаграмма вызовов
| 0
|
5,617
| 28,101,303,640
|
IssuesEvent
|
2023-03-30 19:44:39
|
MozillaFoundation/foundation.mozilla.org
|
https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org
|
opened
|
Factory generates werid text for `article_listing_what_to_read_next.html`
|
engineering qa maintain
|
I happened to spot a weird `A / ABLE` label show up on Percy's snapshot. We should investigate where it is coming from and why.
- [Full page snapshot](https://images.percy.io/0812bfe2823ed1de99f94d9a0b5d66452ff1d6bf6710779a29bc4f55edbaaad9)
- Label in question (cropped screenshot from above):

Not sure if it's coincidental but it seems to match where the date related regression that Percy always nags about.

---
Related ticket: https://github.com/MozillaFoundation/foundation.mozilla.org/issues/10328
|
True
|
Factory generates werid text for `article_listing_what_to_read_next.html` - I happened to spot a weird `A / ABLE` label show up on Percy's snapshot. We should investigate where it is coming from and why.
- [Full page snapshot](https://images.percy.io/0812bfe2823ed1de99f94d9a0b5d66452ff1d6bf6710779a29bc4f55edbaaad9)
- Label in question (cropped screenshot from above):

Not sure if it's coincidental but it seems to match where the date related regression that Percy always nags about.

---
Related ticket: https://github.com/MozillaFoundation/foundation.mozilla.org/issues/10328
|
main
|
factory generates werid text for article listing what to read next html i happened to spot a weird a able label show up on percy s snapshot we should investigate where it is coming from and why label in question cropped screenshot from above not sure if it s coincidental but it seems to match where the date related regression that percy always nags about related ticket
| 1
|
141,966
| 19,012,432,340
|
IssuesEvent
|
2021-11-23 10:46:23
|
Yann-dv/_Pekocko
|
https://api.github.com/repos/Yann-dv/_Pekocko
|
opened
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz
|
security vulnerability
|
## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: _Pekocko/package.json</p>
<p>Path to vulnerable library: _Pekocko/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.2.tgz (Root Library)
- webpack-dev-server-3.1.8.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Yann-dv/_Pekocko/commit/29a980e4dad903d391a0354b9cb7c71642e2c2fe">29a980e4dad903d391a0354b9cb7c71642e2c2fe</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - ## CVE-2020-7693 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary>
<p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p>
<p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p>
<p>Path to dependency file: _Pekocko/package.json</p>
<p>Path to vulnerable library: _Pekocko/node_modules/sockjs/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.2.tgz (Root Library)
- webpack-dev-server-3.1.8.tgz
- :x: **sockjs-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Yann-dv/_Pekocko/commit/29a980e4dad903d391a0354b9cb7c71642e2c2fe">29a980e4dad903d391a0354b9cb7c71642e2c2fe</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.
<p>Publish Date: 2020-07-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: sockjs - 0.3.20</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
cve medium detected in sockjs tgz cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file pekocko package json path to vulnerable library pekocko node modules sockjs package json dependency hierarchy build angular tgz root library webpack dev server tgz x sockjs tgz vulnerable library found in head commit a href found in base branch main vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sockjs step up your open source security game with whitesource
| 0
|
3,195
| 12,227,828,536
|
IssuesEvent
|
2020-05-03 16:49:55
|
gfleetwood/asteres
|
https://api.github.com/repos/gfleetwood/asteres
|
opened
|
HypothesisWorks/hypothesis (8685799)
|
Python maintain
|
https://github.com/HypothesisWorks/hypothesis
Hypothesis is a powerful, flexible, and easy to use library for property-based testing.
|
True
|
HypothesisWorks/hypothesis (8685799) - https://github.com/HypothesisWorks/hypothesis
Hypothesis is a powerful, flexible, and easy to use library for property-based testing.
|
main
|
hypothesisworks hypothesis hypothesis is a powerful flexible and easy to use library for property based testing
| 1
|
4,877
| 25,024,581,994
|
IssuesEvent
|
2022-11-04 06:20:33
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
closed
|
[Question]: What is the correct way to implement a Progress Modal?
|
type: question ❓ status: waiting for maintainer response 💬
|
### Question for Carbon
Trying to implement the Progress Modal variant, I am unable to find anything in the Modal component API that can keep track of the Form progress inside the Modal component.
References:
- [Modal variants](https://carbondesignsystem.com/components/modal/usage/#modal-variants)
- [Component API Docs](https://react.carbondesignsystem.com/?path=/docs/components-modal--default)
Would it be upto the developer to create a Progress Form and add it inside the Modal component and manage the state of the form ourselves? If yes, how can we manage the action button at the Modal Footer section?
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
|
True
|
[Question]: What is the correct way to implement a Progress Modal? - ### Question for Carbon
Trying to implement the Progress Modal variant, I am unable to find anything in the Modal component API that can keep track of the Form progress inside the Modal component.
References:
- [Modal variants](https://carbondesignsystem.com/components/modal/usage/#modal-variants)
- [Component API Docs](https://react.carbondesignsystem.com/?path=/docs/components-modal--default)
Would it be upto the developer to create a Progress Form and add it inside the Modal component and manage the state of the form ourselves? If yes, how can we manage the action button at the Modal Footer section?
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/carbon-design-system/carbon/blob/f555616971a03fd454c0f4daea184adf41fff05b/.github/CODE_OF_CONDUCT.md)
|
main
|
what is the correct way to implement a progress modal question for carbon trying to implement the progress modal variant i am unable to find anything in the modal component api that can keep track of the form progress inside the modal component references would it be upto the developer to create a progress form and add it inside the modal component and manage the state of the form ourselves if yes how can we manage the action button at the modal footer section code of conduct i agree to follow this project s
| 1
|
736,762
| 25,486,637,316
|
IssuesEvent
|
2022-11-26 13:43:37
|
Pictalk-speech-made-easy/pictalk-frontend
|
https://api.github.com/repos/Pictalk-speech-made-easy/pictalk-frontend
|
closed
|
Sharing menu enhancements
|
enhancement alex Low priority
|
- In groups section put a "+" to redirect or open popup of group creation
- Put colors (green?/red) to indicate that the user is added to sharing or not
- The image of no groups is too big
|
1.0
|
Sharing menu enhancements - - In groups section put a "+" to redirect or open popup of group creation
- Put colors (green?/red) to indicate that the user is added to sharing or not
- The image of no groups is too big
|
non_main
|
sharing menu enhancements in groups section put a to redirect or open popup of group creation put colors green red to indicate that the user is added to sharing or not the image of no groups is too big
| 0
|
222,080
| 7,422,933,865
|
IssuesEvent
|
2018-03-23 02:14:19
|
bugfroggy/Quickplay2.0
|
https://api.github.com/repos/bugfroggy/Quickplay2.0
|
closed
|
Add Google Analytics to the .jars themselves
|
Enhancement Priority: MED
|
I believe Google Analytics has a Java API that I can use for analytical tracking. This should be added. Apparently there's also an "Event" system that I could use for tracking what buttons users press, etc.
|
1.0
|
Add Google Analytics to the .jars themselves - I believe Google Analytics has a Java API that I can use for analytical tracking. This should be added. Apparently there's also an "Event" system that I could use for tracking what buttons users press, etc.
|
non_main
|
add google analytics to the jars themselves i believe google analytics has a java api that i can use for analytical tracking this should be added apparently there s also an event system that i could use for tracking what buttons users press etc
| 0
|
57,410
| 14,145,609,197
|
IssuesEvent
|
2020-11-10 17:59:03
|
stefanfreitag/cdktf-budget-notifier
|
https://api.github.com/repos/stefanfreitag/cdktf-budget-notifier
|
closed
|
CVE-2020-26137 (Medium) detected in pip20.1.1 - autoclosed
|
security vulnerability
|
## CVE-2020-26137 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pip20.1.1</b></p></summary>
<p>
<p>The Python package installer</p>
<p>Library home page: <a href=https://github.com/pypa/pip.git>https://github.com/pypa/pip.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/stefanfreitag/cdktf-budget-notifier/commit/1f3e471a9114fefc537152f958857756d194b7a1">1f3e471a9114fefc537152f958857756d194b7a1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116.
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26137>CVE-2020-26137</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137</a></p>
<p>Release Date: 2020-09-30</p>
<p>Fix Resolution: 1.25.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-26137 (Medium) detected in pip20.1.1 - autoclosed - ## CVE-2020-26137 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pip20.1.1</b></p></summary>
<p>
<p>The Python package installer</p>
<p>Library home page: <a href=https://github.com/pypa/pip.git>https://github.com/pypa/pip.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/stefanfreitag/cdktf-budget-notifier/commit/1f3e471a9114fefc537152f958857756d194b7a1">1f3e471a9114fefc537152f958857756d194b7a1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (0)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116.
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-26137>CVE-2020-26137</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26137</a></p>
<p>Release Date: 2020-09-30</p>
<p>Fix Resolution: 1.25.9</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
cve medium detected in autoclosed cve medium severity vulnerability vulnerable library the python package installer library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details before allows crlf injection if the attacker controls the http request method as demonstrated by inserting cr and lf control characters in the first argument of putrequest note this is similar to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
3,715
| 15,308,330,185
|
IssuesEvent
|
2021-02-24 22:16:50
|
carbon-design-system/carbon
|
https://api.github.com/repos/carbon-design-system/carbon
|
closed
|
'headers' prop in DataTableSkeleton component is not working
|
component: data-table status: waiting for maintainer response 💬 type: bug 🐛
|
According to the [DataTableSkeleton ](https://react.carbondesignsystem.com/?path=/docs/datatableskeleton--default) docs, the `headers` prop should specify the displayed headers. But whatever I pass to the prop doesn't change what's being rendered.
> Is this issue related to a specific component?
- DataTableSkeleton
> What did you expect to happen? What happened instead? What would you like to
> see changed?
The DataTableSkeleton should show the specified headers.
> What browser are you working in?
- Chrome `86.0.4240.198 (Official Build) (64-bit)`
## Steps to reproduce the issue
- https://codesandbox.io/s/datatable-skeleton-headers-bug-8mifu?file=/src/index.js
|
True
|
'headers' prop in DataTableSkeleton component is not working - According to the [DataTableSkeleton ](https://react.carbondesignsystem.com/?path=/docs/datatableskeleton--default) docs, the `headers` prop should specify the displayed headers. But whatever I pass to the prop doesn't change what's being rendered.
> Is this issue related to a specific component?
- DataTableSkeleton
> What did you expect to happen? What happened instead? What would you like to
> see changed?
The DataTableSkeleton should show the specified headers.
> What browser are you working in?
- Chrome `86.0.4240.198 (Official Build) (64-bit)`
## Steps to reproduce the issue
- https://codesandbox.io/s/datatable-skeleton-headers-bug-8mifu?file=/src/index.js
|
main
|
headers prop in datatableskeleton component is not working according to the docs the headers prop should specify the displayed headers but whatever i pass to the prop doesn t change what s being rendered is this issue related to a specific component datatableskeleton what did you expect to happen what happened instead what would you like to see changed the datatableskeleton should show the specified headers what browser are you working in chrome official build bit steps to reproduce the issue
| 1
|
44,508
| 9,598,371,805
|
IssuesEvent
|
2019-05-10 01:16:27
|
pnp/pnpjs
|
https://api.github.com/repos/pnp/pnpjs
|
closed
|
hubsites.ts - HubSites error on SitePages
|
area: code status: details needed type: someting isn't working
|
### Category
- [ ] Enhancement
- [ X ] Bug
- [ ] Question
- [ ] Documentation gap/issue
### Version
Please specify what version of the library you are using: [ 1.3.2 ]
Please specify what version(s) of SharePoint you are targeting: [ SP Online ]
### Desired Behavior
sp.HubSites should return hubsites also on SitePages.
### Observed Behavior
sp.HubSites does not return hubsites on SitePages.
### Steps to Reproduce
create component getting hubsites and put on a site page
## Proposed fix:
change @defaultPath("_api/hubsites")
to @defaultPath("/_api/hubsites")
|
1.0
|
hubsites.ts - HubSites error on SitePages -
### Category
- [ ] Enhancement
- [ X ] Bug
- [ ] Question
- [ ] Documentation gap/issue
### Version
Please specify what version of the library you are using: [ 1.3.2 ]
Please specify what version(s) of SharePoint you are targeting: [ SP Online ]
### Desired Behavior
sp.HubSites should return hubsites also on SitePages.
### Observed Behavior
sp.HubSites does not return hubsites on SitePages.
### Steps to Reproduce
create component getting hubsites and put on a site page
## Proposed fix:
change @defaultPath("_api/hubsites")
to @defaultPath("/_api/hubsites")
|
non_main
|
hubsites ts hubsites error on sitepages category enhancement bug question documentation gap issue version please specify what version of the library you are using please specify what version s of sharepoint you are targeting desired behavior sp hubsites should return hubsites also on sitepages observed behavior sp hubsites does not return hubsites on sitepages steps to reproduce create component getting hubsites and put on a site page proposed fix change defaultpath api hubsites to defaultpath api hubsites
| 0
|
5,426
| 27,219,536,431
|
IssuesEvent
|
2023-02-21 03:11:24
|
schema-inspector/schema-inspector
|
https://api.github.com/repos/schema-inspector/schema-inspector
|
closed
|
Replace use of async dependency with native async code
|
Stale maintainability
|
Since ES6 came out, we have native language tools for managing async work, like `Promise` and `Promise.all`. We can remove a dependency to simplify using the library and increase its maintainability by replacing async with these native tools. If when attempting this, we find that async provides good value compared to native code alone (and we'd be reinventing the wheel), then we can keep it.
|
True
|
Replace use of async dependency with native async code - Since ES6 came out, we have native language tools for managing async work, like `Promise` and `Promise.all`. We can remove a dependency to simplify using the library and increase its maintainability by replacing async with these native tools. If when attempting this, we find that async provides good value compared to native code alone (and we'd be reinventing the wheel), then we can keep it.
|
main
|
replace use of async dependency with native async code since came out we have native language tools for managing async work like promise and promise all we can remove a dependency to simplify using the library and increase its maintainability by replacing async with these native tools if when attempting this we find that async provides good value compared to native code alone and we d be reinventing the wheel then we can keep it
| 1
|
182,827
| 30,989,698,931
|
IssuesEvent
|
2023-08-09 02:49:00
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
opened
|
Support headings props to the Menu Component.
|
Design System Pod
|
I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design

|
1.0
|
Support headings props to the Menu Component. - I noticed that the menu component doesn't have the ability to support a heading as a prop. This could potentially limit its usefulness in various scenarios.


Design

|
non_main
|
support headings props to the menu component i noticed that the menu component doesn t have the ability to support a heading as a prop this could potentially limit its usefulness in various scenarios design
| 0
|
214,486
| 16,594,166,902
|
IssuesEvent
|
2021-06-01 11:28:27
|
kubernetes-sigs/cloud-provider-azure
|
https://api.github.com/repos/kubernetes-sigs/cloud-provider-azure
|
closed
|
failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters"
|
kind/failing-test
|
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
-->
**What happened**:
pull-cloud-provider-azure-e2e-ccm e2e test is failing with the following error:
```
E0601 01:12:50.892424 1 azure_loadbalancer.go:1439] reconcileLoadBalancer: failed to get load balancer for service "e2e-tests-service-d6xvj/annotation-test", error: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:50.892463 1 azure_loadbalancer.go:104] reconcileLoadBalancer(e2e-tests-service-d6xvj/annotation-test) failed: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:50.892546 1 controller.go:307] error processing service e2e-tests-service-d6xvj/annotation-test (will retry): failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
I0601 01:12:50.892616 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters"
I0601 01:12:55.892735 1 controller.go:400] Ensuring load balancer for service e2e-tests-service-d6xvj/annotation-test
I0601 01:12:55.892801 1 azure_loadbalancer.go:1429] reconcileLoadBalancer for service(e2e-tests-service-d6xvj/annotation-test) - wantLb(true): started
I0601 01:12:55.892943 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0601 01:12:55.936771 1 azure_backoff.go:311] LoadBalancerClient.List(kubetest-iw8d0boy) success
I0601 01:12:56.007377 1 azure_backoff.go:330] PublicIPAddressesClient.List(e2e-6a03) success
E0601 01:12:56.007417 1 azure_loadbalancer.go:1439] reconcileLoadBalancer: failed to get load balancer for service "e2e-tests-service-d6xvj/annotation-test", error: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:56.007427 1 azure_loadbalancer.go:104] reconcileLoadBalancer(e2e-tests-service-d6xvj/annotation-test) failed: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:56.007645 1 controller.go:307] error processing service e2e-tests-service-d6xvj/annotation-test (will retry): failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
I0601 01:12:56.007691 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters"
```
**What you expected to happen**:
E2E tests should pass without any failures.
**How to reproduce it**:
See https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cloud-provider-azure/647/pull-cloud-provider-azure-e2e-ccm/1399529893540139008/artifacts/k8s-master-42849649-2/cloud-controller-manager-k8s-master-42849649-2_kube-system_cloud-controller-manager-11ce53213133b32672f491c18fb7be2e0ceb563adb4875412ea7989703d4853a.log.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
1.0
|
failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters" - <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
-->
**What happened**:
pull-cloud-provider-azure-e2e-ccm e2e test is failing with the following error:
```
E0601 01:12:50.892424 1 azure_loadbalancer.go:1439] reconcileLoadBalancer: failed to get load balancer for service "e2e-tests-service-d6xvj/annotation-test", error: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:50.892463 1 azure_loadbalancer.go:104] reconcileLoadBalancer(e2e-tests-service-d6xvj/annotation-test) failed: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:50.892546 1 controller.go:307] error processing service e2e-tests-service-d6xvj/annotation-test (will retry): failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
I0601 01:12:50.892616 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters"
I0601 01:12:55.892735 1 controller.go:400] Ensuring load balancer for service e2e-tests-service-d6xvj/annotation-test
I0601 01:12:55.892801 1 azure_loadbalancer.go:1429] reconcileLoadBalancer for service(e2e-tests-service-d6xvj/annotation-test) - wantLb(true): started
I0601 01:12:55.892943 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0601 01:12:55.936771 1 azure_backoff.go:311] LoadBalancerClient.List(kubetest-iw8d0boy) success
I0601 01:12:56.007377 1 azure_backoff.go:330] PublicIPAddressesClient.List(e2e-6a03) success
E0601 01:12:56.007417 1 azure_loadbalancer.go:1439] reconcileLoadBalancer: failed to get load balancer for service "e2e-tests-service-d6xvj/annotation-test", error: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:56.007427 1 azure_loadbalancer.go:104] reconcileLoadBalancer(e2e-tests-service-d6xvj/annotation-test) failed: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
E0601 01:12:56.007645 1 controller.go:307] error processing service e2e-tests-service-d6xvj/annotation-test (will retry): failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters
I0601 01:12:56.007691 1 event.go:291] "Event occurred" object="e2e-tests-service-d6xvj/annotation-test" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: get(e2e-tests-service-d6xvj/annotation-test): lb(k8s-master-internal-lb-42849649) - failed to filter frontend IP configs with error: serviceOwnsFrontendIP: wrong parameters"
```
**What you expected to happen**:
E2E tests should pass without any failures.
**How to reproduce it**:
See https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cloud-provider-azure/647/pull-cloud-provider-azure-e2e-ccm/1399529893540139008/artifacts/k8s-master-42849649-2/cloud-controller-manager-k8s-master-42849649-2_kube-system_cloud-controller-manager-11ce53213133b32672f491c18fb7be2e0ceb563adb4875412ea7989703d4853a.log.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
non_main
|
failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters please use this template while reporting a bug and provide as much info as possible not doing so may result in your bug not being addressed in a timely manner thanks what happened pull cloud provider azure ccm test is failing with the following error azure loadbalancer go reconcileloadbalancer failed to get load balancer for service tests service annotation test error get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters azure loadbalancer go reconcileloadbalancer tests service annotation test failed get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters controller go error processing service tests service annotation test will retry failed to ensure load balancer get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters event go event occurred object tests service annotation test kind service apiversion type warning reason syncloadbalancerfailed message error syncing load balancer failed to ensure load balancer get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters controller go ensuring load balancer for service tests service annotation test azure loadbalancer go reconcileloadbalancer for service tests service annotation test wantlb true started event go event occurred object tests service annotation test kind service apiversion type normal reason ensuringloadbalancer message ensuring load balancer azure backoff go loadbalancerclient list kubetest success azure backoff go publicipaddressesclient list success azure loadbalancer go reconcileloadbalancer failed to get load balancer for service tests service annotation test error get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters azure loadbalancer go reconcileloadbalancer tests service annotation test failed get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters controller go error processing service tests service annotation test will retry failed to ensure load balancer get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters event go event occurred object tests service annotation test kind service apiversion type warning reason syncloadbalancerfailed message error syncing load balancer failed to ensure load balancer get tests service annotation test lb master internal lb failed to filter frontend ip configs with error serviceownsfrontendip wrong parameters what you expected to happen tests should pass without any failures how to reproduce it see anything else we need to know environment kubernetes version use kubectl version os e g from etc os release kernel e g uname a install tools others
| 0
|
32,033
| 26,371,984,876
|
IssuesEvent
|
2023-01-11 21:33:11
|
ejgallego/coq-serapi
|
https://api.github.com/repos/ejgallego/coq-serapi
|
closed
|
Missing build dependency on mathcomp's ssreflect
|
kind: bug kind: testing kind: infrastructure
|
While trying to package coq-serapi for Debian, it turns out tests/genarg/move.v wants mathcomp's ssrnat and eqtype.
I guess the opam file needs a dep on coq-mathcomp-ssreflect in addition to what is already declared.
|
1.0
|
Missing build dependency on mathcomp's ssreflect - While trying to package coq-serapi for Debian, it turns out tests/genarg/move.v wants mathcomp's ssrnat and eqtype.
I guess the opam file needs a dep on coq-mathcomp-ssreflect in addition to what is already declared.
|
non_main
|
missing build dependency on mathcomp s ssreflect while trying to package coq serapi for debian it turns out tests genarg move v wants mathcomp s ssrnat and eqtype i guess the opam file needs a dep on coq mathcomp ssreflect in addition to what is already declared
| 0
|
36,771
| 2,812,319,801
|
IssuesEvent
|
2015-05-18 07:51:53
|
MeoMix/StreamusChromeExtension
|
https://api.github.com/repos/MeoMix/StreamusChromeExtension
|
opened
|
Refactor usages of z-index
|
priority:unscheduled scope:small type:refactor
|
My usages of z-index have gotten a bit messed up. It would be good to sit down and go over what is needed from top-to-bottom and get them completely ironed out and sensible. Right now I have to use `!important` to get what I need in a few spots.
For reference, the z-index declarations are located here: https://github.com/MeoMix/StreamusChromeExtension/blob/Development/src/less/utility.less
It would probably be prudent to move z-index into its own file, too. You can find a reference for how it should look here: https://medium.com/@fat/mediums-css-is-actually-pretty-fucking-good-b8e2a6c78b06
|
1.0
|
Refactor usages of z-index - My usages of z-index have gotten a bit messed up. It would be good to sit down and go over what is needed from top-to-bottom and get them completely ironed out and sensible. Right now I have to use `!important` to get what I need in a few spots.
For reference, the z-index declarations are located here: https://github.com/MeoMix/StreamusChromeExtension/blob/Development/src/less/utility.less
It would probably be prudent to move z-index into its own file, too. You can find a reference for how it should look here: https://medium.com/@fat/mediums-css-is-actually-pretty-fucking-good-b8e2a6c78b06
|
non_main
|
refactor usages of z index my usages of z index have gotten a bit messed up it would be good to sit down and go over what is needed from top to bottom and get them completely ironed out and sensible right now i have to use important to get what i need in a few spots for reference the z index declarations are located here it would probably be prudent to move z index into its own file too you can find a reference for how it should look here
| 0
|
4,617
| 23,913,551,446
|
IssuesEvent
|
2022-09-09 10:27:55
|
ipfs/kubo
|
https://api.github.com/repos/ipfs/kubo
|
closed
|
Add Plugin's Config default configuration interface
|
kind/enhancement need/triage need/maintainer-input
|
### Checklist
- [X] My issue is specific & actionable.
- [X] I am not suggesting a protocol enhancement.
- [X] I have searched on the [issue tracker](https://github.com/ipfs/go-ipfs/issues?q=is%3Aissue) for my issue.
### Description
Adding the Plugin's Config default configuration interface is to write the Plugin's default configuration to the config file at init time.
|
True
|
Add Plugin's Config default configuration interface - ### Checklist
- [X] My issue is specific & actionable.
- [X] I am not suggesting a protocol enhancement.
- [X] I have searched on the [issue tracker](https://github.com/ipfs/go-ipfs/issues?q=is%3Aissue) for my issue.
### Description
Adding the Plugin's Config default configuration interface is to write the Plugin's default configuration to the config file at init time.
|
main
|
add plugin s config default configuration interface checklist my issue is specific actionable i am not suggesting a protocol enhancement i have searched on the for my issue description adding the plugin s config default configuration interface is to write the plugin s default configuration to the config file at init time
| 1
|
2,744
| 9,782,387,999
|
IssuesEvent
|
2019-06-07 23:22:14
|
Fuzzik/aperture-flechette-emitter
|
https://api.github.com/repos/Fuzzik/aperture-flechette-emitter
|
closed
|
Maintain a proper workshop branch for releases
|
maintainability
|
I always find myself confused with what code is actually live. Create a workshop branch and push to it every time I update the addon.
|
True
|
Maintain a proper workshop branch for releases - I always find myself confused with what code is actually live. Create a workshop branch and push to it every time I update the addon.
|
main
|
maintain a proper workshop branch for releases i always find myself confused with what code is actually live create a workshop branch and push to it every time i update the addon
| 1
|
657,779
| 21,844,146,274
|
IssuesEvent
|
2022-05-18 01:47:30
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
[Bug] Could not find a package configuration file provided by "absl"
|
kind/bug lang/core priority/P2 disposition/requires reporter action infra/CMake
|
<!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
gRPC version: v1.46.1
language: C++
### What operating system (Linux, Windows,...) and version?
MacOS Monterey(M1 Pro)
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc version:
```bash
$ gcc -v
Apple clang version 13.1.6 (clang-1316.0.21.2.3)
Target: arm64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
While installing grpc following the doc:
- https://github.com/grpc/grpc/blob/master/BUILDING.md
When installing, an error occurred:
```bash
$ sudo make install
CMake Error at cmake/abseil-cpp.cmake:35 (find_package):
Could not find a package configuration file provided by "absl" with any of
the following names:
abslConfig.cmake
absl-config.cmake
Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set
"absl_DIR" to a directory containing one of the above files. If "absl"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:311 (include)
-- Configuring incomplete, errors occurred!
See also "/Users/kylinkzhang/self-workspace/grpc/cmake/build/CMakeFiles/CMakeOutput.log".
See also "/Users/kylinkzhang/self-workspace/grpc/cmake/build/CMakeFiles/CMakeError.log".
make: *** [cmake_check_build_system] Error 1
```
### What did you expect to see?
gRPC installed.
### What did you see instead?
I fixed this problem by install abseil library **Manually:**
```bash
vcpkg install abseil
```
So, i was wondering if the `abseil` config got the wrong path or something?
### Anything else we should know about your project / environment?
|
1.0
|
[Bug] Could not find a package configuration file provided by "absl" - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
gRPC version: v1.46.1
language: C++
### What operating system (Linux, Windows,...) and version?
MacOS Monterey(M1 Pro)
### What runtime / compiler are you using (e.g. python version or version of gcc)
gcc version:
```bash
$ gcc -v
Apple clang version 13.1.6 (clang-1316.0.21.2.3)
Target: arm64-apple-darwin21.4.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
```
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there’s not enough information to debug the problem, gRPC team may close the issue at their discretion. You’re welcome to re-open the issue once you have a reproduction.
While installing grpc following the doc:
- https://github.com/grpc/grpc/blob/master/BUILDING.md
When installing, an error occurred:
```bash
$ sudo make install
CMake Error at cmake/abseil-cpp.cmake:35 (find_package):
Could not find a package configuration file provided by "absl" with any of
the following names:
abslConfig.cmake
absl-config.cmake
Add the installation prefix of "absl" to CMAKE_PREFIX_PATH or set
"absl_DIR" to a directory containing one of the above files. If "absl"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
CMakeLists.txt:311 (include)
-- Configuring incomplete, errors occurred!
See also "/Users/kylinkzhang/self-workspace/grpc/cmake/build/CMakeFiles/CMakeOutput.log".
See also "/Users/kylinkzhang/self-workspace/grpc/cmake/build/CMakeFiles/CMakeError.log".
make: *** [cmake_check_build_system] Error 1
```
### What did you expect to see?
gRPC installed.
### What did you see instead?
I fixed this problem by install abseil library **Manually:**
```bash
vcpkg install abseil
```
So, i was wondering if the `abseil` config got the wrong path or something?
### Anything else we should know about your project / environment?
|
non_main
|
could not find a package configuration file provided by absl please do not post a question here this form is for bug reports and feature requests only for general questions and troubleshooting please ask look for answers at stackoverflow with grpc tag for questions that specifically need to be answered by grpc team members please ask look for answers at grpc io mailing list issues specific to grpc java grpc go grpc node grpc dart grpc web should be created in the repository they belong to e g what version of grpc and what language are you using grpc version language c what operating system linux windows and version macos monterey pro what runtime compiler are you using e g python version or version of gcc gcc version bash gcc v apple clang version clang target apple thread model posix installeddir library developer commandlinetools usr bin what did you do please provide either a unit test for reproducing the bug or specific steps for us to follow to reproduce the bug if there’s not enough information to debug the problem grpc team may close the issue at their discretion you’re welcome to re open the issue once you have a reproduction while installing grpc following the doc when installing an error occurred bash sudo make install cmake error at cmake abseil cpp cmake find package could not find a package configuration file provided by absl with any of the following names abslconfig cmake absl config cmake add the installation prefix of absl to cmake prefix path or set absl dir to a directory containing one of the above files if absl provides a separate development package or sdk be sure it has been installed call stack most recent call first cmakelists txt include configuring incomplete errors occurred see also users kylinkzhang self workspace grpc cmake build cmakefiles cmakeoutput log see also users kylinkzhang self workspace grpc cmake build cmakefiles cmakeerror log make error what did you expect to see grpc installed what did you see instead i fixed this problem by install abseil library manually bash vcpkg install abseil so i was wondering if the abseil config got the wrong path or something anything else we should know about your project environment
| 0
|
22,842
| 10,789,978,349
|
IssuesEvent
|
2019-11-05 13:09:16
|
silinternational/simplesamlphp-module-sildisco
|
https://api.github.com/repos/silinternational/simplesamlphp-module-sildisco
|
opened
|
WS-2016-0090 (Medium) detected in jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6
|
security vulnerability
|
## WS-2016-0090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.3.min.js</b>, <b>simplesamlphp/simplesamlphp-v1.17.6</b></p></summary>
<p>
<details><summary><b>jquery-1.8.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.min.js</a></p>
<p>Path to vulnerable library: /simplesamlphp-module-sildisco/vendor/simplesamlphp/simplesamlphp/www/resources/jquery-1.8.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>simplesamlphp/simplesamlphp-v1.17.6</b></p></summary>
<p>SimpleSAMLphp is an award-winning application written in native PHP that deals with authentication.</p>
<p>
Dependency Hierarchy:
- simplesamlphp/composer-module-installer-v1.1.6 (Root Library)
- :x: **simplesamlphp/simplesamlphp-v1.17.6** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/silinternational/simplesamlphp-module-sildisco/commit/166265c59148425d9bc6676ed51551b3975dd437">166265c59148425d9bc6676ed51551b3975dd437</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JQuery, before 2.2.0, is vulnerable to Cross-site Scripting (XSS) attacks via text/javascript response with arbitrary code execution.
<p>Publish Date: 2016-11-27
<p>URL: <a href=https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614>WS-2016-0090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614">https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2016-0090 (Medium) detected in jquery-1.8.3.min.js, simplesamlphp/simplesamlphp-v1.17.6 - ## WS-2016-0090 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.8.3.min.js</b>, <b>simplesamlphp/simplesamlphp-v1.17.6</b></p></summary>
<p>
<details><summary><b>jquery-1.8.3.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.3/jquery.min.js</a></p>
<p>Path to vulnerable library: /simplesamlphp-module-sildisco/vendor/simplesamlphp/simplesamlphp/www/resources/jquery-1.8.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.3.min.js** (Vulnerable Library)
</details>
<details><summary><b>simplesamlphp/simplesamlphp-v1.17.6</b></p></summary>
<p>SimpleSAMLphp is an award-winning application written in native PHP that deals with authentication.</p>
<p>
Dependency Hierarchy:
- simplesamlphp/composer-module-installer-v1.1.6 (Root Library)
- :x: **simplesamlphp/simplesamlphp-v1.17.6** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/silinternational/simplesamlphp-module-sildisco/commit/166265c59148425d9bc6676ed51551b3975dd437">166265c59148425d9bc6676ed51551b3975dd437</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JQuery, before 2.2.0, is vulnerable to Cross-site Scripting (XSS) attacks via text/javascript response with arbitrary code execution.
<p>Publish Date: 2016-11-27
<p>URL: <a href=https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614>WS-2016-0090</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614">https://github.com/jquery/jquery/commit/b078a62013782c7424a4a61a240c23c4c0b42614</a></p>
<p>Release Date: 2019-04-08</p>
<p>Fix Resolution: 2.2.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_main
|
ws medium detected in jquery min js simplesamlphp simplesamlphp ws medium severity vulnerability vulnerable libraries jquery min js simplesamlphp simplesamlphp jquery min js javascript library for dom operations library home page a href path to vulnerable library simplesamlphp module sildisco vendor simplesamlphp simplesamlphp www resources jquery js dependency hierarchy x jquery min js vulnerable library simplesamlphp simplesamlphp simplesamlphp is an award winning application written in native php that deals with authentication dependency hierarchy simplesamlphp composer module installer root library x simplesamlphp simplesamlphp vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks via text javascript response with arbitrary code execution publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
433,448
| 12,505,764,681
|
IssuesEvent
|
2020-06-02 11:20:46
|
gitcoinco/web
|
https://api.github.com/repos/gitcoinco/web
|
closed
|
as a user, i would like to be able to send kudos via twitter, because thats where cyptotwitter people are and it'd be good for marketing
|
Gitcoin Kudos enhancement priority: backlog
|
_From @owocki on October 8, 2018 4:12_
as a user, i would like to be able to send kudos via twitter, because thats where cyptotwitter people are and it'd be good for marketing
_Copied from original issue: gitcoinco/gitcoin-erc721#157_
|
1.0
|
as a user, i would like to be able to send kudos via twitter, because thats where cyptotwitter people are and it'd be good for marketing - _From @owocki on October 8, 2018 4:12_
as a user, i would like to be able to send kudos via twitter, because thats where cyptotwitter people are and it'd be good for marketing
_Copied from original issue: gitcoinco/gitcoin-erc721#157_
|
non_main
|
as a user i would like to be able to send kudos via twitter because thats where cyptotwitter people are and it d be good for marketing from owocki on october as a user i would like to be able to send kudos via twitter because thats where cyptotwitter people are and it d be good for marketing copied from original issue gitcoinco gitcoin
| 0
|
620,344
| 19,559,661,133
|
IssuesEvent
|
2022-01-03 14:36:26
|
betagouv/service-national-universel
|
https://api.github.com/repos/betagouv/service-national-universel
|
closed
|
feat: bouton "correction terminée"
|
enhancement priority-HIGH
|
### Fonctionnalité liée à un problème ?
il faut tout revalider alors que seule une étape est à refaire...
### Fonctionnalité
**Solution**
Ajouter pour les jeunes "en attente de correction" le bouton "J'ai terminé la correction mon dossier"
**Conséquence**
Le dossier repasse en attente de validation
### Commentaires
[trello](https://trello.com/c/e2oodkAB)
|
1.0
|
feat: bouton "correction terminée" - ### Fonctionnalité liée à un problème ?
il faut tout revalider alors que seule une étape est à refaire...
### Fonctionnalité
**Solution**
Ajouter pour les jeunes "en attente de correction" le bouton "J'ai terminé la correction mon dossier"
**Conséquence**
Le dossier repasse en attente de validation
### Commentaires
[trello](https://trello.com/c/e2oodkAB)
|
non_main
|
feat bouton correction terminée fonctionnalité liée à un problème il faut tout revalider alors que seule une étape est à refaire fonctionnalité solution ajouter pour les jeunes en attente de correction le bouton j ai terminé la correction mon dossier conséquence le dossier repasse en attente de validation commentaires
| 0
|
4,621
| 23,925,224,730
|
IssuesEvent
|
2022-09-09 21:34:45
|
bazelbuild/intellij
|
https://api.github.com/repos/bazelbuild/intellij
|
closed
|
Partial sync gives sponge2 link instead of error messages
|
type: bug product: IntelliJ topic: java topic: sync awaiting-maintainer
|
### Description of the bug:
If I do a partial sync of a file that doesn't compile, my Bazel output shows a sponge2 link instead of the compile errors.
Presumably this is Google-internal behavior that has leaked to the outside world?
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Grab a project, do a partial sync that doesn't compile. I use https://github.com/batfish/batfish. I have not narrowed down a specific repro; it may e.g., involve trying to sync a JUnit test that depends on generated code where the generation script fails.
### Which Intellij IDE are you using? Please provide the specific version.
2022.2.1 (Community Edition)
### What programming languages and tools are you using? Please provide specific versions.
Java. latest rules_jvm_external
### What Bazel plugin version are you using?
`2022.08.09.0.1-api-version-222`
### Have you found anything relevant by searching the web?
No; no issues or PRs mention sponge2 here.
### Any other information, logs, or outputs that you want to share?
_No response_
|
True
|
Partial sync gives sponge2 link instead of error messages - ### Description of the bug:
If I do a partial sync of a file that doesn't compile, my Bazel output shows a sponge2 link instead of the compile errors.
Presumably this is Google-internal behavior that has leaked to the outside world?
### What's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
Grab a project, do a partial sync that doesn't compile. I use https://github.com/batfish/batfish. I have not narrowed down a specific repro; it may e.g., involve trying to sync a JUnit test that depends on generated code where the generation script fails.
### Which Intellij IDE are you using? Please provide the specific version.
2022.2.1 (Community Edition)
### What programming languages and tools are you using? Please provide specific versions.
Java. latest rules_jvm_external
### What Bazel plugin version are you using?
`2022.08.09.0.1-api-version-222`
### Have you found anything relevant by searching the web?
No; no issues or PRs mention sponge2 here.
### Any other information, logs, or outputs that you want to share?
_No response_
|
main
|
partial sync gives link instead of error messages description of the bug if i do a partial sync of a file that doesn t compile my bazel output shows a link instead of the compile errors presumably this is google internal behavior that has leaked to the outside world what s the simplest easiest way to reproduce this bug please provide a minimal example if possible grab a project do a partial sync that doesn t compile i use i have not narrowed down a specific repro it may e g involve trying to sync a junit test that depends on generated code where the generation script fails which intellij ide are you using please provide the specific version community edition what programming languages and tools are you using please provide specific versions java latest rules jvm external what bazel plugin version are you using api version have you found anything relevant by searching the web no no issues or prs mention here any other information logs or outputs that you want to share no response
| 1
|
313,184
| 23,460,814,337
|
IssuesEvent
|
2022-08-16 12:59:07
|
100mslive/react-native-hms
|
https://api.github.com/repos/100mslive/react-native-hms
|
opened
|
Add set volume docs
|
documentation
|
https://www.100ms.live/docs/react-native/v2/advanced-features/set-volume
- mention applicable only on remote audio
- mention it's local only - does not affect other peers in room
- show one example usage in onTrackUpdate, when trackAdded then setVolume to 0 or 10
- show another example usage which mimics user sliding volume level - get remote peerTrackNode & cast to remoteAudioTrack & do setVolume
- Also, add these as inline code comments
|
1.0
|
Add set volume docs - https://www.100ms.live/docs/react-native/v2/advanced-features/set-volume
- mention applicable only on remote audio
- mention it's local only - does not affect other peers in room
- show one example usage in onTrackUpdate, when trackAdded then setVolume to 0 or 10
- show another example usage which mimics user sliding volume level - get remote peerTrackNode & cast to remoteAudioTrack & do setVolume
- Also, add these as inline code comments
|
non_main
|
add set volume docs mention applicable only on remote audio mention it s local only does not affect other peers in room show one example usage in ontrackupdate when trackadded then setvolume to or show another example usage which mimics user sliding volume level get remote peertracknode cast to remoteaudiotrack do setvolume also add these as inline code comments
| 0
|
903
| 4,561,646,318
|
IssuesEvent
|
2016-09-14 12:32:08
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
modprobe: Call 'modprobe -r' instasd of 'rmmod' for absent?
|
affects_2.0 feature_idea waiting_on_maintainer
|
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
modprobe
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A (linux target)
##### SUMMARY
Current implementation of modprobe module uses `rmmod` command to remove kernel module.
https://github.com/ansible/ansible-modules-extras/blob/stable-2.1/system/modprobe.py#L114
Why don't we use `modprobe -r` instead of `rmmod` here?
`modprobe -r` would be better because;
1. It will also unload unused modules
2. Straight forward from module name
##### STEPS TO REPRODUCE
I was trying to unload sb_edac module from my server (since it conflict with some hardware monitoring of server), the module depends on edac_core and edac_core was loaded only for sb_edac.
Before applying playbook, on the target server.
```
server# lsmod | grep edac
sb_edac 28672 0
edac_core 53248 1 sb_edac
```
playbook (snippet)
```
- name: unload edac modules
modprobe:
name: sb_edac
state: absent
```
##### EXPECTED RESULTS
edac_core module unloaded, since it no longer be used.
##### ACTUAL RESULTS
After applying playbook, on the target server.
```
server# lsmod | grep edac
edac_core 53248 0
```
|
True
|
modprobe: Call 'modprobe -r' instasd of 'rmmod' for absent? - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
modprobe
##### ANSIBLE VERSION
```
ansible 2.0.1.0
config file =
configured module search path = Default w/o overrides
```
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A (linux target)
##### SUMMARY
Current implementation of modprobe module uses `rmmod` command to remove kernel module.
https://github.com/ansible/ansible-modules-extras/blob/stable-2.1/system/modprobe.py#L114
Why don't we use `modprobe -r` instead of `rmmod` here?
`modprobe -r` would be better because;
1. It will also unload unused modules
2. Straight forward from module name
##### STEPS TO REPRODUCE
I was trying to unload sb_edac module from my server (since it conflict with some hardware monitoring of server), the module depends on edac_core and edac_core was loaded only for sb_edac.
Before applying playbook, on the target server.
```
server# lsmod | grep edac
sb_edac 28672 0
edac_core 53248 1 sb_edac
```
playbook (snippet)
```
- name: unload edac modules
modprobe:
name: sb_edac
state: absent
```
##### EXPECTED RESULTS
edac_core module unloaded, since it no longer be used.
##### ACTUAL RESULTS
After applying playbook, on the target server.
```
server# lsmod | grep edac
edac_core 53248 0
```
|
main
|
modprobe call modprobe r instasd of rmmod for absent issue type feature idea component name modprobe ansible version ansible config file configured module search path default w o overrides configuration n a os environment n a linux target summary current implementation of modprobe module uses rmmod command to remove kernel module why don t we use modprobe r instead of rmmod here modprobe r would be better because it will also unload unused modules straight forward from module name steps to reproduce i was trying to unload sb edac module from my server since it conflict with some hardware monitoring of server the module depends on edac core and edac core was loaded only for sb edac before applying playbook on the target server server lsmod grep edac sb edac edac core sb edac playbook snippet name unload edac modules modprobe name sb edac state absent expected results edac core module unloaded since it no longer be used actual results after applying playbook on the target server server lsmod grep edac edac core
| 1
|
448,181
| 31,772,653,501
|
IssuesEvent
|
2023-09-12 12:45:16
|
foundry-rs/starknet-foundry
|
https://api.github.com/repos/foundry-rs/starknet-foundry
|
closed
|
Document `snforge` arguments
|
documentation good first issue Component: Forge new
|
Add a page to foundry book listing all possible arguments of `snforge` command and explaining their usages. Some snippets with practical examples should also be considered.
|
1.0
|
Document `snforge` arguments - Add a page to foundry book listing all possible arguments of `snforge` command and explaining their usages. Some snippets with practical examples should also be considered.
|
non_main
|
document snforge arguments add a page to foundry book listing all possible arguments of snforge command and explaining their usages some snippets with practical examples should also be considered
| 0
|
260,941
| 22,681,072,613
|
IssuesEvent
|
2022-07-04 09:55:56
|
redpanda-data/redpanda
|
https://api.github.com/repos/redpanda-data/redpanda
|
opened
|
rpk tuner errors prevent nightly clustered ducktape from running
|
kind/bug area/tests ci-failure
|
Example:
https://buildkite.com/redpanda/vtools/builds/2757#0181c2e0-017a-442b-ab86-fe1a7786796f
```
RUNNING HANDLER [restart redpanda-tuner] ***************************************
fatal: [35.88.53.48]: FAILED! => {"changed": false, "msg": "Unable to start service redpanda-tuner: Job for redpanda-tuner.service failed because the control process exited with error code.\nSee \"systemctl status redpanda-tuner.service\" and \"journalctl -xeu redpanda-tuner.service\" for details.\n"}
```
```
commit 77a22ccd9cd419c21a786d65ec0984ad1f7bbd5d
Author: Rogger Vasquez <rvasque3@gmail.com>
Date: Thu Jun 30 13:13:55 2022 -0500
rpk: change exit code when rpk tune fails
Now rpk exits with code 1 each time the rpk tune
command either fails to run, or it has a tuner
enabled but it's not supported
```
|
1.0
|
rpk tuner errors prevent nightly clustered ducktape from running - Example:
https://buildkite.com/redpanda/vtools/builds/2757#0181c2e0-017a-442b-ab86-fe1a7786796f
```
RUNNING HANDLER [restart redpanda-tuner] ***************************************
fatal: [35.88.53.48]: FAILED! => {"changed": false, "msg": "Unable to start service redpanda-tuner: Job for redpanda-tuner.service failed because the control process exited with error code.\nSee \"systemctl status redpanda-tuner.service\" and \"journalctl -xeu redpanda-tuner.service\" for details.\n"}
```
```
commit 77a22ccd9cd419c21a786d65ec0984ad1f7bbd5d
Author: Rogger Vasquez <rvasque3@gmail.com>
Date: Thu Jun 30 13:13:55 2022 -0500
rpk: change exit code when rpk tune fails
Now rpk exits with code 1 each time the rpk tune
command either fails to run, or it has a tuner
enabled but it's not supported
```
|
non_main
|
rpk tuner errors prevent nightly clustered ducktape from running example running handler fatal failed changed false msg unable to start service redpanda tuner job for redpanda tuner service failed because the control process exited with error code nsee systemctl status redpanda tuner service and journalctl xeu redpanda tuner service for details n commit author rogger vasquez date thu jun rpk change exit code when rpk tune fails now rpk exits with code each time the rpk tune command either fails to run or it has a tuner enabled but it s not supported
| 0
|
1,523
| 6,572,215,743
|
IssuesEvent
|
2017-09-11 00:09:27
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
support specifying a specific time to run `at` command
|
affects_2.3 feature_idea waiting_on_maintainer
|
##### Issue Type:
- Feature Idea
##### Plugin Name:
`at` module
##### Summary:
support specifying a specific time to run `at` command
|
True
|
support specifying a specific time to run `at` command - ##### Issue Type:
- Feature Idea
##### Plugin Name:
`at` module
##### Summary:
support specifying a specific time to run `at` command
|
main
|
support specifying a specific time to run at command issue type feature idea plugin name at module summary support specifying a specific time to run at command
| 1
|
5,176
| 26,346,006,181
|
IssuesEvent
|
2023-01-10 22:08:18
|
aws/aws-sam-cli
|
https://api.github.com/repos/aws/aws-sam-cli
|
closed
|
sam build and sam local invoke differences regarding of LaverVersion in NodeJS
|
type/bug area/deploy stage/needs-investigation area/layers area/local/invoke maintainer/need-response
|
<!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
`ContentUri` property of `AWS::Serverless::LayerVersion` behaves differently depending on the command used and it is annoying.
Currently, I have to use 2 different values, once to `sam local invoke` and another to run `sam build`.
### Steps to reproduce
Here is the layer definition.
```yaml
CommonDependenciesLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: 'dev-common-dependencies-layer'
Description: 'Common dependencies for dev env'
ContentUri: './dependencies-layer'
CompatibleRuntimes:
- nodejs10.x
- nodejs12.x
RetentionPolicy: Retain
Metadata:
BuildMethod: nodejs10.x
```
Notice I'm using `BuildMethod` to build the layer when invoking `sam build`. The `dependencies-layer` folder looks like this
```text
- dependencies-layer/
|- node_modules
|- package.json
|- package-lock.json
```
I can invoke `sam build` then `sam deploy` and it works.
Now if I try to invoke a function that depends on the layer, it won't find any packages. We all know that the structure for a NodeJS layer is as follow:
```text
- dependencies-layer/
|- nodejs
|- node_modules
|- package.json
|- package-lock.json
```
If I create the folder structure shown above, now `sam local invoke` works, but then `sam build` fails because the folder `dependencies-layer` does not contain a `package.json`.
There are a few ways we can handle this, like creating the folder `nodejs` copying the package.json in it then invoking `npm install` before `sam local invoke`. Or having both structures so both versions of the commands are happy but then we will have to maintain 2 versions of package.json. Or using a symlink, or using, etc.
Currently, I'm using a parameter, updated the value of `ContentUri: !Ref MyParam` and using `sam local invoke --parameter-overrides MyParam=./dependencies-layer` while the default value of the param is `Default: './dependencies-layer/nodejs'` but it feels like a hack.
### Expected result
I would like that they both get to a mutual agreement so we don't have to do any extra step to make it work. I'm talking about `sam build` and `sam local invoke`
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Amazon Linux
2. `sam --version`: 1.2.0
|
True
|
sam build and sam local invoke differences regarding of LaverVersion in NodeJS - <!-- Make sure we don't have an existing Issue that reports the bug you are seeing (both open and closed).
If you do find an existing Issue, re-open or add a comment to that Issue instead of creating a new one. -->
### Description
`ContentUri` property of `AWS::Serverless::LayerVersion` behaves differently depending on the command used and it is annoying.
Currently, I have to use 2 different values, once to `sam local invoke` and another to run `sam build`.
### Steps to reproduce
Here is the layer definition.
```yaml
CommonDependenciesLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: 'dev-common-dependencies-layer'
Description: 'Common dependencies for dev env'
ContentUri: './dependencies-layer'
CompatibleRuntimes:
- nodejs10.x
- nodejs12.x
RetentionPolicy: Retain
Metadata:
BuildMethod: nodejs10.x
```
Notice I'm using `BuildMethod` to build the layer when invoking `sam build`. The `dependencies-layer` folder looks like this
```text
- dependencies-layer/
|- node_modules
|- package.json
|- package-lock.json
```
I can invoke `sam build` then `sam deploy` and it works.
Now if I try to invoke a function that depends on the layer, it won't find any packages. We all know that the structure for a NodeJS layer is as follow:
```text
- dependencies-layer/
|- nodejs
|- node_modules
|- package.json
|- package-lock.json
```
If I create the folder structure shown above, now `sam local invoke` works, but then `sam build` fails because the folder `dependencies-layer` does not contain a `package.json`.
There are a few ways we can handle this, like creating the folder `nodejs` copying the package.json in it then invoking `npm install` before `sam local invoke`. Or having both structures so both versions of the commands are happy but then we will have to maintain 2 versions of package.json. Or using a symlink, or using, etc.
Currently, I'm using a parameter, updated the value of `ContentUri: !Ref MyParam` and using `sam local invoke --parameter-overrides MyParam=./dependencies-layer` while the default value of the param is `Default: './dependencies-layer/nodejs'` but it feels like a hack.
### Expected result
I would like that they both get to a mutual agreement so we don't have to do any extra step to make it work. I'm talking about `sam build` and `sam local invoke`
### Additional environment details (Ex: Windows, Mac, Amazon Linux etc)
1. OS: Amazon Linux
2. `sam --version`: 1.2.0
|
main
|
sam build and sam local invoke differences regarding of laverversion in nodejs make sure we don t have an existing issue that reports the bug you are seeing both open and closed if you do find an existing issue re open or add a comment to that issue instead of creating a new one description contenturi property of aws serverless layerversion behaves differently depending on the command used and it is annoying currently i have to use different values once to sam local invoke and another to run sam build steps to reproduce here is the layer definition yaml commondependencieslayer type aws serverless layerversion properties layername dev common dependencies layer description common dependencies for dev env contenturi dependencies layer compatibleruntimes x x retentionpolicy retain metadata buildmethod x notice i m using buildmethod to build the layer when invoking sam build the dependencies layer folder looks like this text dependencies layer node modules package json package lock json i can invoke sam build then sam deploy and it works now if i try to invoke a function that depends on the layer it won t find any packages we all know that the structure for a nodejs layer is as follow text dependencies layer nodejs node modules package json package lock json if i create the folder structure shown above now sam local invoke works but then sam build fails because the folder dependencies layer does not contain a package json there are a few ways we can handle this like creating the folder nodejs copying the package json in it then invoking npm install before sam local invoke or having both structures so both versions of the commands are happy but then we will have to maintain versions of package json or using a symlink or using etc currently i m using a parameter updated the value of contenturi ref myparam and using sam local invoke parameter overrides myparam dependencies layer while the default value of the param is default dependencies layer nodejs but it feels like a hack expected result i would like that they both get to a mutual agreement so we don t have to do any extra step to make it work i m talking about sam build and sam local invoke additional environment details ex windows mac amazon linux etc os amazon linux sam version
| 1
|
5,296
| 26,761,302,137
|
IssuesEvent
|
2023-01-31 07:08:55
|
bazelbuild/intellij
|
https://api.github.com/repos/bazelbuild/intellij
|
closed
|
go_tool_library sources marked as unsynced
|
type: bug lang: go product: IntelliJ os: linux topic: sync awaiting-maintainer
|
#### Description of the issue. Please be specific.
Go sources using the `go_tool_library` are marked as unsynced. The `go_tool_library` is necessary for `nogo` rules to avoid a cycle in dependencies. The normal `go_library` uses `nogo`.
#### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible.
https://github.com/jschaf/bazel-bug-go-tool/tree/master/lint

#### Version information
IdeaUltimate: 2020.2.3
Platform: Linux 5.4.0-7642-generic
Bazel plugin: 9999
Bazel: 3.7.0
|
True
|
go_tool_library sources marked as unsynced - #### Description of the issue. Please be specific.
Go sources using the `go_tool_library` are marked as unsynced. The `go_tool_library` is necessary for `nogo` rules to avoid a cycle in dependencies. The normal `go_library` uses `nogo`.
#### What's the simplest set of steps to reproduce this issue? Please provide an example project, if possible.
https://github.com/jschaf/bazel-bug-go-tool/tree/master/lint

#### Version information
IdeaUltimate: 2020.2.3
Platform: Linux 5.4.0-7642-generic
Bazel plugin: 9999
Bazel: 3.7.0
|
main
|
go tool library sources marked as unsynced description of the issue please be specific go sources using the go tool library are marked as unsynced the go tool library is necessary for nogo rules to avoid a cycle in dependencies the normal go library uses nogo what s the simplest set of steps to reproduce this issue please provide an example project if possible version information ideaultimate platform linux generic bazel plugin bazel
| 1
|
4,159
| 19,957,821,461
|
IssuesEvent
|
2022-01-28 02:48:36
|
microsoft/DirectXTK12
|
https://api.github.com/repos/microsoft/DirectXTK12
|
opened
|
Retire VS 2017 support
|
maintainence
|
Visual Studio 2017 reaches it's [mainstream end-of-life](https://docs.microsoft.com/en-us/lifecycle/products/visual-studio-2017) on **April 2022**. I should retire these projects that time:
* DirectXTK_Desktop_2017_Win10.vcxproj
* DirectXTK_GDK_2017.vcxproj
* DirectXTK_Windows10_2017.vcxproj
*
> I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``DirectXTK_XboxOneXDK_2017.vcxproj`` or not with this change.
|
True
|
Retire VS 2017 support - Visual Studio 2017 reaches it's [mainstream end-of-life](https://docs.microsoft.com/en-us/lifecycle/products/visual-studio-2017) on **April 2022**. I should retire these projects that time:
* DirectXTK_Desktop_2017_Win10.vcxproj
* DirectXTK_GDK_2017.vcxproj
* DirectXTK_Windows10_2017.vcxproj
*
> I am not sure when I'll be retiring Xbox One XDK support which is not supported for VS 2019 or later. That means I'm not sure if I'll delete ``DirectXTK_XboxOneXDK_2017.vcxproj`` or not with this change.
|
main
|
retire vs support visual studio reaches it s on april i should retire these projects that time directxtk desktop vcxproj directxtk gdk vcxproj directxtk vcxproj i am not sure when i ll be retiring xbox one xdk support which is not supported for vs or later that means i m not sure if i ll delete directxtk xboxonexdk vcxproj or not with this change
| 1
|
2,784
| 9,979,186,250
|
IssuesEvent
|
2019-07-09 21:58:09
|
dgets/lasttime
|
https://api.github.com/repos/dgets/lasttime
|
opened
|
Break up the monolith in consolidate_database
|
maintainability
|
Code is way too long and convoluted. This needs to be broken up into more manageable segments.
|
True
|
Break up the monolith in consolidate_database - Code is way too long and convoluted. This needs to be broken up into more manageable segments.
|
main
|
break up the monolith in consolidate database code is way too long and convoluted this needs to be broken up into more manageable segments
| 1
|
259,726
| 8,199,406,828
|
IssuesEvent
|
2018-08-31 20:04:06
|
NGO-DB/ndb-core
|
https://api.github.com/repos/NGO-DB/ndb-core
|
opened
|
Extend UI to allow more analysis of attendance data
|
Priority: Idea Type: Feature
|
building on top of #138 the analysis and handling of (daily) attendance information can be extended further:
- [ ] extend UI of monthly attendance (AddMonthAttendanceComponent) to allow displaying/editing a group's daily attendance
- [ ] advance filtering and comparison of attendance information on a separate view (similar to NotesManager?)
|
1.0
|
Extend UI to allow more analysis of attendance data - building on top of #138 the analysis and handling of (daily) attendance information can be extended further:
- [ ] extend UI of monthly attendance (AddMonthAttendanceComponent) to allow displaying/editing a group's daily attendance
- [ ] advance filtering and comparison of attendance information on a separate view (similar to NotesManager?)
|
non_main
|
extend ui to allow more analysis of attendance data building on top of the analysis and handling of daily attendance information can be extended further extend ui of monthly attendance addmonthattendancecomponent to allow displaying editing a group s daily attendance advance filtering and comparison of attendance information on a separate view similar to notesmanager
| 0
|
368,134
| 10,866,623,857
|
IssuesEvent
|
2019-11-14 21:41:27
|
inland-empire-software-development/main
|
https://api.github.com/repos/inland-empire-software-development/main
|
closed
|
Slider component is not intuitive - users don't know to swipe or slide
|
enhancement help wanted low-priority
|
**Components that require updating:**
- speakers
- operations
- community
- success stories
**Tech to be used:**
- React
**What is the current behavior:** Sliders are currently not intuitive. Users don't know if they can slide left or right. We currently have some text that talks about being able to slide, but we can do better.

**What is the new behavior:** Sliders should have some form of way to show users they can slide. Something like what we are doing for the blog component on mobile, see example:

We can have each component have its own position component for the slider, so users know how the sliders work.
**Submission information [For PR]**:
All Submissions:
- The commit message follows our guidelines
- Testing steps have been added to this issue or the PR for this issue
- Submission has been tested in all supported browsers
New Feature Submissions:
- Code gone through a code review by at least one other person
- Code has been locally linted `yarn lint`
|
1.0
|
Slider component is not intuitive - users don't know to swipe or slide - **Components that require updating:**
- speakers
- operations
- community
- success stories
**Tech to be used:**
- React
**What is the current behavior:** Sliders are currently not intuitive. Users don't know if they can slide left or right. We currently have some text that talks about being able to slide, but we can do better.

**What is the new behavior:** Sliders should have some form of way to show users they can slide. Something like what we are doing for the blog component on mobile, see example:

We can have each component have its own position component for the slider, so users know how the sliders work.
**Submission information [For PR]**:
All Submissions:
- The commit message follows our guidelines
- Testing steps have been added to this issue or the PR for this issue
- Submission has been tested in all supported browsers
New Feature Submissions:
- Code gone through a code review by at least one other person
- Code has been locally linted `yarn lint`
|
non_main
|
slider component is not intuitive users don t know to swipe or slide components that require updating speakers operations community success stories tech to be used react what is the current behavior sliders are currently not intuitive users don t know if they can slide left or right we currently have some text that talks about being able to slide but we can do better what is the new behavior sliders should have some form of way to show users they can slide something like what we are doing for the blog component on mobile see example we can have each component have its own position component for the slider so users know how the sliders work submission information all submissions the commit message follows our guidelines testing steps have been added to this issue or the pr for this issue submission has been tested in all supported browsers new feature submissions code gone through a code review by at least one other person code has been locally linted yarn lint
| 0
|
1,144
| 5,003,265,062
|
IssuesEvent
|
2016-12-11 20:35:02
|
tgstation/tgstation
|
https://api.github.com/repos/tgstation/tgstation
|
closed
|
Crafting system lacks expandability
|
Maintainability - Hinders improvements -
|
Recipes cannot customize output, how materials are consumed, if a material has to be of a specific type or in a specific state.
The pin removal crafting recipe only works because of this bit of code, which is obviously unsustainable;
/obj/item/weapon/gun/CheckParts(list/parts_list)
..()
var/obj/item/weapon/gun/G = locate(/obj/item/weapon/gun) in contents
if(G)
G.loc = loc
qdel(G.pin)
G.pin = null
visible_message("[G] can now fit a new pin, but old one was destroyed in the process.", null, null, 3)
qdel(src)
|
True
|
Crafting system lacks expandability - Recipes cannot customize output, how materials are consumed, if a material has to be of a specific type or in a specific state.
The pin removal crafting recipe only works because of this bit of code, which is obviously unsustainable;
/obj/item/weapon/gun/CheckParts(list/parts_list)
..()
var/obj/item/weapon/gun/G = locate(/obj/item/weapon/gun) in contents
if(G)
G.loc = loc
qdel(G.pin)
G.pin = null
visible_message("[G] can now fit a new pin, but old one was destroyed in the process.", null, null, 3)
qdel(src)
|
main
|
crafting system lacks expandability recipes cannot customize output how materials are consumed if a material has to be of a specific type or in a specific state the pin removal crafting recipe only works because of this bit of code which is obviously unsustainable obj item weapon gun checkparts list parts list var obj item weapon gun g locate obj item weapon gun in contents if g g loc loc qdel g pin g pin null visible message can now fit a new pin but old one was destroyed in the process null null qdel src
| 1
|
463
| 3,689,370,742
|
IssuesEvent
|
2016-02-25 16:14:25
|
OpenLightingProject/ola
|
https://api.github.com/repos/OpenLightingProject/ola
|
closed
|
cppunit-config removed in fedora
|
Maintainability
|
In the latest version of the cppunit package for fedora, cppunit-config is removed in favor for pkg-config based solutions ([see here](https://apps.fedoraproject.org/packages/cppunit/changelog/))
This is the patch I made for the package in the mean time, it's not pretty and probably not a proper solution:
```patch
diff --git a/config/cppunit.m4 b/config/cppunit.m4
index 41f067b..7a7834c 100644
--- a/config/cppunit.m4
+++ b/config/cppunit.m4
@@ -25,12 +25,11 @@ AC_ARG_WITH(cppunit-exec-prefix,[ --with-cppunit-exec-prefix=PFX Exec prefix w
AC_PATH_PROG(CPPUNIT_CONFIG, cppunit-config, no)
cppunit_version_min=$1
- AC_MSG_CHECKING(for Cppunit - version >= $cppunit_version_min)
no_cppunit=""
if test "$CPPUNIT_CONFIG" = "no" ; then
- AC_MSG_RESULT(no)
- no_cppunit=yes
+ PKG_CHECK_MODULES(CPPUNIT,[cppunit >= $cppunit_version_min],,[no_cppunit="yes"])
else
+ AC_MSG_CHECKING(for Cppunit - version >= $cppunit_version_min)
CPPUNIT_CFLAGS=`$CPPUNIT_CONFIG --cflags`
CPPUNIT_LIBS=`$CPPUNIT_CONFIG --libs`
cppunit_version=`$CPPUNIT_CONFIG --version`
```
This will do in the mean time, but either a better version of this fix has to be made or some other solution.
|
True
|
cppunit-config removed in fedora - In the latest version of the cppunit package for fedora, cppunit-config is removed in favor for pkg-config based solutions ([see here](https://apps.fedoraproject.org/packages/cppunit/changelog/))
This is the patch I made for the package in the mean time, it's not pretty and probably not a proper solution:
```patch
diff --git a/config/cppunit.m4 b/config/cppunit.m4
index 41f067b..7a7834c 100644
--- a/config/cppunit.m4
+++ b/config/cppunit.m4
@@ -25,12 +25,11 @@ AC_ARG_WITH(cppunit-exec-prefix,[ --with-cppunit-exec-prefix=PFX Exec prefix w
AC_PATH_PROG(CPPUNIT_CONFIG, cppunit-config, no)
cppunit_version_min=$1
- AC_MSG_CHECKING(for Cppunit - version >= $cppunit_version_min)
no_cppunit=""
if test "$CPPUNIT_CONFIG" = "no" ; then
- AC_MSG_RESULT(no)
- no_cppunit=yes
+ PKG_CHECK_MODULES(CPPUNIT,[cppunit >= $cppunit_version_min],,[no_cppunit="yes"])
else
+ AC_MSG_CHECKING(for Cppunit - version >= $cppunit_version_min)
CPPUNIT_CFLAGS=`$CPPUNIT_CONFIG --cflags`
CPPUNIT_LIBS=`$CPPUNIT_CONFIG --libs`
cppunit_version=`$CPPUNIT_CONFIG --version`
```
This will do in the mean time, but either a better version of this fix has to be made or some other solution.
|
main
|
cppunit config removed in fedora in the latest version of the cppunit package for fedora cppunit config is removed in favor for pkg config based solutions this is the patch i made for the package in the mean time it s not pretty and probably not a proper solution patch diff git a config cppunit b config cppunit index a config cppunit b config cppunit ac arg with cppunit exec prefix with cppunit exec prefix pfx exec prefix w ac path prog cppunit config cppunit config no cppunit version min ac msg checking for cppunit version cppunit version min no cppunit if test cppunit config no then ac msg result no no cppunit yes pkg check modules cppunit else ac msg checking for cppunit version cppunit version min cppunit cflags cppunit config cflags cppunit libs cppunit config libs cppunit version cppunit config version this will do in the mean time but either a better version of this fix has to be made or some other solution
| 1
|
3,873
| 17,115,930,545
|
IssuesEvent
|
2021-07-11 10:55:04
|
RalfKoban/MiKo-Analyzers
|
https://api.github.com/repos/RalfKoban/MiKo-Analyzers
|
closed
|
Awaited statements should be preceded and followed by a blank line
|
Area: analyzer Area: maintainability feature
|
An awaited call should be followed by a blank line if the following line contains a non-awaited expression.
The reason is ease of reading.
Following should report a violation:
```c#
await DoStuffAsync();
await DoMoreStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
await DoStuffAsync();
await DoMoreStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
|
True
|
Awaited statements should be preceded and followed by a blank line - An awaited call should be followed by a blank line if the following line contains a non-awaited expression.
The reason is ease of reading.
Following should report a violation:
```c#
await DoStuffAsync();
await DoMoreStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
While following should **not** report a violation:
```c#
await DoStuffAsync();
await DoMoreStuffAsync();
var x = 42;
var y = "something";
var z = Guid.NewGuid();
```
|
main
|
awaited statements should be preceded and followed by a blank line an awaited call should be followed by a blank line if the following line contains a non awaited expression the reason is ease of reading following should report a violation c await dostuffasync await domorestuffasync var x var y something var z guid newguid while following should not report a violation c await dostuffasync await domorestuffasync var x var y something var z guid newguid
| 1
|
586,558
| 17,580,660,471
|
IssuesEvent
|
2021-08-16 06:56:10
|
oppia/oppia-android
|
https://api.github.com/repos/oppia/oppia-android
|
closed
|
Merge help_activity.xml into single xml file
|
Type: Improvement Priority: Nice-to-have good first issue
|
Currently there are 2 versions of `help_activity.xml` file merge it into single xml file.
We can use https://text-compare.com/ to compare two versions of this file and for all the differences we can create variables in `dimens.xml` file and use it accordingly.
**Note**: In PR, make sure you add before and after screenshot of mobile-portrait, mobile-landscape, tablet-portrait and tablet-landscape for comparison and make sure that there is not difference between before and after UI.
|
1.0
|
Merge help_activity.xml into single xml file - Currently there are 2 versions of `help_activity.xml` file merge it into single xml file.
We can use https://text-compare.com/ to compare two versions of this file and for all the differences we can create variables in `dimens.xml` file and use it accordingly.
**Note**: In PR, make sure you add before and after screenshot of mobile-portrait, mobile-landscape, tablet-portrait and tablet-landscape for comparison and make sure that there is not difference between before and after UI.
|
non_main
|
merge help activity xml into single xml file currently there are versions of help activity xml file merge it into single xml file we can use to compare two versions of this file and for all the differences we can create variables in dimens xml file and use it accordingly note in pr make sure you add before and after screenshot of mobile portrait mobile landscape tablet portrait and tablet landscape for comparison and make sure that there is not difference between before and after ui
| 0
|
1,614
| 6,572,632,434
|
IssuesEvent
|
2017-09-11 03:55:27
|
ansible/ansible-modules-extras
|
https://api.github.com/repos/ansible/ansible-modules-extras
|
closed
|
Undefined variables in boundary_meter
|
affects_2.2 bug_report waiting_on_maintainer
|
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
monitoring/boundary_meter.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
devel 2.2
```
##### SUMMARY
When porting to python3 I ran pyflakes on boundary_meter.py and found that there are several undefined variables. These probably make state=absent traceback and state=present when the cert file needs to be downloaded.
###### delete_meter problem
https://github.com/ansible/ansible-modules-extras/blob/devel/monitoring/boundary_meter.py#L191
action is used here but not defined at this point in the code.
###### create_meter problems
https://github.com/ansible/ansible-modules-extras/blob/devel/monitoring/boundary_meter.py#L216
At this point result is not defined. Perhaps you meant response instead?
|
True
|
Undefined variables in boundary_meter - <!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
monitoring/boundary_meter.py
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
devel 2.2
```
##### SUMMARY
When porting to python3 I ran pyflakes on boundary_meter.py and found that there are several undefined variables. These probably make state=absent traceback and state=present when the cert file needs to be downloaded.
###### delete_meter problem
https://github.com/ansible/ansible-modules-extras/blob/devel/monitoring/boundary_meter.py#L191
action is used here but not defined at this point in the code.
###### create_meter problems
https://github.com/ansible/ansible-modules-extras/blob/devel/monitoring/boundary_meter.py#L216
At this point result is not defined. Perhaps you meant response instead?
|
main
|
undefined variables in boundary meter issue type bug report component name monitoring boundary meter py ansible version devel summary when porting to i ran pyflakes on boundary meter py and found that there are several undefined variables these probably make state absent traceback and state present when the cert file needs to be downloaded delete meter problem action is used here but not defined at this point in the code create meter problems at this point result is not defined perhaps you meant response instead
| 1
|
2,992
| 10,855,936,405
|
IssuesEvent
|
2019-11-13 19:29:40
|
DynamoRIO/dynamorio
|
https://api.github.com/repos/DynamoRIO/dynamorio
|
closed
|
make NULL pointer-sized
|
Maintainability
|
Filing an issue for the PR #3922 to help document some of the issues.
Our own def of NULL in globals_shared.h is being used where stddef.h is not
included, which is most of our code. It is currently just defined as "0"
which can cause problems for 64-bit with implicit casts to the wrong
bitwidth (e.g., see PR #3920). It would be better to define it as "(void
*)0".
But, that hits compiler errors in certain places with clang on Mac and MSVC.
Note that on C++ implicit casts from void* to other types don't work:
https://eli.thegreenplace.net/2009/11/16/void-and-casts-in-c-and-c
So e.g. tools.h has a byte* return value function say "return NULL" which
gets an error for events_cpp.cpp on MSVC. We can't easily use "nullptr"
there since it's used for C code too, unless we had a C define for
"nullptr" or sthg.
|
True
|
make NULL pointer-sized - Filing an issue for the PR #3922 to help document some of the issues.
Our own def of NULL in globals_shared.h is being used where stddef.h is not
included, which is most of our code. It is currently just defined as "0"
which can cause problems for 64-bit with implicit casts to the wrong
bitwidth (e.g., see PR #3920). It would be better to define it as "(void
*)0".
But, that hits compiler errors in certain places with clang on Mac and MSVC.
Note that on C++ implicit casts from void* to other types don't work:
https://eli.thegreenplace.net/2009/11/16/void-and-casts-in-c-and-c
So e.g. tools.h has a byte* return value function say "return NULL" which
gets an error for events_cpp.cpp on MSVC. We can't easily use "nullptr"
there since it's used for C code too, unless we had a C define for
"nullptr" or sthg.
|
main
|
make null pointer sized filing an issue for the pr to help document some of the issues our own def of null in globals shared h is being used where stddef h is not included which is most of our code it is currently just defined as which can cause problems for bit with implicit casts to the wrong bitwidth e g see pr it would be better to define it as void but that hits compiler errors in certain places with clang on mac and msvc note that on c implicit casts from void to other types don t work so e g tools h has a byte return value function say return null which gets an error for events cpp cpp on msvc we can t easily use nullptr there since it s used for c code too unless we had a c define for nullptr or sthg
| 1
|
4,858
| 24,997,251,184
|
IssuesEvent
|
2022-11-03 02:26:35
|
centerofci/mathesar
|
https://api.github.com/repos/centerofci/mathesar
|
opened
|
Show cell-level errors after failure to create record via the record selector
|
type: enhancement work: frontend status: ready restricted: maintainers
|
## Current behavior
- You can create a new record from the record selector using the data entered into the search fields.
- If the record creation fails, you see a toast message.
- The toast message might not be specific enough to help you fix the error though. For example, if a unique constraint is not met, then the toast message is quite curt:
> The requested insert violates a uniqueness constraint
_Which column_ violates the uniqueness constraint? We don't know.
## Desired behavior
- Cell-level errors are displayed in the same manner as when creating a new record in the table page.
|
True
|
Show cell-level errors after failure to create record via the record selector - ## Current behavior
- You can create a new record from the record selector using the data entered into the search fields.
- If the record creation fails, you see a toast message.
- The toast message might not be specific enough to help you fix the error though. For example, if a unique constraint is not met, then the toast message is quite curt:
> The requested insert violates a uniqueness constraint
_Which column_ violates the uniqueness constraint? We don't know.
## Desired behavior
- Cell-level errors are displayed in the same manner as when creating a new record in the table page.
|
main
|
show cell level errors after failure to create record via the record selector current behavior you can create a new record from the record selector using the data entered into the search fields if the record creation fails you see a toast message the toast message might not be specific enough to help you fix the error though for example if a unique constraint is not met then the toast message is quite curt the requested insert violates a uniqueness constraint which column violates the uniqueness constraint we don t know desired behavior cell level errors are displayed in the same manner as when creating a new record in the table page
| 1
|
5,847
| 31,072,943,096
|
IssuesEvent
|
2023-08-12 05:52:13
|
shawnlaffan/biodiverse
|
https://api.github.com/repos/shawnlaffan/biodiverse
|
closed
|
paths - replace Path::Class with Path::Tiny
|
Maintainability
|
It's a low priority but we should replace Path::Class with Path::Tiny. The Path::Tiny interface is simpler and it does all we need.
|
True
|
paths - replace Path::Class with Path::Tiny - It's a low priority but we should replace Path::Class with Path::Tiny. The Path::Tiny interface is simpler and it does all we need.
|
main
|
paths replace path class with path tiny it s a low priority but we should replace path class with path tiny the path tiny interface is simpler and it does all we need
| 1
|
713
| 4,306,496,741
|
IssuesEvent
|
2016-07-21 03:36:31
|
duckduckgo/zeroclickinfo-spice
|
https://api.github.com/repos/duckduckgo/zeroclickinfo-spice
|
closed
|
Forecast: Wrong result due to timezone
|
Maintainer Input Requested
|
My timezone is IST and the IA is giving forecast of `Phoenixville, PA` which is wrong.
See the snaps below:
<img width="1436" alt="screen shot 2016-04-23 at 10 09 00 am" src="https://cloud.githubusercontent.com/assets/915277/14759309/3a005caa-093c-11e6-86e0-09ef0d20d7d7.png">
<img width="734" alt="screen shot 2016-04-23 at 10 11 59 am" src="https://cloud.githubusercontent.com/assets/915277/14759310/583e0afa-093c-11e6-8e37-2d53d40e8613.png">
Edit: This happened while running locally
IA Page: http://duck.co/ia/view/forecast
|
True
|
Forecast: Wrong result due to timezone - My timezone is IST and the IA is giving forecast of `Phoenixville, PA` which is wrong.
See the snaps below:
<img width="1436" alt="screen shot 2016-04-23 at 10 09 00 am" src="https://cloud.githubusercontent.com/assets/915277/14759309/3a005caa-093c-11e6-86e0-09ef0d20d7d7.png">
<img width="734" alt="screen shot 2016-04-23 at 10 11 59 am" src="https://cloud.githubusercontent.com/assets/915277/14759310/583e0afa-093c-11e6-8e37-2d53d40e8613.png">
Edit: This happened while running locally
IA Page: http://duck.co/ia/view/forecast
|
main
|
forecast wrong result due to timezone my timezone is ist and the ia is giving forecast of phoenixville pa which is wrong see the snaps below img width alt screen shot at am src img width alt screen shot at am src edit this happened while running locally ia page
| 1
|
252,857
| 19,074,248,596
|
IssuesEvent
|
2021-11-27 13:24:48
|
rrousselGit/river_pod
|
https://api.github.com/repos/rrousselGit/river_pod
|
closed
|
a hacky way to set state using provider, ie. ref.read(someProvider.notifier).state = someState;
|
documentation needs triage
|
```
Widget build(BuildContext context) {
ref.read(userProvider.notifier).state = user; //here!
final checkuser = ref.read(userProvider.notifier).state;
return Scaffold(
//appBar: AppBar(),
endDrawer: MyDrawer(),
body: FirstScreen(user: user),
bottomNavigationBar: MyBottomNavigationBar(selectedIndex: 0),
);
}
```
But now I heard the state is actually just `ref.read(userProvider)` and to call methods in the StateNotifier, use `ref.read(userProvider.notifier)`.
And to also use `watch` rather than `read` for easier debugging later.
Is it safe to say, these calls should be made in the same place in the code as `setState(){}` would be?
I'm having a super difficult time with the transition from login to homescreen where I'm trying to initialize the state of my user and UI based on the login data.
If possible, please make some documentation on this specific case. I have a Reddit post: https://www.reddit.com/r/flutterhelp/comments/quypl5/riverpod_when_do_you_use_provider_to_set_state/?utm_source=share&utm_medium=web2x&context=3
|
1.0
|
a hacky way to set state using provider, ie. ref.read(someProvider.notifier).state = someState; - ```
Widget build(BuildContext context) {
ref.read(userProvider.notifier).state = user; //here!
final checkuser = ref.read(userProvider.notifier).state;
return Scaffold(
//appBar: AppBar(),
endDrawer: MyDrawer(),
body: FirstScreen(user: user),
bottomNavigationBar: MyBottomNavigationBar(selectedIndex: 0),
);
}
```
But now I heard the state is actually just `ref.read(userProvider)` and to call methods in the StateNotifier, use `ref.read(userProvider.notifier)`.
And to also use `watch` rather than `read` for easier debugging later.
Is it safe to say, these calls should be made in the same place in the code as `setState(){}` would be?
I'm having a super difficult time with the transition from login to homescreen where I'm trying to initialize the state of my user and UI based on the login data.
If possible, please make some documentation on this specific case. I have a Reddit post: https://www.reddit.com/r/flutterhelp/comments/quypl5/riverpod_when_do_you_use_provider_to_set_state/?utm_source=share&utm_medium=web2x&context=3
|
non_main
|
a hacky way to set state using provider ie ref read someprovider notifier state somestate widget build buildcontext context ref read userprovider notifier state user here final checkuser ref read userprovider notifier state return scaffold appbar appbar enddrawer mydrawer body firstscreen user user bottomnavigationbar mybottomnavigationbar selectedindex but now i heard the state is actually just ref read userprovider and to call methods in the statenotifier use ref read userprovider notifier and to also use watch rather than read for easier debugging later is it safe to say these calls should be made in the same place in the code as setstate would be i m having a super difficult time with the transition from login to homescreen where i m trying to initialize the state of my user and ui based on the login data if possible please make some documentation on this specific case i have a reddit post
| 0
|
25,224
| 5,144,235,702
|
IssuesEvent
|
2017-01-12 18:03:54
|
zulip/zulip
|
https://api.github.com/repos/zulip/zulip
|
closed
|
Remove unnecessary user docs images
|
area: documentation (user) enhancement help wanted
|
Recently we've been removing a lot of content from the user docs to fit with our new [documentation styling guide](http://zulip.readthedocs.io/en/latest/README.html), but a lot of people are forgetting to delete the images in the `static/images/help` folder that have been removed from the documentation itself. These unused images take up unnecessary file space, so it'd be nice to have them all removed.
Also, a lot of the documentation uses images that look similar to each other, so the duplicated images can also be removed and the links in the documentation can be changed to use the one documentation image left.
Finally, a lot of the large documentation images shows unnecessary parts of the screen, making it hard for users to focus on the one part being discussed in a step.
|
1.0
|
Remove unnecessary user docs images - Recently we've been removing a lot of content from the user docs to fit with our new [documentation styling guide](http://zulip.readthedocs.io/en/latest/README.html), but a lot of people are forgetting to delete the images in the `static/images/help` folder that have been removed from the documentation itself. These unused images take up unnecessary file space, so it'd be nice to have them all removed.
Also, a lot of the documentation uses images that look similar to each other, so the duplicated images can also be removed and the links in the documentation can be changed to use the one documentation image left.
Finally, a lot of the large documentation images shows unnecessary parts of the screen, making it hard for users to focus on the one part being discussed in a step.
|
non_main
|
remove unnecessary user docs images recently we ve been removing a lot of content from the user docs to fit with our new but a lot of people are forgetting to delete the images in the static images help folder that have been removed from the documentation itself these unused images take up unnecessary file space so it d be nice to have them all removed also a lot of the documentation uses images that look similar to each other so the duplicated images can also be removed and the links in the documentation can be changed to use the one documentation image left finally a lot of the large documentation images shows unnecessary parts of the screen making it hard for users to focus on the one part being discussed in a step
| 0
|
424,308
| 12,309,317,244
|
IssuesEvent
|
2020-05-12 08:46:07
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
mobile.twitter.com - see bug description
|
browser-fenix engine-gecko priority-critical
|
<!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52775 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://mobile.twitter.com/search/?q=@tqsp
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: can not access to url bar to modify url (using twitter search by default)
**Steps to Reproduce**:
can not access to url to modify it,
only search is available
(using twitter search by default)
url should be available to be modified by hands when we click on url bar
firefox nightly
@tqsp
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/8c427161-f1fb-4079-8155-554127554ff9.jpeg'></details>
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/a2275899-c0db-4abb-ab18-56855bf37064.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
mobile.twitter.com - see bug description - <!-- @browser: Firefox Mobile 75.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.1.0; Mobile; rv:75.0) Gecko/75.0 Firefox/75.0 -->
<!-- @reported_with: -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/52775 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://mobile.twitter.com/search/?q=@tqsp
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: can not access to url bar to modify url (using twitter search by default)
**Steps to Reproduce**:
can not access to url to modify it,
only search is available
(using twitter search by default)
url should be available to be modified by hands when we click on url bar
firefox nightly
@tqsp
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/8c427161-f1fb-4079-8155-554127554ff9.jpeg'></details>
<details><summary>View the screenshot</summary><img alt='Screenshot' src='https://webcompat.com/uploads/2020/5/a2275899-c0db-4abb-ab18-56855bf37064.jpeg'></details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_main
|
mobile twitter com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description can not access to url bar to modify url using twitter search by default steps to reproduce can not access to url to modify it only search is available using twitter search by default url should be available to be modified by hands when we click on url bar firefox nightly tqsp view the screenshot img alt screenshot src view the screenshot img alt screenshot src browser configuration none from with ❤️
| 0
|
4,029
| 18,835,769,228
|
IssuesEvent
|
2021-11-11 00:34:20
|
aws/aws-sam-cli
|
https://api.github.com/repos/aws/aws-sam-cli
|
closed
|
Allow to mount additional volumes when invoking lambda in debugger mode
|
area/debugging stage/pm-review maintainer/need-response
|
### Describe your idea/feature/enhancement
We use a custom debugger when running .net lambda under debug mode. This debugger process produce some logs. I would like to have an option to get those logs for investigation purposes, e.g. investigate debugger process failures during lambda run. It would be great to have an option to pass an additional option through SAM CLI to mount additional volumes in Docker.
### Proposal
Extend SAM CLI to have an additional option `--additional-volume` with `multiple=True` flag that mount directories in a Docker container that run lambda.
Things to consider:
1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model)
No changes needed.
### Additional Details
The mapping should use the following rules:
- All specified volumes are mounted inside `/tmp/lambci_volumes` folder
- Each volume is mounted to a remote volume directory + host directory base name (e.g. `/Users/<user>/mount_this` should be mounted to `/tmp/lambci_volumes/mount_this` directory)
- Remote directory should have write access to be able to write logs into a folder.
|
True
|
Allow to mount additional volumes when invoking lambda in debugger mode - ### Describe your idea/feature/enhancement
We use a custom debugger when running .net lambda under debug mode. This debugger process produce some logs. I would like to have an option to get those logs for investigation purposes, e.g. investigate debugger process failures during lambda run. It would be great to have an option to pass an additional option through SAM CLI to mount additional volumes in Docker.
### Proposal
Extend SAM CLI to have an additional option `--additional-volume` with `multiple=True` flag that mount directories in a Docker container that run lambda.
Things to consider:
1. Will this require any updates to the [SAM Spec](https://github.com/awslabs/serverless-application-model)
No changes needed.
### Additional Details
The mapping should use the following rules:
- All specified volumes are mounted inside `/tmp/lambci_volumes` folder
- Each volume is mounted to a remote volume directory + host directory base name (e.g. `/Users/<user>/mount_this` should be mounted to `/tmp/lambci_volumes/mount_this` directory)
- Remote directory should have write access to be able to write logs into a folder.
|
main
|
allow to mount additional volumes when invoking lambda in debugger mode describe your idea feature enhancement we use a custom debugger when running net lambda under debug mode this debugger process produce some logs i would like to have an option to get those logs for investigation purposes e g investigate debugger process failures during lambda run it would be great to have an option to pass an additional option through sam cli to mount additional volumes in docker proposal extend sam cli to have an additional option additional volume with multiple true flag that mount directories in a docker container that run lambda things to consider will this require any updates to the no changes needed additional details the mapping should use the following rules all specified volumes are mounted inside tmp lambci volumes folder each volume is mounted to a remote volume directory host directory base name e g users mount this should be mounted to tmp lambci volumes mount this directory remote directory should have write access to be able to write logs into a folder
| 1
|
11,513
| 14,396,830,019
|
IssuesEvent
|
2020-12-03 07:05:17
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
--experimental_remote_grpc_log and --build_event_binary_file together result in bazel hanging
|
more data needed team-Remote-Exec type: support / not a bug (process) untriaged
|
This one's bizarre. I've been able to get it to happen with bazel 3.4.1 and 3.5, but haven't narrowed down a reproduction case yet; filing the bug now in case it's more obvious to somebody else what is going on. I also have remote cache enabled for this particular build.
The build will complete normally, and then bazel will print "Waiting for build events: BinaryFormatFileTransport" forever and never exit.
|
1.0
|
--experimental_remote_grpc_log and --build_event_binary_file together result in bazel hanging - This one's bizarre. I've been able to get it to happen with bazel 3.4.1 and 3.5, but haven't narrowed down a reproduction case yet; filing the bug now in case it's more obvious to somebody else what is going on. I also have remote cache enabled for this particular build.
The build will complete normally, and then bazel will print "Waiting for build events: BinaryFormatFileTransport" forever and never exit.
|
non_main
|
experimental remote grpc log and build event binary file together result in bazel hanging this one s bizarre i ve been able to get it to happen with bazel and but haven t narrowed down a reproduction case yet filing the bug now in case it s more obvious to somebody else what is going on i also have remote cache enabled for this particular build the build will complete normally and then bazel will print waiting for build events binaryformatfiletransport forever and never exit
| 0
|
380,430
| 11,260,751,107
|
IssuesEvent
|
2020-01-13 11:12:09
|
Los-nonos/zeep
|
https://api.github.com/repos/Los-nonos/zeep
|
reopened
|
ABM UserAdapters
|
backend priority: high size: 8 status: progress
|
Crear clase UserAdapter con las dependencias necesarias para el manejo de datos de la entidad User.
Usando una librería que verifique los campos pasados y luego crear el command para que el controller pueda pasarlo al handler correspondiente
|
1.0
|
ABM UserAdapters - Crear clase UserAdapter con las dependencias necesarias para el manejo de datos de la entidad User.
Usando una librería que verifique los campos pasados y luego crear el command para que el controller pueda pasarlo al handler correspondiente
|
non_main
|
abm useradapters crear clase useradapter con las dependencias necesarias para el manejo de datos de la entidad user usando una librería que verifique los campos pasados y luego crear el command para que el controller pueda pasarlo al handler correspondiente
| 0
|
2,118
| 7,203,441,015
|
IssuesEvent
|
2018-02-06 09:13:45
|
RalfKoban/MiKo-Analyzers
|
https://api.github.com/repos/RalfKoban/MiKo-Analyzers
|
opened
|
Comparing GUIDs via object.Equals() should be reported as issue
|
Area: analyzer Area: maintainability feature
|
For performance reasons, comparing GUIDs using object.Equals() should not be allowed. There is boxing/unboxing involved which is completely unnecessary as the GUIDs can never be null and have an own `Equals() ` (or `==` implementation).
|
True
|
Comparing GUIDs via object.Equals() should be reported as issue - For performance reasons, comparing GUIDs using object.Equals() should not be allowed. There is boxing/unboxing involved which is completely unnecessary as the GUIDs can never be null and have an own `Equals() ` (or `==` implementation).
|
main
|
comparing guids via object equals should be reported as issue for performance reasons comparing guids using object equals should not be allowed there is boxing unboxing involved which is completely unnecessary as the guids can never be null and have an own equals or implementation
| 1
|
5,279
| 26,679,614,088
|
IssuesEvent
|
2023-01-26 16:39:52
|
cosmos/ibc-rs
|
https://api.github.com/repos/cosmos/ibc-rs
|
closed
|
Remove support for asynchronous acks for `RecvPacket`
|
O: maintainability I: specs
|
[Asynchronous acknowledgements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-004-channel-and-packet-semantics#writing-acknowledgements) are [currently under-specified](https://github.com/cosmos/ibc/issues/917). I don't understand the semantics enough to be confident that our current implementation is correct. Hence, it makes our implementation of `RecvPacket` [more complex](https://github.com/cosmos/ibc-rs/blob/7f62c1b6fe0d123defe8bda7a707bed2edd65c70/crates/ibc/src/core/ics04_channel/handler.rs#L294-L307), while the benefits are unclear.
We should remove support for them and document it in our [list of unsupported features](https://github.com/cosmos/ibc-rs/tree/main/crates/ibc#divergence-from-the-interchain-standards-ics). We can add support at a later time if requested, when the requirements are more clear.
|
True
|
Remove support for asynchronous acks for `RecvPacket` - [Asynchronous acknowledgements](https://github.com/cosmos/ibc/tree/main/spec/core/ics-004-channel-and-packet-semantics#writing-acknowledgements) are [currently under-specified](https://github.com/cosmos/ibc/issues/917). I don't understand the semantics enough to be confident that our current implementation is correct. Hence, it makes our implementation of `RecvPacket` [more complex](https://github.com/cosmos/ibc-rs/blob/7f62c1b6fe0d123defe8bda7a707bed2edd65c70/crates/ibc/src/core/ics04_channel/handler.rs#L294-L307), while the benefits are unclear.
We should remove support for them and document it in our [list of unsupported features](https://github.com/cosmos/ibc-rs/tree/main/crates/ibc#divergence-from-the-interchain-standards-ics). We can add support at a later time if requested, when the requirements are more clear.
|
main
|
remove support for asynchronous acks for recvpacket are i don t understand the semantics enough to be confident that our current implementation is correct hence it makes our implementation of recvpacket while the benefits are unclear we should remove support for them and document it in our we can add support at a later time if requested when the requirements are more clear
| 1
|
2,240
| 7,888,877,908
|
IssuesEvent
|
2018-06-28 00:28:27
|
react-navigation/react-navigation
|
https://api.github.com/repos/react-navigation/react-navigation
|
closed
|
createDrawerNavigator ignoring initialRouteParams
|
bug component: Drawer needs action from maintainer
|
createDrawerNavigator relies on a SwitchRouter, but on building configs does not support initialRouteParams anymore so it's not possible to provide initialRouteParams.
This looks to be a regression, since this behavior worked well before migrating to version v2.
Just by adding initialRouteParams to routeConfig, SwitchRouter should be able to handle that value.
**Current Code in createDrawerNavigator**
```
const {
order,
paths,
initialRouteName,
backBehavior,
...drawerConfig
} = mergedConfig;
const routerConfig = {
order,
paths,
initialRouteName,
backBehavior,
};
```
**Suggested Fix**
```
const {
order,
paths,
initialRouteName,
initialRouteParams,
backBehavior,
...drawerConfig
} = mergedConfig;
const routerConfig = {
order,
paths,
initialRouteName,
initialRouteParams,
backBehavior,
};
```
|
True
|
createDrawerNavigator ignoring initialRouteParams - createDrawerNavigator relies on a SwitchRouter, but on building configs does not support initialRouteParams anymore so it's not possible to provide initialRouteParams.
This looks to be a regression, since this behavior worked well before migrating to version v2.
Just by adding initialRouteParams to routeConfig, SwitchRouter should be able to handle that value.
**Current Code in createDrawerNavigator**
```
const {
order,
paths,
initialRouteName,
backBehavior,
...drawerConfig
} = mergedConfig;
const routerConfig = {
order,
paths,
initialRouteName,
backBehavior,
};
```
**Suggested Fix**
```
const {
order,
paths,
initialRouteName,
initialRouteParams,
backBehavior,
...drawerConfig
} = mergedConfig;
const routerConfig = {
order,
paths,
initialRouteName,
initialRouteParams,
backBehavior,
};
```
|
main
|
createdrawernavigator ignoring initialrouteparams createdrawernavigator relies on a switchrouter but on building configs does not support initialrouteparams anymore so it s not possible to provide initialrouteparams this looks to be a regression since this behavior worked well before migrating to version just by adding initialrouteparams to routeconfig switchrouter should be able to handle that value current code in createdrawernavigator const order paths initialroutename backbehavior drawerconfig mergedconfig const routerconfig order paths initialroutename backbehavior suggested fix const order paths initialroutename initialrouteparams backbehavior drawerconfig mergedconfig const routerconfig order paths initialroutename initialrouteparams backbehavior
| 1
|
256
| 3,008,020,624
|
IssuesEvent
|
2015-07-27 19:02:38
|
borisblizzard/arcreator
|
https://api.github.com/repos/borisblizzard/arcreator
|
opened
|
Refactor of Project System to use File Handlers
|
Editor Related enhancement Maintainability
|
The current project system uses a Project object and some save and open functions. this should be refactored to Use Project and File handlers
A Project Handler will manages a layout and format of a Project and employs File handlers to load Project files and add their data to the project
A File Handler when given a File path will load and save data to that file. intended to work by using a serializer format like ARC Data.
|
True
|
Refactor of Project System to use File Handlers - The current project system uses a Project object and some save and open functions. this should be refactored to Use Project and File handlers
A Project Handler will manages a layout and format of a Project and employs File handlers to load Project files and add their data to the project
A File Handler when given a File path will load and save data to that file. intended to work by using a serializer format like ARC Data.
|
main
|
refactor of project system to use file handlers the current project system uses a project object and some save and open functions this should be refactored to use project and file handlers a project handler will manages a layout and format of a project and employs file handlers to load project files and add their data to the project a file handler when given a file path will load and save data to that file intended to work by using a serializer format like arc data
| 1
|
3,845
| 16,829,304,702
|
IssuesEvent
|
2021-06-18 00:25:46
|
IPVS-AS/MBP
|
https://api.github.com/repos/IPVS-AS/MBP
|
opened
|
Create javascript pipeline
|
maintainance
|
In order to improve the loading times of the frontend, we urgently need a javascript pipeline that prepares the javascript before creating the WAR. It should include:
- Concatenate all javascript files into one single file
- Minify this file
It would be great if we could switch between a development (without pipeline) and a productive version (with pipeline).
|
True
|
Create javascript pipeline - In order to improve the loading times of the frontend, we urgently need a javascript pipeline that prepares the javascript before creating the WAR. It should include:
- Concatenate all javascript files into one single file
- Minify this file
It would be great if we could switch between a development (without pipeline) and a productive version (with pipeline).
|
main
|
create javascript pipeline in order to improve the loading times of the frontend we urgently need a javascript pipeline that prepares the javascript before creating the war it should include concatenate all javascript files into one single file minify this file it would be great if we could switch between a development without pipeline and a productive version with pipeline
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.