Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 855 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 13 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 240k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
473,122 | 13,637,162,152 | IssuesEvent | 2020-09-25 07:21:35 | buger/goreplay | https://api.github.com/repos/buger/goreplay | closed | [1.2.0] panic: runtime error: slice bounds out of range [13:12] | Priority: High bug | ```
panic: runtime error: slice bounds out of range [13:12]
goroutine 7 [running]:
github.com/buger/goreplay/proto.HasResponseTitle(0xc00027ec66, 0xb7, 0xb7, 0xc002610200)
/go/src/github.com/buger/goreplay/proto/proto.go:357 +0x242
main.startHint(0xc002c12370, 0xc0003a0630)
/go/src/github.com/buger/goreplay/input_raw.go:203 +0x7c
github.com/buger/goreplay/tcp.(*MessagePool).Handler(0xc0003a2a40, 0x1124fc0, 0xc003d5da20)
/go/src/github.com/buger/goreplay/tcp/tcp_message.go:176 +0x9fa
github.com/buger/goreplay/capture.(*Listener).Listen(0xc00034f4a0, 0x1118860, 0xc0003a2a80, 0xc000380400, 0x0, 0x0)
/go/src/github.com/buger/goreplay/capture/capture.go:163 +0x163
github.com/buger/goreplay/capture.(*Listener).ListenBackground.func1(0xc00011efc0, 0xc00034f4a0, 0x1118860, 0xc0003a2a80, 0xc000380400)
/go/src/github.com/buger/goreplay/capture/capture.go:173 +0x7b
created by github.com/buger/goreplay/capture.(*Listener).ListenBackground
/go/src/github.com/buger/goreplay/capture/capture.go:171 +0x84
``` | 1.0 | [1.2.0] panic: runtime error: slice bounds out of range [13:12] - ```
panic: runtime error: slice bounds out of range [13:12]
goroutine 7 [running]:
github.com/buger/goreplay/proto.HasResponseTitle(0xc00027ec66, 0xb7, 0xb7, 0xc002610200)
/go/src/github.com/buger/goreplay/proto/proto.go:357 +0x242
main.startHint(0xc002c12370, 0xc0003a0630)
/go/src/github.com/buger/goreplay/input_raw.go:203 +0x7c
github.com/buger/goreplay/tcp.(*MessagePool).Handler(0xc0003a2a40, 0x1124fc0, 0xc003d5da20)
/go/src/github.com/buger/goreplay/tcp/tcp_message.go:176 +0x9fa
github.com/buger/goreplay/capture.(*Listener).Listen(0xc00034f4a0, 0x1118860, 0xc0003a2a80, 0xc000380400, 0x0, 0x0)
/go/src/github.com/buger/goreplay/capture/capture.go:163 +0x163
github.com/buger/goreplay/capture.(*Listener).ListenBackground.func1(0xc00011efc0, 0xc00034f4a0, 0x1118860, 0xc0003a2a80, 0xc000380400)
/go/src/github.com/buger/goreplay/capture/capture.go:173 +0x7b
created by github.com/buger/goreplay/capture.(*Listener).ListenBackground
/go/src/github.com/buger/goreplay/capture/capture.go:171 +0x84
``` | priority | panic runtime error slice bounds out of range panic runtime error slice bounds out of range goroutine github com buger goreplay proto hasresponsetitle go src github com buger goreplay proto proto go main starthint go src github com buger goreplay input raw go github com buger goreplay tcp messagepool handler go src github com buger goreplay tcp tcp message go github com buger goreplay capture listener listen go src github com buger goreplay capture capture go github com buger goreplay capture listener listenbackground go src github com buger goreplay capture capture go created by github com buger goreplay capture listener listenbackground go src github com buger goreplay capture capture go | 1 |
226,129 | 7,504,242,725 | IssuesEvent | 2018-04-10 02:28:36 | bugfroggy/Quickplay2.0 | https://api.github.com/repos/bugfroggy/Quickplay2.0 | closed | Premium .jars for 1.9-1.12.2 have invalid accepted versions | Bug Premium Priority: HIGH | When the latest .jars were compiled, I forgot to update the accepted versions. That means any clients running them won't start, claiming the version is invalid. | 1.0 | Premium .jars for 1.9-1.12.2 have invalid accepted versions - When the latest .jars were compiled, I forgot to update the accepted versions. That means any clients running them won't start, claiming the version is invalid. | priority | premium jars for have invalid accepted versions when the latest jars were compiled i forgot to update the accepted versions that means any clients running them won t start claiming the version is invalid | 1 |
130,470 | 5,116,693,492 | IssuesEvent | 2017-01-07 07:05:47 | HuskieRobotics/roborioExpansion | https://api.github.com/repos/HuskieRobotics/roborioExpansion | closed | Create ReadDIO poly VI to read a single DIO line? | Feature High-Priority labview roboRIO low-priority | Should we create a ReadDIO polymorphic VI similar to the ReadAI polymorphic VI that makes it easy to read a single DIO channel? | 2.0 | Create ReadDIO poly VI to read a single DIO line? - Should we create a ReadDIO polymorphic VI similar to the ReadAI polymorphic VI that makes it easy to read a single DIO channel? | priority | create readdio poly vi to read a single dio line should we create a readdio polymorphic vi similar to the readai polymorphic vi that makes it easy to read a single dio channel | 1 |
350,381 | 10,483,276,066 | IssuesEvent | 2019-09-24 13:45:43 | bmp-git/PPS-18-scala-mqtt | https://api.github.com/repos/bmp-git/PPS-18-scala-mqtt | opened | As a client I want to publish a message in a topic and subscribe to a topic. | priority: high | - [ ] Develop publish/subscribe packet parser.
- [ ] Develop publish/subscribe packet builder
- [ ] Develop MQTT publish/subscribe logic in Protocol Manager (Qos 0)
- [ ] Implement topic’s matching logic | 1.0 | As a client I want to publish a message in a topic and subscribe to a topic. - - [ ] Develop publish/subscribe packet parser.
- [ ] Develop publish/subscribe packet builder
- [ ] Develop MQTT publish/subscribe logic in Protocol Manager (Qos 0)
- [ ] Implement topic’s matching logic | priority | as a client i want to publish a message in a topic and subscribe to a topic develop publish subscribe packet parser develop publish subscribe packet builder develop mqtt publish subscribe logic in protocol manager qos implement topic’s matching logic | 1 |
794,491 | 28,038,108,983 | IssuesEvent | 2023-03-28 16:23:58 | status-im/status-mobile | https://api.github.com/repos/status-im/status-mobile | closed | The user is not navigated to 'Sign in by syncing page' if user with existing multi account taps 'Add existing Status profile' option | bug high-priority onboarding | #### Problem:
The user is not able to open 'Sign in by syncing page' flow If the current user already has a multi-account
#### Preconditions:
User is logged in
#### Steps to reproduce:
1. Sign Out
2. Tap [+]
3. Select 'Add existing Status profile' option
#### Actual result:
The user is navigated to the "I'm new to Status" flow
https://user-images.githubusercontent.com/52490791/228255311-3fac3a58-9c1d-4813-a61b-80beb243a4f6.mp4
#### Expected result:
The user is navigated to the 'Sign in by sync' flow

https://www.figma.com/file/o4qG1bnFyuyFOvHQVGgeFY/Onboarding-for-Mobile?node-id=4357-629321&t=yGoIqMSJielr7hwI-0
#### ENV:
Nightly 28 Mar 2023 | 1.0 | The user is not navigated to 'Sign in by syncing page' if user with existing multi account taps 'Add existing Status profile' option - #### Problem:
The user is not able to open 'Sign in by syncing page' flow If the current user already has a multi-account
#### Preconditions:
User is logged in
#### Steps to reproduce:
1. Sign Out
2. Tap [+]
3. Select 'Add existing Status profile' option
#### Actual result:
The user is navigated to the "I'm new to Status" flow
https://user-images.githubusercontent.com/52490791/228255311-3fac3a58-9c1d-4813-a61b-80beb243a4f6.mp4
#### Expected result:
The user is navigated to the 'Sign in by sync' flow

https://www.figma.com/file/o4qG1bnFyuyFOvHQVGgeFY/Onboarding-for-Mobile?node-id=4357-629321&t=yGoIqMSJielr7hwI-0
#### ENV:
Nightly 28 Mar 2023 | priority | the user is not navigated to sign in by syncing page if user with existing multi account taps add existing status profile option problem the user is not able to open sign in by syncing page flow if the current user already has a multi account preconditions user is logged in steps to reproduce sign out tap select add existing status profile option actual result the user is navigated to the i m new to status flow expected result the user is navigated to the sign in by sync flow env nightly mar | 1 |
184,052 | 6,700,607,494 | IssuesEvent | 2017-10-11 06:02:20 | ocf/slackbridge | https://api.github.com/repos/ocf/slackbridge | closed | Create a new IRC bot when a new user joins Slack | enhancement high-priority | Currently they would not be able to talk from Slack -> IRC until the bridge restarts.
There is an [API method for when a new user joins Slack](https://api.slack.com/events/team_join), and messages are also sent when a user joins/quits a channel (with subtypes of `channel_join` and `channel_leave`), so we should be able to hook into those. | 1.0 | Create a new IRC bot when a new user joins Slack - Currently they would not be able to talk from Slack -> IRC until the bridge restarts.
There is an [API method for when a new user joins Slack](https://api.slack.com/events/team_join), and messages are also sent when a user joins/quits a channel (with subtypes of `channel_join` and `channel_leave`), so we should be able to hook into those. | priority | create a new irc bot when a new user joins slack currently they would not be able to talk from slack irc until the bridge restarts there is an and messages are also sent when a user joins quits a channel with subtypes of channel join and channel leave so we should be able to hook into those | 1 |
659,947 | 21,945,464,259 | IssuesEvent | 2022-05-23 23:40:41 | DIT113-V22/group-14 | https://api.github.com/repos/DIT113-V22/group-14 | opened | Add unique QR code signpost | enhancement sprint #4 High Priority | - [ ] Create unique texture for each signpost
- [ ] Edit signpost asset to be slimmer so it can sit closer to pot (aids QR code processing)
- [ ] Import signposts
- [ ] Position signposts | 1.0 | Add unique QR code signpost - - [ ] Create unique texture for each signpost
- [ ] Edit signpost asset to be slimmer so it can sit closer to pot (aids QR code processing)
- [ ] Import signposts
- [ ] Position signposts | priority | add unique qr code signpost create unique texture for each signpost edit signpost asset to be slimmer so it can sit closer to pot aids qr code processing import signposts position signposts | 1 |
417,817 | 12,179,634,927 | IssuesEvent | 2020-04-28 11:02:07 | Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | https://api.github.com/repos/Warcraft-GoA-Development-Team/Warcraft-Guardians-of-Azeroth | closed | [BUG] | Vrukul Blood trait | :beetle: bug :beetle: :exclamation: priority high |
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is: master branch**
**What expansions do you have installed?
All **
**Please explain your issue in as much detail as possible:**
I have Vrykul blood trait in father and mother
Vrykul blood trait not inherited
**Upload an attachment below: .zip of your save, or screenshots:**



| 1.0 | [BUG] | Vrukul Blood trait -
**DO NOT REMOVE PRE-EXISTING LINES**
------------------------------------------------------------------------------------------------------------
-->
**Your mod version is: master branch**
**What expansions do you have installed?
All **
**Please explain your issue in as much detail as possible:**
I have Vrykul blood trait in father and mother
Vrykul blood trait not inherited
**Upload an attachment below: .zip of your save, or screenshots:**



| priority | vrukul blood trait do not remove pre existing lines your mod version is master branch what expansions do you have installed all please explain your issue in as much detail as possible i have vrykul blood trait in father and mother vrykul blood trait not inherited upload an attachment below zip of your save or screenshots | 1 |
553,491 | 16,372,869,161 | IssuesEvent | 2021-05-15 13:59:33 | GeekyEggo/SoundDeck | https://api.github.com/repos/GeekyEggo/SoundDeck | opened | New action, "Play Folder" | priority: high type: feature | ## Purpose
This issue aims to centralize the discussion of adding a new action that allows users to play audio from a folder.
## Overview
It should be possible, from a new action, to select a folder, and then play audio from that folder when pressing the action. The action should provide a lot of the functionality seen within the existing "Play Audio" action, i.e. device selection and playback action.
## Options
- Playback Device
- Should include "Default" and "Default (Communications)"
- Show list all other playback devices.
- Folder
- The folder that contains the audio files.
- Files
- All(default)
- First (fulfils #38)
- Last
- Order
- Date created
- Date modified
- File name (default)
- Random
- Title (ID3 tag)
- Track Order (ID3 tag)
- Action
- Play / Next (default)
- Play / Stop
- Play All / Stop ¹
- Loop / Stop
- Loop All / Stop ¹
- Loop All / Stop (Reset) ¹
¹ _Only available when the files selection is "All"_
## Requirements
- The action must be able to play audio similar to the "Play Audio" action.
- The action must automatically detect file changes for the selected folder.
- The action must support the following file formats
- MP3, *.mp3, *.mpga
- OGG, *.oga, *.ogg, *.opus
- WAV, *.wav
## References
- #25 Play Audio - Play from Folder
- #38 Play first file in folder | 1.0 | New action, "Play Folder" - ## Purpose
This issue aims to centralize the discussion of adding a new action that allows users to play audio from a folder.
## Overview
It should be possible, from a new action, to select a folder, and then play audio from that folder when pressing the action. The action should provide a lot of the functionality seen within the existing "Play Audio" action, i.e. device selection and playback action.
## Options
- Playback Device
- Should include "Default" and "Default (Communications)"
- Show list all other playback devices.
- Folder
- The folder that contains the audio files.
- Files
- All(default)
- First (fulfils #38)
- Last
- Order
- Date created
- Date modified
- File name (default)
- Random
- Title (ID3 tag)
- Track Order (ID3 tag)
- Action
- Play / Next (default)
- Play / Stop
- Play All / Stop ¹
- Loop / Stop
- Loop All / Stop ¹
- Loop All / Stop (Reset) ¹
¹ _Only available when the files selection is "All"_
## Requirements
- The action must be able to play audio similar to the "Play Audio" action.
- The action must automatically detect file changes for the selected folder.
- The action must support the following file formats
- MP3, *.mp3, *.mpga
- OGG, *.oga, *.ogg, *.opus
- WAV, *.wav
## References
- #25 Play Audio - Play from Folder
- #38 Play first file in folder | priority | new action play folder purpose this issue aims to centralize the discussion of adding a new action that allows users to play audio from a folder overview it should be possible from a new action to select a folder and then play audio from that folder when pressing the action the action should provide a lot of the functionality seen within the existing play audio action i e device selection and playback action options playback device should include default and default communications show list all other playback devices folder the folder that contains the audio files files all default first fulfils last order date created date modified file name default random title tag track order tag action play next default play stop play all stop ¹ loop stop loop all stop ¹ loop all stop reset ¹ ¹ only available when the files selection is all requirements the action must be able to play audio similar to the play audio action the action must automatically detect file changes for the selected folder the action must support the following file formats mpga ogg oga ogg opus wav wav references play audio play from folder play first file in folder | 1 |
109,825 | 4,414,396,674 | IssuesEvent | 2016-08-13 11:57:25 | OpenCollective/OpenCollective | https://api.github.com/repos/OpenCollective/OpenCollective | closed | Github oauth is broken | bug high priority | It errors out with redirect-uri-mismatch.
On github, we have registered `https://api.opencollective.com/connected-accounts/github/callback`.
In our requests, we are sending `https://prod-opencollective-api.herokuapp.com/connected-accounts/github/callback`.
The `herokuapp` url is coming from `API_URL` in heroku settings. Changing that to `api.opencollective.com` triggers a cloudflare alert breaking the flow altogether:
`You've requested a page on a website (api.opencollective.com) that is on the CloudFlare network. Unfortunately, it is resolving to an IP address that is creating a conflict within CloudFlare's system.` | 1.0 | Github oauth is broken - It errors out with redirect-uri-mismatch.
On github, we have registered `https://api.opencollective.com/connected-accounts/github/callback`.
In our requests, we are sending `https://prod-opencollective-api.herokuapp.com/connected-accounts/github/callback`.
The `herokuapp` url is coming from `API_URL` in heroku settings. Changing that to `api.opencollective.com` triggers a cloudflare alert breaking the flow altogether:
`You've requested a page on a website (api.opencollective.com) that is on the CloudFlare network. Unfortunately, it is resolving to an IP address that is creating a conflict within CloudFlare's system.` | priority | github oauth is broken it errors out with redirect uri mismatch on github we have registered in our requests we are sending the herokuapp url is coming from api url in heroku settings changing that to api opencollective com triggers a cloudflare alert breaking the flow altogether you ve requested a page on a website api opencollective com that is on the cloudflare network unfortunately it is resolving to an ip address that is creating a conflict within cloudflare s system | 1 |
604,595 | 18,715,299,100 | IssuesEvent | 2021-11-03 03:12:50 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | History revert option is enabled for path without write permission | bug priority: high CI | ### Bug Report
#### Crafter CMS Version
3.1.x
#### Describe the bug
Using Editorial blueprint, with the following role, a user can see the revert option on the History dialog for some paths.
```
<role name="new-role">
<rule regex="/site/website/articles/2016/12/top-books-for-young-women/index.xml">
<allowed-permissions>
<permission>Read</permission>
<permission>Write</permission>
</allowed-permissions>
</rule>
</role>
```
#### To Reproduce
Steps to reproduce the behavior:
1. Create a new site with the Editorial blueprint
2. Update role as above
3. Create a new user with `new-role` only
4. Login as a user (user1) with 'new-role' permission.
5. check history of the file - top-books-for-young-women
Here you can see the revert option. This is acceptable.
6. Now check history for other pages. eg- Coffee is Good for Your Health
Expected behaviour - The user user1 should not see any 'revert' option.
Actual Behaviour - The user can see revert option. Refer image ref-1.
#### Logs
#### Screenshots
<img width="1445" alt="ref-1" src="https://user-images.githubusercontent.com/2996543/139823286-a8572b20-b73f-45c5-a918-9bf89e8033e3.png">
| 1.0 | History revert option is enabled for path without write permission - ### Bug Report
#### Crafter CMS Version
3.1.x
#### Describe the bug
Using Editorial blueprint, with the following role, a user can see the revert option on the History dialog for some paths.
```
<role name="new-role">
<rule regex="/site/website/articles/2016/12/top-books-for-young-women/index.xml">
<allowed-permissions>
<permission>Read</permission>
<permission>Write</permission>
</allowed-permissions>
</rule>
</role>
```
#### To Reproduce
Steps to reproduce the behavior:
1. Create a new site with the Editorial blueprint
2. Update role as above
3. Create a new user with `new-role` only
4. Login as a user (user1) with 'new-role' permission.
5. check history of the file - top-books-for-young-women
Here you can see the revert option. This is acceptable.
6. Now check history for other pages. eg- Coffee is Good for Your Health
Expected behaviour - The user user1 should not see any 'revert' option.
Actual Behaviour - The user can see revert option. Refer image ref-1.
#### Logs
#### Screenshots
<img width="1445" alt="ref-1" src="https://user-images.githubusercontent.com/2996543/139823286-a8572b20-b73f-45c5-a918-9bf89e8033e3.png">
| priority | history revert option is enabled for path without write permission bug report crafter cms version x describe the bug using editorial blueprint with the following role a user can see the revert option on the history dialog for some paths read write to reproduce steps to reproduce the behavior create a new site with the editorial blueprint update role as above create a new user with new role only login as a user with new role permission check history of the file top books for young women here you can see the revert option this is acceptable now check history for other pages eg coffee is good for your health expected behaviour the user should not see any revert option actual behaviour the user can see revert option refer image ref logs screenshots img width alt ref src | 1 |
492,944 | 14,223,222,395 | IssuesEvent | 2020-11-17 17:54:40 | aims-group/metagrid | https://api.github.com/repos/aims-group/metagrid | closed | Update GitHub Actions frontend workflow to use Environment Files instead of set-env | Priority: High Type: Configuration Type: DevOps | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
The `set-env` command is deprecated and will be disabled soon. Please upgrade to using Environment Files.
https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-commands-for-github-actions#environment-files
https://stackoverflow.com/questions/61117865/how-to-set-environment-variable-in-node-js-process-when-deploying-with-github-ac
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Update GitHub Actions build files
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| 1.0 | Update GitHub Actions frontend workflow to use Environment Files instead of set-env - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
The `set-env` command is deprecated and will be disabled soon. Please upgrade to using Environment Files.
https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-commands-for-github-actions#environment-files
https://stackoverflow.com/questions/61117865/how-to-set-environment-variable-in-node-js-process-when-deploying-with-github-ac
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Update GitHub Actions build files
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| priority | update github actions frontend workflow to use environment files instead of set env is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when the set env command is deprecated and will be disabled soon please upgrade to using environment files describe the solution you d like a clear and concise description of what you want to happen update github actions build files describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here | 1 |
208,224 | 7,137,165,075 | IssuesEvent | 2018-01-23 10:03:05 | noavish/24event | https://api.github.com/repos/noavish/24event | opened | get request - all events | high-priority server-side | - [ ] build get request for all events when the page loads - work with fetch (client side) | 1.0 | get request - all events - - [ ] build get request for all events when the page loads - work with fetch (client side) | priority | get request all events build get request for all events when the page loads work with fetch client side | 1 |
817,227 | 30,631,993,235 | IssuesEvent | 2023-07-24 15:05:35 | vscentrum/vsc-software-stack | https://api.github.com/repos/vscentrum/vsc-software-stack | closed | PICRUSt2 | difficulty: easy new priority: high Python site:ugent conda | * link to support ticket: [#2023060960000701](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=122238)
* website: https://github.com/picrust/picrust2
* installation docs: https://github.com/picrust/picrust2/wiki/Installation
* toolchain: `foss/2022b`
* easyblock to use: `PythonBundle`
* required dependencies:
* see https://github.com/picrust/picrust2/blob/master/setup.py
* notes:
* ...
* effort: *(TBD)*
* other install methods
* conda: yes (https://github.com/picrust/picrust2/wiki/Installation#install-from-bioconda)
* container image: no
* pre-built binaries (RHEL8 Linux x86_64): no
* easyconfig outside EasyBuild: no
| 1.0 | PICRUSt2 - * link to support ticket: [#2023060960000701](https://otrsdict.ugent.be/otrs/index.pl?Action=AgentTicketZoom;TicketID=122238)
* website: https://github.com/picrust/picrust2
* installation docs: https://github.com/picrust/picrust2/wiki/Installation
* toolchain: `foss/2022b`
* easyblock to use: `PythonBundle`
* required dependencies:
* see https://github.com/picrust/picrust2/blob/master/setup.py
* notes:
* ...
* effort: *(TBD)*
* other install methods
* conda: yes (https://github.com/picrust/picrust2/wiki/Installation#install-from-bioconda)
* container image: no
* pre-built binaries (RHEL8 Linux x86_64): no
* easyconfig outside EasyBuild: no
| priority | link to support ticket website installation docs toolchain foss easyblock to use pythonbundle required dependencies see notes effort tbd other install methods conda yes container image no pre built binaries linux no easyconfig outside easybuild no | 1 |
5,017 | 2,570,444,483 | IssuesEvent | 2015-02-10 09:32:53 | UnifiedViews/Core | https://api.github.com/repos/UnifiedViews/Core | closed | Backend stopped | priority: High severity: bug | Now I am not sure what caused backend to stop, whether the exceptions or the DPU update, which is the last log line (developUK branch)
<pre><code>
2014-11-20 07:20:47,783 [dpu: [INTLIB] PSP Extractor] WARN exec:84 dpu:2008 c.c.m.x.odcs.backend.db.SQLDatabaseReconnectAspect - failureTolerant has caught exception
org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 72817 seconds ago. The last packet sent successfully to the server was 72817 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Error Code: 0
Call: SELECT ID FROM dpu_instance WHERE (ID = ?)
bind => [2008]
Query: DoesExistQuery(referenceClass=DPUInstanceRecord sql="SELECT ID FROM dpu_instance WHERE (ID = ?)")
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:521) ~[spring-orm-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754) ~[spring-tx-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723) ~[spring-tx-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:387) ~[spring-tx-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.transaction.aspectj.AbstractTransactionAspect.ajc$afterReturning$org_springframework_transaction_aspectj_AbstractTransactionAspect$3$2a73e96c(AbstractTransactionAspect.aj:78) ~[spring-aspects-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at cz.cuni.mff.xrg.odcs.commons.app.facade.DPUFacadeImpl.save(DPUFacadeImpl.java:270) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_72]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_72]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:319) ~[spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:80) ~[spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at cz.cuni.mff.xrg.odcs.backend.db.SQLDatabaseReconnectAspect.failureTolerant(SQLDatabaseReconnectAspect.java:101) ~[backend-1.4.1-SNAPSHOT.jar:na]
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_72]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_72]
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:621) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:610) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:65) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:90) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) [spring-aop-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at com.sun.proxy.$Proxy30.save(Unknown Source) [na:na]
at cz.cuni.mff.xrg.odcs.backend.EventListenerDatabase.onDPUEvent(EventListenerDatabase.java:29) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.backend.EventListenerDatabase.onApplicationEvent(EventListenerDatabase.java:46) [backend-1.4.1-SNAPSHOT.jar:na]
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97) [spring-context-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:327) [spring-context-3.1.4.RELEASE.jar:3.1.4.RELEASE]
at cz.cuni.mff.xrg.odcs.backend.context.Context.sendMessage(Context.java:280) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.backend.context.Context.sendMessage(Context.java:264) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.backend.context.Context.sendMessage(Context.java:257) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.opendata.linked.psp_cz.metadata.Extractor.innerExecute(Extractor.java:113) [bundlefile:na]
at cz.cuni.mff.xrg.uv.boost.dpu.advanced.DpuAdvancedBase.execute(DpuAdvancedBase.java:223) [boost-dpu-1.1.2.jar:na]
at cz.cuni.mff.xrg.odcs.backend.execution.dpu.DPUExecutor.executeInstance(DPUExecutor.java:231) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.backend.execution.dpu.DPUExecutor.execute(DPUExecutor.java:369) [backend-1.4.1-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.backend.execution.dpu.DPUExecutor.run(DPUExecutor.java:451) [backend-1.4.1-SNAPSHOT.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
Caused by: javax.persistence.RollbackException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.1.v20130918-f2b9fc5): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 72817 seconds ago. The last packet sent successfully to the server was 72817 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Error Code: 0
Call: SELECT ID FROM dpu_instance WHERE (ID = ?)
bind => [2008]
Query: DoesExistQuery(referenceClass=DPUInstanceRecord sql="SELECT ID FROM dpu_instance WHERE (ID = ?)")
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:157) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:512) ~[spring-orm-3.1.4.RELEASE.jar:3.1.4.RELEASE]
... 37 common frames omitted
Caused by: org.eclipse.persistence.exceptions.DatabaseException:
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 72817 seconds ago. The last packet sent successfully to the server was 72817 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Error Code: 0
Call: SELECT ID FROM dpu_instance WHERE (ID = ?)
bind => [2008]
Query: DoesExistQuery(referenceClass=DPUInstanceRecord sql="SELECT ID FROM dpu_instance WHERE (ID = ?)")
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1611) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:674) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:558) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1991) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:570) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:250) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectRowForDoesExist(DatasourceCallQueryMechanism.java:736) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.queries.DoesExistQuery.executeDatabaseQuery(DoesExistQuery.java:241) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:899) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.queries.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:798) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2896) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1793) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1775) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1726) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.checkForUnregisteredExistingObject(UnitOfWorkImpl.java:785) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.discoverAndPersistUnregisteredNewObjects(UnitOfWorkImpl.java:4194) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.mappings.ObjectReferenceMapping.cascadeDiscoverAndPersistUnregisteredNewObjects(ObjectReferenceMapping.java:949) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.mappings.ObjectReferenceMapping.cascadeDiscoverAndPersistUnregisteredNewObjects(ObjectReferenceMapping.java:927) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.descriptors.ObjectBuilder.cascadeDiscoverAndPersistUnregisteredNewObjects(ObjectBuilder.java:2514) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.discoverAndPersistUnregisteredNewObjects(UnitOfWorkImpl.java:4207) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.discoverUnregisteredNewObjects(RepeatableWriteUnitOfWork.java:305) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.calculateChanges(UnitOfWorkImpl.java:723) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1516) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:277) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:132) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
... 38 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 72817 seconds ago. The last packet sent successfully to the server was 72817 seconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.GeneratedConstructorAccessor215.newInstance(Unknown Source) ~[na:na]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_72]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[na:1.7.0_72]
at com.mysql.jdbc.Util.handleNewInstance(Util.java:406) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3246) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1917) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2060) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734) ~[mysql-connector-java-5.1.6.jar:na]
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1885) ~[mysql-connector-java-5.1.6.jar:na]
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96) ~[commons-dbcp-1.4.jar:1.4]
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96) ~[commons-dbcp-1.4.jar:1.4]
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeSelect(DatabaseAccessor.java:1007) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:642) ~[commons-app-1.4.1-SNAPSHOT.jar:na]
... 64 common frames omitted
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method) ~[na:1.7.0_72]
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) ~[na:1.7.0_72]
at java.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_72]
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_72]
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3227) ~[mysql-connector-java-5.1.6.jar:na]
... 73 common frames omitted
2014-11-20 07:20:47,784 [dpu: [INTLIB] PSP Extractor] WARN exec:84 dpu:2008 c.c.m.x.odcs.backend.db.SQLDatabaseReconnectAspect - Database is down after 1 attempts.
2014-11-20 07:20:53,719 [taskScheduler-6] TRACE exec: dpu: c.c.mff.xrg.odcs.commons.app.scheduling.Schedule - onTimeCheck started
2014-11-20 07:21:06,113 [taskScheduler-2] DEBUG exec: dpu: cz.cuni.mff.xrg.odcs.backend.execution.Engine - >>> Entering checkJobs()
2014-11-20 07:32:46,991 [File notifier server] DEBUG exec: dpu: c.c.m.x.o.c.app.module.osgi.OSGIChangeManager - Udating DPU in: extractor_rdfa_distiller
</code></pre> | 1.0 | Backend stopped - Now I am not sure what caused backend to stop, whether the exceptions or the DPU update, which is the last log line (developUK branch)
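The Connector/J message in the trace names the usual remedies: test connection validity before use, raise the server timeouts, or set `autoReconnect=true`. Since the trace shows the backend pooling connections through Commons DBCP 1.4 (`org.apache.commons.dbcp.DelegatingPreparedStatement`), a minimal sketch of a validating datasource is shown below — the driver URL, schema name, and credentials are placeholders, not values taken from this deployment:

```java
// Sketch only: a Commons DBCP 1.4 BasicDataSource configured so that every
// connection handed out is validated first, avoiding the stale-connection
// CommunicationsException seen above once MySQL's wait_timeout expires.
// URL, username and password below are hypothetical placeholders.
import org.apache.commons.dbcp.BasicDataSource;

public class PoolConfig {
    public static BasicDataSource validatingDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://localhost:3306/odcs"); // placeholder
        ds.setUsername("odcs");                        // placeholder
        ds.setPassword("secret");                      // placeholder

        // Validate each connection on checkout with a cheap query;
        // dead connections are discarded and replaced transparently.
        ds.setTestOnBorrow(true);
        ds.setValidationQuery("SELECT 1");

        // Also evict connections that sit idle longer than a few minutes,
        // well under MySQL's default wait_timeout of 28800 s (the trace
        // shows a connection idle for 72817 s).
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(60000);
        ds.setMinEvictableIdleTimeMillis(600000);
        return ds;
    }
}
```

Appending `autoReconnect=true` to the JDBC URL, as the driver suggests, would also silence the exception, but it reconnects underneath an open session and can lose in-flight transaction state — which is why pool-level validation is generally the safer fix here.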
the connector j connection property autoreconnect true to avoid this problem at sun reflect newinstance unknown source at sun reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java lang reflect constructor newinstance constructor java at com mysql jdbc util handlenewinstance util java at com mysql jdbc sqlerror createcommunicationsexception sqlerror java at com mysql jdbc mysqlio send mysqlio java at com mysql jdbc mysqlio sendcommand mysqlio java at com mysql jdbc mysqlio sqlquerydirect mysqlio java at com mysql jdbc connectionimpl execsql connectionimpl java at com mysql jdbc preparedstatement executeinternal preparedstatement java at com mysql jdbc preparedstatement executequery preparedstatement java at org apache commons dbcp delegatingpreparedstatement executequery delegatingpreparedstatement java at org apache commons dbcp delegatingpreparedstatement executequery delegatingpreparedstatement java at org eclipse persistence internal databaseaccess databaseaccessor executeselect databaseaccessor java at org eclipse persistence internal databaseaccess databaseaccessor basicexecutecall databaseaccessor java common frames omitted caused by java net socketexception broken pipe at java net socketoutputstream native method at java net socketoutputstream socketwrite socketoutputstream java at java net socketoutputstream write socketoutputstream java at java io bufferedoutputstream flushbuffer bufferedoutputstream java at java io bufferedoutputstream flush bufferedoutputstream java at com mysql jdbc mysqlio send mysqlio java common frames omitted psp extractor warn exec dpu c c m x odcs backend db sqldatabasereconnectaspect database is down after attempts common frames omitted psp extractor warn exec dpu c c m x odcs backend db sqldatabasereconnectaspect database is down after attempts trace exec dpu c c mff xrg odcs commons app scheduling schedule ontimecheck started debug exec dpu cz cuni mff xrg odcs backend execution 
engine entering checkjobs debug exec dpu c c m x o c app module osgi osgichangemanager udating dpu in extractor rdfa distiller | 1 |
554,033 | 16,387,847,413 | IssuesEvent | 2021-05-17 12:51:16 | gammapy/gammapy | https://api.github.com/repos/gammapy/gammapy | closed | Clean up model names | cleanup priority-high | This is a reminder issue for myself to clean up the model class names, once #2319 is done.
Moving the models broke all user scripts & notebooks anyways, so now is a good time to do renames of models.
Plan for names is to add "Source", "Spatial", "Spectral" and "Time" in a uniform way: https://github.com/gammapy/gammapy/blob/master/docs/development/pigs/pig-016.rst#introduce-gammapymodeling
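A rename of this scale is usually paired with deprecated aliases so that old user scripts keep working for a release cycle. A minimal sketch of that pattern, assuming nothing about gammapy's actual API (the class and helper names below are purely illustrative):

```python
import warnings

class SpectralPowerLaw:
    """New, uniformly named model class (illustrative name only)."""
    def __init__(self, index=2.0):
        self.index = index

def deprecated_alias(new_cls, old_name):
    """Create a drop-in class under the old name that warns on use."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "{0} has been renamed to {1}".format(old_name, new_cls.__name__),
            DeprecationWarning, stacklevel=2,
        )
        new_cls.__init__(self, *args, **kwargs)
    # Build a subclass of the new class that carries the old name.
    return type(old_name, (new_cls,), {"__init__": __init__})

# Old user scripts keep working for a deprecation period:
PowerLaw = deprecated_alias(SpectralPowerLaw, "PowerLaw")
```

Instantiating the old name still works but emits a `DeprecationWarning` pointing at the new name, which softens the breakage for notebooks and scripts.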
528,108 | 15,360,232,594 | IssuesEvent | 2021-03-01 16:42:32 | mantidproject/mantid | https://api.github.com/repos/mantidproject/mantid | closed | ISIS reduction always crashes | Bug Direct Inelastic High Priority Stale | <!-- TEMPLATE FOR BUG REPORTS -->
**Original reporter:** [ISIS excitation group]/[DaaaS testers]. <!--If the issue was raised by a user they should be named here.-->
Att of: @martyngigg; @NickDraper
### Expected behavior
### Actual behavior
ISIS reduction scripts always crash on all Unix machines when run long enough. It seems there is a memory leak. Windows machines are also affected but are more difficult to diagnose.
### Steps to reproduce the behavior
Typical ISIS reduction scripts demonstrating this behaviour are located at:
1) [LET/ReductionWrapper_withPerformance.py](https://github.com/mantidproject/scriptrepository/blob/master/direct_inelastic/LET/ReductionWrapper_withPerformance.py) -- the logger,
containing the normal ISIS inelastic data reduction cycle with an added facility to measure performance and memory usage of the reduction. It currently deploys the `free -m` Unix command and the Python `resource` module to measure memory usage. The performance logs are stored in the script directory as text files with names built from the instrument name and the host name.
and:
2) [MERLINReduction_2018_2WithPerformance.py](https://github.com/mantidproject/scriptrepository/blob/master/direct_inelastic/MERLIN/MERLINReduction_2018_2WithPerformance.py)
An executor script, which is a normal ISIS reduction script modified to use module (1). Other executors are available, but this one is the most efficient.
The less memory a machine has, the faster the issue occurs, though isiscompute with 0.5Tb of memory is also susceptible when multiple users occupy a substantial chunk of memory.
To demonstrate the issue, one should place these two files in a single folder of a Unix machine which has access to the archive mounted at root. The machine should also have the ISIS instrument parameters in /usr/local/mprogs/InstrumentFiles. See the executor script rows 150-151, where the necessary paths to the data are specified. The results and the log files are written into the script folder.
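The memory probing described above (the Python `resource` module plus the Unix `free -m` command) can be sketched roughly as follows; this is a simplified stand-in for what the logger script does, not its actual code:

```python
import resource
import subprocess

def memory_report():
    """Return log lines describing current memory usage, in the same
    spirit as the performance logger described above (a sketch only)."""
    # Peak RSS of this process and of its children, via the resource
    # module. On Linux, ru_maxrss is reported in kilobytes.
    own = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    kids = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    lines = ["*** Self Memory: {0}Kb; Kids memory {1}Kb".format(own, kids)]
    try:
        # System-wide view from the Unix `free -m` command.
        out = subprocess.check_output(["free", "-m"]).decode()
        lines.extend("*** " + line for line in out.splitlines())
    except OSError:
        # Once the leaking process is large enough, fork/exec itself
        # fails -- this branch corresponds to the "Can not launch
        # subprocess to evaluate free memory." entries in the logs.
        lines.append("*** Can not launch subprocess to evaluate free memory.")
    return lines
```

Note that `ru_maxrss` is in kilobytes on Linux but in bytes on macOS, and that the failure to spawn `free -m` is itself a symptom of the leak, as the log excerpts below show.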
The typical issue can be best reproduced on a 32Gb DaaaS virtual machine (<https://isis.analysis.stfc.ac.uk/#/login>) as follows.
The reduction script starts and has the following memory and performance characteristics, printed in the log file:
>
*** Test started: 29/10/2018 at 17:39
*** Self Memory: 487.6640625Kb; Kids memory 136.0234375Kb
*** total used free shared buff/cache available
*** Mem: 32169 700 30789 13 679 31027 -- **these are the values returned by free -m**
*** Swap: 2047 647 1400
*** ---> File MER41624.nxs processed in 76.16sec
*** Self Memory: 15167.0Kb; Kids memory 305.0Kb -- **these are the values retrieved from resources**
*** Mem: 32169 15322 16087 13 758 16401
*** Swap: 2047 647 1400
*** ---> File MER41625.nxs processed in 33.68sec
*** Self Memory: 18495.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
*** ---> File MER41626.nxs processed in 34.54sec
*** Self Memory: 18495.0Kb; Kids memory 14860.0Kb
*** Mem: 32169 5221 26103 13 844 26502
*** Swap: 2047 647 1400
The memory reported by the **resource** module starts creeping up.
When this memory reaches ~19Mb, the script cannot start a subprocess to evaluate memory any more:
>
*** ---> File MER41650.nxs processed in 32.15sec
*** Self Memory: 18495.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
*** ---> File MER41651.nxs processed in 32.60sec
*** Self Memory: 18495.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
*** ---> File MER41652.nxs processed in 31.37sec
*** Self Memory: 18495.0Kb; Kids memory 14860.0Kb
When this memory reaches ~25Mb Mantid crashes. Clearing Mantid memory using **ClearCaches** does not make any difference.
*** ---> File MER41842.nxs processed in 63.45sec
*** Self Memory: 24362.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
*** ---> File MER41843.nxs processed in 30.09sec
*** Self Memory: 24564.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
*** ---> File MER41844.nxs processed in 38.35sec
*** Self Memory: 24625.0Kb; Kids memory 14860.0Kb
*** Can not launch subprocess to evaluate free memory.
Crash
The critical values differ slightly on machines with different amounts of memory, but the general trend is the same.
For example, on a 128Gb 30CPU DaaaS virtual machine, the issue starts later (verified):
*** ---> File MER41847.nxs processed in 54.98sec
*** Self Memory: 58015.0Kb; Kids memory 46704.0Kb
*** Mem: 125909 62617 51295 56 11995 62118
*** Swap: 2047 0 2047
*** ---> File MER41848.nxs processed in 71.88sec
*** Self Memory: 69115.0Kb; Kids memory 57803.0Kb
*** Can not launch subprocess to evaluate free memory.
......
*** ---> File MER41853.nxs processed in 142.63sec
*** Self Memory: 122926.0Kb; Kids memory 57803.0Kb
*** Can not launch subprocess to evaluate free memory.
Crash
Note, a complication: when isiscompute (0.5Tb of RAM) is not occupied, the creep-up does not occur.
When it has a lot of users, the behaviour is similar.
The issue is critical for excitation group.
### Platforms affected
Unix (RHEL7). There are signs of its presence on Windows, but thorough tests were not performed.
6,113 | 2,583,203,003 | IssuesEvent | 2015-02-16 01:37:27 | afollestad/cabinet-issue-tracker | https://api.github.com/repos/afollestad/cabinet-issue-tracker | closed | Ask for password on each SFTP connection | enhancement high priority | I don't want to be forced to save my password when I create an SFTP connection. The better (or an additional) way would be to ask for the password when connecting to the server.
THX
273,604 | 8,550,755,600 | IssuesEvent | 2018-11-07 16:17:13 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | Deleting a relationship deletes only that entry in the DB, not the other languages | Bug Priority: High Status: Sprint | If you access the 'relationship delete' method, it will delete only that particular query.
If _id is passed, it will delete only a single document in the DB, and leave the other shared ID documents lingering as ghosts.
This is what has created inconsistent relationships across different languages.
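The fix implied by the report is to delete by the shared ID rather than by a single `_id`. A minimal sketch of that logic, using an in-memory list as a stand-in for the collection (field names are assumed from the issue text, not taken from Uwazi's actual schema):

```python
def delete_relationship(docs, shared_id):
    """Delete every language copy of a relationship, not just one row.

    `docs` stands in for the DB collection: dicts with `_id`,
    `sharedId` and `language` keys (assumed field names).
    """
    # Filtering on _id would drop a single language's entry and leave
    # the other translations behind as ghosts; filtering on the shared
    # ID removes the whole cross-language group in one pass.
    return [d for d in docs if d["sharedId"] != shared_id]
```

Against a real MongoDB collection, the equivalent change would be a delete-many filtered on the shared ID instead of a delete-one filtered on `_id`.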
521,918 | 15,145,072,624 | IssuesEvent | 2021-02-11 03:16:53 | woocommerce/woocommerce | https://api.github.com/repos/woocommerce/woocommerce | closed | PHP8 Uncaught TypeError: Unsupported operand types: string + string | priority: high type: bug | **Prerequisites (mark completed items with an [x]):**
- [x] I have have carried out troubleshooting steps and I believe I have found a bug.
- [x] I have searched for similar bugs in both open and closed issues and cannot find a duplicate.
**Describe the bug**
When trying to save an order in PHP 8.0 (in WC5.0) I get the following error:
Fatal error: Uncaught TypeError: Unsupported operand types: string + string in wp-content/plugins/woocommerce/includes/abstracts/abstract-wc-order.php:1588
> Stack trace:
> 0 wp-content/plugins/woocommerce/includes/admin/wc-admin-functions.php(414): WC_Abstract_Order->update_taxes()
> 1 wp-content/plugins/woocommerce/includes/admin/meta-boxes/class-wc-meta-box-order-items.php(54): wc_save_order_items(1146, Array)
> 2 wp-includes/class-wp-hook.php(289): WC_Meta_Box_Order_Items::save(1146)
> 3 wp-includes/class-wp-hook.php(311): WP_Hook->apply_filters('', Array)
> 4 wp-includes/plugin.php(484): WP_Hook->do_action(Array)
> 5 wp-content/plugins/woocommerce/includes/admin/class-wc-admin-meta-boxes.php(220): do_action('woocommerce_pro...', 1146, Object(WP_Post))
> 6 wp-includes/class-wp-hook.php(289): WC_Admin_Meta_Boxes->save_meta_boxes(1146, Object(WP_Post))
> 7 wp-includes/class-wp-hook.php(311): WP_Hook->apply_filters(NULL, Array)
> 8 wp-includes/plugin.php(484): WP_Hook->do_action(Array)
> 9 wp-includes/post.php(4309): do_action('save_post', 1146, Object(WP_Post), true)
> 10 wp-includes/post.php(4411): wp_insert_post(Array, false, true)
> 11 wp-admin/includes/post.php(419): wp_update_post(Array)
> 12 wp-admin/post.php(227): edit_post()
> 13 {main} thrown in wp-content/plugins/woocommerce/includes/abstracts/abstract-wc-order.php on line 1588
https://github.com/woocommerce/woocommerce/blob/66a1c169f749a03e090cf66c6ee032580dcdf424/includes/abstracts/abstract-wc-order.php#L1588
**Expected behavior**
Tax calculations should not use strings.
The value should be cast/converted to float here:
https://github.com/woocommerce/woocommerce/blob/c15488d8402d149a1a6551d73057d31a0730bddb/includes/traits/trait-wc-item-totals.php#L84-L89
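The underlying pattern is that per-rate tax amounts are stored as strings, and PHP 8 throws a `TypeError` when a non-numeric string (such as an empty string) is used in arithmetic, where PHP 7 silently treated it as zero. A language-neutral sketch of the suggested fix, written here in Python (names are illustrative, not WooCommerce's code):

```python
def sum_taxes(stored_amounts):
    """Sum per-rate tax amounts that were stored as strings.

    Mirrors the suggested fix for trait-wc-item-totals: cast each
    value to float before adding, treating empty/blank strings as
    zero instead of letting them reach the arithmetic.
    """
    total = 0.0
    for amount in stored_amounts:
        # An empty string is exactly what trips PHP 8's TypeError on
        # `string + string`; coerce any blank value to 0 explicitly.
        total += float(amount) if str(amount).strip() else 0.0
    return round(total, 2)
```

`sum_taxes(['1.20', '', '0.55'])` returns `1.75`; the empty value that would throw in PHP 8 is coerced to zero explicitly.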
**Actual behavior**
see error
**Steps to reproduce the bug (We need to be able to reproduce the bug in order to fix it.)**
Steps to reproduce the bug:
1. Enable the Tax option Rounding - Round tax at subtotal level, instead of rounding per line
2. Go to order
3. Save order
4. See error
**Isolating the problem (mark completed items with an [x]):**
- [x] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active.
- [x] This bug happens with a default WordPress theme active, or [Storefront](https://woocommerce.com/storefront/).
- [x] I can reproduce this bug consistently using the steps above.
3,117 | 2,537,074,573 | IssuesEvent | 2015-01-26 18:07:33 | NCPP/ocgis | https://api.github.com/repos/NCPP/ocgis | closed | allow messaging with callback function | enhancement high priority | who: @tatarinova
The callback implementation will be modeled after: https://gist.github.com/bekozi/b4cad27905bff7feb1ee
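Since ocgis is a Python package, the callback-based messaging being requested can be sketched generically; the gist linked above defines the intended pattern, and its exact signature may differ from this illustration:

```python
def reduce_files(files, callback=None):
    """Run an operation over many files, reporting progress via a callback.

    `callback(fraction, message)` is invoked after each file; passing
    `None` disables messaging entirely. (Signature assumed here for
    illustration only.)
    """
    results = []
    for i, path in enumerate(files, start=1):
        results.append(path.upper())  # placeholder for the real reduction work
        if callback is not None:
            callback(float(i) / len(files), "processed {0}".format(path))
    return results
```

The caller supplies any callable taking `(fraction, message)` -- a print function, a GUI progress bar, a logger -- and `callback=None` preserves the current silent behaviour.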
548,712 | 16,074,198,331 | IssuesEvent | 2021-04-25 02:55:58 | rich-iannone/pointblank | https://api.github.com/repos/rich-iannone/pointblank | opened | Provide styled console output when using `yaml_exec()` | Difficulty: [2] Intermediate Effort: [2] Medium Priority: [3] High Type: ★ Enhancement | When using `yaml_exec()` to process YAML agents and informants *en masse*, it would be nice to be notified in the console about what happened (during interactive sessions). We can use {cli}-formatted messages like elsewhere in the package.
Also, the function should invisibly return *something* about what was written. Right now, it always returns `NULL`.
| 1.0 | Provide styled console output when using `yaml_exec()` - When using `yaml_exec()` to process YAML agents and informants *en masse*, it would be nice to be notified in the console about what happened (during interactive sessions). We can use {cli}-formatted messages like elsewhere in the package.
Also, the function should invisibly return *something* about what was written. Right now, it always returns `NULL`.
| priority | provide styled console output when using yaml exec when using yaml exec to process yaml agents and informants en masse it would be nice to be notified in the console about what happened during interactive sessions we can use cli formatted messages like elsewhere in the package also the function should invisibly return something about what was written right now it always returns null | 1 |
235,907 | 7,744,104,396 | IssuesEvent | 2018-05-29 14:34:55 | Gloirin/m2gTest | https://api.github.com/repos/Gloirin/m2gTest | closed | 0001118:
Preferences -> Default Application shows unchosable options | Tinebase bug high priority | **Reported by bbalazs on 27 Jun 2009 01:45**
**Version:** Leonie (2009-07) Milestone 1
In the selection box the user can choose apps that are actually not available to the user and the option "tinebase" which should not be available.
| 1.0 | 0001118:
Preferences -> Default Application shows unchosable options - **Reported by bbalazs on 27 Jun 2009 01:45**
**Version:** Leonie (2009-07) Milestone 1
In the selection box the user can choose apps that are actually not available to the user and the option "tinebase" which should not be available.
| priority | preferences default application shows unchosable options reported by bbalazs on jun version leonie milestone in the selection box the user can choose apps that are actually not available to the user and the option quot tinebase quot which should not be available | 1 |
465,464 | 13,386,322,241 | IssuesEvent | 2020-09-02 14:35:14 | CDH-Studio/UpSkill | https://api.github.com/repos/CDH-Studio/UpSkill | opened | Fix edge cases when updating the job title | Backend Front-end High priority bug | **Describe the bug**
When updating the job title, it often will not properly update or create it, and it may overwrite the title in the other language - overall, it is just not a good experience.
**Additional context**
The way that the job title is being updated is a bit "hacky" and is not respecting the language of that API call. In the backend, when updating a profile, the call needs to specify the language, but when GEDS and, subsequently, the job title, it is passing both the English and French data in the same call - which should be done in two separate calls, to respect the original backend API structure.
The correct way to fix this would be to also alter the GEDS update profile call to only send the data in the appropriate call language specified, instead of both in the same call. This creates complexity in the backend and makes it less readable.
**To Reproduce**
Play with updating the job title in the forms
**Expected behavior**
For it to be properly saved
| 1.0 | Fix edge cases when updating the job title - **Describe the bug**
When updating the job title, it often will not properly update or create it, and it may overwrite the title in the other language - overall, it is just not a good experience.
**Additional context**
The way that the job title is being updated is a bit "hacky" and is not respecting the language of that API call. In the backend, when updating a profile, the call needs to specify the language, but when GEDS and, subsequently, the job title, it is passing both the English and French data in the same call - which should be done in two separate calls, to respect the original backend API structure.
The correct way to fix this would be to also alter the GEDS update profile call to only send the data in the appropriate call language specified, instead of both in the same call. This creates complexity in the backend and makes it less readable.
**To Reproduce**
Play with updating the job title in the forms
**Expected behavior**
For it to be properly saved
| priority | fix edge cases when updating the job title describe the bug when updating the job title often times it will not properly update or create it and may overwrite it in the other language it is just not an overall good experience additional context the way that the job title is being updated is a bit hacky and is not respecting the language of that api call in the backend when updating a profile the call needs to specify the language but when geds and subsequently the job title it is passing both the english and french data in the same call which should be done in two separate calls to respect the original backend api structure the correct way to fix this would be to also alter the geds update profile call to only send the data in the appropriate call language specified instead of both in the same call this creates complexity in the backend and makes it less readable to reproduce play with updating the job title in the forms expected behavior for it to be properly saved | 1 |
698,984 | 23,999,017,958 | IssuesEvent | 2022-09-14 09:51:54 | CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | https://api.github.com/repos/CS3219-AY2223S1/cs3219-project-ay2223s1-g33 | opened | [Collaboration UI] User Join Notification | Module/Front-End Status/High-Priority Type/Feature | ## Description
The UI should prompt the user when the other user has joined the session.
## Parent Task
- #72 | 1.0 | [Collaboration UI] User Join Notification - ## Description
The UI should prompt the user when the other user has joined the session.
## Parent Task
- #72 | priority | user join notification description the ui should prompt the user when the other user has joined the session parent task | 1 |
44,409 | 2,904,745,320 | IssuesEvent | 2015-06-18 19:47:46 | JMurk/Utility_Viewer_Issues | https://api.github.com/repos/JMurk/Utility_Viewer_Issues | closed | Utility Viewer - Identify Tool Enhancement | enhancement high priority | Update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down. | 1.0 | Utility Viewer - Identify Tool Enhancement - Update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down. | priority | utility viewer identify tool enhancement update the identify tool so that it automatically identifies all visible layers without having to select layers from the drop down | 1 |
195,216 | 6,905,309,621 | IssuesEvent | 2017-11-27 06:17:55 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Slider follow circles start rectangular | bug framework fix required high priority | They're supposed to always be circular.
This is an issue with `CircularContainer`'s Invalidate(), but there's a reason for why it's done the way it is now, and there's a reason why it was done inside Invalidate() previously - traversing Git history will be required to figure out why as both solutions had issues.
**Test cases + comments are a must.**
Making this issue on osu! side because the slider follow circles provide a starting point to resolving this issue. | 1.0 | Slider follow circles start rectangular - They're supposed to always be circular.
This is an issue with `CircularContainer`'s Invalidate(), but there's a reason for why it's done the way it is now, and there's a reason why it was done inside Invalidate() previously - traversing Git history will be required to figure out why as both solutions had issues.
**Test cases + comments are a must.**
Making this issue on osu! side because the slider follow circles provide a starting point to resolving this issue. | priority | slider follow circles start rectangular they re supposed to always be circular this is an issue with circularcontainer s invalidate but there s a reason for why it s done the way it is now and there s a reason why it was done inside invalidate previously traversing git history will be required to figure out why as both solutions had issues test cases comments are a must making this issue on osu side because the slider follow circles provide a starting point to resolving this issue | 1 |
477,297 | 13,759,369,883 | IssuesEvent | 2020-10-07 02:49:03 | AY2021S1-CS2113-T13-2/tp | https://api.github.com/repos/AY2021S1-CS2113-T13-2/tp | closed | Find module by name | priority.High type.Task | Associate this issue with milestone 1
label : priority high + task
author : Jian Xiang
| 1.0 | Find module by name - Associate this issue with milestone 1
label : priority high + task
author : Jian Xiang
| priority | find module by name associate this issue with milestone label priority high task author jian xiang | 1 |
131,093 | 5,142,698,136 | IssuesEvent | 2017-01-12 14:06:16 | hpi-swt2/workshop-portal | https://api.github.com/repos/hpi-swt2/workshop-portal | closed | US_1.21: "Anfragen" for Workshops | High Priority team-helene | **As**
user
**I want to**
be able to make requests through the portal for a workshop with my group, class or something like that.
**in order to**
make the process more simple
** 🍪**
Acceptance Criteria
- [ ] I have input data fields for
*Anrede (dropdown, Herr/Frau)
*Vorname und Nachname
*Telefonnummer
*Adresse
*EMail Adresse
*Thema des gewünschten Workshops (2 Wünsche)
Zeitrahmen (plain text field)
Teilnehmeranzahl
Kenntnisstand der Teilnehmer
Freitext zur Anfrage (hint: Anmerkungen)
- [ ] I have send button
**Tasks:**
- [x] create migrations 🗓 6.1.17
- [x] create validations 🗓 (~6.1.17)
- [ ] create view & tests | 1.0 | US_1.21: "Anfragen" for Workshops - **As**
user
**I want to**
be able to make requests through the portal for a workshop with my group, class or something like that.
**in order to**
make the process more simple
** 🍪**
Acceptance Criteria
- [ ] I have input data fields for
*Anrede (dropdown, Herr/Frau)
*Vorname und Nachname
*Telefonnummer
*Adresse
*EMail Adresse
*Thema des gewünschten Workshops (2 Wünsche)
Zeitrahmen (plain text field)
Teilnehmeranzahl
Kenntnisstand der Teilnehmer
Freitext zur Anfrage (hint: Anmerkungen)
- [ ] I have send button
**Tasks:**
- [x] create migrations 🗓 6.1.17
- [x] create validations 🗓 (~6.1.17)
- [ ] create view & tests | priority | us anfragen for workshops as user i want to be able to make requests through the portal for a workshop with my group class or something like that in order to make the process more simple 🍪 acceptance criteria i have input data fields for anrede dropdown herr frau vorname und nachname telefonnummer adresse email adresse thema des gewünschten workshops wünsche zeitrahmen plain text field teilnehmeranzahl kenntnisstand der teilnehmer freitext zur anfrage hint anmerkungen i have send button tasks create migrations 🗓 create validations 🗓 create view tests | 1 |
57,790 | 3,083,794,796 | IssuesEvent | 2015-08-24 11:22:36 | mPowering/django-orb | https://api.github.com/repos/mPowering/django-orb | closed | Terms link on registration form not prominent | Effort: < 1 day enhancement high priority | Feedback from user:
The “terms” link didn’t stand out visually—higher contrast would be useful. I was looking for a separate link before I accidentally moused-over the inline link.
perhaps put in bold - also look at the criteria/guidance links on the add resource form - same issue? | 1.0 | Terms link on registration form not prominent - Feedback from user:
The “terms” link didn’t stand out visually—higher contrast would be useful. I was looking for a separate link before I accidentally moused-over the inline link.
perhaps put in bold - also look at the criteria/guidance links on the add resource form - same issue? | priority | terms link on registration form not prominent feedback from user the “terms” link didn’t stand out visually—higher contrast would be useful i was looking for a separate link before i accidentally moused over the inline link perhaps put in bold also look at the criteria guidance links on the add resource form same issue | 1 |
167,057 | 6,331,565,919 | IssuesEvent | 2017-07-26 10:15:29 | yarnpkg/yarn | https://api.github.com/repos/yarnpkg/yarn | closed | ignore-optional flag works not like expected | bug-high-priority bug-linking good-first-contribution triaged | **Do you want to request a *feature* or report a *bug*?**
Bug / feature ?
**What is the current behavior?**
`yarn install --ignore-optional` also tries to install the optional dependencies of sub-modules. The whole install process fails if one of the optional module installations fails.
**What is the expected behavior?**
To skip the optional dependencies of sub modules or to continue on errors.
**Please mention your node.js, yarn and operating system version.**
Node 7.5.0, Yarn v0.19.1, Ubuntu 14.04 | 1.0 | ignore-optional flag works not like expected - **Do you want to request a *feature* or report a *bug*?**
Bug / feature ?
**What is the current behavior?**
`yarn install --ignore-optional` also tries to install the optional dependencies of sub-modules. The whole install process fails if one of the optional module installations fails.
**What is the expected behavior?**
To skip the optional dependencies of sub modules or to continue on errors.
**Please mention your node.js, yarn and operating system version.**
Node 7.5.0, Yarn v0.19.1, Ubuntu 14.04 | priority | ignore optional flag works not like expected do you want to request a feature or report a bug bug feature what is the current behavior yarn install ignore optional tries to install also optional dependencies of sub modules the while install process fails if one of the optional module installation fails what is the expected behavior to skip the optional dependencies of sub modules or to continue on errors please mention your node js yarn and operating system version node yarn ubuntu | 1 |
438,994 | 12,675,337,109 | IssuesEvent | 2020-06-19 01:21:16 | openchronology/openchronology.github.io | https://api.github.com/repos/openchronology/openchronology.github.io | closed | BeginTime and EndTime for TimeScale | point: 13 priority: high type: necessity type: refactor | TimeScales need the ability to denote what their bounds are (if any), and _decide_ which type they're using for their constituent TimeSpace.
> **Note**: changes between timespace units (i.e. number to datetime, or datetime to string) may
> run into two scenarios:
>
> 1. Lossy - data is forgotten, but the morphism is still possible without any extra data
> 2. Needy - data is required; for instance number to datetime - what is 0? The increment? etc. | 1.0 | BeginTime and EndTime for TimeScale - TimeScales need the ability to denote what their bounds are (if any), and _decide_ which type they're using for their constituent TimeSpace.
> **Note**: changes between timespace units (i.e. number to datetime, or datetime to string) may
> run into two scenarios:
>
> 1. Lossy - data is forgotten, but the morphism is still possible without any extra data
> 2. Needy - data is required; for instance number to datetime - what is 0? The increment? etc. | priority | begintime and endtime for timescale timescales need the ability to denote what their bounds are if any and decide which type they re using for their constituent timespace note changes between timespace units i e number to datetime or datetime to string may run into two scenarios lossy data is forgotten but the morphism is still possible without any extra data needy data is required for instance number to datetime what is the increment etc | 1 |
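The "needy" scenario described above (a bare number only becomes a datetime once an epoch and an increment are supplied) can be sketched as follows. The function names are hypothetical illustrations, not part of the openchronology codebase:

```cpp
#include <chrono>

using Clock = std::chrono::system_clock;

// "Needy" morphism: a bare number carries no datetime meaning until the
// extra data is supplied -- what 0 maps to (the epoch) and how long one
// unit is (the increment).
Clock::time_point numberToDatetime(double n, Clock::time_point epoch,
                                   Clock::duration unit) {
    return epoch + std::chrono::duration_cast<Clock::duration>(n * unit);
}

// The reverse direction needs the same extra data; a rendering that drops
// it (e.g. datetime -> string) is closer to the "lossy" case: the morphism
// still exists without further input, but epoch/unit information is forgotten.
double datetimeToNumber(Clock::time_point t, Clock::time_point epoch,
                        Clock::duration unit) {
    return std::chrono::duration<double>(t - epoch) /
           std::chrono::duration<double>(unit);
}
```

With a day-long increment, round-tripping a whole number of days is exact, which is why the "needy" direction is recoverable once the missing data is fixed.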
543,363 | 15,880,675,172 | IssuesEvent | 2021-04-09 13:58:15 | wso2/product-apim | https://api.github.com/repos/wso2/product-apim | closed | [Publisher] Spacing needed after "of" in Edit Overview page | API-M 4.0.0 Priority/High React-UI Type/Bug | ### Description:

API-M 4.0.0 - Beta | 1.0 | [Publisher] Spacing needed after "of" in Edit Overview page - ### Description:

API-M 4.0.0 - Beta | priority | spacing needed after of in edit overview page description api m beta | 1 |
168,922 | 6,391,944,059 | IssuesEvent | 2017-08-04 00:19:32 | ampproject/amphtml | https://api.github.com/repos/ampproject/amphtml | opened | Missing implementation for WritableStreamDefaultWriter | Category: Framework Category: Tooling P1: High Priority Type: Bug | we currently have a custom extern for streams found in https://github.com/ampproject/amphtml/blob/master/third_party/closure-compiler/externs/streams.js
We will be replacing this with the official extern soon and need to implement the missing methods/props (see https://github.com/google/closure-compiler/blob/master/externs/browser/streamsapi.js#L371 for full interface)
cc @dvoytenko | 1.0 | Missing implementation for WritableStreamDefaultWriter - we currently have a custom extern for streams found in https://github.com/ampproject/amphtml/blob/master/third_party/closure-compiler/externs/streams.js
We will be replacing this with the official extern soon and need to implement the missing methods/props (see https://github.com/google/closure-compiler/blob/master/externs/browser/streamsapi.js#L371 for full interface)
cc @dvoytenko | priority | missing implementation for writablestreamdefaultwriter we currently have a custom extern for streams found in we will be replacing this with the official extern soon and need to implement the missing methods props see for full interface cc dvoytenko | 1 |
384,273 | 11,386,538,374 | IssuesEvent | 2020-01-29 13:29:03 | FAIRsharing/fairsharing.github.io | https://api.github.com/repos/FAIRsharing/fairsharing.github.io | opened | Need of data | High priority | We now need some real data (or a sample of real data) in the database so that we can start working on facetting and authentication. | 1.0 | Need of data - We now need some real data (or a sample of real data) in the database so that we can start working on facetting and authentication. | priority | need of data we now need some real data or a sample of real data in the database so that we can start working on facetting and authentication | 1 |
689,704 | 23,631,103,592 | IssuesEvent | 2022-08-25 09:23:30 | PIP-Technical-Team/pipapi | https://api.github.com/repos/PIP-Technical-Team/pipapi | closed | Disable `popshare` option for aggregate distribution | Priority: 1-Blocker Priority: 2-High Type: 1-Bug | `popshare` computations are incorrect for aggregate distributions | 2.0 | Disable `popshare` option for aggregate distribution - `popshare` computations are incorrect for aggregate distributions | priority | disable popshare option for aggregate distribution popshare computations are incorrect for aggregate distributions | 1 |
258,037 | 8,150,349,524 | IssuesEvent | 2018-08-22 12:43:50 | cms-gem-daq-project/cmsgemos | https://api.github.com/repos/cms-gem-daq-project/cmsgemos | closed | Bug Report: Error encountered when building gempython | Priority: High Type: Bug | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
Build error encountered when trying to build `cmsgemos_gempython`
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Should build without issue.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Received following build error:
```bash
g++ -g -O2 -Wall -fPIC -fno-omit-frame-pointer -DGIT_VERSION=\"v0.3.2-78-gb3bfbe8-dirty\" -DGEMDEVELOPER=\"dorney\" -std=c++1y -std=gnu++1y -DOS_VERSION_CODE=199168 -Dx86_64_centos -Dlinux -DLITTLE_ENDIAN__ -I/opt/xdaq/config -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/linux -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include -I/opt/xdaq/include -I/usr/include/python2.7 -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include -I/opt/cactus/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/x86_64_centos/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/x86_64_centos/include/linux -I/opt/xdaq/include -I/opt/xdaq/include/linux -c -o /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/linux/x86_64_centos/HwGenericAMC.o /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc: In member function ‘void gem::hw::HwGenericAMC::ttcMMCMPhaseShift(bool, bool, bool)’:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:975:19: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:12:20: note: in expansion of macro ‘LOG4CPLUS_DEBUG’
#define DEBUG(MSG) LOG4CPLUS_DEBUG(m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:974:13: note: in expansion of macro ‘DEBUG’
DEBUG("HwGenericAMC::ttcMMCMPhaseShift 500 unlocks found after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:997:15: error: expected ‘;’ before ‘getGEMHwInterface’
getGEMHwInterface().getNode("GEM_AMC.TTC.CTRL.PA_MANUAL_SHIFT_DIR").write(0);
^
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1023:18: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:14:20: note: in expansion of macro ‘LOG4CPLUS_WARN’
#define WARN( MSG) LOG4CPLUS_WARN( m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1022:13: note: in expansion of macro ‘WARN’
WARN("HwGenericAMC::ttcMMCMPhaseShift Unexpected unlock after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1032:20: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:13:20: note: in expansion of macro ‘LOG4CPLUS_INFO’
#define INFO( MSG) LOG4CPLUS_INFO( m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1031:15: note: in expansion of macro ‘INFO’
INFO("HwGenericAMC::ttcMMCMPhaseShift Found next lock after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1084:13: error: expected ‘;’ before ‘bestLockFound’
bestLockFound = true;
^
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1099:19: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:12:20: note: in expansion of macro ‘LOG4CPLUS_DEBUG’
#define DEBUG(MSG) LOG4CPLUS_DEBUG(m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1098:13: note: in expansion of macro ‘DEBUG’
DEBUG("HwGenericAMC::ttcMMCMPhaseShift Found next lock after " << i+1 << " shifts:" +
```
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
```bash
cd $BUILD_HOME/cmsgemos
source setup/etc/profile.d/gemdaqenv.sh
make clean
make gempython
```
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Missing `;` characters.
Additionally, the compiler doesn't seem to accept multiline expressions of the form:
https://github.com/cms-gem-daq-project/cmsgemos/blob/b3bfbe89a37b0095e0224be77681219067df5eef/gemhardware/src/common/HwGenericAMC.cc#L974-L978
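A minimal sketch of why those lines fail to parse, and how keeping the whole message in one `<<` chain avoids it. `shiftMessage` is a hypothetical stand-in for the macro argument, not actual cmsgemos code:

```cpp
#include <sstream>
#include <string>

// Sketch of the failing pattern (simplified from the build log):
//   DEBUG("found after " << i+1 << " shifts:" +   // line ends with '+'
//         << " bad locks " << nBadLocks);         // next token is '<<'
// The trailing '+' is left without a right-hand operand, so the compiler
// reports "expected primary-expression before '<<' token". Streaming every
// piece with '<<' keeps the expression well-formed across line breaks:
std::string shiftMessage(int i, int nBadLocks) {
    std::ostringstream msg;
    msg << "found after " << i + 1 << " shifts:"
        << " bad locks " << nBadLocks;
    return msg.str();
}
```

The same consistency fix applies to the `WARN` and `INFO` call sites flagged in the log, alongside restoring the missing `;` characters.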
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Prevents building pip package or rpm.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: b3bfbe89a37b0095e0224be77681219067df5eef
* Shell used: `zsh`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Bug Report: Error encountered when building gempython - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
Build error encountered when trying to build `cmsgemos_gempython`
### Types of issue
<!--- Proposed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [X] Bug report (report an issue with the code)
- [ ] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Should build without issue.
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
Received following build error:
```bash
g++ -g -O2 -Wall -fPIC -fno-omit-frame-pointer -DGIT_VERSION=\"v0.3.2-78-gb3bfbe8-dirty\" -DGEMDEVELOPER=\"dorney\" -std=c++1y -std=gnu++1y -DOS_VERSION_CODE=199168 -Dx86_64_centos -Dlinux -DLITTLE_ENDIAN__ -I/opt/xdaq/config -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/linux -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include -I/opt/xdaq/include -I/usr/include/python2.7 -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include -I/opt/cactus/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/x86_64_centos/include -I/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/x86_64_centos/include/linux -I/opt/xdaq/include -I/opt/xdaq/include/linux -c -o /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/linux/x86_64_centos/HwGenericAMC.o /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc: In member function ‘void gem::hw::HwGenericAMC::ttcMMCMPhaseShift(bool, bool, bool)’:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:975:19: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:12:20: note: in expansion of macro ‘LOG4CPLUS_DEBUG’
#define DEBUG(MSG) LOG4CPLUS_DEBUG(m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:974:13: note: in expansion of macro ‘DEBUG’
DEBUG("HwGenericAMC::ttcMMCMPhaseShift 500 unlocks found after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:997:15: error: expected ‘;’ before ‘getGEMHwInterface’
getGEMHwInterface().getNode("GEM_AMC.TTC.CTRL.PA_MANUAL_SHIFT_DIR").write(0);
^
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1023:18: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:14:20: note: in expansion of macro ‘LOG4CPLUS_WARN’
#define WARN( MSG) LOG4CPLUS_WARN( m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1022:13: note: in expansion of macro ‘WARN’
WARN("HwGenericAMC::ttcMMCMPhaseShift Unexpected unlock after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1032:20: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:13:20: note: in expansion of macro ‘LOG4CPLUS_INFO’
#define INFO( MSG) LOG4CPLUS_INFO( m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1031:15: note: in expansion of macro ‘INFO’
INFO("HwGenericAMC::ttcMMCMPhaseShift Found next lock after " << i+1 << " shifts:" +
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1084:13: error: expected ‘;’ before ‘bestLockFound’
bestLockFound = true;
^
In file included from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:8:0,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/GEMHwDevice.h:24,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/include/gem/hw/HwGenericAMC.h:6,
from /afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1:
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1099:19: error: expected primary-expression before ‘<<’ token
<< " bad locks " << nBadLocks
^
/opt/xdaq/include/log4cplus/loggingmacros.h:216:31: note: in definition of macro ‘LOG4CPLUS_MACRO_BODY’
_log4cplus_buf << logEvent; \
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemutils/include/gem/utils/GEMLogging.h:12:20: note: in expansion of macro ‘LOG4CPLUS_DEBUG’
#define DEBUG(MSG) LOG4CPLUS_DEBUG(m_gemLogger, MSG)
^
/afs/cern.ch/user/d/dorney/scratch0/CMS_GEM/CMS_GEM_DAQ/cmsgemos/gemhardware/src/common/HwGenericAMC.cc:1098:13: note: in expansion of macro ‘DEBUG’
DEBUG("HwGenericAMC::ttcMMCMPhaseShift Found next lock after " << i+1 << " shifts:" +
```
### Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
```bash
cd $BUILD_HOME/cmsgemos
source setup/etc/profile.d/gemdaqenv.sh
make clean
make gempython
```
## Possible Solution (for bugs)
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
<!--- or ideas how to implement the addition or change -->
Missing `;` characters.
Additionally, doesn't seem to recognize multiline expressions in the form of:
https://github.com/cms-gem-daq-project/cmsgemos/blob/b3bfbe89a37b0095e0224be77681219067df5eef/gemhardware/src/common/HwGenericAMC.cc#L974-L978
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Prevents building pip package or rpm.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: b3bfbe89a37b0095e0224be77681219067df5eef
* Shell used: `zsh`
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| priority | bug report error encountered when building gempython brief summary of issue build error encountered when trying to build cmsgemos gempython types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior should build without issue current behavior received following build error bash g g wall fpic fno omit frame pointer dgit version dirty dgemdeveloper dorney std c std gnu dos version code centos dlinux dlittle endian i opt xdaq config i afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include linux i afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include i opt xdaq include i usr include i afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include i afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include i opt cactus include i afs cern ch user d dorney cms gem cms gem daq centos include i afs cern ch user d dorney cms gem cms gem daq centos include linux i opt xdaq include i opt xdaq include linux c o afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src linux centos hwgenericamc o afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc in file included from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw gemhwdevice h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw hwgenericamc h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc in member function ‘void gem hw hwgenericamc ttcmmcmphaseshift bool bool bool ’ afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected primary expression before ‘ ’ token 
bad locks nbadlocks opt xdaq include loggingmacros h note in definition of macro ‘ macro body’ buf logevent afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h note in expansion of macro ‘ debug’ define debug msg debug m gemlogger msg afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc note in expansion of macro ‘debug’ debug hwgenericamc ttcmmcmphaseshift unlocks found after i shifts afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected ‘ ’ before ‘getgemhwinterface’ getgemhwinterface getnode gem amc ttc ctrl pa manual shift dir write in file included from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw gemhwdevice h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw hwgenericamc h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected primary expression before ‘ ’ token bad locks nbadlocks opt xdaq include loggingmacros h note in definition of macro ‘ macro body’ buf logevent afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h note in expansion of macro ‘ warn’ define warn msg warn m gemlogger msg afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc note in expansion of macro ‘warn’ warn hwgenericamc ttcmmcmphaseshift unexpected unlock after i shifts afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected primary expression before ‘ ’ token bad locks nbadlocks opt xdaq include loggingmacros h note in definition of macro ‘ macro body’ buf logevent afs cern ch user d dorney 
cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h note in expansion of macro ‘ info’ define info msg info m gemlogger msg afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc note in expansion of macro ‘info’ info hwgenericamc ttcmmcmphaseshift found next lock after i shifts afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected ‘ ’ before ‘bestlockfound’ bestlockfound true in file included from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw gemhwdevice h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware include gem hw hwgenericamc h from afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc error expected primary expression before ‘ ’ token bad locks nbadlocks opt xdaq include loggingmacros h note in definition of macro ‘ macro body’ buf logevent afs cern ch user d dorney cms gem cms gem daq cmsgemos gemutils include gem utils gemlogging h note in expansion of macro ‘ debug’ define debug msg debug m gemlogger msg afs cern ch user d dorney cms gem cms gem daq cmsgemos gemhardware src common hwgenericamc cc note in expansion of macro ‘debug’ debug hwgenericamc ttcmmcmphaseshift found next lock after i shifts steps to reproduce for bugs bash cd build home cmsgemos source setup etc profile d gemdaqenv sh make clean make gempython possible solution for bugs missing characters additionally doesn t seem to recognize multiline expressions in the form of context prevents building pip package or rpm your environment version used shell used zsh | 1 |
704,990 | 24,217,562,379 | IssuesEvent | 2022-09-26 08:10:16 | dodona-edu/dodona | https://api.github.com/repos/dodona-edu/dodona | closed | LTI authentication error in Ufora | bug high priority | When I want to make an LTI link to Dodona from Ufora, I get this authentication issue:

Veerle Fack pointed me on this issue. She already made links some two weeks ago (still worked back then), the links still worked some days ago, but she also noticed last Friday or last Monday that the links do not work anymore (she gets some LTI-related error). So probably also the same underlying LTI-authentication issue. | 1.0 | LTI authentication error in Ufora - When I want to make an LTI link to Dodona from Ufora, I get this authentication issue:

Veerle Fack pointed me on this issue. She already made links some two weeks ago (still worked back then), the links still worked some days ago, but she also noticed last Friday or last Monday that the links do not work anymore (she gets some LTI-related error). So probably also the same underlying LTI-authentication issue. | priority | lti authentication error in ufora when i want to make an lti link to dodona from ufora i get this authentication issue veerle fack pointed me on this issue she already made links some two weeks ago still worked back then the links still worked some days ago but she also noticed last friday or last monday that the links do not work anymore she gets some lti related error so probably also the same underlying lti authentication issue | 1 |
235,849 | 7,743,316,781 | IssuesEvent | 2018-05-29 12:28:36 | bradnoble/msc-vuejs | https://api.github.com/repos/bradnoble/msc-vuejs | closed | CSV has weird characters in download title | Component: Members Priority: High Status: To Do Type: Bug | Just downloaded CSV and it has weird numbers afterward. Might be the time, but it doesn't match my current time. It's 8:10 PM on 5/28 EDT. Download name:
MSC-Membership-List-2018-05-29T00_08_57.234Z.csv | 1.0 | CSV has weird characters in download title - Just downloaded CSV and it has weird numbers afterward. Might be the time, but it doesn't match my current time. It's 8:10 PM on 5/28 EDT. Download name:
MSC-Membership-List-2018-05-29T00_08_57.234Z.csv | priority | csv has weird characters in download title just downloaded csv and it has weird numbers afterward might be the time but it doesn t match my current time it s pm on edt download name msc membership list csv | 1 |
462,992 | 13,257,658,905 | IssuesEvent | 2020-08-20 14:22:50 | zeebe-io/zeebe | https://api.github.com/repos/zeebe-io/zeebe | closed | RaftServer is not ready until it receives a complete snapshot or an entry | Impact: Availability Priority: High Scope: broker Status: Ready Type: Maintenance | **Description**
While investigating issue #4784, we observed that if the snapshot contains many chunks, it can take too long to complete the replication. If a broker is restarting and if it has to catchup its log, then most probably the leader will be sending the snapshot instead of the log entries. However, RaftServer:start is completed only after the node has received a snapshot and/or applied atleast one entry.
https://github.com/zeebe-io/zeebe/issues/4784#issuecomment-653498147
In the following case, the Atomix startup is completed only after a snapshot containing 20000 chunks was replicated, which took 1384303 ms
```
11:44:18.308 [] [raft-server-2-raft-partition-partition-1] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.DbSnapshotStore - Committed new snapshot DbSnapshot{directory=/home/deepthi/work/src/github.com/zeebe-io/zeebe/data-zeebe-2/raft-partition/partitions/1/snapshots/313699-313-1592808460016, metadata=DbSnapshotMetadata{index=313699, term=313, timestamp=2020-06-22 08:47:40,016}}
11:44:18.549 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-2 [7/11]: cluster services started in 1384303 ms
```
Is it really necessary to wait for the snapshot or an entry to mark the server as ready? Isn't it enough to complete the join process and commit the configuration? It would be good to investigate this and speed up the restart if possible.
| 1.0 | RaftServer is not ready until it receives a complete snapshot or an entry - **Description**
While investigating issue #4784, we observed that if the snapshot contains many chunks, it can take too long to complete the replication. If a broker is restarting and if it has to catchup its log, then most probably the leader will be sending the snapshot instead of the log entries. However, RaftServer:start is completed only after the node has received a snapshot and/or applied atleast one entry.
https://github.com/zeebe-io/zeebe/issues/4784#issuecomment-653498147
In the following case, the Atomix startup is completed only after a snapshot containing 20000 chunks was replicated, which took 1384303 ms
```
11:44:18.308 [] [raft-server-2-raft-partition-partition-1] DEBUG io.zeebe.broker.clustering.atomix.storage.snapshot.DbSnapshotStore - Committed new snapshot DbSnapshot{directory=/home/deepthi/work/src/github.com/zeebe-io/zeebe/data-zeebe-2/raft-partition/partitions/1/snapshots/313699-313-1592808460016, metadata=DbSnapshotMetadata{index=313699, term=313, timestamp=2020-06-22 08:47:40,016}}
11:44:18.549 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-2 [7/11]: cluster services started in 1384303 ms
```
Is it really necessary to wait for the snapshot or an entry to mark the server as ready? Isn't it enough to complete the join process and commit the configuration? It would be good to investigate this and speed up the restart if possible.
| priority | raftserver is not ready until it receives a complete snapshot or an entry description while investigating issue we observed that if the snapshot contains many chunks it can take too long to complete the replication if a broker is restarting and if it has to catchup its log then most probably the leader will be sending the snapshot instead of the log entries however raftserver start is completed only after the node has received a snapshot and or applied atleast one entry in the following case the atomix startup is completed only after a snapshot containing chunks was replicated which took ms debug io zeebe broker clustering atomix storage snapshot dbsnapshotstore committed new snapshot dbsnapshot directory home deepthi work src github com zeebe io zeebe data zeebe raft partition partitions snapshots metadata dbsnapshotmetadata index term timestamp debug io zeebe broker system bootstrap broker cluster services started in ms is it really necessary to wait for the snapshot or an entry to mark the server as ready isn t it enough to complete the join process and commit the configuration it would be good to investigate this and speed up the restart if possible | 1 |
711,348 | 24,459,864,546 | IssuesEvent | 2022-10-07 10:07:05 | HiAvatar/backend | https://api.github.com/repos/HiAvatar/backend | closed | 영상 생성 로직에서 FileNotFoundException 예외 발생 | Priority: High | ### Description
영상 생성 로직에서 FileNotFoundException 예외 발생한다.
<br>
### Todo List
- [x] write to do.
<br>
### Conclusion
flask 서버 관련 Dockerfile에서 ffmpeg를 설치하는 코드를 주석처리했기 때문..
음성 파일은 생성되고 영상 파일이 생성 안됐던 것은 영상 파일을 생성하는 로직에 ffmpeg 명령어를 다루는 코드가 포함되어 있었다!
Dockerfile의 일부분을 다음과 같이 수정했다.
```
RUN apt-get update
# RUN apt-get upgrade
RUN apt-get install -y ffmpeg
# upgrade 부분에서 abort되어 주석으로 막아주었음
```
정리하자면 영상 파일을 생성하는 파이썬 로직에서 ffmpeg에 대한 명령어를 처리하지 않고 이후의 명령어를 실행함으로써 video_id는 반환이 됐지만 정작 로컬 디렉토리인 /result에는 파일이 생성되지 않았던 것이다.
| 1.0 | 영상 생성 로직에서 FileNotFoundException 예외 발생 - ### Description
영상 생성 로직에서 FileNotFoundException 예외 발생한다.
<br>
### Todo List
- [x] write to do.
<br>
### Conclusion
flask 서버 관련 Dockerfile에서 ffmpeg를 설치하는 코드를 주석처리했기 때문..
음성 파일은 생성되고 영상 파일이 생성 안됐던 것은 영상 파일을 생성하는 로직에 ffmpeg 명령어를 다루는 코드가 포함되어 있었다!
Dockerfile의 일부분을 다음과 같이 수정했다.
```
RUN apt-get update
# RUN apt-get upgrade
RUN apt-get install -y ffmpeg
# upgrade 부분에서 abort되어 주석으로 막아주었음
```
정리하자면 영상 파일을 생성하는 파이썬 로직에서 ffmpeg에 대한 명령어를 처리하지 않고 이후의 명령어를 실행함으로써 video_id는 반환이 됐지만 정작 로컬 디렉토리인 /result에는 파일이 생성되지 않았던 것이다.
| priority | 영상 생성 로직에서 filenotfoundexception 예외 발생 description 영상 생성 로직에서 filenotfoundexception 예외 발생한다 todo list write to do conclusion flask 서버 관련 dockerfile에서 ffmpeg를 설치하는 코드를 주석처리했기 때문 음성 파일은 생성되고 영상 파일이 생성 안됐던 것은 영상 파일을 생성하는 로직에 ffmpeg 명령어를 다루는 코드가 포함되어 있었다 dockerfile의 일부분을 다음과 같이 수정했다 run apt get update run apt get upgrade run apt get install y ffmpeg upgrade 부분에서 abort되어 주석으로 막아주었음 정리하자면 영상 파일을 생성하는 파이썬 로직에서 ffmpeg에 대한 명령어를 처리하지 않고 이후의 명령어를 실행함으로써 video id는 반환이 됐지만 정작 로컬 디렉토리인 result에는 파일이 생성되지 않았던 것이다 | 1 |
586,471 | 17,578,620,026 | IssuesEvent | 2021-08-16 02:11:38 | woowa-techcamp-2021/store-6 | https://api.github.com/repos/woowa-techcamp-2021/store-6 | closed | [BE] 백엔드 테스트코드 셋업 | setup high priority | ## :hammer: 기능 설명
백엔드 테스트코드를 셋업합니다.
## 📑 완료 조건
- [x] 테스트 환경을 구축하고 간단한 테스트 코드를 작성해야 합니다.
## :thought_balloon: 관련 Backlog
> [대분류] - [중분류] - [Backlog 이름]
[BE] 기타 - 테스트 - BE 테스트코드 셋업 | 1.0 | [BE] 백엔드 테스트코드 셋업 - ## :hammer: 기능 설명
백엔드 테스트코드를 셋업합니다.
## 📑 완료 조건
- [x] 테스트 환경을 구축하고 간단한 테스트 코드를 작성해야 합니다.
## :thought_balloon: 관련 Backlog
> [대분류] - [중분류] - [Backlog 이름]
[BE] 기타 - 테스트 - BE 테스트코드 셋업 | priority | 백엔드 테스트코드 셋업 hammer 기능 설명 백엔드 테스트코드를 셋업합니다 📑 완료 조건 테스트 환경을 구축하고 간단한 테스트 코드를 작성해야 합니다 thought balloon 관련 backlog 기타 테스트 be 테스트코드 셋업 | 1 |
2,554 | 2,528,483,297 | IssuesEvent | 2015-01-22 03:43:03 | jonesdy/StockSim | https://api.github.com/repos/jonesdy/StockSim | closed | Allow users to join games | priority:high status:active type:feature | Might be a good idea.
Just add a "join" button for public games on their game page. | 1.0 | Allow users to join games - Might be a good idea.
Just add a "join" button for public games on their game page. | priority | allow users to join games might be a good idea just add a join button for public games on their game page | 1 |
439,977 | 12,691,421,291 | IssuesEvent | 2020-06-21 16:57:09 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | stream edit: Make <Enter> submit the form to add more subscribers | area: keyboard UI area: stream settings help wanted priority: high | When editing an existing stream, since #14470, we have this a user-pills based UI for adding new subscribers, which is nice. However, it's still a bit awkward, in that after you've generated a few pills, you need to tab to the `Add` button and then hit enter to submit the form.
We should add a key handler to make `Enter` submit the form as though one had clicked the "Add" button. | 1.0 | stream edit: Make <Enter> submit the form to add more subscribers - When editing an existing stream, since #14470, we have this a user-pills based UI for adding new subscribers, which is nice. However, it's still a bit awkward, in that after you've generated a few pills, you need to tab to the `Add` button and then hit enter to submit the form.
We should add a key handler to make `Enter` submit the form as though one had clicked the "Add" button. | priority | stream edit make submit the form to add more subscribers when editing an existing stream since we have this a user pills based ui for adding new subscribers which is nice however it s still a bit awkward in that after you ve generated a few pills you need to tab to the add button and then hit enter to submit the form we should add a key handler to make enter submit the form as though one had clicked the add button | 1 |
139,800 | 5,390,170,283 | IssuesEvent | 2017-02-25 11:16:53 | cuckoosandbox/cuckoo | https://api.github.com/repos/cuckoosandbox/cuckoo | closed | Specifying the snapshot in virtualbox.conf breaks the host-agent connection string | Bug (to verify) High Priority | I updated the snapshot switch in the virtualbox.conf to snaphot = cuckootest and received the following error. After commenting that line out, the error went away. Looks like the snaphot name was appended onto the end of the host URL string for some reason.
2016-06-23 13:15:16,363 [lib.cuckoo.core.scheduler] ERROR: Failure in AnalysisManager.run
Traceback (most recent call last):
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 496, in run
self.launch_analysis()
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 382, in launch_analysis
self.guest_manage(options)
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 289, in guest_manage
self.guest_manager.start_analysis(options, monitor)
File "/home/malware/cuckoo/lib/cuckoo/core/guest.py", line 372, in start_analysis
r = self.get("/")
File "/home/malware/cuckoo/lib/cuckoo/core/guest.py", line 281, in get
return requests.get(url, _args, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/api.py", line 71, in get
return request('get', url, params=params, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/api.py", line 57, in request
return session.request(method=method, url=url, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/sessions.py", line 475, in request
resp = self.send(prep, *_send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/sessions.py", line 585, in send
r = adapter.send(request, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/adapters.py", line 467, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='192.168.56.101%0Asnapshot%20=%20cuckootest', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc0edbdedd0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
| 1.0 | Specifying the snapshot in virtualbox.conf breaks the host-agent connection string - I updated the snapshot switch in the virtualbox.conf to snaphot = cuckootest and received the following error. After commenting that line out, the error went away. Looks like the snaphot name was appended onto the end of the host URL string for some reason.
2016-06-23 13:15:16,363 [lib.cuckoo.core.scheduler] ERROR: Failure in AnalysisManager.run
Traceback (most recent call last):
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 496, in run
self.launch_analysis()
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 382, in launch_analysis
self.guest_manage(options)
File "/home/malware/cuckoo/lib/cuckoo/core/scheduler.py", line 289, in guest_manage
self.guest_manager.start_analysis(options, monitor)
File "/home/malware/cuckoo/lib/cuckoo/core/guest.py", line 372, in start_analysis
r = self.get("/")
File "/home/malware/cuckoo/lib/cuckoo/core/guest.py", line 281, in get
return requests.get(url, _args, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/api.py", line 71, in get
return request('get', url, params=params, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/api.py", line 57, in request
return session.request(method=method, url=url, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/sessions.py", line 475, in request
resp = self.send(prep, *_send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/sessions.py", line 585, in send
r = adapter.send(request, *_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests-2.10.0-py2.7.egg/requests/adapters.py", line 467, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='192.168.56.101%0Asnapshot%20=%20cuckootest', port=8000): Max retries exceeded with url: / (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fc0edbdedd0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
| priority | specifying the snapshot in virtualbox conf breaks the host agent connection string i updated the snapshot switch in the virtualbox conf to snaphot cuckootest and received the following error after commenting that line out the error went away looks like the snaphot name was appended onto the end of the host url string for some reason error failure in analysismanager run traceback most recent call last file home malware cuckoo lib cuckoo core scheduler py line in run self launch analysis file home malware cuckoo lib cuckoo core scheduler py line in launch analysis self guest manage options file home malware cuckoo lib cuckoo core scheduler py line in guest manage self guest manager start analysis options monitor file home malware cuckoo lib cuckoo core guest py line in start analysis r self get file home malware cuckoo lib cuckoo core guest py line in get return requests get url args kwargs file usr local lib dist packages requests egg requests api py line in get return request get url params params kwargs file usr local lib dist packages requests egg requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests egg requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests egg requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests egg requests adapters py line in send raise connectionerror e request request connectionerror httpconnectionpool host port max retries exceeded with url caused by newconnectionerror failed to establish a new connection name or service not known | 1 |
310,634 | 9,522,190,414 | IssuesEvent | 2019-04-27 05:51:28 | AugurProject/augur | https://api.github.com/repos/AugurProject/augur | closed | Acct Summary crash | Bug Priority: High | Steps to reproduce....
1) report on an open reporting market...after hitting submit on mm, I immediately click on the Acct sum pg (before reporting confirms) and console blows up...
see screenshot:
 | 1.0 | Acct Summary crash - Steps to reproduce....
1) report on an open reporting market...after hitting submit on mm, I immediately click on the Acct sum pg (before reporting confirms) and console blows up...
see screenshot:
 | priority | acct summary crash steps to reproduce report on an open reporting market after hitting submit on mm i immediately click on the acct sum pg before reporting confirms and console blows up see screenshot | 1 |
603,028 | 18,521,696,959 | IssuesEvent | 2021-10-20 15:38:36 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | Downloaded zip file is broken again | Bug Priority - High Features Ready for Development | ### Descriptive summary
As a user, I would expect when I click download zip that my download starts immediately, even if I don't know how long it will take or how large the file will be.
I would also expect my downloaded file to open and contain the assets and metadata.
### Expected behavior
The user can stream the download as it's generated but does not need to know the final size (possible enhancement)
Once the download has finished, the zip is able to be opened and contains:
- For a regular work: The asset files and metadata
- For a compound work: The asset files of all children and metadata of all children (in one csv)
### Related work
#1449 QA for all of download
#1234 an additional version of download that needs to work
#1691 PR that broke this functionality
### Accessibility Concerns
| 1.0 | Downloaded zip file is broken again - ### Descriptive summary
As a user, I would expect when I click download zip that my download starts immediately, even if I don't know how long it will take or how large the file will be.
I would also expect my downloaded file to open and contain the assets and metadata.
### Expected behavior
The user can stream the download as it's generated but does not need to know the final size (possible enhancement)
Once the download has finished, the zip is able to be opened and contains:
- For a regular work: The asset files and metadata
- For a compound work: The asset files of all children and metadata of all children (in one csv)
### Related work
#1449 QA for all of download
#1234 an additional version of download that needs to work
#1691 PR that broke this functionality
### Accessibility Concerns
| priority | downloaded zip file is broken again descriptive summary as a user i would expect when i click download zip that my download starts immediately even if i don t know how long it will take or how large the file will be i would also expect my downloaded file to open and contain the assets and metadata expected behavior the user can stream the download as it s generated but does not need to know the final size possible enhancement once the download has finished the zip is able to be opened and contains for a regular work the asset files and metadata for a compound work the asset files of all children and metadata of all children in one csv related work qa for all of download an additional version of download that needs to work pr that broke this functionality accessibility concerns | 1 |
494,578 | 14,260,752,736 | IssuesEvent | 2020-11-20 10:16:59 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.bing.com - see bug description | browser-focus-geckoview engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62162 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.bing.com/images/search?q=fish
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Bing is seeing me and recalling a search I did a week ago. FFFocus is opening to Bing with this same search and results before I touch anything. I don't have any MS products, so it must be an FFF security issue.
**Steps to Reproduce**:
I have "closed&erased" and restarted. FFF still recalls the one search and it displays as soon as FFF is started and goes to bing homepage.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.bing.com - see bug description - <!-- @browser: Firefox Mobile 81.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:81.0) Gecko/81.0 Firefox/81.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/62162 -->
<!-- @extra_labels: browser-focus-geckoview -->
**URL**: https://www.bing.com/images/search?q=fish
**Browser / Version**: Firefox Mobile 81.0
**Operating System**: Android 7.0
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Bing is seeing me and recalling a search I did a week ago. FFFocus is opening to Bing with this same search and results before I touch anything. I don't have any MS products, so it must be an FFF security issue.
**Steps to Reproduce**:
I have "closed&erased" and restarted. FFF still recalls the one search and it displays as soon as FFF is started and goes to bing homepage.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description bing is seeing me and recalling a search i did a week ago fffocus is opening to bing with this same seach and results before i touch anything i dont have any ms products so it must be fff security issue issue steps to reproduce i have closed erased and restarted fff still recalls the one search and it displays as soon as fff is started and goes to bing homepage browser configuration none from with ❤️ | 1 |
446,715 | 12,877,298,965 | IssuesEvent | 2020-07-11 10:13:09 | 3YOURMIND/kotti | https://api.github.com/repos/3YOURMIND/kotti | opened | KtFormController(Object/List) should support inheritable props (isDisabled/hideValidation/etc) | priority:4-high type:enhancement | It will be the use-case that a controller object is entirely disabled based on another field for example, in which case, it would be annoying to have to pass it for every context consumer of the controller object.
**Additional context**
May need to support isOptional as an inheritable prop both **on the form** and the controllers or at least on the controller.
if the fields of the controller object are not required, the controller context should provide the isOptional prop: also a common use-case.
there is probably no use-case to make an entire form required or optional but for consistency of the inheritableProps type we may need this. | 1.0 | KtFormController(Object/List) should support inheritable props (isDisabled/hideValidation/etc) - It will be the use-case that a controller object is entirely disabled based on another field for example, in which case, it would be annoying to have to pass it for every context consumer of the controller object.
**Additional context**
May need to support isOptional as an inheritable prop both **on the form** and the controllers or at least on the controller.
if the fields of the controller object are not required, the controller context should provide the isOptional prop: also a common use-case.
there is probably no use-case to make an entire form required or optional but for consistency of the inheritableProps type we may need this. | priority | ktformcontroller object list should support inheritable props isdisabled hidevalidation etc it will be the use case that a controller object is entirely disabled based on another field for example in which case it would be annoying to have to pass it for every context consumer of the controller object additional context may need to support isoptional as an inheritable prop both on the form and the controllers or at least on the controller if the fields of the controller object are not required the controller context should provide the isoptional prop also a common use case there is probably no use case to make an entire form required or optional but for consistency of the inheritableprops type we may need this | 1 |
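The inheritance this row asks for amounts to a lookup chain — field prop first, then controller context, then form default. A language-agnostic sketch in Python (the `resolve_prop` name and the dict-based scopes are illustrative only; the real implementation would live in Kotti's Vue provide/inject layer):

```python
def resolve_prop(name, field_props, controller_props, form_props, default=None):
    """Return the first explicitly-set value along the chain
    field -> controller -> form, mimicking inheritable props."""
    for scope in (field_props, controller_props, form_props):
        if name in scope and scope[name] is not None:
            return scope[name]
    return default


form = {"isDisabled": False}
controller = {"isDisabled": True, "isOptional": True}  # e.g. disabled based on another field
field = {}

assert resolve_prop("isDisabled", field, controller, form) is True
assert resolve_prop("isOptional", field, controller, form, default=False) is True
```

With this shape, disabling the whole controller object is one assignment on the controller scope, instead of passing `isDisabled` to every context consumer.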
594,597 | 18,049,172,573 | IssuesEvent | 2021-09-19 12:43:40 | systems-cs-pub-ro/quiz-manager | https://api.github.com/repos/systems-cs-pub-ro/quiz-manager | closed | Refactor quiz-collection as a Python module | high-priority | Currently, the quiz-collection scripts are meant to be run as a single file.
We want to refactor these scripts so they can be imported as a module in
the main python script (quiz-manager)
Think how you can refactor the functions so they can be easy to understand
for someone who wants to call them from the main script (mimic an API for conversion). | 1.0 | Refactor quiz-collection as a Python module - Currently, the quiz-collection scripts are meant to be run as a single file.
We want to refactor these scripts so they can be imported as a module in
the main python script (quiz-manager)
Think how you can refactor the functions so they can be easy to understand
for someone who wants to call them from the main script (mimic an API for conversion). | priority | refactor quiz collection as a python module currently the quiz collection scripts are meant to be ran as a single file we want to refactor this scripts so they can be imported as a module in the main python script quiz manager think how you can refactor the functions so they can be easy to understand for someone who wants to call them from the main script mimic an api for conversion | 1 |
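The refactor requested in the row above is the standard `__main__`-guard pattern: the same file works both standalone and as an importable module. A minimal sketch (the module name `quiz_collection`, the `convert_quiz` function, and its input format are all hypothetical, not taken from the repository):

```python
"""quiz_collection.py -- hypothetical sketch of a script refactored into a module."""

import json
import sys


def convert_quiz(raw_text: str) -> dict:
    """Convert one raw quiz definition (question on the first line,
    one answer per following line, '*' marking correct ones) into a dict.
    The input format here is invented purely for illustration."""
    lines = [line.strip() for line in raw_text.strip().splitlines() if line.strip()]
    question, answers = lines[0], lines[1:]
    return {
        "question": question,
        "answers": [a.lstrip("* ") for a in answers],
        "correct": [i for i, a in enumerate(answers) if a.startswith("*")],
    }


def main(argv):
    """Entry point used only when the file is run directly."""
    for path in argv:
        with open(path, encoding="utf-8") as handle:
            print(json.dumps(convert_quiz(handle.read())))
    return 0


if __name__ == "__main__":  # not executed on `import quiz_collection`
    sys.exit(main(sys.argv[1:]))
```

With this shape, the main quiz-manager script can do `from quiz_collection import convert_quiz` without triggering any file I/O at import time.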
801,913 | 28,506,696,063 | IssuesEvent | 2023-04-18 22:15:15 | adrianmcastelo/monsteral-tech | https://api.github.com/repos/adrianmcastelo/monsteral-tech | opened | Aplicación | priority-high | ## Application
**High priority**
- [ ] * Since I am not quite sure which screen this fits under exactly, I am putting it here: when the application spends a long time in the background (and therefore changes state in its lifecycle), re-entering it redirects us back to the login screen, when what we actually want is for the user's session to be saved.
- [ ] * When rotating the phone, it would be worth reviewing how the screens look in landscape orientation, especially on large screens such as tablets. (Adrián) | 1.0 | Aplicación - ## Application
**High priority**
- [ ] * Since I am not quite sure which screen this fits under exactly, I am putting it here: when the application spends a long time in the background (and therefore changes state in its lifecycle), re-entering it redirects us back to the login screen, when what we actually want is for the user's session to be saved.
- [ ] * When rotating the phone, it would be worth reviewing how the screens look in landscape orientation, especially on large screens such as tablets. (Adrián) | priority | aplicación application high priority since i am not quite sure which screen this fits under exactly i am putting it here when the application spends a long time in the background and therefore changes state in its lifecycle re entering it redirects us back to the login screen when what we actually want is for the user s session to be saved when rotating the phone it would be worth reviewing how the screens look in landscape orientation especially on large screens such as tablets adrián | 1
450,093 | 12,980,469,213 | IssuesEvent | 2020-07-22 05:22:06 | wso2/docs-apim | https://api.github.com/repos/wso2/docs-apim | closed | Attach the x5c cert used for backend JWT generation to include in JWT header with x5c key | API-M-2.6.0 Priority/High | **Description:**
The configuration should be added to the documentation for issue https://github.com/wso2/product-apim/issues/8521
```
a. If you want to attach the x5c cert used for backend JWT generation to include in JWT header with
x5c key, please follow the instructions given below.
1. Navigate to the <API-M_HOME>/repository/conf/api-manager.xml file in KeyManager node.
2. Put the following property under <JWTConfigurations> section.
<EnableX5C>true</EnableX5C>
3. Restart the API Manager.
```
| 1.0 | Attach the x5c cert used for backend JWT generation to include in JWT header with x5c key - **Description:**
The configuration should be added to the documentation for issue https://github.com/wso2/product-apim/issues/8521
```
a. If you want to attach the x5c cert used for backend JWT generation to include in JWT header with
x5c key, please follow the instructions given below.
1. Navigate to the <API-M_HOME>/repository/conf/api-manager.xml file in KeyManager node.
2. Put the following property under <JWTConfigurations> section.
<EnableX5C>true</EnableX5C>
3. Restart the API Manager.
```
| priority | attach the cert used for backend jwt generation to include in jwt header with key description the configuration should be added to the documentation for issue a if you want to attach the cert used for backend jwt generation to include in jwt header with key please follow the instructions given below navigate to the repository conf api manager xml file in keymanager node put the following property under section true restart the api manager | 1 |
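Placed in context, the steps quoted in the row above amount to an `api-manager.xml` fragment roughly like the following. Only the `<EnableX5C>` element comes from the quoted instructions; the surrounding element layout and any sibling settings are abbreviated assumptions to check against the actual file on the Key Manager node:

```xml
<APIManager>
    <!-- ... other Key Manager configuration elided ... -->
    <JWTConfigurations>
        <!-- existing JWT settings remain unchanged -->
        <EnableX5C>true</EnableX5C>
    </JWTConfigurations>
</APIManager>
```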
413,768 | 12,091,986,041 | IssuesEvent | 2020-04-19 13:57:53 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | closed | on_train_end seems to get called before logging of last epoch has finished | High Priority bug help wanted | <!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
Maybe not a bug, but unexpected behavior. When using the `on_train_end` method to either upload a models latest .csv file created by TestTube to neptune or to print the last numeric channel value of a metric send to neptune, the values from the final epoch have not yet been logged. When training has finished, the last line of metrics.csv is `2020-04-02 17:23:16.029189,0.04208208369463682,30.0`, but for the outputs/uploads of `on_train_end` see code below:
#### Code sample
```
def on_epoch_end(self):
# Logging loss per epoch
train_loss_mean = np.mean(self.training_losses)
# Saves loss of final epoch for later visualization
self.final_loss = train_loss_mean
self.logger[0].experiment.log_metric('epoch/mean_absolute_loss', y=train_loss_mean, x=self.current_epoch)
self.logger[1].experiment.log({'epoch/mean_absolute_loss': train_loss_mean, 'epoch': self.current_epoch}, global_step=self.current_epoch)
self.training_losses = [] # reset for next epoch
```
```
def on_train_end(self):
save_dir = Path(self.logger[1].experiment.get_logdir()).parent/'metrics.csv'
self.logger[0].experiment.log_artifact(save_dir)
```
Last line of uploaded metrics.csv: `2020-04-02 15:27:57.044250 0.04208208404108882 29.0`
```
def on_train_end(self):
log_last = self.logger[0].experiment.get_logs()
print('Last logged values: ', log_last)
```
Output: `Last logged values: {'epoch/mean_absolute_loss': Channel(channelType='numeric', id='b00cd0e5-a427-4a3c-a10c-5033808a930e', lastX=29.0, name='epoch/mean_absolute_loss', x=29.0, y='0.04208208404108882')}`
When printing `self.final_loss` in `on_train_end` I get the correct last value though.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The `on_train_end ` method to only get called after the last values have been logged.
| 1.0 | on_train_end seems to get called before logging of last epoch has finished - <!--
### Common bugs:
1. Tensorboard not showing in Jupyter-notebook see [issue 79](https://github.com/PyTorchLightning/pytorch-lightning/issues/79).
2. PyTorch 1.1.0 vs 1.2.0 support [see FAQ](https://github.com/PyTorchLightning/pytorch-lightning#faq)
-->
## 🐛 Bug
Maybe not a bug, but unexpected behavior. When using the `on_train_end` method to either upload a models latest .csv file created by TestTube to neptune or to print the last numeric channel value of a metric send to neptune, the values from the final epoch have not yet been logged. When training has finished, the last line of metrics.csv is `2020-04-02 17:23:16.029189,0.04208208369463682,30.0`, but for the outputs/uploads of `on_train_end` see code below:
#### Code sample
```
def on_epoch_end(self):
# Logging loss per epoch
train_loss_mean = np.mean(self.training_losses)
# Saves loss of final epoch for later visualization
self.final_loss = train_loss_mean
self.logger[0].experiment.log_metric('epoch/mean_absolute_loss', y=train_loss_mean, x=self.current_epoch)
self.logger[1].experiment.log({'epoch/mean_absolute_loss': train_loss_mean, 'epoch': self.current_epoch}, global_step=self.current_epoch)
self.training_losses = [] # reset for next epoch
```
```
def on_train_end(self):
save_dir = Path(self.logger[1].experiment.get_logdir()).parent/'metrics.csv'
self.logger[0].experiment.log_artifact(save_dir)
```
Last line of uploaded metrics.csv: `2020-04-02 15:27:57.044250 0.04208208404108882 29.0`
```
def on_train_end(self):
log_last = self.logger[0].experiment.get_logs()
print('Last logged values: ', log_last)
```
Output: `Last logged values: {'epoch/mean_absolute_loss': Channel(channelType='numeric', id='b00cd0e5-a427-4a3c-a10c-5033808a930e', lastX=29.0, name='epoch/mean_absolute_loss', x=29.0, y='0.04208208404108882')}`
When printing `self.final_loss` in `on_train_end` I get the correct last value though.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The `on_train_end ` method to only get called after the last values have been logged.
| priority | on train end seems to get called before logging of last epoch has finished common bugs tensorboard not showing in jupyter notebook see pytorch vs support 🐛 bug maybe not a bug but unexpected behavior when using the on train end method to either upload a models latest csv file created by testtube to neptune or to print the last numeric channel value of a metric send to neptune the values from the final epoch have not yet been logged when training has finished the last line of metrics csv is but for the outputs uploads of on train end see code below code sample def on epoch end self logging loss per epoch train loss mean np mean self training losses saves loss of final epoch for later visualization self final loss train loss mean self logger experiment log metric epoch mean absolute loss y train loss mean x self current epoch self logger experiment log epoch mean absolute loss train loss mean epoch self current epoch global step self current epoch self training losses reset for next epoch def on train end self save dir path self logger experiment get logdir parent metrics csv self logger experiment log artifact save dir last line of uploaded metrics csv def on train end self log last self logger experiment get logs print last logged values log last output last logged values epoch mean absolute loss channel channeltype numeric id lastx name epoch mean absolute loss x y when printing self final loss in on train end i get the correct last value though expected behavior the on train end method to only get called after the last values have been logged | 1 |
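The report above already contains the reliable workaround: a value cached on the module in `on_epoch_end` (like `self.final_loss`) is current in `on_train_end`, whereas anything routed through a buffering logger may still be one epoch behind. A framework-free sketch of that ordering — `BufferedLogger` and `Module` here are invented stand-ins, not PyTorch Lightning, TestTube, or Neptune APIs:

```python
class BufferedLogger:
    """Stand-in for an asynchronous logger: log() lands in a buffer,
    and values only reach 'disk' when flush() runs."""

    def __init__(self):
        self.buffer, self.disk = [], []

    def log(self, value):
        self.buffer.append(value)

    def flush(self):
        self.disk.extend(self.buffer)
        self.buffer.clear()


class Module:
    """Mimics the hook ordering from the bug report."""

    def __init__(self, logger):
        self.logger = logger
        self.final_loss = None

    def on_epoch_end(self, epoch, loss):
        self.final_loss = loss          # cached attribute: always current
        self.logger.log((epoch, loss))  # buffered: may not be flushed yet

    def on_train_end(self):
        logged = self.logger.disk[-1][1] if self.logger.disk else None
        return self.final_loss, logged


logger = BufferedLogger()
module = Module(logger)
for epoch, loss in enumerate([0.5, 0.1, 0.042]):
    module.on_epoch_end(epoch, loss)
    if epoch < 2:
        logger.flush()  # the flush for the final epoch never happens

cached, logged = module.on_train_end()
assert cached == 0.042  # the in-memory value sees the last epoch
assert logged == 0.1    # the logger output is one epoch behind
```

Reading the cached attribute (or uploading the metrics artifact only after the trainer has fully returned) sidesteps the race instead of depending on when the logger writes metrics.csv.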
1,547 | 2,515,342,035 | IssuesEvent | 2015-01-15 17:58:02 | wordpress-mobile/WordPress-iOS-Editor | https://api.github.com/repos/wordpress-mobile/WordPress-iOS-Editor | closed | Cursor set under keyboard | bug High Priority | If you type a single paragraph that extends beyond the visible text area, the cursor is not focused. Reported by a user and reproduced on iphone 5s, ios7
Repro
- create new post
- type a single, long paragraph that extends beyond the text area view
- notice that post does not scroll to focus where cursor is
- https://lookback.io/watch/WzmTtGfZcAPjXW3q4
note: may be resolved by fixes for https://github.com/wordpress-mobile/WordPress-iOS-Editor/issues/301, which I'm also still reproducing - will test both with next beta release
| 1.0 | Cursor set under keyboard - If you type a single paragraph that extends beyond the visible text area, the cursor is not focused. Reported by a user and reproduced on iphone 5s, ios7
Repro
- create new post
- type a single, long paragraph that extends beyond the text area view
- notice that post does not scroll to focus where cursor is
- https://lookback.io/watch/WzmTtGfZcAPjXW3q4
note: may be resolved by fixes for https://github.com/wordpress-mobile/WordPress-iOS-Editor/issues/301, which I'm also still reproducing - will test both with next beta release
| priority | cursor set under keyboard if you type a single paragraph that extends beyond the visible text area the cursor is not focused reported by a user and reproduced on iphone repro create new post type a single long paragraph that extends beyond the text area view notice that post does not scroll to focus where cursor is note may be resolved by fixes for which i m also still reproducing will test both with next beta release | 1 |
557,723 | 16,517,112,842 | IssuesEvent | 2021-05-26 10:51:44 | canonical-web-and-design/ubuntu.com | https://api.github.com/repos/canonical-web-and-design/ubuntu.com | closed | Microcerts view on small screens does not work | Priority: High | When viewing the microcerts on a small screen the modified mobile table pattern does not work.
 | 1.0 | Microcerts view on small screens does not work - When viewing the microcerts on a small screen the modified mobile table pattern does not work.
 | priority | microcerts view on small screens does not work when viewing the microcerts on a small screen the modified mobile table pattern does not work | 1 |
587,279 | 17,611,980,085 | IssuesEvent | 2021-08-18 03:27:38 | NCC-CNC/wheretowork | https://api.github.com/repos/NCC-CNC/wheretowork | closed | themes vs includes - shutting off both causes crash? | bug high priority | I've left it running for a while so it may resolve, but it looks like shutting off all themes and includes causes a crash. Obviously not a real scenario, but I'm trying to break it...
Maybe this is a coincidence, but things seem otherwise stable and I've tried it 2X. Perhaps there needs to be a stop and a warning instead? | 1.0 | themes vs includes - shutting off both causes crash? - I've left it running for a while so it may resolve, but it looks like shutting off all themes and includes causes a crash. Obviously not a real scenario, but I'm trying to break it...
Maybe this is a coincidence, but things seem otherwise stable and I've tried it 2X. Perhaps there needs to be a stop and a warning instead? | priority | themes vs includes shutting off both causes crash i ve left it running for a while so may resolve but it looks like shutting off all themes and includes causes a crash obviously not a real scenario but i m trying to break it maybe this is a coincidence but things seem otherwise stable and i ve tried it perhaps there needs to be a stop and a warning instead | 1 |
224,857 | 7,473,515,506 | IssuesEvent | 2018-04-03 15:34:13 | CS2103JAN2018-W09-B3/main | https://api.github.com/repos/CS2103JAN2018-W09-B3/main | closed | As a user I want to be able to change the profile pictures of students | priority.high type.story | so that I can have the best and most updated photo of each student | 1.0 | As a user I want to be able to change the profile pictures of students - so that I can have the best and most updated photo of each student | priority | as a user i want to be able to change the profile pictures of students so that i can have the best and most updated photo of each student | 1 |
82,510 | 3,614,306,544 | IssuesEvent | 2016-02-06 00:40:58 | RangerRick/CruiseMonkey | https://api.github.com/repos/RangerRick/CruiseMonkey | closed | entities still showing up when editing | bug high-priority | Post a tweet, then immediately edit it, it'll have HTML entities in it. | 1.0 | entities still showing up when editing - Post a tweet, then immediately edit it, it'll have HTML entities in it. | priority | entities still showing up when editing post a tweet then immediately edit it it ll have html entities in it | 1 |
717,527 | 24,678,866,212 | IssuesEvent | 2022-10-18 19:24:43 | oncokb/oncokb | https://api.github.com/repos/oncokb/oncokb | closed | Test redis cluster in beta | high priority | - [x] Setup the redis using Helm. This is the redis cluster helm chart https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
- [x] Update oncokb-meta to include CLUSTER("cluster") in the RedisType
- [x] Update oncokb-core-beta CacheConfiguration to use cluster config
This is the code generated by jHipster; we might need to adjust it to our settings
```
if (jHipsterProperties.getCache().getRedis().isCluster()) {
ClusterServersConfig clusterServersConfig = config
.useClusterServers()
.setMasterConnectionPoolSize(jHipsterProperties.getCache().getRedis().getConnectionPoolSize())
.setMasterConnectionMinimumIdleSize(jHipsterProperties.getCache().getRedis().getConnectionMinimumIdleSize())
.setSubscriptionConnectionPoolSize(jHipsterProperties.getCache().getRedis().getSubscriptionConnectionPoolSize())
.addNodeAddress(jHipsterProperties.getCache().getRedis().getServer());
if (redisUri.getUserInfo() != null) {
clusterServersConfig.setPassword(redisUri.getUserInfo().substring(redisUri.getUserInfo().indexOf(':') + 1));
}
}
```
- [x] Update the configmap of oncokb-core-beta to use the cluster mode
- [x] Add the grafana chart, I imagine the cluster section should work now
- [x] Test the redis is working properly
| 1.0 | Test redis cluster in beta - - [x] Setup the redis using Helm. This is the redis cluster helm chart https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
- [x] Update oncokb-meta to include CLUSTER("cluster") in the RedisType
- [x] Update oncokb-core-beta CacheConfiguration to use cluster config
This is the code generated by jHipster; we might need to adjust it to our settings
```
if (jHipsterProperties.getCache().getRedis().isCluster()) {
ClusterServersConfig clusterServersConfig = config
.useClusterServers()
.setMasterConnectionPoolSize(jHipsterProperties.getCache().getRedis().getConnectionPoolSize())
.setMasterConnectionMinimumIdleSize(jHipsterProperties.getCache().getRedis().getConnectionMinimumIdleSize())
.setSubscriptionConnectionPoolSize(jHipsterProperties.getCache().getRedis().getSubscriptionConnectionPoolSize())
.addNodeAddress(jHipsterProperties.getCache().getRedis().getServer());
if (redisUri.getUserInfo() != null) {
clusterServersConfig.setPassword(redisUri.getUserInfo().substring(redisUri.getUserInfo().indexOf(':') + 1));
}
}
```
- [x] Update the configmap of oncokb-core-beta to use the cluster mode
- [x] Add the grafana chart, I imagine the cluster section should work now
- [x] Test the redis is working properly
| priority | test redis cluster in beta setup the redis using helm this is the redis cluster helm chart update oncokb meta to include cluster cluster in the redistype update oncokb core beta cacheconfiguration to use cluster config this is the code generated from the jhipster might need to adjust to our settings if jhipsterproperties getcache getredis iscluster clusterserversconfig clusterserversconfig config useclusterservers setmasterconnectionpoolsize jhipsterproperties getcache getredis getconnectionpoolsize setmasterconnectionminimumidlesize jhipsterproperties getcache getredis getconnectionminimumidlesize setsubscriptionconnectionpoolsize jhipsterproperties getcache getredis getsubscriptionconnectionpoolsize addnodeaddress jhipsterproperties getcache getredis getserver if redisuri getuserinfo null clusterserversconfig setpassword redisuri getuserinfo substring redisuri getuserinfo indexof update the configmap of oncokb core beta to use the cluster mode add the grafana chart i imagine the cluster section should work now test the redis is working properly | 1 |
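For reference, the getters used in the quoted jHipster code map onto Spring configuration keys roughly like the fragment below. The key names follow jHipster's relaxed-binding conventions for `JHipsterProperties.Cache.Redis`; the host, port, and pool-size values are placeholder assumptions to verify against the oncokb-core-beta configmap:

```yaml
jhipster:
  cache:
    redis:
      # endpoint of the redis-cluster service (placeholder host)
      server: redis://oncokb-redis-cluster:6379
      cluster: true                          # -> isCluster()
      connection-pool-size: 64               # -> getConnectionPoolSize()
      connection-minimum-idle-size: 12       # -> getConnectionMinimumIdleSize()
      subscription-connection-pool-size: 50  # -> getSubscriptionConnectionPoolSize()
```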
660,935 | 22,036,227,508 | IssuesEvent | 2022-05-28 16:22:00 | code4romania/website-factory | https://api.github.com/repos/code4romania/website-factory | closed | [Blog] 502 Bad gateway on adding a blog post | high-priority :fire: Blog | Tried to add a new blog post (the first one)
Got this error. Both on SAVE and PUBLISH.

| 1.0 | [Blog] 502 Bad gateway on adding a blog post - Tried to add a new blog post (the first one)
Got this error. Both on SAVE and PUBLISH.

| priority | bad gateway on adding a blog post tried to add a new blog post the first one got this error both on save and publish | 1 |
465,415 | 13,385,281,284 | IssuesEvent | 2020-09-02 13:15:46 | carbon-design-system/ibm-dotcom-library | https://api.github.com/repos/carbon-design-system/ibm-dotcom-library | closed | Web Component: Develop Feature Card Block - Large of the React version | Airtable Done dev package: web components priority: high | #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
IBM.com Library developer
> I need to:
create the `Feature Card Block - Large`
> so that I can:
provide ibm.com adopter developers a web component version for every react version available in the ibm.com Library
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- Create with Shadow DOM and Custom Elements standards
- **See the Epic for the Design and Functional specs information**
- [React canary environment](https://ibmdotcom-react-canary.mybluemix.net/?path=/docs/overview-getting-started--page)
- Prod QA testing issue (#3557)
#### Acceptance criteria
- [ ] Include README for the web component and corresponding styles
- [ ] Create Web Components styles in styles package
- [ ] No custom styles in web-components package
- [ ] Do not create knobs in Storybook that include JSON objects
- [ ] Break out Storybook stories into multiple variation stories, if applicable
- [ ] Create codesandbox example under `/packages/web-components/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] A comment is posted in the Prod QA issue, tagging Praveen when development is finished
| 1.0 | Web Component: Develop Feature Card Block - Large of the React version - #### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
IBM.com Library developer
> I need to:
create the `Feature Card Block - Large`
> so that I can:
provide ibm.com adopter developers a web component version for every react version available in the ibm.com Library
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- Create with Shadow DOM and Custom Elements standards
- **See the Epic for the Design and Functional specs information**
- [React canary environment](https://ibmdotcom-react-canary.mybluemix.net/?path=/docs/overview-getting-started--page)
- Prod QA testing issue (#3557)
#### Acceptance criteria
- [ ] Include README for the web component and corresponding styles
- [ ] Create Web Components styles in styles package
- [ ] No custom styles in web-components package
- [ ] Do not create knobs in Storybook that include JSON objects
- [ ] Break out Storybook stories into multiple variation stories, if applicable
- [ ] Create codesandbox example under `/packages/web-components/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] A comment is posted in the Prod QA issue, tagging Praveen when development is finished
| priority | web component develop feature card block large of the react version user story as a ibm com library developer i need to create the feature card block large so that i can provide ibm com adopter developers a web component version for every react version available in the ibm com library additional information story within storybook with corresponding knobs utilize carbon create with shadow dom and custom elements standards see the epic for the design and functional specs information prod qa testing issue acceptance criteria include readme for the web component and corresponding styles create web components styles in styles package no custom styles in web components package do not create knobs in storybook that include json objects break out storybook stories into multiple variation stories if applicable create codesandbox example under packages web components examples codesandbox and include in readme minimum unit test coverage a comment is posted in the prod qa issue tagging praveen when development is finished | 1 |
431,571 | 12,483,542,475 | IssuesEvent | 2020-05-30 09:57:15 | Badwater-Apps/github-label-manager-2 | https://api.github.com/repos/Badwater-Apps/github-label-manager-2 | opened | Re-do UI: Single-column design | complexity: 3/5 lang: html lang: javascript priority: high status: available type: enhancement type: feature request | # Issue
Re-do UI: Single-column design
## New UI
- Single column
- Card organization from top to bottom:
- "Login" card, the one that contains input fields of repo owner, repo, username, and personal access token
- "Copy from other repos" card
- "Labels/Milestones/FAQ management card
| 1.0 | Re-do UI: Single-column design - # Issue
Re-do UI: Single-column design
## New UI
- Single column
- Card organization from top to bottom:
- "Login" card, the one that contains input fields of repo owner, repo, username, and personal access token
- "Copy from other repos" card
- "Labels/Milestones/FAQ management card
| priority | re do ui single column design issue re do ui single column design new ui single column card organization from top to bottom login card the one that contains input fields of repo owner repo username and personal access token copy from other repos card labels milestones faq management card | 1 |
548,958 | 16,082,444,339 | IssuesEvent | 2021-04-26 07:11:27 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | zapier.com - see bug description | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-normal | <!-- @browser: Firefox 89.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/71785 -->
**URL**: https://zapier.com/
**Browser / Version**: Firefox 89.0
**Operating System**: Windows 10
**Tested Another Browser**: Yes Opera
**Problem type**: Something else
**Description**: loading slower than before update
**Steps to Reproduce**:
switching between zaps very slow now
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210422190146</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/4/aae333f7-8dfd-4d45-a736-3245c2153d39)
_From [webcompat.com](https://webcompat.com/) with ❤️_
737,956 | 25,539,096,946 | IssuesEvent | 2022-11-29 14:09:18 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Make `torch.Library` (Python Registration) work with torchdeploy | high priority triaged module: deploy module: library | ### 🚀 The feature, motivation and pitch
Python Registration is not compatible with torchdeploy; to handle torchdeploy, we must introduce a level of indirection whereby there is a single registered kernel for all Python interpreters, but it then determines whether or not to fallthrough to the base implementation based on whether or not the registration happened from a matching interpreter
or not. We also need to ensure that if the old implementation for the dispatch key in question was just a fallthrough, then it still has the correct behavior.
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @gchanan @zou3519 @wconstab @anjali411
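The torchdeploy issue above describes a specific dispatch architecture: one shared kernel registered for all Python interpreters, which either runs the override or falls through to the base implementation depending on whether the registration came from the calling interpreter. A minimal sketch of that indirection — names and structure are illustrative only, not PyTorch's real dispatcher internals:

```python
# Hypothetical sketch of the proposed indirection; names are illustrative
# and do not correspond to PyTorch's actual dispatcher code.

BASE_IMPL = lambda x: x + 1  # stands in for the pre-existing kernel

# registrations: op name -> (owning interpreter id, override fn)
_registrations = {}

def register(op, interp_id, fn):
    """A Python registration made from one specific interpreter."""
    _registrations[op] = (interp_id, fn)

def shared_kernel(op, calling_interp_id, x):
    """Single kernel shared by all interpreters: run the override only if
    the registration was made from the calling interpreter, otherwise
    fall through to the base implementation."""
    entry = _registrations.get(op)
    if entry is not None:
        owner, fn = entry
        if owner == calling_interp_id:
            return fn(x)
    return BASE_IMPL(x)  # fallthrough

register("my::op", interp_id=1, fn=lambda x: x * 10)

print(shared_kernel("my::op", calling_interp_id=1, x=3))  # override runs -> 30
print(shared_kernel("my::op", calling_interp_id=2, x=3))  # falls through -> 4
```

The second call illustrates the fallthrough path the issue asks for: a registration made by interpreter 1 is invisible to interpreter 2, which lands on the base kernel instead.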
620,744 | 19,568,912,704 | IssuesEvent | 2022-01-04 07:07:30 | bounswe/2021SpringGroup2 | https://api.github.com/repos/bounswe/2021SpringGroup2 | closed | [Frontend] Badge Functionality Unit Tests | type: enhancement priority: high state: in progress Frontend | I have implemented the badge functionality with various pages. All the badge information is kept in the backend; therefore, I think it would be a good idea to check the controller methods for this functionality using mock response. I am planning to check if the responses got from the backend are formatted correctly before used and if the requests sent to the backend are in correct format with method type and body.
812,680 | 30,347,842,454 | IssuesEvent | 2023-07-11 16:37:51 | virtualcell/vcell | https://api.github.com/repos/virtualcell/vcell | opened | Curator access to database needs fixes, specifically to flags | bug High Priority VCell-7.5.1 Next Release | update forms for publications to make operational, make stats page available
For publications: https://vcell-node4:8080/biomodel
For stats: http://code3.cam.uchc.edu/statistics.php (old)
replaced by https://vcellapi-beta.cam.uchc.edu:8080/rpc?stats
767,078 | 26,910,078,375 | IssuesEvent | 2023-02-06 22:40:11 | nhoizey/nicolas-hoizey.photo | https://api.github.com/repos/nhoizey/nicolas-hoizey.photo | closed | Remove p.njk | priority: high 🟠 | I don't use short URL anyway.
Will reduce build time a little.
—
Created via [Raycast](https://www.raycast.com?ref=signatureGithub)
227,629 | 7,540,446,039 | IssuesEvent | 2018-04-17 06:28:36 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | closed | Deduplication service error | kind/bug priority/high status/todo | Deduplication service error
```
=SUPERVISOR REPORT==== 21-Mar-2018::13:16:01 ===
Supervisor: {local,'Elixir.EHealth.DuplicatePersons.CleanupTasks'}
Context: child_terminated
Reason: {{badmatch,
{error,
#{<<"error">> =>
#{<<"errors">> =>
#{<<"detail">> =>
<<"Internal server error">>}},
<<"meta">> =>
#{<<"code">> => 500,
<<"request_id">> =>
<<"qqn8fs7b8bl8g6slcvko2h07vchit14q">>,
<<"type">> => <<"object">>,
<<"url">> =>
<<"http://api-svc.mpi/merge_candidates/81bd7fbd-fad1-4f02-b699-db46b770827b">>}}}},
[{'Elixir.EHealth.DuplicatePersons.Cleanup',cleanup,2,
[{file,"lib/ehealth/duplicate_persons/cleanup.ex"},
{line,22}]},
{'Elixir.Enum','-each/2-lists^foreach/1-0-',2,
[{file,"lib/enum.ex"},{line,675}]},
{'Elixir.Enum',each,2,[{file,"lib/enum.ex"},{line,675}]},
{'Elixir.Task.Supervised',do_apply,2,
[{file,"lib/task/supervised.ex"},{line,85}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
Offender: [{pid,<0.2340.0>},
{id,'Elixir.Task.Supervised'},
{mfargs,{'Elixir.Task.Supervised',start_link,undefined}},
{restart_type,temporary},
{shutdown,5000},
{child_type,worker}]
=SUPERVISOR REPORT==== 21-Mar-2018::13:16:01 ===
Supervisor: {local,'Elixir.EHealth.DuplicatePersons.CleanupTasks'}
Context: child_terminated
Reason: {{badmatch,
{error,
#{<<"error">> =>
#{<<"errors">> =>
#{<<"detail">> =>
<<"Internal server error">>}},
<<"meta">> =>
#{<<"code">> => 500,
<<"request_id">> =>
<<"sbelssmmmfv4c3iha79vgpl8b10kpdpf">>,
<<"type">> => <<"object">>,
<<"url">> =>
<<"http://api-svc.mpi/merge_candidates/8ff083ae-4cd7-4a3d-a2d0-d5ab5086c667">>}}}},
[{'Elixir.EHealth.DuplicatePersons.Cleanup',cleanup,2,
[{file,"lib/ehealth/duplicate_persons/cleanup.ex"},
{line,22}]},
{'Elixir.Enum','-each/2-lists^foreach/1-0-',2,
[{file,"lib/enum.ex"},{line,675}]},
{'Elixir.Enum',each,2,[{file,"lib/enum.ex"},{line,675}]},
{'Elixir.Task.Supervised',do_apply,2,
[{file,"lib/task/supervised.ex"},{line,85}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,247}]}]}
Offender: [{pid,<0.2326.0>},
{id,'Elixir.Task.Supervised'},
{mfargs,{'Elixir.Task.Supervised',start_link,undefined}},
{restart_type,temporary},
{shutdown,5000},
{child_type,worker}]
Task #PID<0.2317.0> started from EHealth.DuplicatePersons.Signals terminating
** (MatchError) no match of right hand side value: {:error, %{"error" => %{"errors" => %{"detail" => "Internal server error"}}, "meta" => %{"code" => 500, "request_id" => "hagtgjd92dmh60tshh8onuaq062n00ju", "type" => "object", "url" => "http://api-svc.mpi/merge_candidates/3bcbf456-ffb1-48c6-a6e4-acd6dd305eea"}}}
(ehealth) lib/ehealth/duplicate_persons/cleanup.ex:22: EHealth.DuplicatePersons.Cleanup.cleanup/2
(elixir) lib/enum.ex:675: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:675: Enum.each/2
(elixir) lib/task/supervised.ex:85: Task.Supervised.do_apply/2
(stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Function: #Function<4.71368659/0 in EHealth.DuplicatePersons.Signals.handle_call/3>
Args: []
```
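The `MatchError` in the log above occurs because `cleanup/2` pattern-matches only the success shape of the MPI response, so a single 500 from `merge_candidates` crashes the whole cleanup task and its supervisor restarts it. The defensive pattern is to branch on the error case and keep going — sketched here in Python for illustration (the real service is Elixir, and the shapes below are hypothetical):

```python
# Illustrative sketch only: handle the error branch of a service response
# instead of asserting success, so one 500 does not kill the whole task.

def update_merge_candidate(candidate_id, responses):
    # `responses` stands in for the MPI HTTP client; it may return an error.
    return responses.get(candidate_id, ("error", {"code": 500}))

def cleanup(candidate_ids, responses):
    failed = []
    for cid in candidate_ids:
        status, payload = update_merge_candidate(cid, responses)
        if status == "ok":
            continue  # merged successfully
        failed.append((cid, payload["code"]))  # record the failure, move on
    return failed

responses = {"a": ("ok", {}), "b": ("error", {"code": 500}), "c": ("ok", {})}
print(cleanup(["a", "b", "c"], responses))  # -> [('b', 500)]
```

In the Elixir original the equivalent change would be a `case` over both `{:ok, _}` and `{:error, _}` rather than a bare `{:ok, _} = ...` match.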
156,478 | 5,970,040,895 | IssuesEvent | 2017-05-30 21:38:11 | broadinstitute/gatk | https://api.github.com/repos/broadinstitute/gatk | closed | RMSMappingQuality explodes when there are 0 reads | bug Engine PRIORITY_HIGH | @kcibul reports getting the following error while running `GenotypeGVCFs`:
```
New error coming out of GenotypeGVCFs when running 1000 shards:
[11:20]
***********************************************************************
A USER ERROR has occurred: Bad input: Cannot calculate Root Mean Square Mapping Quality if there are 0 or less reads.
Number of reads recorded as :0
In VariantContext: [VC UG_call @ chr1:143249103 Q38.86 of type=SNP alleles=[C*, T] attr={DP=14, ExcessHet=3.0102999210357666, MLEAC=[2], MLEAF=[0.25], RAW_MQ=0.0} GT=[[09C81377 C*/C* GQ 6 DP 5 PL 0,6,90 {MIN_DP=4}],[09C83237 C*/C* GQ 3 DP 6 PL 0,3,45 {MIN_DP=6}],[09C97255 ./. DP 0 PL 0,0,0 {MIN_DP=0}],[09C98651 ./. DP 0 PL 0,0,0 {MIN_DP=0}],[09C99383 T/T GQ 3 PL 45,3,0 {PGT=0|1, PID=143249097_C_T, SB=[0, 0, 0, 0]}],[09C99677 ./. DP 0 PL 0,0,0 {MIN_DP=0}],[10C100868 C*/C* GQ 3 DP 7 PL 0,3,45 {MIN_DP=4}],[10C101312 ./. DP 0 PL 0,0,0 {MIN_DP=0}],[10C102545 ./. DP 1 PL 0,0,0 {MIN_DP=0}],[10C102782 ./. DP 0 PL 0,0,0 {MIN_DP=0}]]
[11:21]
```
I think this might be a bug, or at least a case of over-aggressive error checking. We have an extra check in our version of `RMSMappingQuality` that is not present in the GATK3 version:
```
if (numOfReads <= 0){
throw new UserException.BadInput("Cannot calculate Root Mean Square Mapping Quality if there are 0 or less reads." +
"\nNumber of reads recorded as :" +numOfReads +
"\nIn VariantContext: "+ vc.toStringDecodeGenotypes());
}
```
We should match the GATK3 behavior in this case, unless we can **prove** (and not merely infer) that GATK3 also explodes.
@lbergelson Since you are git blamed here, can you give your thoughts on this?
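For context on the annotation the check above guards: root-mean-square mapping quality is sqrt(sum of squared MQs / number of reads), and the failing record carries `RAW_MQ=0.0` (the pre-summed squares) with zero informative reads. A sketch — not GATK's implementation — of one lenient option, returning a sentinel instead of throwing when there are no reads:

```python
import math

# Sketch only (not GATK code): RMS mapping quality from the pre-summed
# squared MQs (RAW_MQ) and the read count. With 0 reads there is nothing
# to average, so instead of raising, skip the annotation by returning
# None -- one possible lenient behavior, assumed here for illustration.

def rms_mapping_quality(raw_mq, num_reads):
    if num_reads <= 0:
        return None  # no reads: emit no annotation rather than exploding
    return math.sqrt(raw_mq / num_reads)

print(rms_mapping_quality(3600.0, 1))  # one read with MQ 60 -> 60.0
print(rms_mapping_quality(0.0, 0))     # the RAW_MQ=0.0 case -> None
```

Whether the sentinel should be `None`, `NaN`, or an omitted INFO field is exactly the GATK3-parity question the issue raises.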
196,914 | 6,950,810,358 | IssuesEvent | 2017-12-06 12:14:37 | OperationCode/operationcode_frontend | https://api.github.com/repos/OperationCode/operationcode_frontend | closed | Rework scholarships on front page | beginner friendly Priority: High Status: Available Type: Feature | <!-- Please fill out one of the sections below based on the type of issue you're creating -->
# Feature
## Why is this feature being added?
Add enough room to put our events on the front page.
## What should your feature do?
1. Move "Code School Scholarships" before "Conference Scholarships."
2. Change copy on "Code School Scholarships" to,
> Our scholarships provide opportunities for the military community to kickstart their careers in software development. We partner with tech conferences around the country and offers scholarship tickets to events throughout the year, as well as partial and full tuition scholarships to coding bootcamps.
3. Change "Code School Scholarships" title to "Scholarships."
4. Link the image and text to our Code Schools page (this is just for now)
188,680 | 6,779,845,553 | IssuesEvent | 2017-10-29 06:00:57 | ballerinalang/composer | https://api.github.com/repos/ballerinalang/composer | opened | "Bad String" error when the key is not a string when importing a json | 0.94.1 Priority/High Severity/Major Type/Bug | **Steps**
1. Import the following json which a valid nested json object in Ballerina
`{fname:"Peter", lname:"Stallone", "age":32, address:{line:"20 Palm Grove",city:"Colombo 03",country:"Sri Lanka"}}`
**Issue**
"Bad String" error is displayed to the user and blocks the user from adding it

**Expected**
Valid json objects in Ballerina should also be able to import from composer
| 1.0 | "Bad String" error when the key is not a string when importing a json - **Steps**
1. Import the following json which a valid nested json object in Ballerina
`{fname:"Peter", lname:"Stallone", "age":32, address:{line:"20 Palm Grove",city:"Colombo 03",country:"Sri Lanka"}}`
**Issue**
"Bad String" error is displayed to the user and blocks the user from adding it

**Expected**
Valid json objects in Ballerina should also be able to import from composer
| priority | bad string error when the key is not a string when importing a json steps import the following json which a valid nested json object in ballerina fname peter lname stallone age address line palm grove city colombo country sri lanka issue bad string error is displayed to the user and blocks the user from adding it expected valid json objects in ballerina should also be able to import from composer | 1 |
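The composer bug above hinges on two different grammars: strict JSON (RFC 8259) requires every key to be a double-quoted string, while — per the issue report — Ballerina's json literal syntax also accepts identifier keys like `fname:`. A quick illustration using Python's `json` module, which is strict and therefore rejects the Ballerina-style input the same way the composer's "Bad String" validator does:

```python
import json

# Strict JSON requires quoted keys, so a validator built on a strict parser
# rejects Ballerina-style identifier keys even though Ballerina accepts them.
ballerina_style = '{fname:"Peter", lname:"Stallone", "age":32}'
strict_json     = '{"fname":"Peter", "lname":"Stallone", "age":32}'

try:
    json.loads(ballerina_style)
    outcome = "accepted"
except json.JSONDecodeError:
    outcome = "rejected"

print(outcome)                          # unquoted keys -> rejected
print(json.loads(strict_json)["age"])   # quoted keys parse fine -> 32
```

The fix requested by the issue amounts to validating imports against Ballerina's lenient grammar rather than strict JSON.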
457,898 | 13,164,784,625 | IssuesEvent | 2020-08-11 04:48:23 | justincmendes/pd-bot | https://api.github.com/repos/justincmendes/pd-bot | opened | Cooldown Time: 5 Seconds on Help Command Calls | enhancement high priority practical consideration | As help commands show an embed with lots of information it can be easily spammed. To avoid spam and overloading the bot, set a **5 second cooldown**.
238,903 | 7,784,359,629 | IssuesEvent | 2018-06-06 13:04:07 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | IE 11: typing / doesn't make the block autocomplete suggestions appear | Priority High [Status] In Progress [Type] Bug | In IE 11, typing `/` to make the blocks autocomplete suggestions appear, doesn't do anything. In the screenshot below: IE 11 and Chrome on Windows:

309,294 | 9,466,471,626 | IssuesEvent | 2019-04-18 04:43:17 | wso2/product-is | https://api.github.com/repos/wso2/product-is | reopened | When reCaptcha is enabled, multi option login steps are not shown for given number of failed attempts. | Affected/5.8.0-Alpha2 Complexity/Medium Component/Adaptive Auth Priority/High Severity/Critical Type/Bug | - Add a Service Provider (OIDC/SSO samples)
- Enable reCaptcha (after 1 failed login)
- Enable multi-option login (eg: basic auth as step 1 and Email OTP as step 2)
- Add a new claim to store the failed attempts before login.
- Use script based adaptive authentication and added the following code.
```
// This variable is used to define the number of invalid attempts allowed before prompting the second factor
var invalidAttemptsToStepup = 2;
var failedLoginAttemptsBeforeSuccessClaim= 'http://wso2.org/claims/identity/failedLoginAttemptsBeforeSuccess';
function onLoginRequest(context) {
doLogin(context);
}
function doLogin(context) {
executeStep(1, {
onSuccess : function(context){
var user = context.steps[1].subject;
if (isExceedInvalidAttempts(user)) {
executeStep(2);
}
},
onFail : function(context) {
// Retry the login..
doLogin(context);
}
});
}
function isExceedInvalidAttempts(user) {
if (user.localClaims[failedLoginAttemptsBeforeSuccessClaim] >= invalidAttemptsToStepup) {
return true;
} else {
return false;
}
}
```
When reCaptcha is set to come up after 1 failed login attempt and Email OTP to come up after 2 failed login attempts, to get E-mail OTP (which is set as the 2nd step) it takes 5 or more invalid login attempts.
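The step-up condition in the adaptive script above reduces to a threshold check on the `failedLoginAttemptsBeforeSuccess` claim. Mirrored in Python for illustration — and note that one plausible (unconfirmed) explanation for the 5+ attempts observed is that captcha-blocked submissions never reach the authenticator, so they would not increment this claim:

```python
# Python mirror of the script's isExceedInvalidAttempts check, for illustration.
FAILED_ATTEMPTS_CLAIM = (
    "http://wso2.org/claims/identity/failedLoginAttemptsBeforeSuccess"
)
INVALID_ATTEMPTS_TO_STEP_UP = 2

def should_step_up(local_claims):
    # Step 2 (Email OTP) fires only once the recorded failed attempts reach
    # the threshold; any attempt that is rejected before authentication
    # (e.g. by reCaptcha) may never be counted here at all.
    return local_claims.get(FAILED_ATTEMPTS_CLAIM, 0) >= INVALID_ATTEMPTS_TO_STEP_UP

print(should_step_up({FAILED_ATTEMPTS_CLAIM: 1}))  # -> False
print(should_step_up({FAILED_ATTEMPTS_CLAIM: 2}))  # -> True
```

Under that hypothesis, the bug is in which attempts increment the claim, not in the threshold comparison itself.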
439,786 | 12,687,075,649 | IssuesEvent | 2020-06-20 14:37:11 | klesun/deep-assoc-completion | https://api.github.com/repos/klesun/deep-assoc-completion | closed | Support for new array members | high priority status: todo type: improvement | Not sure if this is a feature request, bug or consequence of me doing things weird XD
I used psalm/phan syntax to define an array of arrays of a certain format, which, when accessing the top array works fine:

If I try to add a new member to the array, I do not get any completion:

Maybe there is a better way to define the array that already works. Since this is just to help me prevent typos inside a function I'm not fixed on a specific declaration syntax, I'd use what works =)
247,632 | 7,921,347,606 | IssuesEvent | 2018-07-05 07:09:13 | canonical-websites/snapcraft.io | https://api.github.com/repos/canonical-websites/snapcraft.io | closed | On small screens home page 'tools' logos are invisible | Priority: High | Go to snapcraft.io on mobile, see 'tools' section. Tools items are very small with icons being invisible
<img width="369" alt="screen shot 2018-06-20 at 10 24 03" src="https://user-images.githubusercontent.com/83575/41646615-97e32514-7474-11e8-92e7-7f7003f02c79.png">
### Expected behaviour
It should look better ;)
595,967 | 18,092,204,536 | IssuesEvent | 2021-09-22 03:52:28 | open-rmf/rmf_internal_msgs | https://api.github.com/repos/open-rmf/rmf_internal_msgs | closed | Reduce bandwidth required by issue updates | enhancement priority:high | ## Feature request
### Description
Each query update currently includes a copy of the relevant query and the query ID. This information is redundant and wastes bandwidth, possibly significantly for complex queries. Remove these from the message without impacting the existing behaviour and robustness.
See [this comment](https://github.com/open-rmf/rmf_internal_msgs/pull/16#discussion_r658518030) for more information and context.
### Implementation considerations
The query is included to improve the robustness of the mirrors to errors in the schedule node. An alternative approach needs to be provided for mirrors to verify they are receiving updates for their query.
541,029 | 15,820,459,976 | IssuesEvent | 2021-04-05 19:00:27 | blitz-js/blitz | https://api.github.com/repos/blitz-js/blitz | closed | Upgrade react-query to v3 | kind/feature-change priority/high status/ready-to-work-on | ### What do you want and why?
We need to upgrade to react-query v3. This is a fairly sizable task and will require some TypeScript skills.
### Possible implementation(s)
[Read the v2 to v3 migration guide](https://react-query.tanstack.com/guides/migrating-to-react-query-3) and make all required changes.
- Upgrade react-query v3
- Add new queryClient providers
- Update types for our useQuery hooks (some types were renamed, etc. in v3)
- Add new `useQueries` hook
- Change template to use `useQueryErrorResetBoundary`
- ... any other changes needed
- Update our docs for all changes
### To Figure Out
- How to instantiate a QueryClient that we can use internally but also allow users to pass config to.
  - Maybe we export a new `<App>` component that users must include at the root of their _app component, and then this new `<App>` component can take react-query config as a prop?
- How to automatically integrate ssr cache hydration?
- https://react-query.tanstack.com/guides/ssr
- Either add Hydrate to new `<App>` component as in previous point or in https://github.com/blitz-js/blitz/blob/canary/packages/core/src/blitz-app-root.tsx
167,461 | 6,338,970,834 | IssuesEvent | 2017-07-27 06:55:25 | phansch/dotfiles | https://api.github.com/repos/phansch/dotfiles | closed | Use GNU stow for dotfiles | priority:high simplification | http://louistiao.me/posts/louis-does-dotfiles/
Stow allows me to group together related config files in directories. It's also possible to manage system-wide configuration, for example in `/etc`.
* [x] Move everything into nice stow packages
* [x] Remove .rcrc
* [x] Update setup script
* [x] Make sure all tests are still working, because some check hardcoded directories (shellcheck for example)
* [x] Update Readme
152,774 | 5,868,452,543 | IssuesEvent | 2017-05-14 12:56:30 | Lets-Dev/lets-dev | https://api.github.com/repos/Lets-Dev/lets-dev | opened | Sign-in / Sign-up | Priority: High Type: Enhancement | Adding account creation with the following omniauths:
- [ ] Facebook
- [ ] Github
- [ ] Google
192,203 | 6,847,591,827 | IssuesEvent | 2017-11-13 15:52:01 | CS2103AUG2017-T15-B1/main | https://api.github.com/repos/CS2103AUG2017-T15-B1/main | closed | Upload Test script to IVLE | checked.once checked.twice priority.high | Anton - Not Done
Wei Hong - Not Done
Ming Hui - Not Done
Jiahua - Not Done
599,128 | 18,266,113,257 | IssuesEvent | 2021-10-04 08:39:09 | stevenwaterman/Lexoral | https://api.github.com/repos/stevenwaterman/Lexoral | opened | Add mouse controls | enhancement high priority editor | Lexoral is keyboard-first, but we still need mouse controls for things. Add buttons.
645,379 | 21,003,395,510 | IssuesEvent | 2022-03-29 19:47:19 | alakajam-team/alakajam | https://api.github.com/repos/alakajam-team/alakajam | opened | 500 Error when browsing rankings | bug high priority | https://alakajam.com/7th-akj-tournament/edit-rankings
```
0|alakajam | 2022-03-29 21:46:53.036 ERROR (/server/error.middleware.ts:60) HTTP 500: Something went wrong, sorry about that.
0|alakajam | Error: Undefined binding(s) detected when compiling SELECT query: select "entry".* from "entry" left join "entry_details" on "entry_details"."entry_id" = "entry"."id" where "entry"."event_id" = ?
and "division" in (?) order by "entry"."karma" DESC, "entry"."created_at" DESC
```
502,102 | 14,540,170,950 | IssuesEvent | 2020-12-15 12:57:20 | GridTools/gt4py | https://api.github.com/repos/GridTools/gt4py | closed | Interval ordering and stage merging do not respect data dependencies | module: analysis priority: high triage: bug | Since the merging of [PR #169](https://github.com/GridTools/gt4py/pull/169) into master, stencils in the FV3 dynamical core (`fv3core`) that previously validated no longer produce correct data. The following stencil snippet demonstrates the problem:
```python
@gtscript.stencil()
def reorder_issue(dm: gtscript.Field[float]):
with computation(FORWARD):
with interval(0, -2):
bb = 2.0 * (1.0 + dm)
with interval(0, 1):
bet = bb
with interval(1, None):
bet = bet[0, 0, -1]
```
After the intervals are sorted, the following code would be generated:
```python
for K in range(0, 1):
bet = bb
for K in range(0, domain[2] - 2):
bb = 2.0 * (1.0 + dm)
for K in range(1, domain[2]):
bet = bet[0, 0, -1]
```
This results in the temporary field, `bb`, being written to the `bet` field in the interval (0, 1) before `bb` is computed on interval (0, domain[2] - 2). If one introduces a new computation block to enforce separation,
```python
@gtscript.stencil()
def reorder_issue(dm: gtscript.Field[float]):
with computation(FORWARD):
with interval(0, -2):
bb = 2.0 * (1.0 + dm)
with computation(FORWARD):
with interval(0, 1):
bet = bb
with interval(1, None):
bet = bet[0, 0, -1]
```
the multi-stages are merged and the same output code is generated. Only changing the iteration order of the first multi-stage resolves the problem,
```python
@gtscript.stencil()
def reorder_issue(dm: gtscript.Field[float]):
with computation(PARALLEL):
with interval(0, -2):
bb = 2.0 * (1.0 + dm)
with computation(FORWARD):
with interval(0, 1):
bet = bb
with interval(1, None):
bet = bet[0, 0, -1]
```
by generating the following:
```python
parfor K in range(0, 1):
bet = bb
for K in range(0, domain[2] - 2):
bb = 2.0 * (1.0 + dm)
for K in range(1, domain[2]):
bet = bet[0, 0, -1]
```
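The data hazard described above can be reproduced without gt4py by executing the three emitted loops in the buggy order versus the corrected one. In this plain NumPy sketch (the field size and the zero-initialized temporary are illustrative stand-ins, not gt4py internals), `bet` picks up a stale `bb` value whenever the `bb` loop runs second:

```python
import numpy as np

nz = 5
dm = np.ones(nz)

# Buggy schedule: 'bet = bb' on [0, 1) executes before 'bb' is computed.
bb = np.zeros(nz)          # zeros stand in for an uninitialized temporary
bet_bad = np.zeros(nz)
bet_bad[0] = bb[0]         # reads bb before it is written
for k in range(nz - 2):
    bb[k] = 2.0 * (1.0 + dm[k])
for k in range(1, nz):
    bet_bad[k] = bet_bad[k - 1]

# Correct schedule: compute bb first, then seed and propagate bet.
bb_ok = np.zeros(nz)
bet_ok = np.zeros(nz)
for k in range(nz - 2):
    bb_ok[k] = 2.0 * (1.0 + dm[k])
bet_ok[0] = bb_ok[0]
for k in range(1, nz):
    bet_ok[k] = bet_ok[k - 1]

print(bet_bad[-1], bet_ok[-1])  # 0.0 4.0
```

Running the `bb` loop ahead of both `bet` loops restores the write-before-read order, which is what the PARALLEL variant in the issue effectively achieves.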
646,400 | 21,046,875,814 | IssuesEvent | 2022-03-31 16:50:07 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [studio] Add site status API | new feature priority: high | ### Feature Request
#### Is your feature request related to a problem? Please describe.
Site creation can take a while, and it's hard to know when a site is ready.
#### Describe the solution you'd like
We need an API that lets us know if a site is ready for use or not.
Consider the endpoint: /site/status
##### Tasks
- [ ] Define the API in OAS and review
- [ ] Implement for 3.1
- [ ] Implement for 4.0
- [ ] Add to test automation
#### Describe alternatives you've considered
{{A clear and concise description of any alternative solutions or features you've considered.}}
239,730 | 7,799,978,996 | IssuesEvent | 2018-06-09 02:58:55 | tine20/Tine-2.0-Open-Source-Groupware-and-CRM | https://api.github.com/repos/tine20/Tine-2.0-Open-Source-Groupware-and-CRM | closed | 0006368:
add filter for internet (on/off/filtered) and class name | Courses Feature Request Mantis high priority | **Reported by pschuele on 2 May 2012 14:33**
add filter for internet (on/off/filtered) and class name
30,798 | 2,725,607,018 | IssuesEvent | 2015-04-15 02:04:51 | Ecotrust/madrona-priorities | https://api.github.com/repos/Ecotrust/madrona-priorities | closed | Arrangement of data layer accordion tabs [ 2 hours ] | High Priority question | Most of this time will go to discussion (clarifying the order and the request) and to reviewing how these tabs are ordered in the first place. Since they are auto-created during the import process, we'll have to look at when the ordering happens and what input options exist - the order may need to be determined by something within the input documents (order listed in the spreadsheet, perhaps?)
Notes below:
* Arrangement of left-side tabs
* Why have model input data as third tab?
* Arrangement of species data - under 1 tab(?) -- discuss
* watershed factors (?) - discuss -- how about 4 main tabs, in this order:
* Focal Fish Species
* Watershed Condition
* Climate Change Vulnerability
* Invasive Species
778,866 | 27,332,305,133 | IssuesEvent | 2023-02-25 19:42:14 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DEBUG=1 build crashes at exit | high priority module: crash triaged | ### 🐛 Describe the bug
Building PyTorch from source with DEBUG=1 leads to a crash at program exit
Build Command:
```
time TORCH_CUDA_ARCH_LIST="7.5" USE_DISTRIBUTED=1 USE_GLOO=1 BUILD_TEST=0 BUILD_CAFFE2=0 USE_CUDA=0 USE_ASAN=0 USE_MKLDNN=0 USE_KINETO=0 DEBUG=1 MAX_JOBS=$NCORES USE_XNNPACK=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_CUDNN=0 USE_NCCL=0 python setup.py develop
```
Snippet
```python
import torch
```
Output
```
pure virtual method called
terminate called without an active exception
Aborted (core dumped)
```
GDB Trace
```c++
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350465344) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140737350465344) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140737350465344, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7cc4476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff7caa7f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007fffe128b036 in __gnu_cxx::__verbose_terminate_handler () at ../../../../libstdc++-v3/libsupc++/vterminate.cc:95
#6 0x00007fffe1289524 in __cxxabiv1::__terminate (handler=<optimized out>) at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:48
#7 0x00007fffe1289576 in std::terminate () at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:58
#8 0x00007fffe128a063 in __cxxabiv1::__cxa_pure_virtual () at ../../../../libstdc++-v3/libsupc++/pure.cc:50
#9 0x00007fffeeff8dd1 in c10::SafePyObject::~SafePyObject (this=0x5555597ed658, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/c10/core/SafePyObject.h:32
#10 0x00007fffef86ba70 in torch::impl::dispatch::PythonKernelHolder::~PythonKernelHolder (this=0x5555597ed640, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/torch/csrc/utils/python_dispatch.cpp:103
#11 0x00007fffef86ba98 in torch::impl::dispatch::PythonKernelHolder::~PythonKernelHolder (this=0x5555597ed640, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/torch/csrc/utils/python_dispatch.cpp:103
#12 0x00007fffe1fc9b27 in c10::intrusive_ptr<c10::OperatorKernel, c10::detail::intrusive_target_default_null_type<c10::OperatorKernel> >::reset_ (
this=0x555555aa65b8) at /home/kshiteej/Pytorch/pytorch_functorch/c10/util/intrusive_ptr.h:291
#13 0x00007fffe1fc7700 in c10::intrusive_ptr<c10::OperatorKernel, c10::detail::intrusive_target_default_null_type<c10::OperatorKernel> >::~intrusive_ptr (
this=0x555555aa65b8, __in_chrg=<optimized out>) at /home/kshiteej/Pytorch/pytorch_functorch/c10/util/intrusive_ptr.h:370
#14 0x00007fffe1fc1a5c in c10::BoxedKernel::~BoxedKernel (this=0x555555aa65b8, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/core/boxing/BoxedKernel.h:74
#15 0x00007fffe1fc322a in c10::KernelFunction::~KernelFunction (this=0x555555aa65b8, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/core/boxing/KernelFunction.h:74
#16 0x00007fffe23479b0 in std::array<c10::KernelFunction, 112ul>::~array (this=0x555555aa6458, __in_chrg=<optimized out>)
at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/array:94
#17 0x00007fffe2347a46 in c10::impl::OperatorEntry::~OperatorEntry (this=0x555555aa6360, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/core/dispatch/OperatorEntry.h:70
--Type <RET> for more, q to quit, c to continue without paging--
#18 0x00007fffe2354af6 in c10::Dispatcher::OperatorDef::~OperatorDef (this=0x555555aa6360, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/core/dispatch/Dispatcher.h:66
#19 0x00007fffe2354b16 in __gnu_cxx::new_allocator<std::_List_node<c10::Dispatcher::OperatorDef> >::destroy<c10::Dispatcher::OperatorDef> (
this=0x7fffee81e700 <c10::Dispatcher::realSingleton()::_singleton>, __p=0x555555aa6360)
at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/ext/new_allocator.h:162
#20 0x00007fffe23525fb in std::allocator_traits<std::allocator<std::_List_node<c10::Dispatcher::OperatorDef> > >::destroy<c10::Dispatcher::OperatorDef> (
__a=..., __p=0x555555aa6360) at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/bits/alloc_traits.h:531
#21 0x00007fffe234f6f8 in std::__cxx11::_List_base<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::_M_clear (
this=0x7fffee81e700 <c10::Dispatcher::realSingleton()::_singleton>)
at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/bits/list.tcc:77
#22 0x00007fffe234c485 in std::__cxx11::_List_base<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::~_List_base (
this=0x7fffee81e700 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>)
at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/bits/stl_list.h:499
#23 0x00007fffe2347f6c in std::__cxx11::list<c10::Dispatcher::OperatorDef, std::allocator<c10::Dispatcher::OperatorDef> >::~list (
this=0x7fffee81e700 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>)
at /home/kshiteej/.conda/envs/pytorch-cuda-dev/x86_64-conda-linux-gnu/include/c++/10.4.0/bits/stl_list.h:827
#24 0x00007fffe234831d in c10::Dispatcher::~Dispatcher (this=0x7fffee81e700 <c10::Dispatcher::realSingleton()::_singleton>, __in_chrg=<optimized out>)
at /home/kshiteej/Pytorch/pytorch_functorch/aten/src/ATen/core/dispatch/Dispatcher.cpp:59
#25 0x00007ffff7cc7495 in __run_exit_handlers (status=0, listp=0x7ffff7e9b838 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true,
run_dtors=run_dtors@entry=true) at ./stdlib/exit.c:113
#26 0x00007ffff7cc7610 in __GI_exit (status=<optimized out>) at ./stdlib/exit.c:143
#27 0x00007ffff7cabd97 in __libc_start_call_main (main=main@entry=0x55555572f010 <main>, argc=argc@entry=2, argv=argv@entry=0x7fffffffb8e8)
at ../sysdeps/nptl/libc_start_call_main.h:74
#28 0x00007ffff7cabe40 in __libc_start_main_impl (main=0x55555572f010 <main>, argc=2, argv=0x7fffffffb8e8, init=<optimized out>, fini=<optimized out>,
rtld_fini=<optimized out>, stack_end=0x7fffffffb8d8) at ../csu/libc-start.c:392
#29 0x000055555572ef61 in _start ()
```
### Versions
master
cc @ezyang @gchanan @zou3519
include c array in impl operatorentry operatorentry this in chrg at home kshiteej pytorch pytorch functorch aten src aten core dispatch operatorentry h type for more q to quit c to continue without paging in dispatcher operatordef operatordef this in chrg at home kshiteej pytorch pytorch functorch aten src aten core dispatch dispatcher h in gnu cxx new allocator destroy this p at home kshiteej conda envs pytorch cuda dev conda linux gnu include c ext new allocator h in std allocator traits destroy a p at home kshiteej conda envs pytorch cuda dev conda linux gnu include c bits alloc traits h in std list base m clear this at home kshiteej conda envs pytorch cuda dev conda linux gnu include c bits list tcc in std list base list base this in chrg at home kshiteej conda envs pytorch cuda dev conda linux gnu include c bits stl list h in std list list this in chrg at home kshiteej conda envs pytorch cuda dev conda linux gnu include c bits stl list h in dispatcher dispatcher this in chrg at home kshiteej pytorch pytorch functorch aten src aten core dispatch dispatcher cpp in run exit handlers status listp run list atexit run list atexit entry true run dtors run dtors entry true at stdlib exit c in gi exit status at stdlib exit c in libc start call main main main entry argc argc entry argv argv entry at sysdeps nptl libc start call main h in libc start main impl main argc argv init fini rtld fini stack end at csu libc start c in start versions master cc ezyang gchanan | 1 |
433,588 | 12,507,329,648 | IssuesEvent | 2020-06-02 13:58:17 | canonical-web-and-design/dqlite.io | https://api.github.com/repos/canonical-web-and-design/dqlite.io | closed | Circle CI config is used in place of GH workflow | Priority: High | Project has a PR workflow set up in `.github/workflows/pr.yml`,
https://github.com/canonical-web-and-design/dqlite.io/blob/8cb4237080f28b7d8d7b7a7c1a4074fa4fc8b7a7/.github/workflows/pr.yml#L1-L9
but this file uses Circle CI config format instead of GitHub Actions. This causes the workflow to fail on every push to the repo:
https://github.com/canonical-web-and-design/dqlite.io/actions | 1.0 | Circle CI config is used in place of GH workflow - Project has a PR workflow set up in `.github/workflows/pr.yml`,
https://github.com/canonical-web-and-design/dqlite.io/blob/8cb4237080f28b7d8d7b7a7c1a4074fa4fc8b7a7/.github/workflows/pr.yml#L1-L9
but this file uses Circle CI config format instead of GitHub Actions. This causes the workflow to fail on every push to the repo:
https://github.com/canonical-web-and-design/dqlite.io/actions | priority | circle ci config is used in place of gh workflow project has a pr workflow set up in github workflows pr yml but this file uses circle ci config format instead of github actions this causes workflow to fail on every push to the repo | 1 |
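For comparison, a file at `.github/workflows/pr.yml` would be expected to follow the GitHub Actions schema — roughly this shape (job and step contents are illustrative, not taken from the repo):

```yaml
name: PR
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository, then install and build the site
      - uses: actions/checkout@v2
      - name: Install and build
        run: yarn install && yarn build
```

whereas a CircleCI config uses top-level `version:` and `jobs:` keys with `docker:` executors, which the GitHub Actions runner does not understand.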
153,969 | 5,906,740,451 | IssuesEvent | 2017-05-19 15:51:31 | bleenco/bterm | https://api.github.com/repos/bleenco/bterm | closed | [bug] Closing tabs | Priority: High Type: Bug | There should be a way in UI to close tabs.
Proposal: on hover there is an `x` button to close it... additionally, a two-finger / middle mouse button click.
Proposal: on hover there is `x` button to close it... additionally, two finger / mid-scroller mouse click. | priority | closing tabs there should be a way in ui to close tabs proposal on hover there is x button to close it additionally two finger mid scroller mouse click | 1 |
770,570 | 27,045,629,794 | IssuesEvent | 2023-02-13 09:33:57 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | New `robot:flatten` tag for "flattening" keyword structures | enhancement priority: high effort: medium | Introduction
------------
With nested keyword structures, especially with recursive keyword calls and with WHILE and FOR loops, the log file can get hard to understand with many different nesting levels. Such nested structures also increase output.xml size, because even a simple keyword like
```robotframework
*** Keywords ***
Keyword
Log Robot
Log Framework
```
creates this much content:
```xml
<kw name="Keyword">
<kw name="Log" library="BuiltIn">
<arg>Robot</arg>
<doc>Logs the given message with the given level.</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Robot</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.663"/>
</kw>
<kw name="Log" library="BuiltIn">
<arg>Framework</arg>
<doc>Logs the given message with the given level.</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Framework</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
```
We have had `--flattenkeywords` option for "flattening" such structures since RF 2.8.2 (#1551) and it works great. When a keyword is flattened, its child keywords and control structures are removed otherwise, but all their messages are preserved. It doesn't affect output.xml generated during execution, but flattening happens when output.xml files are parsed and can save huge amounts of memory. When `--flattenkeywords` is used with Rebot, it is possible to create a new flattened output.xml. For example, the above structure is converted into this if `Keyowrd` is flattened:
```
<kw name="Keyword">
<doc>_*Content flattened.*_</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Robot</msg>
<msg timestamp="20230103 20:06:36.663" level="INFO">Framework</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
```
Proposal
--------
Flattening works based on keyword names and based on tags, but it needs to be activated separately from the command line. This issue proposes adding new built-in tag `robot:flatten` that activates this behavior automatically. Removing top level keywords from tests and leaving only their messages doesn't make sense, so `robot:flatten` should be usable only as a keyword tag.
This functionality should work already during execution so that flattened keywords and control structures are never written to the output.xml file. This avoids the output.xml file growing big and is likely to also enhance performance a bit.
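A minimal sketch of how the proposed tag would be used (keyword name and messages illustrative; exact semantics subject to the open questions):

```robotframework
*** Keywords ***
Keyword
    [Tags]    robot:flatten
    Log    Robot
    Log    Framework
```

With the tag in place, output.xml would be expected to contain only `Keyword` with its two messages, matching the flattened structure shown earlier.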
Open questions
---------------
There are some open questions related to the design still:
- [ ] Should `start/end_keyword` listener methods be called with flattened keywords? I believe not, but I don't feel too strongly about this.
- [ ] Should we add *Content flattened* to keyword documentation like we do with `--flattenkeywords`? I believe not. There's the `robot:flatten` tag to indicate that anyway.
- [ ] Should `--flattenkeywords` be changed to work during execution as well? I believe yes, but that requires a separate issue.
- [ ] Should automatic TRACE level logging of arguments and return values of flattened keywords be disabled? I believe yes, but this isn't high priority.
Possible future enhancements
------------------------------
`--flattenkeywords` allows flattening WHILE or FOR loops or all loop iterations. Something like that would be convenient with built-in tags as well. We could consider something like `robot:flatten:while` and `robot:flatten:iteration` to support that, but I believe that's something that can wait until future versions.
Another alternative would be allowing tags with control structures as shown in the example below. This would require parser and model changes but could also have other use cases. That's certainly out of the scope of RF 6.1, though.
```robotframework
*** Keywords ***
Keyword
WHILE True
[Tags] robot:flatten
Nested
END
```
| 1.0 | New `robot:flatten` tag for "flattening" keyword structures - Introduction
------------
With nested keyword structures, especially with recursive keyword calls and with WHILE and FOR loops, the log file can get hard to understand with many different nesting levels. Such nested structures also increase output.xml size, because even a simple keyword like
```robotframework
*** Keywords ***
Keyword
Log Robot
Log Framework
```
creates this much content:
```xml
<kw name="Keyword">
<kw name="Log" library="BuiltIn">
<arg>Robot</arg>
<doc>Logs the given message with the given level.</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Robot</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.663"/>
</kw>
<kw name="Log" library="BuiltIn">
<arg>Framework</arg>
<doc>Logs the given message with the given level.</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Framework</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
```
We have had the `--flattenkeywords` option for "flattening" such structures since RF 2.8.2 (#1551) and it works great. When a keyword is flattened, its child keywords and control structures are removed, but all their messages are preserved. It doesn't affect output.xml generated during execution, but flattening happens when output.xml files are parsed and can save huge amounts of memory. When `--flattenkeywords` is used with Rebot, it is possible to create a new flattened output.xml. For example, the above structure is converted into this if `Keyword` is flattened:
```xml
<kw name="Keyword">
<doc>_*Content flattened.*_</doc>
<msg timestamp="20230103 20:06:36.663" level="INFO">Robot</msg>
<msg timestamp="20230103 20:06:36.663" level="INFO">Framework</msg>
<status status="PASS" starttime="20230103 20:06:36.663" endtime="20230103 20:06:36.664"/>
</kw>
```
Proposal
--------
Flattening works based on keyword names and based on tags, but it needs to be activated separately from the command line. This issue proposes adding new built-in tag `robot:flatten` that activates this behavior automatically. Removing top level keywords from tests and leaving only their messages doesn't make sense, so `robot:flatten` should be usable only as a keyword tag.
This functionality should work already during execution so that flattened keywords and control structures are never written to the output.xml file. This avoids the output.xml file growing big and is likely to also enhance performance a bit.
Open questions
---------------
There are some open questions related to the design still:
- [ ] Should `start/end_keyword` listener methods be called with flattened keywords? I believe not, but I don't feel too strongly about this.
- [ ] Should we add *Content flattened* to keyword documentation like we do with `--flattenkeywords`? I believe not. There's the `robot:flatten` tag to indicate that anyway.
- [ ] Should `--flattenkeywords` be changed to work during execution as well? I believe yes, but that requires a separate issue.
- [ ] Should automatic TRACE level logging of arguments and return values of flattened keywords be disabled? I believe yes, but this isn't high priority.
Possible future enhancements
------------------------------
`--flattenkeywords` allows flattening WHILE or FOR loops or all loop iterations. Something like that would be convenient with built-in tags as well. We could consider something like `robot:flatten:while` and `robot:flatten:iteration` to support that, but I believe that's something that can wait until future versions.
Another alternative would be allowing tags with control structures as shown in the example below. This would require parser and model changes but could also have other use cases. That's certainly out of the scope of RF 6.1, though.
```robotframework
*** Keywords ***
Keyword
WHILE True
[Tags] robot:flatten
Nested
END
```
| priority | new robot flatten tag for flattening keyword structures introduction with nested keyword structures especially with recursive keyword calls and with while and for loops the log file can get hard do understand with many different nesting levels such nested structures also increase output xml size because even a simple keyword like robotframework keywords keyword log robot log framework creates this much content xml robot logs the given message with the given level robot framework logs the given message with the given level framework we have had flattenkeywords option for flattening such structures since rf and it works great when a keyword is flattened its child keywords and control structures are removed otherwise but all their messages are preserved it doesn t affect output xml generated during execution but flattening happens when output xml files are parsed and can save huge amounts of memory when flattenkeywords is used with rebot it is possible to create a new flattened output xml for example the above structure is converted into this if keyowrd is flattened content flattened robot framework proposal flattening works based on keyword names and based on tags but it needs to be activated separately from the command line this issue proposes adding new built in tag robot flatten that activates this behavior automatically removing top level keywords from tests and leaving only their messages doesn t make sense so robot flatten should be usable only as a keyword tag this functionality should work already during execution so that flattened keywords and control structures are never written to output xml file this avoid output xml file growing big and is likely to also enhance the performance a bit open questions there are some open questions related to the design still should start end keyword listener methods be called with flattened keywords i believe not but i don t feel too strongly about this should we add content flattened to keyword documentation 
like we do with flattenkeywords i believe not there s the robot flatten tag to indicate that anyway should flattenkeywords be changed to work during execution as well i believe yes but that requires a separate issue should automatic trace level logging of arguments and return values of flattened keywords be disabled i believe yes but this isn t high priority possible future enhancements flattenkeywords allows flattening while or for loops or all loop iterations something like that would be convenient with built in tags as well we could consider something like robot flatten while and robot flatten iteration to support that but i believe that s something that can wait until future versions another alternative would be allowing tags with control structures as shown in the example below this would require parser and model changes but could also have other use cases that s certainly out of the scope of rf though robotframework keywords keyword while true robot flatten nested end | 1 |
174,765 | 6,542,907,922 | IssuesEvent | 2017-09-02 14:38:49 | k0shk0sh/FastHub | https://api.github.com/repos/k0shk0sh/FastHub | closed | Indicate that a PR line has a comment | Priority: High Status: Completed Type: Enhancement Type: Feature Request | **FastHub Version: 4.0.0**
**Android Version: 7.1.2 (SDK: 25)**
**Device Information:**
- Google
- google
- Pixel
- Account Type: GitHub
---
There's no way to tell if a line has a comment on it. | 1.0 | Indicate that a PR line has a comment - **FastHub Version: 4.0.0**
**Android Version: 7.1.2 (SDK: 25)**
**Device Information:**
- Google
- google
- Pixel
- Account Type: GitHub
---
There's no way to tell if a line has a comment on it. | priority | indicate that a pr line has a comment fasthub version android version sdk device information google google pixel account type github there s no way to tell if a line has a comment on it | 1 |
162,281 | 6,150,003,184 | IssuesEvent | 2017-06-27 21:24:58 | Polymer/polymer-cli | https://api.github.com/repos/Polymer/polymer-cli | closed | Lazy imports not handled properly when generating push manifests | Priority: High Status: Accepted Type: Bug |
### Description
Unless I'm misunderstanding how they're supposed to work, it seems that lazy imports are not being handled properly when generating push manifests.
I would expect a lazy import...
* ...to be included on the top level of the push manifest (with its dependencies listed below it, if the build is unbundled)
* ...*not* to be included in the manifest as a dependency to be pushed with the document that lazily imported it
In fact, neither of the above is true – a lazy import *is not* included as a fragment on the top level of the push manifest and *is* listed as a resource to be pushed with the document that imported it.
### Versions & Environment
- Polymer CLI: 1.0.0
- node: 6.3.0
- Operating System: macOS Sierra
#### Steps to Reproduce
1. `polymer init polymer-2-starter-kit`
2. Delete the `fragments` entry from `polymer.json`, since lazy imports should be implicitly recognized as fragments (the fact that the fragments are explicitly specified seems to be a [separate bug](https://github.com/Polymer/polymer-cli/issues/748) in the template)
3. `polymer build --preset=es6-unbundled --add-push-manifest`, or `polymer build --preset=es6-bundled --add-push-manifest`
#### Expected Results
Generated push manifest *does not* include the various `src/my-viewX.html` files as resources to be pushed for the `src/my-app.html` fragment.
Generated push manifest *does* include top-level entries for the `src/my-viewX.html` files
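For illustration, an expected unbundled manifest would look roughly like this (paths and weights are hypothetical; the push manifest maps each document to the resources to push for it):

```json
{
  "src/my-app.html": {
    "src/my-icons.html": { "type": "document", "weight": 1 }
  },
  "src/my-view1.html": {
    "bower_components/polymer/polymer-element.html": { "type": "document", "weight": 1 }
  }
}
```

i.e. `src/my-view1.html` would get its own top-level entry instead of being listed as a resource under `src/my-app.html`.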
#### Actual Results
Generated push manifest *does* include the various `src/my-viewX.html` files as resources to be pushed for the `src/my-app.html` fragment.
Generated push manifest *does not* include top-level entries for the `src/my-viewX.html` files | 1.0 | Lazy imports not handled properly when generating push manifests -
### Description
Unless I'm misunderstanding how they're supposed to work, it seems that lazy imports are not being handled properly when generating push manifests.
I would expect a lazy import...
* ...to be included on the top level of the push manifest (with its dependencies listed below it, if the build is unbundled)
* ...*not* to be included in the manifest as a dependency to be pushed with the document that lazily imported it
In fact, neither of the above is true – a lazy import *is not* included as a fragment on the top level of the push manifest and *is* listed as a resource to be pushed with the document that imported it.
### Versions & Environment
- Polymer CLI: 1.0.0
- node: 6.3.0
- Operating System: macOS Sierra
#### Steps to Reproduce
1. `polymer init polymer-2-starter-kit`
2. Delete the `fragments` entry from `polymer.json`, since lazy imports should be implicitly recognized as fragments (the fact that the fragments are explicitly specified seems to be a [separate bug](https://github.com/Polymer/polymer-cli/issues/748) in the template)
3. `polymer build --preset=es6-unbundled --add-push-manifest`, or `polymer build --preset=es6-bundled --add-push-manifest`
#### Expected Results
Generated push manifest *does not* include the various `src/my-viewX.html` files as resources to be pushed for the `src/my-app.html` fragment.
Generated push manifest *does* include top-level entries for the `src/my-viewX.html` files
#### Actual Results
Generated push manifest *does* include the various `src/my-viewX.html` files as resources to be pushed for the `src/my-app.html` fragment.
Generated push manifest *does not* include top-level entries for the `src/my-viewX.html` files | priority | lazy imports not handled properly when generating push manifests if you are asking a question rather than filing a bug you ll get better results using one of these instead stack overflow polymer slack channel mailing list description unless i m misunderstanding how they re supposed to work it seems that lazy imports are not being handled properly when generating push manifests i would expect a lazy import to be included on the top level of the push manifest with its dependencies listed below it if the build is unbundled not to be included in the manifest as a dependency to be pushed with the document that lazily imported it in fact neither of the above is true – a lazy import is not included as a fragment on the top level of the push manifest and is listed as a resource to be pushed with the document that imported it versions environment polymer version will show the version for polymer cli node version will show the version for node polymer cli node operating system macos sierra steps to reproduce example create an application project polymer init application add script tag to index html script src build polymer build polymer init polymer starter kit delete the fragments entry from polymer json since lazy imports should be implicitly recognized as fragments the fact that the fragments are explicitly specified seems to be a in the template polymer build preset unbundled add push manifest or polymer build preset bundled add push manifest expected results generated push manifest does not include the various src my viewx html files as resources to be pushed for the src my app html fragment generated push manifest does include top level entries for the src my viewx html files actual results generated push manifest does include the various src my viewx html files as resources to be pushed for the src my app html fragment generated push manifest does not include 
top level entries for the src my viewx html files | 1 |
438,366 | 12,626,943,225 | IssuesEvent | 2020-06-14 19:00:59 | cdnjs/cdnjs | https://api.github.com/repos/cdnjs/cdnjs | closed | Split cdnjs/cdnjs into human & robot repos | :rotating_light: High Priority | Following on from #13613 where I had begun looking at a plan to create a new "human" repo for cdnjs and leave this repository to become robot only...
The initial plan there had been to preserve as much as possible of the cdnjs "history" in the new repository. This included renaming cdnjs/cdnjs to cdnjs/packages so that PRs & issues were preserved into the new "human" repo, as well as using scripts to clean the existing cdnjs commit history of all non-`package.json` files so that the full commit history still existed but with manageable "human" data inside only.
However, both my test scripts for cleaning the commit history are still running after multiple days and are nowhere near completed. With it becoming very apparent that it will not be possible to preserve the commit history in the new "human" repo, we'll instead have a single commit in the new "human" repo that will contain all the `package.json` files at the point of migration.
With this approach in mind, I no longer see the value in renaming cdnjs/cdnjs to cdnjs/packages and then mirroring back to a new cdnjs/cdnjs repository - if we preserve PRs, they will all become invalid as the repository will be force-pushed to a single commit. Further, by not immediately transferring all the issues over, we can use the new "human" repo as a fresh start for cdnjs. | 1.0 | Split cdnjs/cdnjs into human & robot repos - Following on from #13613 where I had begun looking at a plan to create a new "human" repo for cdnjs and leave this repository to become robot only...
The initial plan there had been to preserve as much as possible of the cdnjs "history" in the new repository. This included renaming cdnjs/cdnjs to cdnjs/packages so that PRs & issues were preserved into the new "human" repo, as well as using scripts to clean the existing cdnjs commit history of all non-`package.json` files so that the full commit history still existed but with manageable "human" data inside only.
However, both my test scripts for cleaning the commit history are still running after multiple days and are nowhere near completed. With it becoming very apparent that it will not be possible to preserve the commit history in the new "human" repo, we'll instead have a single commit in the new "human" repo that will contain all the `package.json` files at the point of migration.
With this approach in mind, I no longer see the value in renaming cdnjs/cdnjs to cdnjs/packages and then mirroring back to a new cdnjs/cdnjs repository - if we preserve PRs, they will all become invalid as the repository will be force-pushed to a single commit. Further, by not immediately transferring all the issues over, we can use the new "human" repo as a fresh start for cdnjs. | priority | split cdnjs cdnjs into human robot repos following on from where i had begun looking at a plan to create a new human repo for cdnjs and leave this repository to become robot only the initial plan there had been to preserve as much as possible of the cdnjs history into the new repository this including renaming cdnjs cdnjs to cdnjs packages so that prs issues were preserved into the new human repo as well as using scripts to clean the existing cdnjs commit history of all non package json files so that the full commit history still existed but with manageable human data inside only however both my test scripts for cleaning the commit history are still running after multiple days and are nowhere near completed with it becoming very apparent that it will not be possible to preserve the commit history in the new human repo we ll instead have a single commit in the new human repo that will contain all the package json files at the point of migration with this approach in mind i no longer see the value in renaming cdnjs cdnjs to cdnjs packages and then mirroring back to a new cdnjs cdnjs repository if we preserve prs they will all become invalid as the repository will be force pushed to a single commit further by not immediately transferring all the issues over we can use the new human repo as a fresh start for cdnjs | 1 |
598,672 | 18,250,008,825 | IssuesEvent | 2021-10-02 03:26:04 | gambitph/Stackable | https://api.github.com/repos/gambitph/Stackable | closed | Blockquote icon has an `<a>` tag | bug high priority [version] V3 [block] Blockquote | 1. Add a blockquote block
2. Preview in the frontend and inspect the icon, you will see an `<a>` tag | 1.0 | Blockquote icon has an `<a>` tag - 1. Add a blockquote block
2. Preview in the frontend and inspect the icon, you will see an `<a>` tag | priority | blockquote icon has an tag add a blockquote block preview in the frontend and inspect the icon you will see an tag | 1 |
353,639 | 10,555,288,497 | IssuesEvent | 2019-10-03 21:28:55 | OpenSRP/opensrp-client-chw | https://api.github.com/repos/OpenSRP/opensrp-client-chw | closed | WASH task is not getting counted in the family task count | bug high priority | Steps to replicate:
1. Register a new family
2. On the family register list, the family appears but there are no due tasks:

The number of tasks shown on the family register list page should match the number of tasks that appear in the family due page, so in this case, it should be showing 1 task, because the WASH task is due:

| 1.0 | WASH task is not getting counted in the family task count - Steps to replicate:
1. Register a new family
2. On the family register list, the family appears but there are no due tasks:

The number of tasks shown on the family register list page should match the number of tasks that appear in the family due page, so in this case, it should be showing 1 task, because the WASH task is due:

| priority | wash task is not getting counted in the family task count steps to replicate register a new family on the family register list the family appears but there are no due tasks the number of tasks shown on the family register list page should match the number of tasks that appear in the family due page so in this case it should be showing task because the wash task is due | 1 |
423,229 | 12,293,169,933 | IssuesEvent | 2020-05-10 17:46:44 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | closed | Events cannot be edited when guest registrations were added | bug events priority: high | In GitLab by @se-bastiaan on Nov 3, 2019, 12:15
### One-sentence description
Events cannot be edited when guest registrations were added
### Current behaviour / Reproducing the bug
1. Create an event and publish
2. Add a guest registration
3. Edit the event
4. Crashes on the save of the push notification, the guest registration is `null`
### Expected behaviour
The guest registration should be filtered out of the list of members when the event is saved. | 1.0 | Events cannot be edited when guest registrations were added - In GitLab by @se-bastiaan on Nov 3, 2019, 12:15
### One-sentence description
Events cannot be edited when guest registrations were added
### Current behaviour / Reproducing the bug
1. Create an event and publish
2. Add a guest registration
3. Edit the event
4. Crashes on the save of the push notification, the guest registration is `null`
### Expected behaviour
The guest registration should be filtered out of the list of members when the event is saved. | priority | events cannot be edited when guest registrations were added in gitlab by se bastiaan on nov one sentence description events cannot be edited when guest registrations were added current behaviour reproducing the bug create an event and publish add a guest registration edit the event crashes on the save of the push notification the guest registration is null expected behaviour the guest registration should be filtered out of the list of members when the event is saved | 1 |
358,687 | 10,631,312,269 | IssuesEvent | 2019-10-15 08:05:57 | facebookresearch/nevergrad | https://api.github.com/repos/facebookresearch/nevergrad | opened | Consistent population-based algorithms | Difficulty: High Priority: Medium Type: Enhancement | Population based algorithms are written in a variety of ways while some common code could be extracted.
Also, for now they don't allow asking more than the population, which can be cumbersome (especially if some evaluation died without providing a value: in this case an individual of the population will always be ignored) | 1.0 | Consistent population-based algorithms - Population based algorithms are written in a variety of ways while some common code could be extracted.
Also, for now they don't allow asking more than the population, which can be cumbersome (especially if some evaluation died without providing a value: in this case an individual of the population will always be ignored) | priority | consistent population based algorithms population based algorithms are written in a variety of ways while some common code could be extracted also for now they don t allow asking more than the population which can be cumbersome especially if some evaluation died without providing a value in this case an individual of the population will always be ignored | 1 |
558,333 | 16,529,995,606 | IssuesEvent | 2021-05-27 03:48:18 | eliasreid/crystal-ai-ctrl | https://api.github.com/repos/eliasreid/crystal-ai-ctrl | closed | Disable input to wait for enemy to select initial pokemon | enhancement high priority | Similar to how input is disabled when waiting for enemy to choose action. Need to enforce the opponent actually choosing a pokemon before AI selects. | 1.0 | Disable input to wait for enemy to select initial pokemon - Similar to how input is disabled when waiting for enemy to choose action. Need to enforce the opponent actually choosing a pokemon before AI selects. | priority | disable input to wait for enemy to select initial pokemon similar to how input is disabled when waiting for enemy to choose action need to enforce the opponent actually choosing a pokemon before ai selects | 1 |
207,755 | 7,133,258,063 | IssuesEvent | 2018-01-22 17:00:59 | slaclab/happi | https://api.github.com/repos/slaclab/happi | opened | Travis by Conda | High Priority | Modify Travis to build and test using the Conda recipe then upload to `pcds-tag`, `pcds-dev` | 1.0 | Travis by Conda - Modify Travis to build and test using the Conda recipe then upload to `pcds-tag`, `pcds-dev` | priority | travis by conda modify travis to build and test using the conda recipe then upload to pcds tag pcds dev | 1 |