Schema (column: dtype, range or summary):
- Unnamed: 0: int64, min 0, max 832k
- id: float64, min 2.49B, max 32.1B
- type: stringclasses, 1 value
- created_at: stringlengths, 19 to 19
- repo: stringlengths, 7 to 112
- repo_url: stringlengths, 36 to 141
- action: stringclasses, 3 values
- title: stringlengths, 1 to 744
- labels: stringlengths, 4 to 574
- body: stringlengths, 9 to 211k
- index: stringclasses, 10 values
- text_combine: stringlengths, 96 to 211k
- label: stringclasses, 2 values
- text: stringlengths, 96 to 188k
- binary_label: int64, min 0, max 1
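The schema implies a direct mapping from the two-class `label` column to the integer `binary_label` column: the sample rows show `process` as 1 and `non_process` as 0. A minimal pandas sketch of deriving one from the other, using an inline stand-in since the actual export file is not named in this dump:

```python
from io import StringIO

import pandas as pd

# Hypothetical inline stand-in for the dataset export; substitute the real file.
sample_csv = StringIO(
    "label,text\n"
    "process,need to fix the ui of logo in error page\n"
    "non_process,message to team members and roe\n"
)

df = pd.read_csv(sample_csv)

# label is a 2-class string column; binary_label mirrors it as int64:
# 'process' -> 1, 'non_process' -> 0, matching the sample rows.
df["binary_label"] = (df["label"] == "process").astype("int64")
```

This is only a sketch of the apparent relationship between the two columns, not the dataset's actual build code.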
Row (Unnamed: 0 = 12,808):
- id: 15,184,789,425
- type: IssuesEvent
- created_at: 2021-02-15 10:03:18
- repo: GoogleCloudPlatform/fda-mystudies
- repo_url: https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
- action: closed
- title: [PM] Need to fix the UI of logo in error page
- labels: Bug P2 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
- body: A/R:- Currently app is shown some text in place of customer logo E/R:- Text need to be replaced with customer logo as per design ![image](https://user-images.githubusercontent.com/60500517/106480486-780d1d80-64d1-11eb-8307-cda6f3db1a85.png)
- index: 3.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: process
- text: need to fix the ui of logo in error page a r currently app is shown some text in place of customer logo e r text need to be replaced with customer logo as per design
- binary_label: 1
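Comparing `text_combine` with `text` in the row above suggests the cleaning pipeline lowercases, strips URLs, markdown images and links, @mentions, bracketed tags such as `[PM]`, and all non-letter characters, then collapses whitespace. A rough sketch of such a cleaner, inferred from the sample rows rather than taken from the dataset's actual preprocessing code:

```python
import re


def clean_text(text_combine: str) -> str:
    """Approximate the apparent text_combine -> text cleaning.

    Inferred from the sample rows, not taken from the dataset's
    actual preprocessing code; details may differ.
    """
    t = re.sub(r"!\[[^\]]*\]\([^)]*\)", " ", text_combine)  # markdown images
    t = re.sub(r"\[[^\]]*\]\([^)]*\)", " ", t)              # markdown links
    t = re.sub(r"\[[^\]]*\]", " ", t)                       # bracketed tags like [PM]
    t = re.sub(r"https?://\S+", " ", t)                     # bare URLs
    t = re.sub(r"@\w+", " ", t)                             # @mentions
    t = re.sub(r"[^a-z\s]", " ", t.lower())                 # keep letters only
    return re.sub(r"\s+", " ", t).strip()
```

Other rows show further quirks the sketch does not capture (for example, tokens containing digits such as `projects2.0` appear to be dropped whole), so treat this strictly as an approximation.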
Row (Unnamed: 0 = 7,633):
- id: 10,731,782,968
- type: IssuesEvent
- created_at: 2019-10-28 20:20:04
- repo: GroceriStar/food-datasets-csv-parser
- repo_url: https://api.github.com/repos/GroceriStar/food-datasets-csv-parser
- action: closed
- title: refactor parsers to use `mainWrapper`
- labels: in-process
- body: **Is your feature request related to a problem? Please describe.** with @sibasish14 latest changes in `mainWrapper` I think we should refactor all parsers to use it to reduce code duplication **Describe the solution you'd like** refactor all parsers to use `mainWrapper` fn in projects2.0
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: process
- text: refactor parsers to use mainwrapper is your feature request related to a problem please describe with latest changes in mainwrapper i think we should refactor all parsers to use it to reduce code duplication describe the solution you d like refactor all parsers to use mainwrapper fn in
- binary_label: 1
Row (Unnamed: 0 = 142,065):
- id: 13,012,790,931
- type: IssuesEvent
- created_at: 2020-07-25 07:51:51
- repo: sebrock/BananoNault
- repo_url: https://api.github.com/repos/sebrock/BananoNault
- action: closed
- title: Message to team members and RoE
- labels: documentation
- body: ## Thank you for joining the team for Project BananoNault (working title) This **_sebrock/BananoNault_** is the Development/Test repo. Production code will be in BananoCoin/Nault. In this message I want to lay down some rules for the teamwork in this Project. Please feel free to comment and discuss below. Eventually, we should all be happy with the way of working and guidelines around it, so it is critical that we have a common understanding and level of comfort. ## Rule #1: Have fun, collaborate, live and learn Does not need much explanation other than: - it's okay to not know something - nobody is perfect or knows all, we are all going to learn from this - mistakes are allowed - talk openly about them for maximum team learning effect. - when in doubt, ask someone to avoid mistakes, - there are no stupid questions - respect each other - have fun - sleep well - drink water - don't sweat it ## Project/activity Planning Project planning is done in [BananoCoin/Nault](https://github.com/BananoCoin/Nault/projects) with the "**Projects**" function and [_Issues_](https://github.com/sebrock/BananoNault/issues) representing activities. There are four Projects which represent the 4 main workstreams: - Project Environment - Functionality - Visual Design - i18n Each activity is/has to be assigned to one or more Projects in order for it to show up in the Projects view. There, each one is represented by a _Card_ ## Projects in action At the Project level you will see a [Kanban board](https://en.wikipedia.org/wiki/Kanban_board) with the typical columns representing the activity/issue resolution workflow. Example: Visual Design ![image](https://user-images.githubusercontent.com/156824/88368112-aa54fa00-cd8d-11ea-9a2c-ef33ee88172e.png) If you are beginning to work on one of the activities in the "To do" column, drag the card into the "In Progress" column and assign it to yourself, if it hasn't already been assigned to you. You can also add another assignee if you are going to work on it together. ## Labels There are a couple of standard/custom [_Labels_](https://github.com/BananoCoin/Nault/issues/labels) available, which should be used to categorize Issues. ![image](https://user-images.githubusercontent.com/156824/88368944-4df2da00-cd8f-11ea-8a6c-8f89abf3f041.png) Note that this enables us to use Issues not only to handle activity, but e.g. also documentation. That means that Information worth while keeping as documentation or reference for the project or future developments can and should be put into an Issue and flagged with the _documentation_ label. An example is the [Project Definition](https://github.com/BananoCoin/Nault/issues/21) or the [list of languages](https://github.com/BananoCoin/Nault/issues/29) for the i18n effort. ## Local development and testing Each of you should be working in a local clone of the repo, using [Github Desktop](https://desktop.github.com/). At a minimum, you will need the following installations (Also see [here](https://github.com/BananoCoin/Nault/issues/14)) - Node Package Manager: [Install NPM](https://www.npmjs.com/get-npm) - Typescript `npm install -g typescript` - Angular CLI: `npm install -g @angular/cli` Each of you should be testing the things you are changing/creating locally. You can compile and run the app locally by executing `ng serve --open` or `npm run wallet:dev` ## Commit and PRs Commits from your local repo to this (Dev/Test) repo should always be made to a separate branch at first. Then create a Pull Request to merge with sebrock/BananoNault:master and assign a reviewer. That way we can be more confident to merge the changes with [Upstream](https://github.com/BananoCoin/Nault)
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: non_process
- text: message to team members and roe thank you for joining the team for project bananonault working title this sebrock bananonault is the development test repo production code will be in bananocoin nault in this message i want to lay down some rules for the teamwork in this project please feel free to comment and discuss below eventually we should all be happy with the way of working and guidelines around it so it is critical that we have a common understanding and level of comfort rule have fun collaborate live and learn does not need much explanation other than it's okay to not know something nobody is perfect or knows all we are all going to learn from this mistakes are allowed talk openly about them for maximum team learning effect when in doubt ask someone to avoid mistakes there are no stupid questions respect each other have fun sleep well drink water don't sweat it project activity planning project planning is done in with the projects function and representing activities there are four projects which represent the main workstreams project environment functionality visual design each activity is has to be assigned to one or more projects in order for it to show up in the projects view there each one is represented by a card projects in action at the project level you will see a with the typical columns representing the activity issue resolution workflow example visual design if you are beginning to work on one of the activities in the to do column drag the card into the in progress column and assign it to yourself if it hasn't already been assigned to you you can also add another assignee if you are going to work on it together labels there are a couple of standard custom available which should be used to categorize issues note that this enables us to use issues not only to handle activity but e g also documentation that means that information worth while keeping as documentation or reference for the project or future developments can and should be put into an issue and flagged with the documentation label an example is the or the for the effort local development and testing each of you should be working in a local clone of the repo using at a minimum you will need the following installations also see node package manager typescript npm install g typescript angular cli npm install g angular cli each of you should be testing the things you are changing creating locally you can compile and run the app locally by executing ng serve open or npm run wallet dev commit and prs commits from your local repo to this dev test repo should always be made to a separate branch at first then create a pull request to merge with sebrock bananonault master and assign a reviewer that way we can be more confident to merge the changes with
- binary_label: 0
Row (Unnamed: 0 = 2,621):
- id: 5,396,080,299
- type: IssuesEvent
- created_at: 2017-02-27 10:35:48
- repo: jlm2017/jlm-video-subtitles
- repo_url: https://api.github.com/repos/jlm2017/jlm-video-subtitles
- action: closed
- title: [subtitles] [fr] #RDLS18 - CETA, OTAN, EUROPE, SECOURS CATHOLIQUE À CALAIS
- labels: Language: French Process: [4] Ready for review (2)
- body: # Video title #RDLS18 - CETA, OTAN, EUROPE, SECOURS CATHOLIQUE À CALAIS # URL https://www.youtube.com/watch?v=qhJVmuEtII8 # Youtube subtitles language French # Duration 30:34 # Subtitles URL https://www.youtube.com/timedtext_editor?ui=hd&action_mde_edit_form=1&bl=vmp&ref=player&v=qhJVmuEtII8&tab=captions&lang=fr
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: process
- text: ceta otan europe secours catholique à calais video title ceta otan europe secours catholique à calais url youtube subtitles language french duration subtitles url
- binary_label: 1
Row (Unnamed: 0 = 186,028):
- id: 15,044,092,523
- type: IssuesEvent
- created_at: 2021-02-03 02:10:26
- repo: facebookresearch/droidlet
- repo_url: https://api.github.com/repos/facebookresearch/droidlet
- action: closed
- title: Updates for autocomplete tool
- labels: documentation
- body: ## Type of Issue The documentation should talk about - 1. datasets folder should be fetched and updated before starting with the app and perhaps point to it 2. The README of template tool should mention auto_complete is in it 3. Can I wipe out the `commands.txt` entirely ? I think this file should be blank / user should create from scratch 4. What is `args` in `python ~/droidlet/tools/data_processing/txt_to_JSON.py [args]` ? 5. I think step #1 and #4 should be run before running the backend and frontend ? 6. Nit to rename : `~/droidlet/tools/data_processing/txt_to_JSON.py` to something 7. We need to pretty print the dictionaries 8. The location for where to run `python ~/droidlet/tools/data_processing/txt_to_JSON.py` should be mentioned 9. Fix the path for `annotations_dir_path` in the script in #8 10. "point" inside "dance" doesn't work for autocomplete 11. Once we are over or under the index, either wipe out the command field or show a prompt saying "you are done annotating" . Right now, it keeps showing the last message and incrementing the count. 12. We should not track the files : `backend/command_dict_pairs.json` and `backend/commands.txt`. 13. when json validation fails, show a pop up. 14. Rename `autocomplete_annotation.txt` to `high_pri_commands.txt` in S3 Select the type of issue: - [ ] Bug report (to report a bug) - [ ] Feature request (to request an additional feature) - [ ] Tracker (I am just using this as a tracker) - [x] Documentation Ask ## Description Detailed description of the requested feature/documentation ask/bug report. ## Current Behavior Description of current behavior or functionality. ## Expected Behavior Description of the expected behavior or functionality. In case of feature request, please add input and expected output. ## Steps to reproduce Please add steps to reproduce the bug here along with the stack trace. ## Links to any relevant pastes or documents Please post links to any relevant documents that might contain any extra information. ## Checklist Use this checklist if you are using this issue as a tracker: - [ ] Task 1 - [ ] Task 2 - [ ] Task 3 ...
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: non_process
- text: updates for autocomplete tool type of issue the documentation should talk about datasets folder should be fetched and updated before starting with the app and perhaps point to it the readme of template tool should mention auto complete is in it can i wipe out the commands txt entirely i think this file should be blank user should create from scratch what is args in python droidlet tools data processing txt to json py i think step and should be run before running the backend and frontend nit to rename droidlet tools data processing txt to json py to something we need to pretty print the dictionaries the location for where to run python droidlet tools data processing txt to json py should be mentioned fix the path for annotations dir path in the script in point inside dance doesn t work for autocomplete once we are over or under the index either wipe out the command field or show a prompt saying you are done annotating right now it keeps showing the last message and incrementing the count we should not track the files backend command dict pairs json and backend commands txt when json validation fails show a pop up rename autocomplete annotation txt to high pri commands txt in select the type of issue bug report to report a bug feature request to request an additional feature tracker i am just using this as a tracker documentation ask description detailed description of the requested feature documentation ask bug report current behavior description of current behavior or functionality expected behavior description of the expected behavior or functionality in case of feature request please add input and expected output steps to reproduce please add steps to reproduce the bug here along with the stack trace links to any relevant pastes or documents please post links to any relevant documents that might contain any extra information checklist use this checklist if you are using this issue as a tracker task task task
- binary_label: 0
Row (Unnamed: 0 = 9,260):
- id: 27,818,389,183
- type: IssuesEvent
- created_at: 2023-03-18 23:47:43
- repo: hackforla/website
- repo_url: https://api.github.com/repos/hackforla/website
- action: closed
- title: GitHub Actions: Bot adding and removing "Status: Updated" and "To Update!" label
- labels: role: back end/devOps Complexity: Large Status: Updated Feature: Board/GitHub Maintenance automation size: 2pt
- body: ### Overview There is a bug with the Github bot in which it both adds and removes the "Status: Updated" label. This should be fixed to avoid confusion and to make sure that the proper labels are being applied. ### Action Items - [x] Please go through the wiki article on [Hack for LA's GitHub Actions](https://github.com/hackforla/website/wiki/Hack-for-LA's-GitHub-Actions) - [x] Review the [add-label.js](https://github.com/hackforla/website/blob/2fe355b396ed7f785d382505b8c2ecafe09cb486/github-actions/add-update-label-weekly/add-label.js#L226) file, to understand how the bot adds and removes labels from the issue. - [x] Understand why the Github action bot adds and removes the "Status: Updated" and the "To Update!" label. - [x] Change the code logic to fix this error ### Checks - [x] Test in your local environment that it works ### Resources/Instructions Relevant files: - https://github.com/hackforla/website/blob/2fe355b396ed7f785d382505b8c2ecafe09cb486/github-actions/add-update-label-weekly/add-label.js - https://github.com/hackforla/website/blob/gh-pages/.github/workflows/add-update-label-weekly.yml Relevant Issues: Status: Updated Bug: - #2497 - #2561 - #2397 - #3059 To Update! Bug: - #2462 - #2317 Never done GitHub actions? [Start here!](https://docs.github.com/en/actions) - [GitHub Complex Workflows doc](https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows) - [GitHub Actions Workflow Directory](https://github.com/hackforla/website/tree/gh-pages/.github/workflows) - [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) - [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions) - [actions/github-script](https://github.com/actions/github-script) - [GitHub RESTAPI](https://docs.github.com/en/rest) #### Architecture Notes The idea behind the refactor is to organize our GitHub Actions so that developers can easily maintain and understand them. Currently, we want our GitHub Actions to be structured like so based on this [proposal](https://docs.google.com/spreadsheets/d/12NcZQoyGYlHlMQtJE2IM8xLYpHN75agb/edit#gid=1231634015): - Schedules (military time) - Schedule Friday 0700 - Schedule Thursday 1100 - Schedule Daily 1100 - Linters - Lint SCSS - PR Trigger - Add Linked Issue Labels to Pull Request - Add Pull Request Instructions - Issue Trigger - Add Missing Labels To Issues - WR - PR Trigger - WR Add Linked Issue Labels to Pull Request - WR Add Pull Request Instructions - WR - Issue Trigger Actions with the same triggers (excluding linters, which will be their own category) will live in the same github action file. Scheduled actions will live in the same file if they trigger on the same schedule (i.e. all files that trigger everyday at 11am will live in one file, while files that trigger on Friday at 7am will be on a separate file). That said, this structure is not set in stone. If any part of it feels strange, or you have questions, feel free to bring it up with the team so we can evolve this format!
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: non_process
- text: github actions bot adding and removing status updated and to update label overview there is a bug with the github bot in which it both adds and removes the status updated label this should be fixed to avoid confusion and to make sure that the proper labels are being applied action items please go through the wiki article on review the file to understand how the bot adds and removes labels from the issue understand why the github action bot adds and removes the status updated and the to update label change the code logic to fix this error checks test in your local environment that it works resources instructions relevant files relevant issues status updated bug to update bug never done github actions architecture notes the idea behind the refactor is to organize our github actions so that developers can easily maintain and understand them currently we want our github actions to be structured like so based on this schedules military time schedule friday schedule thursday schedule daily linters lint scss pr trigger add linked issue labels to pull request add pull request instructions issue trigger add missing labels to issues wr pr trigger wr add linked issue labels to pull request wr add pull request instructions wr issue trigger actions with the same triggers excluding linters which will be their own category will live in the same github action file scheduled actions will live in the same file if they trigger on the same schedule i e all files that trigger everyday at will live in one file while files that trigger on friday at will be on a separate file that said this structure is not set in stone if any part of it feels strange or you have questions feel free to bring it up with the team so we can evolve this format
- binary_label: 0
Row (Unnamed: 0 = 38,210):
- id: 8,697,523,238
- type: IssuesEvent
- created_at: 2018-12-04 20:31:30
- repo: ReubenBond/dict4cn
- repo_url: https://api.github.com/repos/ReubenBond/dict4cn
- action: closed
- title: word not find
- labels: Priority-Medium Type-Defect auto-migrated
- body: ``` What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Please provide any additional information below. ``` Original issue reported on code.google.com by `ja...@mt.com.tw` on 15 Oct 2012 at 9:21
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: non_process
- text: word not find what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by ja mt com tw on oct at
- binary_label: 0
Row (Unnamed: 0 = 6,590):
- id: 2,590,025,448
- type: IssuesEvent
- created_at: 2015-02-18 16:30:17
- repo: learningequality/ka-lite
- repo_url: https://api.github.com/repos/learningequality/ka-lite
- action: closed
- title: No indication is given to the user that a video is not available to watch from the topic bar
- labels: 0.13.x bug bash bug has PR high priority
- body: Branch: develop Expected Behavior: The sidebar should give some indication if content is not available to view, whether by greying out content in the sidebar or making it unclickable. Current Behavior: Content in the sidebar looks identical regardless of whether it is available or not. Steps to reproduce: Navigate to a topic that contains unavailable videos in the sidebar Screenshot(s): ![image](https://cloud.githubusercontent.com/assets/1680573/6116945/98a9fd82-b065-11e4-97eb-8c27b912a8c6.png)
- index: 1.0
- text_combine: title and body joined by " - " (verbatim duplicate of the two fields above)
- label: non_process
- text: no indication is given to the user that a video is not available to watch from the topic bar branch develop expected behavior the sidebar should give some indication if content is not available to view whether by greying out content in the sidebar or making it unclickable current behavior content in the sidebar looks identical regardless of whether it is available or not steps to reproduce navigate to a topic that contains unavailable videos in the sidebar screenshot s
- binary_label: 0
Row (Unnamed: 0 = 8,545):
- id: 11,717,634,427
- type: IssuesEvent
- created_at: 2020-03-09 17:36:02
- repo: shirou/gopsutil
- repo_url: https://api.github.com/repos/shirou/gopsutil
- action: closed
- title: Crash at call process.Cmdline
- labels: os:windows package:process
- body: **Describe the bug** Crash at call process.Cmdline, occurs occasionally but must occur Details: 499: .......................................................................................................................................................................could not get CommandLine: could not get win32Proc: empty ......could not get CommandLine: could not get win32Proc: empty ........................................................................................ 500: ........................................................................................................................could not get CommandLine: could not get win32Proc: empty .....................................................could not get CommandLine: could not get win32Proc: empty .could not get CommandLine: could not get win32Proc: empty ...............................................................................Exception 0xc0000005 0x0 0xc00059a000 0x7ffa10a0dc12 PC=0x7ffa10a0dc12 syscall.Syscall(0x7ffa10a0e4a0, 0x2, 0xc00059a000, 0xc000598320, 0x0, 0x0, 0x0, 0x0) D:/Go/src/runtime/syscall_windows.go:188 +0xfa syscall.(*Proc).Call(0xc000004600, 0xc000598330, 0x2, 0x2, 0x449229, 0x0, 0x0, 0xc000175be8) D:/Go/src/syscall/dll_windows.go:173 +0x1f0 github.com/go-ole/go-ole.CLSIDFromProgID(0x5562e9, 0x1a, 0x4e1496, 0xc000004580, 0xc000598310) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/com.go:120 +0xb1 github.com/go-ole/go-ole.ClassIDFrom(0x5562e9, 0x1a, 0x42d901, 0x663800, 0x54c6c0) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/utility.go:14 +0x40 github.com/go-ole/go-ole/oleutil.CreateObject(0x5562e9, 0x1a, 0x0, 0x1, 0x57b460) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/oleutil/oleutil.go:16 +0x40 github.com/StackExchange/wmi.(*Client).Query(0x644620, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/!stack!exchange/wmi@v0.0.0-20190523213315-cbe66965904d/wmi.go:150 +0x2d3 github.com/StackExchange/wmi.Query(0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/!stack!exchange/wmi@v0.0.0-20190523213315-cbe66965904d/wmi.go:76 +0x10c github.com/shirou/gopsutil/internal/common.WMIQueryWithContext.func1(0xc0005759e0, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:131 +0x81 created by github.com/shirou/gopsutil/internal/common.WMIQueryWithContext D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:130 +0x122 goroutine 1 [select]: github.com/shirou/gopsutil/internal/common.WMIQueryWithContext(0x579e20, 0xc0005758c0, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, ...) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:134 +0x1d4 github.com/shirou/gopsutil/process.GetWin32ProcWithContext(0x579de0, 0xc0000100b0, 0x75e0, 0x646680, 0x1313, 0x2, 0xc0000ec080, 0xc000036000) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:248 +0x12b github.com/shirou/gopsutil/process.(*Process).CmdlineWithContext(0xc00023a8f0, 0x579de0, 0xc0000100b0, 0x1, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:312 +0x4b github.com/shirou/gopsutil/process.(*Process).Cmdline(...)
D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:308 main.Test() E:/Project/33.dfagent/test/b/t.go:25 +0xc4 main.main() E:/Project/33.dfagent/test/b/t.go:43 +0xa8 rax 0x1 rbx 0x1 rcx 0x1bbfc58 rdi 0x1bbfce8 rsi 0xc00059a000 rbp 0x7ffa10bc2528 rsp 0x1bbfbe0 r8 0x0 r9 0x0 r10 0x1bbfc58 r11 0xc00059a000 r12 0x80040154 r13 0xffff8005ef3ca7b0 r14 0x0 r15 0x7ffa10c35860 rip 0x7ffa10a0dc12 rflags 0x10202 cs 0x33 fs 0x53 gs 0x2b exit status 2 **To Reproduce** ```go package main import ( "fmt" "time" "github.com/shirou/gopsutil/process" ) func IgnoreError() { if err := recover(); err != nil { fmt.Println(err) } } func Test() { v, err := process.Processes() if err != nil { fmt.Println(err) return } for _, p := range v { cmd, err := p.Cmdline() if err != nil { fmt.Println(err) continue } //fmt.Printf("%+v\n", cmd) _ = cmd fmt.Printf(".") time.Sleep(time.Duration(10) * time.Millisecond) } fmt.Printf("\n") } func main() { var i int = 0 for { fmt.Printf("%d:\n", i) Test() fmt.Printf("\n") i += 1 time.Sleep(time.Duration(3) * time.Second) } } ``` **Expected behavior** don't crash **Environment Microsoft Windows [Version 10.0.17134.829] **Additional context** [Cross-compiling? Paste the command you are using to cross-compile and the result of the corresponding `go env`]
1.0
Crash at call process.Cmdline - **Describe the bug** Crash at call process.Cmdline, occurs occasionally but must occur Details: 499: .......................................................................................................................................................................could not get CommandLine: could not get win32Proc: empty ......could not get CommandLine: could not get win32Proc: empty ........................................................................................ 500: ........................................................................................................................could not get CommandLine: could not get win32Proc: empty .....................................................could not get CommandLine: could not get win32Proc: empty .could not get CommandLine: could not get win32Proc: empty ...............................................................................Exception 0xc0000005 0x0 0xc00059a000 0x7ffa10a0dc12 PC=0x7ffa10a0dc12 syscall.Syscall(0x7ffa10a0e4a0, 0x2, 0xc00059a000, 0xc000598320, 0x0, 0x0, 0x0, 0x0) D:/Go/src/runtime/syscall_windows.go:188 +0xfa syscall.(*Proc).Call(0xc000004600, 0xc000598330, 0x2, 0x2, 0x449229, 0x0, 0x0, 0xc000175be8) D:/Go/src/syscall/dll_windows.go:173 +0x1f0 github.com/go-ole/go-ole.CLSIDFromProgID(0x5562e9, 0x1a, 0x4e1496, 0xc000004580, 0xc000598310) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/com.go:120 +0xb1 github.com/go-ole/go-ole.ClassIDFrom(0x5562e9, 0x1a, 0x42d901, 0x663800, 0x54c6c0) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/utility.go:14 +0x40 github.com/go-ole/go-ole/oleutil.CreateObject(0x5562e9, 0x1a, 0x0, 0x1, 0x57b460) D:/Gopath/go/pkg/mod/github.com/go-ole/go-ole@v1.2.4/oleutil/oleutil.go:16 +0x40 github.com/StackExchange/wmi.(*Client).Query(0x644620, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/!stack!exchange/wmi@v0.0.0-20190523213315-cbe66965904d/wmi.go:150 +0x2d3 
github.com/StackExchange/wmi.Query(0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/!stack!exchange/wmi@v0.0.0-20190523213315-cbe66965904d/wmi.go:76 +0x10c github.com/shirou/gopsutil/internal/common.WMIQueryWithContext.func1(0xc0005759e0, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:131 +0x81 created by github.com/shirou/gopsutil/internal/common.WMIQueryWithContext D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:130 +0x122 goroutine 1 [select]: github.com/shirou/gopsutil/internal/common.WMIQueryWithContext(0x579e20, 0xc0005758c0, 0xc000578f00, 0x267, 0x514540, 0xc00049cec0, 0x0, 0x0, 0x0, 0x0, ...) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/internal/common/common_windows.go:134 +0x1d4 github.com/shirou/gopsutil/process.GetWin32ProcWithContext(0x579de0, 0xc0000100b0, 0x75e0, 0x646680, 0x1313, 0x2, 0xc0000ec080, 0xc000036000) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:248 +0x12b github.com/shirou/gopsutil/process.(*Process).CmdlineWithContext(0xc00023a8f0, 0x579de0, 0xc0000100b0, 0x1, 0x0, 0x0, 0x0) D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:312 +0x4b github.com/shirou/gopsutil/process.(*Process).Cmdline(...) 
D:/Gopath/go/pkg/mod/github.com/shirou/gopsutil@v2.20.2+incompatible/process/process_windows.go:308 main.Test() E:/Project/33.dfagent/test/b/t.go:25 +0xc4 main.main() E:/Project/33.dfagent/test/b/t.go:43 +0xa8 rax 0x1 rbx 0x1 rcx 0x1bbfc58 rdi 0x1bbfce8 rsi 0xc00059a000 rbp 0x7ffa10bc2528 rsp 0x1bbfbe0 r8 0x0 r9 0x0 r10 0x1bbfc58 r11 0xc00059a000 r12 0x80040154 r13 0xffff8005ef3ca7b0 r14 0x0 r15 0x7ffa10c35860 rip 0x7ffa10a0dc12 rflags 0x10202 cs 0x33 fs 0x53 gs 0x2b exit status 2 **To Reproduce** ```go package main import ( "fmt" "time" "github.com/shirou/gopsutil/process" ) func IgnoreError() { if err := recover(); err != nil { fmt.Println(err) } } func Test() { v, err := process.Processes() if err != nil { fmt.Println(err) return } for _, p := range v { cmd, err := p.Cmdline() if err != nil { fmt.Println(err) continue } //fmt.Printf("%+v\n", cmd) _ = cmd fmt.Printf(".") time.Sleep(time.Duration(10) * time.Millisecond) } fmt.Printf("\n") } func main() { var i int = 0 for { fmt.Printf("%d:\n", i) Test() fmt.Printf("\n") i += 1 time.Sleep(time.Duration(3) * time.Second) } } ``` **Expected behavior** don't crash **Environment Microsoft Windows [Version 10.0.17134.829] **Additional context** [Cross-compiling? Paste the command you are using to cross-compile and the result of the corresponding `go env`]
process
crash at call process cmdline describe the bug crash at call process cmdline occurs occasionally but must occur details could not get commandline could not get empty could not get commandline could not get empty could not get commandline could not get empty could not get commandline could not get empty could not get commandline could not get empty exception pc syscall syscall d go src runtime syscall windows go syscall proc call d go src syscall dll windows go github com go ole go ole clsidfromprogid d gopath go pkg mod github com go ole go ole com go github com go ole go ole classidfrom d gopath go pkg mod github com go ole go ole utility go github com go ole go ole oleutil createobject d gopath go pkg mod github com go ole go ole oleutil oleutil go github com stackexchange wmi client query d gopath go pkg mod github com stack exchange wmi wmi go github com stackexchange wmi query d gopath go pkg mod github com stack exchange wmi wmi go github com shirou gopsutil internal common wmiquerywithcontext d gopath go pkg mod github com shirou gopsutil incompatible internal common common windows go created by github com shirou gopsutil internal common wmiquerywithcontext d gopath go pkg mod github com shirou gopsutil incompatible internal common common windows go goroutine github com shirou gopsutil internal common wmiquerywithcontext d gopath go pkg mod github com shirou gopsutil incompatible internal common common windows go github com shirou gopsutil process d gopath go pkg mod github com shirou gopsutil incompatible process process windows go github com shirou gopsutil process process cmdlinewithcontext d gopath go pkg mod github com shirou gopsutil incompatible process process windows go github com shirou gopsutil process process cmdline d gopath go pkg mod github com shirou gopsutil incompatible process process windows go main test e project dfagent test b t go main main e project dfagent test b t go rax rbx rcx rdi rsi rbp rsp rip rflags cs fs gs exit status to 
reproduce go package main import fmt time github com shirou gopsutil process func ignoreerror if err recover err nil fmt println err func test v err process processes if err nil fmt println err return for p range v cmd err p cmdline if err nil fmt println err continue fmt printf v n cmd cmd fmt printf time sleep time duration time millisecond fmt printf n func main var i int for fmt printf d n i test fmt printf n i time sleep time duration time second expected behavior don t crash environment microsoft windows additional context
1
8,206
11,402,552,746
IssuesEvent
2020-01-31 03:42:06
scala/community-build
https://api.github.com/repos/scala/community-build
opened
`./narrow` shouldn't modify version-controlled files
process
think about having `narrow` not modify `projs.conf` directly; it's annoying to have git always thinking it's a change I should check in perhaps `narrow` could write out a `projs-narrowed.conf` file that would be `.gitignore`d and would be used when present (with `projs.conf` used as a fallback)
1.0
`./narrow` shouldn't modify version-controlled files - think about having `narrow` not modify `projs.conf` directly; it's annoying to have git always thinking it's a change I should check in perhaps `narrow` could write out a `projs-narrowed.conf` file that would be `.gitignore`d and would be used when present (with `projs.conf` used as a fallback)
process
narrow shouldn t modify version controlled files think about having narrow not modify projs conf directly it s annoying to have git always thinking it s a change i should check in perhaps narrow could write out a projs narrowed conf file that would be gitignore d and would be used when present with projs conf used as a fallback
1
317,144
23,665,838,688
IssuesEvent
2022-08-26 20:48:50
hackforla/peopledepot
https://api.github.com/repos/hackforla/peopledepot
opened
PeopleDepot: PM Agenda
documentation help wanted question
### Overview We mainly need to onboard Eric Vennemeyer to PeopleDepot ### Action Items - [ ] Onboard Eric Vennemeyer #27 - [ ] Discuss using the next month's time to make PeopleDepot locally usable for VRMS development - [ ] Make milestones for PeopleDepot ### Resources [Roadmap for the near future](https://github.com/hackforla/peopledepot/wiki/Quick-roadmap)
1.0
PeopleDepot: PM Agenda - ### Overview We mainly need to onboard Eric Vennemeyer to PeopleDepot ### Action Items - [ ] Onboard Eric Vennemeyer #27 - [ ] Discuss using the next month's time to make PeopleDepot locally usable for VRMS development - [ ] Make milestones for PeopleDepot ### Resources [Roadmap for the near future](https://github.com/hackforla/peopledepot/wiki/Quick-roadmap)
non_process
peopledepot pm agenda overview we mainly need to onboard eric vennemeyer to peopledepot action items onboard eric vennemeyer discuss using the next month s time to make peopledepot locally usable for vrms development make milestones for peopledepot resources
0
267,970
23,337,218,691
IssuesEvent
2022-08-09 11:03:12
enonic/app-contentstudio-plus
https://api.github.com/repos/enonic/app-contentstudio-plus
closed
Version Widget - version items are not loaded after opening the widget
bug Test is Failing
1. Do login then open `Archive` browse panel. 2. Select an exisiting folder then open versions widget. **BUG**: version items are not loaded ![err_expand_version398394](https://user-images.githubusercontent.com/3728712/182599959-5ca7968f-89c8-4292-894f-f2e9b0124375.png) Items appear after refreshing the page in browser,
1.0
Version Widget - version items are not loaded after opening the widget - 1. Do login then open `Archive` browse panel. 2. Select an exisiting folder then open versions widget. **BUG**: version items are not loaded ![err_expand_version398394](https://user-images.githubusercontent.com/3728712/182599959-5ca7968f-89c8-4292-894f-f2e9b0124375.png) Items appear after refreshing the page in browser,
non_process
version widget version items are not loaded after opening the widget do login then open archive browse panel select an exisiting folder then open versions widget bug version items are not loaded items appear after refreshing the page in browser
0
6,733
9,854,687,262
IssuesEvent
2019-06-19 17:28:32
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
closed
Bigtable: 'test_create_instance_with_two_clusters' flakes modifying profile.
api: bigtable flaky testing type: process
Similar to #5928, but the failure occurs while re-modifying the instance's app profile. From [this Kokoro failure](https://source.cloud.google.com/results/invocations/b5f7a8bc-d02c-45f8-b23d-31b94e4d0493/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigtable/log): ```python ___________ TestInstanceAdminAPI.test_create_instance_w_two_clusters ___________ target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f7c299b70d0> sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> deadline = 10, on_error = None def retry_target(target, predicate, sleep_generator, deadline, on_error=None): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the higher-level retry helper :class:`Retry`. Args: target(Callable): The function to call and retry. This must be a nullary function - apply arguments with `functools.partial`. predicate (Callable[Exception]): A callable used to determine if an exception raised by the target should be considered retryable. It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. deadline (float): How long to keep retrying the target. on_error (Callable): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. Returns: Any: the return value of the target function. Raises: google.api_core.RetryError: If the deadline is exceeded while retrying. ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. 
""" if deadline is not None: deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta( seconds=deadline ) else: deadline_datetime = None last_exc = None for sleep in sleep_generator: try: > return target() ../api_core/google/api_core/retry.py:179: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.api_core.operation.Operation object at 0x7f7c280cee80> def _done_or_raise(self): """Check if the future is done and raise if it's not.""" if not self.done(): > raise _OperationNotComplete() E google.api_core.future.polling._OperationNotComplete ../api_core/google/api_core/future/polling.py:81: _OperationNotComplete The above exception was the direct cause of the following exception: self = <google.api_core.operation.Operation object at 0x7f7c280cee80> timeout = 10 def _blocking_poll(self, timeout=None): """Poll and wait for the Future to be resolved. Args: timeout (int): How long (in seconds) to wait for the operation to complete. If None, wait indefinitely. 
""" if self._result_set: return retry_ = self._retry.with_deadline(timeout) try: > retry_(self._done_or_raise)() ../api_core/google/api_core/future/polling.py:101: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (), kwargs = {} target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> @general_helpers.wraps(func) def retry_wrapped_func(*args, **kwargs): """A wrapper that calls target function with retry.""" target = functools.partial(func, *args, **kwargs) sleep_generator = exponential_sleep_generator( self._initial, self._maximum, multiplier=self._multiplier ) return retry_target( target, self._predicate, sleep_generator, self._deadline, > on_error=on_error, ) ../api_core/google/api_core/retry.py:270: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f7c299b70d0> sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> deadline = 10, on_error = None def retry_target(target, predicate, sleep_generator, deadline, on_error=None): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the higher-level retry helper :class:`Retry`. Args: target(Callable): The function to call and retry. This must be a nullary function - apply arguments with `functools.partial`. predicate (Callable[Exception]): A callable used to determine if an exception raised by the target should be considered retryable. It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. 
deadline (float): How long to keep retrying the target. on_error (Callable): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. Returns: Any: the return value of the target function. Raises: google.api_core.RetryError: If the deadline is exceeded while retrying. ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. """ if deadline is not None: deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta( seconds=deadline ) else: deadline_datetime = None last_exc = None for sleep in sleep_generator: try: return target() # pylint: disable=broad-except # This function explicitly must deal with broad exceptions. except Exception as exc: if not predicate(exc): raise last_exc = exc if on_error is not None: on_error(exc) now = datetime_helpers.utcnow() if deadline_datetime is not None and deadline_datetime < now: six.raise_from( exceptions.RetryError( "Deadline of {:.1f}s exceeded while calling {}".format( deadline, target ), last_exc, ), > last_exc, ) ../api_core/google/api_core/retry.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None, from_value = _OperationNotComplete() > ??? 
E google.api_core.exceptions.RetryError: Deadline of 10.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>), last exception: <string>:3: RetryError During handling of the above exception, another exception occurred: self = <tests.system.TestInstanceAdminAPI testMethod=test_create_instance_w_two_clusters> def test_create_instance_w_two_clusters(self): from google.cloud.bigtable import enums from google.cloud.bigtable.table import ClusterState _PRODUCTION = enums.Instance.Type.PRODUCTION ALT_INSTANCE_ID = "dif" + unique_resource_id("-") instance = Config.CLIENT.instance( ALT_INSTANCE_ID, instance_type=_PRODUCTION, labels=LABELS ) ALT_CLUSTER_ID_1 = ALT_INSTANCE_ID + "-c1" ALT_CLUSTER_ID_2 = ALT_INSTANCE_ID + "-c2" LOCATION_ID_2 = "us-central1-f" STORAGE_TYPE = enums.StorageType.HDD cluster_1 = instance.cluster( ALT_CLUSTER_ID_1, location_id=LOCATION_ID, serve_nodes=SERVE_NODES, default_storage_type=STORAGE_TYPE, ) cluster_2 = instance.cluster( ALT_CLUSTER_ID_2, location_id=LOCATION_ID_2, serve_nodes=SERVE_NODES, default_storage_type=STORAGE_TYPE, ) operation = instance.create(clusters=[cluster_1, cluster_2]) # Make sure this instance gets deleted after the test case. self.instances_to_delete.append(instance) # We want to make sure the operation completes. operation.result(timeout=10) # Create a new instance instance and make sure it is the same. 
instance_alt = Config.CLIENT.instance(ALT_INSTANCE_ID) instance_alt.reload() self.assertEqual(instance, instance_alt) self.assertEqual(instance.display_name, instance_alt.display_name) self.assertEqual(instance.type_, instance_alt.type_) clusters, failed_locations = instance_alt.list_clusters() self.assertEqual(failed_locations, []) clusters.sort(key=lambda x: x.name) alt_cluster_1, alt_cluster_2 = clusters self.assertEqual(cluster_1.location_id, alt_cluster_1.location_id) self.assertEqual(alt_cluster_1.state, enums.Cluster.State.READY) self.assertEqual(cluster_1.serve_nodes, alt_cluster_1.serve_nodes) self.assertEqual( cluster_1.default_storage_type, alt_cluster_1.default_storage_type ) self.assertEqual(cluster_2.location_id, alt_cluster_2.location_id) self.assertEqual(alt_cluster_2.state, enums.Cluster.State.READY) self.assertEqual(cluster_2.serve_nodes, alt_cluster_2.serve_nodes) self.assertEqual( cluster_2.default_storage_type, alt_cluster_2.default_storage_type ) # Test list clusters in project via 'client.list_clusters' clusters, failed_locations = Config.CLIENT.list_clusters() self.assertFalse(failed_locations) found = set([cluster.name for cluster in clusters]) self.assertTrue( {alt_cluster_1.name, alt_cluster_2.name, Config.CLUSTER.name}.issubset( found ) ) temp_table_id = "test-get-cluster-states" temp_table = instance.table(temp_table_id) temp_table.create() result = temp_table.get_cluster_states() ReplicationState = enums.Table.ReplicationState expected_results = [ ClusterState(ReplicationState.STATE_NOT_KNOWN), ClusterState(ReplicationState.INITIALIZING), ClusterState(ReplicationState.PLANNED_MAINTENANCE), ClusterState(ReplicationState.UNPLANNED_MAINTENANCE), ClusterState(ReplicationState.READY), ] cluster_id_list = result.keys() self.assertEqual(len(cluster_id_list), 2) self.assertIn(ALT_CLUSTER_ID_1, cluster_id_list) self.assertIn(ALT_CLUSTER_ID_2, cluster_id_list) for clusterstate in result.values(): self.assertIn(clusterstate, expected_results) # 
Test create app profile with multi_cluster_routing policy app_profiles_to_delete = [] description = "routing policy-multy" app_profile_id_1 = "app_profile_id_1" routing = enums.RoutingPolicyType.ANY self._test_create_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, ignore_warnings=True, ) app_profiles_to_delete.append(app_profile_id_1) # Test list app profiles self._test_list_app_profiles_helper(instance, [app_profile_id_1]) # Test modify app profile app_profile_id_1 # routing policy to single cluster policy, # cluster -> ALT_CLUSTER_ID_1, # allow_transactional_writes -> disallowed # modify description description = "to routing policy-single" routing = enums.RoutingPolicyType.SINGLE self._test_modify_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_1, allow_transactional_writes=False, ) # Test modify app profile app_profile_id_1 # cluster -> ALT_CLUSTER_ID_2, # allow_transactional_writes -> allowed self._test_modify_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, allow_transactional_writes=True, ignore_warnings=True, ) # Test create app profile with single cluster routing policy description = "routing policy-single" app_profile_id_2 = "app_profile_id_2" routing = enums.RoutingPolicyType.SINGLE self._test_create_app_profile_helper( app_profile_id_2, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, allow_transactional_writes=False, ) app_profiles_to_delete.append(app_profile_id_2) # Test list app profiles self._test_list_app_profiles_helper( instance, [app_profile_id_1, app_profile_id_2] ) # Test modify app profile app_profile_id_2 to # allow transactional writes # Note: no need to set ``ignore_warnings`` to True # since we are not restrictings anything with this modification. 
self._test_modify_app_profile_helper( app_profile_id_2, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, > allow_transactional_writes=True, ) tests/system.py:409: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/system.py:613: in _test_modify_app_profile_helper operation.result(timeout=10) ../api_core/google/api_core/future/polling.py:122: in result self._blocking_poll(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.api_core.operation.Operation object at 0x7f7c280cee80> timeout = 10 def _blocking_poll(self, timeout=None): """Poll and wait for the Future to be resolved. Args: timeout (int): How long (in seconds) to wait for the operation to complete. If None, wait indefinitely. """ if self._result_set: return retry_ = self._retry.with_deadline(timeout) try: retry_(self._done_or_raise)() except exceptions.RetryError: raise concurrent.futures.TimeoutError( > "Operation did not complete within the designated " "timeout." ) E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout. ../api_core/google/api_core/future/polling.py:104: TimeoutError ```
1.0
Bigtable: 'test_create_instance_with_two_clusters' flakes modifying profile. - Similar to #5928, but the failure occurs while re-modifying the instance's app profile. From [this Kokoro failure](https://source.cloud.google.com/results/invocations/b5f7a8bc-d02c-45f8-b23d-31b94e4d0493/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigtable/log): ```python ___________ TestInstanceAdminAPI.test_create_instance_w_two_clusters ___________ target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f7c299b70d0> sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> deadline = 10, on_error = None def retry_target(target, predicate, sleep_generator, deadline, on_error=None): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the higher-level retry helper :class:`Retry`. Args: target(Callable): The function to call and retry. This must be a nullary function - apply arguments with `functools.partial`. predicate (Callable[Exception]): A callable used to determine if an exception raised by the target should be considered retryable. It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. deadline (float): How long to keep retrying the target. on_error (Callable): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. Returns: Any: the return value of the target function. Raises: google.api_core.RetryError: If the deadline is exceeded while retrying. ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. 
""" if deadline is not None: deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta( seconds=deadline ) else: deadline_datetime = None last_exc = None for sleep in sleep_generator: try: > return target() ../api_core/google/api_core/retry.py:179: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.api_core.operation.Operation object at 0x7f7c280cee80> def _done_or_raise(self): """Check if the future is done and raise if it's not.""" if not self.done(): > raise _OperationNotComplete() E google.api_core.future.polling._OperationNotComplete ../api_core/google/api_core/future/polling.py:81: _OperationNotComplete The above exception was the direct cause of the following exception: self = <google.api_core.operation.Operation object at 0x7f7c280cee80> timeout = 10 def _blocking_poll(self, timeout=None): """Poll and wait for the Future to be resolved. Args: timeout (int): How long (in seconds) to wait for the operation to complete. If None, wait indefinitely. 
""" if self._result_set: return retry_ = self._retry.with_deadline(timeout) try: > retry_(self._done_or_raise)() ../api_core/google/api_core/future/polling.py:101: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (), kwargs = {} target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> @general_helpers.wraps(func) def retry_wrapped_func(*args, **kwargs): """A wrapper that calls target function with retry.""" target = functools.partial(func, *args, **kwargs) sleep_generator = exponential_sleep_generator( self._initial, self._maximum, multiplier=self._multiplier ) return retry_target( target, self._predicate, sleep_generator, self._deadline, > on_error=on_error, ) ../api_core/google/api_core/retry.py:270: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ target = functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>) predicate = <function if_exception_type.<locals>.if_exception_type_predicate at 0x7f7c299b70d0> sleep_generator = <generator object exponential_sleep_generator at 0x7f7c297d3a98> deadline = 10, on_error = None def retry_target(target, predicate, sleep_generator, deadline, on_error=None): """Call a function and retry if it fails. This is the lowest-level retry helper. Generally, you'll use the higher-level retry helper :class:`Retry`. Args: target(Callable): The function to call and retry. This must be a nullary function - apply arguments with `functools.partial`. predicate (Callable[Exception]): A callable used to determine if an exception raised by the target should be considered retryable. It should return True to retry or False otherwise. sleep_generator (Iterable[float]): An infinite iterator that determines how long to sleep between retries. 
deadline (float): How long to keep retrying the target. on_error (Callable): A function to call while processing a retryable exception. Any error raised by this function will *not* be caught. Returns: Any: the return value of the target function. Raises: google.api_core.RetryError: If the deadline is exceeded while retrying. ValueError: If the sleep generator stops yielding values. Exception: If the target raises a method that isn't retryable. """ if deadline is not None: deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta( seconds=deadline ) else: deadline_datetime = None last_exc = None for sleep in sleep_generator: try: return target() # pylint: disable=broad-except # This function explicitly must deal with broad exceptions. except Exception as exc: if not predicate(exc): raise last_exc = exc if on_error is not None: on_error(exc) now = datetime_helpers.utcnow() if deadline_datetime is not None and deadline_datetime < now: six.raise_from( exceptions.RetryError( "Deadline of {:.1f}s exceeded while calling {}".format( deadline, target ), last_exc, ), > last_exc, ) ../api_core/google/api_core/retry.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ value = None, from_value = _OperationNotComplete() > ??? 
E google.api_core.exceptions.RetryError: Deadline of 10.0s exceeded while calling functools.partial(<bound method PollingFuture._done_or_raise of <google.api_core.operation.Operation object at 0x7f7c280cee80>>), last exception: <string>:3: RetryError During handling of the above exception, another exception occurred: self = <tests.system.TestInstanceAdminAPI testMethod=test_create_instance_w_two_clusters> def test_create_instance_w_two_clusters(self): from google.cloud.bigtable import enums from google.cloud.bigtable.table import ClusterState _PRODUCTION = enums.Instance.Type.PRODUCTION ALT_INSTANCE_ID = "dif" + unique_resource_id("-") instance = Config.CLIENT.instance( ALT_INSTANCE_ID, instance_type=_PRODUCTION, labels=LABELS ) ALT_CLUSTER_ID_1 = ALT_INSTANCE_ID + "-c1" ALT_CLUSTER_ID_2 = ALT_INSTANCE_ID + "-c2" LOCATION_ID_2 = "us-central1-f" STORAGE_TYPE = enums.StorageType.HDD cluster_1 = instance.cluster( ALT_CLUSTER_ID_1, location_id=LOCATION_ID, serve_nodes=SERVE_NODES, default_storage_type=STORAGE_TYPE, ) cluster_2 = instance.cluster( ALT_CLUSTER_ID_2, location_id=LOCATION_ID_2, serve_nodes=SERVE_NODES, default_storage_type=STORAGE_TYPE, ) operation = instance.create(clusters=[cluster_1, cluster_2]) # Make sure this instance gets deleted after the test case. self.instances_to_delete.append(instance) # We want to make sure the operation completes. operation.result(timeout=10) # Create a new instance instance and make sure it is the same. 
instance_alt = Config.CLIENT.instance(ALT_INSTANCE_ID) instance_alt.reload() self.assertEqual(instance, instance_alt) self.assertEqual(instance.display_name, instance_alt.display_name) self.assertEqual(instance.type_, instance_alt.type_) clusters, failed_locations = instance_alt.list_clusters() self.assertEqual(failed_locations, []) clusters.sort(key=lambda x: x.name) alt_cluster_1, alt_cluster_2 = clusters self.assertEqual(cluster_1.location_id, alt_cluster_1.location_id) self.assertEqual(alt_cluster_1.state, enums.Cluster.State.READY) self.assertEqual(cluster_1.serve_nodes, alt_cluster_1.serve_nodes) self.assertEqual( cluster_1.default_storage_type, alt_cluster_1.default_storage_type ) self.assertEqual(cluster_2.location_id, alt_cluster_2.location_id) self.assertEqual(alt_cluster_2.state, enums.Cluster.State.READY) self.assertEqual(cluster_2.serve_nodes, alt_cluster_2.serve_nodes) self.assertEqual( cluster_2.default_storage_type, alt_cluster_2.default_storage_type ) # Test list clusters in project via 'client.list_clusters' clusters, failed_locations = Config.CLIENT.list_clusters() self.assertFalse(failed_locations) found = set([cluster.name for cluster in clusters]) self.assertTrue( {alt_cluster_1.name, alt_cluster_2.name, Config.CLUSTER.name}.issubset( found ) ) temp_table_id = "test-get-cluster-states" temp_table = instance.table(temp_table_id) temp_table.create() result = temp_table.get_cluster_states() ReplicationState = enums.Table.ReplicationState expected_results = [ ClusterState(ReplicationState.STATE_NOT_KNOWN), ClusterState(ReplicationState.INITIALIZING), ClusterState(ReplicationState.PLANNED_MAINTENANCE), ClusterState(ReplicationState.UNPLANNED_MAINTENANCE), ClusterState(ReplicationState.READY), ] cluster_id_list = result.keys() self.assertEqual(len(cluster_id_list), 2) self.assertIn(ALT_CLUSTER_ID_1, cluster_id_list) self.assertIn(ALT_CLUSTER_ID_2, cluster_id_list) for clusterstate in result.values(): self.assertIn(clusterstate, expected_results) # 
Test create app profile with multi_cluster_routing policy app_profiles_to_delete = [] description = "routing policy-multy" app_profile_id_1 = "app_profile_id_1" routing = enums.RoutingPolicyType.ANY self._test_create_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, ignore_warnings=True, ) app_profiles_to_delete.append(app_profile_id_1) # Test list app profiles self._test_list_app_profiles_helper(instance, [app_profile_id_1]) # Test modify app profile app_profile_id_1 # routing policy to single cluster policy, # cluster -> ALT_CLUSTER_ID_1, # allow_transactional_writes -> disallowed # modify description description = "to routing policy-single" routing = enums.RoutingPolicyType.SINGLE self._test_modify_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_1, allow_transactional_writes=False, ) # Test modify app profile app_profile_id_1 # cluster -> ALT_CLUSTER_ID_2, # allow_transactional_writes -> allowed self._test_modify_app_profile_helper( app_profile_id_1, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, allow_transactional_writes=True, ignore_warnings=True, ) # Test create app profile with single cluster routing policy description = "routing policy-single" app_profile_id_2 = "app_profile_id_2" routing = enums.RoutingPolicyType.SINGLE self._test_create_app_profile_helper( app_profile_id_2, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, allow_transactional_writes=False, ) app_profiles_to_delete.append(app_profile_id_2) # Test list app profiles self._test_list_app_profiles_helper( instance, [app_profile_id_1, app_profile_id_2] ) # Test modify app profile app_profile_id_2 to # allow transactional writes # Note: no need to set ``ignore_warnings`` to True # since we are not restrictings anything with this modification. 
self._test_modify_app_profile_helper( app_profile_id_2, instance, routing_policy_type=routing, description=description, cluster_id=ALT_CLUSTER_ID_2, > allow_transactional_writes=True, ) tests/system.py:409: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/system.py:613: in _test_modify_app_profile_helper operation.result(timeout=10) ../api_core/google/api_core/future/polling.py:122: in result self._blocking_poll(timeout=timeout) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <google.api_core.operation.Operation object at 0x7f7c280cee80> timeout = 10 def _blocking_poll(self, timeout=None): """Poll and wait for the Future to be resolved. Args: timeout (int): How long (in seconds) to wait for the operation to complete. If None, wait indefinitely. """ if self._result_set: return retry_ = self._retry.with_deadline(timeout) try: retry_(self._done_or_raise)() except exceptions.RetryError: raise concurrent.futures.TimeoutError( > "Operation did not complete within the designated " "timeout." ) E concurrent.futures._base.TimeoutError: Operation did not complete within the designated timeout. ../api_core/google/api_core/future/polling.py:104: TimeoutError ```
process
bigtable test create instance with two clusters flakes modifying profile similar to but the failure occurs while re modifying the instance s app profile from python testinstanceadminapi test create instance w two clusters target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target api core google api core retry py self def done or raise self check if the future is done and raise if it s not if not self done raise operationnotcomplete e google api core future polling operationnotcomplete api core google api core future polling py operationnotcomplete the above exception was the direct cause of the following exception self timeout def blocking poll self timeout none poll and wait for the future to be resolved args timeout int how 
long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try retry self done or raise api core google api core future polling py args kwargs target functools partial sleep generator general helpers wraps func def retry wrapped func args kwargs a wrapper that calls target function with retry target functools partial func args kwargs sleep generator exponential sleep generator self initial self maximum multiplier self multiplier return retry target target self predicate sleep generator self deadline on error on error api core google api core retry py target functools partial predicate if exception type predicate at sleep generator deadline on error none def retry target target predicate sleep generator deadline on error none call a function and retry if it fails this is the lowest level retry helper generally you ll use the higher level retry helper class retry args target callable the function to call and retry this must be a nullary function apply arguments with functools partial predicate callable a callable used to determine if an exception raised by the target should be considered retryable it should return true to retry or false otherwise sleep generator iterable an infinite iterator that determines how long to sleep between retries deadline float how long to keep retrying the target on error callable a function to call while processing a retryable exception any error raised by this function will not be caught returns any the return value of the target function raises google api core retryerror if the deadline is exceeded while retrying valueerror if the sleep generator stops yielding values exception if the target raises a method that isn t retryable if deadline is not none deadline datetime datetime helpers utcnow datetime timedelta seconds deadline else deadline datetime none last exc none for sleep in sleep generator try return target pylint disable broad except this 
function explicitly must deal with broad exceptions except exception as exc if not predicate exc raise last exc exc if on error is not none on error exc now datetime helpers utcnow if deadline datetime is not none and deadline datetime now six raise from exceptions retryerror deadline of s exceeded while calling format deadline target last exc last exc api core google api core retry py value none from value operationnotcomplete e google api core exceptions retryerror deadline of exceeded while calling functools partial last exception retryerror during handling of the above exception another exception occurred self def test create instance w two clusters self from google cloud bigtable import enums from google cloud bigtable table import clusterstate production enums instance type production alt instance id dif unique resource id instance config client instance alt instance id instance type production labels labels alt cluster id alt instance id alt cluster id alt instance id location id us f storage type enums storagetype hdd cluster instance cluster alt cluster id location id location id serve nodes serve nodes default storage type storage type cluster instance cluster alt cluster id location id location id serve nodes serve nodes default storage type storage type operation instance create clusters make sure this instance gets deleted after the test case self instances to delete append instance we want to make sure the operation completes operation result timeout create a new instance instance and make sure it is the same instance alt config client instance alt instance id instance alt reload self assertequal instance instance alt self assertequal instance display name instance alt display name self assertequal instance type instance alt type clusters failed locations instance alt list clusters self assertequal failed locations clusters sort key lambda x x name alt cluster alt cluster clusters self assertequal cluster location id alt cluster location id self 
assertequal alt cluster state enums cluster state ready self assertequal cluster serve nodes alt cluster serve nodes self assertequal cluster default storage type alt cluster default storage type self assertequal cluster location id alt cluster location id self assertequal alt cluster state enums cluster state ready self assertequal cluster serve nodes alt cluster serve nodes self assertequal cluster default storage type alt cluster default storage type test list clusters in project via client list clusters clusters failed locations config client list clusters self assertfalse failed locations found set self asserttrue alt cluster name alt cluster name config cluster name issubset found temp table id test get cluster states temp table instance table temp table id temp table create result temp table get cluster states replicationstate enums table replicationstate expected results clusterstate replicationstate state not known clusterstate replicationstate initializing clusterstate replicationstate planned maintenance clusterstate replicationstate unplanned maintenance clusterstate replicationstate ready cluster id list result keys self assertequal len cluster id list self assertin alt cluster id cluster id list self assertin alt cluster id cluster id list for clusterstate in result values self assertin clusterstate expected results test create app profile with multi cluster routing policy app profiles to delete description routing policy multy app profile id app profile id routing enums routingpolicytype any self test create app profile helper app profile id instance routing policy type routing description description ignore warnings true app profiles to delete append app profile id test list app profiles self test list app profiles helper instance test modify app profile app profile id routing policy to single cluster policy cluster alt cluster id allow transactional writes disallowed modify description description to routing policy single routing enums 
routingpolicytype single self test modify app profile helper app profile id instance routing policy type routing description description cluster id alt cluster id allow transactional writes false test modify app profile app profile id cluster alt cluster id allow transactional writes allowed self test modify app profile helper app profile id instance routing policy type routing description description cluster id alt cluster id allow transactional writes true ignore warnings true test create app profile with single cluster routing policy description routing policy single app profile id app profile id routing enums routingpolicytype single self test create app profile helper app profile id instance routing policy type routing description description cluster id alt cluster id allow transactional writes false app profiles to delete append app profile id test list app profiles self test list app profiles helper instance test modify app profile app profile id to allow transactional writes note no need to set ignore warnings to true since we are not restrictings anything with this modification self test modify app profile helper app profile id instance routing policy type routing description description cluster id alt cluster id allow transactional writes true tests system py tests system py in test modify app profile helper operation result timeout api core google api core future polling py in result self blocking poll timeout timeout self timeout def blocking poll self timeout none poll and wait for the future to be resolved args timeout int how long in seconds to wait for the operation to complete if none wait indefinitely if self result set return retry self retry with deadline timeout try retry self done or raise except exceptions retryerror raise concurrent futures timeouterror operation did not complete within the designated timeout e concurrent futures base timeouterror operation did not complete within the designated timeout api core google api core future 
polling py timeouterror
1
11,165
13,957,694,229
IssuesEvent
2020-10-24 08:11:11
alexanderkotsev/geoportal
https://api.github.com/repos/alexanderkotsev/geoportal
opened
UK: Missing resources in the Geoportal
Geoportal Harvesting process UK - United Kingdom
Collected from the Geoportal Workshop online survey answers: This metadata instance: 00838807-361d-48f3-af93-0494701ffe50 links directly to https://s3-eu-west-1.amazonaws.com/data.defra.gov.uk/AnimalWelfare/cattle-born-in-wales-slaughtered-england-andscotland_2006.csv; the proxy browser recognises this, but the Country Report lists it as one of "No Download Service referencing the dataset could be identified by the INSPIRE Geoportal" Similarly, 00e9217f-a1c8-3f12-99d1-988c345f9c4a links to http://aws2.caris.com/sfs/services/ows/wcs/UKHO_WCS?request=DescribeCoverage&version=1.1.0&service=WCS&identifiers=dac.02000780 "No Network Service could be contacted using the Resource Locator found in the service metadata": Actually, it seems I'm misunderstanding the Country File. I have not made time to look through the hundreds of problems reported at each link.
1.0
UK: Missing resources in the Geoportal - Collected from the Geoportal Workshop online survey answers: This metadata instance: 00838807-361d-48f3-af93-0494701ffe50 links directly to https://s3-eu-west-1.amazonaws.com/data.defra.gov.uk/AnimalWelfare/cattle-born-in-wales-slaughtered-england-andscotland_2006.csv; the proxy browser recognises this, but the Country Report lists it as one of "No Download Service referencing the dataset could be identified by the INSPIRE Geoportal" Similarly, 00e9217f-a1c8-3f12-99d1-988c345f9c4a links to http://aws2.caris.com/sfs/services/ows/wcs/UKHO_WCS?request=DescribeCoverage&version=1.1.0&service=WCS&identifiers=dac.02000780 "No Network Service could be contacted using the Resource Locator found in the service metadata": Actually, it seems I'm misunderstanding the Country File. I have not made time to look through the hundreds of problems reported at each link.
process
uk missing resources in the geoportal collected from the geoportal workshop online survey answers this metadata instance links directly to the proxy browser recognises this but the country report lists it as one of quot no download service referencing the dataset could be identified by the inspire geoportal quot similarly links to ukho wcs request describecoverage amp version amp service wcs amp identifiers dac quot no network service could be contacted using the resource locator found in the service metadata quot actually it seems i m misunderstanding the country file i have not made time to look through the hundreds of problems reported at each link
1
16,273
9,335,649,354
IssuesEvent
2019-03-28 19:06:59
counterfactual/monorepo
https://api.github.com/repos/counterfactual/monorepo
opened
[specs] Investigate how to implement fully optimistic closeouts
⚡️ Performance 💎 Ethereum 🔐 Protocol Related
It shouldn't be too difficult to implement full optimistic closeouts. For example, A is unresponsive to B and so B submits a hash of the proposed state transition that would occur as a result of a particular app's end state and then claims that resolution after the timeout passes. This is a spike. The outcome should be an actionable implementation strategy as a new github issue.
True
[specs] Investigate how to implement fully optimistic closeouts - It shouldn't be too difficult to implement full optimistic closeouts. For example, A is unresponsive to B and so B submits a hash of the proposed state transition that would occur as a result of a particular app's end state and then claims that resolution after the timeout passes. This is a spike. The outcome should be an actionable implementation strategy as a new github issue.
non_process
investigate how to implement fully optimistic closeouts it shouldn t be too difficult to implement full optimistic closeouts for example a is unresponsive to b and so b submits a hash of the proposed state transition that would occur as a result of a particular app s end state and then claims that resolution after the timeout passes this is a spike the outcome should be a actionable implementation strategy as a new github issue
0
36,205
8,059,914,004
IssuesEvent
2018-08-03 00:37:02
fdorg/flashdevelop
https://api.github.com/repos/fdorg/flashdevelop
closed
[Haxe][CodeComplete][InferVariableType] Wrong inference the type of the variable
bug coderefactor haxe
```haxe class Main { public static function main(?v$(EntryPoint) = "") { } } ``` actual result after execution `Generate private variable`: ```haxe class Main { static var v:Null<Dynamic>; public static function main(?v = "") { Main.v = v; } } ``` expected result ```haxe class Main { static var v:Null<String>; public static function main(?v = "") { Main.v = v; } } ```
1.0
[Haxe][CodeComplete][InferVariableType] Wrong inference the type of the variable - ```haxe class Main { public static function main(?v$(EntryPoint) = "") { } } ``` actual result after execution `Generate private variable`: ```haxe class Main { static var v:Null<Dynamic>; public static function main(?v = "") { Main.v = v; } } ``` expected result ```haxe class Main { static var v:Null<String>; public static function main(?v = "") { Main.v = v; } } ```
non_process
wrong inference the type of the variable haxe class main public static function main v entrypoint actual result after execution generate private variable haxe class main static var v null public static function main v main v v expected result haxe class main static var v null public static function main v main v v
0
1,980
4,806,274,575
IssuesEvent
2016-11-02 18:06:23
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
October release
Release blocker type: process
0.3.2 is finally out so creating that bug to track progress next release. We should probably wait a bit before cutting a new release. Ideally October release will be 0.4. Assigning to Kristina, this month release manager :)
1.0
October release - 0.3.2 is finally out so creating that bug to track progress next release. We should probably wait a bit before cutting a new release. Ideally October release will be 0.4. Assigning to Kristina, this month release manager :)
process
october release is finally out so creating that bug to track progress next release we should probably wait a bit before cutting a new release ideally october release will be assigning to kristina this month release manager
1
156,968
5,995,256,144
IssuesEvent
2017-06-03 01:37:53
universAAL/middleware
https://api.github.com/repos/universAAL/middleware
closed
addTypeFilter method of ServiceRequests does not work
bug imported priority 2
_Originally Opened: @Alfiva (2012-05-21 17:51:20_) _Originally Closed: 2012-09-17 19:41:50_ It appears that when setting a type filter to a ServiceRequest the restriction is not properly set. Practical example (in which I found it): I have a request for calling a [subprofiles]=getSubProfiles(user) service of the profiling server which works ok. Then I add the following filter to the request: req.addTypeFilter(new String[]{ProfilingService.PROP_CONTROLS,Profilable.PROP_HAS_PROFILE,Profile.PROP_HAS_SUB_PROFILE}, Whatever.MY_URI); Where "Whatever.MY_URI" can be whatever type you can imagine, a HealthProfile, a Device, or an Elephant. The call always succeeds (it shouldn't), regardless of the addTypeFilter. Also, in the serialized output in the console that represents the request, there is no mention of the filtered type at all. I tried to track the bug and the furthest I could get was the static method "addRestriction(MergedRestriction r, String[] toPath, Hashtable restrictions)" of class Service. From what I can infer, it seems that the restriction is not added to the restrictions table unless root=null, which in my scenario is not true (and I can't think of any scenario in which it is). -- From: _this issue has been automatically imported from our old issue tracker_
1.0
addTypeFilter method of ServiceRequests does not work - _Originally Opened: @Alfiva (2012-05-21 17:51:20_) _Originally Closed: 2012-09-17 19:41:50_ It appears that when setting a type filter to a ServiceRequest the restriction is not properly set. Practical example (in which I found it): I have a request for calling a [subprofiles]=getSubProfiles(user) service of the profiling server which works ok. Then I add the following filter to the request: req.addTypeFilter(new String[]{ProfilingService.PROP_CONTROLS,Profilable.PROP_HAS_PROFILE,Profile.PROP_HAS_SUB_PROFILE}, Whatever.MY_URI); Where "Whatever.MY_URI" can be whatever type you can imagine, a HealthProfile, a Device, or an Elephant. The call always succeeds (it shouldn't), regardless of the addTypeFilter. Also, in the serialized output in the console that represents the request, there is no mention of the filtered type at all. I tried to track the bug and the furthest I could get was the static method "addRestriction(MergedRestriction r, String[] toPath, Hashtable restrictions)" of class Service. From what I can infer, it seems that the restriction is not added to the restrictions table unless root=null, which in my scenario is not true (and I can't think of any scenario in which it is). -- From: _this issue has been automatically imported from our old issue tracker_
non_process
addtypefilter method of servicerequests does not work originally opened alfiva originally closed it appears that when setting a type filter to a servicerequest the restriction is not properly set practical example in which i found it i have a request for calling a getsubprofiles user service of the profiliing server which works ok then i add the following filter to the request req addtypefilter new string profilingservice prop controls profilable prop has profile profile prop has sub profile whatever my uri where quot whatever my uri quot can be whatever type you can imagine a healthprofile a device or an elephant the call always succeeds it shouldn´t regardless of the addtypefilter also in the serialized output in the console that represents the request there is no mention to the filtered type at all i tried to track the bug and the further i could get was the static method quot addrestriction mergedrestriction r string topath hashtable restrictions quot of class service from what i can infer it seems that the restriction is not added to the restrictions table unless root null which in my scenario si not true and can´t think of any scenario in which it is from this issue has been automatically imported from our old issue tracker
0
18,002
24,019,793,400
IssuesEvent
2022-09-15 06:32:07
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
opened
Split by line should be Split
Processing Bug
### What is the bug or the crash? Split by line processing alg should be able to handle all polygons as well. I am filing it as a bug, hoping it can be added to 3.28! ### Steps to reproduce the issue 1- open two polygon layers 2- try to split one with another using Split by line 3- you need to convert one to a line to do that ### Versions 3.26 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context _No response_
1.0
Split by line should be Split - ### What is the bug or the crash? Split by line processing alg should be able to handle all polygons as well. I am filing it as a bug, hoping it can be added to 3.28! ### Steps to reproduce the issue 1- open two polygon layers 2- try to split one with another using Split by line 3- you need to convert one to a line to do that ### Versions 3.26 ### Supported QGIS version - [X] I'm running a supported QGIS version according to the roadmap. ### New profile - [X] I tried with a new QGIS profile ### Additional context _No response_
process
split by line should be split what is the bug or the crash split by line processing alg should be able to handle all polygons as well i am filing it as a bug hoping it can be added to steps to reproduce the issue open two polygon layers try to split one with another using split by line you need to convert one to a line to do that versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
1
321,730
23,869,551,435
IssuesEvent
2022-09-07 13:51:01
abpframework/abp
https://api.github.com/repos/abpframework/abp
opened
Deployment notes document
documentation effort-sm
There are some settings in the ABP that can be configured in production environments, like `AbpDistributedCacheOptions.KeyPrefix` and [distributed lock prefix](https://github.com/abpframework/abp/issues/13948). It would be good to create an overall document to collect such settings together in a deployment notes document.
1.0
Deployment notes document - There are some settings in the ABP that can be configured in production environments, like `AbpDistributedCacheOptions.KeyPrefix` and [distributed lock prefix](https://github.com/abpframework/abp/issues/13948). It would be good to create an overall document to collect such settings together in a deployment notes document.
non_process
deployment notes document there are some settings in the abp those can be configured in production environments like abpdistributedcacheoptions keyprefix and it would be good to create an overall document to collect such settings together in a deployment notes document
0
22,523
31,623,303,734
IssuesEvent
2023-09-06 02:00:10
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Mon, 4 Sep 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera ### SoDaCam: Software-defined Cameras via Single-Photon Imaging - **Authors:** Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Brusschini, Edoardo Charbon, Mohit Gupta - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2309.00066 - **Pdf link:** https://arxiv.org/pdf/2309.00066 - **Abstract** Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging. 
### Dense Voxel 3D Reconstruction Using a Monocular Event Camera - **Authors:** Haodong Chen, Vera Chung, Li Tan, Xiaoming Chen - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2309.00385 - **Pdf link:** https://arxiv.org/pdf/2309.00385 - **Abstract** Event cameras are sensors inspired by biological systems that specialize in capturing changes in brightness. These emerging cameras offer many advantages over conventional frame-based cameras, including high dynamic range, high frame rates, and extremely low power consumption. Due to these advantages, event cameras have increasingly been adapted in various fields, such as frame interpolation, semantic segmentation, odometry, and SLAM. However, their application in 3D reconstruction for VR applications is underexplored. Previous methods in this field mainly focused on 3D reconstruction through depth map estimation. Methods that produce dense 3D reconstruction generally require multiple cameras, while methods that utilize a single event camera can only produce a semi-dense result. Other single-camera methods that can produce dense 3D reconstruction rely on creating a pipeline that either incorporates the aforementioned methods or other existing Structure from Motion (SfM) or Multi-view Stereo (MVS) methods. In this paper, we propose a novel approach for solving dense 3D reconstruction using only a single event camera. To the best of our knowledge, our work is the first attempt in this regard. Our preliminary results demonstrate that the proposed method can produce visually distinguishable dense 3D reconstructions directly without requiring pipelines like those used by existing methods. Additionally, we have created a synthetic dataset with $39,739$ object scans using an event camera simulator. This dataset will help accelerate other relevant research in this field. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture - **Authors:** Shaohua Pan, Qi Ma, Xinyu Yi, Weifeng Hu, Xiong Wang, Xingkang Zhou, Jijunnan Li, Feng Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2309.00310 - **Pdf link:** https://arxiv.org/pdf/2309.00310 - **Abstract** Either RGB images or inertial signals have been used for the task of motion capture (mocap), but combining them together is a new and interesting topic. We believe that the combination is complementary and able to solve the inherent difficulties of using one modality input, including occlusions, extreme lighting/texture, and out-of-view for visual mocap and global drifts for inertial mocap. To this end, we propose a method that fuses monocular images and sparse IMUs for real-time human motion capture. Our method contains a dual coordinate strategy to fully explore the IMU signals with different goals in motion capture. To be specific, besides one branch transforming the IMU signals to the camera coordinate system to combine with the image information, there is another branch to learn from the IMU signals in the body root coordinate system to better estimate body poses. Furthermore, a hidden state feedback mechanism is proposed for both two branches to compensate for their own drawbacks in extreme input cases. Thus our method can easily switch between the two kinds of signals or combine them in different cases to achieve a robust mocap. %The two divided parts can help each other for better mocap results under different conditions. 
Quantitative and qualitative results demonstrate that by delicately designing the fusion method, our technique significantly outperforms the state-of-the-art vision, IMU, and combined methods on both global orientation and local pose estimation. Our codes are available for research at https://shaohua-pan.github.io/robustcap-page/. ### Iterative Multi-granular Image Editing using Diffusion Models - **Authors:** K J Joseph, Prateksha Udhayanan, Tripti Shukla, Aishwarya Agarwal, Srikrishna Karanam, Koustava Goswami, Balaji Vasan Srinivasan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2309.00613 - **Pdf link:** https://arxiv.org/pdf/2309.00613 - **Abstract** Recent advances in text-guided image synthesis has dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as Iterative Multi-granular Editing. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: Iterative Multi-granular Image Editor. EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. 
We conduct exhaustive quantitatively and qualitatively evaluation against recent state-of-the-art approaches adapted to our task, to being out the mettle of EMILIE. We hope our work would attract attention to this newly identified, pragmatic problem setting. ## Keyword: ISP ### FACET: Fairness in Computer Vision Evaluation Benchmark - **Authors:** Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2309.00035 - **Pdf link:** https://arxiv.org/pdf/2309.00035 - **Abstract** Computer vision models have known performance disparities across attributes such as gender and skin tone. This means during tasks such as classification and detection, model performance differs for certain classes based on the demographics of the people in the image. These disparities have been shown to exist, but until now there has not been a unified approach to measure these differences for common use-cases of computer vision models. We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks - image classification, object detection and segmentation. For every image in FACET, we hired expert reviewers to manually annotate person-related attributes such as perceived skin tone and hair type, manually draw bounding boxes and label fine-grained person-related classes such as disk jockey or guitarist. In addition, we use FACET to benchmark state-of-the-art vision models and present a deeper understanding of potential performance disparities and challenges across sensitive demographic attributes. With the exhaustive annotations collected, we probe models using single demographics attributes as well as multiple attributes using an intersectional approach (e.g. 
hair color and perceived skin tone). Our results show that classification, detection, segmentation, and visual grounding models exhibit performance disparities across demographic attributes and intersections of attributes. These harms suggest that not all people represented in datasets receive fair and equitable treatment in these vision tasks. We hope current and future results using our benchmark will contribute to fairer, more robust vision models. FACET is available publicly at https://facet.metademolab.com/ ### Human trajectory prediction using LSTM with Attention mechanism - **Authors:** Amin Manafi Soltan Ahmadi, Samaneh Hoseini Semnani - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2309.00331 - **Pdf link:** https://arxiv.org/pdf/2309.00331 - **Abstract** In this paper, we propose a human trajectory prediction model that combines a Long Short-Term Memory (LSTM) network with an attention mechanism. To do that, we use attention scores to determine which parts of the input data the model should focus on when making predictions. Attention scores are calculated for each input feature, with a higher score indicating the greater significance of that feature in predicting the output. Initially, these scores are determined for the target human position, velocity, and their neighboring individual's positions and velocities. By using attention scores, our model can prioritize the most relevant information in the input data and make more accurate predictions. We extract attention scores from our attention mechanism and integrate them into the trajectory prediction module to predict human future trajectories. To achieve this, we introduce a new neural layer that processes attention scores after extracting them and concatenates them with positional information. 
We evaluate our approach on the publicly available ETH and UCY datasets and measure its performance using the final displacement error (FDE) and average displacement error (ADE) metrics. We show that our modified algorithm performs better than the Social LSTM in predicting the future trajectory of pedestrians in crowded spaces. Specifically, our model achieves an improvement of 6.2% in ADE and 6.3% in FDE compared to the Social LSTM results in the literature. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### SoDaCam: Software-defined Cameras via Single-Photon Imaging - **Authors:** Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Brusschini, Edoardo Charbon, Mohit Gupta - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2309.00066 - **Pdf link:** https://arxiv.org/pdf/2309.00066 - **Abstract** Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. 
As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging. ## Keyword: RAW ### FACET: Fairness in Computer Vision Evaluation Benchmark - **Authors:** Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2309.00035 - **Pdf link:** https://arxiv.org/pdf/2309.00035 - **Abstract** Computer vision models have known performance disparities across attributes such as gender and skin tone. This means during tasks such as classification and detection, model performance differs for certain classes based on the demographics of the people in the image. These disparities have been shown to exist, but until now there has not been a unified approach to measure these differences for common use-cases of computer vision models. We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks - image classification, object detection and segmentation. For every image in FACET, we hired expert reviewers to manually annotate person-related attributes such as perceived skin tone and hair type, manually draw bounding boxes and label fine-grained person-related classes such as disk jockey or guitarist. In addition, we use FACET to benchmark state-of-the-art vision models and present a deeper understanding of potential performance disparities and challenges across sensitive demographic attributes. With the exhaustive annotations collected, we probe models using single demographics attributes as well as multiple attributes using an intersectional approach (e.g. hair color and perceived skin tone). 
Our results show that classification, detection, segmentation, and visual grounding models exhibit performance disparities across demographic attributes and intersections of attributes. These harms suggest that not all people represented in datasets receive fair and equitable treatment in these vision tasks. We hope current and future results using our benchmark will contribute to fairer, more robust vision models. FACET is available publicly at https://facet.metademolab.com/ ### Bellybutton: Accessible and Customizable Deep-Learning Image Segmentation - **Authors:** Sam Dillavou, Jesse M. Hanlan, Anthony T. Chieco, Hongyi Xiao, Sage Fulco, Kevin T. Turner, Douglas J. Durian - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Soft Condensed Matter (cond-mat.soft) - **Arxiv link:** https://arxiv.org/abs/2309.00058 - **Pdf link:** https://arxiv.org/pdf/2309.00058 - **Abstract** The conversion of raw images into quantifiable data can be a major hurdle in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required), image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentation of example images, but, as we show, just one or even a portion of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the regions(s) of interest. Instructions for easy download and use, with further details and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg. 
### Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation - **Authors:** Fei Gao, Yifan Zhu, Chang Jiang, Nannan Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM) - **Arxiv link:** https://arxiv.org/abs/2309.00216 - **Pdf link:** https://arxiv.org/pdf/2309.00216 - **Abstract** Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a given facial photo. Existing FSS methods merely rely on 2D representations of facial semantic or appearance. However, professional human artists usually use outlines or shadings to covey 3D geometry. Thus facial 3D geometry (e.g. depth map) is extremely important for FSS. Besides, different artists may use diverse drawing techniques and create multiple styles of sketches; but the style is globally consistent in a sketch. Inspired by such observations, in this paper, we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specially, we propose to dynamically modulate neuron activations based on a joint consideration of both facial 3D geometry and 2D appearance, as well as globally consistent style control. Besides, we use deformable convolutions at coarse-scales to align deep features, for generating abstract and distinct outlines. Experiments show that HIDA can generate high-quality sketches in multiple styles, and significantly outperforms previous methods, over a large range of challenging faces. Besides, HIDA allows precise style control of the synthesized sketch, and generalizes well to natural scenes and other artistic styles. Our code and results have been released online at: https://github.com/AiArt-HDU/HIDA. 
### Fast Diffusion EM: a diffusion model for blind inverse problems with application to deconvolution - **Authors:** Charles Laroche, Andrés Almansa, Eva Coupete - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2309.00287 - **Pdf link:** https://arxiv.org/pdf/2309.00287 - **Abstract** Using diffusion models to solve inverse problems is a growing field of research. Current methods assume the degradation to be known and provide impressive results in terms of restoration quality and diversity. In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the degradation model. In particular, we designed an algorithm based on the well-known Expectation-Minimization (EM) estimation method and diffusion models. Our method alternates between approximating the expected log-likelihood of the inverse problem using samples drawn from a diffusion model and a maximization step to estimate unknown model parameters. For the maximization step, we also introduce a novel blur kernel regularization based on a Plug \& Play denoiser. Diffusion models are long to run, thus we provide a fast version of our algorithm. Extensive experiments on blind image deblurring demonstrate the effectiveness of our method when compared to other state-of-the-art approaches. ### Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture - **Authors:** Shaohua Pan, Qi Ma, Xinyu Yi, Weifeng Hu, Xiong Wang, Xingkang Zhou, Jijunnan Li, Feng Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2309.00310 - **Pdf link:** https://arxiv.org/pdf/2309.00310 - **Abstract** Either RGB images or inertial signals have been used for the task of motion capture (mocap), but combining them together is a new and interesting topic. 
We believe that the combination is complementary and able to solve the inherent difficulties of using one modality input, including occlusions, extreme lighting/texture, and out-of-view for visual mocap and global drifts for inertial mocap. To this end, we propose a method that fuses monocular images and sparse IMUs for real-time human motion capture. Our method contains a dual coordinate strategy to fully explore the IMU signals with different goals in motion capture. To be specific, besides one branch transforming the IMU signals to the camera coordinate system to combine with the image information, there is another branch to learn from the IMU signals in the body root coordinate system to better estimate body poses. Furthermore, a hidden state feedback mechanism is proposed for both two branches to compensate for their own drawbacks in extreme input cases. Thus our method can easily switch between the two kinds of signals or combine them in different cases to achieve a robust mocap. %The two divided parts can help each other for better mocap results under different conditions. Quantitative and qualitative results demonstrate that by delicately designing the fusion method, our technique significantly outperforms the state-of-the-art vision, IMU, and combined methods on both global orientation and local pose estimation. Our codes are available for research at https://shaohua-pan.github.io/robustcap-page/. ### Robust Point Cloud Processing through Positional Embedding - **Authors:** Jianqiao Zheng, Xueqian Li, Sameera Ramasinghe, Simon Lucey - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2309.00339 - **Pdf link:** https://arxiv.org/pdf/2309.00339 - **Abstract** End-to-end trained per-point embeddings are an essential ingredient of any state-of-the-art 3D point cloud processing such as detection or alignment. 
Methods like PointNet, or the more recent point cloud transformer -- and its variants -- all employ learned per-point embeddings. Despite impressive performance, such approaches are sensitive to out-of-distribution (OOD) noise and outliers. In this paper, we explore the role of an analytical per-point embedding based on the criterion of bandwidth. The concept of bandwidth enables us to draw connections with an alternate per-point embedding -- positional embedding, particularly random Fourier features. We present compelling robust results across downstream tasks such as point cloud classification and registration with several categories of OOD noise. ### Iterative Multi-granular Image Editing using Diffusion Models - **Authors:** K J Joseph, Prateksha Udhayanan, Tripti Shukla, Aishwarya Agarwal, Srikrishna Karanam, Koustava Goswami, Balaji Vasan Srinivasan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2309.00613 - **Pdf link:** https://arxiv.org/pdf/2309.00613 - **Abstract** Recent advances in text-guided image synthesis has dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as Iterative Multi-granular Editing. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: Iterative Multi-granular Image Editor. 
EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. We conduct exhaustive quantitatively and qualitatively evaluation against recent state-of-the-art approaches adapted to our task, to being out the mettle of EMILIE. We hope our work would attract attention to this newly identified, pragmatic problem setting. ## Keyword: raw image ### Bellybutton: Accessible and Customizable Deep-Learning Image Segmentation - **Authors:** Sam Dillavou, Jesse M. Hanlan, Anthony T. Chieco, Hongyi Xiao, Sage Fulco, Kevin T. Turner, Douglas J. Durian - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Soft Condensed Matter (cond-mat.soft) - **Arxiv link:** https://arxiv.org/abs/2309.00058 - **Pdf link:** https://arxiv.org/pdf/2309.00058 - **Abstract** The conversion of raw images into quantifiable data can be a major hurdle in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required), image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentation of example images, but, as we show, just one or even a portion of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the regions(s) of interest. 
Instructions for easy download and use, with further details and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg.
2.0
New submissions for Mon, 4 Sep 23 - ## Keyword: events There is no result ## Keyword: event camera ### SoDaCam: Software-defined Cameras via Single-Photon Imaging - **Authors:** Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Brusschini, Edoardo Charbon, Mohit Gupta - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2309.00066 - **Pdf link:** https://arxiv.org/pdf/2309.00066 - **Abstract** Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging. 
### Dense Voxel 3D Reconstruction Using a Monocular Event Camera - **Authors:** Haodong Chen, Vera Chung, Li Tan, Xiaoming Chen - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2309.00385 - **Pdf link:** https://arxiv.org/pdf/2309.00385 - **Abstract** Event cameras are sensors inspired by biological systems that specialize in capturing changes in brightness. These emerging cameras offer many advantages over conventional frame-based cameras, including high dynamic range, high frame rates, and extremely low power consumption. Due to these advantages, event cameras have increasingly been adapted in various fields, such as frame interpolation, semantic segmentation, odometry, and SLAM. However, their application in 3D reconstruction for VR applications is underexplored. Previous methods in this field mainly focused on 3D reconstruction through depth map estimation. Methods that produce dense 3D reconstruction generally require multiple cameras, while methods that utilize a single event camera can only produce a semi-dense result. Other single-camera methods that can produce dense 3D reconstruction rely on creating a pipeline that either incorporates the aforementioned methods or other existing Structure from Motion (SfM) or Multi-view Stereo (MVS) methods. In this paper, we propose a novel approach for solving dense 3D reconstruction using only a single event camera. To the best of our knowledge, our work is the first attempt in this regard. Our preliminary results demonstrate that the proposed method can produce visually distinguishable dense 3D reconstructions directly without requiring pipelines like those used by existing methods. Additionally, we have created a synthetic dataset with $39,739$ object scans using an event camera simulator. This dataset will help accelerate other relevant research in this field. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture - **Authors:** Shaohua Pan, Qi Ma, Xinyu Yi, Weifeng Hu, Xiong Wang, Xingkang Zhou, Jijunnan Li, Feng Xu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2309.00310 - **Pdf link:** https://arxiv.org/pdf/2309.00310 - **Abstract** Either RGB images or inertial signals have been used for the task of motion capture (mocap), but combining them together is a new and interesting topic. We believe that the combination is complementary and able to solve the inherent difficulties of using one modality input, including occlusions, extreme lighting/texture, and out-of-view for visual mocap and global drifts for inertial mocap. To this end, we propose a method that fuses monocular images and sparse IMUs for real-time human motion capture. Our method contains a dual coordinate strategy to fully explore the IMU signals with different goals in motion capture. To be specific, besides one branch transforming the IMU signals to the camera coordinate system to combine with the image information, there is another branch to learn from the IMU signals in the body root coordinate system to better estimate body poses. Furthermore, a hidden state feedback mechanism is proposed for both two branches to compensate for their own drawbacks in extreme input cases. Thus our method can easily switch between the two kinds of signals or combine them in different cases to achieve a robust mocap. %The two divided parts can help each other for better mocap results under different conditions. 
Quantitative and qualitative results demonstrate that by delicately designing the fusion method, our technique significantly outperforms the state-of-the-art vision, IMU, and combined methods on both global orientation and local pose estimation. Our codes are available for research at https://shaohua-pan.github.io/robustcap-page/. ### Iterative Multi-granular Image Editing using Diffusion Models - **Authors:** K J Joseph, Prateksha Udhayanan, Tripti Shukla, Aishwarya Agarwal, Srikrishna Karanam, Koustava Goswami, Balaji Vasan Srinivasan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2309.00613 - **Pdf link:** https://arxiv.org/pdf/2309.00613 - **Abstract** Recent advances in text-guided image synthesis has dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as Iterative Multi-granular Editing. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: Iterative Multi-granular Image Editor. EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. 
We conduct exhaustive quantitatively and qualitatively evaluation against recent state-of-the-art approaches adapted to our task, to being out the mettle of EMILIE. We hope our work would attract attention to this newly identified, pragmatic problem setting. ## Keyword: ISP ### FACET: Fairness in Computer Vision Evaluation Benchmark - **Authors:** Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2309.00035 - **Pdf link:** https://arxiv.org/pdf/2309.00035 - **Abstract** Computer vision models have known performance disparities across attributes such as gender and skin tone. This means during tasks such as classification and detection, model performance differs for certain classes based on the demographics of the people in the image. These disparities have been shown to exist, but until now there has not been a unified approach to measure these differences for common use-cases of computer vision models. We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks - image classification, object detection and segmentation. For every image in FACET, we hired expert reviewers to manually annotate person-related attributes such as perceived skin tone and hair type, manually draw bounding boxes and label fine-grained person-related classes such as disk jockey or guitarist. In addition, we use FACET to benchmark state-of-the-art vision models and present a deeper understanding of potential performance disparities and challenges across sensitive demographic attributes. With the exhaustive annotations collected, we probe models using single demographics attributes as well as multiple attributes using an intersectional approach (e.g. 
hair color and perceived skin tone). Our results show that classification, detection, segmentation, and visual grounding models exhibit performance disparities across demographic attributes and intersections of attributes. These harms suggest that not all people represented in datasets receive fair and equitable treatment in these vision tasks. We hope current and future results using our benchmark will contribute to fairer, more robust vision models. FACET is available publicly at https://facet.metademolab.com/

### Human trajectory prediction using LSTM with Attention mechanism
- **Authors:** Amin Manafi Soltan Ahmadi, Samaneh Hoseini Semnani
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2309.00331
- **Pdf link:** https://arxiv.org/pdf/2309.00331
- **Abstract** In this paper, we propose a human trajectory prediction model that combines a Long Short-Term Memory (LSTM) network with an attention mechanism. To do that, we use attention scores to determine which parts of the input data the model should focus on when making predictions. Attention scores are calculated for each input feature, with a higher score indicating the greater significance of that feature in predicting the output. Initially, these scores are determined for the target human position, velocity, and their neighboring individuals' positions and velocities. By using attention scores, our model can prioritize the most relevant information in the input data and make more accurate predictions. We extract attention scores from our attention mechanism and integrate them into the trajectory prediction module to predict human future trajectories. To achieve this, we introduce a new neural layer that processes attention scores after extracting them and concatenates them with positional information.
We evaluate our approach on the publicly available ETH and UCY datasets and measure its performance using the final displacement error (FDE) and average displacement error (ADE) metrics. We show that our modified algorithm performs better than the Social LSTM in predicting the future trajectory of pedestrians in crowded spaces. Specifically, our model achieves an improvement of 6.2% in ADE and 6.3% in FDE compared to the Social LSTM results in the literature.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### SoDaCam: Software-defined Cameras via Single-Photon Imaging
- **Authors:** Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Brusschini, Edoardo Charbon, Mohit Gupta
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2309.00066
- **Pdf link:** https://arxiv.org/pdf/2309.00066
- **Abstract** Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras.
As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging.

## Keyword: RAW

### FACET: Fairness in Computer Vision Evaluation Benchmark
- **Authors:** Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2309.00035
- **Pdf link:** https://arxiv.org/pdf/2309.00035
- **Abstract** Computer vision models have known performance disparities across attributes such as gender and skin tone. This means during tasks such as classification and detection, model performance differs for certain classes based on the demographics of the people in the image. These disparities have been shown to exist, but until now there has not been a unified approach to measure these differences for common use-cases of computer vision models. We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks - image classification, object detection and segmentation. For every image in FACET, we hired expert reviewers to manually annotate person-related attributes such as perceived skin tone and hair type, manually draw bounding boxes and label fine-grained person-related classes such as disk jockey or guitarist. In addition, we use FACET to benchmark state-of-the-art vision models and present a deeper understanding of potential performance disparities and challenges across sensitive demographic attributes. With the exhaustive annotations collected, we probe models using single demographics attributes as well as multiple attributes using an intersectional approach (e.g. hair color and perceived skin tone).
Our results show that classification, detection, segmentation, and visual grounding models exhibit performance disparities across demographic attributes and intersections of attributes. These harms suggest that not all people represented in datasets receive fair and equitable treatment in these vision tasks. We hope current and future results using our benchmark will contribute to fairer, more robust vision models. FACET is available publicly at https://facet.metademolab.com/

### Bellybutton: Accessible and Customizable Deep-Learning Image Segmentation
- **Authors:** Sam Dillavou, Jesse M. Hanlan, Anthony T. Chieco, Hongyi Xiao, Sage Fulco, Kevin T. Turner, Douglas J. Durian
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Soft Condensed Matter (cond-mat.soft)
- **Arxiv link:** https://arxiv.org/abs/2309.00058
- **Pdf link:** https://arxiv.org/pdf/2309.00058
- **Abstract** The conversion of raw images into quantifiable data can be a major hurdle in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required), image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentation of example images, but, as we show, just one or even a portion of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the region(s) of interest. Instructions for easy download and use, with further details and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg.
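Segmenters like Bellybutton are typically scored by comparing their predicted masks against the user-provided ground truth with intersection-over-union (IoU). A minimal illustrative sketch (the 4x4 toy masks below are made up, not from the paper):

```python
def iou(pred, truth):
    """Intersection-over-union between two binary masks (nested lists of 0/1)."""
    inter = sum(p & t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    union = sum(p | t for prow, trow in zip(pred, truth) for p, t in zip(prow, trow))
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Hypothetical predicted region vs. user-annotated region of interest.
pred  = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
truth = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(iou(pred, truth))  # intersection 3, union 4 -> 0.75
```

An IoU of 1.0 means the prediction reproduces the annotation pixel-for-pixel; values near 0 indicate the regions barely overlap.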
### Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
- **Authors:** Fei Gao, Yifan Zhu, Chang Jiang, Nannan Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2309.00216
- **Pdf link:** https://arxiv.org/pdf/2309.00216
- **Abstract** Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a given facial photo. Existing FSS methods merely rely on 2D representations of facial semantics or appearance. However, professional human artists usually use outlines or shadings to convey 3D geometry. Thus facial 3D geometry (e.g. depth map) is extremely important for FSS. Besides, different artists may use diverse drawing techniques and create multiple styles of sketches; but the style is globally consistent in a sketch. Inspired by such observations, in this paper, we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specifically, we propose to dynamically modulate neuron activations based on a joint consideration of both facial 3D geometry and 2D appearance, as well as globally consistent style control. Besides, we use deformable convolutions at coarse-scales to align deep features, for generating abstract and distinct outlines. Experiments show that HIDA can generate high-quality sketches in multiple styles, and significantly outperforms previous methods, over a large range of challenging faces. Besides, HIDA allows precise style control of the synthesized sketch, and generalizes well to natural scenes and other artistic styles. Our code and results have been released online at: https://github.com/AiArt-HDU/HIDA.
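The abstract does not spell out HIDA's exact modulation operator, but "dynamically modulating neuron activations" from a conditioning signal is commonly done with feature-wise scale-and-shift (FiLM-style) modulation. A minimal sketch under that assumption, with toy values standing in for the network-predicted parameters:

```python
def modulate(features, gamma, beta):
    """Feature-wise modulation: scale and shift each channel's activations.

    In a real model, `gamma` and `beta` would be predicted by a network
    conditioned on style, geometry, and appearance; here they are fixed
    toy values for illustration.
    """
    return [[g * x + b for x in channel]
            for channel, g, b in zip(features, gamma, beta)]

# Two channels of activations, modulated by a hypothetical style code.
features = [[1.0, 2.0], [3.0, 4.0]]
gamma, beta = [2.0, 0.5], [0.0, 1.0]
print(modulate(features, gamma, beta))  # [[2.0, 4.0], [2.5, 3.0]]
```

Because the same per-channel parameters apply across all spatial positions, this kind of modulation naturally enforces the globally consistent style the abstract describes.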
### Fast Diffusion EM: a diffusion model for blind inverse problems with application to deconvolution
- **Authors:** Charles Laroche, Andrés Almansa, Eva Coupete
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.00287
- **Pdf link:** https://arxiv.org/pdf/2309.00287
- **Abstract** Using diffusion models to solve inverse problems is a growing field of research. Current methods assume the degradation to be known and provide impressive results in terms of restoration quality and diversity. In this work, we leverage the efficiency of those models to jointly estimate the restored image and unknown parameters of the degradation model. In particular, we designed an algorithm based on the well-known Expectation-Maximization (EM) estimation method and diffusion models. Our method alternates between approximating the expected log-likelihood of the inverse problem using samples drawn from a diffusion model and a maximization step to estimate unknown model parameters. For the maximization step, we also introduce a novel blur kernel regularization based on a Plug & Play denoiser. Diffusion models are slow to run, so we provide a fast version of our algorithm. Extensive experiments on blind image deblurring demonstrate the effectiveness of our method when compared to other state-of-the-art approaches.

### Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture
- **Authors:** Shaohua Pan, Qi Ma, Xinyu Yi, Weifeng Hu, Xiong Wang, Xingkang Zhou, Jijunnan Li, Feng Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.00310
- **Pdf link:** https://arxiv.org/pdf/2309.00310
- **Abstract** Either RGB images or inertial signals have been used for the task of motion capture (mocap), but combining them together is a new and interesting topic.
We believe that the combination is complementary and able to solve the inherent difficulties of using one modality input, including occlusions, extreme lighting/texture, and out-of-view for visual mocap and global drifts for inertial mocap. To this end, we propose a method that fuses monocular images and sparse IMUs for real-time human motion capture. Our method contains a dual coordinate strategy to fully explore the IMU signals with different goals in motion capture. To be specific, besides one branch transforming the IMU signals to the camera coordinate system to combine with the image information, there is another branch to learn from the IMU signals in the body root coordinate system to better estimate body poses. Furthermore, a hidden state feedback mechanism is proposed for both branches to compensate for their own drawbacks in extreme input cases. Thus our method can easily switch between the two kinds of signals or combine them in different cases to achieve a robust mocap. Quantitative and qualitative results demonstrate that by delicately designing the fusion method, our technique significantly outperforms the state-of-the-art vision, IMU, and combined methods on both global orientation and local pose estimation. Our codes are available for research at https://shaohua-pan.github.io/robustcap-page/.

### Robust Point Cloud Processing through Positional Embedding
- **Authors:** Jianqiao Zheng, Xueqian Li, Sameera Ramasinghe, Simon Lucey
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.00339
- **Pdf link:** https://arxiv.org/pdf/2309.00339
- **Abstract** End-to-end trained per-point embeddings are an essential ingredient of any state-of-the-art 3D point cloud processing such as detection or alignment.
Methods like PointNet, or the more recent point cloud transformer -- and its variants -- all employ learned per-point embeddings. Despite impressive performance, such approaches are sensitive to out-of-distribution (OOD) noise and outliers. In this paper, we explore the role of an analytical per-point embedding based on the criterion of bandwidth. The concept of bandwidth enables us to draw connections with an alternate per-point embedding -- positional embedding, particularly random Fourier features. We present compelling robust results across downstream tasks such as point cloud classification and registration with several categories of OOD noise.

### Iterative Multi-granular Image Editing using Diffusion Models
- **Authors:** K J Joseph, Prateksha Udhayanan, Tripti Shukla, Aishwarya Agarwal, Srikrishna Karanam, Koustava Goswami, Balaji Vasan Srinivasan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2309.00613
- **Pdf link:** https://arxiv.org/pdf/2309.00613
- **Abstract** Recent advances in text-guided image synthesis have dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as Iterative Multi-granular Editing. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: Iterative Multi-granular Image Editor.
EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. We conduct exhaustive quantitative and qualitative evaluation against recent state-of-the-art approaches adapted to our task, to bring out the mettle of EMILIE. We hope our work would attract attention to this newly identified, pragmatic problem setting.

## Keyword: raw image

### Bellybutton: Accessible and Customizable Deep-Learning Image Segmentation
- **Authors:** Sam Dillavou, Jesse M. Hanlan, Anthony T. Chieco, Hongyi Xiao, Sage Fulco, Kevin T. Turner, Douglas J. Durian
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Soft Condensed Matter (cond-mat.soft)
- **Arxiv link:** https://arxiv.org/abs/2309.00058
- **Pdf link:** https://arxiv.org/pdf/2309.00058
- **Abstract** The conversion of raw images into quantifiable data can be a major hurdle in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required), image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentation of example images, but, as we show, just one or even a portion of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the region(s) of interest.
Instructions for easy download and use, with further details and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg.
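Turning a raw image into quantifiable data, as the abstract describes, usually means producing a binary mask and then extracting countable regions of interest from it via connected-component labeling. A minimal sketch (the 4-connected flood fill and toy mask below are illustrative, not Bellybutton's code):

```python
from collections import deque

def regions(mask):
    """Label 4-connected regions of 1s in a binary mask; returns a list of pixel sets."""
    h, w = len(mask), len(mask[0])
    seen, out = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                # Breadth-first flood fill from an unvisited foreground pixel.
                comp, q = set(), deque([(i, j)])
                seen.add((i, j))
                while q:
                    y, x = q.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            q.append((ny, nx))
                out.append(comp)
    return out

# Thresholded toy image with two separate bright regions.
mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1]]
print(len(regions(mask)))  # 2
```

Each returned pixel set can then be measured (area, centroid, bounding box) to produce the quantifiable data experiments need.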
simon lucey subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract end to end trained per point embeddings are an essential ingredient of any state of the art point cloud processing such as detection or alignment methods like pointnet or the more recent point cloud transformer and its variants all employ learned per point embeddings despite impressive performance such approaches are sensitive to out of distribution ood noise and outliers in this paper we explore the role of an analytical per point embedding based on the criterion of bandwidth the concept of bandwidth enables us to draw connections with an alternate per point embedding positional embedding particularly random fourier features we present compelling robust results across downstream tasks such as point cloud classification and registration with several categories of ood noise iterative multi granular image editing using diffusion models authors k j joseph prateksha udhayanan tripti shukla aishwarya agarwal srikrishna karanam koustava goswami balaji vasan srinivasan subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg arxiv link pdf link abstract recent advances in text guided image synthesis has dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets to fully support such creative endeavors the process should possess the ability to iteratively edit the generations and control the spatial reach of desired changes global local or anything in between we formalize this pragmatic problem setting as iterative multi granular editing while there has been substantial progress with diffusion based models for image synthesis and editing they are all one shot i e no iterative editing capabilities and do not naturally yield multi granular control i e covering the full spectrum of local to global edits to overcome these drawbacks we propose emilie iterative multi granular image 
editor emilie introduces a novel latent iteration strategy which re purposes a pre trained diffusion model to facilitate iterative editing this is complemented by a gradient control operation for multi granular control we introduce a new benchmark dataset to evaluate our newly proposed setting we conduct exhaustive quantitatively and qualitatively evaluation against recent state of the art approaches adapted to our task to being out the mettle of emilie we hope our work would attract attention to this newly identified pragmatic problem setting keyword raw image bellybutton accessible and customizable deep learning image segmentation authors sam dillavou jesse m hanlan anthony t chieco hongyi xiao sage fulco kevin t turner douglas j durian subjects computer vision and pattern recognition cs cv soft condensed matter cond mat soft arxiv link pdf link abstract the conversion of raw images into quantifiable data can be a major hurdle in experimental research and typically involves identifying region s of interest a process known as segmentation machine learning tools for image segmentation are often specific to a set of tasks such as tracking cells or require substantial compute or coding knowledge to train and use here we introduce an easy to use no coding required image segmentation method using a layer convolutional neural network that can be trained on a laptop bellybutton the algorithm trains on user provided segmentation of example images but as we show just one or even a portion of one training image can be sufficient in some cases we detail the machine learning method and give three use cases where bellybutton correctly segments images despite substantial lighting shape size focus and or structure variation across the regions s of interest instructions for easy download and use with further details and the datasets used in this paper are available at pypi org project bellybuttonseg
1
11,488
14,360,285,418
IssuesEvent
2020-11-30 16:39:45
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Zonal Statistics as batch process
Bug Feedback Processing
Hi all, I'm seeing that Zonal Statistics had changed the way how it works between QGIS 3.10 and 3.16, but I can't find any documentation about it. Zonal Statistics on QGIS 3.10 saved the metrics calculated in new fields of the vector layer containing zones. So, running the algorithm as batch process, I was able to calculate some statistics for hundreds of raster layers, all in the same vector layer containing zones. Now on QGIS 3.16, the default is to Zonal Statistics generate a new vector layer. Of course it's possible to create hundreds of vector layers and just consider the last one, but this seems a drawback in the way how Zonal Statistics used to work. I thought "Append to Layer" option from @nyalldawson could be the workaround, but I can't understand well this option: https://qgis.org/en/site/forusers/visualchangelog314/index.html#feature-allow-appending-processing-results-to-existing-layers Is there a way to overcome this? If not it seems a drawback in Zonal Statistics usage.
1.0
Zonal Statistics as batch process - Hi all, I'm seeing that Zonal Statistics had changed the way how it works between QGIS 3.10 and 3.16, but I can't find any documentation about it. Zonal Statistics on QGIS 3.10 saved the metrics calculated in new fields of the vector layer containing zones. So, running the algorithm as batch process, I was able to calculate some statistics for hundreds of raster layers, all in the same vector layer containing zones. Now on QGIS 3.16, the default is to Zonal Statistics generate a new vector layer. Of course it's possible to create hundreds of vector layers and just consider the last one, but this seems a drawback in the way how Zonal Statistics used to work. I thought "Append to Layer" option from @nyalldawson could be the workaround, but I can't understand well this option: https://qgis.org/en/site/forusers/visualchangelog314/index.html#feature-allow-appending-processing-results-to-existing-layers Is there a way to overcome this? If not it seems a drawback in Zonal Statistics usage.
process
zonal statistics as batch process hi all i m seeing that zonal statistics had changed the way how it works between qgis and but i can t find any documentation about it zonal statistics on qgis saved the metrics calculated in new fields of the vector layer containing zones so running the algorithm as batch process i was able to calculate some statistics for hundreds of raster layers all in the same vector layer containing zones now on qgis the default is to zonal statistics generate a new vector layer of course it s possible to create hundreds of vector layers and just consider the last one but this seems a drawback in the way how zonal statistics used to work i thought append to layer option from nyalldawson could be the workaround but i can t understand well this option is there a way to overcome this if not it seems a drawback in zonal statistics usage
1
430,425
30,182,646,912
IssuesEvent
2023-07-04 09:53:23
opentiny/tiny-vue
https://api.github.com/repos/opentiny/tiny-vue
closed
🐛 [Bug]: Some of the code in the installation document in the user guide is not clear.
documentation good first issue
The link is as follows: [https://opentiny.design/tiny-vue/zh-CN/os-theme/docs/installation](https://opentiny.design/tiny-vue/zh-CN/os-theme/docs/installation) <img width="365" alt="image" src="https://user-images.githubusercontent.com/9566362/207795265-4696c30a-3279-4070-8e28-1ccb46f8a927.png">
1.0
🐛 [Bug]: Some of the code in the installation document in the user guide is not clear. - The link is as follows: [https://opentiny.design/tiny-vue/zh-CN/os-theme/docs/installation](https://opentiny.design/tiny-vue/zh-CN/os-theme/docs/installation) <img width="365" alt="image" src="https://user-images.githubusercontent.com/9566362/207795265-4696c30a-3279-4070-8e28-1ccb46f8a927.png">
non_process
🐛 some of the code in the installation document in the user guide is not clear the link is as follows img width alt image src
0
2,397
5,192,319,263
IssuesEvent
2017-01-22 07:09:57
AllenFang/react-bootstrap-table
https://api.github.com/repos/AllenFang/react-bootstrap-table
closed
Custom Checkbox - Cannot read Property Id of undefined
bug inprocess
I am creating a custom checkbox for which I am using material-ui Checkbox using `customComponent : this.customMultiSelect`. in selectRowProp I am using clickToSelect: true, and have a table of around 300 records. The issue is coming in 3rd scenario 1. If I click on row , Checkbox gets activated/deactivated properly. Works well. 2. If I click on select All , All rows checkboxes gets activated/deactivated properly. Works well. 3. But If I Click on the row checkbox, it does not get checked or unchecked but instead throws an error on console. `Uncaught TypeError: Cannot read property 'id' of undefined` on `BootstrapTable.handleSelectRow` I am not able to figure out the issue as row object is not passed properly to the above function. I m using chrome Version 52.0.2743.116 m (64-bit). react-bootstrap-table: v3.0.0-beta.2 My code in custom component is ``` return (<div> <Checkbox id = {'checkbox'+rowIndex} ref = { input=>{if(input) { input.indeterminate = props.indeterminate}}} key= {'checkbox'+rowIndex} checked={checked} disabled = {disabled} onCheck ={(e) => onChange(e)}/> </div>) ```
1.0
Custom Checkbox - Cannot read Property Id of undefined - I am creating a custom checkbox for which I am using material-ui Checkbox using `customComponent : this.customMultiSelect`. in selectRowProp I am using clickToSelect: true, and have a table of around 300 records. The issue is coming in 3rd scenario 1. If I click on row , Checkbox gets activated/deactivated properly. Works well. 2. If I click on select All , All rows checkboxes gets activated/deactivated properly. Works well. 3. But If I Click on the row checkbox, it does not get checked or unchecked but instead throws an error on console. `Uncaught TypeError: Cannot read property 'id' of undefined` on `BootstrapTable.handleSelectRow` I am not able to figure out the issue as row object is not passed properly to the above function. I m using chrome Version 52.0.2743.116 m (64-bit). react-bootstrap-table: v3.0.0-beta.2 My code in custom component is ``` return (<div> <Checkbox id = {'checkbox'+rowIndex} ref = { input=>{if(input) { input.indeterminate = props.indeterminate}}} key= {'checkbox'+rowIndex} checked={checked} disabled = {disabled} onCheck ={(e) => onChange(e)}/> </div>) ```
process
custom checkbox cannot read property id of undefined i am creating a custom checkbox for which i am using material ui checkbox using customcomponent this custommultiselect in selectrowprop i am using clicktoselect true and have a table of around records the issue is coming in scenario if i click on row checkbox gets activated deactivated properly works well if i click on select all all rows checkboxes gets activated deactivated properly works well but if i click on the row checkbox it does not get checked or unchecked but instead throws an error on console uncaught typeerror cannot read property id of undefined on bootstraptable handleselectrow i am not able to figure out the issue as row object is not passed properly to the above function i m using chrome version m bit react bootstrap table beta my code in custom component is return if input input indeterminate props indeterminate key checkbox rowindex checked checked disabled disabled oncheck e onchange e
1
4,549
7,375,370,239
IssuesEvent
2018-03-14 00:04:50
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Guidance for AAD groups
active-directory assigned-to-author doc-enhancement in-process triaged
could you please provide recommendations for creating and managing Azure AD groups. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ed24fd26-b126-c171-7edd-9f6713f23ab5 * Version Independent ID: 1e240640-ff8c-8183-bd28-ebe3aa4674ff * [Content](https://docs.microsoft.com/en-us/azure/security/azure-security-identity-management-best-practices) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/security/azure-security-identity-management-best-practices.md) * Service: security
1.0
Guidance for AAD groups - could you please provide recommendations for creating and managing Azure AD groups. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: ed24fd26-b126-c171-7edd-9f6713f23ab5 * Version Independent ID: 1e240640-ff8c-8183-bd28-ebe3aa4674ff * [Content](https://docs.microsoft.com/en-us/azure/security/azure-security-identity-management-best-practices) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/security/azure-security-identity-management-best-practices.md) * Service: security
process
guidance for aad groups could you please provide recommendations for creating and managing azure ad groups document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service security
1
22,404
31,142,291,268
IssuesEvent
2023-08-16 01:44:41
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Flaky test: AssertionError: Timed out retrying after 4000ms: expected '<h1.pt-20px.font-medium.text-center.text-32px.text-body-gray-900>' to contain 'Choose a Browser'
OS: linux stage: needs review process: flaky test topic: flake ❄️ priority: medium topic: choose-a-browser stale
### Link to dashboard or CircleCI failure - https://app.circleci.com/pipelines/github/cypress-io/cypress/41290/workflows/ef0a59fb-fe3a-4fc1-aa0d-507a6cadb943/jobs/1708865 - https://app.circleci.com/pipelines/github/cypress-io/cypress/42175/workflows/ea923743-ac8c-4f52-a4ca-b320de09deff/jobs/1750218/tests#failed-test-0 ### Link to failing test in GitHub - https://github.com/cypress-io/cypress/blob/develop/packages/launchpad/cypress/e2e/choose-a-browser.cy.ts#L270 - https://github.com/cypress-io/cypress/blob/develop/packages/launchpad/cypress/e2e/config-warning.cy.ts#L71 ### Analysis <img width="1328" alt="Screen Shot 2022-08-05 at 12 40 57 PM" src="https://user-images.githubusercontent.com/26726429/183149250-1890eb55-67bb-45c8-b51c-bd4546c452dd.png"> When this test flakes, it hangs in the "Initializing config" loading state. It's reproducible locally too <img width="1281" alt="Screen Shot 2022-08-18 at 12 10 09 AM" src="https://user-images.githubusercontent.com/26726429/185330901-c9b66059-0aca-4597-964c-5f257c3f918e.png"> ### Cypress Version 10.4.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
1.0
Flaky test: AssertionError: Timed out retrying after 4000ms: expected '<h1.pt-20px.font-medium.text-center.text-32px.text-body-gray-900>' to contain 'Choose a Browser' - ### Link to dashboard or CircleCI failure - https://app.circleci.com/pipelines/github/cypress-io/cypress/41290/workflows/ef0a59fb-fe3a-4fc1-aa0d-507a6cadb943/jobs/1708865 - https://app.circleci.com/pipelines/github/cypress-io/cypress/42175/workflows/ea923743-ac8c-4f52-a4ca-b320de09deff/jobs/1750218/tests#failed-test-0 ### Link to failing test in GitHub - https://github.com/cypress-io/cypress/blob/develop/packages/launchpad/cypress/e2e/choose-a-browser.cy.ts#L270 - https://github.com/cypress-io/cypress/blob/develop/packages/launchpad/cypress/e2e/config-warning.cy.ts#L71 ### Analysis <img width="1328" alt="Screen Shot 2022-08-05 at 12 40 57 PM" src="https://user-images.githubusercontent.com/26726429/183149250-1890eb55-67bb-45c8-b51c-bd4546c452dd.png"> When this test flakes, it hangs in the "Initializing config" loading state. It's reproducible locally too <img width="1281" alt="Screen Shot 2022-08-18 at 12 10 09 AM" src="https://user-images.githubusercontent.com/26726429/185330901-c9b66059-0aca-4597-964c-5f257c3f918e.png"> ### Cypress Version 10.4.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
process
flaky test assertionerror timed out retrying after expected to contain choose a browser link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at pm src when this test flakes it hangs in the initializing config loading state it s reproducible locally too img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
1
20,827
27,581,613,703
IssuesEvent
2023-03-08 16:34:49
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Support WindowStyle without UseShellExecute in ProcessStartInfo and Process implementation on Windows
area-System.Diagnostics.Process in-pr
In PowerShell repository we have to use pinvoke to support WindowStyle without UseShellExecute. Today .Net supports WindowStyle only if UseShellExecute is true https://github.com/dotnet/runtime/blob/58719ec90b3bbae527dd81685bf8670b993fe8f9/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs#L23-L30 Current request is to add support for WindowStyle (on Windows) if UseShellExecute is false https://github.com/dotnet/runtime/blob/58719ec90b3bbae527dd81685bf8670b993fe8f9/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Windows.cs#L427 It looks like a very simple implementation. We would like confirmation soon from the .Net team that this can be implemented in .Net 8 so that we can avoid unnecessary work in the PowerShell repository.
1.0
Support WindowStyle without UseShellExecute in ProcessStartInfo and Process implementation on Windows - In PowerShell repository we have to use pinvoke to support WindowStyle without UseShellExecute. Today .Net supports WindowStyle only if UseShellExecute is true https://github.com/dotnet/runtime/blob/58719ec90b3bbae527dd81685bf8670b993fe8f9/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Win32.cs#L23-L30 Current request is to add support for WindowStyle (on Windows) if UseShellExecute is false https://github.com/dotnet/runtime/blob/58719ec90b3bbae527dd81685bf8670b993fe8f9/src/libraries/System.Diagnostics.Process/src/System/Diagnostics/Process.Windows.cs#L427 It looks like a very simple implementation. We would like confirmation soon from the .Net team that this can be implemented in .Net 8 so that we can avoid unnecessary work in the PowerShell repository.
process
support windowstyle without useshellexecute in processstartinfo and process implementation on windows in powershell repository we have to use pinvoke to support windowstyle without useshellexecute today net supports windowstyle only if useshellexecute is true current request is to add support for windowstyle on windows if useshellexecute is false it looks like a very simple implementation we would like confirmation soon from the net team that this can be implemented in net so that we can avoid unnecessary work in the powershell repository
1
8,134
11,339,309,747
IssuesEvent
2020-01-23 01:28:02
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
opened
Add applicant status pills to student landing page
Apply Process Landing page State Dept.
Who: Student applicants What: Applicant status pills appear on landing page Why: To provide visual status Acceptance Criteria: - Add student applicant status pills to student landing page. Currently only the words appear ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/c8a7e11c-9e45-4e90-9a3b-9d4418d2a644) Related ticket:
1.0
Add applicant status pills to student landing page - Who: Student applicants What: Applicant status pills appear on landing page Why: To provide visual status Acceptance Criteria: - Add student applicant status pills to student landing page. Currently only the words appear ![image.png](https://images.zenhubusercontent.com/59ee08f1a468affe6df7cd6f/c8a7e11c-9e45-4e90-9a3b-9d4418d2a644) Related ticket:
process
add applicant status pills to student landing page who student applicants what applicant status pills appear on landing page why to provide visual status acceptance criteria add student applicant status pills to student landing page currently only the words appear related ticket
1
10,205
13,066,772,103
IssuesEvent
2020-07-30 22:29:39
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Cannot see Model Output in batch process configuration of Model
Bug Modeller Processing Regression
<!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** Graphical Modeler. If there is only one output of the entire Model, it does not show in the ‘Run as batch Process’ settings configuration page, thereby unable to select its output file name and location. If there are multiple outputs, one of them does not get displayed in the ‘Run as batch Process’. **How to Reproduce** 1. Create a Graphical Model which gives a Model output 2. Run the Model 3. Select the Run as batch Process 4. In the settings page of Run as batch Process, the Model Output is not available as a column to select and feedin its settings. **QGIS and OS versions** QGIS version 3.14.0-Pi QGIS code revision 9f7028fd23 Compiled against Qt 5.11.2 Running against Qt 5.11.2 Compiled against GDAL/OGR 3.0.4 Running against GDAL/OGR 3.0.4 Compiled against GEOS 3.8.1-CAPI-1.13.3 Running against GEOS 3.8.1-CAPI-1.13.3 Compiled against SQLite 3.29.0 Running against SQLite 3.29.0 PostgreSQL Client Version 11.5 SpatiaLite Version 4.3.0 QWT Version 6.1.3 QScintilla2 Version 2.10.8 Compiled against PROJ 6.3.2 Running against PROJ Rel. 
6.3.2, May 1st, 2020 OS Version Windows 10 (10.0) Active python plugins db_manager; MetaSearch; processing [GM - Copy.pdf](https://github.com/qgis/QGIS/files/4980432/GM.-.Copy.pdf) **Additional context** <!-- Add any other context about the problem here. -->
1.0
Cannot see Model Output in batch process configuration of Model - <!-- Bug fixing and feature development is a community responsibility, and not the responsibility of the QGIS project alone. If this bug report or feature request is high-priority for you, we suggest engaging a QGIS developer or support organisation and financially sponsoring a fix Checklist before submitting - [ ] Search through existing issue reports and gis.stackexchange.com to check whether the issue already exists - [ ] Test with a [clean new user profile](https://docs.qgis.org/testing/en/docs/user_manual/introduction/qgis_configuration.html?highlight=profile#working-with-user-profiles). - [ ] Create a light and self-contained sample dataset and project file which demonstrates the issue --> **Describe the bug** Graphical Modeler. If there is only one output of the entire Model, it does not show in the ‘Run as batch Process’ settings configuration page, thereby unable to select its output file name and location. If there are multiple outputs, one of them does not get displayed in the ‘Run as batch Process’. **How to Reproduce** 1. Create a Graphical Model which gives a Model output 2. Run the Model 3. Select the Run as batch Process 4. In the settings page of Run as batch Process, the Model Output is not available as a column to select and feedin its settings. **QGIS and OS versions** QGIS version 3.14.0-Pi QGIS code revision 9f7028fd23 Compiled against Qt 5.11.2 Running against Qt 5.11.2 Compiled against GDAL/OGR 3.0.4 Running against GDAL/OGR 3.0.4 Compiled against GEOS 3.8.1-CAPI-1.13.3 Running against GEOS 3.8.1-CAPI-1.13.3 Compiled against SQLite 3.29.0 Running against SQLite 3.29.0 PostgreSQL Client Version 11.5 SpatiaLite Version 4.3.0 QWT Version 6.1.3 QScintilla2 Version 2.10.8 Compiled against PROJ 6.3.2 Running against PROJ Rel. 
6.3.2, May 1st, 2020 OS Version Windows 10 (10.0) Active python plugins db_manager; MetaSearch; processing [GM - Copy.pdf](https://github.com/qgis/QGIS/files/4980432/GM.-.Copy.pdf) **Additional context** <!-- Add any other context about the problem here. -->
process
cannot see model output in batch process configuration of model bug fixing and feature development is a community responsibility and not the responsibility of the qgis project alone if this bug report or feature request is high priority for you we suggest engaging a qgis developer or support organisation and financially sponsoring a fix checklist before submitting search through existing issue reports and gis stackexchange com to check whether the issue already exists test with a create a light and self contained sample dataset and project file which demonstrates the issue describe the bug graphical modeler if there is only one output of the entire model it does not show in the ‘run as batch process’ settings configuration page thereby unable to select its output file name and location if there are multiple outputs one of them does not get displayed in the ‘run as batch process’ how to reproduce create a graphical model which gives a model output run the model select the run as batch process in the settings page of run as batch process the model output is not available as a column to select and feedin its settings qgis and os versions qgis version pi qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows active python plugins db manager metasearch processing additional context
1
2,932
3,968,608,382
IssuesEvent
2016-05-03 20:16:44
tsoding/voronoi-diagram
https://api.github.com/repos/tsoding/voronoi-diagram
opened
The name of the executable depends on tha name of the main module
bug infrastructure
It should be independent and always the same. For example "voro" or something. Maybe there is an oasis option for that
1.0
The name of the executable depends on tha name of the main module - It should be independent and always the same. For example "voro" or something. Maybe there is an oasis option for that
non_process
the name of the executable depends on tha name of the main module it should be independent and always the same for example voro or something maybe there is an oasis option for that
0
75,874
7,495,264,721
IssuesEvent
2018-04-07 18:57:42
MajkiIT/polish-ads-filter
https://api.github.com/repos/MajkiIT/polish-ads-filter
closed
sport.tvp.pl
reguły gotowe/testowanie
Nie działa player http://sport.tvp.pl/36581833/tenis-wta-charleston-mecze-3-rundy http://sport.tvp.pl/36581837/tenis-wta-monterrey-mecz-2-rundy-s-vickery-a-sanchez ![opera zdjecie_2018-04-05_222258_sport tvp pl](https://user-images.githubusercontent.com/36385327/38389908-10ab826a-3920-11e8-8c4b-6e5622480a97.png) ![opera zdjecie_2018-04-05_222339_sport tvp pl](https://user-images.githubusercontent.com/36385327/38389913-12c057f6-3920-11e8-804a-02d915ca184b.png) Właśnie zauważyłem, że po odświeżeniu strony raz działa, a raz nie.
1.0
sport.tvp.pl - Nie działa player http://sport.tvp.pl/36581833/tenis-wta-charleston-mecze-3-rundy http://sport.tvp.pl/36581837/tenis-wta-monterrey-mecz-2-rundy-s-vickery-a-sanchez ![opera zdjecie_2018-04-05_222258_sport tvp pl](https://user-images.githubusercontent.com/36385327/38389908-10ab826a-3920-11e8-8c4b-6e5622480a97.png) ![opera zdjecie_2018-04-05_222339_sport tvp pl](https://user-images.githubusercontent.com/36385327/38389913-12c057f6-3920-11e8-804a-02d915ca184b.png) Właśnie zauważyłem, że po odświeżeniu strony raz działa, a raz nie.
non_process
sport tvp pl the player does not work i just noticed that after refreshing the page it sometimes works and sometimes does not
0
147,704
23,258,354,599
IssuesEvent
2022-08-04 11:20:56
microsoft/vscode-cpptools
https://api.github.com/repos/microsoft/vscode-cpptools
closed
Debugging
debugger by design
### Environment - OS and version: 22000.795 & 21H2 - VS Code:1.69.2 - C/C++ extension: Name: C/C++ Id: ms-vscode.cpptools Description: C/C++ IntelliSense, debugging, and code browsing. Version: 1.12.0 Publisher: Microsoft VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools - OS and version of remote machine (if applicable): - GDB / LLDB version:7.6.1 ### Bug Summary and Steps to Reproduce Bug Summary: Everytime I Debug a code on vscode the terminal takes a pause for somes sec and terminal gives the output 'c:\Users\sampu.vscode\extensions\ms-vscode.cpptools-1.11.4-win32-x64\debugAdapters\bin\WindowsDebugLauncher.exe' '--stdin=Microsoft-MIEngine-In-5or3txdx.55l' '--stdout=Microsoft-MIEngine-Out-2mbbrlkj.0my' '--stderr=Microsoft-MIEngine-Error-mn22flnn.vrz' '--pid=Microsoft-MIEngine-Pid-bdxzj0g3.l1b' '--dbgExe=C:\MinGW\bin\gdb.exe' '--interpreter=mi' ### Debugger Configurations ```shell { "tasks": [ { "type": "cppbuild", "label": "C/C++: g++.exe build active file", "command": "C:\\MinGW\\bin\\g++.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "C:\\MinGW\\bin" }, "problemMatcher": [ "$gcc" ], "group": { "kind": "build", "isDefault": true }, "detail": "Task generated by Debugger." }, { "type": "cppbuild", "label": "C/C++: cpp.exe build active file", "command": "C:\\MinGW\\bin\\cpp.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": "build", "detail": "Task generated by Debugger." } ], "version": "2.0.0" } ``` ### Debugger Logs ```shell Debug Console shows this Loaded 'C:\WINDOWS\SysWOW64\kernel32.dll'. Symbols loaded. Loaded 'C:\WINDOWS\SysWOW64\KernelBase.dll'. Symbols loaded. Loaded 'C:\WINDOWS\SysWOW64\msvcrt.dll'. Symbols loaded. Loaded 'C:\MinGW\bin\libgcc_s_dw2-1.dll'. Symbols loaded. 
Loaded 'C:\MinGW\bin\libstdc++-6.dll'. Symbols loaded. [New Thread 19372.0xf00] 1023 COLD The program 'c:\Users\sampu\Desktop\vscode\hello1.exe' has exited with code 0 (0x00000000). and the terminal shows c:\Users\sampu.vscode\extensions\ms-vscode.cpptools-1.11.4-win32-x64\debugAdapters\bin\WindowsDebugLauncher.exe' '--stdin=Microsoft-MIEngine-In-5or3txdx.55l' '--stdout=Microsoft-MIEngine-Out-2mbbrlkj.0my' '--stderr=Microsoft-MIEngine-Error-mn22flnn.vrz' '--pid=Microsoft-MIEngine-Pid-bdxzj0g3.l1b' '--dbgExe=C:\MinGW\bin\gdb.exe' '--interpreter=mi' ``` ### Other Extensions <html> <body> <!--StartFragment--> Extension | Author (truncated) | Version -- | -- | -- doxdocgen | csc | 1.4.0 code-runner | for | 0.11.8 better-cpp-syntax | jef | 1.15.19 cmake-tools | ms- | 1.11.26 cpptools | ms- | 1.11.4 cpptools-extension-pack | ms- | 1.2.0 cmake | twx | 0.0.17 <p dir="auto" style="box-sizing: border-box; margin-top: 0px; margin-bottom: 16px; color: rgb(201, 209, 217); font-family: -apple-system, BlinkMacSystemFont, &quot;Segoe UI&quot;, Helvetica, Arial, sans-serif, &quot;Apple Color Emoji&quot;, &quot;Segoe UI Emoji&quot;; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">(1 theme extensions excluded)</p><!--EndFragment--> </body> </html> ### Additional Information https://user-images.githubusercontent.com/92144348/182447164-920f1076-6840-48ed-8d37-ff90c7b5fcdf.mp4
1.0
Debugging - ### Environment - OS and version: 22000.795 & 21H2 - VS Code:1.69.2 - C/C++ extension: Name: C/C++ Id: ms-vscode.cpptools Description: C/C++ IntelliSense, debugging, and code browsing. Version: 1.12.0 Publisher: Microsoft VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools - OS and version of remote machine (if applicable): - GDB / LLDB version:7.6.1 ### Bug Summary and Steps to Reproduce Bug Summary: Everytime I Debug a code on vscode the terminal takes a pause for somes sec and terminal gives the output 'c:\Users\sampu.vscode\extensions\ms-vscode.cpptools-1.11.4-win32-x64\debugAdapters\bin\WindowsDebugLauncher.exe' '--stdin=Microsoft-MIEngine-In-5or3txdx.55l' '--stdout=Microsoft-MIEngine-Out-2mbbrlkj.0my' '--stderr=Microsoft-MIEngine-Error-mn22flnn.vrz' '--pid=Microsoft-MIEngine-Pid-bdxzj0g3.l1b' '--dbgExe=C:\MinGW\bin\gdb.exe' '--interpreter=mi' ### Debugger Configurations ```shell { "tasks": [ { "type": "cppbuild", "label": "C/C++: g++.exe build active file", "command": "C:\\MinGW\\bin\\g++.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "C:\\MinGW\\bin" }, "problemMatcher": [ "$gcc" ], "group": { "kind": "build", "isDefault": true }, "detail": "Task generated by Debugger." }, { "type": "cppbuild", "label": "C/C++: cpp.exe build active file", "command": "C:\\MinGW\\bin\\cpp.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": "build", "detail": "Task generated by Debugger." } ], "version": "2.0.0" } ``` ### Debugger Logs ```shell Debug Console shows this Loaded 'C:\WINDOWS\SysWOW64\kernel32.dll'. Symbols loaded. Loaded 'C:\WINDOWS\SysWOW64\KernelBase.dll'. Symbols loaded. Loaded 'C:\WINDOWS\SysWOW64\msvcrt.dll'. Symbols loaded. Loaded 'C:\MinGW\bin\libgcc_s_dw2-1.dll'. 
Symbols loaded. Loaded 'C:\MinGW\bin\libstdc++-6.dll'. Symbols loaded. [New Thread 19372.0xf00] 1023 COLD The program 'c:\Users\sampu\Desktop\vscode\hello1.exe' has exited with code 0 (0x00000000). and the terminal shows c:\Users\sampu.vscode\extensions\ms-vscode.cpptools-1.11.4-win32-x64\debugAdapters\bin\WindowsDebugLauncher.exe' '--stdin=Microsoft-MIEngine-In-5or3txdx.55l' '--stdout=Microsoft-MIEngine-Out-2mbbrlkj.0my' '--stderr=Microsoft-MIEngine-Error-mn22flnn.vrz' '--pid=Microsoft-MIEngine-Pid-bdxzj0g3.l1b' '--dbgExe=C:\MinGW\bin\gdb.exe' '--interpreter=mi' ``` ### Other Extensions <html> <body> <!--StartFragment--> Extension | Author (truncated) | Version -- | -- | -- doxdocgen | csc | 1.4.0 code-runner | for | 0.11.8 better-cpp-syntax | jef | 1.15.19 cmake-tools | ms- | 1.11.26 cpptools | ms- | 1.11.4 cpptools-extension-pack | ms- | 1.2.0 cmake | twx | 0.0.17 <p dir="auto" style="box-sizing: border-box; margin-top: 0px; margin-bottom: 16px; color: rgb(201, 209, 217); font-family: -apple-system, BlinkMacSystemFont, &quot;Segoe UI&quot;, Helvetica, Arial, sans-serif, &quot;Apple Color Emoji&quot;, &quot;Segoe UI Emoji&quot;; font-size: 14px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">(1 theme extensions excluded)</p><!--EndFragment--> </body> </html> ### Additional Information https://user-images.githubusercontent.com/92144348/182447164-920f1076-6840-48ed-8d37-ff90c7b5fcdf.mp4
non_process
debugging environment os and version vs code c c extension name c c id ms vscode cpptools description c c intellisense debugging and code browsing version publisher microsoft vs marketplace link os and version of remote machine if applicable gdb lldb version bug summary and steps to reproduce bug summary everytime i debug a code on vscode the terminal takes a pause for somes sec and terminal gives the output c users sampu vscode extensions ms vscode cpptools debugadapters bin windowsdebuglauncher exe stdin microsoft miengine in stdout microsoft miengine out stderr microsoft miengine error vrz pid microsoft miengine pid dbgexe c mingw bin gdb exe interpreter mi debugger configurations shell tasks type cppbuild label c c g exe build active file command c mingw bin g exe args fdiagnostics color always g file o filedirname filebasenamenoextension exe options cwd c mingw bin problemmatcher gcc group kind build isdefault true detail task generated by debugger type cppbuild label c c cpp exe build active file command c mingw bin cpp exe args fdiagnostics color always g file o filedirname filebasenamenoextension exe options cwd filedirname problemmatcher gcc group build detail task generated by debugger version debugger logs shell debug console shows this loaded c windows dll symbols loaded loaded c windows kernelbase dll symbols loaded loaded c windows msvcrt dll symbols loaded loaded c mingw bin libgcc s dll symbols loaded loaded c mingw bin libstdc dll symbols loaded cold the program c users sampu desktop vscode exe has exited with code and the terminal shows c users sampu vscode extensions ms vscode cpptools debugadapters bin windowsdebuglauncher exe stdin microsoft miengine in stdout microsoft miengine out stderr microsoft miengine error vrz pid microsoft miengine pid dbgexe c mingw bin gdb exe interpreter mi other extensions extension author truncated version doxdocgen csc code runner for better cpp syntax jef cmake tools ms cpptools ms cpptools extension pack ms 
cmake twx theme extensions excluded additional information
0
12,969
15,344,592,562
IssuesEvent
2021-02-28 02:05:04
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Charset info dublication if page already contains <meta charset> tag
AREA: server STATE: Stale SYSTEM: resource processing TYPE: enhancement
What encoding should be applied? Can we don't add the default `utf-8` encoding info in this case. ![Elements](https://user-images.githubusercontent.com/4133518/35806726-910c21de-0a91-11e8-95f7-f6f532196a55.jpg)
1.0
Charset info dublication if page already contains <meta charset> tag - What encoding should be applied? Can we don't add the default `utf-8` encoding info in this case. ![Elements](https://user-images.githubusercontent.com/4133518/35806726-910c21de-0a91-11e8-95f7-f6f532196a55.jpg)
process
charset info dublication if page already contains tag what encoding should be applied can we don t add the default utf encoding info in this case
1
830,314
32,001,580,695
IssuesEvent
2023-09-21 12:37:00
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
www.svt.se - see bug description
browser-firefox priority-normal os-mac engine-gecko
<!-- @browser: Firefox 78.0 --> <!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:78.0) Gecko/20100101 Firefox/78.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/127289 --> **URL**: https://www.svt.se/text-tv/100 **Browser / Version**: Firefox 78.0 **Operating System**: Mac OS X 10.10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: When looking at the page it appears for less than a second, then it disappears **Steps to Reproduce**: It works just as it should without any problems <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
www.svt.se - see bug description - <!-- @browser: Firefox 78.0 --> <!-- @ua_header: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:78.0) Gecko/20100101 Firefox/78.0 --> <!-- @reported_with: unknown --> <!-- @public_url: https://github.com/webcompat/web-bugs/issues/127289 --> **URL**: https://www.svt.se/text-tv/100 **Browser / Version**: Firefox 78.0 **Operating System**: Mac OS X 10.10 **Tested Another Browser**: Yes Chrome **Problem type**: Something else **Description**: When looking at the page it appears for less than a second, then it disappears **Steps to Reproduce**: It works just as it should without any problems <details> <summary>Browser Configuration</summary> <ul> <li>None</li> </ul> </details> _From [webcompat.com](https://webcompat.com/) with ❤️_
non_process
see bug description url browser version firefox operating system mac os x tested another browser yes chrome problem type something else description when looking at the page it appears for less than a second then it disappears steps to reproduce it works just as it should without any problems browser configuration none from with ❤️
0
84,617
15,724,725,001
IssuesEvent
2021-03-29 09:08:47
crouchr/learnage
https://api.github.com/repos/crouchr/learnage
opened
CVE-2013-0339 (Medium) detected in https://source.codeaurora.org/quic/la/platform/external/libxml2/AU_LINUX_ANDROID_LA.AF.1.1.05.00.00.164.085, gettextv0.19.8.1
security vulnerability
## CVE-2013-0339 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/quic/la/platform/external/libxml2/AU_LINUX_ANDROID_LA.AF.1.1.05.00.00.164.085</b>, <b>gettextv0.19.8.1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> libxml2 through 2.9.1 does not properly handle external entities expansion unless an application developer uses the xmlSAX2ResolveEntity or xmlSetExternalEntityLoader function, which allows remote attackers to cause a denial of service (resource consumption), send HTTP requests to intranet servers, or read arbitrary files via a crafted XML document, aka an XML External Entity (XXE) issue. NOTE: it could be argued that because libxml2 already provides the ability to disable external entity expansion, the responsibility for resolving this issue lies with application developers; according to this argument, this entry should be REJECTed and each affected application would need its own CVE. 
<p>Publish Date: 2014-01-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0339>CVE-2013-0339</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-0339">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-0339</a></p> <p>Release Date: 2014-01-21</p> <p>Fix Resolution: v2.9.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2013-0339 (Medium) detected in https://source.codeaurora.org/quic/la/platform/external/libxml2/AU_LINUX_ANDROID_LA.AF.1.1.05.00.00.164.085, gettextv0.19.8.1 - ## CVE-2013-0339 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>https://source.codeaurora.org/quic/la/platform/external/libxml2/AU_LINUX_ANDROID_LA.AF.1.1.05.00.00.164.085</b>, <b>gettextv0.19.8.1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> libxml2 through 2.9.1 does not properly handle external entities expansion unless an application developer uses the xmlSAX2ResolveEntity or xmlSetExternalEntityLoader function, which allows remote attackers to cause a denial of service (resource consumption), send HTTP requests to intranet servers, or read arbitrary files via a crafted XML document, aka an XML External Entity (XXE) issue. NOTE: it could be argued that because libxml2 already provides the ability to disable external entity expansion, the responsibility for resolving this issue lies with application developers; according to this argument, this entry should be REJECTed and each affected application would need its own CVE. 
<p>Publish Date: 2014-01-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0339>CVE-2013-0339</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>6.8</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-0339">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2013-0339</a></p> <p>Release Date: 2014-01-21</p> <p>Fix Resolution: v2.9.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in cve medium severity vulnerability vulnerable libraries vulnerability details through does not properly handle external entities expansion unless an application developer uses the or xmlsetexternalentityloader function which allows remote attackers to cause a denial of service resource consumption send http requests to intranet servers or read arbitrary files via a crafted xml document aka an xml external entity xxe issue note it could be argued that because already provides the ability to disable external entity expansion the responsibility for resolving this issue lies with application developers according to this argument this entry should be rejected and each affected application would need its own cve publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
261,017
27,785,114,971
IssuesEvent
2023-03-17 02:04:11
turkdevops/babel-bot
https://api.github.com/repos/turkdevops/babel-bot
opened
CVE-2023-28155 (Medium) detected in request-2.88.2.tgz, request-2.79.0.tgz
Mend: dependency security vulnerability
## CVE-2023-28155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>request-2.88.2.tgz</b>, <b>request-2.79.0.tgz</b></p></summary> <p> <details><summary><b>request-2.88.2.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.88.2.tgz">https://registry.npmjs.org/request/-/request-2.88.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/request</p> <p> Dependency Hierarchy: - jest-25.4.0.tgz (Root Library) - core-25.4.0.tgz - jest-config-25.4.0.tgz - jest-environment-jsdom-25.4.0.tgz - jsdom-15.2.1.tgz - :x: **request-2.88.2.tgz** (Vulnerable Library) </details> <details><summary><b>request-2.79.0.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.79.0.tgz">https://registry.npmjs.org/request/-/request-2.79.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/request</p> <p> Dependency Hierarchy: - nodemon-1.11.0.tgz (Root Library) - chokidar-1.7.0.tgz - fsevents-1.0.17.tgz - node-pre-gyp-0.6.32.tgz - :x: **request-2.79.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/babel-bot/commit/3892a076299e4fa06454dfb5727e7957834586de">3892a076299e4fa06454dfb5727e7957834586de</a></p> <p>Found in base branch: <b>update-dep</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** UNSUPPORTED WHEN ASSIGNED ** The Request package through 2.88.1 for Node.js allows a bypass of SSRF mitigations via an attacker-controller server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to 
HTTP). NOTE: This vulnerability only affects products that are no longer supported by the maintainer. <p>Publish Date: 2023-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28155>CVE-2023-28155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-28155 (Medium) detected in request-2.88.2.tgz, request-2.79.0.tgz - ## CVE-2023-28155 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>request-2.88.2.tgz</b>, <b>request-2.79.0.tgz</b></p></summary> <p> <details><summary><b>request-2.88.2.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.88.2.tgz">https://registry.npmjs.org/request/-/request-2.88.2.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/request</p> <p> Dependency Hierarchy: - jest-25.4.0.tgz (Root Library) - core-25.4.0.tgz - jest-config-25.4.0.tgz - jest-environment-jsdom-25.4.0.tgz - jsdom-15.2.1.tgz - :x: **request-2.88.2.tgz** (Vulnerable Library) </details> <details><summary><b>request-2.79.0.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.79.0.tgz">https://registry.npmjs.org/request/-/request-2.79.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/request</p> <p> Dependency Hierarchy: - nodemon-1.11.0.tgz (Root Library) - chokidar-1.7.0.tgz - fsevents-1.0.17.tgz - node-pre-gyp-0.6.32.tgz - :x: **request-2.79.0.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/turkdevops/babel-bot/commit/3892a076299e4fa06454dfb5727e7957834586de">3892a076299e4fa06454dfb5727e7957834586de</a></p> <p>Found in base branch: <b>update-dep</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ** UNSUPPORTED WHEN ASSIGNED ** The Request package through 2.88.1 for Node.js allows a bypass of SSRF mitigations via an 
attacker-controller server that does a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP). NOTE: This vulnerability only affects products that are no longer supported by the maintainer. <p>Publish Date: 2023-03-16 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28155>CVE-2023-28155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in request tgz request tgz cve medium severity vulnerability vulnerable libraries request tgz request tgz request tgz simplified http request client library home page a href path to dependency file package json path to vulnerable library node modules request dependency hierarchy jest tgz root library core tgz jest config tgz jest environment jsdom tgz jsdom tgz x request tgz vulnerable library request tgz simplified http request client library home page a href path to dependency file package json path to vulnerable library node modules request dependency hierarchy nodemon tgz root library chokidar tgz fsevents tgz node pre gyp tgz x request tgz vulnerable library found in head commit a href found in base branch update dep vulnerability details unsupported when assigned the request package through for node js allows a bypass of ssrf mitigations via an attacker controller server that does a cross protocol redirect http to https or https to http note this vulnerability only affects products that are no longer supported by the maintainer publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
0
12,374
14,896,938,455
IssuesEvent
2021-01-21 11:03:33
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[iOS] Closed study > Incorrect error message 'Your session is expired' is shown on using old token
Bug P2 Process: Fixed Process: Tested dev iOS
**Steps:** 1. PM admin adds an user to site A for a study 2. mobile app user enrolls into study successfully 3. WIthdraw from the study 4. PM admin adds the same user to site B for the same study 5. Enter old token of site A in eligibility screen 6. Observe the error message **Actual:** Incorrect error message 'Your session is expired' is shown on using old token **Expected:** 'Token already in use' should be displayed **Screenshots:** iOS: ![iOS](https://user-images.githubusercontent.com/60386291/104907615-5da05380-59ab-11eb-84c0-1e1a08253820.png) Android for reference: ![Android](https://user-images.githubusercontent.com/60386291/104907620-609b4400-59ab-11eb-82bf-b30797611e61.jpg)
2.0
[iOS] Closed study > Incorrect error message 'Your session is expired' is shown on using old token - **Steps:** 1. PM admin adds an user to site A for a study 2. mobile app user enrolls into study successfully 3. WIthdraw from the study 4. PM admin adds the same user to site B for the same study 5. Enter old token of site A in eligibility screen 6. Observe the error message **Actual:** Incorrect error message 'Your session is expired' is shown on using old token **Expected:** 'Token already in use' should be displayed **Screenshots:** iOS: ![iOS](https://user-images.githubusercontent.com/60386291/104907615-5da05380-59ab-11eb-84c0-1e1a08253820.png) Android for reference: ![Android](https://user-images.githubusercontent.com/60386291/104907620-609b4400-59ab-11eb-82bf-b30797611e61.jpg)
process
closed study incorrect error message your session is expired is shown on using old token steps pm admin adds an user to site a for a study mobile app user enrolls into study successfully withdraw from the study pm admin adds the same user to site b for the same study enter old token of site a in eligibility screen observe the error message actual incorrect error message your session is expired is shown on using old token expected token already in use should be displayed screenshots ios android for reference
1
570
3,032,058,620
IssuesEvent
2015-08-05 05:37:42
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
Spelling error in the word "Заявка" ("Завявка")
In process of testing move to backlog test
On the Statuses tab, when entering the number of an invalid application and clicking View, the tooltip contains a spelling error in the word "Заявка" ("Завявка"). Screenshot here http://screencast.com/t/2QOwjeyfSrH
1.0
Spelling error in the word "Заявка" ("Завявка") - On the Statuses tab, when entering the number of an invalid application and clicking View, the tooltip contains a spelling error in the word "Заявка" ("Завявка"). Screenshot here http://screencast.com/t/2QOwjeyfSrH
process
spelling error in the word заявка завявка on the statuses tab when entering the number of an invalid application and clicking view the tooltip contains a spelling error in the word заявка завявка screenshot here
1
3,543
6,584,406,738
IssuesEvent
2017-09-13 10:04:16
DynareTeam/dynare
https://api.github.com/repos/DynareTeam/dynare
closed
Make preprocessor recognize state variables introduced by optimal policy
bug preprocessor
In the mod-file ``` var pai, c, n, r, a; varexo u; parameters beta, rho, epsilon, omega, phi, gamma; beta=0.99; gamma=3; omega=17; epsilon=8; phi=1; rho=0.95; model; a = rho*a(-1)+u; 1/c = beta*r/(c(+1)*pai(+1)); pai*(pai-1)/c = beta*pai(+1)*(pai(+1)-1)/c(+1)+epsilon*phi*n^(gamma+1)/omega -exp(a)*n*(epsilon-1)/(omega*c); exp(a)*n = c+(omega/2)*(pai-1)^2; end; initval; r=1; end; histval; a(0)=1; r(0)=1; end; steady_state_model; a = 0; pai = beta*r; c = find_c(0.96,pai,beta,epsilon,phi,gamma,omega); n = c+(omega/2)*(pai-1)^2; end; shocks; var u; stderr 0.008; var u; periods 1; values 1; end; options_.dr_display_tol=0; planner_objective(ln(c)-phi*((n^(1+gamma))/(1+gamma))); ramsey_policy(planner_discount=0.99,order=1,instruments=(r)); ``` `r` becomes a state variable during Ramsey computations (albeit with 0 coefficients). As a state, one should be able to set its lagged value using `histval`. But this is not possible, as the preprocessor does not recognize `r` as a state.
1.0
Make preprocessor recognize state variables introduced by optimal policy - In the mod-file ``` var pai, c, n, r, a; varexo u; parameters beta, rho, epsilon, omega, phi, gamma; beta=0.99; gamma=3; omega=17; epsilon=8; phi=1; rho=0.95; model; a = rho*a(-1)+u; 1/c = beta*r/(c(+1)*pai(+1)); pai*(pai-1)/c = beta*pai(+1)*(pai(+1)-1)/c(+1)+epsilon*phi*n^(gamma+1)/omega -exp(a)*n*(epsilon-1)/(omega*c); exp(a)*n = c+(omega/2)*(pai-1)^2; end; initval; r=1; end; histval; a(0)=1; r(0)=1; end; steady_state_model; a = 0; pai = beta*r; c = find_c(0.96,pai,beta,epsilon,phi,gamma,omega); n = c+(omega/2)*(pai-1)^2; end; shocks; var u; stderr 0.008; var u; periods 1; values 1; end; options_.dr_display_tol=0; planner_objective(ln(c)-phi*((n^(1+gamma))/(1+gamma))); ramsey_policy(planner_discount=0.99,order=1,instruments=(r)); ``` `r` becomes a state variable during Ramsey computations (albeit with 0 coefficients). As a state, one should be able to set its lagged value using `histval`. But this is not possible, as the preprocessor does not recognize `r` as a state.
process
make preprocessor recognize state variables introduced by optimal policy in the mod file var pai c n r a varexo u parameters beta rho epsilon omega phi gamma beta gamma omega epsilon phi rho model a rho a u c beta r c pai pai pai c beta pai pai c epsilon phi n gamma omega exp a n epsilon omega c exp a n c omega pai end initval r end histval a r end steady state model a pai beta r c find c pai beta epsilon phi gamma omega n c omega pai end shocks var u stderr var u periods values end options dr display tol planner objective ln c phi n gamma gamma ramsey policy planner discount order instruments r r becomes a state variable during ramsey computations albeit with coefficients as a state one should be able to set its lagged value using histval but this is not possible as the preprocessor does not recognize r as a state
1
36,541
17,778,373,919
IssuesEvent
2021-08-30 22:48:10
google/iree
https://api.github.com/repos/google/iree
closed
Add Vulkan tracing via Tracy
enhancement ➕ runtime performance ⚡ hal/vulkan
Super hacky (but working) WIP in the benvanik-tracy-vulkan branch. We'll likely want to expose HAL APIs for timestamping and debug groups so that we can get a consistent style of command buffer submission tracking across all backends (not just Vulkan).
True
Add Vulkan tracing via Tracy - Super hacky (but working) WIP in the benvanik-tracy-vulkan branch. We'll likely want to expose HAL APIs for timestamping and debug groups so that we can get a consistent style of command buffer submission tracking across all backends (not just Vulkan).
non_process
add vulkan tracing via tracy super hacky but working wip in the benvanik tracy vulkan branch we ll likely want to expose hal apis for timestamping and debug groups so that we can get a consistent style of command buffer submission tracking across all backends not just vulkan
0
9,641
6,412,178,182
IssuesEvent
2017-08-08 02:03:17
FReBOmusic/FReBO
https://api.github.com/repos/FReBOmusic/FReBO
opened
Horizontal Scroller
Usability
In the event that a Listing Widget, with multiple returned images, is displayed on the screen. **Expected Response**: The user should be able to horizontally scroll through the available images for the listed item, on the background of the List Widget.
True
Horizontal Scroller - In the event that a Listing Widget, with multiple returned images, is displayed on the screen. **Expected Response**: The user should be able to horizontally scroll through the available images for the listed item, on the background of the List Widget.
non_process
horizontal scroller in the event that a listing widget with multiple returned images is displayed on the screen expected response the user should be able to horizontally scroll through the available images for the listed item on the background of the list widget
0
13,478
16,006,392,028
IssuesEvent
2021-04-20 03:39:07
ankidroid/Anki-Android
https://api.github.com/repos/ankidroid/Anki-Android
closed
Amazon AppStore publish rejected for camera permission request
Dev Needs Triage Release process
###### Reproduction Steps 1. Attempt to submit a build of AnkiDroid from release-2.14 branch to amazon app store 2. 3. ###### Expected Result Submission successful! ###### Actual Result ``` Your app submission does not meet one or more of our acceptance criteria for some or all targeted devices. Failure reason(s) are listed below: To ensure the best user experience apps may not request permissions for capabilities not present within Amazon Fire tablet or Fire TV devices. Camera functionality is not currently supported on Fire TV devices. Therefore, please remove the camera permission request and resubmit your app. For additional details see the Test Criteria for Amazon Appstore Apps section of the Appstore Developer Console. You can view our guidelines on the Appstore Developer Portal. ``` https://developer.amazon.com/docs/app-submission/faq-submission.html#device-targeting :shrug: ###### Debug info Refer to the [support page](https://ankidroid.org/docs/help.html) if you are unsure where to get the "debug info". ###### Research *Enter an [x] character to confirm the points below:* - [ ] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid - [ ] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue - [ ] I have searched for similar existing issues here and on the user forum - [ ] (Optional) I have confirmed the issue is not resolved in the latest alpha release ([instructions](https://docs.ankidroid.org/manual.html#betaTesting))
1.0
Amazon AppStore publish rejected for camera permission request - ###### Reproduction Steps 1. Attempt to submit a build of AnkiDroid from release-2.14 branch to amazon app store 2. 3. ###### Expected Result Submission successful! ###### Actual Result ``` Your app submission does not meet one or more of our acceptance criteria for some or all targeted devices. Failure reason(s) are listed below: To ensure the best user experience apps may not request permissions for capabilities not present within Amazon Fire tablet or Fire TV devices. Camera functionality is not currently supported on Fire TV devices. Therefore, please remove the camera permission request and resubmit your app. For additional details see the Test Criteria for Amazon Appstore Apps section of the Appstore Developer Console. You can view our guidelines on the Appstore Developer Portal. ``` https://developer.amazon.com/docs/app-submission/faq-submission.html#device-targeting :shrug: ###### Debug info Refer to the [support page](https://ankidroid.org/docs/help.html) if you are unsure where to get the "debug info". ###### Research *Enter an [x] character to confirm the points below:* - [ ] I have read the [support page](https://ankidroid.org/docs/help.html) and am reporting a bug or enhancement request specific to AnkiDroid - [ ] I have checked the [manual](https://ankidroid.org/docs/manual.html) and the [FAQ](https://github.com/ankidroid/Anki-Android/wiki/FAQ) and could not find a solution to my issue - [ ] I have searched for similar existing issues here and on the user forum - [ ] (Optional) I have confirmed the issue is not resolved in the latest alpha release ([instructions](https://docs.ankidroid.org/manual.html#betaTesting))
process
amazon appstore publish rejected for camera permission request reproduction steps attempt to submit a build of ankidroid from release branch to amazon app store expected result submission successful actual result your app submission does not meet one or more of our acceptance criteria for some or all targeted devices failure reason s are listed below to ensure the best user experience apps may not request permissions for capabilities not present within amazon fire tablet or fire tv devices camera functionality is not currently supported on fire tv devices therefore please remove the camera permission request and resubmit your app for additional details see the test criteria for amazon appstore apps section of the appstore developer console you can view our guidelines on the appstore developer portal shrug debug info refer to the if you are unsure where to get the debug info research enter an character to confirm the points below i have read the and am reporting a bug or enhancement request specific to ankidroid i have checked the and the and could not find a solution to my issue i have searched for similar existing issues here and on the user forum optional i have confirmed the issue is not resolved in the latest alpha release
1
22,177
30,727,899,466
IssuesEvent
2023-07-27 21:22:56
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[processor/transform] not enough preconditions to guard against warnings
enhancement priority:p2 processor/transform pkg/ottl
### Component(s) processor/transform ### Is your feature request related to a problem? Please describe. I have a transform processor configured to truncate log attributes and the body field: ```yaml transform/newrelic: log_statements: - context: resource statements: - truncate_all(attributes, 4090) - context: log statements: - truncate_all(attributes, 4090) - set(body, Substring(body.string, 0, 4090)) ``` This works, but the problem is when we send a `body` whose length is less than 4090 this warning message occurs: `invalid range for substring function, 4090 cannot be greater than the length of target string` ### Describe the solution you'd like It would be nice to be able to have a precondition like `WHERE body.Length() > 4090` to be able to guard against this warning. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
1.0
[processor/transform] not enough preconditions to guard against warnings - ### Component(s) processor/transform ### Is your feature request related to a problem? Please describe. I have a transform processor configured to truncate log attributes and the body field: ```yaml transform/newrelic: log_statements: - context: resource statements: - truncate_all(attributes, 4090) - context: log statements: - truncate_all(attributes, 4090) - set(body, Substring(body.string, 0, 4090)) ``` This works, but the problem is when we send a `body` whose length is less than 4090 this warning message occurs: `invalid range for substring function, 4090 cannot be greater than the length of target string` ### Describe the solution you'd like It would be nice to be able to have a precondition like `WHERE body.Length() > 4090` to be able to guard against this warning. ### Describe alternatives you've considered _No response_ ### Additional context _No response_
process
not enough preconditions to guard against warnings component s processor transform is your feature request related to a problem please describe i have a transform processor configured to truncate log attributes and the body field yaml transform newrelic log statements context resource statements truncate all attributes context log statements truncate all attributes set body substring body string this works but the problem is when we send a body whose length is less than this warning message occurs invalid range for substring function cannot be greater than the length of target string describe the solution you d like it would be nice to be able to have a precondition like where body length to be able to guard against this warning describe alternatives you ve considered no response additional context no response
1
15,760
19,912,486,154
IssuesEvent
2022-01-25 18:37:48
MunchBit/MunchLove
https://api.github.com/repos/MunchBit/MunchLove
opened
Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account
feature Payment Process
**Title** Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account **Description** Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account
1.0
Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account - **Title** Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account **Description** Send Restaurant Administrators their collated taking after predefined/configurable number of days. Eg the money of their sales accrued over 3 days for restaurant X is transferred into their account
process
send restaurant administrators their collated taking after predefined configurable number of days eg the money of their sales accrued over days for restaurant x is transferred into their account title send restaurant administrators their collated taking after predefined configurable number of days eg the money of their sales accrued over days for restaurant x is transferred into their account description send restaurant administrators their collated taking after predefined configurable number of days eg the money of their sales accrued over days for restaurant x is transferred into their account
1
484
2,920,565,782
IssuesEvent
2015-06-24 19:36:11
e-government-ua/i
https://api.github.com/repos/e-government-ua/i
closed
На главном портале, в футере(внизу) справа, добавить еще одну ссылку
In process of testing test
Приєднатись <br>на GitHub https://github.com/e-government-ua/i/wiki/%D0%AF%D0%BA-%D0%BF%D0%BE%D1%87%D0%B0%D1%82%D0%B8-%D1%80%D0%BE%D0%B1%D0%BE%D1%82%D1%83 (самая правая)
1.0
На главном портале, в футере(внизу) справа, добавить еще одну ссылку - Приєднатись <br>на GitHub https://github.com/e-government-ua/i/wiki/%D0%AF%D0%BA-%D0%BF%D0%BE%D1%87%D0%B0%D1%82%D0%B8-%D1%80%D0%BE%D0%B1%D0%BE%D1%82%D1%83 (самая правая)
process
на главном портале в футере внизу справа добавить еще одну ссылку приєднатись на github самая правая
1
110,528
11,705,042,255
IssuesEvent
2020-03-07 13:31:45
bounswe/bounswe2020group4
https://api.github.com/repos/bounswe/bounswe2020group4
closed
Writing meeting notes with customer
Effort: Medium Priority: High Status: In-Progress Type: Documentation
On Tuesday 03/03 Berke, Berkay and I attended a meeting with our customers. Meeting notes should be added to wiki. **Deadline:** 05/03/2020 23:59
1.0
Writing meeting notes with customer - On Tuesday 03/03 Berke, Berkay and I attended a meeting with our customers. Meeting notes should be added to wiki. **Deadline:** 05/03/2020 23:59
non_process
writing meeting notes with customer on tuesday berke berkay and i attended a meeting with our customers meeting notes should be added to wiki deadline
0
375,097
26,147,518,123
IssuesEvent
2022-12-30 08:11:12
sparks-baird/self-driving-lab-demo
https://api.github.com/repos/sparks-baird/self-driving-lab-demo
closed
Wire gauges
documentation
The 18 gauge electrical wire I've linked to elsewhere may be too thick, and the 20 gauge wire I linked to may be too flimsy. Planning to update soon. The 14 gauge sculpting wire from Amazon should work just fine, though.
1.0
Wire gauges - The 18 gauge electrical wire I've linked to elsewhere may be too thick, and the 20 gauge wire I linked to may be too flimsy. Planning to update soon. The 14 gauge sculpting wire from Amazon should work just fine, though.
non_process
wire gauges the gauge electrical wire i ve linked to elsewhere may be too thick and the gauge wire i linked to may be too flimsy planning to update soon the gauge sculpting wire from amazon should work just fine though
0
39,021
9,160,024,564
IssuesEvent
2019-03-01 05:33:56
alex-hhh/emacs-sql-indent
https://api.github.com/repos/alex-hhh/emacs-sql-indent
closed
Erroneous regexp in sqlind-good-if-candidate
defect
The regexp ``` "end\\|table\\|view\\|index\\|trigger\\procedude\\|function\\|package\\|body"``` in `sqlind-good-if-candidate` needs some attention: there seems to be a missing vertical bar before "procedure", which also looks misspelled.
1.0
Erroneous regexp in sqlind-good-if-candidate - The regexp ``` "end\\|table\\|view\\|index\\|trigger\\procedude\\|function\\|package\\|body"``` in `sqlind-good-if-candidate` needs some attention: there seems to be a missing vertical bar before "procedure", which also looks misspelled.
non_process
erroneous regexp in sqlind good if candidate the regexp end table view index trigger procedude function package body in sqlind good if candidate needs some attention there seems to be a missing vertical bar before procedure which also looks misspelled
0
20,510
27,168,851,660
IssuesEvent
2023-02-17 17:29:15
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
[connectors/spanmetrics] Drop `_total` from generated metrics names.
processor/spanmetrics needs triage connector/spanmetrics
### Component(s) connector/spanmetrics ### Describe the issue you're reporting ## Description The `spanmeterics` connector generates a metric with `_total` which is done to follow the Prometheus naming convention. The `spanmeterics` connector is OTel component and should be agnostic to any downstream components, we would like to drop `_total` from metrics names. In the case of Prometheus exporters, exporters will automatically add `_total` where it is needed. See discussion https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18199#issuecomment-1426002784 It is the change that will be done in the new (not yet enabled) component, so it does not break anything for the existing users. However, we have to inform users about these changes when migrating from `spanmeterics` processor to `spanmeterics` connector.
1.0
[connectors/spanmetrics] Drop `_total` from generated metrics names. - ### Component(s) connector/spanmetrics ### Describe the issue you're reporting ## Description The `spanmeterics` connector generates a metric with `_total` which is done to follow the Prometheus naming convention. The `spanmeterics` connector is OTel component and should be agnostic to any downstream components, we would like to drop `_total` from metrics names. In the case of Prometheus exporters, exporters will automatically add `_total` where it is needed. See discussion https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/18199#issuecomment-1426002784 It is the change that will be done in the new (not yet enabled) component, so it does not break anything for the existing users. However, we have to inform users about these changes when migrating from `spanmeterics` processor to `spanmeterics` connector.
process
drop total from generated metrics names component s connector spanmetrics describe the issue you re reporting description the spanmeterics connector generates a metric with total which is done to follow the prometheus naming convention the spanmeterics connector is otel component and should be agnostic to any downstream components we would like to drop total from metrics names in the case of prometheus exporters exporters will automatically add total where it is needed see discussion it is the change that will be done in the new not yet enabled component so it does not break anything for the existing users however we have to inform users about these changes when migrating from spanmeterics processor to spanmeterics connector
1
9,263
12,294,715,047
IssuesEvent
2020-05-11 01:15:56
allinurl/goaccess
https://api.github.com/repos/allinurl/goaccess
closed
On-disk database crashes with SIGSEGV
bug duplicate log-processing on-disk unable to replicate
Hi. One day i tried to restart GoAccess got this in logs: ``` SIGINT caught! Closing GoAccess... Stopping WebSocket server... ==7467== GoAccess 1.1.1 crashed by Sig 11 ==7467== ==7467== VALUES AT CRASH POINT ==7467== ==7467== Line number: 1123223 ==7467== Offset: 0 ==7467== Invalid data: 75451 ==7467== Piping: 0 ==7467== Response size: 20278454359 bytes ==7467== ==7467== STACK TRACE: ==7467== ==7467== 0 /usr/bin/goaccess(sigsegv_handler+0x13e) [0x40d10e] ==7467== 1 /lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7f564114cd40] ==7467== 2 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(+0x4066d) [0x7f564172366d] ==7467== 3 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcbdbput+0xcf) [0x7f564172433f] ==7467== 4 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcadbput+0x1ec) [0x7f564174874c] ==7467== 5 /usr/bin/goaccess() [0x428777] ==7467== 6 /usr/bin/goaccess() [0x4287dd] ==7467== 7 /usr/bin/goaccess() [0x41b6e0] ==7467== 8 /usr/bin/goaccess() [0x41bcc6] ==7467== 9 /usr/bin/goaccess(parse_log+0xce) [0x41bf2e] ==7467== 10 /usr/bin/goaccess(main+0x21d) [0x409acd] ==7467== 11 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f5641137ec5] ==7467== 12 /usr/bin/goaccess() [0x40aac9] ==7467== ==7467== Please report it by opening an issue on GitHub: ==7467== https://github.com/allinurl/goaccess/issues ``` Here is current config: ``` time-format %H:%M:%S date-format %d/%b/%Y log-format %h - %^ [%d:%t %^] "%M %U" %s/%T %^/%^/%^ %^/%^ %b/%^ "%R" "%u" "%^" %^ config-dialog false hl-header true html-prefs {"theme":"bright","perPage":20,"layout":"vertical","showTables":true,"visitors":{"plot":{"chartType":"bar"}}} json-pretty-print false no-color false no-column-names false no-csv-summary false no-progress false no-tab-scroll false with-mouse false addr 127.0.0.1 origin http://somedomain.tld port 7890 real-time-html true ws-url somedomain.tld/weblive/ log-file /var/log/nginx/somedomain.tld.access.log agent-list false with-output-resolver false http-method yes http-protocol yes output-format /var/tmp/webstat.html no-query-string false no-term-resolver false 444-as-404 false 4xx-to-unique-count false all-static-files true double-decode false ignore-crawlers false crawlers-only false ignore-referer somedomain.tld ignore-referer *.somedomain.tld real-os true static-file .css static-file .js static-file .jpg static-file .png static-file .gif static-file .ico static-file .jpeg static-file .pdf static-file .txt static-file .csv static-file .zip static-file .mp3 static-file .mp4 static-file .mpeg static-file .mpg static-file .exe static-file .swf static-file .woff static-file .woff2 static-file .xls static-file .xlsx static-file .doc static-file .docx static-file .ppt static-file .pptx static-file .iso static-file .gz static-file .rar static-file .svg static-file .bmp static-file .tar static-file .tgz static-file .tiff static-file .tif static-file .ttf static-file .flv geoip-database /usr/share/GeoIP/GeoLiteCity.dat keep-db-files true load-from-disk true db-path /var/lib/goaccess/ ``` And additional info about server: ``` root@someserver:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.3 LTS Release: 14.04 Codename: trusty root@someserver (20170107-1531-MSK):~# uname -a Linux someserver.tld 4.4.0-47-generic #68~14.04.1-Ubuntu SMP Wed Oct 26 19:42:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux root@someserver:~# dpkg -l 'goaccess*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-===============================================-============================-============================-==================================================================================================== rc goaccess 1:1.1.1 amd64 no description given ii goaccess-tcb 1:1.1.1 amd64 no description given root@someserver:~# apt-cache policy goaccess-tcb goaccess-tcb: Installed: 1:1.1.1 Candidate: 1:1.1.1 Version table: *** 1:1.1.1 0 500 http://deb.goaccess.io/ trusty/main amd64 Packages 100 /var/lib/dpkg/status ``` Is there anything else i could show to clarify this issue?
1.0
On-disk database crashes with SIGSEGV - Hi. One day i tried to restart GoAccess got this in logs: ``` SIGINT caught! Closing GoAccess... Stopping WebSocket server... ==7467== GoAccess 1.1.1 crashed by Sig 11 ==7467== ==7467== VALUES AT CRASH POINT ==7467== ==7467== Line number: 1123223 ==7467== Offset: 0 ==7467== Invalid data: 75451 ==7467== Piping: 0 ==7467== Response size: 20278454359 bytes ==7467== ==7467== STACK TRACE: ==7467== ==7467== 0 /usr/bin/goaccess(sigsegv_handler+0x13e) [0x40d10e] ==7467== 1 /lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7f564114cd40] ==7467== 2 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(+0x4066d) [0x7f564172366d] ==7467== 3 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcbdbput+0xcf) [0x7f564172433f] ==7467== 4 /usr/lib/x86_64-linux-gnu/libtokyocabinet.so.9(tcadbput+0x1ec) [0x7f564174874c] ==7467== 5 /usr/bin/goaccess() [0x428777] ==7467== 6 /usr/bin/goaccess() [0x4287dd] ==7467== 7 /usr/bin/goaccess() [0x41b6e0] ==7467== 8 /usr/bin/goaccess() [0x41bcc6] ==7467== 9 /usr/bin/goaccess(parse_log+0xce) [0x41bf2e] ==7467== 10 /usr/bin/goaccess(main+0x21d) [0x409acd] ==7467== 11 /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f5641137ec5] ==7467== 12 /usr/bin/goaccess() [0x40aac9] ==7467== ==7467== Please report it by opening an issue on GitHub: ==7467== https://github.com/allinurl/goaccess/issues ``` Here is current config: ``` time-format %H:%M:%S date-format %d/%b/%Y log-format %h - %^ [%d:%t %^] "%M %U" %s/%T %^/%^/%^ %^/%^ %b/%^ "%R" "%u" "%^" %^ config-dialog false hl-header true html-prefs {"theme":"bright","perPage":20,"layout":"vertical","showTables":true,"visitors":{"plot":{"chartType":"bar"}}} json-pretty-print false no-color false no-column-names false no-csv-summary false no-progress false no-tab-scroll false with-mouse false addr 127.0.0.1 origin http://somedomain.tld port 7890 real-time-html true ws-url somedomain.tld/weblive/ log-file /var/log/nginx/somedomain.tld.access.log agent-list false with-output-resolver false http-method yes http-protocol yes output-format /var/tmp/webstat.html no-query-string false no-term-resolver false 444-as-404 false 4xx-to-unique-count false all-static-files true double-decode false ignore-crawlers false crawlers-only false ignore-referer somedomain.tld ignore-referer *.somedomain.tld real-os true static-file .css static-file .js static-file .jpg static-file .png static-file .gif static-file .ico static-file .jpeg static-file .pdf static-file .txt static-file .csv static-file .zip static-file .mp3 static-file .mp4 static-file .mpeg static-file .mpg static-file .exe static-file .swf static-file .woff static-file .woff2 static-file .xls static-file .xlsx static-file .doc static-file .docx static-file .ppt static-file .pptx static-file .iso static-file .gz static-file .rar static-file .svg static-file .bmp static-file .tar static-file .tgz static-file .tiff static-file .tif static-file .ttf static-file .flv geoip-database /usr/share/GeoIP/GeoLiteCity.dat keep-db-files true load-from-disk true db-path /var/lib/goaccess/ ``` And additional info about server: ``` root@someserver:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.3 LTS Release: 14.04 Codename: trusty root@someserver (20170107-1531-MSK):~# uname -a Linux someserver.tld 4.4.0-47-generic #68~14.04.1-Ubuntu SMP Wed Oct 26 19:42:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux root@someserver:~# dpkg -l 'goaccess*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Architecture Description +++-===============================================-============================-============================-==================================================================================================== rc goaccess 1:1.1.1 amd64 no description given ii goaccess-tcb 1:1.1.1 amd64 no description given root@someserver:~# apt-cache policy goaccess-tcb goaccess-tcb: Installed: 1:1.1.1 Candidate: 1:1.1.1 Version table: *** 1:1.1.1 0 500 http://deb.goaccess.io/ trusty/main amd64 Packages 100 /var/lib/dpkg/status ``` Is there anything else i could show to clarify this issue?
process
on disk database crashes with sigsegv hi one day i tried to restart goaccess got this in logs sigint caught closing goaccess stopping websocket server goaccess crashed by sig values at crash point line number offset invalid data piping response size bytes stack trace usr bin goaccess sigsegv handler lib linux gnu libc so usr lib linux gnu libtokyocabinet so usr lib linux gnu libtokyocabinet so tcbdbput usr lib linux gnu libtokyocabinet so tcadbput usr bin goaccess usr bin goaccess usr bin goaccess usr bin goaccess usr bin goaccess parse log usr bin goaccess main lib linux gnu libc so libc start main usr bin goaccess please report it by opening an issue on github here is current config time format h m s date format d b y log format h m u s t b r u config dialog false hl header true html prefs theme bright perpage layout vertical showtables true visitors plot charttype bar json pretty print false no color false no column names false no csv summary false no progress false no tab scroll false with mouse false addr origin port real time html true ws url somedomain tld weblive log file var log nginx somedomain tld access log agent list false with output resolver false http method yes http protocol yes output format var tmp webstat html no query string false no term resolver false as false to unique count false all static files true double decode false ignore crawlers false crawlers only false ignore referer somedomain tld ignore referer somedomain tld real os true static file css static file js static file jpg static file png static file gif static file ico static file jpeg static file pdf static file txt static file csv static file zip static file static file static file mpeg static file mpg static file exe static file swf static file woff static file static file xls static file xlsx static file doc static file docx static file ppt static file pptx static file iso static file gz static file rar static file svg static file bmp static file tar static file tgz static file tiff static file tif static file ttf static file flv geoip database usr share geoip geolitecity dat keep db files true load from disk true db path var lib goaccess and additional info about server root someserver lsb release a no lsb modules are available distributor id ubuntu description ubuntu lts release codename trusty root someserver msk uname a linux someserver tld generic ubuntu smp wed oct utc gnu linux root someserver dpkg l goaccess desired unknown install remove purge hold status not inst conf files unpacked half conf half inst trig await trig pend err none reinst required status err uppercase bad name version architecture description rc goaccess no description given ii goaccess tcb no description given root someserver apt cache policy goaccess tcb goaccess tcb installed candidate version table trusty main packages var lib dpkg status is there anything else i could show to clarify this issue
1
12,525
14,968,211,638
IssuesEvent
2021-01-27 16:34:29
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
tRNA surveillance
New term request PomBase RNA processes community curation
We'd like a new term to use with PMID:32841241: id: GO:new name: tRNA surveillance namespace: biological_process def: "The set of processes involved in identifying and degrading defective or aberrant tRNAs." [GOC:mah, PMID:32841241] synonym: "tRNA quality control" EXACT [GOC:mah] is_a: GO:0071025 ! RNA surveillance I also think GO:0071038 could be moved from is_a GO:0016078 to is_a GO:new. The paper introduction describes two tRNA QC pathways in budding yeast -- "nuclear surveillance" and "rapid tRNA decay (RTD)" -- but I don't have time to look into whether GO:0071038 corresponds to one or the other of them. S. pombe appears to have the RTD pathway, but we can annotate to the more general term requested here.
1.0
tRNA surveillance - We'd like a new term to use with PMID:32841241: id: GO:new name: tRNA surveillance namespace: biological_process def: "The set of processes involved in identifying and degrading defective or aberrant tRNAs." [GOC:mah, PMID:32841241] synonym: "tRNA quality control" EXACT [GOC:mah] is_a: GO:0071025 ! RNA surveillance I also think GO:0071038 could be moved from is_a GO:0016078 to is_a GO:new. The paper introduction describes two tRNA QC pathways in budding yeast -- "nuclear surveillance" and "rapid tRNA decay (RTD)" -- but I don't have time to look into whether GO:0071038 corresponds to one or the other of them. S. pombe appears to have the RTD pathway, but we can annotate to the more general term requested here.
process
trna surveillance we d like a new term to use with pmid id go new name trna surveillance namespace biological process def the set of processes involved in identifying and degrading defective or aberrant trnas synonym trna quality control exact is a go rna surveillance i also think go could be moved from is a go to is a go new the paper introduction describes two trna qc pathways in budding yeast nuclear surveillance and rapid trna decay rtd but i don t have time to look into whether go corresponds to one or the other of them s pombe appears to have the rtd pathway but we can annotate to the more general term requested here
1
17,255
23,038,891,550
IssuesEvent
2022-07-22 23:03:12
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
all: sync branch workflow is triggered and fails on forked repositories by default
type: process
Sync branch workflow has a schedule trigger. https://github.com/googleapis/google-cloud-go/blob/8a8ba85311f85701c97fd7c10f1d88b738ce423f/.github/workflows/sync_branch.yaml#L2-L4 And it was triggered on my fork then it failed because the required secret was not set. https://github.com/neglect-yp/google-cloud-go/actions/runs/2715223009 I can configure to disable GitHub Actions on my fork so that workflows are not triggered, but I think it's better that workflows are not triggered or all jobs in the workflow are skipped on forks by default.
1.0
all: sync branch workflow is triggered and fails on forked repositories by default - Sync branch workflow has a schedule trigger. https://github.com/googleapis/google-cloud-go/blob/8a8ba85311f85701c97fd7c10f1d88b738ce423f/.github/workflows/sync_branch.yaml#L2-L4 And it was triggered on my fork then it failed because the required secret was not set. https://github.com/neglect-yp/google-cloud-go/actions/runs/2715223009 I can configure to disable GitHub Actions on my fork so that workflows are not triggered, but I think it's better that workflows are not triggered or all jobs in the workflow are skipped on forks by default.
process
all sync branch workflow is triggered and fails on forked repositories by default sync branch workflow has a schedule trigger and it was triggered on my fork then it failed because the required secret was not set i can configure to disable github actions on my fork so that workflows are not triggered but i think it s better that workflows are not triggered or all jobs in the workflow are skipped on forks by default
1
3,588
6,621,671,691
IssuesEvent
2017-09-21 20:06:30
WikiWatershed/rapid-watershed-delineation
https://api.github.com/repos/WikiWatershed/rapid-watershed-delineation
closed
Round geojson data to significant figures
BigCZ Geoprocessing API
For smaller payloads, round the result geojson lat/lngs to 4 or 5 digits of precision.
1.0
Round geojson data to significant figures - For smaller payloads, round the result geojson lat/lngs to 4 or 5 digits of precision.
process
round geojson data to significant figures for smaller payloads round the result geojson lat lngs to or digits of precision
1
20,452
27,115,533,813
IssuesEvent
2023-02-15 18:15:48
hashgraph/hedera-json-rpc-relay
https://api.github.com/repos/hashgraph/hedera-json-rpc-relay
opened
Automate git branch process for main and release branch
enhancement P2 process
### Problem The current release process sees the following manual steps 1. A PR to bump the SNAPSHOT version in `main` 2. Creation of a new `release/<major>.<minor>` branch and a PR to bump the version to an `alpha-1`/`rc-1` version This results in us continuously questioning if we got every file in the bump PRs ### Solution We should automate the process to make it more reliable We can utilize a git action for this Resources - [Mirror Node Automated Release Process](https://github.com/hashgraph/hedera-mirror-node/blob/main/.github/workflows/release-automation.yml) - https://cli.github.com/manual/gh_release ### Alternatives _No response_
1.0
Automate git branch process for main and release branch - ### Problem The current release process sees the following manual steps 1. A PR to bump the SNAPSHOT version in `main` 2. Creation of a new `release/<major>.<minor>` branch and a PR to bump the version to an `alpha-1`/`rc-1` version This results in us continuously questioning if we got every file in the bump PRs ### Solution We should automate the process to make it more reliable We can utilize a git action for this Resources - [Mirror Node Automated Release Process](https://github.com/hashgraph/hedera-mirror-node/blob/main/.github/workflows/release-automation.yml) - https://cli.github.com/manual/gh_release ### Alternatives _No response_
process
automate git branch process for main and release branch problem the current release process sees the following manual steps a pr to bump the snapshot version in main creation of a new release branch and a pr to bump the version to an alpha rc version this results in us continuously questioning if we got every file in the bump prs solution we should automate the process to make it more reliable we can utilize a git action for this resources alternatives no response
1
25,955
12,807,378,235
IssuesEvent
2020-07-03 11:21:41
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
JIT generates redundant vmovaps
tenet-performance
### Description JIT generates redundant vmovaps. ### Configuration SharpLab's Release x64 ### Regression? No. ### Data [sharplab.io output](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKGIAYACY8gOgCUBXAOwwEt8YLAMIR8AB14AbGFADKMgG68wMXAG4a9Jq049+ggJI8ovLrmXrNjZu258BLIxhNmLLABoAOJBuqaAzEykDEIIDADeNAzRDFExANoAsjAYABYQACYG4pIAFMlpmdlikgDyYnwQZiwAggDmdbC45gowRpKmpnUAlAC6cdFiJgrYGDDaSAwZEBzA0iEwUl2yuDAo5LnTs/MK3QMxALwAfAwra6xCix1cdbJg2JLYULkAajBgGNDkpJ7CsKMwO4PJ4AVTM2AAZjBcrtuiwACoQIGPZ7dXz7JIpdJZHL5LFFHLlSrVeqNVQtNpca5dPr7Ia8EZjCZTGZzcbwqDce5jM7rTasnZ7agxQ4nXk6GZcDKIgBaMiR9xRr3enyg31+Qn+PMVoPBUJh3ThiORT1yaP2+xFkDMGBZ23ZNRBDAODESo1SLAACgYAFSkXwijEFbHFPLBgklIm8Kq4WoNJoUqP4XgAL1G0a4tOFMXpjPGzEmWzZDDYMAyHBUGyL8zSKWw3QilpiIygDAgzoYtYw2ADIoYLbbwRdbrSLAAYhxVhlEhxJHwSgBPGoZDK5S5LG7i3K5eGOhg+hjBOBthsH3LkBgAegYu5BhrQN8dD4g5uzIoHEECw/d48nZZnc68Iuy6rnAHJcgCW4QMEZ4Xtet73o+ILPqQr59kwADsbb+L20QAL40EG+I4iUeKFCRZQVBmsakgmDIwEmqbplUWYirmALMtW4yluWKikPy9qdqkdYNpEb4xF22AdiOHoTlOAHzpIS4rmuVzLKsfI7nuB5HkJIn7gw55XkhiG3g+kloX2knST+cn/rOinKaB4FcNygIaRs1mwcZCHdA+Zl6d2lkisQWGSbhDAEX41B4UAA===) ``` Cx.Reduce1(Double) L0000: vzeroupper L0003: vmovsd xmm1, [Cx.Reduce1(Double)] L000b: vsubsd xmm1, xmm1, xmm0 L000f: vmulsd xmm1, xmm1, [Cx.Reduce1(Double)] L0017: vroundsd xmm1, xmm1, xmm1, 0xa L001d: vmovsd xmm2, [Cx.Reduce1(Double)] L0025: vfmadd213sd xmm1, xmm2, xmm0 L002a: vmovaps xmm0, xmm1 L002e: vmulsd xmm0, xmm0, [Cx.Reduce1(Double)] L0036: vroundsd xmm0, xmm0, xmm0, 0xb L003c: vmovsd xmm2, [Cx.Reduce1(Double)] L0044: vxorps xmm0, xmm0, xmm2 L0048: vmovsd xmm2, [Cx.Reduce1(Double)] L0050: vfmadd213sd xmm0, xmm2, xmm1 L0055: ret Cx.Reduce2(Double) L0000: vzeroupper L0003: vmovsd xmm1, [Cx.Reduce2(Double)] L000b: vsubsd xmm1, xmm1, xmm0 L000f: vmulsd xmm1, xmm1, [Cx.Reduce2(Double)] L0017: vroundsd xmm1, xmm1, xmm1, 0xa L001d: vmovsd xmm2, [Cx.Reduce2(Double)] L0025: vfmadd213sd xmm1, xmm2, xmm0 L002a: vmovaps xmm0, xmm1 → L002e: vmovaps xmm1, xmm0 L0032: vmulsd xmm1, xmm1, [Cx.Reduce2(Double)] L003a: vroundsd xmm1, xmm1, xmm1, 0xb L0040: vmovsd xmm2, [Cx.Reduce2(Double)] L0048: vxorps xmm1, xmm1, xmm2 L004c: vmovsd xmm2, [Cx.Reduce2(Double)] L0054: vfmadd213sd xmm1, xmm2, xmm0 → L0059: vmovaps xmm0, xmm1 L005d: ret ``` ### Analysis Something's not right, I think? ``` L002a: vmovaps xmm0, xmm1 L002e: vmovaps xmm1, xmm0 ``` Reduce1 and Reduce2 should preferably be identical.
True
JIT generates redundant vmovaps - ### Description JIT generates redundant vmovaps. ### Configuration SharpLab's Release x64 ### Regression? No. ### Data [sharplab.io output](https://sharplab.io/#v2:EYLgxg9gTgpgtADwGwBYA0AXEBDAzgWwB8ABAJgEYBYAKGIAYACY8gOgCUBXAOwwEt8YLAMIR8AB14AbGFADKMgG68wMXAG4a9Jq049+ggJI8ovLrmXrNjZu258BLIxhNmLLABoAOJBuqaAzEykDEIIDADeNAzRDFExANoAsjAYABYQACYG4pIAFMlpmdlikgDyYnwQZiwAggDmdbC45gowRpKmpnUAlAC6cdFiJgrYGDDaSAwZEBzA0iEwUl2yuDAo5LnTs/MK3QMxALwAfAwra6xCix1cdbJg2JLYULkAajBgGNDkpJ7CsKMwO4PJ4AVTM2AAZjBcrtuiwACoQIGPZ7dXz7JIpdJZHL5LFFHLlSrVeqNVQtNpca5dPr7Ia8EZjCZTGZzcbwqDce5jM7rTasnZ7agxQ4nXk6GZcDKIgBaMiR9xRr3enyg31+Qn+PMVoPBUJh3ThiORT1yaP2+xFkDMGBZ23ZNRBDAODESo1SLAACgYAFSkXwijEFbHFPLBgklIm8Kq4WoNJoUqP4XgAL1G0a4tOFMXpjPGzEmWzZDDYMAyHBUGyL8zSKWw3QilpiIygDAgzoYtYw2ADIoYLbbwRdbrSLAAYhxVhlEhxJHwSgBPGoZDK5S5LG7i3K5eGOhg+hjBOBthsH3LkBgAegYu5BhrQN8dD4g5uzIoHEECw/d48nZZnc68Iuy6rnAHJcgCW4QMEZ4Xtet73o+ILPqQr59kwADsbb+L20QAL40EG+I4iUeKFCRZQVBmsakgmDIwEmqbplUWYirmALMtW4yluWKikPy9qdqkdYNpEb4xF22AdiOHoTlOAHzpIS4rmuVzLKsfI7nuB5HkJIn7gw55XkhiG3g+kloX2knST+cn/rOinKaB4FcNygIaRs1mwcZCHdA+Zl6d2lkisQWGSbhDAEX41B4UAA===) ``` Cx.Reduce1(Double) L0000: vzeroupper L0003: vmovsd xmm1, [Cx.Reduce1(Double)] L000b: vsubsd xmm1, xmm1, xmm0 L000f: vmulsd xmm1, xmm1, [Cx.Reduce1(Double)] L0017: vroundsd xmm1, xmm1, xmm1, 0xa L001d: vmovsd xmm2, [Cx.Reduce1(Double)] L0025: vfmadd213sd xmm1, xmm2, xmm0 L002a: vmovaps xmm0, xmm1 L002e: vmulsd xmm0, xmm0, [Cx.Reduce1(Double)] L0036: vroundsd xmm0, xmm0, xmm0, 0xb L003c: vmovsd xmm2, [Cx.Reduce1(Double)] L0044: vxorps xmm0, xmm0, xmm2 L0048: vmovsd xmm2, [Cx.Reduce1(Double)] L0050: vfmadd213sd xmm0, xmm2, xmm1 L0055: ret Cx.Reduce2(Double) L0000: vzeroupper L0003: vmovsd xmm1, [Cx.Reduce2(Double)] L000b: vsubsd xmm1, xmm1, xmm0 L000f: vmulsd xmm1, xmm1, [Cx.Reduce2(Double)] L0017: vroundsd xmm1, xmm1, xmm1, 0xa L001d: vmovsd xmm2, [Cx.Reduce2(Double)] L0025: vfmadd213sd xmm1, xmm2, xmm0 L002a: vmovaps xmm0, xmm1 → L002e: vmovaps xmm1, xmm0 L0032: vmulsd xmm1, xmm1, [Cx.Reduce2(Double)] L003a: vroundsd xmm1, xmm1, xmm1, 0xb L0040: vmovsd xmm2, [Cx.Reduce2(Double)] L0048: vxorps xmm1, xmm1, xmm2 L004c: vmovsd xmm2, [Cx.Reduce2(Double)] L0054: vfmadd213sd xmm1, xmm2, xmm0 → L0059: vmovaps xmm0, xmm1 L005d: ret ``` ### Analysis Something's not right, I think? ``` L002a: vmovaps xmm0, xmm1 L002e: vmovaps xmm1, xmm0 ``` Reduce1 and Reduce2 should preferably be identical.
non_process
jit generates redundant vmovaps description jit generates redundant vmovaps configuration sharplab s release regression no data cx double vzeroupper vmovsd vsubsd vmulsd vroundsd vmovsd vmovaps vmulsd vroundsd vmovsd vxorps vmovsd ret cx double vzeroupper vmovsd vsubsd vmulsd vroundsd vmovsd vmovaps → vmovaps vmulsd vroundsd vmovsd vxorps vmovsd → vmovaps ret analysis something s not right i think vmovaps vmovaps and should preferably be identical
0
310,426
23,337,429,082
IssuesEvent
2022-08-09 11:14:31
nesmamanasra/TODO-Angular-Internship-PITA
https://api.github.com/repos/nesmamanasra/TODO-Angular-Internship-PITA
opened
TODO component
documentation
TODO component in this component the user can do - create TODO - update TODO text - update TODO status (TODO status start with todo -> in progress -> done) - delete TODO - add a filter to search in TODO status - add a filter to search in TODO text - the TODOS should be grouped by date (this means you should add search by date) - user can share the TODO with other users, if the user shares a TODO other users can only view this TODO without taking any action
1.0
TODO component - TODO component in this component the user can do - create TODO - update TODO text - update TODO status (TODO status start with todo -> in progress -> done) - delete TODO - add a filter to search in TODO status - add a filter to search in TODO text - the TODOS should be grouped by date (this means you should add search by date) - user can share the TODO with other users, if the user shares a TODO other users can only view this TODO without taking any action
non_process
todo component todo component in this component the user can do create todo update todo text update todo status todo status start with todo in progress done delete todo add a filter to search in todo status add a filter to search in todo text the todos should be grouped by date this means you should add search by date user can share the todo with other users if the user shares a todo other users can only view this todo without taking any action
0
3,314
6,419,177,099
IssuesEvent
2017-08-08 20:37:41
TomorrowPartners/tomorrow-web
https://api.github.com/repos/TomorrowPartners/tomorrow-web
closed
Chat about Tomorrow's current and future social media strategy
process_in progress
From the Slacks, summarized: `Question -- what do we do now for Tomorrow social media/news updates? Looks like we update Facebook and Twitter maybe once per month (and maybe our company LinkedIn is connected to our Facebook), we have a Google+ that doesn't have anything on it, maybe no Instagram, and we haven't updated the microblog on the site....since last year ;P Does that sound about right? And Is it anyone's responsibility to keep track of all that? ` Where we live: - [Twitter](https://twitter.com/TmrrwPartners) - [Facebook](https://www.facebook.com/TomorrowPartners) - [Google+](https://plus.google.com/100180325385417658164) - [LinkedIn](https://www.linkedin.com/company-beta/3578978/)
1.0
Chat about Tomorrow's current and future social media strategy - From the Slacks, summarized: `Question -- what do we do now for Tomorrow social media/news updates? Looks like we update Facebook and Twitter maybe once per month (and maybe our company LinkedIn is connected to our Facebook), we have a Google+ that doesn't have anything on it, maybe no Instagram, and we haven't updated the microblog on the site....since last year ;P Does that sound about right? And Is it anyone's responsibility to keep track of all that? ` Where we live: - [Twitter](https://twitter.com/TmrrwPartners) - [Facebook](https://www.facebook.com/TomorrowPartners) - [Google+](https://plus.google.com/100180325385417658164) - [LinkedIn](https://www.linkedin.com/company-beta/3578978/)
process
chat about tomorrow s current and future social media strategy from the slacks summarized question what do we do now for tomorrow social media news updates looks like we update facebook and twitter maybe once per month and maybe our company linkedin is connected to our facebook we have a google that doesn t have anything on it maybe no instagram and we haven t updated the microblog on the site since last year p does that sound about right and is it anyone s responsibility to keep track of all that where we live
1
17,969
23,983,644,388
IssuesEvent
2022-09-13 17:03:13
JeroenMathon/NeosVR-Research-Initiative
https://api.github.com/repos/JeroenMathon/NeosVR-Research-Initiative
opened
Process information from Archived Sites tip
help wanted processing
Collected Data/Unverified/Discord/NCR and NEOS Team General Discussion/Archived_websites.txt
1.0
Process information from Archived Sites tip - Collected Data/Unverified/Discord/NCR and NEOS Team General Discussion/Archived_websites.txt
process
process information from archived sites tip collected data unverified discord ncr and neos team general discussion archived websites txt
1
95,954
12,065,718,880
IssuesEvent
2020-04-16 10:26:49
Lenkly/festivent
https://api.github.com/repos/Lenkly/festivent
closed
Create a design for the app
design frontend
The app needs a design that looks good, is flat and clean and self-explanatory to the user. ### UX As a user I want to understand how the app works without studying guidelines. The menu navigation should be self-explanatory and easy to handle. There shouldn't be any obstacles to create a new account. I want to save my favourite festivals in my profile and handle them there (fav and unfav). I want to handle my account easily and switch between themes. ### UI As a user I want a clean, clear app without distractions. It should be readable without having to zoom in. I want input fields and buttons that are big enough to press without pressing the wrong button by accident.
1.0
Create a design for the app - The app needs a design that looks good, is flat and clean and self-explanatory to the user. ### UX As a user I want to understand how the app works without studying guidelines. The menu navigation should be self-explanatory and easy to handle. There shouldn't be any obstacles to create a new account. I want to save my favourite festivals in my profile and handle them there (fav and unfav). I want to handle my account easily and switch between themes. ### UI As a user I want a clean, clear app without distractions. It should be readable without having to zoom in. I want input fields and buttons that are big enough to press without pressing the wrong button by accident.
non_process
create a design for the app the app needs a design that looks good is flat and clean and self explanatory to the user ux as a user i want to understand how the app works without studying guidelines the menu navigation should be self explanatory and easy to handle there shouldn t be any obstacles to create a new account i want to save my favourite festivals in my profile and handle them there fav and unfav i want to handle my account easily and switch between themes ui as a user i want a clean clear app without distractions it should be readable without having to zoom in i want input fields and buttons that are big enough to press without pressing the wrong button by accident
0
1,357
3,910,726,444
IssuesEvent
2016-04-20 00:30:04
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
opened
Routing based on bind IP/port
ADMIN MYSQL PROTOCOL QUERY PROCESSOR ROUTING
We need to extend `mysql_query_rules` with more parameters to make routing decision: - client source IP address (and optionally netmask) - ProxySQL IP address - ProxySQL port
1.0
Routing based on bind IP/port - We need to extend `mysql_query_rules` with more parameters to make routing decision: - client source IP address (and optionally netmask) - ProxySQL IP address - ProxySQL port
process
routing based on bind ip port we need to extend mysql query rules with more parameters to make routing decision client source ip address and optionally netmask proxysql ip address proxysql port
1
22,350
31,027,460,344
IssuesEvent
2023-08-10 10:06:06
DxytJuly3/gitalk_blog
https://api.github.com/repos/DxytJuly3/gitalk_blog
opened
[Linux] 系统进程相关概念、系统调用、Linux进程详析、进程查看、fork()初识 - July.cc Blogs
Gitalk /posts/Linux-Process-Concept&Processes
https://www.julysblog.cn/posts/Linux-Process-Concept&Processes 关于什么是进程这个问题, 一般都会用一句简单的话来回答:运行起来的程序就是进程. 这句话不能说是错的, 但也不全对。如果运行起来的程序就是进程, 那么进程和程序又有什么区别呢?
2.0
[Linux] 系统进程相关概念、系统调用、Linux进程详析、进程查看、fork()初识 - July.cc Blogs - https://www.julysblog.cn/posts/Linux-Process-Concept&Processes 关于什么是进程这个问题, 一般都会用一句简单的话来回答:运行起来的程序就是进程. 这句话不能说是错的, 但也不全对。如果运行起来的程序就是进程, 那么进程和程序又有什么区别呢?
process
系统进程相关概念、系统调用、linux进程详析、进程查看、fork 初识 july cc blogs 关于什么是进程这个问题 一般都会用一句简单的话来回答:运行起来的程序就是进程 这句话不能说是错的 但也不全对。如果运行起来的程序就是进程 那么进程和程序又有什么区别呢?
1
228,207
25,169,549,362
IssuesEvent
2022-11-11 01:05:38
opr1cob/synk_test_repo
https://api.github.com/repos/opr1cob/synk_test_repo
opened
spring-boot-starter-security-2.7.0.jar: 1 vulnerabilities (highest severity is: 9.8)
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-security-2.7.0.jar</b></p></summary> <p></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-core/5.7.1/7e98028d3b1afab1fc9e24006d0a95ea08304281/spring-security-core-5.7.1.jar</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-security version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-31692](https://www.mend.io/vulnerability-database/CVE-2022-31692) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-security-core-5.7.1.jar | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31692</summary> ### Vulnerable Library - <b>spring-security-core-5.7.1.jar</b></p> <p>Spring Security</p> <p>Library home page: <a href="https://spring.io/projects/spring-security">https://spring.io/projects/spring-security</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-core/5.7.1/7e98028d3b1afab1fc9e24006d0a95ea08304281/spring-security-core-5.7.1.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-2.7.0.jar (Root Library) - spring-security-web-5.7.1.jar - :x: **spring-security-core-5.7.1.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Spring Security, versions 5.7 prior to 5.7.5 and 5.6 prior to 5.6.9 could be susceptible to authorization rules bypass via forward or include dispatcher types. Specifically, an application is vulnerable when all of the following are true: The application expects that Spring Security applies security to forward and include dispatcher types. The application uses the AuthorizationFilter either manually or via the authorizeHttpRequests() method. The application configures the FilterChainProxy to apply to forward and/or include requests (e.g. spring.security.filter.dispatcher-types = request, error, async, forward, include). The application may forward or include the request to a higher privilege-secured endpoint.The application configures Spring Security to apply to every dispatcher type via authorizeHttpRequests().shouldFilterAllDispatcherTypes(true) <p>Publish Date: 2022-10-31 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31692>CVE-2022-31692</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-mmmh-wcxm-2wr4">https://github.com/advisories/GHSA-mmmh-wcxm-2wr4</a></p> <p>Release Date: 2022-10-31</p> <p>Fix Resolution: org.springframework.security:spring-security-core:5.6.9,5.7.5</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
True
spring-boot-starter-security-2.7.0.jar: 1 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-boot-starter-security-2.7.0.jar</b></p></summary> <p></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-core/5.7.1/7e98028d3b1afab1fc9e24006d0a95ea08304281/spring-security-core-5.7.1.jar</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (spring-boot-starter-security version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2022-31692](https://www.mend.io/vulnerability-database/CVE-2022-31692) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | spring-security-core-5.7.1.jar | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-31692</summary> ### Vulnerable Library - <b>spring-security-core-5.7.1.jar</b></p> <p>Spring Security</p> <p>Library home page: <a href="https://spring.io/projects/spring-security">https://spring.io/projects/spring-security</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-core/5.7.1/7e98028d3b1afab1fc9e24006d0a95ea08304281/spring-security-core-5.7.1.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-2.7.0.jar (Root Library) - spring-security-web-5.7.1.jar - :x: **spring-security-core-5.7.1.jar** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> Spring Security, versions 5.7 prior to 5.7.5 and 5.6 prior to 5.6.9 could be susceptible to authorization rules bypass via forward or include dispatcher types. Specifically, an application is vulnerable when all of the following are true: The application expects that Spring Security applies security to forward and include dispatcher types. The application uses the AuthorizationFilter either manually or via the authorizeHttpRequests() method. The application configures the FilterChainProxy to apply to forward and/or include requests (e.g. spring.security.filter.dispatcher-types = request, error, async, forward, include). The application may forward or include the request to a higher privilege-secured endpoint.The application configures Spring Security to apply to every dispatcher type via authorizeHttpRequests().shouldFilterAllDispatcherTypes(true) <p>Publish Date: 2022-10-31 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-31692>CVE-2022-31692</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>9.8</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-mmmh-wcxm-2wr4">https://github.com/advisories/GHSA-mmmh-wcxm-2wr4</a></p> <p>Release Date: 2022-10-31</p> <p>Fix Resolution: org.springframework.security:spring-security-core:5.6.9,5.7.5</p> </p> <p></p> Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) </details>
non_process
spring boot starter security jar vulnerabilities highest severity is vulnerable library spring boot starter security jar path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org springframework security spring security core spring security core jar vulnerabilities cve severity cvss dependency type fixed in spring boot starter security version remediation available high spring security core jar transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library spring security core jar spring security library home page a href path to dependency file build gradle path to vulnerable library home wss scanner gradle caches modules files org springframework security spring security core spring security core jar dependency hierarchy spring boot starter security jar root library spring security web jar x spring security core jar vulnerable library found in base branch main vulnerability details spring security versions prior to and prior to could be susceptible to authorization rules bypass via forward or include dispatcher types specifically an application is vulnerable when all of the following are true the application expects that spring security applies security to forward and include dispatcher types the application uses the authorizationfilter either manually or via the authorizehttprequests method the application configures the filterchainproxy to apply to forward and or include requests e g spring security filter dispatcher types request error async forward include the application may forward or include the request to a higher privilege secured endpoint the application configures spring security to apply to every dispatcher type via authorizehttprequests shouldfilteralldispatchertypes true publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security core step up your open source security game with mend
0
21,559
29,893,188,358
IssuesEvent
2023-06-21 00:55:11
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
Test failure System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid
arch-arm64 area-System.Diagnostics.Process os-linux JitStress
**Failed in:** [runtime-coreclr libraries-jitstress 20230524.1](https://dev.azure.com/dnceng-public/public/_build/results?buildId=284448&view=ms.vss-test-web.build-test-results-tab&runId=5661946&resultId=143566&paneView=debug) **Failed tests:** ``` net8.0-linux-Release-arm64-CoreCLR_checked-jitstress1_tiered-(Ubuntu.1804.Arm64.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm64v8 - System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid ``` **Error message:** ``` Assert.InRange() Failure Range: (0 - 1) Actual: 1.0463972542536049 ``` **Stack trace:** ``` at System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid() in /_/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs:line 910 at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor) at System.Reflection.MethodInvoker.Invoke(Object obj, IntPtr* args, BindingFlags invokeAttr) in /_/src/libraries/System.Private.CoreLib/src/System/Reflection/MethodInvoker.cs:line 59 ```
1.0
Test failure System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid - **Failed in:** [runtime-coreclr libraries-jitstress 20230524.1](https://dev.azure.com/dnceng-public/public/_build/results?buildId=284448&view=ms.vss-test-web.build-test-results-tab&runId=5661946&resultId=143566&paneView=debug) **Failed tests:** ``` net8.0-linux-Release-arm64-CoreCLR_checked-jitstress1_tiered-(Ubuntu.1804.Arm64.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm64v8 - System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid ``` **Error message:** ``` Assert.InRange() Failure Range: (0 - 1) Actual: 1.0463972542536049 ``` **Stack trace:** ``` at System.Diagnostics.Tests.ProcessTests.TotalProcessorTime_PerformLoop_TotalProcessorTimeValid() in /_/src/libraries/System.Diagnostics.Process/tests/ProcessTests.cs:line 910 at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor) at System.Reflection.MethodInvoker.Invoke(Object obj, IntPtr* args, BindingFlags invokeAttr) in /_/src/libraries/System.Private.CoreLib/src/System/Reflection/MethodInvoker.cs:line 59 ```
process
test failure system diagnostics tests processtests totalprocessortime performloop totalprocessortimevalid failed in failed tests linux release coreclr checked tiered ubuntu open ubuntu armarch open mcr microsoft com dotnet buildtools prereqs ubuntu helix system diagnostics tests processtests totalprocessortime performloop totalprocessortimevalid error message assert inrange failure range actual stack trace at system diagnostics tests processtests totalprocessortime performloop totalprocessortimevalid in src libraries system diagnostics process tests processtests cs line at system runtimemethodhandle invokemethod object target void arguments signature sig boolean isconstructor at system reflection methodinvoker invoke object obj intptr args bindingflags invokeattr in src libraries system private corelib src system reflection methodinvoker cs line
1
12,861
15,252,209,798
IssuesEvent
2021-02-20 01:58:16
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Improved log file for processing models
Feature Request Feedback Processing stale
Author Name: **Magnus Nilsson** (Magnus Nilsson) Original Redmine Issue: [20964](https://issues.qgis.org/issues/20964) Redmine category:processing/modeller --- I miss a few aspects in the log file for a processing model: - Time when processing started - Time when processing finished - Total processing time (not just the time for each individual tool). Useful for finetuning the model. - Number of features read and written
1.0
Improved log file for processing models - Author Name: **Magnus Nilsson** (Magnus Nilsson) Original Redmine Issue: [20964](https://issues.qgis.org/issues/20964) Redmine category:processing/modeller --- I miss a few aspects in the log file for a processing model: - Time when processing started - Time when processing finished - Total processing time (not just the time for each individual tool). Useful for finetuning the model. - Number of features read and written
process
improved log file for processing models author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller i miss a few aspects in the log file for a processing model time when processing started time when processing finished total processing time not just the time for each individual tool useful for finetuning the model number of features read and written
1
21,729
30,242,938,904
IssuesEvent
2023-07-06 14:36:10
microsoft/vscode
https://api.github.com/repos/microsoft/vscode
closed
Only launch pty host when it's actually needed
feature-request perf terminal-process
After https://github.com/microsoft/vscode/pull/182631, we launch the pty host off the main process (or server) as soon as a window asks for a connection. The window connection will always happen after `LifecyclePhase.Restored` but we could push this even further by only launching the pty host when it's needed. Currently we pre-emptively fetch terminal profiles but we could change that to happen on demand, meaning if you loaded VS Code without the terminal showing, and then ran the open terminal with profile command, it would be slightly delayed as it needs to wait for the pty host to load up as profiles are evaluated there. The benefits of this are: - Less work on startup (even if it's after restored) - ~40mb less constant memory used by the pty host if you don't use the terminal - We'd want to do this anyway if we wanted to lazy load the entire terminal contrib which has been an idea we've thrown around a little in the past
1.0
Only launch pty host when it's actually needed - After https://github.com/microsoft/vscode/pull/182631, we launch the pty host off the main process (or server) as soon as a window asks for a connection. The window connection will always happen after `LifecyclePhase.Restored` but we could push this even further by only launching the pty host when it's needed. Currently we pre-emptively fetch terminal profiles but we could change that to happen on demand, meaning if you loaded VS Code without the terminal showing, and then ran the open terminal with profile command, it would be slightly delayed as it needs to wait for the pty host to load up as profiles are evaluated there. The benefits of this are: - Less work on startup (even if it's after restored) - ~40mb less constant memory used by the pty host if you don't use the terminal - We'd want to do this anyway if we wanted to lazy load the entire terminal contrib which has been an idea we've thrown around a little in the past
process
only launch pty host when it s actually needed after we launch the pty host off the main process or server as soon as a window asks for a connection the window connection will always happen after lifecyclephase restored but we could push this even further by only launching the pty host when it s needed currently we pre emptively fetch terminal profiles but we could change that to happen on demand meaning if you loaded vs code without the terminal showing and then ran the open terminal with profile command it would be slightly delayed as it needs to wait for the pty host to load up as profiles are evaluated there the benefits of this are less work on startup even if it s after restored less constant memory used by the pty host if you don t use the terminal we d want to do this anyway if we wanted to lazy load the entire terminal contrib which has been an idea we ve thrown around a little in the past
1
27,902
4,065,761,601
IssuesEvent
2016-05-26 12:40:13
johnmarint/JCHR4IV6WIWXKAUKECK6744D
https://api.github.com/repos/johnmarint/JCHR4IV6WIWXKAUKECK6744D
reopened
qsuuzgNMELZ5TqkiZm2lmU+2LBeXgC0CnEC7I0gbIJH+anfF0mrRHzTzFtHNPl9CbZa6vRakqAhoqkHLjjXUF3oFXJrT2i1uDAz6QG/KMW2UhA6q00eCdsvTk5LUJwYQ/IxV6AGYQ4/xvT6mRkS+fUSvVcoXaSYeF28SzP5QVJw=
design
UvLJvn/dcgCU7IiRaN6Q8ujRS2buVYEf6H4cBgWT/k1B6W0sRB7O5YSH8wQ0TlroVPO9EHPHxbzj620luvO0UUjkIM45Bi2QiWbTaLk/cnJNc3W7OiYxh+Oe6w8yjmPAsdilkGaKp3zQPwcrWoMnC3VNwYy+v5iPfIh9XQ4u4qdfVJkrHX0bSLanLe8ge+rFIGSItcduheP8dJhzfYYqJ4bs2CCKmYHmfC5qFM3sDNvxeuV7Hy9Q0i4DqTY/wbA1LfrGMIcx+A+wDIjlRFXEu0b7q2fTO7QaLlD+wTFBB8xZRwX3z1gcnnvI2h0gWgj6H3Zaycmyx45i0XPn91ut1+TEWvqPQN6vf84+JeoZB2mVvc2y+aYwgg3FF9XUiNkEihjEA0nv9HgwHLL09YTMUaFboR2KMSyORNV39AxWpG4jRlF1Vsoc/axw5lY2bqOSVhCj4DWQIKXCYq/vNkRmzzzqIQZxMXNZTvUj62I/h8Y7NrV7hvs787rB10vSHJauFu2aQ4iQdxzUMqSmOnrqzMOM7GWoh2NjWZocM8CFJmyKm9OpulNFiwhwfaJYsFO4bKviatonfLukO+h4GSk6kwmIJz4LewP2G6gOTOygGyjLiukF5tm0oJs2oN9/bXQOCJ6VA2TTgKNQDwD3shWSZsoA44nb+0jTFd3moqJCfgTuCX50FHkGEL5Hd4nPyv2OBxZcgjAOjIHUBXhMelGsk7lxBHhkvcBsoZst7U7Q1UXGFa3s7vC0h/oSCGNhBvucy5FYD45pJPsex6u9Avb0LMoA44nb+0jTFd3moqJCfgRyaLZBnUP80Fr119WWwhtETdr4osPIaEVf1lSAdvrK0yMTIe6fS7oQMmXwixtyi67uW2guGMZKp7HuEB4lHaOeqFRUwQHxD1d0QckxJmJTGvNi36ActZqOxr4iQu21ph004wMViY5kiQBrc10KpLvLqivPhyYDYTJohUQuTF591QiuBAYy1fEDFWTJPy7as+Y3X+AD1k2JcT/0nL9JnlgzmY0Nxp5CLG+h2tv/PzZ62fC8CB/s/rlbQOM38t2WJz2mbPAWhnc2Zd41VVGmgYIRmZoqRVjpZ7oZD0+G/HeyxoLHYACfb6vG6xn2NB9xJJwWQ4lYviqrFP7FkxNW8KLiYv+1OBkxug+3qj7kXY4wwwquNV3f1APom9Z9BL3adrctjg/bQAe4BfFfgAqexZAxHlV/otYZyhoXqMbrUEbs7WhXa/WJnCxMDhMRb6X7lTrIKjSc9D66r9kI0eJlc9XlY9bxQZnSKrZjgmHwen38d4LF306XXGjlJgsTaIyfpPKz/bWR8mUm2trqQJg/Ttsl7DiDI2u2JaQ3qU5qbPLgDqFON+q7NmTYfUgBZsPbGPAYful1d+VvrqMlOACNlP7j74rs6K4mx3a9X9GNBgzDcBRhoFsTKPj47Ua23b1lyCjIKjSc9D66r9kI0eJlc9XlssJd1dYDMYx4fbrCdFOyz1x/++IaAqFnJvr7NfARCD6y91z86mrpvtJx2To5jD5HXQHsYwtVERiM++pKB9CEhZPG3PO7Eip6a8RjpjnCVe5jDNOWsSIoE6vXcciELBfv6/lOtPPtub+Evln/DOXvk3VPSS0MiRL55UzHqWLGvF3sqsQcM/bJjPYKtMnWI9HSXBLhcdmYF9QY9xyEOl65ocrmTnat4lJcaIb+I+F+3ysPtQ7cqJ3OFfzZ+UuXoVVAQFohNNVsOMsTzy4fP1wn17TA/Ld36yoZv1wdYvej21GF0VC/M+8LJCmuf+sv+Wl9qQx+SpvWiOw4459oLiq4zQz/HcwCF5MC1Yd3GyisxYJwCHhWwvK3Wo/3VKocwx3U97TV6O+bZW1Y7pP3K5FFIVYYVDLNS2Yw9fl3vkPECWM=
1.0
qsuuzgNMELZ5TqkiZm2lmU+2LBeXgC0CnEC7I0gbIJH+anfF0mrRHzTzFtHNPl9CbZa6vRakqAhoqkHLjjXUF3oFXJrT2i1uDAz6QG/KMW2UhA6q00eCdsvTk5LUJwYQ/IxV6AGYQ4/xvT6mRkS+fUSvVcoXaSYeF28SzP5QVJw= - UvLJvn/dcgCU7IiRaN6Q8ujRS2buVYEf6H4cBgWT/k1B6W0sRB7O5YSH8wQ0TlroVPO9EHPHxbzj620luvO0UUjkIM45Bi2QiWbTaLk/cnJNc3W7OiYxh+Oe6w8yjmPAsdilkGaKp3zQPwcrWoMnC3VNwYy+v5iPfIh9XQ4u4qdfVJkrHX0bSLanLe8ge+rFIGSItcduheP8dJhzfYYqJ4bs2CCKmYHmfC5qFM3sDNvxeuV7Hy9Q0i4DqTY/wbA1LfrGMIcx+A+wDIjlRFXEu0b7q2fTO7QaLlD+wTFBB8xZRwX3z1gcnnvI2h0gWgj6H3Zaycmyx45i0XPn91ut1+TEWvqPQN6vf84+JeoZB2mVvc2y+aYwgg3FF9XUiNkEihjEA0nv9HgwHLL09YTMUaFboR2KMSyORNV39AxWpG4jRlF1Vsoc/axw5lY2bqOSVhCj4DWQIKXCYq/vNkRmzzzqIQZxMXNZTvUj62I/h8Y7NrV7hvs787rB10vSHJauFu2aQ4iQdxzUMqSmOnrqzMOM7GWoh2NjWZocM8CFJmyKm9OpulNFiwhwfaJYsFO4bKviatonfLukO+h4GSk6kwmIJz4LewP2G6gOTOygGyjLiukF5tm0oJs2oN9/bXQOCJ6VA2TTgKNQDwD3shWSZsoA44nb+0jTFd3moqJCfgTuCX50FHkGEL5Hd4nPyv2OBxZcgjAOjIHUBXhMelGsk7lxBHhkvcBsoZst7U7Q1UXGFa3s7vC0h/oSCGNhBvucy5FYD45pJPsex6u9Avb0LMoA44nb+0jTFd3moqJCfgRyaLZBnUP80Fr119WWwhtETdr4osPIaEVf1lSAdvrK0yMTIe6fS7oQMmXwixtyi67uW2guGMZKp7HuEB4lHaOeqFRUwQHxD1d0QckxJmJTGvNi36ActZqOxr4iQu21ph004wMViY5kiQBrc10KpLvLqivPhyYDYTJohUQuTF591QiuBAYy1fEDFWTJPy7as+Y3X+AD1k2JcT/0nL9JnlgzmY0Nxp5CLG+h2tv/PzZ62fC8CB/s/rlbQOM38t2WJz2mbPAWhnc2Zd41VVGmgYIRmZoqRVjpZ7oZD0+G/HeyxoLHYACfb6vG6xn2NB9xJJwWQ4lYviqrFP7FkxNW8KLiYv+1OBkxug+3qj7kXY4wwwquNV3f1APom9Z9BL3adrctjg/bQAe4BfFfgAqexZAxHlV/otYZyhoXqMbrUEbs7WhXa/WJnCxMDhMRb6X7lTrIKjSc9D66r9kI0eJlc9XlY9bxQZnSKrZjgmHwen38d4LF306XXGjlJgsTaIyfpPKz/bWR8mUm2trqQJg/Ttsl7DiDI2u2JaQ3qU5qbPLgDqFON+q7NmTYfUgBZsPbGPAYful1d+VvrqMlOACNlP7j74rs6K4mx3a9X9GNBgzDcBRhoFsTKPj47Ua23b1lyCjIKjSc9D66r9kI0eJlc9XlssJd1dYDMYx4fbrCdFOyz1x/++IaAqFnJvr7NfARCD6y91z86mrpvtJx2To5jD5HXQHsYwtVERiM++pKB9CEhZPG3PO7Eip6a8RjpjnCVe5jDNOWsSIoE6vXcciELBfv6/lOtPPtub+Evln/DOXvk3VPSS0MiRL55UzHqWLGvF3sqsQcM/bJjPYKtMnWI9HSXBLhcdmYF9QY9xyEOl65ocrmTnat4lJcaIb+I+F+3ysPtQ7cqJ3OFfzZ+UuXoVVAQFohNNVsOMsTzy4fP1wn17TA/Ld36yoZv1wdYvej21GF0VC/M+8LJCmuf+sv+Wl9qQx+SpvWiOw4459oLiq4zQz/HcwCF5MC1Yd3GyisxYJwCHhWwvK3Wo/3VKocwx3U97TV6O+bZW1Y7pP3K5FFIVYYVDLNS2Yw9fl3vkPECWM=
non_process
uvljvn a s g lotpptub evln i f m sv
0
20,879
27,695,781,581
IssuesEvent
2023-03-14 02:00:13
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Tue, 14 Mar 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos - **Authors:** Teng Wang, Jinrui Zhang, Feng Zheng, Wenhao Jiang, Ran Cheng, Ping Luo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06378 - **Pdf link:** https://arxiv.org/pdf/2303.06378 - **Abstract** Joint video-language learning has received increasing attention in recent years. However, existing works mainly focus on single or multiple trimmed video clips (events), which makes human-annotated event boundaries necessary during inference. To break away from the ties, we propose a grounded vision-language learning framework for untrimmed videos, which automatically detects informative events and effectively excavates the alignments between multi-sentence descriptions and corresponding event segments. Instead of coarse-level video-language alignments, we present two dual pretext tasks to encourage fine-grained segment-level alignments, i.e., text-to-event grounding (TEG) and event-to-text generation (ETG). TEG learns to adaptively ground the possible event proposals given a set of sentences by estimating the cross-modal distance in a joint semantic space. Meanwhile, ETG aims to reconstruct (generate) the matched texts given event proposals, encouraging the event representation to retain meaningful semantic information. To encourage accurate label assignment between the event set and the text set, we propose a novel semantic-aware cost to mitigate the sub-optimal matching results caused by ambiguous boundary annotations. Our framework is easily extensible to tasks covering visually-grounded language understanding and generation. We achieve state-of-the-art dense video captioning performance on ActivityNet Captions, YouCook2 and YouMakeup, and competitive performance on several other language generation and understanding tasks. 
Our method also achieved 1st place in both the MTVG and MDVC tasks of the PIC 4th Challenge. ## Keyword: event camera There is no result ## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Pretrained ViTs Yield Versatile Representations For Medical Images - **Authors:** Christos Matsoukas, Johan Fredin Haslum, Magnus Söderberg, Kevin Smith - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.07034 - **Pdf link:** https://arxiv.org/pdf/2303.07034 - **Abstract** Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over the last years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them as a viable alternative to CNNs. 
### Mobile Mapping Mesh Change Detection and Update - **Authors:** Teng Wu, Bruno Vallet, Cédric Demonceaux - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.07182 - **Pdf link:** https://arxiv.org/pdf/2303.07182 - **Abstract** Mobile mapping, in particular, Mobile Lidar Scanning (MLS) is increasingly widespread to monitor and map urban scenes at city scale with unprecedented resolution and accuracy. The resulting point cloud sampling of the scene geometry can be meshed in order to create a continuous representation for different applications: visualization, simulation, navigation, etc. Because of the highly dynamic nature of these urban scenes, long term mapping should rely on frequent map updates. A trivial solution is to simply replace old data with newer data each time a new acquisition is made. However it has two drawbacks: 1) the old data may be of higher quality (resolution, precision) than the new and 2) the coverage of the scene might be different in various acquisitions, including varying occlusions. In this paper, we propose a fully automatic pipeline to address these two issues by formulating the problem of merging meshes with different quality, coverage and acquisition time. Our method is based on a combined distance and visibility based change detection, a time series analysis to assess the sustainability of changes, a mesh mosaicking based on a global boolean optimization and finally a stitching of the resulting mesh pieces boundaries with triangle strips. Finally, our method is demonstrated on Robotcar and Stereopolis datasets. 
## Keyword: ISP ### Iterative Geometry Encoding Volume for Stereo Matching - **Authors:** Gangwei Xu, Xianqi Wang, Xiaohuan Ding, Xin Yang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06615 - **Pdf link:** https://arxiv.org/pdf/2303.06615 - **Abstract** Recurrent All-Pairs Field Transforms (RAFT) has shown great potential in matching tasks. However, all-pairs correlations lack non-local geometry knowledge and have difficulties tackling local ambiguities in ill-posed regions. In this paper, we propose Iterative Geometry Encoding Volume (IGEV-Stereo), a new deep network architecture for stereo matching. The proposed IGEV-Stereo builds a combined geometry encoding volume that encodes geometry and context information as well as local matching details, and iteratively indexes it to update the disparity map. To speed up the convergence, we exploit GEV to regress an accurate starting point for ConvGRUs iterations. On KITTI 2015, IGEV-Stereo ranks 1st among all published methods and is the fastest among the top 10 methods. In addition, IGEV-Stereo has strong cross-dataset generalization as well as high inference efficiency. We also extend our IGEV to multi-view stereo (MVS), i.e. IGEV-MVS, which achieves competitive accuracy on DTU benchmark. Code is available at https://github.com/gangweiX/IGEV. ### Ensemble Learning of Myocardial Displacements for Myocardial Infarction Detection in Echocardiography - **Authors:** Nguyen Tuan, Phi Nguyen, Dai Tran, Hung Pham, Quang Nguyen, Thanh Le, Hanh Van, Bach Do, Phuong Tran, Vinh Le, Thuy Nguyen, Long Tran, Hieu Pham - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06744 - **Pdf link:** https://arxiv.org/pdf/2303.06744 - **Abstract** Early detection and localization of myocardial infarction (MI) can reduce the severity of cardiac damage through timely treatment interventions. 
In recent years, deep learning techniques have shown promise for detecting MI in echocardiographic images. However, there has been no examination of how segmentation accuracy affects MI classification performance and the potential benefits of using ensemble learning approaches. Our study investigates this relationship and introduces a robust method that combines features from multiple segmentation models to improve MI classification performance by leveraging ensemble learning. Our method combines myocardial segment displacement features from multiple segmentation models, which are then input into a typical classifier to estimate the risk of MI. We validated the proposed approach on two datasets: the public HMC-QU dataset (109 echocardiograms) for training and validation, and an E-Hospital dataset (60 echocardiograms) from a local clinical site in Vietnam for independent testing. Model performance was evaluated based on accuracy, sensitivity, and specificity. The proposed approach demonstrated excellent performance in detecting MI. The results showed that the proposed approach outperformed the state-of-the-art feature-based method. Further research is necessary to determine its potential use in clinical settings as a tool to assist cardiologists and technicians with objective assessments and reduce dependence on operator subjectivity. Our research codes are available on GitHub at https://github.com/vinuni-vishc/mi-detection-echo. ### Upcycling Models under Domain and Category Shift - **Authors:** Sanqing Qu, Tianpei Zou, Florian Roehrbein, Cewu Lu, Guang Chen, Dacheng Tao, Changjun Jiang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2303.07110 - **Pdf link:** https://arxiv.org/pdf/2303.07110 - **Abstract** Deep neural networks (DNNs) often perform poorly in the presence of domain shift and category shift. 
How to upcycle DNNs and adapt them to the target task remains an important open problem. Unsupervised Domain Adaptation (UDA), especially recently proposed Source-free Domain Adaptation (SFDA), has become a promising technology to address this issue. Nevertheless, existing SFDA methods require that the source domain and target domain share the same label space, consequently being only applicable to the vanilla closed-set setting. In this paper, we take one step further and explore the Source-free Universal Domain Adaptation (SF-UniDA). The goal is to identify "known" data samples under both domain and category shift, and reject those "unknown" data samples (not present in source classes), with only the knowledge from standard pre-trained source model. To this end, we introduce an innovative global and local clustering learning technique (GLC). Specifically, we design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes and introduce a local k-NN clustering strategy to alleviate negative transfer. We examine the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA. Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark. The code is available at https://github.com/ispc-lab/GLC. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### OTOV2: Automatic, Generic, User-Friendly - **Authors:** Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, Ilya Zharkov - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2303.06862 - **Pdf link:** https://arxiv.org/pdf/2303.06862 - **Abstract** The existing model compression methods via structured pruning typically require complicated multi-stage procedures. 
Each individual stage necessitates numerous engineering efforts and domain-knowledge from the end-users which prevent their wider applications onto broader scenarios. We propose the second generation of Only-Train-Once (OTOv2), which first automatically trains and compresses a general DNN only once from scratch to produce a more compact model with competitive performance without fine-tuning. OTOv2 is automatic and pluggable into various deep learning applications, and requires almost minimal engineering efforts from the users. Methodologically, OTOv2 proposes two major improvements: (i) Autonomy: automatically exploits the dependency of general DNNs, partitions the trainable variables into Zero-Invariant Groups (ZIGs), and constructs the compressed model; and (ii) Dual Half-Space Projected Gradient (DHSPG): a novel optimizer to more reliably solve structured-sparsity problems. Numerically, we demonstrate the generality and autonomy of OTOv2 on a variety of model architectures such as VGG, ResNet, CARN, ConvNeXt, DenseNet and StackedUnets, the majority of which cannot be handled by other methods without extensive handcrafting efforts. Together with benchmark datasets including CIFAR10/100, DIV2K, Fashion-MNIST, SVHN and ImageNet, its effectiveness is validated by performing competitively or even better than the state of the art. The source code is available at https://github.com/tianyic/only_train_once. ### DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration - **Authors:** Zhixin Wang, Xiaoyun Zhang, Ziying Zhang, Huangjie Zheng, Mingyuan Zhou, Ya Zhang, Yanfeng Wang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06885 - **Pdf link:** https://arxiv.org/pdf/2303.06885 - **Abstract** Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training, while more complex cases could happen in the real world. 
This gap between the assumed and actual degradation hurts the restoration performance where artifacts are often observed in the output. However, it is expensive and infeasible to include every type of degradation to cover real-world cases in the training data. To tackle this robustness issue, we propose Diffusion-based Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image. By leveraging a well-performing denoising diffusion probabilistic model, our DR2 diffuses input images to a noisy status where various types of degradation give way to Gaussian noise, and then captures semantic information through iterative denoising steps. As a result, DR2 is robust against common degradation (e.g. blur, resize, noise and compression) and compatible with different designs of enhancement modules. Experiments in various settings show that our framework outperforms state-of-the-art methods on heavily degraded synthetic and real-world datasets. ## Keyword: RAW ### Enhanced K-Radar: Optimal Density Reduction to Improve Detection Performance and Accessibility of 4D Radar Tensor-based Object Detection - **Authors:** Dong-Hee Paek, Seung-Hyun Kong, Kevin Tirta Wijaya - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2303.06342 - **Pdf link:** https://arxiv.org/pdf/2303.06342 - **Abstract** Recent works have shown the superior robustness of four-dimensional (4D) Radar-based three-dimensional (3D) object detection in adverse weather conditions. However, processing 4D Radar data remains a challenge due to the large data size, which require substantial amount of memory for computing and storage. 
In previous work, an online density reduction is performed on the 4D Radar Tensor (4DRT) to reduce the data size, in which the density reduction level is chosen arbitrarily. However, the impact of density reduction on the detection performance and memory consumption remains largely unknown. In this paper, we aim to address this issue by conducting extensive hyperparameter tuning on the density reduction level. Experimental results show that increasing the density level from 0.01% to 50% of the original 4DRT density level proportionally improves the detection performance, at a cost of memory consumption. However, when the density level is increased beyond 5%, only the memory consumption increases, while the detection performance oscillates below the peak point. In addition to the optimized density hyperparameter, we also introduce 4D Sparse Radar Tensor (4DSRT), a new representation for 4D Radar data with offline density reduction, leading to a significantly reduced raw data size. An optimized development kit for training the neural networks is also provided, which along with the utilization of 4DSRT, improves training speed by a factor of 17.1 compared to the state-of-the-art 4DRT-based neural networks. All codes are available at: https://github.com/kaist-avelab/K-Radar. ### TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning with Structure-Trajectory Prompted Reconstruction for Person Re-Identification - **Authors:** Haocong Rao, Chunyan Miao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06819 - **Pdf link:** https://arxiv.org/pdf/2303.06819 - **Abstract** Person re-identification (re-ID) via 3D skeleton data is an emerging topic with prominent advantages. Existing methods usually design skeleton descriptors with raw body joints or perform skeleton sequence representation learning. 
However, they typically cannot concurrently model different body-component relations, and rarely explore useful semantics from fine-grained representations of body joints. In this paper, we propose a generic Transformer-based Skeleton Graph prototype contrastive learning (TranSG) approach with structure-trajectory prompted reconstruction to fully capture skeletal relations and valuable spatial-temporal semantics from skeleton graphs for person re-ID. Specifically, we first devise the Skeleton Graph Transformer (SGT) to simultaneously learn body and motion relations within skeleton graphs, so as to aggregate key correlative node features into graph representations. Then, we propose the Graph Prototype Contrastive learning (GPC) to mine the most typical graph features (graph prototypes) of each identity, and contrast the inherent similarity between graph representations and different prototypes from both skeleton and sequence levels to learn discriminative graph representations. Last, a graph Structure-Trajectory Prompted Reconstruction (STPR) mechanism is proposed to exploit the spatial and temporal contexts of graph nodes to prompt skeleton graph reconstruction, which facilitates capturing more valuable patterns and graph semantics for person re-ID. Empirical evaluations demonstrate that TranSG significantly outperforms existing state-of-the-art methods. We further show its generality under different graph modeling, RGB-estimated skeletons, and unsupervised scenarios. ### DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior - **Authors:** Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, Renhe Ji - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.06834 - **Pdf link:** https://arxiv.org/pdf/2303.06834 - **Abstract** RGB-NIR fusion is a promising method for low-light imaging. 
However, high-intensity noise in low-light images amplifies the effect of structure inconsistency between RGB-NIR images, which fails existing algorithms. To handle this, we propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior (DIP). The Deep Structure extracts clear structure details in deep multiscale feature space rather than raw input space, which is more robust to noisy inputs. Based on the deep structures from both RGB and NIR domains, we introduce the DIP to leverage the structure inconsistency to guide the fusion of RGB-NIR. Benefiting from this, the proposed DVN obtains high-quality low-light images without the visual artifacts. We also propose a new dataset called Dark Vision Dataset (DVD), consisting of aligned RGB-NIR image pairs, as the first public RGB-NIR fusion benchmark. Quantitative and qualitative results on the proposed benchmark show that DVN significantly outperforms other comparison algorithms in PSNR and SSIM, especially in extremely low light conditions. ### Robust Contrastive Language-Image Pretraining against Adversarial Attacks - **Authors:** Wenhan Yang, Baharan Mirzasoleiman - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2303.06854 - **Pdf link:** https://arxiv.org/pdf/2303.06854 - **Abstract** Contrastive vision-language representation learning has achieved state-of-the-art performance for zero-shot classification, by learning from millions of image-caption pairs crawled from the internet. However, the massive data that powers large multimodal models such as CLIP, makes them extremely vulnerable to various types of adversarial attacks, including targeted and backdoor data poisoning attacks. 
Despite this vulnerability, robust contrastive vision-language pretraining against adversarial attacks has remained unaddressed. In this work, we propose RoCLIP, the first effective method for robust pretraining and fine-tuning multimodal vision-language models. RoCLIP effectively breaks the association between poisoned image-caption pairs by considering a pool of random examples, and (1) matching every image with the text that is most similar to its caption in the pool, and (2) matching every caption with the image that is most similar to its image in the pool. Our extensive experiments show that our method renders state-of-the-art targeted data poisoning and backdoor attacks ineffective during pre-training or fine-tuning of CLIP. In particular, RoCLIP decreases the poison and backdoor attack success rates down to 0% during pre-training and 1%-4% during fine-tuning, and effectively improves the model's performance. ### Pretrained ViTs Yield Versatile Representations For Medical Images - **Authors:** Christos Matsoukas, Johan Fredin Haslum, Magnus Söderberg, Kevin Smith - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.07034 - **Pdf link:** https://arxiv.org/pdf/2303.07034 - **Abstract** Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over the last years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. 
Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them as a viable alternative to CNNs. ### Mobile Mapping Mesh Change Detection and Update - **Authors:** Teng Wu, Bruno Vallet, Cédric Demonceaux - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2303.07182 - **Pdf link:** https://arxiv.org/pdf/2303.07182 - **Abstract** Mobile mapping, in particular, Mobile Lidar Scanning (MLS) is increasingly widespread to monitor and map urban scenes at city scale with unprecedented resolution and accuracy. The resulting point cloud sampling of the scene geometry can be meshed in order to create a continuous representation for different applications: visualization, simulation, navigation, etc. Because of the highly dynamic nature of these urban scenes, long term mapping should rely on frequent map updates. A trivial solution is to simply replace old data with newer data each time a new acquisition is made. However it has two drawbacks: 1) the old data may be of higher quality (resolution, precision) than the new and 2) the coverage of the scene might be different in various acquisitions, including varying occlusions. In this paper, we propose a fully automatic pipeline to address these two issues by formulating the problem of merging meshes with different quality, coverage and acquisition time. Our method is based on a combined distance and visibility based change detection, a time series analysis to assess the sustainability of changes, a mesh mosaicking based on a global boolean optimization and finally a stitching of the resulting mesh pieces boundaries with triangle strips. Finally, our method is demonstrated on Robotcar and Stereopolis datasets. ## Keyword: raw image There is no result
2.0
New submissions for Tue, 14 Mar 23

## Keyword: events

### Learning Grounded Vision-Language Representation for Versatile Understanding in Untrimmed Videos

- **Authors:** Teng Wang, Jinrui Zhang, Feng Zheng, Wenhao Jiang, Ran Cheng, Ping Luo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06378
- **Pdf link:** https://arxiv.org/pdf/2303.06378
- **Abstract**
  Joint video-language learning has received increasing attention in recent years. However, existing works mainly focus on single or multiple trimmed video clips (events), which makes human-annotated event boundaries necessary during inference. To break away from the ties, we propose a grounded vision-language learning framework for untrimmed videos, which automatically detects informative events and effectively excavates the alignments between multi-sentence descriptions and corresponding event segments. Instead of coarse-level video-language alignments, we present two dual pretext tasks to encourage fine-grained segment-level alignments, i.e., text-to-event grounding (TEG) and event-to-text generation (ETG). TEG learns to adaptively ground the possible event proposals given a set of sentences by estimating the cross-modal distance in a joint semantic space. Meanwhile, ETG aims to reconstruct (generate) the matched texts given event proposals, encouraging the event representation to retain meaningful semantic information. To encourage accurate label assignment between the event set and the text set, we propose a novel semantic-aware cost to mitigate the sub-optimal matching results caused by ambiguous boundary annotations. Our framework is easily extensible to tasks covering visually-grounded language understanding and generation. We achieve state-of-the-art dense video captioning performance on ActivityNet Captions, YouCook2 and YouMakeup, and competitive performance on several other language generation and understanding tasks. Our method also achieved 1st place in both the MTVG and MDVC tasks of the PIC 4th Challenge.

## Keyword: event camera

There is no result

## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

### Pretrained ViTs Yield Versatile Representations For Medical Images

- **Authors:** Christos Matsoukas, Johan Fredin Haslum, Magnus Söderberg, Kevin Smith
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07034
- **Pdf link:** https://arxiv.org/pdf/2303.07034
- **Abstract**
  Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over the last years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them as a viable alternative to CNNs.
### Mobile Mapping Mesh Change Detection and Update

- **Authors:** Teng Wu, Bruno Vallet, Cédric Demonceaux
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07182
- **Pdf link:** https://arxiv.org/pdf/2303.07182
- **Abstract**
  Mobile mapping, in particular Mobile Lidar Scanning (MLS), is increasingly widespread to monitor and map urban scenes at city scale with unprecedented resolution and accuracy. The resulting point cloud sampling of the scene geometry can be meshed in order to create a continuous representation for different applications: visualization, simulation, navigation, etc. Because of the highly dynamic nature of these urban scenes, long-term mapping should rely on frequent map updates. A trivial solution is to simply replace old data with newer data each time a new acquisition is made. However, it has two drawbacks: 1) the old data may be of higher quality (resolution, precision) than the new, and 2) the coverage of the scene might be different in various acquisitions, including varying occlusions. In this paper, we propose a fully automatic pipeline to address these two issues by formulating the problem of merging meshes with different quality, coverage and acquisition time. Our method is based on a combined distance- and visibility-based change detection, a time series analysis to assess the sustainability of changes, a mesh mosaicking based on a global boolean optimization, and finally a stitching of the resulting mesh pieces' boundaries with triangle strips. Finally, our method is demonstrated on Robotcar and Stereopolis datasets.
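To make the distance-based part of such a change detection concrete, here is a toy, brute-force sketch: points of a new acquisition lying farther than a threshold from every point of the old acquisition are flagged as changed. The paper's actual method additionally uses visibility reasoning and a time-series analysis over acquisitions; the function names and threshold below are hypothetical:

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def changed_points(new_cloud, old_cloud, tau):
    """Flag points of the new acquisition lying farther than tau from every
    old point -- a minimal distance-based change detector."""
    return [p for p in new_cloud if nearest_distance(p, old_cloud) > tau]
```

At city scale the brute-force nearest-neighbour search would of course be replaced by a spatial index (k-d tree, voxel grid), but the decision rule stays the same.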
## Keyword: ISP

### Iterative Geometry Encoding Volume for Stereo Matching

- **Authors:** Gangwei Xu, Xianqi Wang, Xiaohuan Ding, Xin Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06615
- **Pdf link:** https://arxiv.org/pdf/2303.06615
- **Abstract**
  Recurrent All-Pairs Field Transforms (RAFT) has shown great potential in matching tasks. However, all-pairs correlations lack non-local geometry knowledge and have difficulties tackling local ambiguities in ill-posed regions. In this paper, we propose Iterative Geometry Encoding Volume (IGEV-Stereo), a new deep network architecture for stereo matching. The proposed IGEV-Stereo builds a combined geometry encoding volume that encodes geometry and context information as well as local matching details, and iteratively indexes it to update the disparity map. To speed up the convergence, we exploit GEV to regress an accurate starting point for ConvGRUs iterations. On KITTI 2015, IGEV-Stereo ranks $1^{st}$ among all published methods and is the fastest among the top 10 methods. In addition, IGEV-Stereo has strong cross-dataset generalization as well as high inference efficiency. We also extend our IGEV to multi-view stereo (MVS), i.e. IGEV-MVS, which achieves competitive accuracy on the DTU benchmark. Code is available at https://github.com/gangweiX/IGEV.

### Ensemble Learning of Myocardial Displacements for Myocardial Infarction Detection in Echocardiography

- **Authors:** Nguyen Tuan, Phi Nguyen, Dai Tran, Hung Pham, Quang Nguyen, Thanh Le, Hanh Van, Bach Do, Phuong Tran, Vinh Le, Thuy Nguyen, Long Tran, Hieu Pham
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06744
- **Pdf link:** https://arxiv.org/pdf/2303.06744
- **Abstract**
  Early detection and localization of myocardial infarction (MI) can reduce the severity of cardiac damage through timely treatment interventions. In recent years, deep learning techniques have shown promise for detecting MI in echocardiographic images. However, there has been no examination of how segmentation accuracy affects MI classification performance and the potential benefits of using ensemble learning approaches. Our study investigates this relationship and introduces a robust method that combines features from multiple segmentation models to improve MI classification performance by leveraging ensemble learning. Our method combines myocardial segment displacement features from multiple segmentation models, which are then input into a typical classifier to estimate the risk of MI. We validated the proposed approach on two datasets: the public HMC-QU dataset (109 echocardiograms) for training and validation, and an E-Hospital dataset (60 echocardiograms) from a local clinical site in Vietnam for independent testing. Model performance was evaluated based on accuracy, sensitivity, and specificity. The proposed approach demonstrated excellent performance in detecting MI. The results showed that the proposed approach outperformed the state-of-the-art feature-based method. Further research is necessary to determine its potential use in clinical settings as a tool to assist cardiologists and technicians with objective assessments and reduce dependence on operator subjectivity. Our research codes are available on GitHub at https://github.com/vinuni-vishc/mi-detection-echo.

### Upcycling Models under Domain and Category Shift

- **Authors:** Sanqing Qu, Tianpei Zou, Florian Roehrbein, Cewu Lu, Guang Chen, Dacheng Tao, Changjun Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.07110
- **Pdf link:** https://arxiv.org/pdf/2303.07110
- **Abstract**
  Deep neural networks (DNNs) often perform poorly in the presence of domain shift and category shift.
How to upcycle DNNs and adapt them to the target task remains an important open problem. Unsupervised Domain Adaptation (UDA), especially the recently proposed Source-free Domain Adaptation (SFDA), has become a promising technology to address this issue. Nevertheless, existing SFDA methods require that the source domain and target domain share the same label space, consequently being only applicable to the vanilla closed-set setting. In this paper, we take one step further and explore Source-free Universal Domain Adaptation (SF-UniDA). The goal is to identify "known" data samples under both domain and category shift, and reject those "unknown" data samples (not present in source classes), with only the knowledge from a standard pre-trained source model. To this end, we introduce an innovative global and local clustering learning technique (GLC). Specifically, we design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes and introduce a local k-NN clustering strategy to alleviate negative transfer. We examine the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA. Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark. The code is available at https://github.com/ispc-lab/GLC.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### OTOv2: Automatic, Generic, User-Friendly

- **Authors:** Tianyi Chen, Luming Liang, Tianyu Ding, Zhihui Zhu, Ilya Zharkov
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2303.06862
- **Pdf link:** https://arxiv.org/pdf/2303.06862
- **Abstract**
  The existing model compression methods via structured pruning typically require complicated multi-stage procedures. Each individual stage necessitates numerous engineering efforts and domain knowledge from the end-users, which prevents their wider application to broader scenarios. We propose the second generation of Only-Train-Once (OTOv2), which first automatically trains and compresses a general DNN only once from scratch to produce a more compact model with competitive performance, without fine-tuning. OTOv2 is automatic and pluggable into various deep learning applications, and requires almost minimal engineering effort from the users. Methodologically, OTOv2 proposes two major improvements: (i) Autonomy: automatically exploits the dependency of general DNNs, partitions the trainable variables into Zero-Invariant Groups (ZIGs), and constructs the compressed model; and (ii) Dual Half-Space Projected Gradient (DHSPG): a novel optimizer to more reliably solve structured-sparsity problems. Numerically, we demonstrate the generality and autonomy of OTOv2 on a variety of model architectures such as VGG, ResNet, CARN, ConvNeXt, DenseNet and StackedUnets, the majority of which cannot be handled by other methods without extensive handcrafting efforts. Together with benchmark datasets including CIFAR10/100, DIV2K, Fashion-MNIST, SVNH and ImageNet, its effectiveness is validated by performing competitively or even better than the state-of-the-arts. The source code is available at https://github.com/tianyic/only_train_once.

### DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration

- **Authors:** Zhixin Wang, Xiaoyun Zhang, Ziying Zhang, Huangjie Zheng, Mingyuan Zhou, Ya Zhang, Yanfeng Wang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06885
- **Pdf link:** https://arxiv.org/pdf/2303.06885
- **Abstract**
  Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training, while more complex cases could happen in the real world.
This gap between the assumed and actual degradation hurts the restoration performance, where artifacts are often observed in the output. However, it is expensive and infeasible to include every type of degradation to cover real-world cases in the training data. To tackle this robustness issue, we propose the Diffusion-based Robust Degradation Remover (DR2) to first transform the degraded image into a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image. By leveraging a well-performing denoising diffusion probabilistic model, our DR2 diffuses input images to a noisy status where various types of degradation give way to Gaussian noise, and then captures semantic information through iterative denoising steps. As a result, DR2 is robust against common degradation (e.g. blur, resize, noise and compression) and compatible with different designs of enhancement modules. Experiments in various settings show that our framework outperforms state-of-the-art methods on heavily degraded synthetic and real-world datasets.

## Keyword: RAW

### Enhanced K-Radar: Optimal Density Reduction to Improve Detection Performance and Accessibility of 4D Radar Tensor-based Object Detection

- **Authors:** Dong-Hee Paek, Seung-Hyun Kong, Kevin Tirta Wijaya
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2303.06342
- **Pdf link:** https://arxiv.org/pdf/2303.06342
- **Abstract**
  Recent works have shown the superior robustness of four-dimensional (4D) Radar-based three-dimensional (3D) object detection in adverse weather conditions. However, processing 4D Radar data remains a challenge due to the large data size, which requires a substantial amount of memory for computing and storage. In previous work, an online density reduction is performed on the 4D Radar Tensor (4DRT) to reduce the data size, in which the density reduction level is chosen arbitrarily. However, the impact of density reduction on the detection performance and memory consumption remains largely unknown. In this paper, we aim to address this issue by conducting extensive hyperparameter tuning on the density reduction level. Experimental results show that increasing the density level from 0.01% to 50% of the original 4DRT density level proportionally improves the detection performance, at a cost of memory consumption. However, when the density level is increased beyond 5%, only the memory consumption increases, while the detection performance oscillates below the peak point. In addition to the optimized density hyperparameter, we also introduce the 4D Sparse Radar Tensor (4DSRT), a new representation for 4D Radar data with offline density reduction, leading to a significantly reduced raw data size. An optimized development kit for training the neural networks is also provided, which, along with the utilization of 4DSRT, improves training speed by a factor of 17.1 compared to the state-of-the-art 4DRT-based neural networks. All codes are available at: https://github.com/kaist-avelab/K-Radar.

### TranSG: Transformer-Based Skeleton Graph Prototype Contrastive Learning with Structure-Trajectory Prompted Reconstruction for Person Re-Identification

- **Authors:** Haocong Rao, Chunyan Miao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06819
- **Pdf link:** https://arxiv.org/pdf/2303.06819
- **Abstract**
  Person re-identification (re-ID) via 3D skeleton data is an emerging topic with prominent advantages. Existing methods usually design skeleton descriptors with raw body joints or perform skeleton sequence representation learning.
However, they typically cannot concurrently model different body-component relations, and rarely explore useful semantics from fine-grained representations of body joints. In this paper, we propose a generic Transformer-based Skeleton Graph prototype contrastive learning (TranSG) approach with structure-trajectory prompted reconstruction to fully capture skeletal relations and valuable spatial-temporal semantics from skeleton graphs for person re-ID. Specifically, we first devise the Skeleton Graph Transformer (SGT) to simultaneously learn body and motion relations within skeleton graphs, so as to aggregate key correlative node features into graph representations. Then, we propose Graph Prototype Contrastive learning (GPC) to mine the most typical graph features (graph prototypes) of each identity, and contrast the inherent similarity between graph representations and different prototypes from both skeleton and sequence levels to learn discriminative graph representations. Last, a graph Structure-Trajectory Prompted Reconstruction (STPR) mechanism is proposed to exploit the spatial and temporal contexts of graph nodes to prompt skeleton graph reconstruction, which facilitates capturing more valuable patterns and graph semantics for person re-ID. Empirical evaluations demonstrate that TranSG significantly outperforms existing state-of-the-art methods. We further show its generality under different graph modeling, RGB-estimated skeletons, and unsupervised scenarios.

### DarkVisionNet: Low-Light Imaging via RGB-NIR Fusion with Deep Inconsistency Prior

- **Authors:** Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, Renhe Ji
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.06834
- **Pdf link:** https://arxiv.org/pdf/2303.06834
- **Abstract**
  RGB-NIR fusion is a promising method for low-light imaging. However, high-intensity noise in low-light images amplifies the effect of structure inconsistency between RGB-NIR images, which makes existing algorithms fail. To handle this, we propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior (DIP). The Deep Structure extracts clear structure details in a deep multiscale feature space rather than the raw input space, which is more robust to noisy inputs. Based on the deep structures from both RGB and NIR domains, we introduce the DIP to leverage the structure inconsistency to guide the fusion of RGB-NIR. Benefiting from this, the proposed DVN obtains high-quality low-light images without visual artifacts. We also propose a new dataset called the Dark Vision Dataset (DVD), consisting of aligned RGB-NIR image pairs, as the first public RGB-NIR fusion benchmark. Quantitative and qualitative results on the proposed benchmark show that DVN significantly outperforms other comparison algorithms in PSNR and SSIM, especially in extremely low-light conditions.

### Robust Contrastive Language-Image Pretraining against Adversarial Attacks

- **Authors:** Wenhan Yang, Baharan Mirzasoleiman
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2303.06854
- **Pdf link:** https://arxiv.org/pdf/2303.06854
- **Abstract**
  Contrastive vision-language representation learning has achieved state-of-the-art performance for zero-shot classification, by learning from millions of image-caption pairs crawled from the internet. However, the massive data that powers large multimodal models such as CLIP makes them extremely vulnerable to various types of adversarial attacks, including targeted and backdoor data poisoning attacks.
Despite this vulnerability, robust contrastive vision-language pretraining against adversarial attacks has remained unaddressed. In this work, we propose RoCLIP, the first effective method for robust pretraining (and fine-tuning) of multimodal vision-language models. RoCLIP effectively breaks the association between poisoned image-caption pairs by considering a pool of random examples, and (1) matching every image with the text that is most similar to its caption in the pool, and (2) matching every caption with the image that is most similar to its image in the pool. Our extensive experiments show that our method renders state-of-the-art targeted data poisoning and backdoor attacks ineffective during pre-training or fine-tuning of CLIP. In particular, RoCLIP decreases the poison and backdoor attack success rates down to 0% during pre-training and 1%-4% during fine-tuning, and effectively improves the model's performance.

### Pretrained ViTs Yield Versatile Representations For Medical Images

- **Authors:** Christos Matsoukas, Johan Fredin Haslum, Magnus Söderberg, Kevin Smith
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07034
- **Pdf link:** https://arxiv.org/pdf/2303.07034
- **Abstract**
  Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Over the last years, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore the benefits and drawbacks of transformer-based models for medical image classification. We conduct a series of experiments on several standard 2D medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers can perform on par with CNNs when pretrained on ImageNet, both in a supervised and self-supervised setting, rendering them as a viable alternative to CNNs.

### Mobile Mapping Mesh Change Detection and Update

- **Authors:** Teng Wu, Bruno Vallet, Cédric Demonceaux
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.07182
- **Pdf link:** https://arxiv.org/pdf/2303.07182
- **Abstract**
  Mobile mapping, in particular Mobile Lidar Scanning (MLS), is increasingly widespread to monitor and map urban scenes at city scale with unprecedented resolution and accuracy. The resulting point cloud sampling of the scene geometry can be meshed in order to create a continuous representation for different applications: visualization, simulation, navigation, etc. Because of the highly dynamic nature of these urban scenes, long-term mapping should rely on frequent map updates. A trivial solution is to simply replace old data with newer data each time a new acquisition is made. However, it has two drawbacks: 1) the old data may be of higher quality (resolution, precision) than the new, and 2) the coverage of the scene might be different in various acquisitions, including varying occlusions. In this paper, we propose a fully automatic pipeline to address these two issues by formulating the problem of merging meshes with different quality, coverage and acquisition time. Our method is based on a combined distance- and visibility-based change detection, a time series analysis to assess the sustainability of changes, a mesh mosaicking based on a global boolean optimization, and finally a stitching of the resulting mesh pieces' boundaries with triangle strips. Finally, our method is demonstrated on Robotcar and Stereopolis datasets.

## Keyword: raw image

There is no result
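As a toy illustration of the pool-matching idea in the RoCLIP entry above: each caption is replaced by the pool caption most similar to it, which breaks a possibly poisoned image-caption association. The snippet simplifies heavily (the paper operates on CLIP embeddings during training; the vectors and function names here are hypothetical):

```python
def cos(a, b):
    """Cosine similarity between two plain-list feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

def pool_match(caption_feat, caption_pool):
    """Return the index of the pool caption most similar to caption_feat --
    step (1) of the defense sketched in the abstract."""
    return max(range(len(caption_pool)),
               key=lambda i: cos(caption_pool[i], caption_feat))
```

Step (2) of the defense is symmetric, matching each caption to the most similar pooled image embedding.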
robust degradation remover to first transform the degraded image to a coarse but degradation invariant prediction then employ an enhancement module to restore the coarse prediction to a high quality image by leveraging a well performing denoising diffusion probabilistic model our diffuses input images to a noisy status where various types of degradation give way to gaussian noise and then captures semantic information through iterative denoising steps as a result is robust against common degradation e g blur resize noise and compression and compatible with different designs of enhancement modules experiments in various settings show that our framework outperforms state of the art methods on heavily degraded synthetic and real world datasets keyword raw enhanced k radar optimal density reduction to improve detection performance and accessibility of radar tensor based object detection authors dong hee paek seung hyun kong kevin tirta wijaya subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract recent works have shown the superior robustness of four dimensional radar based three dimensional object detection in adverse weather conditions however processing radar data remains a challenge due to the large data size which require substantial amount of memory for computing and storage in previous work an online density reduction is performed on the radar tensor to reduce the data size in which the density reduction level is chosen arbitrarily however the impact of density reduction on the detection performance and memory consumption remains largely unknown in this paper we aim to address this issue by conducting extensive hyperparamter tuning on the density reduction level experimental results show that increasing the density level from to of the original density level proportionally improves the detection performance at a cost of memory consumption however when the density level is increased beyond only the 
memory consumption increases while the detection performance oscillates below the peak point in addition to the optimized density hyperparameter we also introduce sparse radar tensor a new representation for radar data with offline density reduction leading to a significantly reduced raw data size an optimized development kit for training the neural networks is also provided which along with the utilization of improves training speed by a factor of compared to the state of the art based neural networks all codes are available at transg transformer based skeleton graph prototype contrastive learning with structure trajectory prompted reconstruction for person re identification authors haocong rao chunyan miao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract person re identification re id via skeleton data is an emerging topic with prominent advantages existing methods usually design skeleton descriptors with raw body joints or perform skeleton sequence representation learning however they typically cannot concurrently model different body component relations and rarely explore useful semantics from fine grained representations of body joints in this paper we propose a generic transformer based skeleton graph prototype contrastive learning transg approach with structure trajectory prompted reconstruction to fully capture skeletal relations and valuable spatial temporal semantics from skeleton graphs for person re id specifically we first devise the skeleton graph transformer sgt to simultaneously learn body and motion relations within skeleton graphs so as to aggregate key correlative node features into graph representations then we propose the graph prototype contrastive learning gpc to mine the most typical graph features graph prototypes of each identity and contrast the inherent similarity between graph representations and different prototypes from both skeleton and sequence levels to learn discriminative graph representations 
last a graph structure trajectory prompted reconstruction stpr mechanism is proposed to exploit the spatial and temporal contexts of graph nodes to prompt skeleton graph reconstruction which facilitates capturing more valuable patterns and graph semantics for person re id empirical evaluations demonstrate that transg significantly outperforms existing state of the art methods we further show its generality under different graph modeling rgb estimated skeletons and unsupervised scenarios darkvisionnet low light imaging via rgb nir fusion with deep inconsistency prior authors shuangping jin bingbing yu minhao jing yi zhou jiajun liang renhe ji subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract rgb nir fusion is a promising method for low light imaging however high intensity noise in low light images amplifies the effect of structure inconsistency between rgb nir images which fails existing algorithms to handle this we propose a new rgb nir fusion algorithm called dark vision net dvn with two technical novelties deep structure and deep inconsistency prior dip the deep structure extracts clear structure details in deep multiscale feature space rather than raw input space which is more robust to noisy inputs based on the deep structures from both rgb and nir domains we introduce the dip to leverage the structure inconsistency to guide the fusion of rgb nir benefiting from this the proposed dvn obtains high quality lowlight images without the visual artifacts we also propose a new dataset called dark vision dataset dvd consisting of aligned rgb nir image pairs as the first public rgbnir fusion benchmark quantitative and qualitative results on the proposed benchmark show that dvn significantly outperforms other comparison algorithms in psnr and ssim especially in extremely low light conditions robust contrastive language image pretraining against adversarial attacks authors wenhan yang baharan mirzasoleiman subjects computer vision and 
pattern recognition cs cv computation and language cs cl cryptography and security cs cr machine learning cs lg arxiv link pdf link abstract contrastive vision language representation learning has achieved state of the art performance for zero shot classification by learning from millions of image caption pairs crawled from the internet however the massive data that powers large multimodal models such as clip makes them extremely vulnerable to various types of adversarial attacks including targeted and backdoor data poisoning attacks despite this vulnerability robust contrastive vision language pretraining against adversarial attacks has remained unaddressed in this work we propose roclip the first effective method for robust pretraining and fine tuning multimodal vision language models roclip effectively breaks the association between poisoned image caption pairs by considering a pool of random examples and matching every image with the text that is most similar to its caption in the pool and matching every caption with the image that is most similar to its image in the pool our extensive experiments show that our method renders state of the art targeted data poisoning and backdoor attacks ineffective during pre training or fine tuning of clip in particular roclip decreases the poison and backdoor attack success rates down to during pre training and during fine tuning and effectively improves the model s performance pretrained vits yield versatile representations for medical images authors christos matsoukas johan fredin haslum magnus söderberg kevin smith subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract convolutional neural networks cnns have reigned for a decade as the de facto approach to automated medical image diagnosis pushing the state of the art in classification detection and segmentation tasks over the last years vision transformers vits have appeared as a competitive alternative to cnns yielding impressive levels of 
performance in the natural image domain while possessing several interesting properties that could prove beneficial for medical imaging tasks in this work we explore the benefits and drawbacks of transformer based models for medical image classification we conduct a series of experiments on several standard medical image benchmark datasets and tasks our findings show that while cnns perform better if trained from scratch off the shelf vision transformers can perform on par with cnns when pretrained on imagenet both in a supervised and self supervised setting rendering them as a viable alternative to cnns mobile mapping mesh change detection and update authors teng wu bruno vallet cédric demonceaux subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract mobile mapping in particular mobile lidar scanning mls is increasingly widespread to monitor and map urban scenes at city scale with unprecedented resolution and accuracy the resulting point cloud sampling of the scene geometry can be meshed in order to create a continuous representation for different applications visualization simulation navigation etc because of the highly dynamic nature of these urban scenes long term mapping should rely on frequent map updates a trivial solution is to simply replace old data with newer data each time a new acquisition is made however it has two drawbacks the old data may be of higher quality resolution precision than the new and the coverage of the scene might be different in various acquisitions including varying occlusions in this paper we propose a fully automatic pipeline to address these two issues by formulating the problem of merging meshes with different quality coverage and acquisition time our method is based on a combined distance and visibility based change detection a time series analysis to assess the sustainability of changes a mesh mosaicking based on a global boolean optimization and finally a stitching of the resulting mesh pieces 
boundaries with triangle strips finally our method is demonstrated on robotcar and stereopolis datasets keyword raw image there is no result
1
11,754
14,590,040,716
IssuesEvent
2020-12-19 05:24:35
pingcap/tidb
https://api.github.com/repos/pingcap/tidb
closed
JSON_EXTRACT fails to cast as bool
component/coprocessor component/json correctness good-first-issue severity/major sig/execution status/help-wanted type/bug
## Bug Report Please answer these questions before submitting your issue. Thanks! 1. What did you do? in both MySQL and TiDB, run the following: ``` create database testjson; use testjson; create table testjson( id int auto_increment not null primary key, j json )default charset=utf8 engine=innodb; insert into testjson set j='{"test":1}'; select id from testjson where json_extract(j, '$.test'); ``` 2. What did you expect to see? MySQL returns this: ``` +----+ | id | +----+ | 1 | +----+ 1 row in set (0.001 sec) ``` 3. What did you see instead? TiDB returns this: ``` ERROR 1105 (HY000): InvalidDataType("can\'t convert Json(1) to bool") ``` 4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)? v3.0.0-rc.1-256-g1ddb31720
1.0
JSON_EXTRACT fails to cast as bool - ## Bug Report Please answer these questions before submitting your issue. Thanks! 1. What did you do? in both MySQL and TiDB, run the following: ``` create database testjson; use testjson; create table testjson( id int auto_increment not null primary key, j json )default charset=utf8 engine=innodb; insert into testjson set j='{"test":1}'; select id from testjson where json_extract(j, '$.test'); ``` 2. What did you expect to see? MySQL returns this: ``` +----+ | id | +----+ | 1 | +----+ 1 row in set (0.001 sec) ``` 3. What did you see instead? TiDB returns this: ``` ERROR 1105 (HY000): InvalidDataType("can\'t convert Json(1) to bool") ``` 4. What version of TiDB are you using (`tidb-server -V` or run `select tidb_version();` on TiDB)? v3.0.0-rc.1-256-g1ddb31720
process
json extract fails to cast as bool bug report please answer these questions before submitting your issue thanks what did you do in both mysql and tidb run the following create database testjson use testjson create table testjson id int auto increment not null primary key j json default charset engine innodb insert into testjson set j test select id from testjson where json extract j test what did you expect to see mysql returns this id row in set sec what did you see instead tidb returns this error invaliddatatype can t convert json to bool what version of tidb are you using tidb server v or run select tidb version on tidb rc
1
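The coercion this record exercises can be sketched in Python. This is only an illustration of the MySQL behavior the reporter expected (a JSON numeric scalar is truthy when nonzero), not TiDB or MySQL source code, and the helper name is invented for the example; the fallback cases are an assumption, not the full MySQL coercion table.

```python
import json

def json_scalar_is_true(doc: str) -> bool:
    # Illustrative only: evaluate a JSON value in a boolean context
    # the way the expected MySQL result implies, i.e. a numeric
    # scalar counts as true when it is nonzero.
    value = json.loads(doc)
    if isinstance(value, bool):
        return value
    if isinstance(value, (int, float)):
        return value != 0
    # Objects, arrays, strings: assumed truthy here (simplification).
    return True

# json_extract(j, '$.test') on '{"test":1}' yields JSON 1 -> true
print(json_scalar_is_true("1"))  # True
```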
480,635
13,864,296,749
IssuesEvent
2020-10-16 01:01:52
ChromaKey81/the-creepers-source-code
https://api.github.com/repos/ChromaKey81/the-creepers-source-code
closed
"Gems Galore" advancement description is now inaccurate
priority: low
The description of "Gems Galore" states "Create a gemstone from a crystalline item by dropping it on a stonecutter," but gemstones are no longer produced by dropping the source item onto the stonecutter.
1.0
"Gems Galore" advancement description is now inaccurate - The description of "Gems Galore" states "Create a gemstone from a crystalline item by dropping it on a stonecutter," but gemstones are no longer produced by dropping the source item onto the stonecutter.
non_process
gems galore advancement description is now inaccurate the description of gems galore states create a gemstone from a crystalline item by dropping it on a stonecutter but gemstones are no longer produced by dropping the source item onto the stonecutter
0
154,514
13,551,752,013
IssuesEvent
2020-09-17 11:35:48
ahnaf-zamil/zenora
https://api.github.com/repos/ahnaf-zamil/zenora
closed
Contributing.MD misses requirements on Pre-Commit Hooks Installation
documentation good first issue
**Describe the bug** Documentation misses requirements for git hooks installation. 1. Install pre-commit: `pip install pre-commit` 2. Add pre-commit to `requirements.txt` 3. Define `.pre-commit-config.yaml` with the hooks you want to include. 4. Execute `pre-commit install` to install git hooks in your `.git/` directory. (I am not sure, but I believe there are other dependencies to be installed)
1.0
Contributing.MD misses requirements on Pre-Commit Hooks Installation - **Describe the bug** Documentation misses requirements for git hooks installation. 1. Install pre-commit: `pip install pre-commit` 2. Add pre-commit to `requirements.txt` 3. Define `.pre-commit-config.yaml` with the hooks, you want to include. 4. Execute `pre-commit install` to install git hooks in your `.git/` directory. (I am not sure, but I believe there are other dependencies to be installed)
non_process
contributing md misses requirements on pre commit hooks installation describe the bug documentation misses requirements for git hooks installation install pre commit pip install pre commit add pre commit to requirements txt define pre commit config yam l with the hooks you want to include execute pre commit install to install git hooks in your git directory i am not sure but i believe there are other dependencies to be installed
0
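The four steps in this record leave two artifacts on disk, which a small Python check can verify after setup. A sketch only: the file names (`.pre-commit-config.yaml` at the repo root, the `.git/hooks/pre-commit` hook that `pre-commit install` writes) are the standard ones pre-commit uses, and the function name is invented for the example.

```python
from pathlib import Path

def precommit_setup_status(repo: str = ".") -> dict:
    """Report whether the repo shows the artifacts of the steps above:
    the config file (step 3) and the installed git hook (step 4)."""
    root = Path(repo)
    return {
        "config_present": (root / ".pre-commit-config.yaml").is_file(),
        "hook_installed": (root / ".git" / "hooks" / "pre-commit").is_file(),
    }
```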
368,554
10,880,973,221
IssuesEvent
2019-11-17 14:49:37
OrkanMetin/SWE-573
https://api.github.com/repos/OrkanMetin/SWE-573
opened
Learn JSON LD
Activity: Research Priority: Low Status: To Do
I will be examining JSON LD or JSON Schema to use in my project. Watching videos: https://www.youtube.com/watch?v=vioCbTo3C-4 What is JSON-LD?
1.0
Learn JSON LD - I will be examining JSON LD or JSON Schema to use in my project. Watching videos: https://www.youtube.com/watch?v=vioCbTo3C-4 What is JSON-LD?
non_process
learn json ld i will be examining json ld or json schema to use in my project watching videos what is json ld
0
1,609
4,226,469,089
IssuesEvent
2016-07-02 13:42:43
iTXTech/Genisys
https://api.github.com/repos/iTXTech/Genisys
closed
W10 edition bugs
bug processing
- [ ] Not sure why, but on the Win10 edition nothing can be crafted after joining the server (after crafting, the output reverts to the raw materials a moment later, as if it lagged; in fact several servers behave the same way). Unable to craft in W10 Edition. - [x] Also on Win10: after joining the server, pressing **E** opens the inventory, but pressing **E** again to close it throws the held item, similar to the **Q** key. This is not the expected behavior. Player throws things when using "E" to close the Inventory. This behavior is unexpected. - [ ] Items are not dropped. When you try to drop an item by dragging it from the inventory window, it just disappears. - [ ] In creative, items cannot be put into the hotbar unless the player is holding the chosen hotbar slot.
1.0
W10 edition bugs - - [ ] Not sure why, but on the Win10 edition nothing can be crafted after joining the server (after crafting, the output reverts to the raw materials a moment later, as if it lagged; in fact several servers behave the same way). Unable to craft in W10 Edition. - [x] Also on Win10: after joining the server, pressing **E** opens the inventory, but pressing **E** again to close it throws the held item, similar to the **Q** key. This is not the expected behavior. Player throws things when using "E" to close the Inventory. This behavior is unexpected. - [ ] Items are not dropped. When you try to drop an item by dragging it from the inventory window, it just disappears. - [ ] In creative, items cannot be put into the hotbar unless the player is holding the chosen hotbar slot.
process
edition bugs not sure why but on the edition nothing can be crafted after joining the server after crafting the output reverts to the raw materials a moment later as if it lagged in fact several servers behave the same way unable to craft in edition also after joining the server pressing e opens the inventory but pressing e again to close it throws the held item similar to the q key this is not the expected behavior player throw things when use e to close the inventory this behavior is unexpected items are not dropped when you try to drop an item dragging it from inventory window it just disappears in creative items cannot be put into the hotbar unless the player is holding the chosen hotbar slot
1
270,324
20,597,698,212
IssuesEvent
2022-03-05 19:24:21
bloguetronica/cp2130-qt
https://api.github.com/repos/bloguetronica/cp2130-qt
closed
Error in comment inside "cp2130.cpp", line 417
documentation
The comment on line 417 contains an error. Where it is written "corresponds to bits 3:0 of byte 0", should be written "corresponds to bits 2:0 of byte 0".
1.0
Error in comment inside "cp2130.cpp", line 417 - The comment on line 417 contains an error. Where it is written "corresponds to bits 3:0 of byte 0", should be written "corresponds to bits 2:0 of byte 0".
non_process
error in comment inside cpp line the comment on line contains an error where it is written corresponds to bits of byte should be written corresponds to bits of byte
0
130,530
18,076,552,432
IssuesEvent
2021-09-21 10:31:50
etcd-io/website
https://api.github.com/repos/etcd-io/website
opened
Site search appears in two places (desktop)
design e0-minutes ux
For wide displays, site search appears in two places -- see below for a screenshot. It should only appear in the top-nav in this case. Note that for medium displays the top-nav search disappears. For narrow displays, the site search is just above the title. Both of these are ok. <img width="1152" alt="Screen Shot 2021-09-21 at 6 28 31 AM" src="https://user-images.githubusercontent.com/4140793/134155491-47751256-3970-40d7-af43-c598dfe860e8.png">
1.0
Site search appears in two places (desktop) - For wide displays, site search appears in two places -- see below for a screenshot. It should only appear in the top-nav in this case. Note that for medium displays the top-nav search disappears. For narrow displays, the site search is just above the title. Both of these are ok. <img width="1152" alt="Screen Shot 2021-09-21 at 6 28 31 AM" src="https://user-images.githubusercontent.com/4140793/134155491-47751256-3970-40d7-af43-c598dfe860e8.png">
non_process
site search appears in two places desktop for wide displays site search appears in two places see below for a screenshot it should only appear in the top nav in this case note that for medium displays the top nav search disappears for narrow displays the site search is just above the title both of these are ok img width alt screen shot at am src
0
156
2,581,536,365
IssuesEvent
2015-02-14 04:34:27
tinkerpop/tinkerpop3
https://api.github.com/repos/tinkerpop/tinkerpop3
opened
[Proposal] Provide support for OLAP to OLTP to OLAP to OLTP
enhancement process
I'm trying to figure out how we can, within a "single traversal", move between OLAP and OLTP at different sections of the traversal. E.g. ```groovy [g.V.out.has('age',lt,25)]OLAP[out('parent').out('workPlace')]OLTP[out('coworkers').age.groupCount]OLAP ``` Going from OLAP to OLTP is easy. We have solved that already as OLAP queries return a `Traversal<S,E>` and thus, can be further processed in OLTP. But what about going from OLTP back into OLAP? We need to be able to stream the OLTP results back into traversers on the vertices of the graph -- TinkerGraph (easy), Hadoop (dynamic editing of the disk format!? crazy) .. is there a general pattern that works for all graphs? Finally, what about when the objects are NOT vertices/edges/etc. See the next issue. @mbroecheler
1.0
[Proposal] Provide support for OLAP to OLTP to OLAP to OLTP - I'm trying to figure out how we can, within a "single traversal", move between OLAP and OLTP at different sections of the traversal. E.g. ```groovy [g.V.out.has('age',lt,25)]OLAP[out('parent').out('workPlace')]OLTP[out('coworkers').age.groupCount]OLAP ``` Going from OLAP to OLTP is easy. We have solved that already as OLAP queries return a `Traversal<S,E>` and thus, can be further processed in OLTP. But what about going from OLTP back into OLAP? We need to be able to stream the OLTP results back into traversers on the vertices of the graph -- TinkerGraph (easy), Hadoop (dynamic editing of the disk format!? crazy) .. is there a general pattern that works for all graphs? Finally, what about when the objects are NOT vertices/edges/etc. See the next issue. @mbroecheler
process
provide support for olap to oltp to olap to oltp i m trying to figure out how we can within a single traversal move between olap and oltp at different sections of the traversal e g groovy olap oltp olap going from olap to oltp is easy we have solved that already as olap queries return a traversal and thus can be further processed in oltp but what about going from oltp back into olap we need to be able to stream the oltp results back into traversers on the vertices of the graph tinkergraph easy hadoop dynamic editing of the disk format crazy is there a general pattern that works for all graphs finally what about when the objects are not vertices edges etc see the next issue mbroecheler
1
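The asymmetry this proposal describes (OLAP emitting a traversal that OLTP consumes lazily is easy; pushing a lazy stream back onto vertices is the open problem) can be caricatured in a few lines of Python. This is a toy illustration of the shapes involved, not TinkerPop code; all names are invented.

```python
def olap_stage(vertices, step):
    # Batch/OLAP: evaluate the step over every vertex at once and
    # materialize the results (like a vertex-program superstep).
    return [out for v in vertices for out in step(v)]

def oltp_stage(traversers, step):
    # Streaming/OLTP: continue each traverser lazily, one at a time.
    for t in traversers:
        yield from step(t)

# Toy graph: adjacency by vertex id.
graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
neighbors = lambda v: graph[v]

# OLAP -> OLTP is easy: the batch result is just an iterable.
frontier = olap_stage([1], neighbors)        # [2, 3]
stream = oltp_stage(frontier, neighbors)     # lazily yields 4, 4
# OLTP -> OLAP is the hard direction: the stream has to be
# re-grouped onto vertices before another batch stage can run.
regrouped = sorted(set(stream))              # [4]
print(olap_stage(regrouped, neighbors))      # []
```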
36,719
6,547,377,649
IssuesEvent
2017-09-04 14:31:38
Sciss/ScalaColliderUGens
https://api.github.com/repos/Sciss/ScalaColliderUGens
closed
improve README: extend list in de/sciss/synth/UGenSpec.scala when adding ugen xml spec
Documentation
I think the UGen needs to be registered in the list in order to build it. This could be mentioned in the tutorial
1.0
improve README: extend list in de/sciss/synth/UGenSpec.scala when adding ugen xml spec - I think the UGen needs to be registered in the list in order to build it. This could be mentioned in the tutorial
non_process
improve readme extend list in de sciss synth ugenspec scala when adding ugen xml spec i think the ugen needs to be registered in the list in order to build it this could be mentioned in the tutorial
0
7,011
10,153,340,576
IssuesEvent
2019-08-06 04:02:52
linnovate/root
https://api.github.com/repos/linnovate/root
closed
duplicated task only shows up in the list after refreshing the page
Fixed Process bug Tasks
Go to tasks. Create a new task. Press the three dots menu in the task template. Press "duplicate task". The duplicated task doesn't show in the main pane; it only appears after refreshing the page.
1.0
duplicated task only shows up in the list after refreshing the page - Go to tasks. Create a new task. Press the three dots menu in the task template. Press "duplicate task". The duplicated task doesn't show in the main pane; it only appears after refreshing the page.
process
duplicated task only shows up in the list after refreshing the page go to tasks create a new task press on the three dots menu in the task template press on duplicate task the duplicated task doesnt show in the main pane it only appears after refreshing the page
1
3,834
4,727,676,814
IssuesEvent
2016-10-18 14:07:58
nodejs/node
https://api.github.com/repos/nodejs/node
opened
security,unix: audit use of process.env in lib/ for setuid binary
security
Functions like `os.tmpdir()` and `Module._initPaths()` use file paths from environment variables. This is unsafe when the node binary has the setuid bit set - i.e., when it runs with the privileges of a different user (usually root) than the user executing it - because it can be used to read or write files that otherwise wouldn't be accessible. On the C++ side we have `secure_getenv()` which checks that the real uid and gid match the effective uid and gid before accessing an environment variable. Perhaps we need something similar for JS land. Caveat emptor: our implementation of `secure_getenv()` does not take Linux process capabilities into consideration but neither does glibc's, as far as I can tell.
True
security,unix: audit use of process.env in lib/ for setuid binary - Functions like `os.tmpdir()` and `Module._initPaths()` use file paths from environment variables. This is unsafe when the node binary has the setuid bit set - i.e., when it runs with the privileges of a different user (usually root) than the user executing it - because it can be used to read or write files that otherwise wouldn't be accessible. On the C++ side we have `secure_getenv()` which checks that the real uid and gid match the effective uid and gid before accessing an environment variable. Perhaps we need something similar for JS land. Caveat emptor: our implementation of `secure_getenv()` does not take Linux process capabilities into consideration but neither does glibc's, as far as I can tell.
non_process
security unix audit use of process env in lib for setuid binary functions like os tmpdir and module initpaths use file paths from environment variables this is unsafe when the node binary has the setuid bit set i e when it runs with the privileges of a different user usually root than the user executing it because it can be used to read or write files that otherwise wouldn t be accessible on the c side we have secure getenv which checks that the real uid and gid match the effective uid and gid before accessing an environment variable perhaps we need something similar for js land caveat emptor our implementation of secure getenv does not take linux process capabilities into consideration but neither does glibc s as far as i can tell
0
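The guard this record describes is short enough to sketch in Python. It mirrors the idea of glibc's `secure_getenv()` (refuse the environment when real and effective ids differ, i.e. the process runs setuid/setgid) and, like glibc's version, it ignores Linux process capabilities; the `environ` parameter is added here only to make the sketch testable, and is not part of the real API.

```python
import os

def secure_getenv(name, environ=os.environ):
    # Refuse to consult the environment when the process may have
    # elevated privileges: real uid/gid must equal effective uid/gid.
    if os.getuid() != os.geteuid() or os.getgid() != os.getegid():
        return None
    return environ.get(name)

# e.g. an os.tmpdir()-style lookup that cannot be hijacked via TMPDIR
# in a setuid binary:
tmpdir = secure_getenv("TMPDIR", {"TMPDIR": "/tmp"}) or "/tmp"
```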
22,347
31,023,060,062
IssuesEvent
2023-08-10 07:13:33
elastic/beats
https://api.github.com/repos/elastic/beats
closed
translate_sid - Mark processor as GA
libbeat :Processors Team:Security-External Integrations
The `translate_sid` processor is marked as a beta feature. It has been thoroughly tested through many deployments so let's mark it as GA. - Update the asciidocs. - Remove any logger statements from the code that warn about it being beta.
1.0
translate_sid - Mark processor as GA - The `translate_sid` processor is marked as a beta feature. It has been thoroughly tested through many deployments so let's mark it as GA. - Update the asciidocs. - Remove any logger statements from the code that warn about it being beta.
process
translate sid mark processor as ga the translate sid processor is marked as a beta feature it has been thoroughly tested through many deployments so let s mark it as ga update the asciidocs remove any logger statements from the code that warn about it being beta
1
3,167
6,223,995,943
IssuesEvent
2017-07-10 13:21:29
dzhw/zofar
https://api.github.com/repos/dzhw/zofar
opened
archive / filing process closed surveys
category: service.processes prio: 2 status: discussion type: backlog.item
**Participant:** - not involved **User:** - I want to take a look at closed surveys without much effort **Service:** - not involved **Dev:** - we want a process so that the prod system holds no more information than currently needed
1.0
archive / filing process closed surveys - **Participant:** - not involved **User:** - I want to take a look at closed surveys without much effort **Service:** - not involved **Dev:** - we want a process so that the prod system holds no more information than currently needed
process
archive filing process closed surveys participant not involved user i want to take a look at closed surveys without much efford service not involved dev we want a procress to have not more information on the prod system as currently needed
1
42,050
5,412,647,282
IssuesEvent
2017-03-01 15:02:21
jupyterlab/jupyterlab
https://api.github.com/repos/jupyterlab/jupyterlab
closed
Selected cell "moves out of sight" when adding a new cell below
cat:Design and UX component:Notebook type:Bug
The title is maybe not the best description, but a gif is probably clearer. What I do here in the gif, is simply pressing a few times 'b' to add new cells below the existing one (and scrolling down at the end): ![peek 2017-02-13 14-39](https://cloud.githubusercontent.com/assets/1020496/22885636/8cc4b4ac-f1fa-11e6-9927-3bbd48ac3414.gif) As you can see, the selected cell (the new cell that is added below) moves out of sight, as the view does not update and does not scroll down automatically with the selected cell. In the classic notebook, the selected cell keeps being focused and the view is updated on each 'b' stroke. Reported with latest jupyterlab from conda-forge (0.16.0), using Firefox on Ubuntu 16.04.
1.0
Selected cell "moves out of sight" when adding a new cell below - The title is maybe not the best description, but a gif is probably clearer. What I do here in the gif, is simply pressing a few times 'b' to add new cells below the existing one (and scrolling down at the end): ![peek 2017-02-13 14-39](https://cloud.githubusercontent.com/assets/1020496/22885636/8cc4b4ac-f1fa-11e6-9927-3bbd48ac3414.gif) As you can see, the selected cell (the new cell that is added below) moves out of sight, as the view does not update and does not scroll down automatically with the selected cell. In the classic notebook, the selected cell keeps being focused and the view is updated on each 'b' stroke. Reported with latest jupyterlab from conda-forge (0.16.0), using Firefox on Ubuntu 16.04.
non_process
selected cell moves out of sight when adding a new cell below the title is maybe not the best description but a gif is probably clearer what i do here in the gif is simply pressing a few times b to add new cells below the existing one and scrolling down at the end as you can see the selected cell the new cell that is added below moves out of sight as the view does not update and does not scroll down automatically with the selected cell in the classic notebook the selected cell keeps being focused and the view is updated on each b stroke reported with latest jupyterlab from conda forge using firefox on ubuntu
0
15,234
9,887,905,780
IssuesEvent
2019-06-25 10:13:53
andrewfstratton/quando
https://api.github.com/repos/andrewfstratton/quando
closed
Add Placeholder (empty text) to inventor input
enhancement usability
Set through placeholder="whatever" * Need to fix width of input field - not set using placeholder if greater than normal input (since it's empty, it uses min).
True
Add Placeholder (empty text) to inventor input - Set through placeholder="whatever" * Need to fix width of input field - not set using placeholder if greater than normal input (since it's empty, it uses min).
non_process
add placeholder empty text to inventor input set through placeholder whatever need to fix width of input field not set using placeholder if greater than normal input since it s empty it uses min
0
54,616
11,268,215,715
IssuesEvent
2020-01-14 05:21:36
backdrop/backdrop-issues
https://api.github.com/repos/backdrop/backdrop-issues
closed
Possible regression in 1.15.x-dev when adding blocks with file fields
pr - needs code review pr - works for me status - has pull request type - bug report
**Description of the bug** When newly adding certain types of blocks to a layout, they don't appear on the "Manage blocks" page after clicking "Add block". Also the message "This form has unsaved changes. ..." won't show. After flushing the layout cache via admin menu, or even simply reloading the layout form page, both show in the form (block and message). **Steps To Reproduce** 1. Go to a layout and add a hero block 2. Save the block Please note: not all types of blocks seem affected. It only affects block types with a file field like the core Hero block type. Nothing in dblog, nothing in error_log. **Expected behavior** Block and message should show up. **Additional information** - Backdrop CMS version: 1.15.x-dev (latest) - Web server and its version: Apache 2.4 - PHP version 7.2 --- PR by @indigoxela: https://github.com/backdrop/backdrop/pull/3037
1.0
Possible regression in 1.15.x-dev when adding blocks with file fields - **Description of the bug** When newly adding certain types of blocks to a layout, they don't appear on the "Manage blocks" page after clicking "Add block". Also the message "This form has unsaved changes. ..." won't show. After flushing the layout cache via admin menu, or even simply reloading the layout form page, both show in the form (block and message). **Steps To Reproduce** 1. Go to a layout and add a hero block 2. Save the block Please note: not all types of blocks seem affected. It only affects block types with a file field like the core Hero block type. Nothing in dblog, nothing in error_log. **Expected behavior** Block and message should show up. **Additional information** - Backdrop CMS version: 1.15.x-dev (latest) - Web server and its version: Apache 2.4 - PHP version 7.2 --- PR by @indigoxela: https://github.com/backdrop/backdrop/pull/3037
non_process
possible regression in x dev when adding blocks with file fields description of the bug when newly adding certain types of blocks to a layout they don t appear on the manage blocks page after clicking add block also the message this form has unsaved changes won t show after flushing the layout cache via admin menu or even simply reloading the layout form page both show in the form block and message steps to reproduce go to a layout and add a hero block save the block please note not all types of blocks seem affected it only affects block types with a file field like the core hero block type nothing in dblog nothing in error log expected behavior block and message should show up additional information backdrop cms version x dev latest web server and its version apache php version pr by indigoxela
0
5,364
2,772,144,906
IssuesEvent
2015-05-02 11:16:59
breaker27/smarthomatic
https://api.github.com/repos/breaker27/smarthomatic
closed
RGB dimmer: Add brightness message support
Feature Firmware Testing
Support brightness message. Set internal value additionally to the e2p value. Resulting brightness is then user_brightness * e2p_brightness. Use case for supporting brightness message is as follows: The user can change the brightness depending on the brightness in the room or the time of the day. He can then use the same animation messages to create basically the same animations (but with different brightness).
1.0
RGB dimmer: Add brightness message support - Support brightness message. Set internal value additionally to the e2p value. Resulting brightness is then user_brightness * e2p_brightness. Use case for supporting brightness message is as follows: The user can change the brightness depending on the brightness in the room or the time of the day. He can then use the same animation messages to create basically the same animations (but with different brightness).
non_process
rgb dimmer add brightness message support support brightness message set internal value additionally to the value resulting brightness is then user brightness brightness use case for supporting brightness message is as follows the user can change the brightness depending on the brightness in the room or the time of the day he can then use the same animation messages to create basically the same animations but with different brightness
0
259
2,683,898,358
IssuesEvent
2015-03-28 12:56:04
FG-Team/HCJ-Website-Builder
https://api.github.com/repos/FG-Team/HCJ-Website-Builder
opened
Improvement of the Code Conventions
Processing
The code conventions are to be (further) completed. Please report here any errors, omissions, and inconsistencies that you notice. I will then incorporate them into the conventions.
1.0
Improvement of the Code Conventions - The code conventions are to be (further) completed. Please report here any errors, omissions, and inconsistencies that you notice. I will then incorporate them into the conventions.
process
improvement of the code conventions the code conventions are to be further completed please report here any errors omissions and inconsistencies that you notice i will then incorporate them into the conventions
1
578,313
17,146,594,632
IssuesEvent
2021-07-13 15:11:46
GoogleCloudPlatform/python-docs-samples
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
closed
cloudiot_mqtt_image_test: test_image_recv failed
api: cloudiot flakybot: issue priority: p1 samples type: bug
This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/325e6648-2604-4a2d-a611-53de8a0492aa), [Sponge](http://sponge2/325e6648-2604-4a2d-a611-53de8a0492aa) status: failed <details><summary>Test output</summary><br><pre>args = (name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" ,) kwargs = {'metadata': [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test...b1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f136c2d2ba8> request = name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" timeout = None metadata = [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, wait_for_ready, compression) > return 
_end_unary_response_blocking(state, call, False, None) .nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7f136c150ba8> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f136c2e2088> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.FAILED_PRECONDITION E details = "The registry with the name 'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty." E debug_error_string = "{"created":"@1626168504.755314011","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"The registry with the name 'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty.","grpc_status":9}" E > .nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: test_topic = name: "projects/python-docs-samples-tests/topics/test-device-events-topic-c58f6227-ebf7-4ad6-b8ae-e0519b5c22aa" @pytest.fixture(scope="session") def test_registry_id(test_topic): @backoff.on_exception(backoff.expo, HttpError, max_time=60) def create_registry(): manager.open_registry( service_account_json, project_id, cloud_region, test_topic.name, registry_id ) create_registry() yield registry_id @backoff.on_exception(backoff.expo, 
HttpError, max_time=60) def delete_registry(): try: manager.delete_registry( service_account_json, project_id, cloud_region, registry_id ) except NotFound as e: # We ignore this case. print("The registry doesn't exist: detail: {}".format(str(e))) > delete_registry() fixtures.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py:94: in retry ret = target(*args, **kwargs) fixtures.py:105: in delete_registry service_account_json, project_id, cloud_region, registry_id ../manager/manager.py:217: in delete_registry client.delete_device_registry(request={"name": registry_path}) .nox/py-3-6/lib/python3.6/site-packages/google/cloud/iot_v1/services/device_manager/client.py:666: in delete_device_registry request, retry=retry, timeout=timeout, metadata=metadata, .nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:290: in retry_wrapped_func on_error=on_error, .nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:188: in retry_target return target() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" ,) kwargs = {'metadata': [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test...b1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: return callable_(*args, **kwargs) except grpc.RpcError as exc: > six.raise_from(exceptions.from_grpc_error(exc), exc) E google.api_core.exceptions.FailedPrecondition: 400 The registry with the name 
'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty. .nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:69: FailedPrecondition</pre></details>
1.0
cloudiot_mqtt_image_test: test_image_recv failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: b99df8d36109e4fe3e397bfd2cbacac06960340c buildURL: [Build Status](https://source.cloud.google.com/results/invocations/325e6648-2604-4a2d-a611-53de8a0492aa), [Sponge](http://sponge2/325e6648-2604-4a2d-a611-53de8a0492aa) status: failed <details><summary>Test output</summary><br><pre>args = (name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" ,) kwargs = {'metadata': [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test...b1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: > return callable_(*args, **kwargs) .nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:67: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f136c2d2ba8> request = name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" timeout = None metadata = [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')] credentials = None, wait_for_ready = None, compression = None def __call__(self, request, timeout=None, metadata=None, credentials=None, wait_for_ready=None, compression=None): state, call, = self._blocking(request, timeout, metadata, credentials, 
wait_for_ready, compression) > return _end_unary_response_blocking(state, call, False, None) .nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:946: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ state = <grpc._channel._RPCState object at 0x7f136c150ba8> call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f136c2e2088> with_call = False, deadline = None def _end_unary_response_blocking(state, call, with_call, deadline): if state.code is grpc.StatusCode.OK: if with_call: rendezvous = _MultiThreadedRendezvous(state, call, None, deadline) return state.response, rendezvous else: return state.response else: > raise _InactiveRpcError(state) E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: E status = StatusCode.FAILED_PRECONDITION E details = "The registry with the name 'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty." 
E debug_error_string = "{"created":"@1626168504.755314011","description":"Error received from peer ipv4:74.125.142.95:443","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"The registry with the name 'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty.","grpc_status":9}" E > .nox/py-3-6/lib/python3.6/site-packages/grpc/_channel.py:849: _InactiveRpcError The above exception was the direct cause of the following exception: test_topic = name: "projects/python-docs-samples-tests/topics/test-device-events-topic-c58f6227-ebf7-4ad6-b8ae-e0519b5c22aa" @pytest.fixture(scope="session") def test_registry_id(test_topic): @backoff.on_exception(backoff.expo, HttpError, max_time=60) def create_registry(): manager.open_registry( service_account_json, project_id, cloud_region, test_topic.name, registry_id ) create_registry() yield registry_id @backoff.on_exception(backoff.expo, HttpError, max_time=60) def delete_registry(): try: manager.delete_registry( service_account_json, project_id, cloud_region, registry_id ) except NotFound as e: # We ignore this case. 
print("The registry doesn't exist: detail: {}".format(str(e))) > delete_registry() fixtures.py:111: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .nox/py-3-6/lib/python3.6/site-packages/backoff/_sync.py:94: in retry ret = target(*args, **kwargs) fixtures.py:105: in delete_registry service_account_json, project_id, cloud_region, registry_id ../manager/manager.py:217: in delete_registry client.delete_device_registry(request={"name": registry_path}) .nox/py-3-6/lib/python3.6/site-packages/google/cloud/iot_v1/services/device_manager/client.py:666: in delete_device_registry request, retry=retry, timeout=timeout, metadata=metadata, .nox/py-3-6/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py:145: in __call__ return wrapped_func(*args, **kwargs) .nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:290: in retry_wrapped_func on_error=on_error, .nox/py-3-6/lib/python3.6/site-packages/google/api_core/retry.py:188: in retry_target return target() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ args = (name: "projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465" ,) kwargs = {'metadata': [('x-goog-request-params', 'name=projects/python-docs-samples-tests/locations/us-central1/registries/test...b1c58347ca9297ff360e849399-1626168465'), ('x-goog-api-client', 'gl-python/3.6.13 grpc/1.38.1 gax/1.31.0 gapic/2.2.0')]} @six.wraps(callable_) def error_remapped_callable(*args, **kwargs): try: return callable_(*args, **kwargs) except grpc.RpcError as exc: > six.raise_from(exceptions.from_grpc_error(exc), exc) E google.api_core.exceptions.FailedPrecondition: 400 The registry with the name 'projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-f3acbfb1c58347ca9297ff360e849399-1626168465' can't be deleted because it's not empty. 
.nox/py-3-6/lib/python3.6/site-packages/google/api_core/grpc_helpers.py:69: FailedPrecondition</pre></details>
non_process
cloudiot mqtt image test test image recv failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output args name projects python docs samples tests locations us registries test registry kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs nox py lib site packages google api core grpc helpers py self request name projects python docs samples tests locations us registries test registry timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox py lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode failed precondition e details the registry with the name projects python docs samples tests locations us registries test registry can t be deleted because it s not empty e debug error string created description error received from peer file src core lib surface call cc file line grpc message the registry with the name projects python docs samples tests locations us registries test registry can t be deleted because it s not empty grpc status e nox py lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception test topic name projects python docs samples tests topics test device events topic 
pytest fixture scope session def test registry id test topic backoff on exception backoff expo httperror max time def create registry manager open registry service account json project id cloud region test topic name registry id create registry yield registry id backoff on exception backoff expo httperror max time def delete registry try manager delete registry service account json project id cloud region registry id except notfound as e we ignore this case print the registry doesn t exist detail format str e delete registry fixtures py nox py lib site packages backoff sync py in retry ret target args kwargs fixtures py in delete registry service account json project id cloud region registry id manager manager py in delete registry client delete device registry request name registry path nox py lib site packages google cloud iot services device manager client py in delete device registry request retry retry timeout timeout metadata metadata nox py lib site packages google api core gapic method py in call return wrapped func args kwargs nox py lib site packages google api core retry py in retry wrapped func on error on error nox py lib site packages google api core retry py in retry target return target args name projects python docs samples tests locations us registries test registry kwargs metadata six wraps callable def error remapped callable args kwargs try return callable args kwargs except grpc rpcerror as exc six raise from exceptions from grpc error exc exc e google api core exceptions failedprecondition the registry with the name projects python docs samples tests locations us registries test registry can t be deleted because it s not empty nox py lib site packages google api core grpc helpers py failedprecondition
0
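The traceback in the record above relies on `backoff.on_exception` with exponential backoff and a give-up deadline. A minimal pure-Python sketch of the same retry pattern, without the third-party library (function and parameter names here are illustrative):

```python
import time

def retry_with_backoff(fn, exceptions, max_tries=4, base_delay=0.01):
    """Call fn(), retrying on the given exceptions with exponential
    backoff; re-raise the last exception once max_tries is exhausted."""
    for attempt in range(max_tries):
        try:
            return fn()
        except exceptions:
            if attempt == max_tries - 1:
                raise  # give up: propagate the final failure
            time.sleep(base_delay * (2 ** attempt))
```

This mirrors how the fixture's `delete_registry` keeps retrying transient `HttpError`s while still surfacing a persistent failure such as the `FailedPrecondition` above.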
12,584
14,991,277,097
IssuesEvent
2021-01-29 08:04:26
panther-labs/panther
https://api.github.com/repos/panther-labs/panther
opened
Limit the number of output files by the rules engine
bug p1 team:data processing
### Describe the bug The rules engine will output one S3 object per alert/batch. In case of a rule with a big number of matches and high dedup string cardinality, this means we can have a big number of output S3 objects. This can cause several problems: 1. Alerts API is slow to return the events that matched such an alert, since it has to list potentially 1000s of s3 objects 2. High S3 costs (lots of S3 requests) 3. Slow Athena queries ### Steps to reproduce Steps to reproduce the behavior: 1. Create a rule that matches all incoming events 2. Create a high cardinality dedup string e.g. use `p_row_id` 3. See alerts being generated. 4. Try to access one such alert, experience the timeout. ### Expected behavior Rules Engine should output one S3 object per batch/log type/rule containing events from multiple alerts. ### Environment How are you deploying or using Panther? - Panther version or commit: 1.15.2
1.0
Limit the number of output files by the rules engine - ### Describe the bug The rules engine will output one S3 object per alert/batch. In case of a rule with a big number of matches and high dedup string cardinality, this means we can have a big number of output S3 objects. This can cause several problems: 1. Alerts API is slow to return the events that matched such an alert, since it has to list potentially 1000s of s3 objects 2. High S3 costs (lots of S3 requests) 3. Slow Athena queries ### Steps to reproduce Steps to reproduce the behavior: 1. Create a rule that matches all incoming events 2. Create a high cardinality dedup string e.g. use `p_row_id` 3. See alerts being generated. 4. Try to access one such alert, experience the timeout. ### Expected behavior Rules Engine should output one S3 object per batch/log type/rule containing events from multiple alerts. ### Environment How are you deploying or using Panther? - Panther version or commit: 1.15.2
process
limit the number of output files by the rules engine describe the bug the rules engine will output one object per alert batch in case of a rule with a big number of matches and high dedup string cardinality this means we can have a big number of output objects this can cause several problems alerts api is slow to return the events that matched such an alert since it has to list potentially of objects high costs lots of requests slow athena queries steps to reproduce steps to reproduce the behavior create a rule that matches all incoming events create a high cardinality dedup string e g use p row id see alerts being generated try to access one such alert experience the timeout expected behavior rules engine should output one object per batch log type rule containing events from multiple alerts environment how are you deploying or using panther panther version or commit
1
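The Panther record above proposes writing one output object per batch/log type/rule rather than one per alert. A sketch of that grouping step — the field names (`log_type`, `rule_id`, `alert_id`) are assumptions for illustration, not Panther's actual schema:

```python
from collections import defaultdict

def group_events_for_output(events):
    """Group matched events by (log_type, rule_id) so that each output
    object holds events from many alerts, instead of emitting one
    object per alert/batch."""
    batches = defaultdict(list)
    for event in events:
        key = (event["log_type"], event["rule_id"])
        batches[key].append(event)
    return dict(batches)
```

With a high-cardinality dedup key, this collapses thousands of per-alert objects into one object per rule and log type, which is what the expected behavior in the issue asks for.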
10,728
13,530,643,291
IssuesEvent
2020-09-15 20:15:15
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
"Join attributes by nearest" tool defaults to "None" when using Max distance of 0
Bug Processing
**Describe the bug** Using "Join attributes by nearest" will interpret a value of 0 for Max distance as "None". I believe this is a bug, because you can leave this parameter unspecified, which more accurately represents "None". **How to Reproduce** Data: [join_attributes_by_nearest_bug_data.zip](https://github.com/qgis/QGIS/files/5223442/join_attributes_by_nearest_bug_data.zip) Load data in the attached zip file into a QGIS document Specify inputs as follows: ![image](https://user-images.githubusercontent.com/13814358/93184647-cb5a1b80-f73c-11ea-9fb7-58a0a3723005.png) Despite no polygons containing more than 2 points, it creates 10 polygons for each input polygon, even though the "max distance" parameter is specified as 0. **QGIS and OS versions** QGIS version | 3.10.7-A Coruña | QGIS code revision | 7b4ca4c8d0 -- | -- | -- | -- Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0 PostgreSQL Client Version | 11.5 | SpatiaLite Version | 4.3.0 QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020 OS Version | Windows Server 2016 (10.0) Active python plugins | quick_map_services; db_manager; MetaSearch; processing **Additional context** <!-- Add any other context about the problem here. -->
1.0
"Join attributes by nearest" tool defaults to "None" when using Max distance of 0 - **Describe the bug** Using "Join attributes by nearest" will interpret a value of 0 for Max distance as "None". I believe this is a bug, because you can leave this parameter unspecified, which more accurately represents "None". **How to Reproduce** Data: [join_attributes_by_nearest_bug_data.zip](https://github.com/qgis/QGIS/files/5223442/join_attributes_by_nearest_bug_data.zip) Load data in the attached zip file into a QGIS document Specify inputs as follows: ![image](https://user-images.githubusercontent.com/13814358/93184647-cb5a1b80-f73c-11ea-9fb7-58a0a3723005.png) Despite no polygons containing more than 2 points, it creates 10 polygons for each input polygon, even though the "max distance" parameter is specified as 0. **QGIS and OS versions** QGIS version | 3.10.7-A Coruña | QGIS code revision | 7b4ca4c8d0 -- | -- | -- | -- Compiled against Qt | 5.11.2 | Running against Qt | 5.11.2 Compiled against GDAL/OGR | 3.0.4 | Running against GDAL/OGR | 3.0.4 Compiled against GEOS | 3.8.1-CAPI-1.13.3 | Running against GEOS | 3.8.1-CAPI-1.13.3 Compiled against SQLite | 3.29.0 | Running against SQLite | 3.29.0 PostgreSQL Client Version | 11.5 | SpatiaLite Version | 4.3.0 QWT Version | 6.1.3 | QScintilla2 Version | 2.10.8 Compiled against PROJ | 6.3.2 | Running against PROJ | Rel. 6.3.2, May 1st, 2020 OS Version | Windows Server 2016 (10.0) Active python plugins | quick_map_services; db_manager; MetaSearch; processing **Additional context** <!-- Add any other context about the problem here. -->
process
join attributes by nearest tool defaults to none when using max distance of describe the bug using join attributes by nearest will interpret a value of for max distance as none i believe this is a bug because you can leave this parameter unspecified which more accurately represents none how to reproduce data load data in the attached zip file into a qgis document specify inputs as follows despite no polygons containing more than points it creates polygons for each input polygon even though the max distance parameter is specified as qgis and os versions qgis version a coruña qgis code revision compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel may os version windows server active python plugins quick map services db manager metasearch processing additional context
1
20,670
27,335,050,126
IssuesEvent
2023-02-26 04:35:17
cse442-at-ub/project_s23-cinco
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
closed
Create a forgot my password frame and create interactivity to the buttons to go to and from the new frame
Processing Task Sprint 1
Test 1: Go to figma homepage: https://www.figma.com/file/5qJUyXFUAdbtiobIQYqH20/Project-Prototype?node-id=151%3A3&t=QH31e7QFg894LkNj-0 click on log in click on "forgot your password" see that it takes you to the "reset password" frame. confirm that the "Back to Login" link takes you back to the login screen
1.0
Create a forgot my password frame and create interactivity to the buttons to go to and from the new frame - Test 1: Go to figma homepage: https://www.figma.com/file/5qJUyXFUAdbtiobIQYqH20/Project-Prototype?node-id=151%3A3&t=QH31e7QFg894LkNj-0 click on log in click on "forgot your password" see that it takes you to the "reset password" frame. confirm that the "Back to Login" link takes you back to the login screen
process
create a forgot my password frame and create interactivity to the buttons to go to and from the new frame test go to figma homepage click on log in click on forgot your password see that it takes you to the reset password frame confirm that the back to login link takes you back to the login screen
1
825,409
31,388,822,634
IssuesEvent
2023-08-26 04:32:09
decline-cookies/anvil-unity-dots
https://api.github.com/repos/decline-cookies/anvil-unity-dots
opened
Task Driver - Migrate from Entity Keyed Tasks to RequestIDs
effort-intense priority-medium type-feature
Using Entities to key tasks becomes limiting when we start using Tasks Drivers in more advanced ways. Particularly if we setup circular or recursive uses of the same task driver instance. In other words, it lets us create complex hierarchies where a task request can make its way through the same Task Driver multiple times. Right now cancel is preventing a proper sequential unwind when the same Task Driver instance is visited multiple times in the hierarchy. All task instances that are keyed on the same entity will be cancelled at once. ## What do we need? -[] A stable request ID that can be derived for an Entity - Manually requested IDs can be considered ephemeral while an Entity converted to an ID would be stable/permanent. Need to make sure there are no collisions. - Translating the Entity's Index and Version into an ID space that the ephemeral IDs don't occupy is probably the best bet. -[] Create a mechanism to map and connect a request from one ID to the next so a virtual hierarchy may be built for cancellation. Need to be able to bridge between requestID changes if it's the same conceptual intent but needs a new ID to be uniquely identified as it passes through a task driver again. ## When does this come up? - World Unique task drivers that prevent them from being added into multiple task driver hierarchies. - Heavy, frequently depended on Task Drivers that developers don't want to have many instances of. Example: ContextFiltered -> QuestTD -> Navigate ->(new requestID) ContextFiltered -> QuestTD -> Navigate -> ...
1.0
Task Driver - Migrate from Entity Keyed Tasks to RequestIDs - Using Entities to key tasks becomes limiting when we start using Tasks Drivers in more advanced ways. Particularly if we setup circular or recursive uses of the same task driver instance. In other words, it lets us create complex hierarchies where a task request can make its way through the same Task Driver multiple times. Right now cancel is preventing a proper sequential unwind when the same Task Driver instance is visited multiple times in the hierarchy. All task instances that are keyed on the same entity will be cancelled at once. ## What do we need? -[] A stable request ID that can be derived for an Entity - Manually requested IDs can be considered ephemeral while an Entity converted to an ID would be stable/permanent. Need to make sure there are no collisions. - Translating the Entity's Index and Version into an ID space that the ephemeral IDs don't occupy is probably the best bet. -[] Create a mechanism to map and connect a request from one ID to the next so a virtual hierarchy may be built for cancellation. Need to be able to bridge between requestID changes if it's the same conceptual intent but needs a new ID to be uniquely identified as it passes through a task driver again. ## When does this come up? - World Unique task drivers that prevent them from being added into multiple task driver hierarchies. - Heavy, frequently depended on Task Drivers that developers don't want to have many instances of. Example: ContextFiltered -> QuestTD -> Navigate ->(new requestID) ContextFiltered -> QuestTD -> Navigate -> ...
non_process
task driver migrate from entity keyed tasks to requestids using entities to key tasks becomes limiting when we start using tasks drivers in more advanced ways particularly if we setup circular or recursive uses of the same task driver instance in other words it lets us create complex hierarchies where a task request can make its way through the same task driver multiple times right now cancel is preventing a proper sequential unwind when the same task driver instance is visited multiple times in the hierarchy all task instances that are keyed on the same entity will be cancelled at once what do we need a stable request id that can be derived for an entity manually requested ids can be considered ephemeral while an entity converted to an id would be stable permanent need to make sure there are no collisions translating the entity s index and version into an id space that the ephemeral ids don t occupy is probably the best bet create a mechanism to map and connect a request from one id to the next so a virtual hierarchy may be built for cancellation need to be able to bridge between requestid changes if it s the same conceptual intent but needs a new id to be uniquely identified as it passes through a task driver again when does this come up world unique task drivers that prevent them from being added into multiple task driver hierarchies heavy frequently depended on task drivers that developers don t want to have many instances of example contextfiltered questtd navigate new requestid contextfiltered questtd navigate
0
21,829
30,318,346,368
IssuesEvent
2023-07-10 17:13:00
geneontology/go-ontology
https://api.github.com/repos/geneontology/go-ontology
closed
NTR symbiont-mediated activation of host interferon signaling pathway
multi-species process
@genegodbold to provide references
1.0
NTR symbiont-mediated activation of host interferon signaling pathway - @genegodbold to provide references
process
ntr symbiont mediated activation of host interferon signaling pathway genegodbold to provide references
1
18,995
3,411,293,291
IssuesEvent
2015-12-05 01:09:59
benk691/HomeProjects
https://api.github.com/repos/benk691/HomeProjects
opened
Design a budget nicer layout
design
This is a design task to figure out whether this should just be a Latex PDF print out of everything (status, budget, ...) or if you want to move towards a GUI.
1.0
Design a budget nicer layout - This is a design task to figure out whether this should just be a Latex PDF print out of everything (status, budget, ...) or if you want to move towards a GUI.
non_process
design a budget nicer layout this is a design task to figure out whether this should just be a latex pdf print out of everything status budget or if you want to move towards a gui
0
19,399
25,539,604,428
IssuesEvent
2022-11-29 14:27:53
zotero/zotero
https://api.github.com/repos/zotero/zotero
closed
Link Mendeley citations in documents to imported items
Word Processor Integration
From [Mendeley Import](https://www.zotero.org/support/kb/mendeley_import): > When using the Zotero word processor plugins, document citations created with Mendeley won’t currently be linked to imported citations in your Zotero database. Zotero’s word processor plugins can, however, read Mendeley citations and their embedded metadata, so you can continue using the same documents with Zotero and insert additional instances of those citations by choosing from the “Cited” section of the search results in the citation dialog. We store Mendeley item UUIDs for imported items. I haven't checked what metadata Mendeley stores in field codes, but if they're using the UUID, it would be nice to automatically use the imported item, both for metadata edits and for disambiguation (e.g., so that adding the imported item from the library itself wouldn't add a duplicate bibliography entry). We could also consider just rewriting the item as a Zotero citation on refresh, which would probably be easier, but that might cause problems for people trying to collaborate with other Mendeley users.
1.0
Link Mendeley citations in documents to imported items - From [Mendeley Import](https://www.zotero.org/support/kb/mendeley_import): > When using the Zotero word processor plugins, document citations created with Mendeley won’t currently be linked to imported citations in your Zotero database. Zotero’s word processor plugins can, however, read Mendeley citations and their embedded metadata, so you can continue using the same documents with Zotero and insert additional instances of those citations by choosing from the “Cited” section of the search results in the citation dialog. We store Mendeley item UUIDs for imported items. I haven't checked what metadata Mendeley stores in field codes, but if they're using the UUID, it would be nice to automatically use the imported item, both for metadata edits and for disambiguation (e.g., so that adding the imported item from the library itself wouldn't add a duplicate bibliography entry). We could also consider just rewriting the item as a Zotero citation on refresh, which would probably be easier, but that might cause problems for people trying to collaborate with other Mendeley users.
process
link mendeley citations in documents to imported items from when using the zotero word processor plugins document citations created with mendeley won’t currently be linked to imported citations in your zotero database zotero’s word processor plugins can however read mendeley citations and their embedded metadata so you can continue using the same documents with zotero and insert additional instances of those citations by choosing from the “cited” section of the search results in the citation dialog we store mendeley item uuids for imported items i haven t checked what metadata mendeley stores in field codes but if they re using the uuid it would be nice to automatically use the imported item both for metadata edits and for disambiguation e g so that adding the imported item from the library itself wouldn t add a duplicate bibliography entry we could also consider just rewriting the item as a zotero citation on refresh which would probably be easier but that might cause problems for people trying to collaborate with other mendeley users
1
18,339
24,460,272,291
IssuesEvent
2022-10-07 10:28:24
Open-Data-Product-Initiative/open-data-product-spec
https://api.github.com/repos/Open-Data-Product-Initiative/open-data-product-spec
closed
Data Pipeline component and element renaming to DataOps
enhancement processed
``` "dataOps": { "infrastructure": { "platform": "Azure", "storageTechnology": "Azure SQL", "storageType": "sql", "containerTool": "helm", "format": "yaml", "status": "development", "scriptURL": "http://192.168.10.1/rundatapipeline.yml", "deploymentDocumentationURL": "http://192.168.10.1/datapipeline", "hashType": "SHA-2", "checksum": "7b7444ab8f5832e9ae8f54834782af995d0a83b4a1d77a75833eda7e19b4c921" } } ``` DataOps is intended for workflow automation, so it is the most descriptive name for the component and element. In the example, the data product is in SQL format, and container or multi-container-group is deployed by running a YAML script.
1.0
Data Pipeline component and element renaming to DataOps - ``` "dataOps": { "infrastructure": { "platform": "Azure", "storageTechnology": "Azure SQL", "storageType": "sql", "containerTool": "helm", "format": "yaml", "status": "development", "scriptURL": "http://192.168.10.1/rundatapipeline.yml", "deploymentDocumentationURL": "http://192.168.10.1/datapipeline", "hashType": "SHA-2", "checksum": "7b7444ab8f5832e9ae8f54834782af995d0a83b4a1d77a75833eda7e19b4c921" } } ``` DataOps is intended for workflow automation, so it is the most descriptive name for the component and element. In the example, the data product is in SQL format, and container or multi-container-group is deployed by running a YAML script.
process
data pipeline component and element renaming to dataops dataops infrastructure platform azure storagetechnology azure sql storagetype sql containertool helm format yaml status development scripturl deploymentdocumentationurl hashtype sha checksum dataops is intended for workflow automation so it is the most descriptive name for the component and element in the example the data product is in sql format and container or multi container group is deployed by running a yaml script
1
209,934
16,326,026,062
IssuesEvent
2021-05-12 01:09:12
WesleyBranton/Custom-Scrollbar
https://api.github.com/repos/WesleyBranton/Custom-Scrollbar
opened
Document settings profile feature
P3 documentation
A settings profile system has been added to the add-on (see #56). This feature needs some documentation on the wiki.
1.0
Document settings profile feature - A settings profile system has been added to the add-on (see #56). This feature needs some documentation on the wiki.
non_process
document settings profile feature a settings profile system has been added to the add on see this feature needs some documentation on the wiki
0
12,624
15,015,824,810
IssuesEvent
2021-02-01 08:49:21
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
Add new admin/Edit admin details page > Role > Label should be changed to 'Assign superadmin role'
Bug P2 Participant manager Process: Dev Process: Fixed Process: Tested dev
AR : Role is displaying as 'Super Admin' ER : Add new user page > Role > 'Super Admin' label should be changed to 'Assign superadmin role' ![role](https://user-images.githubusercontent.com/71445210/100261797-4d529600-2f71-11eb-83ff-5a6aa5ec70aa.png)
3.0
Add new admin/Edit admin details page > Role > Label should be changed to 'Assign superadmin role' - AR : Role is displaying as 'Super Admin' ER : Add new user page > Role > 'Super Admin' label should be changed to 'Assign superadmin role' ![role](https://user-images.githubusercontent.com/71445210/100261797-4d529600-2f71-11eb-83ff-5a6aa5ec70aa.png)
process
add new admin edit admin details page role label should be changed to assign superadmin role ar role is displaying as super admin er add new user page role super admin label should be changed to assign superadmin role
1
81,458
23,468,726,717
IssuesEvent
2022-08-16 19:25:57
DynamoRIO/dynamorio
https://api.github.com/repos/DynamoRIO/dynamorio
closed
docs fail to build with doxygen 1.9.4
Component-Build
With doxygen 1.9.4, first we have an error about the CLASS_GRAPH option being set when the deprecated CLASS_DIAGRAMS is set. Then we have: ``` CMake Error at src/api/docs/CMake_rundoxygen.cmake:154 (message): *** /usr/bin/doxygen failed: *** build_x64_dbg_tests/include/dr_config.h:683: warning: The following parameter of dr_register_client(const char *process_name, process_id_t pid, bool global, dr_platform_t dr_platform, client_id_t client_id, size_t client_pri, const char *client_path, const char *client_options) is not documented: parameter 'client_options' build_x64_dbg_tests/include/dr_events.h:133: warning: found documented return type for dr_register_bb_event that does not return anything build_x64_dbg_tests/include/dr_events.h:367: warning: found documented return type for dr_register_trace_event that does not return anything src/api/docs/deployment.dox:751: warning: Invalid list item found src/api/docs/deployment.dox:751: warning: Invalid list item found ``` I understand the two return type ones and I can fix those. I do not understand the others yet.
1.0
docs fail to build with doxygen 1.9.4 - With doxygen 1.9.4, first we have an error about the CLASS_GRAPH option being set when the deprecated CLASS_DIAGRAMS is set. Then we have: ``` CMake Error at src/api/docs/CMake_rundoxygen.cmake:154 (message): *** /usr/bin/doxygen failed: *** build_x64_dbg_tests/include/dr_config.h:683: warning: The following parameter of dr_register_client(const char *process_name, process_id_t pid, bool global, dr_platform_t dr_platform, client_id_t client_id, size_t client_pri, const char *client_path, const char *client_options) is not documented: parameter 'client_options' build_x64_dbg_tests/include/dr_events.h:133: warning: found documented return type for dr_register_bb_event that does not return anything build_x64_dbg_tests/include/dr_events.h:367: warning: found documented return type for dr_register_trace_event that does not return anything src/api/docs/deployment.dox:751: warning: Invalid list item found src/api/docs/deployment.dox:751: warning: Invalid list item found ``` I understand the two return type ones and I can fix those. I do not understand the others yet.
non_process
docs fail to build with doxygen with doxygen first we have an error about the class graph option being set when the deprecated class diagrams is set then we have cmake error at src api docs cmake rundoxygen cmake message usr bin doxygen failed build dbg tests include dr config h warning the following parameter of dr register client const char process name process id t pid bool global dr platform t dr platform client id t client id size t client pri const char client path const char client options is not documented parameter client options build dbg tests include dr events h warning found documented return type for dr register bb event that does not return anything build dbg tests include dr events h warning found documented return type for dr register trace event that does not return anything src api docs deployment dox warning invalid list item found src api docs deployment dox warning invalid list item found i understand the two return type ones and i can fix those i do not understand the others yet
0
15,251
19,189,479,117
IssuesEvent
2021-12-05 19:10:01
Scott-Collier/MA5851_SP86_2021
https://api.github.com/repos/Scott-Collier/MA5851_SP86_2021
closed
Playback format needs to be multi-encoded
Processing
One product can have more than one playback format. This needs to be multi encoded. nan = Not Mentioned
1.0
Playback format needs to be multi-encoded - One product can have more than one playback format. This needs to be multi encoded. nan = Not Mentioned
process
playback format needs to be multi encoded one product can have more than one playback format this needs to be multi encoded nan not mentioned
1
20,756
27,488,882,194
IssuesEvent
2023-03-04 11:19:52
nodejs/node
https://api.github.com/repos/nodejs/node
closed
child_process spawn missing stdout on Ubuntu 20.04
child_process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name --> * **Version**: v12.16.1 * **Platform**: Linux ${edited_hostname} 5.4.0-769-generic #12~1577743911~20.04~05e6ba8-Ubuntu SMP Tue Dec 31 00:30:08 UTC x86_64 x86_64 x86_64 GNU/Linux * **Subsystem**: child_process ### What steps will reproduce the bug? <!-- Enter details about your bug, preferably a simple code snippet that can be run using `node` directly without installing third-party dependencies. --> *Note:* Test code [here](https://github.com/doubleswirve/spawns-on-spawns) as well. Create an executable `a.js`: ```js #!/usr/bin/env node const { spawn } = require("child_process"); spawn("echo", ["-n", "hello, world"], { stdio: "inherit" }); ``` When I run `./a.js`, I receive the `hello, world` output I'm expecting. Then I create another JavaScript file `b.js` that calls `./a.js`: ```js const { spawn } = require("child_process"); let data = ""; let proc = spawn("./a.js", [], { cwd: process.env.PWD }); proc.stdout.on("data", chunk => data += chunk); proc.on("close", code => console.log(`Code: ${code}\nData: ${data}`)); ``` On Linux (unlike macOS), `data` remains an empty string within the `close` event handler. ### How often does it reproduce? Is there a required condition? I receive the same output for every run on Linux. ### What is the expected behavior? <!-- If possible please provide textual output instead of screenshots. --> I expected to receive the same output as macOS (Catalina 10.15.3), i.e., ```txt Code: 0 Data: hello, world ``` ### What do you see instead? <!-- If possible please provide textual output instead of screenshots. --> On Linux, I receive: ```txt Code: 0 Data: ``` ### Additional information <!-- Tell us anything else you think we should know. --> Thanks for all the help!
1.0
child_process spawn missing stdout on Ubuntu 20.04 - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name --> * **Version**: v12.16.1 * **Platform**: Linux ${edited_hostname} 5.4.0-769-generic #12~1577743911~20.04~05e6ba8-Ubuntu SMP Tue Dec 31 00:30:08 UTC x86_64 x86_64 x86_64 GNU/Linux * **Subsystem**: child_process ### What steps will reproduce the bug? <!-- Enter details about your bug, preferably a simple code snippet that can be run using `node` directly without installing third-party dependencies. --> *Note:* Test code [here](https://github.com/doubleswirve/spawns-on-spawns) as well. Create an executable `a.js`: ```js #!/usr/bin/env node const { spawn } = require("child_process"); spawn("echo", ["-n", "hello, world"], { stdio: "inherit" }); ``` When I run `./a.js`, I receive the `hello, world` output I'm expecting. Then I create another JavaScript file `b.js` that calls `./a.js`: ```js const { spawn } = require("child_process"); let data = ""; let proc = spawn("./a.js", [], { cwd: process.env.PWD }); proc.stdout.on("data", chunk => data += chunk); proc.on("close", code => console.log(`Code: ${code}\nData: ${data}`)); ``` On Linux (unlike macOS), `data` remains an empty string within the `close` event handler. ### How often does it reproduce? Is there a required condition? I receive the same output for every run on Linux. ### What is the expected behavior? <!-- If possible please provide textual output instead of screenshots. --> I expected to receive the same output as macOS (Catalina 10.15.3), i.e., ```txt Code: 0 Data: hello, world ``` ### What do you see instead? <!-- If possible please provide textual output instead of screenshots. --> On Linux, I receive: ```txt Code: 0 Data: ``` ### Additional information <!-- Tell us anything else you think we should know. --> Thanks for all the help!
process
child process spawn missing stdout on ubuntu thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name version platform linux edited hostname generic ubuntu smp tue dec utc gnu linux subsystem child process what steps will reproduce the bug enter details about your bug preferably a simple code snippet that can be run using node directly without installing third party dependencies note test code as well create an executable a js js usr bin env node const spawn require child process spawn echo stdio inherit when i run a js i receive the hello world output i m expecting then i create another javascript file b js that calls a js js const spawn require child process let data let proc spawn a js cwd process env pwd proc stdout on data chunk data chunk proc on close code console log code code ndata data on linux unlike macos data remains an empty string within the close event handler how often does it reproduce is there a required condition i receive the same output for every run on linux what is the expected behavior if possible please provide textual output instead of screenshots i expected to receive the same output as macos catalina i e txt code data hello world what do you see instead if possible please provide textual output instead of screenshots on linux i receive txt code data additional information tell us anything else you think we should know thanks for all the help
1
8,852
11,953,125,138
IssuesEvent
2020-04-03 20:15:06
E3SM-Project/E3SM
https://api.github.com/repos/E3SM-Project/E3SM
opened
usr_mech_infile option doesn't build chemistry preprocessor on cori
Atmosphere Chemistry Pre-Processor Cori bug
The CAM_CONFIG_OPTS option "-usr_mech_infile" is really useful for testing new chemistry mechanisms and it is much needed for the NGD AtmPhys tasks of new chemistry mechanism development. It worked in July 2019 as documented [here](https://github.com/E3SM-Project/E3SM/pull/3045#issuecomment-511054628) to recreate the E3SMv1 chemistry code, but it doesn't work on cori with the current master. It complains about missing the tar output from the pre-processor when building the namelists. I tracked down the error and found that the chemistry pre-processor failed to build with components/cam/chem_proc/src/Makefile (COMPILE=pgf95). However, my offline build with components/cam/chem_proc/src/make_chempp.cori (COMPILER=gfortran) is successful. The namelist seems created by a perl script (components/cam/bld/configure). I don't know perl enough to fix it. @singhbalwinder , you used it successfully last year as mentioned above. Do you have any ideas why it fails now? Do you have a fix for it? It seems to me that we need to use make_chempp.cori instead of Makefile on cori, but I am not sure if that will cause problems on other machines. @wlin7 , I know you use perl. Do you have any suggestions? Thanks.
1.0
usr_mech_infile option doesn't build chemistry preprocessor on cori - The CAM_CONFIG_OPTS option "-usr_mech_infile" is really useful for testing new chemistry mechanisms and it is much needed for the NGD AtmPhys tasks of new chemistry mechanism development. It worked in July 2019 as documented [here](https://github.com/E3SM-Project/E3SM/pull/3045#issuecomment-511054628) to recreate the E3SMv1 chemistry code, but it doesn't work on cori with the current master. It complains about missing the tar output from the pre-processor when building the namelists. I tracked down the error and found that the chemistry pre-processor failed to build with components/cam/chem_proc/src/Makefile (COMPILE=pgf95). However, my offline build with components/cam/chem_proc/src/make_chempp.cori (COMPILER=gfortran) is successful. The namelist seems created by a perl script (components/cam/bld/configure). I don't know perl enough to fix it. @singhbalwinder , you used it successfully last year as mentioned above. Do you have any ideas why it fails now? Do you have a fix for it? It seems to me that we need to use make_chempp.cori instead of Makefile on cori, but I am not sure if that will cause problems on other machines. @wlin7 , I know you use perl. Do you have any suggestions? Thanks.
process
usr mech infile option doesn t build chemistry preprocessor on cori the cam config opts option usr mech infile is really useful for testing new chemistry mechanisms and it is much needed for the ngd atmphys tasks of new chemistry mechanism development it worked in july as documented to recreate the chemistry code but it doesn t work on cori with the current master it complains about missing the tar output from the pre processor when building the namelists i tracked down the error and found that the chemistry pre processor failed to build with components cam chem proc src makefile compile however my offline build with components cam chem proc src make chempp cori compiler gfortran is successful the namelist seems created by a perl script components cam bld configure i don t know perl enough to fix it singhbalwinder you used it successfully last year as mentioned above do you have any ideas why it fails now do you have a fix for it it seems to me that we need to use make chempp cori instead of makefile on cori but i am not sure if that will cause problems on other machines i know you use perl do you have any suggestions thanks
1