| Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, len 19) | repo (string, len 5–112) | repo_url (string, len 34–141) | action (string, 3 classes) | title (string, len 1–957) | labels (string, len 4–795) | body (string, len 1–259k) | index (string, 12 classes) | text_combine (string, len 96–259k) | label (string, 2 classes) | text (string, len 96–252k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
434,213 | 12,515,631,951 | IssuesEvent | 2020-06-03 08:01:52 | canonical-web-and-design/build.snapcraft.io | https://api.github.com/repos/canonical-web-and-design/build.snapcraft.io | closed | the unclickable cells are more colorful overall than the clickable one | Priority: Medium | We need to make sure this is tested when we do testing…
_From Google Docs_
> Look at the penultimate row: there are three cells with ✔, but only the first two are clickable. How can you tell the third is not? …Its … text is green? That’s the only clear difference. The down arrows ⌵ are a good move, but it’s still the case that the unclickable cells are more colorful overall than the clickable ones.
| 1.0 | the unclickable cells are more colorful overall than the clickable one - We need to make sure this is tested when we do testing…
_From Google Docs_
> Look at the penultimate row: there are three cells with ✔, but only the first two are clickable. How can you tell the third is not? …Its … text is green? That’s the only clear difference. The down arrows ⌵ are a good move, but it’s still the case that the unclickable cells are more colorful overall than the clickable ones.
| priority | the unclickable cells are more colorful overall than the clickable one we need to make sure this is tested when we do testing… from google docs look at the penultimate row there are three cells with ✔ but only the first two are clickable how can you tell the third is not …its … text is green that’s the only clear difference the down arrows ⌵ are a good move but it’s still the case that the unclickable cells are more colorful overall than the clickable ones | 1 |
741,195 | 25,783,422,673 | IssuesEvent | 2022-12-09 17:59:49 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Add metrics to the RBS code path | kind/enhancement area/docdb priority/medium | Jira Link: [DB-2776](https://yugabyte.atlassian.net/browse/DB-2776)
### Description
Add metrics for the time spent in RBS as well as in the CRC checksum validation step. If the CRC time is a big factor of the total RBS time, then potentially skip the CRC checksum, as the tablet bootstrap will eventually do the validation. | 1.0 | [DocDB] Add metrics to the RBS code path - Jira Link: [DB-2776](https://yugabyte.atlassian.net/browse/DB-2776)
### Description
Add metrics for the time spent in RBS as well as in the CRC checksum validation step. If the CRC time is a big factor of the total RBS time, then potentially skip the CRC checksum, as the tablet bootstrap will eventually do the validation. | priority | add metrics to the rbs code path jira link description add metrics for the time spent in rbs as well as the crc checksum validation part if the crc time is a big factor of the total time for rbs then potentially skip the crc checksum as the tablet bootstrap will eventually do the validation | 1 |
195,787 | 6,918,337,293 | IssuesEvent | 2017-11-29 11:48:03 | vmware/harbor | https://api.github.com/repos/vmware/harbor | closed | Test connection is sending wrong password | area/ui kind/bug priority/medium target/vic-1.3 | In the edit dialog of replication/endpoint.
After setting the password, tested connection and save.
Edit it again and uncheck "verify remote cert" (I'm using an HTTP endpoint).
Click "test connection" again; it always fails. It seems the UI is sending the wrong password.
| 1.0 | Test connection is sending wrong password - In the edit dialog of replication/endpoint.
After setting the password, tested connection and save.
Edit it again and uncheck "verify remote cert" (I'm using an HTTP endpoint).
Click "test connection" again; it always fails. It seems the UI is sending the wrong password.
| priority | test connection is sending wrong password in the edit dialog of replication endpoint after setting the password tested connection and save edit it again uncheck verify remote cert i m using a http endpoint click test connection again it will always fail seems ui is sending the wrong password | 1 |
727,722 | 25,045,416,304 | IssuesEvent | 2022-11-05 06:57:39 | KendallDoesCoding/mogul-christmas | https://api.github.com/repos/KendallDoesCoding/mogul-christmas | closed | [YOUTUBE API] Standard volume for songs played using the website | enhancement help wanted good first issue javascript EddieHub:good-first-issue 🟨 priority: medium Hacktoberfest-Accepted | Currently, the sound of the video/embed depends on the LAST watched YouTube video volume, however, that could've been on mute, then as mentioned in the README.md, the site will not be able to play the music - "Please ensure that the last YouTube video you watched wasn't on mute, otherwise the songs will not play, and the autoplay for the lyrics' directory will automatically be set on mute too."
So, I was researching and it seems we can add some JavaScript (YouTube API) so that the volume of every embed is set to a standard amount chosen by us (i hope this is how it works, i'm unsure).
Set the standard volume to 50/50%
**Reference: https://developers.google.com/youtube/iframe_api_reference#setVolume**
@TechStudent10 can you do this pls? | 1.0 | [YOUTUBE API] Standard volume for songs played using the website - Currently, the sound of the video/embed depends on the LAST watched YouTube video volume, however, that could've been on mute, then as mentioned in the README.md, the site will not be able to play the music - "Please ensure that the last YouTube video you watched wasn't on mute, otherwise the songs will not play, and the autoplay for the lyrics' directory will automatically be set on mute too."
So, I was researching and it seems we can add some JavaScript (YouTube API) so that the volume of every embed is set to a standard amount chosen by us (i hope this is how it works, i'm unsure).
Set the standard volume to 50/50%
**Reference: https://developers.google.com/youtube/iframe_api_reference#setVolume**
@TechStudent10 can you do this pls? | priority | standard volume for songs played using the website currently the sound of the video embed depends on the last watched youtube video volume however that could ve been on mute then as mentioned in the readme md the site will not be able to play the music please ensure that the last youtube video you watched wasn t on mute otherwise the songs will not play and the autoplay for the lyrics directory will automatically be set on mute too so i was researching and it seems we can add some javascript youtube api that the volume of every embed will be standard set amount by us i hope this is how it works i m unsure we can set a standard volume set the standard volume to reference can you do this pls | 1 |
357,020 | 10,600,756,086 | IssuesEvent | 2019-10-10 10:46:02 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | obj: unable to free space when pool is almost full | Exposure: Medium Priority: 4 low Type: Feature | If application requires transaction to free multiple objects to make progress it can get stuck, because pmemobj_tx_free can abort the transaction (freeing more than 8 objects requires allocation).
The prime example is pmemfile - if an application creates a file using all available space, unlinking it fails. I can work around some cases (by carefully using atomic free), but some are not that easy. I could preallocate some space at pool creation time, free it when transactional free fails and restart the transaction, but this is racy (another thread could use what we freed before we got to transactional free).
I think pmemobj should expose some API/ctl to reserve space for internal purposes.
| 1.0 | obj: unable to free space when pool is almost full - If application requires transaction to free multiple objects to make progress it can get stuck, because pmemobj_tx_free can abort the transaction (freeing more than 8 objects requires allocation).
The prime example is pmemfile - if an application creates a file using all available space, unlinking it fails. I can work around some cases (by carefully using atomic free), but some are not that easy. I could preallocate some space at pool creation time, free it when transactional free fails and restart the transaction, but this is racy (another thread could use what we freed before we got to transactional free).
I think pmemobj should expose some API/ctl to reserve space for internal purposes.
| priority | obj unable to free space when pool is almost full if application requires transaction to free multiple objects to make progress it can get stuck because pmemobj tx free can abort the transaction freeing more than objects requires allocation the prime example is pmemfile if application creates a file using all available space unlinking it fails i can work around some cases by carefully using atomic free but some are not that easy i could preallocate some space at pool creation time free it it when transactional free fails and restart the transaction but this is race y another thread could use what we freed before we got to transactional free i think pmemobj should expose some api ctl to reserve space for internal purposes | 1 |
690,347 | 23,654,745,674 | IssuesEvent | 2022-08-26 10:05:19 | projectdiscovery/nuclei | https://api.github.com/repos/projectdiscovery/nuclei | closed | Advanced template filtering | Priority: Medium Status: Completed Type: Enhancement | ### Please describe your feature request:
Allow advanced template filtering for execution with values from **id / info** section
### Describe the use case of this feature:
More control over template execution
### Proposed solutions:
New CLI flag:
```
-tc, -template-condition templates to run based on dsl filter
```
## Examples
```console
nuclei -tc "author=='pdteam' && ('wordpress','xss') in tags"
nuclei -tc "severity=='high' && metadata.verified"
```
```bash
id=='tech-detect' || contains(tags,'wordpress')
contains(author,'pdteam')
tag=='wordpress' || contains(tags,'xss') && !contains(tags,'excluded')
severity=='high'
```
| 1.0 | Advanced template filtering - ### Please describe your feature request:
Allow advanced template filtering for execution with values from **id / info** section
### Describe the use case of this feature:
More control over template execution
### Proposed solutions:
New CLI flag:
```
-tc, -template-condition templates to run based on dsl filter
```
## Examples
```console
nuclei -tc "author=='pdteam' && ('wordpress','xss') in tags"
nuclei -tc "severity=='high' && metadata.verified"
```
```bash
id=='tech-detect' || contains(tags,'wordpress')
contains(author,'pdteam')
tag=='wordpress' || contains(tags,'xss') && !contains(tags,'excluded')
severity=='high'
```
| priority | advanced template filtering please describe your feature request allow advanced template filtering for execution with values from id info section describe the use case of this feature more control over template execution proposed solutions new cli flag tc template condition templates to run based on dsl filer examples console nuclei tc author pdteam wordpress xss in tags nuclei tc severity high metadata verified bash id tect detect contains tags wordpress contains author pdteam tag wordpress contains tags xss contains tags excluded severity high | 1 |
501,383 | 14,527,065,998 | IssuesEvent | 2020-12-14 14:56:21 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | Web component: Content block - segmented Prod QA testing | QA dev complete package: web components priority: medium | <!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
developer using the ibm.com Library `Content block - segmented`
> I need to:
have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms
> so that I can:
be confident that my ibm.com web site users will have a good experience
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- [Browser Stack link](https://ibm.ent.box.com/notes/578734426612)
- [Browser Standard](https://w3.ibm.com/standards/web/browser/)
- Browser versions to be tested: Tier 1 browsers will be tested with defects created as Sev 1 or Sev 2. Tier 2 browser defects will be created as Sev 3 defects.
- Platforms to be tested, by priority: 1) Desktop 2) Mobile 3) Tablet
- Mobile & Tablet iOS versions: 13.1, 13.3 and 14
- Mobile & Tablet Android versions: 9.0 Pie and 8.1 Oreo
- Browsers to be tested: Desktop: Chrome, Firefox, Safari, Edge, Mobile: Chrome, Safari, Samsung Internet, UC Browser, Tablet: Safari, Chrome, Android
- [Accessibility Checklist](https://www.ibm.com/able/guidelines/ci162/accessibility_checklist.html)
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- **See the Epic for the Design and Functional specs information**
- Dev issue (#3790)
- Once development is finished the updated code is available in the [**Web Components Canary Environment**](https://ibmdotcom-web-components-canary.mybluemix.net/?path=/story/overview-getting-started--page) for testing.
- [**React canary environment**](https://ibmdotcom-react-canary.mybluemix.net/?path=/story/overview-getting-started--page)
#### Acceptance criteria
- [ ] Accessibility testing is complete. Component is compliant.
- [ ] All browser versions are tested
- [ ] All operating systems are tested
- [ ] All devices are tested
- [ ] Defects are recorded and retested when fixed | 1.0 | Web component: Content block - segmented Prod QA testing - <!-- Avoid any type of solutions in this user story -->
<!-- replace _{{...}}_ with your own words or remove -->
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
developer using the ibm.com Library `Content block - segmented`
> I need to:
have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms
> so that I can:
be confident that my ibm.com web site users will have a good experience
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- [Browser Stack link](https://ibm.ent.box.com/notes/578734426612)
- [Browser Standard](https://w3.ibm.com/standards/web/browser/)
- Browser versions to be tested: Tier 1 browsers will be tested with defects created as Sev 1 or Sev 2. Tier 2 browser defects will be created as Sev 3 defects.
- Platforms to be tested, by priority: 1) Desktop 2) Mobile 3) Tablet
- Mobile & Tablet iOS versions: 13.1, 13.3 and 14
- Mobile & Tablet Android versions: 9.0 Pie and 8.1 Oreo
- Browsers to be tested: Desktop: Chrome, Firefox, Safari, Edge, Mobile: Chrome, Safari, Samsung Internet, UC Browser, Tablet: Safari, Chrome, Android
- [Accessibility Checklist](https://www.ibm.com/able/guidelines/ci162/accessibility_checklist.html)
- [Creating a QA bug](https://ibm.ent.box.com/notes/603242247385)
- **See the Epic for the Design and Functional specs information**
- Dev issue (#3790)
- Once development is finished the updated code is available in the [**Web Components Canary Environment**](https://ibmdotcom-web-components-canary.mybluemix.net/?path=/story/overview-getting-started--page) for testing.
- [**React canary environment**](https://ibmdotcom-react-canary.mybluemix.net/?path=/story/overview-getting-started--page)
#### Acceptance criteria
- [ ] Accessibility testing is complete. Component is compliant.
- [ ] All browser versions are tested
- [ ] All operating systems are tested
- [ ] All devices are tested
- [ ] Defects are recorded and retested when fixed | priority | web component content block segmented prod qa testing user story as a developer using the ibm com library cotnent block segmented i need to have a version of the component that has been tested for accessibility compliance as well as on multiple browsers and platforms so that i can be confident that my ibm com web site users will have a good experience additional information browser versions to be tested tier browsers will be tested with defects created as sev or sev tier browser defects will be created as sev defects platforms to be tested by priority desktop mobile tablet mobile tablet ios versions and mobile tablet android versions pie and oreo browsers to be tested desktop chrome firefox safari edge mobile chrome safari samsung internet uc browser tablet safari chrome android see the epic for the design and functional specs information dev issue once development is finished the updated code is available in the for testing acceptance criteria accessibility testing is complete component is compliant all browser versions are tested all operating systems are tested all devices are tested defects are recorded and retested when fixed | 1 |
277,917 | 8,634,438,031 | IssuesEvent | 2018-11-22 16:50:27 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | opened | Download signed (by NHS) contract is not available. Demo, #J537 | kind/support priority/medium | 1. Create a contract request.
2. Approve the created request.
3. Wait for the NHS signature.
4. From the "urgent" block, download the signed content via the URL with type "SIGNED_CONTENT".
Created request id - dc0f1e3d-b2b0-4fab-ab22-7e2782f716a2
Details here: https://drive.google.com/drive/u/0/folders/1TRp5TpDFEDsClCv-8aIrIQq-EDm_geJL?ogsrc=32
The priority is standard, but this is contract testing.
| 1.0 | Download signed (by NHS) contract is not available. Demo, #J537 - 1. Create a contract request.
2. Approve the created request.
3. Wait for the NHS signature.
4. From the "urgent" block, download the signed content via the URL with type "SIGNED_CONTENT".
Created request id - dc0f1e3d-b2b0-4fab-ab22-7e2782f716a2
Details here: https://drive.google.com/drive/u/0/folders/1TRp5TpDFEDsClCv-8aIrIQq-EDm_geJL?ogsrc=32
The priority is standard, but this is contract testing.
| priority | download signed by nhs contract is not available demo створюємо заявку підтверджуємо створену заявку чекаємо на підпис нзсу із блоку urgent скачувати підписаний контент по урлі з типом signed content створена заявка id подробиці тут пріорітет стандартний але це тестування контрактів | 1 |
449,120 | 12,963,634,633 | IssuesEvent | 2020-07-20 19:06:51 | ansible/awx | https://api.github.com/repos/ansible/awx | opened | Hide sync icon for smart inventory rows in Inventory List | component:ui_next priority:medium state:needs_devel type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
<img width="1407" alt="Screen Shot 2020-07-20 at 3 05 48 PM" src="https://user-images.githubusercontent.com/9889020/87976119-901fd100-ca9a-11ea-9fd2-976a80a7f55b.png">
Smart inventories don't have sources so this icon is not relevant
| 1.0 | Hide sync icon for smart inventory rows in Inventory List - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
<img width="1407" alt="Screen Shot 2020-07-20 at 3 05 48 PM" src="https://user-images.githubusercontent.com/9889020/87976119-901fd100-ca9a-11ea-9fd2-976a80a7f55b.png">
Smart inventories don't have sources so this icon is not relevant
| priority | hide sync icon for smart inventory rows in inventory list issue type bug report summary img width alt screen shot at pm src smart inventories don t have sources so this icon is not relevant | 1 |
57,612 | 3,083,124,406 | IssuesEvent | 2015-08-24 06:26:56 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | In calcBlockSize, dcassert(aFileSize > 0) fires; // [+] IRainman fix. | bug imported Priority-Medium | _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on May 22, 2013 09:32:52_
To reproduce the crash, share a file with size = 0
1. The cause is in HashManager::Hasher::run()
File f(m_fname, File::READ, File::OPEN);
const int64_t bs = TigerTree::getMaxBlockSize(f.getSize());
2. We try to open the file even when its size = 0,
and after opening we call the API to determine the size, f.getSize(),
although we already obtained the file size just before:
const int64_t l_size = File::getSize(m_fname);
TODO
- If the file is empty, should we skip opening it?
- Even if we opened it, avoid calling f.getSize()?
- Step through this code under a debugger.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1043_ | 1.0 | In calcBlockSize, dcassert(aFileSize > 0) fires; // [+] IRainman fix. - _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on May 22, 2013 09:32:52_
To reproduce the crash, share a file with size = 0
1. The cause is in HashManager::Hasher::run()
File f(m_fname, File::READ, File::OPEN);
const int64_t bs = TigerTree::getMaxBlockSize(f.getSize());
2. We try to open the file even when its size = 0,
and after opening we call the API to determine the size, f.getSize(),
although we already obtained the file size just before:
const int64_t l_size = File::getSize(m_fname);
TODO
- If the file is empty, should we skip opening it?
- Even if we opened it, avoid calling f.getSize()?
- Step through this code under a debugger.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1043_ | priority | в calcblocksize стреляет dcassert afilesize irainman fix from on may для повторения падения нужно расшарить файл с размером причина в hashmanager hasher run file f m fname file read file open const t bs tigertree getmaxblocksize f getsize мы пытаемся открывать файл даже если размер у него при этом после открытия зовем api для определения размера f getsize хотя перед этим мы уже узнали размер файла const t l size file getsize m fname todo если файл пустой то его не нужно открывать даже если открыли не звать f getsize пройти этот кусок под отладкой original issue | 1 |
230,709 | 7,613,018,642 | IssuesEvent | 2018-05-01 19:40:18 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | Remove hello-world from extensions in samples | bug-type: unexpected behavior priority: medium problem: bug | `api/hello-world` was created to show a very basic example of an extension.
It has been added to a number of the sample page `tpl` files.
This extension is intercepting double clicks (which should trigger a zoom) and presents a dialog which is difficult to clear.
`We detected a double click, but you may have just clicked twice slowly. Proceed anyways?`
It should be removed from the main samples. We can make a special sample that uses it if we want to preserve the glory of `hello-world`. | 1.0 | Remove hello-world from extensions in samples - `api/hello-world` was created to show a very basic example of an extension.
It has been added to a number of the sample page `tpl` files.
This extension is intercepting double clicks (which should trigger a zoom) and presents a dialog which is difficult to clear.
`We detected a double click, but you may have just clicked twice slowly. Proceed anyways?`
It should be removed from the main samples. We can make a special sample that uses it if we want to preserve the glory of `hello-world`. | priority | remove hello world from extensions in samples api hello world was created to show a very basic example of an extension it has been added to a number of the sample page tpl files this extension is intercepting double clicks which should trigger a zoom and presents a dialog which is difficult to clear we detected a double click but you may have just clicked twice slowly proceed anyways it should be removed from the main samples we can make a special sample that uses it if we want to preserve the glory of hello world | 1 |
675,458 | 23,095,247,288 | IssuesEvent | 2022-07-26 18:51:41 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | yb-master fails to restart after errors during first run | kind/bug area/docdb priority/medium | Jira Link: [DB-1889](https://yugabyte.atlassian.net/browse/DB-1889)
Repro: Start yb-master with an incorrect master_addresses on first run, see it crash, fix the master_addresses and restart the master, it will not start up properly. This is similar to #4866 but a different failure point.
------------------
During yb-master startup, we create the instance file early in the fs manager init and then use the presence of this marker file to guide initialization of the sys catalog raft metadata. This is kind of broken because there are steps that can fail in between the instance file creation and the sys catalog metadata initialization.
This issue is to track the proper fix to all these kinds of issues. We need to fix the sys catalog Load/Create code to either do a LoadOrCreate or maybe have a marker to make sure it has gotten out of the "first run" phase successfully at least once.
Details
-------------------
```
sanketh@varahi:~/code/yugabyte-db$ ~/yugabyte-2.1.8.2/bin/yb-master --fs_data_dirs=/tmp/testcrash1 --rpc_bind_addresses=127.0.0.1:7100 --master_addresses=127.0.0.2:7100 --replication_factor=1
F0730 20:43:44.338073 17057 master_main.cc:120] Illegal state (yb/master/catalog_manager.cc:1273): Unable to initialize catalog manager: Failed to initialize sys tables async: None of the local addresses are present in master_addresses 127.0.0.2:7100.
Fatal failure details written to /tmp/testcrash1/yb-data/master/logs/yb-master.FATAL.details.2020-07-30T20_43_44.pid17057.txt
F20200730 20:43:44 ../../src/yb/master/master_main.cc:120] Illegal state (yb/master/catalog_manager.cc:1273): Unable to initialize catalog manager: Failed to initialize sys tables async: None of the local addresses are present in master_addresses 127.0.0.2:7100.
@ 0x7fb07a43cc0c yb::LogFatalHandlerSink::send()
@ 0x7fb079624346 google::LogMessage::SendToLog()
@ 0x7fb0796217aa google::LogMessage::Flush()
@ 0x7fb079624879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fb075283825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
*** Check failure stack trace: ***
@ 0x7fb07a43aff1 yb::(anonymous namespace)::DumpStackTraceAndExit()
@ 0x7fb079621c5d google::LogMessage::Fail()
@ 0x7fb079623dcd google::LogMessage::SendToLog()
@ 0x7fb0796217aa google::LogMessage::Flush()
@ 0x7fb079624879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fb075283825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
Aborted (core dumped)
sanketh@varahi:~/code/yugabyte-db$ ~/yugabyte-2.1.8.2/bin/yb-master --fs_data_dirs=/tmp/testcrash1 --rpc_bind_addresses=127.0.0.1:7100 --master_addresses=127.0.0.1:7100 --replication_factor=1
F0730 20:43:48.776080 17094 master_main.cc:120] Not found (yb/util/env_posix.cc:1482): Unable to initialize catalog manager: Failed to initialize sys tables async: Could not load Raft group metadata from /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: No such file or directory (system error 2)
Fatal failure details written to /tmp/testcrash1/yb-data/master/logs/yb-master.FATAL.details.2020-07-30T20_43_48.pid17094.txt
F20200730 20:43:48 ../../src/yb/master/master_main.cc:120] Not found (yb/util/env_posix.cc:1482): Unable to initialize catalog manager: Failed to initialize sys tables async: Could not load Raft group metadata from /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: No such file or directory (system error 2)
@ 0x7fc4c87f9c0c yb::LogFatalHandlerSink::send()
@ 0x7fc4c79e1346 google::LogMessage::SendToLog()
@ 0x7fc4c79de7aa google::LogMessage::Flush()
@ 0x7fc4c79e1879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fc4c3640825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
*** Check failure stack trace: ***
@ 0x7fc4c87f7ff1 yb::(anonymous namespace)::DumpStackTraceAndExit()
@ 0x7fc4c79dec5d google::LogMessage::Fail()
@ 0x7fc4c79e0dcd google::LogMessage::SendToLog()
@ 0x7fc4c79de7aa google::LogMessage::Flush()
@ 0x7fc4c79e1879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fc4c3640825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
Aborted (core dumped)
```
| 1.0 | yb-master fails to restart after errors during first run - Jira Link: [DB-1889](https://yugabyte.atlassian.net/browse/DB-1889)
Repro: Start yb-master with an incorrect master_addresses on first run, see it crash, fix the master_addresses and restart the master, it will not start up properly. This is similar to #4866 but a different failure point.
------------------
During yb-master startup, we create the instance file early in the fs manager init and then use the presence of this marker file to guide initialization of the sys catalog raft metadata. This is kind of broken because there are steps that can fail in between the instance file creation and the sys catalog metadata initialization.
This issue is to track the proper fix to all these kinds of issues. We need to fix the sys catalog Load/Create code to either do a LoadOrCreate or maybe have a marker to make sure it has gotten out of the "first run" phase successfully at least once.
Details
-------------------
```
sanketh@varahi:~/code/yugabyte-db$ ~/yugabyte-2.1.8.2/bin/yb-master --fs_data_dirs=/tmp/testcrash1 --rpc_bind_addresses=127.0.0.1:7100 --master_addresses=127.0.0.2:7100 --replication_factor=1
F0730 20:43:44.338073 17057 master_main.cc:120] Illegal state (yb/master/catalog_manager.cc:1273): Unable to initialize catalog manager: Failed to initialize sys tables async: None of the local addresses are present in master_addresses 127.0.0.2:7100.
Fatal failure details written to /tmp/testcrash1/yb-data/master/logs/yb-master.FATAL.details.2020-07-30T20_43_44.pid17057.txt
F20200730 20:43:44 ../../src/yb/master/master_main.cc:120] Illegal state (yb/master/catalog_manager.cc:1273): Unable to initialize catalog manager: Failed to initialize sys tables async: None of the local addresses are present in master_addresses 127.0.0.2:7100.
@ 0x7fb07a43cc0c yb::LogFatalHandlerSink::send()
@ 0x7fb079624346 google::LogMessage::SendToLog()
@ 0x7fb0796217aa google::LogMessage::Flush()
@ 0x7fb079624879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fb075283825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
*** Check failure stack trace: ***
@ 0x7fb07a43aff1 yb::(anonymous namespace)::DumpStackTraceAndExit()
@ 0x7fb079621c5d google::LogMessage::Fail()
@ 0x7fb079623dcd google::LogMessage::SendToLog()
@ 0x7fb0796217aa google::LogMessage::Flush()
@ 0x7fb079624879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fb075283825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
Aborted (core dumped)
sanketh@varahi:~/code/yugabyte-db$ ~/yugabyte-2.1.8.2/bin/yb-master --fs_data_dirs=/tmp/testcrash1 --rpc_bind_addresses=127.0.0.1:7100 --master_addresses=127.0.0.1:7100 --replication_factor=1
F0730 20:43:48.776080 17094 master_main.cc:120] Not found (yb/util/env_posix.cc:1482): Unable to initialize catalog manager: Failed to initialize sys tables async: Could not load Raft group metadata from /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: No such file or directory (system error 2)
Fatal failure details written to /tmp/testcrash1/yb-data/master/logs/yb-master.FATAL.details.2020-07-30T20_43_48.pid17094.txt
F20200730 20:43:48 ../../src/yb/master/master_main.cc:120] Not found (yb/util/env_posix.cc:1482): Unable to initialize catalog manager: Failed to initialize sys tables async: Could not load Raft group metadata from /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: /tmp/testcrash1/yb-data/master/tablet-meta/00000000000000000000000000000000: No such file or directory (system error 2)
@ 0x7fc4c87f9c0c yb::LogFatalHandlerSink::send()
@ 0x7fc4c79e1346 google::LogMessage::SendToLog()
@ 0x7fc4c79de7aa google::LogMessage::Flush()
@ 0x7fc4c79e1879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fc4c3640825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
*** Check failure stack trace: ***
@ 0x7fc4c87f7ff1 yb::(anonymous namespace)::DumpStackTraceAndExit()
@ 0x7fc4c79dec5d google::LogMessage::Fail()
@ 0x7fc4c79e0dcd google::LogMessage::SendToLog()
@ 0x7fc4c79de7aa google::LogMessage::Flush()
@ 0x7fc4c79e1879 google::LogMessageFatal::~LogMessageFatal()
@ 0x408fb4 yb::master::MasterMain()
@ 0x7fc4c3640825 __libc_start_main
@ 0x408369 _start
@ (nil) (unknown)
Aborted (core dumped)
```
| priority | yb master fails to restart after errors during first run jira link repro start yb master with an incorrect master addresses on first run see it crash fix the master addresses and restart the master it will not start up properly this is similar to but a different failure point during yb master startup we create the instance file early in the fs manager init and then use the presence of this marker file to guide initialization of the sys catalog raft metadata this is kind of broken because there are steps that can fail in between the instance file creation and the sys catalog metadata initialization this issue is to track the proper fix to all these kinds of issues we need to fix the sys catalog load create code to either do a loadorcreate or maybe have a marker to make sure it has gotten out of the first run phase successfully at least once details sanketh varahi code yugabyte db yugabyte bin yb master fs data dirs tmp rpc bind addresses master addresses replication factor master main cc illegal state yb master catalog manager cc unable to initialize catalog manager failed to initialize sys tables async none of the local addresses are present in master addresses fatal failure details written to tmp yb data master logs yb master fatal details txt src yb master master main cc illegal state yb master catalog manager cc unable to initialize catalog manager failed to initialize sys tables async none of the local addresses are present in master addresses yb logfatalhandlersink send google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master mastermain libc start main start nil unknown check failure stack trace yb anonymous namespace dumpstacktraceandexit google logmessage fail google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master mastermain libc start main start nil unknown aborted core dumped sanketh varahi code yugabyte db yugabyte bin yb master fs data dirs tmp rpc bind 
addresses master addresses replication factor master main cc not found yb util env posix cc unable to initialize catalog manager failed to initialize sys tables async could not load raft group metadata from tmp yb data master tablet meta tmp yb data master tablet meta no such file or directory system error fatal failure details written to tmp yb data master logs yb master fatal details txt src yb master master main cc not found yb util env posix cc unable to initialize catalog manager failed to initialize sys tables async could not load raft group metadata from tmp yb data master tablet meta tmp yb data master tablet meta no such file or directory system error yb logfatalhandlersink send google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master mastermain libc start main start nil unknown check failure stack trace yb anonymous namespace dumpstacktraceandexit google logmessage fail google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master mastermain libc start main start nil unknown aborted core dumped | 1 |
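The issue above describes yb-master failing permanently after an error on first run, because a marker file is created before the sys catalog metadata it is supposed to guard. The "load-or-create" fix the issue suggests can be sketched in Python — the function and file names here are ours for illustration, not YugabyteDB's:

```python
import os

def load_or_create(meta_path: str, create) -> str:
    """Load metadata if it exists; otherwise (re)create it atomically.

    Because the file is only published via an atomic rename after
    create() succeeds, a crash between the two steps leaves no
    half-initialized marker, and the next start simply retries.
    """
    if os.path.exists(meta_path):
        with open(meta_path) as f:
            return f.read()
    data = create()
    tmp = meta_path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
    os.replace(tmp, meta_path)  # atomic publish: all-or-nothing
    return data
```

On a retried startup the creation step runs again instead of tripping over a marker left by a failed first run.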
72,590 | 3,388,399,005 | IssuesEvent | 2015-11-29 08:19:43 | crutchcorn/stagger | https://api.github.com/repos/crutchcorn/stagger | closed | Add a function for deleting tags | enhancement Priority Medium | ```
>>> stagger.delete("test.mp3")
>>> stagger.read("test.mp3")
=> NoTagError
```
Original issue reported on code.google.com by `Karoly.Lorentey` on 13 Jun 2009 at 5:50 | 1.0 | Add a function for deleting tags - ```
>>> stagger.delete("test.mp3")
>>> stagger.read("test.mp3")
=> NoTagError
```
Original issue reported on code.google.com by `Karoly.Lorentey` on 13 Jun 2009 at 5:50 | priority | add a function for deleting tags stagger delete test stagger read test notagerror original issue reported on code google com by karoly lorentey on jun at | 1 |
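The snippet in the row above sketches the requested `stagger.delete()` API; stagger's real implementation lives in the library. As a rough illustration of what deleting a tag involves at the byte level, a sketch that strips a leading ID3v2 block — the helper name and its simplifications (no footer or appended-tag handling) are ours:

```python
def strip_id3v2(data: bytes) -> bytes:
    """Remove a leading ID3v2 tag block, if present (simplified sketch)."""
    if len(data) < 10 or data[:3] != b"ID3":
        return data  # no tag to delete
    # Bytes 6-9 hold the tag size as a 28-bit synchsafe integer
    # (7 bits per byte, high bit always zero), excluding the 10-byte header.
    size = 0
    for b in data[6:10]:
        size = (size << 7) | (b & 0x7F)
    return data[10 + size:]
```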
125,599 | 4,958,505,497 | IssuesEvent | 2016-12-02 10:00:42 | GeographicaGS/gipc | https://api.github.com/repos/GeographicaGS/gipc | closed | New storymap - the map on the record page is not shown | bug priority:medium | I have created a new storymap for paisaje_id = '42'. I don't know if there is something I'm not setting correctly, because the map is not shown.
http://landscapes.geographica.gs/storymap/42
Map in Carto (I include the Carto url_visualización in the 'paisajes_modos_narrativos' table):
https://gipc-admin.carto.com/viz/833f2f57-128d-441b-a163-bce3cb71f6fe/public_map
| 1.0 | New storymap - the map on the record page is not shown - I have created a new storymap for paisaje_id = '42'. I don't know if there is something I'm not setting correctly, because the map is not shown.
http://landscapes.geographica.gs/storymap/42
Map in Carto (I include the Carto url_visualización in the 'paisajes_modos_narrativos' table):
https://gipc-admin.carto.com/viz/833f2f57-128d-441b-a163-bce3cb71f6fe/public_map
| priority | nuevo storymap no se ve el mapa de la ficha he creado un nuevo storymap para el paisaje id no sé si hay algo que no estoy poniendo bien porque no se ve el mapa mapa en carto yo incluyo la url visualización de carto en la tabla paisajes modos narrativos | 1 |
158,806 | 6,035,689,084 | IssuesEvent | 2017-06-09 14:27:54 | brandon1024/find | https://api.github.com/repos/brandon1024/find | closed | Selected Text Auto Search | feature medium priority | Beta Feature: When something is highlighted in the web page, and the extension is opened, whatever is highlighted in the web page should be automatically entered in the search field. This is common to IDEs. | 1.0 | Selected Text Auto Search - Beta Feature: When something is highlighted in the web page, and the extension is opened, whatever is highlighted in the web page should be automatically entered in the search field. This is common to IDEs. | priority | selected text auto search beta feature when something is highlighted in the web page and the extension is opened whatever is highlighted in the web page should be automatically entered in the search field this is common to ides | 1 |
479,047 | 13,790,434,235 | IssuesEvent | 2020-10-09 10:25:13 | madmachineio/MadMachineIDE | https://api.github.com/repos/madmachineio/MadMachineIDE | closed | Search function in the editor overlap with code content | bug priority: medium | **Describe the bug**
Search function in the editor overlaps with code content
**To Reproduce**
Steps to reproduce the behavior:
1. Open any project in MadMachine IDE
2. Open any file in the editor
3. Press `command + f`
4. See error
**Expected behavior**
The content should avoid being overlapped with the search content
| 1.0 | Search function in the editor overlap with code content - **Describe the bug**
Search function in the editor overlaps with code content
**To Reproduce**
Steps to reproduce the behavior:
1. Open any project in MadMachine IDE
2. Open any file in the editor
3. Press `command + f`
4. See error
**Expected behavior**
The content should avoid being overlapped with the search content
| priority | search function in the editor overlap with code content describe the bug search function in the editor overlap with code content to reproduce steps to reproduce the behavior open any project in madmachine ide open any file in the editor press command f see error expected behavior the content sould avoid to be overlaped with search content | 1 |
323,949 | 9,881,138,401 | IssuesEvent | 2019-06-24 14:07:11 | georchestra/georchestra | https://api.github.com/repos/georchestra/georchestra | closed | Drop epsg_extension in favor of vanilla geoserver user projections support? | priority-medium | It seems that the `epsg-extension` module was added back in [May 2011](https://github.com/georchestra/georchestra/blame/18.06/epsg-extension/src/main/java/org/geotools/referencing/factory/epsg/CustomCodes.java) right before GeoServer added support for custom CRS definitions in `<data dir>/user_projections/epsg.properties` in [June](https://github.com/geoserver/geoserver/blame/44bacafcf1352bfe0b1836262df686fd58036585/src/main/src/main/java/org/vfny/geoserver/crs/GeoserverCustomWKTFactory.java) of the same year.
Neither have changed over the years, so I wonder if we shouldn't just get rid of the custom `epsg_extension` module in favor of vanilla geoserver support for the same functionality.
Only thing to consider would be upgrading the config of anyone using the `-DCUSTOM_EPSG_FILE` system property by `-Duser.projections.file` or just saving the custom projections in `<data dir>/user_projections/epsg.properties`.
@fvanderbiest @pmauduit comments?
| 1.0 | Drop epsg_extension in favor of vanilla geoserver user projections support? - It seems that the `epsg-extension` module was added back in [May 2011](https://github.com/georchestra/georchestra/blame/18.06/epsg-extension/src/main/java/org/geotools/referencing/factory/epsg/CustomCodes.java) right before GeoServer added support for custom CRS definitions in `<data dir>/user_projections/epsg.properties` in [June](https://github.com/geoserver/geoserver/blame/44bacafcf1352bfe0b1836262df686fd58036585/src/main/src/main/java/org/vfny/geoserver/crs/GeoserverCustomWKTFactory.java) of the same year.
Neither have changed over the years, so I wonder if we shouldn't just get rid of the custom `epsg_extension` module in favor of vanilla geoserver support for the same functionality.
Only thing to consider would be upgrading the config of anyone using the `-DCUSTOM_EPSG_FILE` system property by `-Duser.projections.file` or just saving the custom projections in `<data dir>/user_projections/epsg.properties`.
@fvanderbiest @pmauduit comments?
| priority | drop epsg extension in favor of vanilla geoserver user projections support it seems that the epsg extension module was added back in right before geoserver added support for custom crs definitions in user projections epsg properties in of the same year neither have changed over the years so i wonder if we shouldn t just get rid of the custom epsg extension module in favor of vanilla geoserver support for the same functionality only thing to consider would be upgrading the config of anyone using the dcustom epsg file system property by duser projections file or just saving the custom projections in user projections epsg properties fvanderbiest pmauduit comments | 1 |
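For reference, the vanilla GeoServer mechanism mentioned in the row above reads custom definitions from `<data dir>/user_projections/epsg.properties`, one `code=WKT` pair per line. An illustrative entry — the code and WKT below are an example, not a georchestra default:

```properties
# <data dir>/user_projections/epsg.properties
# format: <numeric code>=<OGC WKT on one line>
42101=PROJCS["Custom LCC", GEOGCS["WGS 84", DATUM["WGS_1984", SPHEROID["WGS 84", 6378137, 298.257223563]], PRIMEM["Greenwich", 0.0], UNIT["degree", 0.017453292519943295]], PROJECTION["Lambert_Conformal_Conic_1SP"], PARAMETER["central_meridian", 0.0], PARAMETER["latitude_of_origin", 0.0], PARAMETER["scale_factor", 1.0], UNIT["m", 1.0]]
```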
562,472 | 16,661,662,990 | IssuesEvent | 2021-06-06 12:41:35 | kaushiksk/portfolio-server-api | https://api.github.com/repos/kaushiksk/portfolio-server-api | closed | Migrate to FastAPI | enhancement investigation medium-priority | I am completely sold on FastAPI. It seems like a sleeker replacement for Flask and comes with swagger docs inbuilt. It also provides inbuilt json validation of input through pydantic.
The following changes will be needed:
- No more dependency on Flask-PyMongo - we'll need to use pymongo directly
- Cannot run flask commands anymore - we will replace command scripts with a manage.py file
- Test drivers need to be updated to use the FastAPI test client
- Routes changed to use FastAPI | 1.0 | Migrate to FastAPI - I am completely sold on FastAPI. It seems like a sleeker replacement for Flask and comes with swagger docs inbuilt. It also provides inbuilt json validation of input through pydantic.
The following changes will be needed:
- No more dependency on Flask-PyMongo - we'll need to use pymongo directly
- Cannot run flask commands anymore - we will replace command scripts with a manage.py file
- Test drivers need to be updated to use the FastAPI test client
- Routes changed to use FastAPI | priority | migrate to fastapi i am completely sold on fastapi it seems like a sleeker replacement for flask and comes with swagger docs inbuilt it also provides inbuilt json validation of input through pydactic following changed will be needed no more dependency on flask pymongo we ll need to use pymongo directly cannot run flask commands anymore we will replace command scripts with a manage py file tests drivers need to be updated to use fastapi test client routes changed to use fastapi | 1 |
769,151 | 26,994,649,131 | IssuesEvent | 2023-02-09 23:18:09 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [engine] Setting CORS configurations for accessControlAllowHeaders does not appear to have an effect | bug priority: medium CI can't reproduce | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
_No response_
### Steps to reproduce
Steps:
1. Create a site
2. Configure CORS headers as follows
```
<cors>
<enable>true</enable>
<accessControlMaxAge>3600</accessControlMaxAge>
<accessControlAllowOrigin>*</accessControlAllowOrigin>
<accessControlAllowMethods>*</accessControlAllowMethods>
<accessControlAllowHeaders>X-Custom-Header, Content-Type</accessControlAllowHeaders>
<accessControlAllowCredentials>true</accessControlAllowCredentials>
</cors>
```
3. Attempt to set X-Custom-Heder or Content-Type and note that they are not sent.
### Relevant log output
_No response_
### Screenshots and/or videos
_No response_ | 1.0 | [engine] Setting CORS configurations for accessControlAllowHeaders does not appear to have an effect - ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] The issue is in the latest released 4.0.x
- [ ] The issue is in the latest released 3.1.x
### Describe the issue
_No response_
### Steps to reproduce
Steps:
1. Create a site
2. Configure CORS headers as follows
```
<cors>
<enable>true</enable>
<accessControlMaxAge>3600</accessControlMaxAge>
<accessControlAllowOrigin>*</accessControlAllowOrigin>
<accessControlAllowMethods>*</accessControlAllowMethods>
<accessControlAllowHeaders>X-Custom-Header, Content-Type</accessControlAllowHeaders>
<accessControlAllowCredentials>true</accessControlAllowCredentials>
</cors>
```
3. Attempt to set X-Custom-Heder or Content-Type and note that they are not sent.
### Relevant log output
_No response_
### Screenshots and/or videos
_No response_ | priority | setting cors configurations for accesscontrolallowheaders does not appear to have an effect duplicates i have searched the existing issues latest version the issue is in the latest released x the issue is in the latest released x describe the issue no response steps to reproduce steps create a site configure cors headers as follows true x custom header content type true attempt to set x custom heder or content type and note that they are not sent relevant log output no response screenshots and or videos no response | 1 |
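The `<cors>` options in the row above map one-to-one onto standard CORS response headers; a small sketch of that mapping — the config keys are taken from the issue, the function itself is ours:

```python
def cors_headers(cfg: dict) -> dict:
    """Translate the <cors> config keys into the HTTP headers a
    compliant server would send on a preflight response."""
    mapping = {
        "accessControlAllowOrigin": "Access-Control-Allow-Origin",
        "accessControlAllowMethods": "Access-Control-Allow-Methods",
        "accessControlAllowHeaders": "Access-Control-Allow-Headers",
        "accessControlMaxAge": "Access-Control-Max-Age",
        "accessControlAllowCredentials": "Access-Control-Allow-Credentials",
    }
    # Only emit headers for keys that were actually configured.
    return {header: str(cfg[key]) for key, header in mapping.items() if key in cfg}
```

If `Access-Control-Allow-Headers` never appears in the response, the browser will refuse to send `X-Custom-Header` or `Content-Type`, matching the reported symptom.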
405,624 | 11,879,905,326 | IssuesEvent | 2020-03-27 09:41:42 | input-output-hk/jormungandr | https://api.github.com/repos/input-output-hk/jormungandr | closed | When starting gossiping, network does not check for already connected node | Priority - Medium bug jörmungandr subsys-network | Indeed, we call `topology.view(Any)` and quickly proceed to `connect_and_propagate` with the node without checking whether the node is already connected to us:
https://github.com/input-output-hk/jormungandr/blob/19f8eb49e15165e4a50a248e6a19221376f07cf6/jormungandr/src/network/mod.rs#L404-L414
---
Instead of doing the pre-filtering of the already connected node, it might be easier to do the already-connected check in the `connect_and_propagate` function. | 1.0 | When starting gossiping, network does not check for already connected node - Indeed, we call `topology.view(Any)` and quickly proceed to `connect_and_propagate` with the node without checking whether the node is already connected to us:
https://github.com/input-output-hk/jormungandr/blob/19f8eb49e15165e4a50a248e6a19221376f07cf6/jormungandr/src/network/mod.rs#L404-L414
---
instead of doing the pre-filtering of the already connected node it might be easier to do the already connected check in the `connect_and_propagate` function. | priority | when starting gossiping network does not check for already connected node indeed we call topology view any and quickly we already to connect and propagate with the node without checking if the node is already connected to us or not instead of doing the pre filtering of the already connected node it might be easier to do the already connected check in the connect and propagate function | 1 |
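A minimal sketch of the pre-filtering the issue above proposes — done either before the loop or inside `connect_and_propagate` itself; the names below are illustrative, not Jörmungandr's actual types:

```python
def nodes_to_connect(view, connected_peers):
    """Drop peers we already hold a connection to before gossiping."""
    return [node for node in view if node not in connected_peers]
```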
54,745 | 3,071,170,783 | IssuesEvent | 2015-08-19 10:16:00 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Preserve search query history across program restarts | enhancement imported Priority-Medium | _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on April 28, 2010 21:43:16_
Fly should be changed so that the search query history is preserved when exiting
the program.
Every day I search for the same TV series and have to come up with the search
string all over again.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=84_ | 1.0 | Preserve search query history across program restarts - _From [a.rain...@gmail.com](https://code.google.com/u/117892482479228821242/) on April 28, 2010 21:43:16_
Fly should be changed so that the search query history is preserved when exiting
the program.
Every day I search for the same TV series and have to come up with the search
string all over again.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=84_ | priority | сохранение истории поисковых запросов при перезапуске программы from on april во флае надо сделать чтобы история поисковых запросов сохранялась при выходе из программы вот я каждый день ищу один и тот же сериал и надо заново придумывать строку поиска original issue | 1 |
22,158 | 2,645,695,151 | IssuesEvent | 2015-03-13 01:11:52 | prikhi/evoluspencil | https://api.github.com/repos/prikhi/evoluspencil | opened | bitmap image resizing problem | 2–5 stars bug imported Priority-Medium | _From [moonsha...@gmail.com](https://code.google.com/u/109720614888486048684/) on September 09, 2008 00:51:05_
What steps will reproduce the problem?
1. create new project
2. drag the "bitmap image" from common shapes onto the working panel
3. resize the bitmap image's upper right anchor / lower left anchor
What is the expected output? What do you see instead?
Resizing according to the movement of the mouse; instead reverse resizing occurs.
What version of the product are you using? On what operating system?
FF addon V1.0.2, OS XP SP3, Browser FF 3.0.1
Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=50_ | 1.0 | bitmap image resizing problem - _From [moonsha...@gmail.com](https://code.google.com/u/109720614888486048684/) on September 09, 2008 00:51:05_
What steps will reproduce the problem?
1. create new project
2. drag the "bitmap image" from common shapes onto the working panel
3. resize the bitmap image's upper right anchor / lower left anchor
What is the expected output? What do you see instead?
Resizing according to the movement of the mouse; instead reverse resizing occurs.
What version of the product are you using? On what operating system?
FF addon V1.0.2, OS XP SP3, Browser FF 3.0.1
Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=50_ | priority | bitmap image resizing problem from on september what steps will reproduce the problem create new project drag the bitmap image from common shapes onto the working panel resize the bitmap image s upper right anchor lower left anchor what is the expected output what do you see instead resizing according to the movement of the mouse instead reverse resizing occurs what version of the product are you using on what operating system ff addon os xp browser ff please provide any additional information below original issue | 1 |
223,409 | 7,453,540,191 | IssuesEvent | 2018-03-29 12:22:01 | huridocs/uwazi | https://api.github.com/repos/huridocs/uwazi | closed | URLs with parentheses break markdown links | Bug Priority: Medium Status: Sprint | **How to reproduce it**
Add a filter URL to a link, for example http://localhost:3000/en/library/?q=(order:desc,sort:metadata.date)
`[This will break, closing on the first parenthesis](http://localhost:3000/en/library/?q=(order:desc,sort:metadata.date))`
| 1.0 | URLs with parentheses break markdown links - **How to reproduce it**
Add a filter URL to a link, for example http://localhost:3000/en/library/?q=(order:desc,sort:metadata.date)
`[This will break, closing on the first parenthesis](http://localhost:3000/en/library/?q=(order:desc,sort:metadata.date))`
| priority | urls with parentesis break markdown links how to reproduce it add a filter url to a link for example | 1 |
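One common workaround for the bug in the row above is to percent-encode parentheses before the URL is handed to the markdown renderer; a sketch (the function name is ours):

```python
def markdown_safe_url(url: str) -> str:
    """Percent-encode the characters that terminate a markdown
    link target early: ')' (and '(' for symmetry)."""
    return url.replace("(", "%28").replace(")", "%29")
```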
206,300 | 7,111,368,152 | IssuesEvent | 2018-01-17 14:01:48 | hpi-swt2/sport-portal | https://api.github.com/repos/hpi-swt2/sport-portal | closed | Tournament overview table | po-review priority medium team swteam user story | **As a** tournament participant
**I want to** have a page that shows the progress/history of the tournament
**In order to** see how far each team came / its status
**Acceptance Criteria:**
- [x] Table with columns: Team-Name | Platzierung |
- [x] Platzierung should show how far the players is/got in the Tournament (e.g. "ist momentan im Viertelfinale", "ist im Viertelfinale ausgeschieden", "Erster/Zweiter/Dritter/Vierter Platz")
- [x] the table should be in sync with the progression of the Tournament, and be found / linked on the event page (compare to #31, #32, synchronize with Team issue number 5)
prototypical Table:
|Team-name|Platzierung|
|-------|------------|
|Team1 |im Viertelfinale ausgeschieden|
|Team2 |im Finale (gegen bla, falls bekannt)|
|Team3|im Halbfinale ausgeschieden|
|Team4|im Viertelfinale ausgeschieden|
|Team5|im Viertelfinale ausgeschieden|
|Team6|im Halbfinale|
|Team7|im Viertelfinale ausgeschieden|
|Team8|im Halbfinale| | 1.0 | Tournament overview table - **As a** tournament participant
**I want to** have a page that shows the progress/history of the tournament
**In order to** see how far each team came / its status
**Acceptance Criteria:**
- [x] Table with columns: Team-Name | Platzierung |
- [x] Platzierung should show how far the players is/got in the Tournament (e.g. "ist momentan im Viertelfinale", "ist im Viertelfinale ausgeschieden", "Erster/Zweiter/Dritter/Vierter Platz")
- [x] the table should be in sync with the progression of the Tournament, and be found / linked on the event page (compare to #31, #32, synchronize with Team issue number 5)
prototypical Table:
|Team-name|Platzierung|
|-------|------------|
|Team1 |im Viertelfinale ausgeschieden|
|Team2 |im Finale (gegen bla, falls bekannt)|
|Team3|im Halbfinale ausgeschieden|
|Team4|im Viertelfinale ausgeschieden|
|Team5|im Viertelfinale ausgeschieden|
|Team6|im Halbfinale|
|Team7|im Viertelfinale ausgeschieden|
|Team8|im Halbfinale| | priority | tournament overview table as a tournament participant i want to have a page that shows the progress history of the tournament in order to see how far each team came its status acceptance criteria table with columns team name platzierung platzierung should show how far the players is got in the tournament e g ist momentan im viertelfinale ist im viertelfinale ausgeschieden erster zweiter dritter vierter platz the table should be in sync with the progression of the tournament and be found linked on the event page compare to synchronize with team issue number prototypical table team name platzierung im viertelfinale ausgeschieden im finale gegen bla falls bekannt im halbfinale ausgeschieden im viertelfinale ausgeschieden im viertelfinale ausgeschieden im halbfinale im viertelfinale ausgeschieden im halbfinale | 1 |
296,057 | 9,103,890,561 | IssuesEvent | 2019-02-20 16:50:55 | salesagility/SuiteCRM | https://api.github.com/repos/salesagility/SuiteCRM | closed | 7.11.1: Newer version of PHPMailer is not compatible with Email:email2Send method | Emails Fix Proposed Medium Priority Resolved: Next Release bug | SuiteCRM 7.11 comes with a newer version of PHPMailer. The **addAttachment** method in the PHPMailer object got a new check, **isPermittedPath**, which blocks all paths containing ://
Email:email2Send for $request['documents'] and $request['templateAttachments'] uses the paths:
`$fileLocation = "upload://{$GUID}";`
which will be blocked by isPermittedPath.
`SugarPHPMailer encountered an error: Could not access file: upload://c2a5ebd1-5a6d-0f2f-eafe-5c6286a20e74`
#### Your Environment
* SuiteCRM Version used: 7.11.1
* Environment name and version (e.g. MySQL, PHP 7): PHP 7.1
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 16.04.4 LTS
| 1.0 | 7.11.1: Newer version of PHPMailer is not compatible with Email:email2Send method - SuiteCRM 7.11 comes with a newer version of PHPMailer. The **addAttachment** method in the PHPMailer object got a new check, **isPermittedPath**, which blocks all paths containing ://
Email:email2Send for $request['documents'] and $request['templateAttachments'] uses the paths:
`$fileLocation = "upload://{$GUID}";`
which will be blocked by isPermittedPath.
`SugarPHPMailer encountered an error: Could not access file: upload://c2a5ebd1-5a6d-0f2f-eafe-5c6286a20e74`
#### Your Environment
* SuiteCRM Version used: 7.11.1
* Environment name and version (e.g. MySQL, PHP 7): PHP 7.1
* Operating System and version (e.g Ubuntu 16.04): Ubuntu 16.04.4 LTS
| priority | newer version of phpmailer is not compatible with email method suitecrm comes with newer version of phpmailer method addattachment in phpmailer object got new condition ispermittedpath which blocks all pathes with email for request and request uses pathes filelocation upload guid which will be blocked by ispermittedpath sugarphpmailer encountered an error could not access file upload eafe your environment suitecrm version used environment name and version e g mysql php php operating system and version e g ubuntu ubuntu lts | 1 |
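For context on the row above, the `isPermittedPath` guard added to PHPMailer rejects anything that looks like a URL-style stream wrapper, which is why an `upload://` path is refused. A Python approximation of the check — the regular expression mirrors the idea, not necessarily PHPMailer's exact pattern:

```python
import re

def is_permitted_path(path: str) -> bool:
    """Reject stream-wrapper style paths such as 'upload://...' or
    'phar://...'; plain filesystem paths pass."""
    return re.match(r"^[a-z][a-z\d+.-]*://", path, re.IGNORECASE) is None
```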
324,863 | 9,913,629,998 | IssuesEvent | 2019-06-28 12:23:47 | kirbydesign/designsystem | https://api.github.com/repos/kirbydesign/designsystem | closed | [Enhancement] Align on colors and naming + add 1 grey style | effort: hours enhancement medium priority | **Is your enhancement request related to a problem? Please describe.**
UX and Kirby need some alignment on colors and naming, if it is up to date! Also a semi-light has been added, which is used for the disabled style.
- [x] Color:
- [x] Add Semi-Light + Semi-Dark
- [x] Rename:
- [x] base-color => Background-color
- [x] contrast-light => White (hviiiii)
- [x] contrast-dark => Black (sååååårt)
- [x] Update hex codes | 1.0 | [Enhancement] Align on colors and naming + add 1 grey style - **Is your enhancement request related to a problem? Please describe.**
UX and Kirby need some alignment on colors and naming, if it is up to date! Also a semi-light has been added, which is used for the disabled style.
- [x] Color:
- [x] Add Semi-Light + Semi-Dark
- [x] Rename:
- [x] base-color => Background-color
- [x] contrast-light => White (hviiiii)
- [x] contrast-dark => Black (sååååårt)
- [x] Update hex codes | priority | align on colors and naming add grey style is your enhancement request related to a problem please describe ux and kirby needs some alignment on colors and naming if its also there has been added a semi light which is used for disabled style color add semi light semi dark rename base color background color contrast light white hviiiii contrast dark black sååååårt update hex codes | 1 |
734,331 | 25,344,829,342 | IssuesEvent | 2022-11-19 04:26:12 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Disallow packed row for co-located tables | kind/bug area/docdb priority/medium 2.16 Backport Required | Jira Link: [DB-4002](https://yugabyte.atlassian.net/browse/DB-4002)
### Description
Disallow packed row for co-located tables, while we identify and fix the backup-restore and xCluster integration with the packed row feature so that it is stable. | 1.0 | [DocDB] Disallow packed row for co-located tables - Jira Link: [DB-4002](https://yugabyte.atlassian.net/browse/DB-4002)
### Description
Disallow packed row for co-located tables, while we identify and fix the backup-restore and xCluster integration with the packed row feature so that it is stable. | priority | disallow packed row for co located tables jira link description disallow packed row for co located tables while we identify and fix the backup restore and xcluster integration with packed row feature to be stable | 1 |
309,208 | 9,462,992,651 | IssuesEvent | 2019-04-17 16:39:03 | smacademic/project-bdf | https://api.github.com/repos/smacademic/project-bdf | opened | Create connection to twitter | priority: medium type: missing | **Is your feature request related to a problem? Please describe.**
As part of our twitter extension, a connection to twitter needs to be set up as part of our bot.
**Describe the solution you'd like**
Create a connection to twitter using `tweepy`, a library for creating twitter connections in a manner similar to how `praw` creates connections to reddit.
**Describe alternatives you've considered**
Searching google for tweets. Direct connection cuts out the 'middle-man'.
| 1.0 | Create connection to twitter - **Is your feature request related to a problem? Please describe.**
As part of our twitter extension, a connection to twitter needs to be set up as part of our bot.
**Describe the solution you'd like**
Create a connection to twitter using `tweepy`, a library for creating twitter connections in a manner similar to how `praw` creates connections to reddit.
**Describe alternatives you've considered**
Searching google for tweets. Direct connection cuts out the 'middle-man'.
| priority | create connecction to twitter is your feature request related to a problem please describe as a part of our twitter extension a connection to twitter needs to be set up as a part of our bot describe the solution you d like create a connection to twitter using tweepy a library used for creating twitter connections in a similar manner that praw uses to create connections to reddit describe alternatives you ve considered searching google for tweets direct connection cuts out the middle man | 1 |
55,396 | 3,073,083,972 | IssuesEvent | 2015-08-19 20:09:00 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | solo.clickLongOnScreen cannot work | bug imported Priority-Medium | _From [onlyfors...@gmail.com](https://code.google.com/u/101589604104493003872/) on October 18, 2012 19:18:26_
What steps will reproduce the problem?
1. Add solo.clickLongOnScreen into test operation
What is the expected output? What do you see instead?
The item list should be displayed for long clicking on screen. But actually no list is displayed.
What version of the product are you using? On what operating system?
robotium-solo-3.5.jar on linux operating system
Please provide any additional information below.
This function worked well on robotium-solo-3.4.1.jar and before.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=343_ | 1.0 | solo.clickLongOnScreen cannot work - _From [onlyfors...@gmail.com](https://code.google.com/u/101589604104493003872/) on October 18, 2012 19:18:26_
What steps will reproduce the problem? 1.Add solo.clickLongOnScreen into test operation What is the expected output? What do you see instead? The item list should be displayed for long clicking on screen. But actually no list is displayed. What version of the product are you using? On what operating system? robotium-solo-3.5.jar on linux operating system Please provide any additional information below. This function work well on robotium-solo-3.4.1.jar and before.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=343_ | priority | solo clicklongonscreen cannot work from on october what steps will reproduce the problem add solo clicklongonscreen into test operation what is the expected output what do you see instead the item list should be displayed for long clicking on screen but actually no list is displayed what version of the product are you using on what operating system robotium solo jar on linux operating system please provide any additional information below this function work well on robotium solo jar and before original issue | 1 |
525,891 | 15,268,085,318 | IssuesEvent | 2021-02-22 10:55:27 | truecharts/truecharts | https://api.github.com/repos/truecharts/truecharts | closed | [traefik] acmeDNS CertManager generates error | Priority/Medium bug | When attempting to create the `traefik` app using `acmeDNS` CertManager I got the following error.
<details>
<summary>Installing</summary>
`Error: [EFAULT] Failed to install catalog item: b'W0221 05:24:04.930155 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:04.964322 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:04.988432 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.014405 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.031023 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.052552 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.071766 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.104369 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.119070 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.124455 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, 
unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.129408 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.134605 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.139315 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.144124 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.148737 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.153569 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nError: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Secret.stringData.acmedns-json\n'`
</details>
`*redacted*` is being used to maintain security. All settings are default with the exception of the following.
Email Address: `*redacted*@*redacted*.com`
Wildcard Domain: `*redacted*.*redacted*.com`
CertManager Provider: `acmeDNS`
host: `auth.acme-dns.io`
acmednsjson: `{ "allowfrom": [], "fulldomain": "*redacted*.auth.acme-dns.io", "password": "*redacted*", "subdomain": "*redacted*", "username": "*redacted*" }`
_Originally posted by @whiskerz007 in https://github.com/truecharts/truecharts/issues/150#issuecomment-782859285_ | 1.0 | [traefik] acmeDNS CertManager generates error - When attempting to create the `traefik` app using `acmeDNS` CertManager I got the following error.
<details>
<summary>Installing</summary>
`Error: [EFAULT] Failed to install catalog item: b'W0221 05:24:04.930155 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:04.964322 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:04.988432 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.014405 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.031023 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.052552 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.071766 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:05.104369 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.119070 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.124455 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, 
unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.129408 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.134605 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.139315 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.144124 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.148737 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0221 05:24:07.153569 3274797 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nError: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Secret.stringData.acmedns-json\n'`
</details>
`*redacted*` is being used to maintain security. All settings are default with the exception of the following.
Email Address: `*redacted*@*redacted*.com`
Wildcard Domain: `*redacted*.*redacted*.com`
CertManager Provider: `acmeDNS`
host: `auth.acme-dns.io`
acmednsjson: `{ "allowfrom": [], "fulldomain": "*redacted*.auth.acme-dns.io", "password": "*redacted*", "subdomain": "*redacted*", "username": "*redacted*" }`
_Originally posted by @whiskerz007 in https://github.com/truecharts/truecharts/issues/150#issuecomment-782859285_ | priority | acmedns certmanager generates error when attempting to create the traefik app using acmedns certmanager i got the following error installing error failed to install catalog item b warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io 
customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition warnings go apiextensions io customresourcedefinition is deprecated in unavailable in use apiextensions io customresourcedefinition nerror unable to build kubernetes objects from release manifest error validating error validating data unknown object type nil in secret stringdata acmedns json n redacted is being used to maintain security all settings are default with the exception of the following email address redacted redacted com wildcard domain redacted redacted com certmanager provider acmedns host auth acme dns io acmednsjson allowfrom fulldomain redacted auth acme dns io password redacted subdomain redacted username redacted originally posted by in | 1 |
189,135 | 6,794,519,481 | IssuesEvent | 2017-11-01 12:31:27 | textpattern/textpattern | https://api.github.com/repos/textpattern/textpattern | closed | Fine-graining control of expired articles | Component: core Enhancement Priority: medium Usability | _From [r.wetzlmayr](https://code.google.com/u/r.wetzlmayr/) on July 09, 2009 09:32:48_
Past their expiry date, expired articles are output according to the
'publish_expired_articles' prefs - a site-wide setting which affects the
range of electable articles for both XHTML output and RSS/Atom feeds.
For feeds, this behaviour seems sufficient.
OTOH, XHTML pages should allow expired articles to get published in any
arbitrary fashion, at the designer's discretion. An additional attribute to
<txp:article[_custom] />, <txp:recent_articles /> and <txp:related_articles
/> would be reasonable.
Related discussion: http://forum.textpattern.com/viewtopic.php?id=30726
_Original issue: http://code.google.com/p/textpattern/issues/detail?id=14_
| 1.0 | Fine-graining control of expired articles - _From [r.wetzlmayr](https://code.google.com/u/r.wetzlmayr/) on July 09, 2009 09:32:48_
Past their expiry date, expired articles are output according to the
'publish_expired_articles' prefs - a site-wide setting which affects the
range of electable articles for both XHTML output and RSS/Atom feeds.
For feeds, this behaviour seems sufficient.
OTOH, XHTML pages should allow expired articles to get published in any
arbitrary fashion, at the designer's discretion. An additional attribute to
<txp:article[_custom] />, <txp:recent_articles /> and <txp:related_articles
/> would be reasonable.
Related discussion: http://forum.textpattern.com/viewtopic.php?id=30726
_Original issue: http://code.google.com/p/textpattern/issues/detail?id=14_
| priority | fine graining control of expired articles from on july past their expiry date expired articles are output according to the publish expired articles prefs a site wide setting which affects the range of electable articles for both xhtml output and rss atom feeds for feeds this behaviour seems sufficient otoh xhtml pages should allow expired articles to get published in any arbitrary fashion at the designer s discretion an additional attribute to and txp related articles would be reasonable related discussion original issue | 1 |
88,409 | 3,777,317,422 | IssuesEvent | 2016-03-17 19:37:30 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | opened | Structured Legends | addition: feature priority: medium | Provide support for a prescriptive legend structure defined in the viewer configuration. This would allow for the default hierarchical structure of the map services to be overridden in order to tailor the appearance when publishing a thematic map.
Further scoping and requirements TBD. | 1.0 | Structured Legends - Provide support for a prescriptive legend structure defined in the viewer configuration. This would allow for the default hierarchical structure of the map services to be overridden in order to tailor the appearance when publishing a thematic map.
Further scoping and requirements TBD. | priority | structured legends provide support for a prescriptive legend structure defined in the viewer configuration this would allow for the default hierarchical structure of the map services to be overridden in order to tailor the appearance when publishing a thematic map further scoping and requirements tbd | 1 |
631,290 | 20,150,081,803 | IssuesEvent | 2022-02-09 11:28:58 | ita-social-projects/horondi_client_fe | https://api.github.com/repos/ita-social-projects/horondi_client_fe | closed | [Створити категорію] Button 'Зберегти' is active when mandatory fields are blank | bug priority: medium Functional Admin part | **Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 1fdc570
**Preconditions:**
Go to https://horondi-admin-staging.azurewebsites.net
Log into Administrator page as Administrator
**Steps to reproduce:**
- Go to 'Категорії' menu item.
- Upload photo more less than15MB.
- Leave empty input fields
**Actual result:**
1. Button 'Зберегти' is active.
<img width="847" alt="error messages empty fields 2" src="https://user-images.githubusercontent.com/75261055/103460396-86bfb380-4d1e-11eb-9dfe-072e6d1f072b.png">
Page 'Категорії' is opened.
No error message 'Розмір зображення має бути менше 15 Мб' occurs.
Page 'Категорії' fails to display after clicking on 'Категорії' menu item.
**Expected result**
1. 'Зберегти' button is disabled
2. Mouse pointer is not changing its state on it

User story and test case links:
User story: [LVHRB-15](https://jira.softserve.academy/browse/LVHRB-15)
Test [LVHRB-72](https://jira.softserve.academy/browse/LVHRB-72)
| 1.0 | [Створити категорію] Button 'Зберегти' is active when mandatory fields are blank - **Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 1fdc570
**Preconditions:**
Go to https://horondi-admin-staging.azurewebsites.net
Log into Administrator page as Administrator
**Steps to reproduce:**
- Go to 'Категорії' menu item.
- Upload photo more less than15MB.
- Leave empty input fields
**Actual result:**
1. Button 'Зберегти' is active.
<img width="847" alt="error messages empty fields 2" src="https://user-images.githubusercontent.com/75261055/103460396-86bfb380-4d1e-11eb-9dfe-072e6d1f072b.png">
Page 'Категорії' is opened.
No error message 'Розмір зображення має бути менше 15 Мб' occurs.
Page 'Категорії' fails to display after clicking on 'Категорії' menu item.
**Expected result**
1. 'Зберегти' button is disabled
2. Mouse pointer is not changing its state on it

User story and test case links:
User story: [LVHRB-15](https://jira.softserve.academy/browse/LVHRB-15)
Test [LVHRB-72](https://jira.softserve.academy/browse/LVHRB-72)
| priority | button зберегти is active when mandatory fields are blank environment windows pro google chrome version reproducible always build found commit preconditions go to log into administrator page as administrator steps to reproduce go to категорії menu item upload photo more less leave empty input fields actual result button зберегти is active img width alt error messages empty fields src page категорії is opened no error message розмір зображення має бути менше мб occurs page категорії fails to display after clicking on категорії menu item expected result зберегти button is disabled mouse pointer is not changing its state on it user story and test case links user story test | 1 |
676,426 | 23,124,822,730 | IssuesEvent | 2022-07-28 03:48:21 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | Index backfill: timestamp_history_retention_interval_sec should be increased appropriately for unique indexes during backfill. | kind/bug area/docdb priority/medium | Jira Link: [DB-1280](https://yugabyte.atlassian.net/browse/DB-1280)
| 1.0 | Index backfill: timestamp_history_retention_interval_sec should be increased appropriately for unique indexes during backfill. - Jira Link: [DB-1280](https://yugabyte.atlassian.net/browse/DB-1280)
| priority | index backfill timestamp history retention interval sec should be increased appropriately for unique indexes during backfill jira link | 1 |
434,721 | 12,522,129,300 | IssuesEvent | 2020-06-03 18:36:37 | ithriv/ithriv_web | https://api.github.com/repos/ithriv/ithriv_web | closed | New Resources/Events - description not showing | bug medium priority | Hello Ravi,
For recently created pages, the first part of the description is not showing on the small tiles. It either shows up blank or looks like <p...
Go to https://portal.ithriv.org/#/category/1039 and see how tiles look there.
Maybe issue with new Angular version? | 1.0 | New Resources/Events - description not showing - Hello Ravi,
For recently created pages, the first part of the description is not showing on the small tiles. It either shows up blank or looks like <p...
Go to https://portal.ithriv.org/#/category/1039 and see how tiles look there.
Maybe issue with new Angular version? | priority | new resources events description not showing hello ravi for recently created pages the first part of the description is not showing on the small tiles it either shows up blank or looks like p go to and see how tiles look there maybe issue with new angular version | 1 |
599,856 | 18,284,663,534 | IssuesEvent | 2021-10-05 08:57:16 | rancher/k3d | https://api.github.com/repos/rancher/k3d | closed | [BUG] DNS not resolving | bug help wanted runtime priority/medium | **What did you do?**
- How was the cluster created?
```
k3d create -n mycluster
```
- What did you do afterwards?
Start a pod and try a DNS query:
```
$ export KUBECONFIG="$(k3d get-kubeconfig --name='mycluster')"
$ kubectl run --restart=Never --rm -i --tty tmp --image=alpine -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.gmail.com
Server: 10.43.0.10
Address: 10.43.0.10:53
;; connection timed out; no servers could be reached
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
/ # exit
```
Exec into the `k3d` container and do the same DNS query:
```
docker exec -it k3d-endpoint-server sh
/ # nslookup www.gmail.com
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 172.217.164.101
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 2607:f8b0:4005:80b::2005
/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
/ # exit
```
**What did you expect to happen?**
I would expect the pods in the k3d cluster to be able to resolve DNS names
**Which OS & Architecture?**
MacOS 10.15.3
**Which version of `k3d`?**
- output of `k3d --version`
```
k3d version v1.7.0
```
**Which version of docker?**
- output of `docker version`
```
docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:34 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
``` | 1.0 | [BUG] DNS not resolving - **What did you do?**
- How was the cluster created?
```
k3d create -n mycluster
```
- What did you do afterwards?
Start a pod and try a DNS query:
```
$ export KUBECONFIG="$(k3d get-kubeconfig --name='mycluster')"
$ kubectl run --restart=Never --rm -i --tty tmp --image=alpine -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup www.gmail.com
Server: 10.43.0.10
Address: 10.43.0.10:53
;; connection timed out; no servers could be reached
/ # cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
/ # exit
```
Exec into the `k3d` container and do the same DNS query:
```
docker exec -it k3d-endpoint-server sh
/ # nslookup www.gmail.com
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 172.217.164.101
Non-authoritative answer:
www.gmail.com canonical name = mail.google.com
mail.google.com canonical name = googlemail.l.google.com
Name: googlemail.l.google.com
Address: 2607:f8b0:4005:80b::2005
/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0
/ # exit
```
**What did you expect to happen?**
I would expect the pods in the k3d cluster to be able to resolve DNS names
**Which OS & Architecture?**
MacOS 10.15.3
**Which version of `k3d`?**
- output of `k3d --version`
```
k3d version v1.7.0
```
**Which version of docker?**
- output of `docker version`
```
docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:34 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
``` | priority | dns not resolving what did you do how was the cluster created create n mycluster what did you do afterwards start a pod and try a dns query export kubeconfig get kubeconfig name mycluster kubectl run restart never rm i tty tmp image alpine sh if you don t see a command prompt try pressing enter nslookup server address connection timed out no servers could be reached cat etc resolv conf search default svc cluster local svc cluster local cluster local nameserver options ndots exit exec into the container and do the same dns query docker exec it endpoint server sh nslookup server address non authoritative answer canonical name mail google com mail google com canonical name googlemail l google com name googlemail l google com address non authoritative answer canonical name mail google com mail google com canonical name googlemail l google com name googlemail l google com address cat etc resolv conf nameserver options ndots exit what did you expect to happen i would expect the pods in the cluster to be able to resolve dns names which os architecture macos which version of output of version version which version of docker output of docker version docker version client docker engine community version api version go version git commit built wed nov os arch darwin experimental false server docker engine community engine version api version minimum version go version git commit built wed nov os arch linux experimental true containerd version gitcommit runc version dev gitcommit docker init version gitcommit | 1 |
63,296 | 3,194,560,461 | IssuesEvent | 2015-09-30 12:54:14 | MinetestForFun/server-minetestforfun-creative | https://api.github.com/repos/MinetestForFun/server-minetestforfun-creative | closed | Worldmap seems wrong | Modding Priority: Medium | When playing on the creative,
we sometimes can see ourselves on the worldmap. It seems there is a misconfiguration on the creative server ?
| 1.0 | Worldmap seems wrong - When playing on the creative,
we sometimes can see ourselves on the worldmap. It seems there is a misconfiguration on the creative server ?
| priority | worldmap seems wrong when playing on the creative we sometimes can see ourselves on the worldmap it seems there is a misconfiguration on the creative server | 1 |
79,365 | 3,535,071,170 | IssuesEvent | 2016-01-16 06:55:11 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | closed | BackingStoreException on headless | * Missing use case > Bug In Headless Priority Medium | ```
What steps will reproduce the problem?
1. I run headless simulation on 3 servers with 32 cores after 380 simulations the program
raises a exception:
java.util.prefs.backingstoreexception : couldn't get file lock
What version of the product are you using? On what operating system?
I'm using headless version 1.6 on gentoo
```
Original issue reported on code.google.com by `morgan.seston@ird.fr` on 2014-01-30 08:16:38 | 1.0 | BackingStoreException on headless - ```
What steps will reproduce the problem?
1. I run headless simulation on 3 servers with 32 cores after 380 simulations the program
raises a exception:
java.util.prefs.backingstoreexception : couldn't get file lock
What version of the product are you using? On what operating system?
I'm using headless version 1.6 on gentoo
```
Original issue reported on code.google.com by `morgan.seston@ird.fr` on 2014-01-30 08:16:38 | priority | backingstoreexception on headless what steps will reproduce the problem i run headless simulation on servers with cores after simulations the program raises a exception java util prefs backingstoreexception couldn t get file lock what version of the product are you using on what operating system i m using headless version on gentoo original issue reported on code google com by morgan seston ird fr on | 1 |
678,264 | 23,190,870,927 | IssuesEvent | 2022-08-01 12:33:37 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | [Core] Log xssecurity processor errors using ProblemsFacade | wontfix core priority-medium customer supportability incomplete | Implement it for the **xssecurity** processor.
This will help for customer testing.
Using method `logProcessorsErrors()` in [class](https://github.com/SAP/xsk/blob/main/modules/engines/engine-commons/src/main/java/com/sap/xsk/utils/XSKCommonsUtils.java).
Related to:
- https://github.com/SAP/xsk/issues/465
- https://github.com/SAP/xsk/issues/31
- https://github.com/SAP/xsk/issues/550
- https://github.com/SAP/xsk/issues/81 | 1.0 | [Core] Log xssecurity processor errors using ProblemsFacade - Implement it for the **xssecurity** processor.
This will help for customer testing.
Using method `logProcessorsErrors()` in [class](https://github.com/SAP/xsk/blob/main/modules/engines/engine-commons/src/main/java/com/sap/xsk/utils/XSKCommonsUtils.java).
Related to:
- https://github.com/SAP/xsk/issues/465
- https://github.com/SAP/xsk/issues/31
- https://github.com/SAP/xsk/issues/550
- https://github.com/SAP/xsk/issues/81 | priority | log xssecurity processor errors using problemsfacade implement it for the xssecurity processor this will help for customer testing using method logprocessorserrors in related to | 1 |
349,952 | 10,476,152,556 | IssuesEvent | 2019-09-23 17:56:37 | oVirt/ovirt-web-ui | https://api.github.com/repos/oVirt/ovirt-web-ui | reopened | VM Details > Disks Card > Editor Dialog - Combine the size and extend fields | Priority: Low Severity: Medium Type: Enhancement | The #838 implementation of disk edit includes 2 size fields, the first displays the current size while the second specifies how much to expand the disk size:

The proposed change is to combine the display and expand size, with an initial and minimum value of the current disk size:
> For the 'Edit Disk' modal:
>
> * I would get rid of the 'Extend size by' field and only have the 'Size' field. The 'Size' field could feature the same info tip that I noted in the comment above about 'Once you have created a disk, you can only extend the size of the disk if you make any edits to it.' The 'Size' field in edit mode could feature the up and down value adding buttons but have the down arrow disabled so the user could only increase the value. An error message could also appear below the field if they enter a value that is below the current size. | 1.0 | VM Details > Disks Card > Editor Dialog - Combine the size and extend fields - The #838 implementation of disk edit includes 2 size fields, the first displays the current size while the second specifies how much to expand the disk size:

The proposed change is to combine the display and expand size, with an initial and minimum value of the current disk size:
> For the 'Edit Disk' modal:
>
> * I would get rid of the 'Extend size by' field and only have the 'Size' field. The 'Size' field could feature the same info tip that I noted in the comment above about 'Once you have created a disk, you can only extend the size of the disk if you make any edits to it.' The 'Size' field in edit mode could feature the up and down value adding buttons but have the down arrow disabled so the user could only increase the value. An error message could also appear below the field if they enter a value that is below the current size. | priority | vm details disks card editor dialog combine the size and extend fields the implementation of disk edit includes size fields the first displays the current size while the second specifies how much to expand the disk size the proposed change is to combine the display and expand size with an initial and minimum value of the current disk size for the edit disk modal i would get rid of the extend size by field and only have the size field the size field could feature the same info tip that i noted in the comment above about once you have created a disk you can only extend the size of the disk if you make any edits to it the size field in edit mode could feature the up and down value adding buttons but have the down arrow disabled so the user could only increase the value an error message could also appear below the field if they enter a value that is below the current size | 1 |
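The validation rule proposed in this issue — the size may only grow, with an error shown when the entered value is below the current size — can be sketched as a small check. This is a minimal illustration only; the function name and error message are assumptions, not part of the ovirt-web-ui codebase:

```python
def validate_disk_size(current_gib: float, requested_gib: float):
    """Return an error message if the requested size is invalid, else None.

    Disks can only be extended, so the requested size must be at least
    the current size. Names and messages here are illustrative only.
    """
    if requested_gib < current_gib:
        return (f"Size must be at least the current disk size "
                f"({current_gib} GiB); disks can only be extended.")
    return None
```

Under this rule the spinner's down arrow would simply be disabled once the value reaches the current size, matching the proposed edit-mode behaviour.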
439,565 | 12,683,897,312 | IssuesEvent | 2020-06-19 20:57:10 | LBNL-ETA/BEDES-Manager | https://api.github.com/repos/LBNL-ETA/BEDES-Manager | closed | BEDES unit ignored on import? | bug medium priority | In the import .csv file, if I specify the unit for the BEDES composite term, I don't get the same unit when I later click on that composite term in the search results (i.e., I specified "m2" in the .csv file, but the composite term had "ft2"). This seems to work properly for application terms, but not for BEDES composite terms. | 1.0 | BEDES unit ignored on import? - In the import .csv file, if I specify the unit for the BEDES composite term, I don't get the same unit when I later click on that composite term in the search results (i.e., I specified "m2" in the .csv file, but the composite term had "ft2"). This seems to work properly for application terms, but not for BEDES composite terms. | priority | bedes unit ignored on import in the import csv file if i specify the unit for the bedes composite term i don t get the same unit when i later click on that composite term in the search results i e i specified in the csv file but the composite term had this seems to work properly for application terms but not for bedes composite terms | 1 |
604,960 | 18,722,054,564 | IssuesEvent | 2021-11-03 12:55:17 | Psychoanalytic-Electronic-Publishing/PEP-Web-Configuration | https://api.github.com/repos/Psychoanalytic-Electronic-Publishing/PEP-Web-Configuration | opened | Annual Production Process Setup | Medium Priority (within 3 or 4 weeks) | We don't yet have the system set up for annual production of new journals and books.
Each year, we add X new journals, potentially some new books, and release a "new PEP-Web" on January 22 (approximately).
To get data in place for that release we put it in PEPFuture, which only shows up on stage. Then when we are ready to release, we move that data to PEPArchive and PEPCurrent as applicable.
So: we need a means to put data in PEPFuture on Stage, and have those items show up in PEP-Web Stage, but not be pushed to Production during normal push processing, or more likely, since everything is pushed when we push Solr, to make it such that PEPFuture does not show up on production. Not just the articles, but the journal names and books which appear in the browse listing. They should show up on Stage, for testing and proofing, but not on Production.
That will likely need to be done at the client, server and PaDS levels, and may also involve DevOps based processing.
Open for Ideas and discussion below. _This year we are adding around 5 new journals!_ So we need to handle this relatively quickly.
| 1.0 | Annual Production Process Setup - We don't yet have the system set up for annual production of new journals and books.
Each year, we add X new journals, potentially some new books, and release a "new PEP-Web" on January 22 (approximately).
To get data in place for that release we put it in PEPFuture, which only shows up on stage. Then when we are ready to release, we move that data to PEPArchive and PEPCurrent as applicable.
So: we need a means to put data in PEPFuture on Stage, and have those items show up in PEP-Web Stage, but not be pushed to Production during normal push processing, or more likely, since everything is pushed when we push Solr, to make it such that PEPFuture does not show up on production. Not just the articles, but the journal names and books which appear in the browse listing. They should show up on Stage, for testing and proofing, but not on Production.
That will likely need to be done at the client, server and PaDS levels, and may also involve DevOps based processing.
Open for Ideas and discussion below. _This year we are adding around 5 new journals!_ So we need to handle this relatively quickly.
| priority | annual production process setup we don t yet have the system set up for annual production of new journals and books each year we add x new journals potentially some new books and release a new pep web on january approximately to get data in place for that release we put it in pepfuture which only shows up on stage then when we are ready to release we move that data to peparchive and pepcurrent as applicable so we need a means to put data in pepfuture on stage and have those items show up in pep web stage but not be pushed to production during normal push processing or more likely since everything is pushed when we push solr to make it such that pepfuture does not show up on production not just the articles but the journal names and books which appear in the browse listing they should show up on stage for testing and proofing but not on production that will likely need to be done at the client server and pads levels and may also involve devops based processing open for ideas and discussion below this year we are adding around new journals so we need to handle this relatively quickly | 1 |
132,549 | 5,188,153,955 | IssuesEvent | 2017-01-20 19:03:37 | phetsims/unit-rates | https://api.github.com/repos/phetsims/unit-rates | closed | Location of objects on the shelf not saved in state | priority:3-medium type:bug type:wontfix | Consider whether location of objects on the shelf should be saved. Right now, the objects can shift from when you leave a particular type of fruit and then return back to it later. Seems like a bug, but not particularly high priority.
| 1.0 | Location of objects on the shelf not saved in state - Consider whether location of objects on the shelf should be saved. Right now, the objects can shift from when you leave a particular type of fruit and then return back to it later. Seems like a bug, but not particularly high priority.
| priority | location of objects on the shelf not saved in state consider whether location of objects on the shelf should be saved right now the objects can shift from when you leave a particular type of fruit and then return back to it later seems like a bug but not particularly high priority | 1 |
209,205 | 7,166,248,036 | IssuesEvent | 2018-01-29 16:40:28 | HeathWallace/ethereum-pos | https://api.github.com/repos/HeathWallace/ethereum-pos | opened | Display customer username and photo | Priority: Medium Status: Blocked Type: UI | A UI component for the customer name/photo display needs to be built.
- Component should account for customers with and without names/photos. | 1.0 | Display customer username and photo - A UI component for the customer name/photo display needs to be built.
- Component should account for customers with and without names/photos. | priority | display customer username and photo a ui component for the customer name photo display needs to be built component should account for customers with and without names photos | 1 |
822,081 | 30,851,392,426 | IssuesEvent | 2023-08-02 17:01:09 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Potential race on RemoteTablet::replicas_ between RemoteTablet::GetRemoteTabletServers and RemoteTablet::Refresh | kind/bug area/docdb priority/medium 2.14 Backport Required 2.16 Backport Required 2.18 Backport Required | Jira Link: [DB-6936](https://yugabyte.atlassian.net/browse/DB-6936)
### Description
Test: `org.yb.loadtester.TestClusterWithHighLoadAndSlowSync.testClusterFullMoveWithHighLoadAndSlowSync`
Analyze Trends: https://detective-gcp.dev.yugabyte.com/stability/test?analyze_trends=true&branch=master&build_type=all&class=org.yb.loadtester.TestClusterWithHighLoadAndSlowSync&fail_tag=all&name=testClusterFullMoveWithHighLoadAndSlowSync&platform=linux
ERROR: `java.lang.Exception: Operation timed out after 45000ms`
Jenkins run: https://jenkins.dev.yugabyte.com/job/github-yugabyte-db-alma8-master-clang16-asan/103/testReport/junit/org.yb.loadtester/TestClusterWithHighLoadAndSlowSync/testClusterFullMoveWithHighLoadAndSlowSync/
Corresponding asan issue: container-overflow on the `replicas_` vector
```
ts1|pid21481|:19588 ==21481==ERROR: AddressSanitizer: container-overflow on address 0x60c00001e178 at pc 0x7f76e2d3360f bp 0x7f76c29754b0 sp 0x7f76c29754a8
ts1|pid21481|:19588 WRITE of size 4 at 0x60c00001e178 thread T26 (TabletServer_re)
ts1|pid21481|:19588 I0620 14:00:46.384110 22201 tablet_metadata.cc:734] T {stock_ticker_raw_tablet_id5} P {ts1_peer_id}: Successfully destroyed provisional records DB at: ${TEST_TMPDIR}/ts-127.108.122.172-19588-1687269473247/yb-data/tserver/data/rocksdb/table-646d0a9c0f2047439ac4fcc86d12f62b/tablet-{stock_ticker_raw_tablet_id5}.intents
ts1|pid21481|:19588 I0620 14:00:46.393419 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 81ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.399729 21854 log.cc:1226] T {stock_ticker_raw_tablet_id9} P {ts3_peer_id}: Injecting 93ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.406644 21803 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts2_peer_id}: Injecting 120ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.414602 21822 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts3_peer_id}: Injecting 102ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.442550 22201 ts_tablet_manager.cc:2853] T {stock_ticker_raw_tablet_id5} P {ts1_peer_id}: Tablet deleted. Last logged OpId: 2.1658
ts1|pid21481|:19588 I0620 14:00:46.442641 22201 log.cc:1612] T {stock_ticker_raw_tablet_id5} P {ts1_peer_id}: Deleting WAL dir ${TEST_TMPDIR}/ts-127.108.122.172-19588-1687269473247/yb-data/tserver/wals/table-646d0a9c0f2047439ac4fcc86d12f62b/tablet-{stock_ticker_raw_tablet_id5}
ts1|pid21481|:19588 I0620 14:00:46.443351 22201 tablet_bootstrap_if.cc:96] T {stock_ticker_raw_tablet_id5} P {ts1_peer_id}: Deleted tablet blocks from disk
ts1|pid21481|:19588 I0620 14:00:46.443431 22201 ts_tablet_manager.cc:2595] Unregister data/wal directory assignment map for table: 646d0a9c0f2047439ac4fcc86d12f62b and tablet {stock_ticker_raw_tablet_id5}
ts1|pid21481|:19588 I0620 14:00:46.443476 22201 ts_tablet_manager.cc:2928] Deleted transition in progress deleting tablet for tablet {stock_ticker_raw_tablet_id5}
ts1|pid21481|:19588 I0620 14:00:46.475014 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 107ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.493301 21854 log.cc:1226] T {stock_ticker_raw_tablet_id9} P {ts3_peer_id}: Injecting 54ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.502784 21825 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts3_peer_id}: Injecting 68ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.510865 21823 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts2_peer_id}: Injecting 145ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.539144 21853 log.cc:1226] T {stock_ticker_raw_tablet_id9} P {ts2_peer_id}: Injecting 100ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.571316 21825 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts3_peer_id}: Injecting 14ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.647917 21854 log.cc:1226] T {stock_ticker_raw_tablet_id9} P {ts3_peer_id}: Injecting 71ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.658509 21803 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts2_peer_id}: Injecting 68ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.658881 21853 log.cc:1226] T {stock_ticker_raw_tablet_id9} P {ts2_peer_id}: Injecting 81ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.659333 21822 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts3_peer_id}: Injecting 74ms of latency in Log::Sync()
ts2|pid21484|:24938 I0620 14:00:46.661159 21823 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts2_peer_id}: Injecting 120ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.689364 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 14ms of latency in Log::Sync()
m4|pid21957|:23052 I0620 14:00:46.685217 22281 cluster_balance.cc:1488] Removing replica {ts3_peer_id} from tablet {stock_ticker_raw_tablet_id5}
m4|pid21957|:23052 W0620 14:00:46.685534 22281 cluster_balance.cc:540] Skipping add replicas for 646d0a9c0f2047439ac4fcc86d12f62b: Operation failed. Try again (yb/master/cluster_balance.cc:897): Cannot add replicas. Currently have a total overreplication of 1, when max allowed is 1, overreplicated tablets: {stock_ticker_raw_tablet_id6}
m4|pid21957|:23052 W0620 14:00:46.685632 22281 cluster_balance.cc:540] Skipping add replicas for 1b1993edc0394e55a2bf38cc5db378e4: Operation failed. Try again (yb/master/cluster_balance.cc:897): Cannot add replicas. Currently have a total overreplication of 1, when max allowed is 1, overreplicated tablets: {stock_ticker_1min_tablet_id2}
ts6|pid22511|:18601 I0620 14:00:46.689904 23146 raft_consensus.cc:2420] Received ChangeConfig request tablet_id: "{stock_ticker_raw_tablet_id5}" type: REMOVE_SERVER server { permanent_uuid: "{ts3_peer_id}" } dest_uuid: "{ts6_peer_id}" cas_config_opid_index: 1660
ts6|pid22511|:18601 I0620 14:00:46.690189 23146 raft_consensus.cc:3055] T {stock_ticker_raw_tablet_id5} P {ts6_peer_id} [term 2 LEADER]: Setting replicate pending config peers { permanent_uuid: "{ts6_peer_id}" member_type: VOTER last_known_private_addr { host: "127.121.62.204" port: 18601 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts5_peer_id}" member_type: VOTER last_known_private_addr { host: "127.106.39.35" port: 29444 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }, type = REMOVE_SERVER
ts6|pid22511|:18601 I0620 14:00:46.690390 23146 consensus_meta.cc:317] T {stock_ticker_raw_tablet_id5} P {ts6_peer_id}: Updating active role from LEADER to LEADER. Consensus state: current_term: 2 leader_uuid: "{ts6_peer_id}" config { peers { permanent_uuid: "{ts6_peer_id}" member_type: VOTER last_known_private_addr { host: "127.121.62.204" port: 18601 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts5_peer_id}" member_type: VOTER last_known_private_addr { host: "127.106.39.35" port: 29444 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, has_pending_config = 1
ts6|pid22511|:18601 I0620 14:00:46.690582 23146 consensus_peers.cc:584] T {stock_ticker_raw_tablet_id5} P {ts6_peer_id} -> Peer {ts3_peer_id} ([host: "127.162.6.81" port: 24461], []): Closing peer
ts6|pid22511|:18601 I0620 14:00:46.690708 23146 consensus_queue.cc:279] T {stock_ticker_raw_tablet_id5} P {ts6_peer_id} [LEADER]: Queue going to LEADER mode. State: All replicated op: 0.0, Majority replicated op: 2.1660, Committed index: 2.1660, Last applied: 2.1660, Last appended: 2.1660, Current term: 2, Majority size: 2, State: QUEUE_OPEN, Mode: LEADER, active raft config: peers { permanent_uuid: "{ts6_peer_id}" member_type: VOTER last_known_private_addr { host: "127.121.62.204" port: 18601 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts5_peer_id}" member_type: VOTER last_known_private_addr { host: "127.106.39.35" port: 29444 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } }
ts1|pid21481|:19588 #0 0x7f76e2d3360e in yb::client::internal::RemoteTablet::GetRemoteTabletServers(std::vector<yb::client::internal::RemoteTabletServer*, std::allocator<yb::client::internal::RemoteTabletServer*>>*, yb::StronglyTypedBool<yb::client::internal::IncludeFailedReplicas_Tag>) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:558:31
ts1|pid21481|:19588 #1 0x7f76e2e3aa53 in yb::client::internal::TabletInvoker::SelectTabletServer() ${BUILD_ROOT}/../../src/yb/client/tablet_rpc.cc:148:14
ts1|pid21481|:19588 #2 0x7f76e2e3cf31 in yb::client::internal::TabletInvoker::Execute(string const&, bool) ${BUILD_ROOT}/../../src/yb/client/tablet_rpc.cc:229:5
ts1|pid21481|:19588 #3 0x7f76e2abb3d4 in yb::client::internal::AsyncRpc::SendRpc() ${BUILD_ROOT}/../../src/yb/client/async_rpc.cc:222:19
ts1|pid21481|:19588 #4 0x7f76deae33c5 in yb::rpc::RpcRetrier::DoRetry(yb::rpc::RpcCommand*, yb::Status const&) ${BUILD_ROOT}/../../src/yb/rpc/rpc.cc:227:10
ts1|pid21481|:19588 #5 0x7f76e363ee71 in boost::function1<void, yb::Status const&>::operator()(yb::Status const&) const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/include/boost/function/function_template.hpp:763:14
ts1|pid21481|:19588 #6 0x7f76deacdf31 in yb::rpc::DelayedTask::TimerHandler(ev::timer&, int) ${BUILD_ROOT}/../../src/yb/rpc/delayed_task.cc:152:5
ts1|pid21481|:19588 #7 0x7f76dc4936ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca)
ts1|pid21481|:19588 #8 0x7f76dc4943c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6)
ts1|pid21481|:19588 #9 0x7f76dea9b4fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7
ts1|pid21481|:19588 #10 0x7f76dea9b4fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9
ts1|pid21481|:19588 #11 0x7f76dd1db590 in std::__function::__value_func<void ()>::operator()[abi:v160003]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16
ts1|pid21481|:19588 #12 0x7f76dd1db590 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12
ts1|pid21481|:19588 #13 0x7f76dd1db590 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3
ts1|pid21481|:19588 #14 0x7f76d82521c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823)
ts1|pid21481|:19588 #15 0x7f76d7ca6e72 in clone (/lib64/libc.so.6+0x39e72) (BuildId: 6d1dc58340cb6c575073da1e2efb8ac2a3cadc23)
ts1|pid21481|:19588
ts1|pid21481|:19588 0x60c00001e178 is located 120 bytes inside of 128-byte region [0x60c00001e100,0x60c00001e180)
ts1|pid21481|:19588 allocated by thread T45 (rpc_tp_TabletSe) here:
ts1|pid21481|:19588 #0 0x562533f6ec6d in operator new(unsigned long) /opt/yb-build/llvm/yb-llvm-v16.0.3-yb-1-1683786200-c8b432af-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_new_delete.cpp:95:3
ts1|pid21481|:19588 #1 0x7f76e2d6fd2a in void* std::__libcpp_operator_new[abi:v160003]<unsigned long>(unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/new:266:10
ts1|pid21481|:19588 #2 0x7f76e2d6fd2a in std::__libcpp_allocate[abi:v160003](unsigned long, unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/new:292:10
ts1|pid21481|:19588 #3 0x7f76e2d6fd2a in std::allocator<yb::client::internal::RemoteReplica>::allocate[abi:v160003](unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__memory/allocator.h:115:38
ts1|pid21481|:19588 #4 0x7f76e2d6fd2a in std::__allocation_result<std::allocator_traits<std::allocator<yb::client::internal::RemoteReplica>>::pointer> std::__allocate_at_least[abi:v160003]<std::allocator<yb::client::internal::RemoteReplica>>(std::allocator<yb::client::internal::RemoteReplica>&, unsigned long) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__memory/allocate_at_least.h:55:19
ts1|pid21481|:19588 #5 0x7f76e2d6fd2a in std::__split_buffer<yb::client::internal::RemoteReplica, std::allocator<yb::client::internal::RemoteReplica>&>::__split_buffer(unsigned long, unsigned long, std::allocator<yb::client::internal::RemoteReplica>&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__split_buffer:323:29
ts1|pid21481|:19588 #6 0x7f76e2d6f76c in void std::vector<yb::client::internal::RemoteReplica, std::allocator<yb::client::internal::RemoteReplica>>::__emplace_back_slow_path<yb::client::internal::RemoteTabletServer*, yb::PeerRole>(yb::client::internal::RemoteTabletServer*&&, yb::PeerRole&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1580:49
ts1|pid21481|:19588 #7 0x7f76e2d2f25c in yb::client::internal::RemoteReplica& std::vector<yb::client::internal::RemoteReplica, std::allocator<yb::client::internal::RemoteReplica>>::emplace_back<yb::client::internal::RemoteTabletServer*, yb::PeerRole>(yb::client::internal::RemoteTabletServer*&&, yb::PeerRole&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/vector:1603:9
ts1|pid21481|:19588 #8 0x7f76e2d2f25c in yb::client::internal::RemoteTablet::Refresh(std::unordered_map<string, std::shared_ptr<yb::client::internal::RemoteTabletServer>, std::hash<string>, std::equal_to<string>, std::allocator<std::pair<string const, std::shared_ptr<yb::client::internal::RemoteTabletServer>>>> const&, google::protobuf::RepeatedPtrField<yb::master::TabletLocationsPB_ReplicaPB> const&) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:354:15
ts1|pid21481|:19588 #9 0x7f76e2d40814 in yb::client::internal::MetaCache::ProcessTabletLocation(yb::master::TabletLocationsPB const&, std::unordered_map<string, std::unordered_map<string, scoped_refptr<yb::client::internal::RemoteTablet>, std::hash<string>, std::equal_to<string>, std::allocator<std::pair<string const, scoped_refptr<yb::client::internal::RemoteTablet>>>>, std::hash<string>, std::equal_to<string>, std::allocator<std::pair<string const, std::unordered_map<string, scoped_refptr<yb::client::internal::RemoteTablet>, std::hash<string>, std::equal_to<string>, std::allocator<std::pair<string const, scoped_refptr<yb::client::internal::RemoteTablet>>>>>>>*, boost::optional<unsigned int> const&, yb::client::internal::LookupRpc*) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:1085:13
ts1|pid21481|:19588 #10 0x7f76e2d3d048 in yb::client::internal::MetaCache::ProcessTabletLocations(google::protobuf::RepeatedPtrField<yb::master::TabletLocationsPB> const&, boost::optional<unsigned int>, yb::client::internal::LookupRpc*) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:948:21
ts1|pid21481|:19588 #11 0x7f76e2d90d66 in yb::client::internal::LookupByKeyRpc::ProcessTabletLocations(google::protobuf::RepeatedPtrField<yb::master::TabletLocationsPB> const&, boost::optional<unsigned int>) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:1611:26
ts1|pid21481|:19588 #12 0x7f76e2d93c47 in void yb::client::internal::LookupRpc::DoProcessResponse<yb::master::GetTableLocationsResponsePB>(yb::Status const&, yb::master::GetTableLocationsResponsePB const&) ${BUILD_ROOT}/../../src/yb/client/meta_cache.cc:850:18
ts1|pid21481|:19588 #13 0x7f76e2bb1e0a in yb::client::internal::ClientMasterRpcBase::Finished(yb::Status const&) ${BUILD_ROOT}/../../src/yb/client/client_master_rpc.cc:157:3
ts1|pid21481|:19588 #14 0x7f76e2d93436 in decltype(*std::declval<yb::client::internal::LookupByKeyRpc*&>().*std::declval<void (yb::client::internal::ClientMasterRpcBase::*&)(yb::Status const&)>()(std::declval<yb::Status::OK&>())) std::__invoke[abi:v160003]<void (yb::client::internal::ClientMasterRpcBase::*&)(yb::Status const&), yb::client::internal::LookupByKeyRpc*&, yb::Status::OK&, void>(void (yb::client::internal::ClientMasterRpcBase::*&)(yb::Status const&), yb::client::internal::LookupByKeyRpc*&, yb::Status::OK&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/invoke.h:359:23
ts1|pid21481|:19588 #15 0x7f76e2d93436 in std::__bind_return<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::tuple<>, __is_valid_bind_return<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::tuple<>>::value>::type std::__apply_functor[abi:v160003]<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, 0ul, 1ul, std::tuple<>>(void (yb::client::internal::ClientMasterRpcBase::*&)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&, std::__tuple_indices<0ul, 1ul>, std::tuple<>&&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/bind.h:263:12
ts1|pid21481|:19588 #16 0x7f76e2d93436 in std::__bind_return<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::tuple<>, __is_valid_bind_return<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), std::tuple<yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::tuple<>>::value>::type std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>::operator()[abi:v160003]<>() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/bind.h:295:20
ts1|pid21481|:19588 #17 0x7f76e2d93436 in decltype(std::declval<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&>()()) std::__invoke[abi:v160003]<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&>(std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/invoke.h:394:23
ts1|pid21481|:19588 #18 0x7f76e2d93436 in void std::__invoke_void_return_wrapper<void, true>::__call<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&>(std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>&) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/invoke.h:487:9
ts1|pid21481|:19588 #19 0x7f76e2d93436 in std::__function::__alloc_func<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::allocator<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>>, void ()>::operator()[abi:v160003]() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:185:16
ts1|pid21481|:19588 #20 0x7f76e2d93436 in std::__function::__func<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>, std::allocator<std::__bind<void (yb::client::internal::ClientMasterRpcBase::*)(yb::Status const&), yb::client::internal::LookupByKeyRpc*, yb::Status::OK>>, void ()>::operator()() /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:356:12
ts1|pid21481|:19588 #21 0x7f76dea61665 in std::__function::__value_func<void ()>::operator()[abi:v160003]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16
ts1|pid21481|:19588 #22 0x7f76dea61665 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12
ts1|pid21481|:19588 #23 0x7f76dea61665 in yb::rpc::OutboundCall::InvokeCallbackSync() ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:353:3
ts1|pid21481|:19588 #24 0x7f76dea61504 in yb::rpc::InvokeCallbackTask::Run() ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:124:10
ts1|pid21481|:19588 #25 0x7f76deba6290 in yb::rpc::(anonymous namespace)::Worker::Execute() ${BUILD_ROOT}/../../src/yb/rpc/thread_pool.cc:104:15
ts1|pid21481|:19588 #26 0x7f76dd1db590 in std::__function::__value_func<void ()>::operator()[abi:v160003]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16
ts1|pid21481|:19588 #27 0x7f76dd1db590 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12
ts1|pid21481|:19588 #28 0x7f76dd1db590 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3
ts1|pid21481|:19588 #29 0x7f76d82521c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823)
ts1|pid21481|:19588
ts1|pid21481|:19588 Thread T26 (TabletServer_re) created by T0 here:
ts2|pid21484|:24938 I0620 14:00:46.747344 21803 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts2_peer_id}: Injecting 113ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.749097 21822 log.cc:1226] T {stock_ticker_raw_tablet_id7} P {ts3_peer_id}: Injecting 155ms of latency in Log::Sync()
m4|pid21957|:23052 W0620 14:00:46.748585 22493 catalog_manager.cc:7682] Stale heartbeat for Tablet {stock_ticker_raw_tablet_id6} (table stock_ticker_raw [id=646d0a9c0f2047439ac4fcc86d12f62b]) on TS {ts4_peer_id} cstate=current_term: 1 config { opid_index: -1 peers { permanent_uuid: "{ts2_peer_id}" member_type: VOTER last_known_private_addr { host: "127.111.38.153" port: 24938 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts1_peer_id}" member_type: VOTER last_known_private_addr { host: "127.108.122.172" port: 19588 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts3_peer_id}" member_type: VOTER last_known_private_addr { host: "127.162.6.81" port: 24461 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }, prev_cstate=current_term: 1 leader_uuid: "{ts3_peer_id}" config { opid_index: 1524 peers { permanent_uuid: "{ts2_peer_id}" member_type: VOTER last_known_private_addr { host: "127.111.38.153" port: 24938 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts1_peer_id}" member_type: VOTER last_known_private_addr { host: "127.108.122.172" port: 19588 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts3_peer_id}" member_type: VOTER last_known_private_addr { host: "127.162.6.81" port: 24461 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } peers { permanent_uuid: "{ts4_peer_id}" member_type: PRE_VOTER last_known_private_addr { host: "127.1.193.107" port: 10558 } cloud_info { placement_cloud: "cloud1" placement_region: "datacenter1" placement_zone: "rack1" } } }
ts2|pid21484|:24938 I0620 14:00:46.783416 21823 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts2_peer_id}: Injecting 133ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.785916 21825 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts3_peer_id}: Injecting 85ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.786628 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 80ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.872215 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 157ms of latency in Log::Sync()
ts1|pid21481|:19588 #0 0x562533f1c35a in pthread_create /opt/yb-build/llvm/yb-llvm-v16.0.3-yb-1-1683786200-c8b432af-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:208:3
ts1|pid21481|:19588 #1 0x7f76dd1d83d2 in yb::Thread::StartThread(string const&, string const&, std::function<void ()>, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:763:15
ts1|pid21481|:19588 #2 0x7f76dea9abab in yb::Status yb::Thread::Create<void (yb::rpc::Reactor::*)(), yb::rpc::Reactor*>(string const&, string const&, void (yb::rpc::Reactor::* const&)(), yb::rpc::Reactor* const&, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.h:165:12
ts1|pid21481|:19588 #3 0x7f76dea9abab in yb::rpc::Reactor::Init() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:274:10
ts1|pid21481|:19588 #4 0x7f76dea39881 in yb::rpc::Messenger::Init(yb::rpc::MessengerBuilder const&) ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:616:5
ts1|pid21481|:19588 #5 0x7f76dea390d5 in yb::rpc::MessengerBuilder::Build() ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:154:3
ts1|pid21481|:19588 #6 0x7f76e1b9423a in yb::server::RpcServerBase::Init() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:303:16
ts1|pid21481|:19588 #7 0x7f76e1b9bf3d in yb::server::RpcAndWebServerBase::Init() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:514:3
ts1|pid21481|:19588 #8 0x7f76ebc67e6b in yb::tserver::DbServerBase::Init() ${BUILD_ROOT}/../../src/yb/tserver/db_server_base.cc:47:3
ts1|pid21481|:19588 #9 0x7f76ebe91776 in yb::tserver::TabletServer::Init() ${BUILD_ROOT}/../../src/yb/tserver/tablet_server.cc:383:3
ts1|pid21481|:19588 #10 0x7f76ec73972b in yb::tserver::TabletServerMain(int, char**) ${BUILD_ROOT}/../../src/yb/tserver/tablet_server_main_impl.cc:208:3
ts1|pid21481|:19588 #11 0x7f76d7ca7d84 in __libc_start_main (/lib64/libc.so.6+0x3ad84) (BuildId: 6d1dc58340cb6c575073da1e2efb8ac2a3cadc23)
ts1|pid21481|:19588
ts1|pid21481|:19588 Thread T45 (rpc_tp_TabletSe) created by T26 (TabletServer_re) here:
ts1|pid21481|:19588 #0 0x562533f1c35a in pthread_create /opt/yb-build/llvm/yb-llvm-v16.0.3-yb-1-1683786200-c8b432af-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:208:3
ts1|pid21481|:19588 #1 0x7f76dd1d83d2 in yb::Thread::StartThread(string const&, string const&, std::function<void ()>, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:763:15
ts1|pid21481|:19588 #2 0x7f76deba8889 in yb::Status yb::Thread::Create<void (yb::rpc::(anonymous namespace)::Worker::*)(), yb::rpc::(anonymous namespace)::Worker*>(string const&, string const&, void (yb::rpc::(anonymous namespace)::Worker::* const&)(), yb::rpc::(anonymous namespace)::Worker* const&, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.h:165:12
ts1|pid21481|:19588 #3 0x7f76deba8889 in yb::rpc::(anonymous namespace)::Worker::Start(unsigned long) ${BUILD_ROOT}/../../src/yb/rpc/thread_pool.cc:61:12
ts1|pid21481|:19588 #4 0x7f76deba8889 in yb::rpc::ThreadPool::Impl::Enqueue(yb::rpc::ThreadPoolTask*) ${BUILD_ROOT}/../../src/yb/rpc/thread_pool.cc:201:35
ts1|pid21481|:19588 #5 0x7f76dea69dc2 in yb::rpc::OutboundCall::InvokeCallback() ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:338:28
ts1|pid21481|:19588 #6 0x7f76dea6aa98 in yb::rpc::OutboundCall::SetResponse(yb::rpc::CallResponse&&) ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:402:7
ts1|pid21481|:19588 #7 0x7f76de9f630b in yb::rpc::Connection::HandleCallResponse(yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:357:9
ts1|pid21481|:19588 #8 0x7f76debb9258 in yb::rpc::YBOutboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:447:22
ts1|pid21481|:19588 #9 0x7f76debb9258 in non-virtual thunk to yb::rpc::YBOutboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc
ts1|pid21481|:19588 #10 0x7f76de9cb9bc in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:167:7
ts1|pid21481|:19588 #11 0x7f76debb9b27 in yb::rpc::YBOutboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:468:19
ts1|pid21481|:19588 #12 0x7f76de9f4935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27
ts1|pid21481|:19588 #13 0x7f76dead5b03 in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc
ts1|pid21481|:19588 #14 0x7f76dead7de1 in non-virtual thunk to yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc
ts1|pid21481|:19588 #15 0x7f76deb9947e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17
ts1|pid21481|:19588 #16 0x7f76deb95fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31
ts1|pid21481|:19588 #17 0x7f76deb94a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14
ts1|pid21481|:19588 #18 0x7f76dc4936ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca)
ts1|pid21481|:19588 #19 0x7f76dc4943c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6)
ts1|pid21481|:19588 #20 0x7f76dea9b4fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7
ts1|pid21481|:19588 #21 0x7f76dea9b4fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9
ts1|pid21481|:19588 #22 0x7f76dd1db590 in std::__function::__value_func<void ()>::operator()[abi:v160003]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16
ts1|pid21481|:19588 #23 0x7f76dd1db590 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12
ts1|pid21481|:19588 #24 0x7f76dd1db590 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3
ts1|pid21481|:19588 #25 0x7f76d82521c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823)
ts1|pid21481|:19588
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6936]: https://yugabyte.atlassian.net/browse/DB-6936?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
ts2|pid21484|:24938 I0620 14:00:46.783416 21823 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts2_peer_id}: Injecting 133ms of latency in Log::Sync()
ts3|pid21538|:24461 I0620 14:00:46.785916 21825 log.cc:1226] T {stock_ticker_raw_tablet_id6} P {ts3_peer_id}: Injecting 85ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.786628 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 80ms of latency in Log::Sync()
ts1|pid21481|:19588 I0620 14:00:46.872215 21493 log.cc:1226] T {stock_ticker_raw_tablet_id8} P {ts1_peer_id}: Injecting 157ms of latency in Log::Sync()
ts1|pid21481|:19588 #0 0x562533f1c35a in pthread_create /opt/yb-build/llvm/yb-llvm-v16.0.3-yb-1-1683786200-c8b432af-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:208:3
ts1|pid21481|:19588 #1 0x7f76dd1d83d2 in yb::Thread::StartThread(string const&, string const&, std::function<void ()>, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:763:15
ts1|pid21481|:19588 #2 0x7f76dea9abab in yb::Status yb::Thread::Create<void (yb::rpc::Reactor::*)(), yb::rpc::Reactor*>(string const&, string const&, void (yb::rpc::Reactor::* const&)(), yb::rpc::Reactor* const&, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.h:165:12
ts1|pid21481|:19588 #3 0x7f76dea9abab in yb::rpc::Reactor::Init() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:274:10
ts1|pid21481|:19588 #4 0x7f76dea39881 in yb::rpc::Messenger::Init(yb::rpc::MessengerBuilder const&) ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:616:5
ts1|pid21481|:19588 #5 0x7f76dea390d5 in yb::rpc::MessengerBuilder::Build() ${BUILD_ROOT}/../../src/yb/rpc/messenger.cc:154:3
ts1|pid21481|:19588 #6 0x7f76e1b9423a in yb::server::RpcServerBase::Init() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:303:16
ts1|pid21481|:19588 #7 0x7f76e1b9bf3d in yb::server::RpcAndWebServerBase::Init() ${BUILD_ROOT}/../../src/yb/server/server_base.cc:514:3
ts1|pid21481|:19588 #8 0x7f76ebc67e6b in yb::tserver::DbServerBase::Init() ${BUILD_ROOT}/../../src/yb/tserver/db_server_base.cc:47:3
ts1|pid21481|:19588 #9 0x7f76ebe91776 in yb::tserver::TabletServer::Init() ${BUILD_ROOT}/../../src/yb/tserver/tablet_server.cc:383:3
ts1|pid21481|:19588 #10 0x7f76ec73972b in yb::tserver::TabletServerMain(int, char**) ${BUILD_ROOT}/../../src/yb/tserver/tablet_server_main_impl.cc:208:3
ts1|pid21481|:19588 #11 0x7f76d7ca7d84 in __libc_start_main (/lib64/libc.so.6+0x3ad84) (BuildId: 6d1dc58340cb6c575073da1e2efb8ac2a3cadc23)
ts1|pid21481|:19588
ts1|pid21481|:19588 Thread T45 (rpc_tp_TabletSe) created by T26 (TabletServer_re) here:
ts1|pid21481|:19588 #0 0x562533f1c35a in pthread_create /opt/yb-build/llvm/yb-llvm-v16.0.3-yb-1-1683786200-c8b432af-almalinux8-x86_64-build/src/llvm-project/compiler-rt/lib/asan/asan_interceptors.cpp:208:3
ts1|pid21481|:19588 #1 0x7f76dd1d83d2 in yb::Thread::StartThread(string const&, string const&, std::function<void ()>, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:763:15
ts1|pid21481|:19588 #2 0x7f76deba8889 in yb::Status yb::Thread::Create<void (yb::rpc::(anonymous namespace)::Worker::*)(), yb::rpc::(anonymous namespace)::Worker*>(string const&, string const&, void (yb::rpc::(anonymous namespace)::Worker::* const&)(), yb::rpc::(anonymous namespace)::Worker* const&, scoped_refptr<yb::Thread>*) ${BUILD_ROOT}/../../src/yb/util/thread.h:165:12
ts1|pid21481|:19588 #3 0x7f76deba8889 in yb::rpc::(anonymous namespace)::Worker::Start(unsigned long) ${BUILD_ROOT}/../../src/yb/rpc/thread_pool.cc:61:12
ts1|pid21481|:19588 #4 0x7f76deba8889 in yb::rpc::ThreadPool::Impl::Enqueue(yb::rpc::ThreadPoolTask*) ${BUILD_ROOT}/../../src/yb/rpc/thread_pool.cc:201:35
ts1|pid21481|:19588 #5 0x7f76dea69dc2 in yb::rpc::OutboundCall::InvokeCallback() ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:338:28
ts1|pid21481|:19588 #6 0x7f76dea6aa98 in yb::rpc::OutboundCall::SetResponse(yb::rpc::CallResponse&&) ${BUILD_ROOT}/../../src/yb/rpc/outbound_call.cc:402:7
ts1|pid21481|:19588 #7 0x7f76de9f630b in yb::rpc::Connection::HandleCallResponse(yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:357:9
ts1|pid21481|:19588 #8 0x7f76debb9258 in yb::rpc::YBOutboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:447:22
ts1|pid21481|:19588 #9 0x7f76debb9258 in non-virtual thunk to yb::rpc::YBOutboundConnectionContext::HandleCall(std::shared_ptr<yb::rpc::Connection> const&, yb::rpc::CallData*) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc
ts1|pid21481|:19588 #10 0x7f76de9cb9bc in yb::rpc::BinaryCallParser::Parse(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>, std::shared_ptr<yb::MemTracker> const*) ${BUILD_ROOT}/../../src/yb/rpc/binary_call_parser.cc:167:7
ts1|pid21481|:19588 #11 0x7f76debb9b27 in yb::rpc::YBOutboundConnectionContext::ProcessCalls(std::shared_ptr<yb::rpc::Connection> const&, boost::container::small_vector<iovec, 4ul, void, void> const&, yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/yb_rpc.cc:468:19
ts1|pid21481|:19588 #12 0x7f76de9f4935 in yb::rpc::Connection::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/connection.cc:317:27
ts1|pid21481|:19588 #13 0x7f76dead5b03 in yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc
ts1|pid21481|:19588 #14 0x7f76dead7de1 in non-virtual thunk to yb::rpc::RefinedStream::ProcessReceived(yb::StronglyTypedBool<yb::rpc::ReadBufferFull_Tag>) ${BUILD_ROOT}/../../src/yb/rpc/refined_stream.cc
ts1|pid21481|:19588 #15 0x7f76deb9947e in yb::rpc::TcpStream::TryProcessReceived() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:408:17
ts1|pid21481|:19588 #16 0x7f76deb95fb3 in yb::rpc::TcpStream::ReadHandler() ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:334:31
ts1|pid21481|:19588 #17 0x7f76deb94a44 in yb::rpc::TcpStream::Handler(ev::io&, int) ${BUILD_ROOT}/../../src/yb/rpc/tcp_stream.cc:276:14
ts1|pid21481|:19588 #18 0x7f76dc4936ca in ev_invoke_pending (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x86ca)
ts1|pid21481|:19588 #19 0x7f76dc4943c6 in ev_run (/opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/lib/libev.so.4+0x93c6)
ts1|pid21481|:19588 #20 0x7f76dea9b4fc in ev::loop_ref::run(int) /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/common/include/ev++.h:211:7
ts1|pid21481|:19588 #21 0x7f76dea9b4fc in yb::rpc::Reactor::RunThread() ${BUILD_ROOT}/../../src/yb/rpc/reactor.cc:630:9
ts1|pid21481|:19588 #22 0x7f76dd1db590 in std::__function::__value_func<void ()>::operator()[abi:v160003]() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:510:16
ts1|pid21481|:19588 #23 0x7f76dd1db590 in std::function<void ()>::operator()() const /opt/yb-build/thirdparty/yugabyte-db-thirdparty-v20230519215509-04b5c61ec3-almalinux8-x86_64-clang16/installed/asan/libcxx/include/c++/v1/__functional/function.h:1156:12
ts1|pid21481|:19588 #24 0x7f76dd1db590 in yb::Thread::SuperviseThread(void*) ${BUILD_ROOT}/../../src/yb/util/thread.cc:842:3
ts1|pid21481|:19588 #25 0x7f76d82521c9 in start_thread (/lib64/libpthread.so.0+0x81c9) (BuildId: c46c0e44b55ff27501f607770ed2ae993fe0b823)
ts1|pid21481|:19588
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6936]: https://yugabyte.atlassian.net/browse/DB-6936?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | potential race on remotetablet replicas between remotetablet getremotetabletservers and remotetablet refresh jira link description test org yb loadtester testclusterwithhighloadandslowsync testclusterfullmovewithhighloadandslowsync analyze trends error java lang exception operation timed out after jenkins run corresponding asan issue container overflow on the replicas vector error addresssanitizer container overflow on address at pc bp sp write of size at thread tabletserver re tablet metadata cc t stock ticker raw tablet p peer id successfully destroyed provisional records db at test tmpdir ts yb data tserver data rocksdb table tablet stock ticker raw tablet intents log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync ts tablet manager cc t stock ticker raw tablet p peer id tablet deleted last logged opid log cc t stock ticker raw tablet p peer id deleting wal dir test tmpdir ts yb data tserver wals table tablet stock ticker raw tablet tablet bootstrap if cc t stock ticker raw tablet p peer id deleted tablet blocks from disk ts tablet manager cc unregister data wal directory assignment map for table and tablet stock ticker raw tablet ts tablet manager cc deleted transition in progress deleting tablet for tablet stock ticker raw tablet log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw 
tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync cluster balance cc removing replica peer id from tablet stock ticker raw tablet cluster balance cc skipping add replicas for operation failed try again yb master cluster balance cc cannot add replicas currently have a total overreplication of when max allowed is overreplicated tablets stock ticker raw tablet cluster balance cc skipping add replicas for operation failed try again yb master cluster balance cc cannot add replicas currently have a total overreplication of when max allowed is overreplicated tablets stock ticker tablet raft consensus cc received changeconfig request tablet id stock ticker raw tablet type remove server server permanent uuid peer id dest uuid peer id cas config opid index raft consensus cc t stock ticker raw tablet p peer id setting replicate pending config peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone type remove server consensus meta cc t stock ticker raw tablet p peer id updating active role from leader to leader consensus state current term leader uuid peer id config peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id 
member type voter last known private addr host port cloud info placement cloud placement region placement zone has pending config consensus peers cc t stock ticker raw tablet p peer id peer peer id closing peer consensus queue cc t stock ticker raw tablet p peer id queue going to leader mode state all replicated op majority replicated op committed index last applied last appended current term majority size state queue open mode leader active raft config peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone in yb client internal remotetablet getremotetabletservers std vector yb stronglytypedbool build root src yb client meta cache cc in yb client internal tabletinvoker selecttabletserver build root src yb client tablet rpc cc in yb client internal tabletinvoker execute string const bool build root src yb client tablet rpc cc in yb client internal asyncrpc sendrpc build root src yb client async rpc cc in yb rpc rpcretrier doretry yb rpc rpccommand yb status const build root src yb rpc rpc cc in boost operator yb status const const opt yb build thirdparty yugabyte db thirdparty installed asan include boost function function template hpp in yb rpc delayedtask timerhandler ev timer int build root src yb rpc delayed task cc in ev invoke pending opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev run opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev loop ref run int opt yb build thirdparty yugabyte db thirdparty installed common include ev h in yb rpc reactor runthread build root src yb rpc reactor cc in std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build 
thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in yb thread supervisethread void build root src yb util thread cc in start thread libpthread so buildid in clone libc so buildid is located bytes inside of byte region allocated by thread rpc tp tabletse here in operator new unsigned long opt yb build llvm yb llvm yb build src llvm project compiler rt lib asan asan new delete cpp in void std libcpp operator new unsigned long opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c new in std libcpp allocate unsigned long unsigned long opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c new in std allocator allocate unsigned long opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c memory allocator h in std allocation result pointer std allocate at least std allocator unsigned long opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c memory allocate at least h in std split buffer split buffer unsigned long unsigned long std allocator opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c split buffer in void std vector emplace back slow path yb client internal remotetabletserver yb peerrole opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c vector in yb client internal remotereplica std vector emplace back yb client internal remotetabletserver yb peerrole opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c vector in yb client internal remotetablet refresh std unordered map std hash std equal to std allocator const google protobuf repeatedptrfield const build root src yb client meta cache cc in yb client internal metacache processtabletlocation yb master tabletlocationspb const std unordered map std hash std equal to std allocator std hash std equal to std allocator std hash std equal to std allocator boost optional const yb client internal lookuprpc 
build root src yb client meta cache cc in yb client internal metacache processtabletlocations google protobuf repeatedptrfield const boost optional yb client internal lookuprpc build root src yb client meta cache cc in yb client internal lookupbykeyrpc processtabletlocations google protobuf repeatedptrfield const boost optional build root src yb client meta cache cc in void yb client internal lookuprpc doprocessresponse yb status const yb master gettablelocationsresponsepb const build root src yb client meta cache cc in yb client internal clientmasterrpcbase finished yb status const build root src yb client client master rpc cc in decltype std declval std declval std declval std invoke void yb client internal clientmasterrpcbase yb status const yb client internal lookupbykeyrpc yb status ok opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional invoke h in std bind return std tuple is valid bind return std tuple value type std apply functor std tuple void yb client internal clientmasterrpcbase yb status const std tuple std tuple indices std tuple opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional bind h in std bind return std tuple is valid bind return std tuple value type std bind operator opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional bind h in decltype std declval std invoke std bind opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional invoke h in void std invoke void return wrapper call std bind opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional invoke h in std function alloc func std allocator void operator opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function func std allocator void operator opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function 
value func operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in yb rpc outboundcall invokecallbacksync build root src yb rpc outbound call cc in yb rpc invokecallbacktask run build root src yb rpc outbound call cc in yb rpc anonymous namespace worker execute build root src yb rpc thread pool cc in std function value func operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in yb thread supervisethread void build root src yb util thread cc in start thread libpthread so buildid thread tabletserver re created by here log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync catalog manager cc stale heartbeat for tablet stock ticker raw tablet table stock ticker raw on ts peer id cstate current term config opid index peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone prev cstate current term leader uuid peer id config opid index peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer 
id member type voter last known private addr host port cloud info placement cloud placement region placement zone peers permanent uuid peer id member type pre voter last known private addr host port cloud info placement cloud placement region placement zone log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync log cc t stock ticker raw tablet p peer id injecting of latency in log sync in pthread create opt yb build llvm yb llvm yb build src llvm project compiler rt lib asan asan interceptors cpp in yb thread startthread string const string const std function scoped refptr build root src yb util thread cc in yb status yb thread create string const string const void yb rpc reactor const yb rpc reactor const scoped refptr build root src yb util thread h in yb rpc reactor init build root src yb rpc reactor cc in yb rpc messenger init yb rpc messengerbuilder const build root src yb rpc messenger cc in yb rpc messengerbuilder build build root src yb rpc messenger cc in yb server rpcserverbase init build root src yb server server base cc in yb server rpcandwebserverbase init build root src yb server server base cc in yb tserver dbserverbase init build root src yb tserver db server base cc in yb tserver tabletserver init build root src yb tserver tablet server cc in yb tserver tabletservermain int char build root src yb tserver tablet server main impl cc in libc start main libc so buildid thread rpc tp tabletse created by tabletserver re here in pthread create opt yb build llvm yb llvm yb build src llvm project compiler rt lib asan asan interceptors cpp in yb thread startthread string const string const std function scoped refptr build root src yb util thread cc in yb status yb thread create string const string const void yb rpc anonymous namespace worker const yb rpc anonymous namespace worker const 
scoped refptr build root src yb util thread h in yb rpc anonymous namespace worker start unsigned long build root src yb rpc thread pool cc in yb rpc threadpool impl enqueue yb rpc threadpooltask build root src yb rpc thread pool cc in yb rpc outboundcall invokecallback build root src yb rpc outbound call cc in yb rpc outboundcall setresponse yb rpc callresponse build root src yb rpc outbound call cc in yb rpc connection handlecallresponse yb rpc calldata build root src yb rpc connection cc in yb rpc yboutboundconnectioncontext handlecall std shared ptr const yb rpc calldata build root src yb rpc yb rpc cc in non virtual thunk to yb rpc yboutboundconnectioncontext handlecall std shared ptr const yb rpc calldata build root src yb rpc yb rpc cc in yb rpc binarycallparser parse std shared ptr const boost container small vector const yb stronglytypedbool std shared ptr const build root src yb rpc binary call parser cc in yb rpc yboutboundconnectioncontext processcalls std shared ptr const boost container small vector const yb stronglytypedbool build root src yb rpc yb rpc cc in yb rpc connection processreceived yb stronglytypedbool build root src yb rpc connection cc in yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in non virtual thunk to yb rpc refinedstream processreceived yb stronglytypedbool build root src yb rpc refined stream cc in yb rpc tcpstream tryprocessreceived build root src yb rpc tcp stream cc in yb rpc tcpstream readhandler build root src yb rpc tcp stream cc in yb rpc tcpstream handler ev io int build root src yb rpc tcp stream cc in ev invoke pending opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev run opt yb build thirdparty yugabyte db thirdparty installed common lib libev so in ev loop ref run int opt yb build thirdparty yugabyte db thirdparty installed common include ev h in yb rpc reactor runthread build root src yb rpc reactor cc in std function value func 
operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in std function operator const opt yb build thirdparty yugabyte db thirdparty installed asan libcxx include c functional function h in yb thread supervisethread void build root src yb util thread cc in start thread libpthread so buildid warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
544,979 | 15,933,180,123 | IssuesEvent | 2021-04-14 07:05:13 | GreenDelta/Sophena | https://api.github.com/repos/GreenDelta/Sophena | opened | Sorting of the heat generators | bug medium priority | The generation plants are not sorted numerically by rank, but alphabetically by name. This only becomes visible when more than 9 generators are used. Then ranks 10, 11, 12, etc. appear before ranks 2, 3, 4… | 1.0 | Sorting of the heat generators - The generation plants are not sorted numerically by rank, but alphabetically by name. This only becomes visible when more than 9 generators are used. Then ranks 10, 11, 12, etc. appear before ranks 2, 3, 4… | priority | sorting of the heat generators the generation plants are not sorted numerically by rank but alphabetically by name this only becomes visible when more than generators are used then ranks etc appear before ranks … | 1 |
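The Sophena bug above is the classic lexicographic-versus-numeric comparison mistake: when generators are ordered by display name, "10" sorts before "2" because the strings are compared character by character. Sophena itself is a Java application; this standalone Go sketch merely reproduces the failure mode and shows the fix of comparing the rank as a number:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

type producer struct {
	name string
	rank int
}

// sortByRank orders producers numerically by rank, which is the
// behaviour the issue asks for, instead of alphabetically by display
// name, where "Erzeuger 10" sorts before "Erzeuger 2".
func sortByRank(ps []producer) {
	sort.Slice(ps, func(i, j int) bool { return ps[i].rank < ps[j].rank })
}

func main() {
	var ps []producer
	for _, r := range []int{10, 11, 12, 2, 3, 4} {
		ps = append(ps, producer{name: "Erzeuger " + strconv.Itoa(r), rank: r})
	}

	// Alphabetical sort reproduces the bug: ranks 10, 11, 12 come first.
	sort.Slice(ps, func(i, j int) bool { return ps[i].name < ps[j].name })
	fmt.Println(ps[0].rank) // 10, the wrong order

	sortByRank(ps)
	fmt.Println(ps[0].rank) // 2, the fixed order
}
```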
413,795 | 12,092,270,256 | IssuesEvent | 2020-04-19 15:01:36 | ClaudiaLapalme/pikaroute | https://api.github.com/repos/ClaudiaLapalme/pikaroute | closed | US-5 As a user, I want to click on the next event in my Google Calendar to obtain directions so that I can reach my next event easily | Medium Priority | If someone clicks on their next event, the application will provide routes to go from their **current location** to the address saved in the event. **If there is no address in the event**, the application will prompt the user for a **destination address** and then it will generate the routes.
Requirements:
- [ ] The application should display suggested routes to go from the user's location to the destination in their event if the user clicks on the event
- [ ] The application should prompt the user for a destination if none is associated with the event
- [ ] After the prompt, the application should generate suggested routes


| 1.0 | US-5 As a user, I want to click on the next event in my Google Calendar to obtain directions so that I can reach my next event easily - If someone clicks on their next event, the application will provide routes to go from their **current location** to the address saved in the event. **If there is no address in the event**, the application will prompt the user for a **destination address** and then it will generate the routes.
Requirements:
- [ ] The application should display suggested routes to go from the user's location to the destination in their event if the user clicks on the event
- [ ] The application should prompt the user for a destination if none is associated with the event
- [ ] After the prompt, the application should generate suggested routes


| priority | us as a user i want to click on the next event in my google calendar to obtain directions so that i can reach my next event easily if someone clicks on their next event the application will provide routes to go from their current location to the address saved in the event if there is no address in the event the application will prompt the user for a destination address and then it will generate the routes requirements the application should display suggested routes to go from the user s location to the destination in their event if the user clicks on the event the application should prompt the user for a destination if none is associated with the event after the prompt the application should generate suggested routes | 1 |
89,932 | 3,807,025,842 | IssuesEvent | 2016-03-25 04:18:02 | TheValarProject/TheValarProjectWebsite | https://api.github.com/repos/TheValarProject/TheValarProjectWebsite | opened | Server status does not update automatically | bug priority-medium | The server status on the second right sidebar of the website does not update automatically. It should be changed based on a ping request to the server. | 1.0 | Server status does not update automatically - The server status on the second right sidebar of the website does not update automatically. It should be changed based on a ping request to the server. | priority | server status does not update automatically the server status on the second right sidebar of the website does not update automatically it should be changed based on a ping request to the server | 1 |
375,740 | 11,133,663,084 | IssuesEvent | 2019-12-20 09:54:23 | incognitochain/incognito-chain | https://api.github.com/repos/incognitochain/incognito-chain | closed | [Analytic] - Build analytic system for incognito | Priority: Medium Type: Maintenance | Collect data from incognito fullnode for reporting
Build on https://github.com/incognitochain/incognito-analytic | 1.0 | [Analytic] - Build analytic system for incognito - Collect data from incognito fullnode for reporting
Build on https://github.com/incognitochain/incognito-analytic | priority | build analytic system for incognito collect data from incognito fullnode for reporting build on | 1 |
150,542 | 5,774,888,854 | IssuesEvent | 2017-04-28 08:38:45 | minio/minio-go | https://api.github.com/repos/minio/minio-go | closed | Support new API GetObjectPartial with preconditions | priority: medium | Reference: https://github.com/minio/minio/issues/3521
We need the following API in minio-go to fix the race explained in the issue above:
```
GetObject(bucket, object string, startRange, length int, preconditions map[string]string, writer io.Writer) (err error) {
}
```
| 1.0 | Support new API GetObjectPartial with preconditions - Reference: https://github.com/minio/minio/issues/3521
We need the following API in minio-go to fix the race explained in the issue above:
```
GetObject(bucket, object string, startRange, length int, preconditions map[string]string, writer io.Writer) (err error) {
}
```
| priority | support new api getobjectpartial with preconditions reference we need the following api in minio go to fix the race explained in the issue above getobject bucket object string startrange length int preconditions map string writer io writer err error | 1 |
86,261 | 3,704,395,261 | IssuesEvent | 2016-02-29 23:59:22 | SpeedCurve-Metrics/SpeedCurve | https://api.github.com/repos/SpeedCurve-Metrics/SpeedCurve | closed | [Benchmark] Filmstrip not refreshed when switching between templates | priority medium status accepted type bug | In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :).
<img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
| 1.0 | [Benchmark] Filmstrip not refreshed when switching between templates - In the "Benchmark" section, when switching between templates, the filmstrip view is greyed and not refreshed. Reloading the page refreshes the filmstrip but is annoying :).
<img width="1271" alt="screen shot 2015-12-04 at 10 46 28 am" src="https://cloud.githubusercontent.com/assets/2169585/11586580/655f581a-9a74-11e5-8d87-7ab1a7e3bfad.png">
| priority | filmstrip not refreshed when switching between templates in the benchmark section when switching between templates the filmstrip view is greyed and not refreshed reloading the page refreshes the filmstrip but is annoying img width alt screen shot at am src | 1 |
701,157 | 24,088,452,595 | IssuesEvent | 2022-09-19 13:00:49 | BurnedLand/BurnedLand-Report | https://api.github.com/repos/BurnedLand/BurnedLand-Report | closed | NPC Anzu | Medium Priority | Rajah, ally, Night elf
NPC Anzu
https://www.wowhead.com/npc=23035/anzu
controllare solo item loot, l'npc funziona correttamente adesso
| 1.0 | NPC Anzu - Rajah, ally, Night elf
NPC Anzu
https://www.wowhead.com/npc=23035/anzu
controllare solo item loot, l'npc funziona correttamente adesso
| priority | npc anzu rajah ally night elf npc anzu controllare solo item loot l npc funziona correttamente adesso | 1 |
641,869 | 20,842,425,289 | IssuesEvent | 2022-03-21 03:02:44 | ml4ai/tomcat | https://api.github.com/repos/ml4ai/tomcat | closed | Provide Internet access to Windows machine | enhancement Priority: Medium | @CalebUAz provided the [document](https://docs.google.com/document/d/17iDVugiepXJbUjFuFjx_7Ra3zQfbwS8rCXX_kFfhvDY/edit) detailing how we were able to provide Internet access to the Windows computer | 1.0 | Provide Internet access to Windows machine - @CalebUAz provided the [document](https://docs.google.com/document/d/17iDVugiepXJbUjFuFjx_7Ra3zQfbwS8rCXX_kFfhvDY/edit) detailing how we were able to provide Internet access to the Windows computer | priority | provide internet access to windows machine calebuaz provided the detailing how we were able to provide internet access to the windows computer | 1 |
210,621 | 7,191,530,603 | IssuesEvent | 2018-02-02 21:24:36 | OpenTransitTools/trimet-mod-pelias | https://api.github.com/repos/OpenTransitTools/trimet-mod-pelias | opened | CONSULTING: Populating Pelias in Production | medium priority question | TriMet could learn more from what MapZen did to keep the system both up and up-to-date. | 1.0 | CONSULTING: Populating Pelias in Production - TriMet could learn more from what MapZen did to keep the system both up and up-to-date. | priority | consulting populating pelias in production trimet could learn more from what mapzen did to keep the system both up and up to date | 1 |
353,117 | 10,549,179,888 | IssuesEvent | 2019-10-03 08:06:23 | AY1920S1-CS2113T-T12-1/main | https://api.github.com/repos/AY1920S1-CS2113T-T12-1/main | opened | As a user I want to add seats from different performances to a customer's purchase | priority.Medium type.Story | so that I can manage bookings across multiple performances in one transaction. | 1.0 | As a user I want to add seats from different performances to a customer's purchase - so that I can manage bookings across multiple performances in one transaction. | priority | as a user i want to add seats from different performances to a customer s purchase so that i can manage bookings across multiple performances in one transaction | 1 |
364,114 | 10,759,070,058 | IssuesEvent | 2019-10-31 15:59:51 | AY1920S1-CS2103T-F14-1/main | https://api.github.com/repos/AY1920S1-CS2103T-F14-1/main | closed | Tabs | priority.Medium severity.Medium type.Enhancement | - [x] Create tabs system
- [x] Create tab for home page
- [x] Create tab for viewing questions
- [x] Create tab for attempting questions | 1.0 | Tabs - - [x] Create tabs system
- [x] Create tab for home page
- [x] Create tab for viewing questions
- [x] Create tab for attempting questions | priority | tabs create tabs system create tab for home page create tab for viewing questions create tab for attempting questions | 1 |
26,144 | 2,684,194,830 | IssuesEvent | 2015-03-28 19:01:02 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Ctrl-C does not kill process on cygwin/zsh console | 2–5 stars bug imported Priority-Medium | _From [jmugur...@gmail.com](https://code.google.com/u/103466037919225061032/) on October 06, 2012 06:00:50_
OS version:Win7SP1 x64 ConEmu version: ConEmuPack.120916 *Bug description* I run java process, or 'sleep 1000' and Ctrl-C does not kill it in cygwin/zsh. Ctrl-c works in other consoles like pycmd *Steps to reproduction* 1. run 'sleep 1000' on cygwin/zsh console
2. Ctrl-C
3. process does not die
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=747_ | 1.0 | Ctrl-C does not kill process on cygwin/zsh console - _From [jmugur...@gmail.com](https://code.google.com/u/103466037919225061032/) on October 06, 2012 06:00:50_
OS version:Win7SP1 x64 ConEmu version: ConEmuPack.120916 *Bug description* I run java process, or 'sleep 1000' and Ctrl-C does not kill it in cygwin/zsh. Ctrl-c works in other consoles like pycmd *Steps to reproduction* 1. run 'sleep 1000' on cygwin/zsh console
2. Ctrl-C
3. process does not die
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=747_ | priority | ctrl c does not kill process on cygwin zsh console from on october os version conemu version conemupack bug description i run java process or sleep and ctrl c does not kill it in cygwin zsh ctrl c works in other consoles like pycmd steps to reproduction run sleep on cygwin zsh console ctrl c process does not die original issue | 1 |
597,766 | 18,171,255,900 | IssuesEvent | 2021-09-27 20:19:33 | CanberraOceanRacingClub/namadgi3 | https://api.github.com/repos/CanberraOceanRacingClub/namadgi3 | closed | Water maker leak | priority 2: Medium | Peter Lucey reports:
There had been a leak under the starboard aft bunk for some time. We had an air lock in the pump. There was no flow so the discharge line from the pump was removed and air lock removed. No leak evident since reassembly.
The badly corroded wing nut on the filter could not be removed to clear the filter.
@bullswool will be interested | 1.0 | Water maker leak - Peter Lucey reports:
There had been a leak under the starboard aft bunk for some time. We had an air lock in the pump. There was no flow so the discharge line from the pump was removed and air lock removed. No leak evident since reassembly.
The badly corroded wing nut on the filter could not be removed to clear the filter.
@bullswool will be interested | priority | water maker leak peter lucey reports there had been a leak under the starboard aft bunk for some time we had an air lock in the pump there was no flow so the discharge line from the pump was removed and air lock removed no leak evident since reassembly the badly corroded wing nut on the filter could not be removed to clear the filter bullswool will be interested | 1 |
416,385 | 12,145,557,200 | IssuesEvent | 2020-04-24 09:32:01 | AbsaOSS/enceladus | https://api.github.com/repos/AbsaOSS/enceladus | opened | Menas User Flows Improvements I | Epic Menas UX priority: medium under discussion | ## Background
Some of the UI flows in Menas needs improvments to make easier use, less clicks and to help avoiding miss-configuration.
| 1.0 | Menas User Flows Improvements I - ## Background
Some of the UI flows in Menas needs improvments to make easier use, less clicks and to help avoiding miss-configuration.
| priority | menas user flows improvements i background some of the ui flows in menas needs improvments to make easier use less clicks and to help avoiding miss configuration | 1 |
375,364 | 11,103,221,432 | IssuesEvent | 2019-12-17 03:00:05 | grimeyg/wheel-of-fortune | https://api.github.com/repos/grimeyg/wheel-of-fortune | closed | Create Round class basic structure | Functionality Iteration 0 Priority: Medium | Should have:
- currentPuzzle property ( passed in as property)
- trashLetters (an array of letters)
- constantsAvailable (an array of letters)
- vowelsBought (an array of letters)
- vowelsAvailable (an array of letters)
We will probably ultimately have a take turn method that will handle turn details such as the guess, not sure if we need a separate Turn class?
| 1.0 | Create Round class basic structure - Should have:
- currentPuzzle property ( passed in as property)
- trashLetters (an array of letters)
- constantsAvailable (an array of letters)
- vowelsBought (an array of letters)
- vowelsAvailable (an array of letters)
We will probably ultimately have a take turn method that will handle turn details such as the guess, not sure if we need a separate Turn class?
| priority | create round class basic structure should have currentpuzzle property passed in as property trashletters an array of letters constantsavailable an array of letters vowelsbought an array of letters vowelsavailable an array of letters we will probably ultimately have a take turn method that will handle turn details such as the guess not sure if we need a separate turn class | 1 |
658,996 | 21,914,373,877 | IssuesEvent | 2022-05-21 15:24:38 | ApplETS/Notre-Dame | https://api.github.com/repos/ApplETS/Notre-Dame | opened | Notification for important date. | enhancement platform: ios platform: android :stop_sign: blocked :stop_sign: feature: notifications priority: medium | **Is your feature request related to a problem? Please describe.**
This feature request is linked to #204
**Describe the solution you'd like**
Notify the user 1 day before the important date.
| 1.0 | Notification for important date. - **Is your feature request related to a problem? Please describe.**
This feature request is linked to #204
**Describe the solution you'd like**
Notify the user 1 day before the important date.
| priority | notification for important date is your feature request related to a problem please describe this feature request is linked to describe the solution you d like notify the user day before the important date | 1 |
270,148 | 8,452,852,160 | IssuesEvent | 2018-10-20 09:13:45 | EUCweb/BIS-F | https://api.github.com/repos/EUCweb/BIS-F | closed | Citrix AppLayering - Create C:\Windows\Logs folder automatically if it doesn't exist | Priority: Medium Status: Review Needed Type: Optimization | From @FangLudi
Citrix App Layering will delete C:\Windows\Logs folder when you finalize the layer and this folder won't be created immediately (or you can wait a few hours to let this folder automatically created by OS) when you start up the packaging machine or target device next time, however, BIS-F personalization script needs this folder to generate log files when the server starts up. Even if you relocate the BIS-F log folder to another location such as Write Cache drive, it still needs to create temp log file in this folder. So please add a feature to automatically create C:\Windows\Logs folder if it does not exist.
https://support.citrix.com/article/CTX236075
| 1.0 | Citrix AppLayering - Create C:\Windows\Logs folder automatically if it doesn't exist - From @FangLudi
Citrix App Layering will delete C:\Windows\Logs folder when you finalize the layer and this folder won't be created immediately (or you can wait a few hours to let this folder automatically created by OS) when you start up the packaging machine or target device next time, however, BIS-F personalization script needs this folder to generate log files when the server starts up. Even if you relocate the BIS-F log folder to another location such as Write Cache drive, it still needs to create temp log file in this folder. So please add a feature to automatically create C:\Windows\Logs folder if it does not exist.
https://support.citrix.com/article/CTX236075
| priority | citrix applayering create c windows logs folder automatically if it doesn t exist from fangludi citrix app layering will delete c windows logs folder when you finalize the layer and this folder won t be created immediately or you can wait a few hours to let this folder automatically created by os when you start up the packaging machine or target device next time however bis f personalization script needs this folder to generate log files when the server starts up even if you relocate the bis f log folder to another location such as write cache drive it still needs to create temp log file in this folder so please add a feature to automatically create c windows logs folder if it does not exist | 1 |
761,010 | 26,663,275,841 | IssuesEvent | 2023-01-25 23:34:43 | clt313/SuperballVR | https://api.github.com/repos/clt313/SuperballVR | closed | Add controls/guide to main menu | priority: medium | To ensure players know how to play right away, I'm thinking we should add a UI panel to the left or right of the main menu panel to tell the player the control scheme.
Edit: we can also add a quick guide on how to play too! | 1.0 | Add controls/guide to main menu - To ensure players know how to play right away, I'm thinking we should add a UI panel to the left or right of the main menu panel to tell the player the control scheme.
Edit: we can also add a quick guide on how to play too! | priority | add controls guide to main menu to ensure players know how to play right away i m thinking we should add a ui panel to the left or right of the main menu panel to tell the player the control scheme edit we can also add a quick guide on how to play too | 1 |
435,678 | 12,539,328,594 | IssuesEvent | 2020-06-05 08:25:06 | naFila-pt/nafila | https://api.github.com/repos/naFila-pt/nafila | closed | Erase Data when password with less than 6 characters - register | Priority: Medium bug good first issue | When entering data, and after entering password with less than 6 characters, the inserted data is automatically erased. Needs a hint for minimum password size. | 1.0 | Erase Data when password with less than 6 characters - register - When entering data, and after entering password with less than 6 characters, the inserted data is automatically erased. Needs a hint for minimum password size. | priority | erase data when password with less than characters register when entering data and after entering password with less than characters the inserted data is automatically erased needs a hint for minimum password size | 1 |
318,553 | 9,694,151,400 | IssuesEvent | 2019-05-24 18:06:46 | richelbilderbeek/djog_unos_2018 | https://api.github.com/repos/richelbilderbeek/djog_unos_2018 | closed | Game design: agents should not fall off tiles yes or no? | medium priority | **Is your feature request related to a problem? Please describe.**
Currently, agents fall off tiles, for example, this poor crocodile:

**Describe the solution you'd like**
Actually, I think this a feature, not a bug: one needs to care for the creatures now!
@Joshua260403: what do you think: is this a feature (agents fall off), or a bug?
* If agents should fall off tiles, this Issue can be closed
* If agents should not fall off tiles, let us know :1st_place_medal:
**Describe alternatives you've considered**
None.
**Additional context**
Earlier Joshua decided to let agents not fall off. But now the movement is different, so perhaps he does like the current behavior now. | 1.0 | Game design: agents should not fall off tiles yes or no? - **Is your feature request related to a problem? Please describe.**
Currently, agents fall off tiles, for example, this poor crocodile:

**Describe the solution you'd like**
Actually, I think this a feature, not a bug: one needs to care for the creatures now!
@Joshua260403: what do you think: is this a feature (agents fall off), or a bug?
* If agents should fall off tiles, this Issue can be closed
* If agents should not fall off tiles, let us know :1st_place_medal:
**Describe alternatives you've considered**
None.
**Additional context**
Earlier Joshua decided to let agents not fall off. But now the movement is different, so perhaps he does like the current behavior now. | priority | game design agents should not fall off tiles yes or no is your feature request related to a problem please describe currently agents fall off tiles for example this poor crocodile describe the solution you d like actually i think this a feature not a bug one needs to care for the creatures now what do you think is this a feature agents fall off or a bug if agents should fall off tiles this issue can be closed if agents should not fall off tiles let us know place medal describe alternatives you ve considered none additional context earlier joshua decided to let agents not fall off but now the movement is different so perhaps he does like the current behavior now | 1 |
188,357 | 6,775,601,757 | IssuesEvent | 2017-10-27 14:49:04 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | The new fulllscreen doesn't work inside an iframe | bug-type: broken use case priority: medium problem: bug | The fullscreen functionality is not working from inside an iframe.
`iframe` needs `allowfullscreen`. | 1.0 | The new fulllscreen doesn't work inside an iframe - The fullscreen functionality is not working from inside an iframe.
`iframe` needs `allowfullscreen`. | priority | the new fulllscreen doesn t work inside an iframe the fullscreen functionality is not working from inside an iframe iframe needs allowfullscreen | 1 |
387,202 | 11,457,843,059 | IssuesEvent | 2020-02-07 01:08:11 | diegobarros0701/noge | https://api.github.com/repos/diegobarros0701/noge | closed | Get id column from model when generating relations | Priority: MEDIUM enhancement | ## The problem
In many cases the columns are `id` and `relation_name_id`, so there is no need to specify the columns names all the time, except if it is different from the default.
## How to solve
By dynamically getting the columns names.
Has many example:
```bash
noge model user --has-many user_project
```
That should do the following:
* Get the `join.from.column` by reading the `User` model and get his `idColumn`.
* Get the `join.to.column` column by converting the relation name to `user_project_id`.
Belongs to example:
```bash
noge model user_project --belongs-to user
```
That should do the following:
* Get the `join.from` column by reading the `UserProject` model and get his `idColumn`.
* Get the `join.to` column by converting the relation name to `user_id`.
Many to many example:
```bash
noge model person --many-to-many person_movie
```
That should do the following:
* Get the `join.from.column` by reading the `Person` model and get his `idColumn`.
* Get the `join.through.from.column` by converting the name `person` from the relation `person_movie`to `person_id`
* Get the `join.through.to.column` by converting the name `movie` from the relation `person_movie` to `movie_id`
* Get the `join.to.column` by reading the `Movie` model and get his `idColumn`.
## What to not do to
* Do not create the relation `user_project` if it not exists, except if the user pass the `--create-relations` option
## Options to add
* `--create-relations` - this will create the relations model if not exists | 1.0 | Get id column from model when generating relations - ## The problem
In many cases the columns are `id` and `relation_name_id`, so there is no need to specify the columns names all the time, except if it is different from the default.
## How to solve
By dynamically getting the columns names.
Has many example:
```bash
noge model user --has-many user_project
```
That should do the following:
* Get the `join.from.column` by reading the `User` model and get his `idColumn`.
* Get the `join.to.column` column by converting the relation name to `user_project_id`.
Belongs to example:
```bash
noge model user_project --belongs-to user
```
That should do the following:
* Get the `join.from` column by reading the `UserProject` model and get his `idColumn`.
* Get the `join.to` column by converting the relation name to `user_id`.
Many to many example:
```bash
noge model person --many-to-many person_movie
```
That should do the following:
* Get the `join.from.column` by reading the `Person` model and get his `idColumn`.
* Get the `join.through.from.column` by converting the name `person` from the relation `person_movie`to `person_id`
* Get the `join.through.to.column` by converting the name `movie` from the relation `person_movie` to `movie_id`
* Get the `join.to.column` by reading the `Movie` model and get his `idColumn`.
## What to not do to
* Do not create the relation `user_project` if it not exists, except if the user pass the `--create-relations` option
## Options to add
* `--create-relations` - this will create the relations model if not exists | priority | get id column from model when generating relations the problem in many cases the columns are id and relation name id so there is no need to specify the columns names all the time except if it is different from the default how to solve by dynamically getting the columns names has many example bash noge model user has many user project that should do the following get the join from column by reading the user model and get his idcolumn get the join to column column by converting the relation name to user project id belongs to example bash noge model user project belongs to user that should do the following get the join from column by reading the userproject model and get his idcolumn get the join to column by converting the relation name to user id many to many example bash noge model person many to many person movie that should do the following get the join from column by reading the person model and get his idcolumn get the join through from column by converting the name person from the relation person movie to person id get the join through to column by converting the name movie from the relation person movie to movie id get the join to column by reading the movie model and get his idcolumn what to not do to do not create the relation user project if it not exists except if the user pass the create relations option options to add create relations this will create the relations model if not exists | 1 |
589,354 | 17,695,130,437 | IssuesEvent | 2021-08-24 14:32:18 | teamforus/general | https://api.github.com/repos/teamforus/general | closed | CMS Updates and improvements | Priority: Must have Epic Scope: Too Big (should split) project-100 Urgency: Medium Impact: Significant WBSO-Forus VIA-2019 project-148 | ## Context
The platform currently has a basic CMS. The current functionality is inadequate. Issues are being opened with proposals for improving the CMS.
## Goal of this issue
- Have an overview of what needs to happen to the CMS in order to become adequate. (link CR's to this epic)
- Be able to discus general strategy of CMS; short term quickfixes vs a long term stragegy in the comments of this issue.
Figma: https://www.figma.com/file/TLxmLE6tw9YvigAPF1SPvp/?node-id=63%3A2386
Proposal: https://docs.google.com/document/d/14JLbaZfMjHHuVD12wISsf4Vp5KAAEFS1RDrhO9Dawic/edit | 1.0 | CMS Updates and improvements - ## Context
The platform currently has a basic CMS. The current functionality is inadequate. Issues are being opened with proposals for improving the CMS.
## Goal of this issue
- Have an overview of what needs to happen to the CMS in order to become adequate. (link CR's to this epic)
- Be able to discus general strategy of CMS; short term quickfixes vs a long term stragegy in the comments of this issue.
Figma: https://www.figma.com/file/TLxmLE6tw9YvigAPF1SPvp/?node-id=63%3A2386
Proposal: https://docs.google.com/document/d/14JLbaZfMjHHuVD12wISsf4Vp5KAAEFS1RDrhO9Dawic/edit | priority | cms updates and improvements context the platform currently has a basic cms the current functionality is inadequate issues are being opened with proposals for improving the cms goal of this issue have an overview of what needs to happen to the cms in order to become adequate link cr s to this epic be able to discus general strategy of cms short term quickfixes vs a long term stragegy in the comments of this issue figma proposal | 1 |
16,557 | 2,615,118,810 | IssuesEvent | 2015-03-01 05:44:33 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | @JsonString for numbers | auto-migrated Component-JSON Milestone-Version1.3.0 Priority-Medium Type-Enhancement | ```
External references, such as a standards document, or specification?
http://tools.ietf.org/html/rfc4627
Java environments (e.g. Java 6, Android 2.2, App Engine 1.3.7, or All)?
All.
Please describe the feature requested.
Currently we assume that 64-bit numbers are represented as Strings in the JSON
wire format. The reasoning was that JavaScript clients cannot handle 64-bit
numbers properly, so they prefer numbers be stored as strings. However, that
doesn't represent the most general use of JSON. For example, OAuth 2 uses a
64-bit JSON number for the "expires_in" field.
The problem is that then there is no way to parse 64-bit (and higher precision)
JSON numbers. Another problem: it is unintuitive that an int field is stored
as a JSON number but a long field is stored as a JSON string.
The proposal is to instead always assume Java numbers are stored as JSON
number. If someone wants to store a Java number as a JSON string instead they
must add a new @JsonString annotation.
For example, this parses a JSON number:
{"value" : 12345768901234576890123457689012345768901234576890}
class A {
@Key BigInteger value;
}
And this parses a JSON string:
{"value" : "12345768901234576890123457689012345768901234576890"}
class B {
@Key @JsonString BigInteger value;
}
If there is a mismatch between the declared Java field and the JSON value, the
parser will throw an IllegalArgumentException. This will happen for example if
one tries to use class A for the second example, or class B for the first
example. This ensures that serialization of the data class matches the
original parsed JSON data.
```
Original issue reported on code.google.com by `yan...@google.com` on 14 Feb 2011 at 4:19 | 1.0 | @JsonString for numbers - ```
External references, such as a standards document, or specification?
http://tools.ietf.org/html/rfc4627
Java environments (e.g. Java 6, Android 2.2, App Engine 1.3.7, or All)?
All.
Please describe the feature requested.
Currently we assume that 64-bit numbers are represented as Strings in the JSON
wire format. The reasoning was that JavaScript clients cannot handle 64-bit
numbers properly, so they prefer numbers be stored as strings. However, that
doesn't represent the most general use of JSON. For example, OAuth 2 uses a
64-bit JSON number for the "expires_in" field.
The problem is that then there is no way to parse 64-bit (and higher precision)
JSON numbers. Another problem: it is unintuitive that an int field is stored
as a JSON number but a long field is stored as a JSON string.
The proposal is to instead always assume Java numbers are stored as JSON
number. If someone wants to store a Java number as a JSON string instead they
must add a new @JsonString annotation.
For example, this parses a JSON number:
{"value" : 12345768901234576890123457689012345768901234576890}
class A {
@Key BigInteger value;
}
And this parses a JSON string:
{"value" : "12345768901234576890123457689012345768901234576890"}
class B {
@Key @JsonString BigInteger value;
}
If there is a mismatch between the declared Java field and the JSON value, the
parser will throw an IllegalArgumentException. This will happen for example if
one tries to use class A for the second example, or class B for the first
example. This ensures that serialization of the data class matches the
original parsed JSON data.
```
Original issue reported on code.google.com by `yan...@google.com` on 14 Feb 2011 at 4:19 | priority | jsonstring for numbers external references such as a standards document or specification java environments e g java android app engine or all all please describe the feature requested currently we assume that bit numbers are represented as strings in the json wire format the reasoning was that javascript clients cannot handle bit numbers properly so they prefer numbers be stored as strings however that doesn t represent the most general use of json for example oauth uses a bit json number for the expires in field the problem is that then there is no way to parse bit and higher precision json numbers another problem it is unintuitive that an int field is stored as a json number but a long field is stored as a json string the proposal is to instead always assume java numbers are stored as json number if someone wants to store a java number as a json string instead they must add a new jsonstring annotation for example this parses a json number value class a key biginteger value and this parses a json string value class b key jsonstring biginteger value if there is a mismatch between the declared java field and the json value the parser will throw an illegalargumentexception this will happen for example if one tries to use class a for the second example or class b for the first example this ensures that serialization of the data class matches the original parsed json data original issue reported on code google com by yan google com on feb at | 1 |
676,213 | 23,119,317,182 | IssuesEvent | 2022-07-27 19:41:39 | codbex/codbex-kronos | https://api.github.com/repos/codbex/codbex-kronos | opened | [CI/CD] Configure the nightly build to run on Windows | CI/CD priority-medium effort-medium | From xsk created by [vmutafov](https://github.com/vmutafov): SAP/xsk#458
Currently, the nightly build of the XSK runs on ubuntu only. We should run it on windows too in order to quickly find any regressions happening on the Windows OS.
There are some steps in the nightly build that should be changed depending on the OS. For example, some dependencies are being downloaded using Linux-specific package managers and some tools used in the GitHub action may not be available on Windows. | 1.0 | [CI/CD] Configure the nightly build to run on Windows - From xsk created by [vmutafov](https://github.com/vmutafov): SAP/xsk#458
Currently, the nightly build of the XSK runs on Ubuntu only. We should run it on Windows too in order to quickly find any regressions happening on the Windows OS.
There are some steps in the nightly build that should be changed depending on the OS. For example, some dependencies are being downloaded using Linux-specific package managers and some tools used in the GitHub action may not be available on Windows. | priority | configure the nightly build to run on windows from xsk created by sap xsk currently the nightly build of the xsk runs on ubuntu only we should run it on windows too in order to quickly find any regressions happening on the windows os there are some steps in the nightly build that should be changed depending on the os for example some dependencies are being downloaded using linux specific package managers and some tools used in the github action may not be available on windows | 1 |
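The issue above notes that some nightly-build steps must change per OS because dependencies come from Linux-specific package managers. A minimal sketch of OS-conditional step selection; the package names and manager choices here are placeholders, not the real XSK build steps:

```python
import sys

def install_command(platform: str = sys.platform) -> list:
    """Pick a package-manager command for the current OS.

    The package name is a placeholder, not a real XSK dependency.
    """
    if platform.startswith("linux"):
        return ["apt-get", "install", "-y", "some-dependency"]
    if platform.startswith("win"):
        # On Windows runners, Chocolatey is one common choice.
        return ["choco", "install", "-y", "some-dependency"]
    if platform == "darwin":
        return ["brew", "install", "some-dependency"]
    raise ValueError("unsupported platform: {}".format(platform))
```

A CI matrix job could call such a helper once per runner OS instead of hard-coding Linux commands.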
48,770 | 2,999,809,421 | IssuesEvent | 2015-07-23 20:59:04 | zhengj2007/BFO-test | https://api.github.com/repos/zhengj2007/BFO-test | closed | core classes lack declarations of disjointness | imported Priority-Medium Type-BFO2-Reference | _From [dosu...@gmail.com](https://code.google.com/u/102674886352087815907/) on June 23, 2011 06:42:44_
The core class hierarchy appears to lack declarations of disjointness entirely. Please could they be added wherever valid.
_Original issue: http://code.google.com/p/bfo/issues/detail?id=14_ | 1.0 | core classes lack declarations of disjointness - _From [dosu...@gmail.com](https://code.google.com/u/102674886352087815907/) on June 23, 2011 06:42:44_
The core class hierarchy appears to lack declarations of disjointness entirely. Please could they be added wherever valid.
_Original issue: http://code.google.com/p/bfo/issues/detail?id=14_ | priority | core classes lack declarations of disjointness from on june the core class hierarchy appears to lack declarations of disjointness entirely please could they be added wherever valid original issue | 1 |
246,579 | 7,895,405,483 | IssuesEvent | 2018-06-29 03:04:06 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Patch for VTK 8.0 | Expected Use: 3 - Occasional Feature Impact: 3 - Medium OS: All Priority: Normal Support Group: Any version: 2.12.3 | The following patch was sent visit users list for VTK 8.0.
From: Christoph Statz via visit-users [mailto:visit-users@elist.ornl.gov]
Sent: Thursday, July 20, 2017 12:18 AM
To: visit-users@elist.ornl.gov
Cc: Christoph Statz <christoph.statz@tu-dresden.de>
Subject: [visit-users] VisIt VTK 7.1 Upgrade -> VTK 8.0
Dear VisIt-Developers,
I'm not sure who is working on the VTK 7.1 Upgrade recently. I just wanted to mention the VTK 7.1 branch builds fine against VTK 8.0 (at least under OSX) except the "mesh plot".
The API of vtkOpenGLPolyDataMapper changed, the attached patch for the vtkOpenGLMeshPlotMapperHelper adapts to that change.
Index: vtkOpenGLMeshPlotMapper.C
===================================================================
--- vtkOpenGLMeshPlotMapper.C (revision 31263)
+++ vtkOpenGLMeshPlotMapper.C (working copy)
@@ -93,8 +93,8 @@
//-----------------------------------------------------------------------------
void vtkOpenGLMeshPlotMapperHelper::RenderPieceDraw(vtkRenderer *ren, vtkActor *actor) {
- int linesIC = this->Lines.IBO->IndexCount;
- int polysIC = this->Tris.IBO->IndexCount;
+ int linesIC = this->Primitives[PrimitiveLines].IBO->IndexCount;
+ int polysIC = this->Primitives[PrimitiveTris].IBO->IndexCount;
// draw surface first
if (this->Owner->GetUsePolys())
@@ -101,16 +101,16 @@
{
this->DrawingPolys = true;
this->DrawingLines = false;
- this->Lines.IBO->IndexCount = 0;
+ this->Primitives[PrimitiveLines].IBO->IndexCount = 0;
this->Superclass::RenderPieceDraw(ren, actor);
}
// draw lines second
this->DrawingPolys = false;
this->DrawingLines = true;
- this->Lines.IBO->IndexCount = linesIC;
- this->Tris.IBO->IndexCount = 0;
+ this->Primitives[PrimitiveLines].IBO->IndexCount = linesIC;
+ this->Primitives[PrimitiveTris].IBO->IndexCount = 0;
this->Superclass::RenderPieceDraw(ren, actor);
- this->Tris.IBO->IndexCount = polysIC;
+ this->Primitives[PrimitiveTris].IBO->IndexCount = polysIC;
}
//-------------------------------------------------------------------------
Mit freundlichen Grüßen,
Christoph Statz
--
Dipl.-Ing. Christoph Statz
Wissenschaftlicher Mitarbeiter
Technische Universität Dresden
Institut für Nachrichtentechnik
Lehrstuhl Hochfrequenztechnik
Helmholtzstr. 18
01062 Dresden
Tel: 0351 - 463 32287
Fax: 0351 - 463 37163
Email: christoph.statz@tu-dresden.de
Best regards,
Christoph
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 07/26/2017 02:30 pm
Original update: 09/14/2017 11:23 am
Ticket number: 2871 | 1.0 | Patch for VTK 8.0 - The following patch was sent to the visit-users list for VTK 8.0.
From: Christoph Statz via visit-users [mailto:visit-users@elist.ornl.gov]
Sent: Thursday, July 20, 2017 12:18 AM
To: visit-users@elist.ornl.gov
Cc: Christoph Statz <christoph.statz@tu-dresden.de>
Subject: [visit-users] VisIt VTK 7.1 Upgrade -> VTK 8.0
Dear VisIt-Developers,
I'm not sure who is working on the VTK 7.1 Upgrade recently. I just wanted to mention the VTK 7.1 branch builds fine against VTK 8.0 (at least under OSX) except the "mesh plot".
The API of vtkOpenGLPolyDataMapper changed, the attached patch for the vtkOpenGLMeshPlotMapperHelper adapts to that change.
Index: vtkOpenGLMeshPlotMapper.C
===================================================================
--- vtkOpenGLMeshPlotMapper.C (revision 31263)
+++ vtkOpenGLMeshPlotMapper.C (working copy)
@@ -93,8 +93,8 @@
//-----------------------------------------------------------------------------
void vtkOpenGLMeshPlotMapperHelper::RenderPieceDraw(vtkRenderer *ren, vtkActor *actor) {
- int linesIC = this->Lines.IBO->IndexCount;
- int polysIC = this->Tris.IBO->IndexCount;
+ int linesIC = this->Primitives[PrimitiveLines].IBO->IndexCount;
+ int polysIC = this->Primitives[PrimitiveTris].IBO->IndexCount;
// draw surface first
if (this->Owner->GetUsePolys())
@@ -101,16 +101,16 @@
{
this->DrawingPolys = true;
this->DrawingLines = false;
- this->Lines.IBO->IndexCount = 0;
+ this->Primitives[PrimitiveLines].IBO->IndexCount = 0;
this->Superclass::RenderPieceDraw(ren, actor);
}
// draw lines second
this->DrawingPolys = false;
this->DrawingLines = true;
- this->Lines.IBO->IndexCount = linesIC;
- this->Tris.IBO->IndexCount = 0;
+ this->Primitives[PrimitiveLines].IBO->IndexCount = linesIC;
+ this->Primitives[PrimitiveTris].IBO->IndexCount = 0;
this->Superclass::RenderPieceDraw(ren, actor);
- this->Tris.IBO->IndexCount = polysIC;
+ this->Primitives[PrimitiveTris].IBO->IndexCount = polysIC;
}
//-------------------------------------------------------------------------
Mit freundlichen Grüßen,
Christoph Statz
--
Dipl.-Ing. Christoph Statz
Wissenschaftlicher Mitarbeiter
Technische Universität Dresden
Institut für Nachrichtentechnik
Lehrstuhl Hochfrequenztechnik
Helmholtzstr. 18
01062 Dresden
Tel: 0351 - 463 32287
Fax: 0351 - 463 37163
Email: christoph.statz@tu-dresden.de
Best regards,
Christoph
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Eric Brugger
Original creation: 07/26/2017 02:30 pm
Original update: 09/14/2017 11:23 am
Ticket number: 2871 | priority | patch for vtk the following patch was sent visit users list for vtk from christoph statz via visit users sent thursday july am to visit users elist ornl gov cc christoph statz subject visit vtk upgrade vtk dear visit developers i m not sure who is working on the vtk upgrade recently i just wanted to mention the vtk branch builds fine against vtk at least under osx except the mesh plot the api of vtkopenglpolydatamapper changed the attached patch for the vtkopenglmeshplotmapperhelper adapts to that change index vtkopenglmeshplotmapper c vtkopenglmeshplotmapper c revision vtkopenglmeshplotmapper c working copy void vtkopenglmeshplotmapperhelper renderpiecedraw vtkrenderer ren vtkactor actor int linesic this lines ibo indexcount int polysic this tris ibo indexcount int linesic this primitives ibo indexcount int polysic this primitives ibo indexcount draw surface first if this owner getusepolys this drawingpolys true this drawinglines false this lines ibo indexcount this primitives ibo indexcount this superclass renderpiecedraw ren actor draw lines second this drawingpolys false this drawinglines true this lines ibo indexcount linesic this tris ibo indexcount this primitives ibo indexcount linesic this primitives ibo indexcount this superclass renderpiecedraw ren actor this tris ibo indexcount polysic this primitives ibo indexcount polysic mit freundlichen grüßen christoph statz dipl ing christoph statz wissenschaftlicher mitarbeiter technische universität dresden institut für nachrichtentechnik lehrstuhl hochfrequenztechnik helmholtzstr dresden tel fax email christoph statz tu dresden de best regards christoph redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author eric brugger original creation pm original update am ticket number | 1 |
776,372 | 27,257,970,399 | IssuesEvent | 2023-02-22 12:58:37 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | verify .sig for tarballs | type: look into type: feature stage: queue priority: medium complex: medium whiteboard | many tarballs come with .sig files (especially, for GNU), it might be worth to improve tools.download helper to optionally verify downloaded tarballs, if possible | 1.0 | verify .sig for tarballs - many tarballs come with .sig files (especially, for GNU), it might be worth to improve tools.download helper to optionally verify downloaded tarballs, if possible | priority | verify sig for tarballs many tarballs come with sig files especially for gnu it might be worth to improve tools download helper to optionally verify downloaded tarballs if possible | 1 |
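The conan issue above asks for optional `.sig` verification of downloaded tarballs. Detached-signature checking is typically a `gpg --verify <sig> <tarball>` invocation; a minimal sketch that only builds the command (assumptions: `gpg` is on PATH, and the signature sits next to the tarball as `<tarball>.sig` — neither is conan's actual API):

```python
def verify_signature_cmd(tarball: str, signature: str = None) -> list:
    """Build a gpg command that verifies a detached signature for a tarball.

    The '<tarball>.sig' default follows the common GNU convention; it is
    an assumption for illustration, not a conan rule.
    """
    if signature is None:
        signature = tarball + ".sig"
    return ["gpg", "--verify", signature, tarball]
```

A download helper could run this via `subprocess.run` after fetching both files and fail the download on a non-zero exit code.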
411,323 | 12,016,890,202 | IssuesEvent | 2020-04-10 17:07:24 | alan-turing-institute/distr6 | https://api.github.com/repos/alan-turing-institute/distr6 | opened | Return type for simulations from multivariate distributions | analytical medium priority | There's currently an inconsistency in how random draws are turned for multivariate distributions. Current set-up:
| | Univariate | Multivariate |
|---|---|---|
| **Standard** | vector | data.table, rows = draws, cols = corresponding variable |
| **VectorDist** | data.table, rows = draws, cols = corresponding parameter | ??? |
I would like to find a way that doesn't involve arrays but not sure if that's possible... | 1.0 | Return type for simulations from multivariate distributions - There's currently an inconsistency in how random draws are turned for multivariate distributions. Current set-up:
| | Univariate | Multivariate |
|---|---|---|
| **Standard** | vector | data.table, rows = draws, cols = corresponding variable |
| **VectorDist** | data.table, rows = draws, cols = corresponding parameter | ??? |
I would like to find a way that doesn't involve arrays but not sure if that's possible... | priority | return type for simulations from multivariate distributions there s currently an inconsistency in how random draws are turned for multivariate distributions current set up univariate multivariate standard vector data table rows draws cols corresponding variable vectordist data table rows draws cols corresponding parameter i would like to find a way that doesn t involve arrays but not sure if that s possible | 1 |
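The distr6 issue above is about the shape convention for multivariate draws. The "rows = draws, cols = variables" convention can be made concrete with plain Python structures (distr6 itself is R; this is only a sketch of the convention, with a made-up sampler):

```python
def rand_multivariate(n_draws, variables, sampler):
    """Return draws as a list of dicts: one dict per draw,
    one key per variable (rows = draws, cols = variables).
    """
    return [{v: sampler(v, i) for v in variables} for i in range(n_draws)]

# Deterministic stand-in sampler, purely for illustration.
draws = rand_multivariate(3, ["x1", "x2"], lambda v, i: float(i))
```

The avoided alternative would be a 3-D array (draws x variables x distributions), which is what the author hopes to sidestep.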
591,477 | 17,840,705,137 | IssuesEvent | 2021-09-03 09:41:53 | francheska-vicente/cssweng | https://api.github.com/repos/francheska-vicente/cssweng | opened | Rooms that offer monthly rate should not have extra persons | bug priority: high issue: back-end severity: medium issue: validation | ### Summary
- Rooms that offer monthly rates should not entertain extra pax.
### Steps to Reproduce
1. login
2. choose any date for booking
3. choose room 305
4. input 100 in the number of pax field
### Visual Proof

### Expected Results:
- Number of persons should be limited to at most 4 pax for twin bed (rm. 305).
### Actual Results:
- There is no limit to the number of extra pax for the monthly rate of twin bed (rm. 305).
| Additional Information | |
| ----------- | ----------- |
| Platform | V8 engine (Google) |
| Operating System | Windows 10 | | 1.0 | Rooms that offer monthly rate should not have extra persons - ### Summary
- Rooms that offer monthly rates should not entertain extra pax.
### Steps to Reproduce
1. login
2. choose any date for booking
3. choose room 305
4. input 100 in the number of pax field
### Visual Proof

### Expected Results:
- Number of persons should be limited to at most 4 pax for twin bed (rm. 305).
### Actual Results:
- There is no limit to the number of extra pax for the monthly rate of twin bed (rm. 305).
| Additional Information | |
| ----------- | ----------- |
| Platform | V8 engine (Google) |
| Operating System | Windows 10 | | priority | rooms that offer monthly rate should not have extra persons summary rooms that offer monthly rates should not entertain extra pax steps to reproduce login choose any date for booking choose room input in the number of pax field visual proof expected results number of persons should be limited to at most pax for twin bed rm actual results there is no limit to the number of extra pax for the monthly rate of twin bed rm additional information platform engine google operating system windows | 1 |
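The expected behaviour in the booking bug above amounts to a small validation rule: monthly-rate rooms must not accept extra persons beyond the bed capacity. A sketch of such a check; the room number and the 4-pax cap come from the issue, while the function itself is hypothetical, not the cssweng codebase:

```python
MONTHLY_RATE_ROOMS = {"305"}  # rooms offering a monthly rate (from the issue)
MAX_PAX_TWIN = 4              # capacity for a twin-bed room like 305

def is_valid_booking(room: str, pax: int, monthly: bool) -> bool:
    """Reject bookings whose pax count exceeds the capacity of a
    monthly-rate room; require at least one guest otherwise."""
    if monthly and room in MONTHLY_RATE_ROOMS and pax > MAX_PAX_TWIN:
        return False
    return pax >= 1
```

Server-side validation like this is what prevents the "100 pax" input from being accepted regardless of front-end state.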
823,920 | 31,073,719,530 | IssuesEvent | 2023-08-12 08:02:27 | kkrt-labs/kakarot-rpc | https://api.github.com/repos/kkrt-labs/kakarot-rpc | closed | feat: eth_getProof | new-feature stale priority-medium | # eth_getProof
## Metadata
- name: getProof
- prefix: eth
- state: ⚠️
- [specification](https://github.com/ethereum/execution-apis/api-documentation/)
## Specification Description
Returns the account and storage values of the specified account including the Merkle-proof.
This call can be used to verify that the data you are pulling from is not tampered with.
Describe the method
### Parameters
<img width="700px" alt="image" src="https://github.com/kkrt-labs/kakarot-rpc/assets/41180869/919ce1df-7998-4bf7-8890-f406a025744d">
### Returns
<img width="1292" alt="image" src="https://github.com/kkrt-labs/kakarot-rpc/assets/41180869/1549c3d8-f2e9-47f8-9eb4-607d16f6cd81">
## Kakarot Logic
todo
### Kakarot methods
todo
### Starknet methods
todo
| 1.0 | feat: eth_getProof - # eth_getProof
## Metadata
- name: getProof
- prefix: eth
- state: ⚠️
- [specification](https://github.com/ethereum/execution-apis/api-documentation/)
## Specification Description
Returns the account and storage values of the specified account including the Merkle-proof.
This call can be used to verify that the data you are pulling from is not tampered with.
Describe the method
### Parameters
<img width="700px" alt="image" src="https://github.com/kkrt-labs/kakarot-rpc/assets/41180869/919ce1df-7998-4bf7-8890-f406a025744d">
### Returns
<img width="1292" alt="image" src="https://github.com/kkrt-labs/kakarot-rpc/assets/41180869/1549c3d8-f2e9-47f8-9eb4-607d16f6cd81">
## Kakarot Logic
todo
### Kakarot methods
todo
### Starknet methods
todo
| priority | feat eth getproof eth getproof metadata name getproof prefix eth state ⚠️ specification description returns the account and storage values of the specified account including the merkle proof this call can be used to verify that the data you are pulling from is not tampered with describe the method parameters img width alt image src returns img width alt image src kakarot logic todo kakarot methods todo starknet methods todo | 1 |
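Per the Ethereum JSON-RPC spec referenced above, `eth_getProof` takes an address, a list of storage keys, and a block tag. A minimal sketch of the request body an RPC client would send (the address below is a zero-filled placeholder):

```python
import json

def eth_get_proof_request(address, storage_keys, block="latest", req_id=1):
    """Serialize a JSON-RPC 2.0 request for eth_getProof."""
    payload = {
        "jsonrpc": "2.0",
        "method": "eth_getProof",
        "params": [address, list(storage_keys), block],
        "id": req_id,
    }
    return json.dumps(payload)
```

The response carries the account proof and per-key storage proofs, which a client can verify against the block's state root.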
618,592 | 19,476,252,758 | IssuesEvent | 2021-12-24 13:00:16 | vaexio/vaex | https://api.github.com/repos/vaexio/vaex | closed | [Locking] When converting tsv files, it would be nice to specify the directory where the vaex lock files go | enhancement priority: medium | **Description**
When converting tsv files, it would be nice to specify the directory where the vaex lock files go. If we could create an option for example:
```
vx.from_csv(path_to_file, lock_folder="/tmp")
```
That would reduce the clutter a bit. Also, sometimes the current working directory is read-only even though the location for the HDF5 and tsv files is writeable.
**Is your feature request related to a problem? Please describe.**
Currently it's in the active working directory, which might clutter things, and is not actually cleaned up.
| 1.0 | [Locking] When converting tsv files, it would be nice to specify the directory where the vaex lock files go - **Description**
When converting tsv files, it would be nice to specify the directory where the vaex lock files go. If we could create an option for example:
```
vx.from_csv(path_to_file, lock_folder="/tmp")
```
That would reduce the clutter a bit. Also, sometimes the current working directory is read-only even though the location for the HDF5 and tsv files is writeable.
**Is your feature request related to a problem? Please describe.**
Currently it's in the active working directory, which might clutter things, and is not actually cleaned up.
| priority | when converting tsv files it would be nice to specify the directory where the vaex lock files go description when converting tsv files it would be nice to specify the directory where the vaex lock files go if we could create an option for example vx from csv path to file lock folder tmp that could make this clutter a bit better also sometimes the current working directory is read only even though the location for the and tsv files are writeable is your feature request related to a problem please describe currently its in the active working directory which might clutter things and is not actually cleaned up | 1 |
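The vaex request above boils down to deriving the lock-file path from a configurable directory instead of the current working directory. A sketch of that path logic; the `.lock` suffix and use of the data file's base name are assumptions for illustration, not vaex's real naming scheme:

```python
import os

def lock_path(data_path: str, lock_folder: str = None) -> str:
    """Place the lock file in the current working directory by default,
    or inside an explicit lock_folder (e.g. '/tmp') when given."""
    folder = lock_folder if lock_folder is not None else os.getcwd()
    return os.path.join(folder, os.path.basename(data_path) + ".lock")
```

With a helper like this, a `lock_folder=` keyword on the conversion entry point would also sidestep read-only working directories.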
317,619 | 9,667,001,322 | IssuesEvent | 2019-05-21 12:16:34 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Prepare for diff-rotations from different points of view. | Effort High Feature Request Package Intermediate Priority Medium coordinates | I've got this from a [draft document](https://issues.cosmos.esa.int/solarorbiterwiki/download/attachments/5801215/Triplet-TN%20SOL-SGS-TN-0020%20v0_2.pdf?version=1&modificationDate=1499950338000&api=v2) from [Solar Orbiter team](https://issues.cosmos.esa.int/solarorbiterwiki/display/SOSP/SOC+Documents):
> 4.3 SOC handling of the differential rotation
> SOC will use the following model of differential rotation to propagate from the triplet
> epoch:
> ω(Φ) = A + B sin²(Φ) + C sin⁴(Φ)
> Where ω is the rotation rate (in deg/day)
> And Φ is the solar latitude
>
> SOC will choose one of two sets of parameters:
> For magnetic features, meaning sunspots/active regions (derived from “Magnetic”
> in [DIFF])
>
> A = 14.252
> B = -1.678
> C = -2.401
> For non-magnetic features, e.g. coronal holes
> A = 14.705
> B = 0.0
> C = 0.0
> Note that this is a “rigid” rotation corresponding to 26.24 day synodic period from
> Earth (=> 24.48 day sidereal period).
None of the current values we've got match what they are planning.
```python
>>> howard.to(u.deg / u.day)
<Quantity [ 14.32632838, -2.11875209, -1.83163148] deg / d>
>>> snodgrass.to(u.deg / u.day)
<Quantity [ 14.1134631 , -1.69797189, -2.34646844] deg / d>
>>> allen.to(u.deg / u.day)
<Quantity [ 14.44, -3. , 0. ] deg / d>
```
We do have a `synodic` correction as an option: `rotation -= 0.9856 * u.deg / u.day * duration` | 1.0 | Prepare for diff-rotations from different points of view. - I've got this from a [draft document](https://issues.cosmos.esa.int/solarorbiterwiki/download/attachments/5801215/Triplet-TN%20SOL-SGS-TN-0020%20v0_2.pdf?version=1&modificationDate=1499950338000&api=v2) from [Solar Orbiter team](https://issues.cosmos.esa.int/solarorbiterwiki/display/SOSP/SOC+Documents):
> 4.3 SOC handling of the differential rotation
> SOC will use the following model of differential rotation to propagate from the triplet
> epoch:
> ω(Φ) = A + B sin²(Φ) + C sin⁴(Φ)
> Where ω is the rotation rate (in deg/day)
> And Φ is the solar latitude
>
> SOC will choose one of two sets of parameters:
> For magnetic features, meaning sunspots/active regions (derived from “Magnetic”
> in [DIFF])
>
> A = 14.252
> B = -1.678
> C = -2.401
> For non-magnetic features, e.g. coronal holes
> A = 14.705
> B = 0.0
> C = 0.0
> Note that this is a “rigid” rotation corresponding to 26.24 day synodic period from
> Earth (=> 24.48 day sidereal period).
None of the current values we've got match what they are planning.
```python
>>> howard.to(u.deg / u.day)
<Quantity [ 14.32632838, -2.11875209, -1.83163148] deg / d>
>>> snodgrass.to(u.deg / u.day)
<Quantity [ 14.1134631 , -1.69797189, -2.34646844] deg / d>
>>> allen.to(u.deg / u.day)
<Quantity [ 14.44, -3. , 0. ] deg / d>
```
We do have a `synodic` correction as an option: `rotation -= 0.9856 * u.deg / u.day * duration` | priority | prepare for diff rotations from different points of view i ve got this from a from soc handling of the differential rotation soc will use the following model of differential rotation to propagate from the triplet epoch ω φ a b φ c φ where ω is the rotation rate in deg day and φ is the solar latitude soc will choose one of two sets of parameters for magnetic features meaning sunspots active regions derived from “magnetic” in a b c for non magnetic features e g coronal holes a b c note that this is a “rigid” rotation corresponding to day synodic period from earth day sidereal period none of the current values we ve got matches what they are planing python howard to u deg u day snodgrass to u deg u day allen to u deg u day we do have a synodic correction as an option rotation u deg u day duration | 1 |
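The quoted model and both Solar Orbiter parameter sets can be checked numerically. A plain-Python sketch (degrees per day, latitude in degrees); note the non-magnetic "rigid" rate reproduces the 24.48-day sidereal period mentioned in the quote:

```python
import math

# Parameter sets quoted in the issue, in deg/day.
MAGNETIC = (14.252, -1.678, -2.401)
NON_MAGNETIC = (14.705, 0.0, 0.0)

def rotation_rate(latitude_deg, params):
    """omega(phi) = A + B*sin^2(phi) + C*sin^4(phi), in deg/day."""
    a, b, c = params
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return a + b * s2 + c * s2 * s2

# 360 / 14.705 ~= 24.48 days, matching the issue's sidereal-period note.
sidereal_period = 360.0 / rotation_rate(0.0, NON_MAGNETIC)
```

Comparing `MAGNETIC` against the `howard`, `snodgrass`, and `allen` coefficient triples printed above makes the mismatch in the issue concrete.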
618,350 | 19,432,951,410 | IssuesEvent | 2021-12-21 14:05:08 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Soft error while creating MariaDB user in 10.6.5 during deployment | BUG WMAgent Medium Priority MariaDB | **Impact of the bug**
WMAgent
**Describe the bug**
In the first real exercise with this new MariaDB version (10.6.5) integrated into the WMCore stack, there is a soft error during the agent deployment while creating a MariaDB user [1]. From the logs, I think this error happens because two users (root and cmsdataops, the unix user used to deploy these services) have already been created when MariaDB is installed.
**How to reproduce it**
Deploy WMAgent Py3 tag 1.5.7.pre1 with MariaDB backend (using deployment tag HG2201b).
**Expected behavior**
If that local user is already properly created and has all the necessary privileges, then we could likely remove these lines from the manage script:
https://github.com/dmwm/deployment/blob/master/wmagentpy3/manage#L290-L292
but further debugging is still necessary.
**Additional context and error message**
[1] Logs from the agent deployment
```
starting mysqld_safe...
Checking MySQL Socket file exists...
Socket file exists: /storage/local/data1/cmsdataops/srv/wmagent/v1.5.7.pre1/install/mysql/logs/mysql.sock
MySQL has not been initialised... running post initialisation
Installing the mysql schema...
Socket file exists, proceeding with schema install...
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'cmsdataops'@'localhost'
Installing WMAgent Database: wmagent
Checking Server connection...
Connection OK
Done!
```
| 1.0 | Soft error while creating MariaDB user in 10.6.5 during deployment - **Impact of the bug**
WMAgent
**Describe the bug**
In the first real exercise with this new MariaDB version (10.6.5) integrated into the WMCore stack, there is a soft error during the agent deployment while creating a MariaDB user [1]. From the logs, I think this error happens because two users (root and cmsdataops, the unix user used to deploy these services) have already been created when MariaDB is installed.
**How to reproduce it**
Deploy WMAgent Py3 tag 1.5.7.pre1 with MariaDB backend (using deployment tag HG2201b).
**Expected behavior**
If that local user is already properly created and has all the necessary privileges, then we could likely remove these lines from the manage script:
https://github.com/dmwm/deployment/blob/master/wmagentpy3/manage#L290-L292
but further debugging is still necessary.
**Additional context and error message**
[1] Logs from the agent deployment
```
starting mysqld_safe...
Checking MySQL Socket file exists...
Socket file exists: /storage/local/data1/cmsdataops/srv/wmagent/v1.5.7.pre1/install/mysql/logs/mysql.sock
MySQL has not been initialised... running post initialisation
Installing the mysql schema...
Socket file exists, proceeding with schema install...
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'cmsdataops'@'localhost'
Installing WMAgent Database: wmagent
Checking Server connection...
Connection OK
Done!
```
| priority | soft error while creating mariadb user in during deployment impact of the bug wmagent describe the bug in the first real exercise with this new mariadb version integrated into the wmcore stack there is a soft error during the agent deployment while creating a mariadb user from the logs i think this error happens because two users root and cmsdataops the unix user used to deploy these services have already been created when mariadb is installed how to reproduce it deploy wmagent tag with mariadb backend using deployment tag expected behavior if that local user is already properly created and have all the necessary priveleges then we could likely remove these lines from the manage script but further debugging is still necessary additional context and error message logs from the agent deployment starting mysqld safe checking mysql socket file exists socket file exists storage local cmsdataops srv wmagent install mysql logs mysql sock mysql has not been initialised running post initialisation installing the mysql schema socket file exists proceeding with schema install error at line operation create user failed for cmsdataops localhost installing wmagent database wmagent checking server connection connection ok done | 1 |
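One common way to make a bootstrap step tolerant of a pre-existing account is MariaDB's `CREATE USER IF NOT EXISTS` (available since 10.1). A sketch that only builds the statement, as a possible alternative to removing the manage-script lines; it does not touch the real deployment code:

```python
def create_user_sql(user, host="localhost", password=""):
    """Build an idempotent CREATE USER statement for MariaDB >= 10.1.

    The quoting here is simplistic and for illustration only; a real
    script should use the driver's parameter escaping.
    """
    stmt = "CREATE USER IF NOT EXISTS '{}'@'{}'".format(user, host)
    if password:
        stmt += " IDENTIFIED BY '{}'".format(password)
    return stmt + ";"
```

With `IF NOT EXISTS`, the `ERROR 1396` seen in the logs becomes a warning instead of a failed statement.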
222,000 | 7,404,424,006 | IssuesEvent | 2018-03-20 04:40:22 | codenameone/CodenameOne | https://api.github.com/repos/codenameone/CodenameOne | reopened | Version numbering on iOS forces review on Testflight. | Priority-Medium Type-Enhancement | Original [issue 1406](https://code.google.com/p/codenameone/issues/detail?id=1406) created by codenameone on 2015-03-18T22:24:44.000Z:
##
## Apologies if the terminology is incorrect. I'm talking about the (app) version number styled as 1.60 for example and the bundle version number styled as 160
Due to the new testing system from Apple, builds need to be reviewed if the version number changes, but not if only the bundle version number changes. It's also not possible to upload two builds with the same bundle version number.
Codename One's management of version numbers means that new builds are flagged as new versions and need to go through review, which means it's 48 hours or so before customers can get hold of updates for testing. If there was an option in CN1 that allowed setting the bundle version number independently of the application version, then it would be possible to update TestFlight builds without review and get them out faster.
| 1.0 | Version numbering on iOS forces review on Testflight. - Original [issue 1406](https://code.google.com/p/codenameone/issues/detail?id=1406) created by codenameone on 2015-03-18T22:24:44.000Z:
##
## Apologies if the terminology is incorrect. I'm talking about the (app) version number styled as 1.60 for example and the bundle version number styled as 160
Due to the new testing system from Apple, builds need to be reviewed if the version number changes, but not if only the bundle version number changes. It's also not possible to upload two builds with the same bundle version number.
Codename One's management of version numbers means that new builds are flagged as new versions and need to go through review, which means it's 48 hours or so before customers can get hold of updates for testing. If there was an option in CN1 that allowed setting the bundle version number independently of the application version, then it would be possible to update TestFlight builds without review and get them out faster.
| priority | version numbering on ios forces review on testflight original created by codenameone on apologies if the terminology is incorrect i m talking about the app version number styled as for example and the bundle version number styled as due to the new testing system from apple builds need to be reviewed if the version number changes but not if only the bundle version number changes but also its not possible to upload two builds with the same bundle version number codenameone s management of version numbers means that new builds are flagged as new versions and need to go through review which means its hours or so before customers can get hold of updates for testing if there was an option in that allowed setting the bundle version number independently of the application version then it would be possible to update testflight builds without review and get them out faster | 1 |
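The distinction in the issue above is between the reviewed version string (e.g. "1.60", iOS `CFBundleShortVersionString`) and the bundle version (e.g. "160", `CFBundleVersion`), which must be unique per upload. A sketch of deriving a unique bundle number from a fixed version string plus a build counter; the scheme is illustrative, not Codename One's actual behaviour:

```python
def bundle_version(short_version: str, build: int = 0) -> str:
    """Derive a CFBundleVersion-style number from a version string.

    '1.60' with build 0 -> '160'; a non-zero build counter is appended
    so repeated TestFlight uploads stay unique without changing the
    reviewed version string.
    """
    digits = short_version.replace(".", "")
    return digits if build == 0 else "{}.{}".format(digits, build)
```

Letting the build counter vary while the version string stays fixed is exactly what avoids a fresh review per upload.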
514,370 | 14,937,865,934 | IssuesEvent | 2021-01-25 15:07:27 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Display error message for check connection | area/frontend priority/medium type/enhancement | ## Tell us about the problem you're trying to solve
`CheckConnectionRead` _optionally_ returns a `message` field. When present this message field will contain information that is helpful to the user for figuring out why their connection did not succeed. It gets displayed in the logs right now, but there's _a lot_ of information there, so we want to make it easier to find.
## Describe the solution you’d like
* If the check connection fails and the message field is present on the `CheckConnectionRead` instead of displaying "Could not connect with provided credentials" instead we should display the message. If no message is present then we continue to do the same thing we already do.
* This should be done for when a source / destination is being created and when it is being updated in settings.
* The message field is unfortunately not guaranteed to be small or well formatted. Ideally the UI doesn't break if there's a lot of text here. No matter what if it's badly formatted there's not too much we will be able to do.

| 1.0 | Display error message for check connection - ## Tell us about the problem you're trying to solve
`CheckConnectionRead` _optionally_ returns a `message` field. When present this message field will contain information that is helpful to the user for figuring out why their connection did not succeed. It gets displayed in the logs right now, but there's _a lot_ of information there, so we want to make it easier to find.
## Describe the solution you’d like
* If the check connection fails and the message field is present on the `CheckConnectionRead` instead of displaying "Could not connect with provided credentials" instead we should display the message. If no message is present then we continue to do the same thing we already do.
* This should be done for when a source / destination is being created and when it is being updated in settings.
* The message field is unfortunately not guaranteed to be small or well formatted. Ideally the UI doesn't break if there's a lot of text here. No matter what if it's badly formatted there's not too much we will be able to do.

| priority | display error message for check connection tell us about the problem you re trying to solve checkconnectionread optionally returns a message field when present this message field will contain information that is helpful to the user for figuring out why their connection did not succeed it gets displayed in the logs right now but there s a lot of information there so we want to make it easier to find describe the solution you’d like if the check connection fails and the message field is present on the checkconnectionread instead of displaying could not connect with provided credentials instead we should display the message if no message is present then we continue to do the same thing we already do this should be done for when a source destination is being created and when it is being updated in settings the message field is unfortunately not guaranteed to be small or well formatted ideally the ui doesn t break if there s a lot of text here no matter what if it s badly formatted there s not too much we will be able to do | 1 |
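The display rule described above reduces to "show the optional message when present, else the generic text". A sketch of that fallback; the field name follows the issue's `CheckConnectionRead`, and the UI wiring is omitted:

```python
DEFAULT_ERROR = "Could not connect with provided credentials"

def connection_error_text(check_connection_read: dict) -> str:
    """Prefer the optional 'message' field of CheckConnectionRead,
    falling back to the generic error text when absent or empty."""
    message = check_connection_read.get("message")
    return message if message else DEFAULT_ERROR
```

Since the message is not guaranteed to be small or well formatted, the component rendering this string should wrap or scroll rather than assume a single line.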
648,799 | 21,194,248,612 | IssuesEvent | 2022-04-08 21:26:07 | spacetelescope/mirage | https://api.github.com/repos/spacetelescope/mirage | closed | Add input 1D spectrum and transmission curve to FITS file for `SossSim` | Enhancement Medium Priority SOSS niriss | Validation of of extracted spectra are very important so storing this information in the final output would be very beneficial so users can make direct comparisons. Put `STAR` and `PLANET` extension in the output FITS file(s). | 1.0 | Add input 1D spectrum and transmission curve to FITS file for `SossSim` - Validation of of extracted spectra are very important so storing this information in the final output would be very beneficial so users can make direct comparisons. Put `STAR` and `PLANET` extension in the output FITS file(s). | priority | add input spectrum and transmission curve to fits file for sosssim validation of of extracted spectra are very important so storing this information in the final output would be very beneficial so users can make direct comparisons put star and planet extension in the output fits file s | 1 |
230,007 | 7,603,254,874 | IssuesEvent | 2018-04-29 12:37:35 | RPGHacker/asar | https://api.github.com/repos/RPGHacker/asar | closed | Add support for include guards | new feature priority: medium | Add an include guard command like C++'s "pragma once" which makes a file only be included at most once via an incsrc.
Current solution is to use ?= for this, which works, but is ugly and impractical. | 1.0 | Add support for include guards - Add an include guard command like C++'s "pragma once" which makes a file only be included at most once via an incsrc.
Current solution is to use ?= for this, which works, but is ugly and impractical. | priority | add support for include guards add an include guard command like c s pragma once which makes a file only be included at most once via an incsrc current solution is to use for this which works but is ugly and impractical | 1 |
351,476 | 10,519,194,743 | IssuesEvent | 2019-09-29 16:15:15 | ChoiSojung/playover | https://api.github.com/repos/ChoiSojung/playover | opened | App push notifications | medium priority | As a user I would like the option to receive push notifications when activity occurs within my app so that I don't have to open my app to check if anything has occured. | 1.0 | App push notifications - As a user I would like the option to receive push notifications when activity occurs within my app so that I don't have to open my app to check if anything has occured. | priority | app push notifications as a user i would like the option to receive push notifications when activity occurs within my app so that i don t have to open my app to check if anything has occured | 1 |
728,241 | 25,072,585,319 | IssuesEvent | 2022-11-07 13:18:45 | AY2223S1-CS2113-T18-2/tp | https://api.github.com/repos/AY2223S1-CS2113-T18-2/tp | closed | Follow standard test method names | priority.Medium | The standard naming convention for test methods should be `whatIsBeingTested_descriptionOfTestInputs_expectedOutcome`. Some of the test methods do not follow this standard naming convention. It will be great if all of us can follow so that the entire code seems more coherent.


| 1.0 | Follow standard test method names - The standard naming convention for test methods should be `whatIsBeingTested_descriptionOfTestInputs_expectedOutcome`. Some of the test methods do not follow this standard naming convention. It will be great if all of us can follow so that the entire code seems more coherent.


| priority | follow standard test method names the standard naming convention for test methods should be whatisbeingtested descriptionoftestinputs expectedoutcome some of the test methods do not follow this standard naming convention it will be great if all of us can follow so that the entire code seems more coherent | 1 |
705,876 | 24,253,070,970 | IssuesEvent | 2022-09-27 15:31:23 | netdata/netdata-cloud | https://api.github.com/repos/netdata/netdata-cloud | closed | [Bug]: Chart plays after user lands in single node view through Go to Chart/Run correlations | bug internal submit priority/medium cloud-frontend alerts-team | ### Bug description
After a user clicks on an alert and selects to run correlations around the alert duration or go to chart for that alert, the datetimepicker is supposed to PAUSE after user lands in single node view
### Expected behavior
datetimepicker should be PAUSE after user lands in single node view after clicking go to chart/run correlations and chart shouldn't be getting udpated
### Steps to reproduce
1. Navigate to Netdata Cloud
2. Claim an agent
3. Trigger an alert to that agent
4. Open the alert's details modal by navigating to Alerts tab and clicking the desired alert name
5. Click view dedicated alert page
6. Click either Go To Chart or Run Correlations
7. In single node view move the mouse around, chart initially paused, starts playing
### Screenshots

### Error Logs
_No response_
### Desktop
OS: MacOS
Browser: Chrome
Browser Version: 103
### Additional context
_No response_ | 1.0 | [Bug]: Chart plays after user lands in single node view through Go to Chart/Run correlations - ### Bug description
After a user clicks on an alert and selects to run correlations around the alert duration or go to chart for that alert, the datetimepicker is supposed to PAUSE after user lands in single node view
### Expected behavior
datetimepicker should be PAUSE after user lands in single node view after clicking go to chart/run correlations and chart shouldn't be getting udpated
### Steps to reproduce
1. Navigate to Netdata Cloud
2. Claim an agent
3. Trigger an alert to that agent
4. Open the alert's details modal by navigating to Alerts tab and clicking the desired alert name
5. Click view dedicated alert page
6. Click either Go To Chart or Run Correlations
7. In single node view move the mouse around, chart initially paused, starts playing
### Screenshots

### Error Logs
_No response_
### Desktop
OS: MacOS
Browser: Chrome
Browser Version: 103
### Additional context
_No response_ | priority | chart plays after user lands in single node view through go to chart run correlations bug description after a user clicks on an alert and selects to run correlations around the alert duration or go to chart for that alert the datetimepicker is supposed to pause after user lands in single node view expected behavior datetimepicker should be pause after user lands in single node view after clicking go to chart run correlations and chart shouldn t be getting udpated steps to reproduce navigate to netdata cloud claim an agent trigger an alert to that agent open the alert s details modal by navigating to alerts tab and clicking the desired alert name click view dedicated alert page click either go to chart or run correlations in single node view move the mouse around chart initially paused starts playing screenshots error logs no response desktop os macos browser chrome browser version additional context no response | 1 |
398,753 | 11,742,296,766 | IssuesEvent | 2020-03-12 00:16:32 | thaliawww/concrexit | https://api.github.com/repos/thaliawww/concrexit | closed | Bij de bestuursbeschrijving witruimte kunnen toevoegen. | feature priority: medium | In GitLab by jguijt on Nov 9, 2017, 15:18
### One-sentence description
Bij de bestuursbeschrijving kan ik ook witruimte toevoegen.
### Desired behaviour
Bij de bestuursbeschrijving wordt de witruimte nu weggehaald. Het zou fijn zijn als dit een HTMLveld wordt, zodat er wat meer opties zijn om een mooi tekstje te schrijven. | 1.0 | Bij de bestuursbeschrijving witruimte kunnen toevoegen. - In GitLab by jguijt on Nov 9, 2017, 15:18
### One-sentence description
Bij de bestuursbeschrijving kan ik ook witruimte toevoegen.
### Desired behaviour
Bij de bestuursbeschrijving wordt de witruimte nu weggehaald. Het zou fijn zijn als dit een HTMLveld wordt, zodat er wat meer opties zijn om een mooi tekstje te schrijven. | priority | bij de bestuursbeschrijving witruimte kunnen toevoegen in gitlab by jguijt on nov one sentence description bij de bestuursbeschrijving kan ik ook witruimte toevoegen desired behaviour bij de bestuursbeschrijving wordt de witruimte nu weggehaald het zou fijn zijn als dit een htmlveld wordt zodat er wat meer opties zijn om een mooi tekstje te schrijven | 1 |
231,135 | 7,623,870,961 | IssuesEvent | 2018-05-03 16:11:13 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | [ Crash ] Map_Server | component:skill mode:prerenewal mode:renewal priority:medium type:bug | <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/524260183e6369cc2a4bc9441fd73a169e63915b
* **Client Date**: 2017-05-31
* **Server Mode**: RE
* **Description of Issue**: Crash in map_server because of a skill.
CORE_DUMP:
```
[New LWP 1212]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./map-server'.
Program terminated with signal 11, Segmentation fault.
#0 0x00000000005b1a5f in skill_castend_pos2 (src=0xadc40a0, x=76, y=78, skill_id=<optimized out>,
skill_lv=5, tick=55370813, flag=0) at skill.cpp:12287
12287 skill_delunit(ud->skillunit[i_su]->unit);
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.8.x86_64 libgcc-4.8.5-11.el7.x86_64 libstdc++-4.8.5-11.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt full
#0 0x00000000005b1a5f in skill_castend_pos2 (src=0xadc40a0, x=76, y=78, skill_id=<optimized out>,
skill_lv=5, tick=55370813, flag=0) at skill.cpp:12287
acid_lv = <optimized out>
i_su = 2
ud = <optimized out>
sc = <optimized out>
sce = <optimized out>
sg = <optimized out>
type = <optimized out>
i = <optimized out>
sd = 0xadc40a0
flag = 0
skill_lv = 5
y = 78
tick = 55370813
skill_id = <optimized out>
x = 76
src = 0xadc40a0
#1 0x00000000005b2fb6 in skill_castend_pos (tid=tid@entry=-1, tick=tick@entry=55370813,
id=<optimized out>, data=data@entry=0) at skill.cpp:11530
maxcount = <optimized out>
__FUNCTION__ = "skill_castend_pos"
src = 0xadc40a0
sd = <optimized out>
ud = 0xadc40c0
md = 0x0
#2 0x00000000005f69b9 in unit_skilluse_pos2 (src=src@entry=0xadc40a0, skill_x=skill_x@entry=76,
skill_y=skill_y@entry=78, skill_id=skill_id@entry=2486, skill_lv=skill_lv@entry=5, casttime=0,
castcancel=castcancel@entry=1) at unit.cpp:2117
sc = <optimized out>
bl = {next = 0x74136ea286fc, prev = 0x578b8e <skill_chk(uint16*)+30>, id = 5, m = 962,
x = 76, y = 78, type = BL_NUL}
range = <optimized out>
__FUNCTION__ = "unit_skilluse_pos2"
---Type <return> to continue, or q <return> to quit---
sd = 0xadc40a0
ud = <optimized out>
tick = 55370813
#3 0x00000000005f6a84 in unit_skilluse_pos (src=0xadc40a0, skill_x=<optimized out>,
skill_y=<optimized out>, skill_id=2486, skill_lv=<optimized out>) at unit.cpp:1958
No locals.
#4 0x0000000000480bf0 in clif_parse_UseSkillToPos (fd=<optimized out>, sd=<optimized out>)
at clif.cpp:12314
info = <optimized out>
#5 0x00000000004a0d7d in clif_parse (fd=89) at clif.cpp:20350
cmd = 2343
packet_len = 10
sd = 0xadc40a0
pnum = 0
#6 0x00000000005fc95d in do_sockets (next=<optimized out>) at socket.c:916
rfd = {__fds_bits = {0, 33554432, 0 <repeats 14 times>}}
timeout = {tv_sec = 0, tv_usec = 7438}
ret = 0
i = 89
#7 0x00000000004076d3 in main (argc=1, argv=0x74136ea28998) at core.cpp:371
next = <optimized out>
(gdb)
```
FILES:
1. skill.cpp:12287
2. skill.cpp:11530
3. unit.cpp:2117
4. unit.cpp:1958
5. clif.cpp:20350
6. socket.c:916
7. core.cpp:371
DB: 2486 | 1.0 | [ Crash ] Map_Server - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**: https://github.com/rathena/rathena/commit/524260183e6369cc2a4bc9441fd73a169e63915b
* **Client Date**: 2017-05-31
* **Server Mode**: RE
* **Description of Issue**: Crash in map_server because of a skill.
CORE_DUMP:
```
[New LWP 1212]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./map-server'.
Program terminated with signal 11, Segmentation fault.
#0 0x00000000005b1a5f in skill_castend_pos2 (src=0xadc40a0, x=76, y=78, skill_id=<optimized out>,
skill_lv=5, tick=55370813, flag=0) at skill.cpp:12287
12287 skill_delunit(ud->skillunit[i_su]->unit);
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.8.x86_64 libgcc-4.8.5-11.el7.x86_64 libstdc++-4.8.5-11.el7.x86_64 pcre-8.32-15.el7_2.1.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt full
#0 0x00000000005b1a5f in skill_castend_pos2 (src=0xadc40a0, x=76, y=78, skill_id=<optimized out>,
skill_lv=5, tick=55370813, flag=0) at skill.cpp:12287
acid_lv = <optimized out>
i_su = 2
ud = <optimized out>
sc = <optimized out>
sce = <optimized out>
sg = <optimized out>
type = <optimized out>
i = <optimized out>
sd = 0xadc40a0
flag = 0
skill_lv = 5
y = 78
tick = 55370813
skill_id = <optimized out>
x = 76
src = 0xadc40a0
#1 0x00000000005b2fb6 in skill_castend_pos (tid=tid@entry=-1, tick=tick@entry=55370813,
id=<optimized out>, data=data@entry=0) at skill.cpp:11530
maxcount = <optimized out>
__FUNCTION__ = "skill_castend_pos"
src = 0xadc40a0
sd = <optimized out>
ud = 0xadc40c0
md = 0x0
#2 0x00000000005f69b9 in unit_skilluse_pos2 (src=src@entry=0xadc40a0, skill_x=skill_x@entry=76,
skill_y=skill_y@entry=78, skill_id=skill_id@entry=2486, skill_lv=skill_lv@entry=5, casttime=0,
castcancel=castcancel@entry=1) at unit.cpp:2117
sc = <optimized out>
bl = {next = 0x74136ea286fc, prev = 0x578b8e <skill_chk(uint16*)+30>, id = 5, m = 962,
x = 76, y = 78, type = BL_NUL}
range = <optimized out>
__FUNCTION__ = "unit_skilluse_pos2"
---Type <return> to continue, or q <return> to quit---
sd = 0xadc40a0
ud = <optimized out>
tick = 55370813
#3 0x00000000005f6a84 in unit_skilluse_pos (src=0xadc40a0, skill_x=<optimized out>,
skill_y=<optimized out>, skill_id=2486, skill_lv=<optimized out>) at unit.cpp:1958
No locals.
#4 0x0000000000480bf0 in clif_parse_UseSkillToPos (fd=<optimized out>, sd=<optimized out>)
at clif.cpp:12314
info = <optimized out>
#5 0x00000000004a0d7d in clif_parse (fd=89) at clif.cpp:20350
cmd = 2343
packet_len = 10
sd = 0xadc40a0
pnum = 0
#6 0x00000000005fc95d in do_sockets (next=<optimized out>) at socket.c:916
rfd = {__fds_bits = {0, 33554432, 0 <repeats 14 times>}}
timeout = {tv_sec = 0, tv_usec = 7438}
ret = 0
i = 89
#7 0x00000000004076d3 in main (argc=1, argv=0x74136ea28998) at core.cpp:371
next = <optimized out>
(gdb)
```
FILES:
1. skill.cpp:12287
2. skill.cpp:11530
3. unit.cpp:2117
4. unit.cpp:1958
5. clif.cpp:20350
6. socket.c:916
7. core.cpp:371
DB: 2486 | priority | map server rathena hash client date server mode re description of issue crash in map server because of a skill core dump using host libthread db library libthread db so core was generated by map server program terminated with signal segmentation fault in skill castend src x y skill id skill lv tick flag at skill cpp skill delunit ud skillunit unit missing separate debuginfos use debuginfo install glibc libgcc libstdc pcre zlib gdb bt full in skill castend src x y skill id skill lv tick flag at skill cpp acid lv i su ud sc sce sg type i sd flag skill lv y tick skill id x src in skill castend pos tid tid entry tick tick entry id data data entry at skill cpp maxcount function skill castend pos src sd ud md in unit skilluse src src entry skill x skill x entry skill y skill y entry skill id skill id entry skill lv skill lv entry casttime castcancel castcancel entry at unit cpp sc bl next prev id m x y type bl nul range function unit skilluse type to continue or q to quit sd ud tick in unit skilluse pos src skill x skill y skill id skill lv at unit cpp no locals in clif parse useskilltopos fd sd at clif cpp info in clif parse fd at clif cpp cmd packet len sd pnum in do sockets next at socket c rfd fds bits timeout tv sec tv usec ret i in main argc argv at core cpp next gdb files skill cpp skill cpp unit cpp unit cpp clif cpp socket c core cpp db | 1 |
123,944 | 4,889,317,171 | IssuesEvent | 2016-11-18 09:48:25 | tardis-sn/tardis | https://api.github.com/repos/tardis-sn/tardis | closed | zone boundaries with read in models | model priority - medium | Problems can arise if one tries to set the inner/outer boundary value for the velocity (in the model part of the yaml file) to be very close (exactly?) at the boundary of a zone as defined by a read in ascii file (containing the density profile). Problem, I think is that one can end up with zones that have no volume. Should fix this with a check that says we can only subdivide zones from the input file if the velocity boundary is sufficiently different (above some threshold difference, or something).
| 1.0 | zone boundaries with read in models - Problems can arise if one tries to set the inner/outer boundary value for the velocity (in the model part of the yaml file) to be very close (exactly?) at the boundary of a zone as defined by a read in ascii file (containing the density profile). Problem, I think is that one can end up with zones that have no volume. Should fix this with a check that says we can only subdivide zones from the input file if the velocity boundary is sufficiently different (above some threshold difference, or something).
| priority | zone boundaries with read in models problems can arise if one tries to set the inner outer boundary value for the velocity in the model part of the yaml file to be very close exactly at the boundary of a zone as defined by a read in ascii file containing the density profile problem i think is that one can end up with zones that have no volume should fix this with a check that says we can only subdivide zones from the input file if the velocity boundary is sufficiently different above some threshold difference or something | 1 |
683,258 | 23,374,487,278 | IssuesEvent | 2022-08-11 00:17:56 | lxndr-rl/UAE-SICAU | https://api.github.com/repos/lxndr-rl/UAE-SICAU | closed | Añadir vista para el desglose de notas | enhancement medium-priority | Con #19 se llegó a la conclusión de añadir estos nuevos datos que ofrece el sistema de notas oficial (ya compatible por el API) dentro de la vista de SICAU.

| 1.0 | Añadir vista para el desglose de notas - Con #19 se llegó a la conclusión de añadir estos nuevos datos que ofrece el sistema de notas oficial (ya compatible por el API) dentro de la vista de SICAU.

| priority | añadir vista para el desglose de notas con se llegó a la conclusión de añadir estos nuevos datos que ofrece el sistema de notas oficial ya compatible por el api dentro de la vista de sicau | 1 |
659,975 | 21,946,544,249 | IssuesEvent | 2022-05-24 01:40:05 | Accident-Prone/Visceral-Carnage_GAME | https://api.github.com/repos/Accident-Prone/Visceral-Carnage_GAME | closed | Ranged AI gets stuck | bug Priority: Medium Status: Assigned | **Describe the bug**
Ranged enemy AI was wandering to a location it couldn't get to, causing it to get stuck and not attack player.
**Version Found**
Visceral Carnage Playtest Release v0.1 Alpha
**To Reproduce**
Steps to reproduce the behaviour:
1. Start game
2. Observe range AI's movements.
**Expected Behaviour**
Not get stuck. Should have similar behaviour to melee AI.
**Screenshots**
If applicable, add screenshots to help explain you problem.
**Desktop (please complete the following information):**
OS: Unreal Engine version 4.27.2
**Additional context**
Add any other context about the problem here. | 1.0 | Ranged AI gets stuck - **Describe the bug**
Ranged enemy AI was wandering to a location it couldn't get to, causing it to get stuck and not attack player.
**Version Found**
Visceral Carnage Playtest Release v0.1 Alpha
**To Reproduce**
Steps to reproduce the behaviour:
1. Start game
2. Observe range AI's movements.
**Expected Behaviour**
Not get stuck. Should have similar behaviour to melee AI.
**Screenshots**
If applicable, add screenshots to help explain you problem.
**Desktop (please complete the following information):**
OS: Unreal Engine version 4.27.2
**Additional context**
Add any other context about the problem here. | priority | ranged ai gets stuck describe the bug ranged enemy ai was wandering to a location it couldn t get to causing it to get stuck and not attack player version found visceral carnage playtest release alpha to reproduce steps to reproduce the behaviour start game observe range ai s movements expected behaviour not get stuck should have similar behaviour to melee ai screenshots if applicable add screenshots to help explain you problem desktop please complete the following information os unreal engine version additional context add any other context about the problem here | 1 |
306,912 | 9,412,789,600 | IssuesEvent | 2019-04-10 05:44:49 | S0lRaK/cmps-253-rec-center-app | https://api.github.com/repos/S0lRaK/cmps-253-rec-center-app | closed | Login as dropdown from navbar | priority: medium | Show the input for _username_ and _password_ when clicking the Login from the navbar.
It will show a box under that same element with the inputs and button to confirm. | 1.0 | Login as dropdown from navbar - Show the input for _username_ and _password_ when clicking the Login from the navbar.
It will show a box under that same element with the inputs and button to confirm. | priority | login as dropdown from navbar show the input for username and password when clicking the login from the navbar it will show a box under that same element with the inputs and button to confirm | 1 |
648,128 | 21,176,444,723 | IssuesEvent | 2022-04-08 00:43:40 | PrimeBIue/mini-AmazonSolution | https://api.github.com/repos/PrimeBIue/mini-AmazonSolution | closed | Login Form | medium priority user story | Have either a single button login or a login with username and password (depending on what's possible) | 1.0 | Login Form - Have either a single button login or a login with username and password (depending on what's possible) | priority | login form have either a single button login or a login with username and password depending on what s possible | 1 |
56,250 | 3,078,631,851 | IssuesEvent | 2015-08-21 11:40:40 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Странности с bbcode | bug imported Priority-Medium | _From [reaor...@gmail.com](https://code.google.com/u/102418317896447533964/) on August 08, 2011 05:03:25_
Странности с bbcode:
1) [code][/code] и смайлы (1.png)
2) [b] [i] [u] [s] 2 [/s] [/u] [/i] [/b] правильно отображается (2.png)
[b] [s] [i] [u] 2 [/u] [/i] [/s] [/b] неправильно (3.png)
[b] [i] [u] [s] 2 [/b] [/s] [/i] [/u] неправильно (4.png)
**Attachment:** [1.png 2.png 3.png 4.png](http://code.google.com/p/flylinkdc/issues/detail?id=527)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=527_ | 1.0 | Странности с bbcode - _From [reaor...@gmail.com](https://code.google.com/u/102418317896447533964/) on August 08, 2011 05:03:25_
Странности с bbcode:
1) [code][/code] и смайлы (1.png)
2) [b] [i] [u] [s] 2 [/s] [/u] [/i] [/b] правильно отображается (2.png)
[b] [s] [i] [u] 2 [/u] [/i] [/s] [/b] неправильно (3.png)
[b] [i] [u] [s] 2 [/b] [/s] [/i] [/u] неправильно (4.png)
**Attachment:** [1.png 2.png 3.png 4.png](http://code.google.com/p/flylinkdc/issues/detail?id=527)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=527_ | priority | странности с bbcode from on august странности с bbcode и смайлы png правильно отображается png неправильно png неправильно png attachment original issue | 1 |
54,686 | 3,070,968,759 | IssuesEvent | 2015-08-19 09:02:31 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Отсутствует подкраска файлов в ADLSearch | bug imported Priority-Medium Usability | _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on August 01, 2009 16:46:59_
Пользуюсь FlylinkDC++ r389 -build-2353.
В ней есть очень удобная вещь ADL-поиск. Но в нем почему-то не
выделяются цветом файлы, которые уже у меня есть или которые
скачивались ранее. Конечно можно по правой кнопке выбрать "перейти в
папку" и там уже посмотреть на ситуацию. Но это если файлов немного и
они расположены в небольшом количестве папок. А если файлов
отфильтруется несколько сотен и они находятся в десятках папок, то
такой подход уже не очень удобен. Приходится тыкать вслепую и часто
переходить в одну и ту же папку. Кстати, в ADL-поиске строка путь тоже
пустая и получается, что визуально сразу нельзя прикинуть расположение
файлов, пока не "перейдешь в папку"
Если есть возможность настроть такие опции, то подскажите пожалуйста
как. Если в текущей версии это не настраивается, то хотелось бы видеть
такие улучшения в следующих релизах вашей программы.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=31_ | 1.0 | Отсутствует подкраска файлов в ADLSearch - _From [Pavel.Pimenov@gmail.com](https://code.google.com/u/Pavel.Pimenov@gmail.com/) on August 01, 2009 16:46:59_
Пользуюсь FlylinkDC++ r389 -build-2353.
В ней есть очень удобная вещь ADL-поиск. Но в нем почему-то не
выделяются цветом файлы, которые уже у меня есть или которые
скачивались ранее. Конечно можно по правой кнопке выбрать "перейти в
папку" и там уже посмотреть на ситуацию. Но это если файлов немного и
они расположены в небольшом количестве папок. А если файлов
отфильтруется несколько сотен и они находятся в десятках папок, то
такой подход уже не очень удобен. Приходится тыкать вслепую и часто
переходить в одну и ту же папку. Кстати, в ADL-поиске строка путь тоже
пустая и получается, что визуально сразу нельзя прикинуть расположение
файлов, пока не "перейдешь в папку"
Если есть возможность настроть такие опции, то подскажите пожалуйста
как. Если в текущей версии это не настраивается, то хотелось бы видеть
такие улучшения в следующих релизах вашей программы.
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=31_ | priority | отсутствует подкраска файлов в adlsearch from on august пользуюсь flylinkdc build в ней есть очень удобная вещь adl поиск но в нем почему то не выделяются цветом файлы которые уже у меня есть или которые скачивались ранее конечно можно по правой кнопке выбрать перейти в папку и там уже посмотреть на ситуацию но это если файлов немного и они расположены в небольшом количестве папок а если файлов отфильтруется несколько сотен и они находятся в десятках папок то такой подход уже не очень удобен приходится тыкать вслепую и часто переходить в одну и ту же папку кстати в adl поиске строка путь тоже пустая и получается что визуально сразу нельзя прикинуть расположение файлов пока не перейдешь в папку если есть возможность настроть такие опции то подскажите пожалуйста как если в текущей версии это не настраивается то хотелось бы видеть такие улучшения в следующих релизах вашей программы original issue | 1 |
336,316 | 10,180,203,677 | IssuesEvent | 2019-08-09 09:42:19 | Jackodb/Visualization | https://api.github.com/repos/Jackodb/Visualization | closed | Values on x-axis | Medium priority | Dash uses a default interval on the x-axis which makes it quite hard to manually adjust it. I need to find a way to get price values on the x-axis while not displaying them all since that will make it very unclear (tested before). | 1.0 | Values on x-axis - Dash uses a default interval on the x-axis which makes it quite hard to manually adjust it. I need to find a way to get price values on the x-axis while not displaying them all since that will make it very unclear (tested before). | priority | values on x axis dash uses a default interval on the x axis which makes it quite hard to manually adjust it i need to find a way to get price values on the x axis while not displaying them all since that will make it very unclear tested before | 1 |
26,108 | 2,684,175,748 | IssuesEvent | 2015-03-28 18:38:14 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Quake-Mode ConEmu does not hide on loss of focus. | 1 star bug imported Priority-Medium | _From [MartinSG...@gmail.com](https://code.google.com/u/116306280495041062290/) on September 10, 2012 13:02:14_
Required information! OS version: Win7 SP1 x64 ConEmu version: 120909 Far version (if you are using Far Manager): ? *Bug description* Quake-Mode ConEmu does not hide on loss of focus. *Steps to reproduction* 1. Ensure "Quake style slide down" option is selected.
2. Use Win+c to ensure ConEmu is visible
3. Click on another window outside ConEmu (so that ConEmu loses focus).
4. Nothing happens.
*Expected Behaviour* ConEmu disappears after 3 seconds.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=702_ | 1.0 | Quake-Mode ConEmu does not hide on loss of focus. - _From [MartinSG...@gmail.com](https://code.google.com/u/116306280495041062290/) on September 10, 2012 13:02:14_
Required information! OS version: Win7 SP1 x64 ConEmu version: 120909 Far version (if you are using Far Manager): ? *Bug description* Quake-Mode ConEmu does not hide on loss of focus. *Steps to reproduction* 1. Ensure "Quake style slide down" option is selected.
2. Use Win+c to ensure ConEmu is visible
3. Click on another window outside ConEmu (so that ConEmu loses focus).
4. Nothing happens.
*Expected Behaviour* ConEmu disappears after 3 seconds.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=702_ | priority | quake mode conemu does not hide on loss of focus from on september required information os version conemu version far version if you are using far manager bug description quake mode conemu does not hide on loss of focus steps to reproduction ensure quake style slide down option is selected use win c to ensure conemu is visible click on another window outside conemu so that conemu loses focus nothing happens expected behaviour conemu disappears after seconds original issue | 1 |
174,398 | 6,539,749,308 | IssuesEvent | 2017-09-01 12:48:18 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | Removed transaction is shown in 1-1 chat | bug medium-priority | ### Description
*Type*: Bug
*Summary*: There are 2 transactions in Unsigned transaction screen. If remove first one then it's shown as "sent" in 1-1 chat for both sender and recipient. Expected: transaction is removed and 1-1 chat has no new messages about this transaction. Only signed transactions should be shown in 1-1 chat. Note: Issue does not happens if unsigned transactions contain 1 transaction that is removed.
#### Expected behavior
transaction is removed and 1-1 chat has no new messages about this transaction.
#### Actual behavior
If remove first one then it's shown as "sent" in 1-1 chat for both sender and recipient.

### Reproduction
Video: https://drive.google.com/open?id=0Bz3t9zSg1wb7ZGE4UENLQWMyWHM
- Open Status
- Open 1-1 chat
- send 1 transaction of 0.1 ETH (do not sign it, close Unsigned transaction screen - tap on cross in the top left corner)
- send second transaction of 0.2 ETH
- on Unsigned transaction screen remove first transaction of 0.1 ETH (tap on cross next to it). As a result
1. Transaction is removed from Unsigned transaction screen (Expected)
2. New send message 0.1 ETH is shown for both your contact and recipient in 1-1 chat (Issue)
### Additional Information
* Status version: 0.9.9
* Operating System:
Real device Samsung Galaxy S6, Android 6.0.1
Real device iPhone 6s, iOS 10.2.1 | 1.0 | Removed transaction is shown in 1-1 chat - ### Description
*Type*: Bug
*Summary*: There are 2 transactions in Unsigned transaction screen. If remove first one then it's shown as "sent" in 1-1 chat for both sender and recipient. Expected: transaction is removed and 1-1 chat has no new messages about this transaction. Only signed transactions should be shown in 1-1 chat. Note: Issue does not happens if unsigned transactions contain 1 transaction that is removed.
#### Expected behavior
transaction is removed and 1-1 chat has no new messages about this transaction.
#### Actual behavior
If remove first one then it's shown as "sent" in 1-1 chat for both sender and recipient.

### Reproduction
Video: https://drive.google.com/open?id=0Bz3t9zSg1wb7ZGE4UENLQWMyWHM
- Open Status
- Open 1-1 chat
- send 1 transaction of 0.1 ETH (do not sign it, close Unsigned transaction screen - tap on cross in the top left corner)
- send second transaction of 0.2 ETH
- on Unsigned transaction screen remove first transaction of 0.1 ETH (tap on cross next to it). As a result
1. Transaction is removed from Unsigned transaction screen (Expected)
2. New send message 0.1 ETH is shown for both your contact and recipient in 1-1 chat (Issue)
### Additional Information
* Status version: 0.9.9
* Operating System:
Real device Samsung Galaxy S6, Android 6.0.1
Real device iPhone 6s, iOS 10.2.1 | priority | removed transaction is shown in chat description type bug summary there are transactions in unsigned transaction screen if remove first one then it s shown as sent in chat for both sender and recipient expected transaction is removed and chat has no new messages about this transaction only signed transactions should be shown in chat note issue does not happens if unsigned transactions contain transaction that is removed expected behavior transaction is removed and chat has no new messages about this transaction actual behavior if remove first one then it s shown as sent in chat for both sender and recipient reproduction video open status open chat send transaction of eth do not sign it close unsigned transaction screen tap on cross in the top left corner send second transaction of eth on unsigned transaction screen remove first transaction of eth tap on cross next to it as a result transaction is removed from unsigned transaction screen expected new send message eth is shown for both your contact and recipient in chat issue additional information status version operating system real device samsung galaxy android real device iphone ios | 1 |
77,837 | 3,507,286,316 | IssuesEvent | 2016-01-08 12:24:19 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | opened | Talent [Ferocious Inspiration] (BB #819) | Category: Spells migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 25.02.2015 21:29:58 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/819
<hr>
Talent [Ferocious Inspiration] (http://tbc.wowroad.info/?spell=34460)
How should work: When your pet scores a critical hit, all party members have all damage increased by 3% for 10 sec.
How it works now: The effect is applied only at the first stroke a pet, no matter whether it was a blow critical. Further talent no longer works at all, only helps relog. And as far as I have noticed in the group - it does not work. | 1.0 | Talent [Ferocious Inspiration] (BB #819) - This issue was migrated from bitbucket.
**Original Reporter:** Alex_Step
**Original Date:** 25.02.2015 21:29:58 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** new
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/819
<hr>
Talent [Ferocious Inspiration] (http://tbc.wowroad.info/?spell=34460)
How should work: When your pet scores a critical hit, all party members have all damage increased by 3% for 10 sec.
How it works now: The effect is applied only at the first stroke a pet, no matter whether it was a blow critical. Further talent no longer works at all, only helps relog. And as far as I have noticed in the group - it does not work. | priority | talent bb this issue was migrated from bitbucket original reporter alex step original date gmt original priority major original type bug original state new direct link talent how should work when your pet scores a critical hit all party members have all damage increased by for sec how it works now the effect is applied only at the first stroke a pet no matter whether it was a blow critical further talent no longer works at all only helps relog and as far as i have noticed in the group it does not work | 1 |
476,858 | 13,751,338,818 | IssuesEvent | 2020-10-06 13:14:46 | enso-org/ide | https://api.github.com/repos/enso-org/ide | closed | IDE fails to start if the `main` method is defined as `here.main` | Category: IDE Change: Non-Breaking Difficulty: Core Contributor Priority: Medium Type: Bug | <!--
Please ensure that you are using the latest version of Enso IDE before reporting
the bug! It may have been fixed since.
-->
### General Summary
As in title.
### Steps to Reproduce
Try opening project if the `Main.enso` is as following:
```
here.main = 50
```
### Expected Result
IDE opens and enters the `main` method definition.
### Actual Result
IDE fails to start, complaining that `main` cannot be found.
### Enso Version
3d0ea7d6c77e46996dcede07138a9b9da97453a9
| 1.0 | IDE fails to start if the `main` method is defined as `here.main` - <!--
Please ensure that you are using the latest version of Enso IDE before reporting
the bug! It may have been fixed since.
-->
### General Summary
As in title.
### Steps to Reproduce
Try opening project if the `Main.enso` is as following:
```
here.main = 50
```
### Expected Result
IDE opens and enters the `main` method definition.
### Actual Result
IDE fails to start, complaining that `main` cannot be found.
### Enso Version
3d0ea7d6c77e46996dcede07138a9b9da97453a9
| priority | ide fails to start if the main method is defined as here main please ensure that you are using the latest version of enso ide before reporting the bug it may have been fixed since general summary as in title steps to reproduce try opening project if the main enso is as following here main expected result ide opens and enters the main method definition actual result ide fails to start complaining that main cannot be found enso version | 1 |