Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (stringclasses 1) | created_at (stringlengths 19) | repo (stringlengths 5-112) | repo_url (stringlengths 34-141) | action (stringclasses 3) | title (stringlengths 1-957) | labels (stringlengths 4-795) | body (stringlengths 1-259k) | index (stringclasses 12) | text_combine (stringlengths 96-259k) | label (stringclasses 2) | text (stringlengths 96-252k) | binary_label (int64, 0-1)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
793,675 | 28,006,880,913 | IssuesEvent | 2023-03-27 15:47:44 | janus-idp/backstage-showcase | https://api.github.com/repos/janus-idp/backstage-showcase | closed | Update Topology plugin to latest version | kind/dependency upgrade priority/medium | ### What do you want to improve?
Update Topology plugin to latest version | 1.0 | Update Topology plugin to latest version - ### What do you want to improve?
Update Topology plugin to latest version | priority | update topology plugin to latest version what do you want to improve update topology plugin to latest version | 1 |
488,260 | 14,075,046,964 | IssuesEvent | 2020-11-04 08:25:51 | makers-for-life/makair-control-ui | https://api.github.com/repos/makers-for-life/makair-control-ui | closed | Add machine details modal | class:feature priority:medium | Add a new modal providing real-time advanced information, such as:
* The machine serial number (`BootMessage.device_id`)
* The firmware mode, eg. production (`BootMessage.mode`)
* The firmware version (`BootMessage.version`)
* The current ventilation phase (`DataSnapshot.phase`)
* The current valve positions (`DataSnapshot.blower_valve_position` + `DataSnapshot.patient_valve_position`)
* The current blower RPM (`DataSnapshot.blower_rpm`)
* The current battery level in V (`DataSnapshot.battery_level`)
* The number of cycles since the MCU booted (`MachineStateSnapshot.cycle`)
* The number of milliseconds since the MCU booted (`DataSnapshot.systick`)
As well, provide some internal information about the UI:
* The UI version and build chain (release or production) used and Git commit hash (if possible)
* The UI uptime
Note that this modal could also be used to:
1. Configure the UI language, see issue #26
2. View the machine's "odometer", see issue #21
_Note that some information might be updated live if the machine is running, eg. the angle of the valves._ | 1.0 | Add machine details modal - Add a new modal providing real-time advanced information, such as:
* The machine serial number (`BootMessage.device_id`)
* The firmware mode, eg. production (`BootMessage.mode`)
* The firmware version (`BootMessage.version`)
* The current ventilation phase (`DataSnapshot.phase`)
* The current valve positions (`DataSnapshot.blower_valve_position` + `DataSnapshot.patient_valve_position`)
* The current blower RPM (`DataSnapshot.blower_rpm`)
* The current battery level in V (`DataSnapshot.battery_level`)
* The number of cycles since the MCU booted (`MachineStateSnapshot.cycle`)
* The number of milliseconds since the MCU booted (`DataSnapshot.systick`)
As well, provide some internal information about the UI:
* The UI version and build chain (release or production) used and Git commit hash (if possible)
* The UI uptime
Note that this modal could also be used to:
1. Configure the UI language, see issue #26
2. View the machine's "odometer", see issue #21
_Note that some information might be updated live if the machine is running, eg. the angle of the valves._ | priority | add machine details modal add a new modal providing real time advanced information such as the machine serial number bootmessage device id the firmware mode eg production bootmessage mode the firmware version bootmessage version the current ventilation phase datasnapshot phase the current valve positions datasnapshot blower valve position datasnapshot patient valve position the current blower rpm datasnapshot blower rpm the current battery level in v datasnapshot battery level the number of cycles since the mcu booted machinestatesnapshot cycle the number of milliseconds since the mcu booted datasnapshot systick as well provide some internal information about the ui the ui version and build chain release or production used and git commit hash if possible the ui uptime note that this modal could also be used to configure the ui language see issue view the machine s odometer see issue note that some information might be updated live if the machine is running eg the angle of the valves | 1 |
735,812 | 25,415,252,600 | IssuesEvent | 2022-11-22 23:08:15 | hedgedoc/react-client | https://api.github.com/repos/hedgedoc/react-client | closed | Add cypressId to History Card Button | priority: 2 medium | <!-- If you're requesting a new feature, that isn't part of this project yet, then please consider filling out a "feature request" instead! -->
<!-- If you want to report a bug or an error, then please consider filling out a "bug report" instead! -->
**Which part of the project should be enhanced?**
History Page Tests
**Is your enhancement request related to a problem? Please describe.**
The card button does not have a cypressId; this seems to be a strong indicator that there is something missing in the tests.
**Describe the solution you'd like**
Add a cypressId and add to the tests.
| 1.0 | Add cypressId to History Card Button - <!-- If you're requesting a new feature, that isn't part of this project yet, then please consider filling out a "feature request" instead! -->
<!-- If you want to report a bug or an error, then please consider filling out a "bug report" instead! -->
**Which part of the project should be enhanced?**
History Page Tests
**Is your enhancement request related to a problem? Please describe.**
The card button does not have a cypressId; this seems to be a strong indicator that there is something missing in the tests.
**Describe the solution you'd like**
Add a cypressId and add to the tests.
| priority | add cypressid to history card button which part of the project should be enhanced history page tests is your enhancement request related to a problem please describe the card button does not have a cypressid this seems to be a strong indicator that there is something missing in the tests describe the solution you d like add a cypressid and add to the tests | 1 |
748,775 | 26,136,832,692 | IssuesEvent | 2022-12-29 13:11:31 | rtCamp/blank-theme | https://api.github.com/repos/rtCamp/blank-theme | closed | Fix incorrect namespace and class name(s) | [Type] Enhancement [Priority] Medium | ## Summary
The main class, the `Blank_Theme` class should be in snake-case with capitalized letters since it's a class. It's `BLANK_THEME` currently. The namespace also seems to have an incorrect value. Skim through the class names in the files and ensure classes and namespaces are named correctly.
<!-- A brief description of the task. -->
## References
- [class-blank-theme.php file](https://github.com/rtCamp/blank-theme/blob/master/inc/classes/class-blank-theme.php)
- [WordPress Naming Convention](https://developer.wordpress.org/coding-standards/wordpress-coding-standards/php/#naming-conventions)
- https://github.com/rtCamp/blank-theme/issues/79#issuecomment-1216841789
<!-- Add helpful links, design, demo site, documents links, etc. -->
| 1.0 | Fix incorrect namespace and class name(s) - ## Summary
The main class, the `Blank_Theme` class should be in snake-case with capitalized letters since it's a class. It's `BLANK_THEME` currently. The namespace also seems to have an incorrect value. Skim through the class names in the files and ensure classes and namespaces are named correctly.
<!-- A brief description of the task. -->
## References
- [class-blank-theme.php file](https://github.com/rtCamp/blank-theme/blob/master/inc/classes/class-blank-theme.php)
- [WordPress Naming Convention](https://developer.wordpress.org/coding-standards/wordpress-coding-standards/php/#naming-conventions)
- https://github.com/rtCamp/blank-theme/issues/79#issuecomment-1216841789
<!-- Add helpful links, design, demo site, documents links, etc. -->
| priority | fix incorrect namespace and class name s summary the main class the blank theme class should be in snake case with capitalized letters since it s a class it s blank theme currently the namespace also seems to have an incorrect value skim through the class names in the files and ensure classes and namespaces are named correctly references | 1 |
271,663 | 8,486,462,902 | IssuesEvent | 2018-10-26 10:58:16 | opentargets/platform | https://api.github.com/repos/opentargets/platform | opened | Investigate expression data p-value cut-off in the pipeline | Kind: Data Priority: Medium Topic: Pipeline | The p-value cut-off for expression data from Expression atlas is 0.05 but it is 1e-10 in the pipeline (any p-value above 1e-10 is converted to 0 resulting in an association score of 0).
- [ ] Investigate whether we have always used 1e-10 in the pipeline (look at data downloads).
- [ ] Are all experiments equally affected by this? Look at distributions of p-values.
| 1.0 | Investigate expression data p-value cut-off in the pipeline - The p-value cut-off for expression data from Expression atlas is 0.05 but it is 1e-10 in the pipeline (any p-value above 1e-10 is converted to 0 resulting in an association score of 0).
- [ ] Investigate whether we have always used 1e-10 in the pipeline (look at data downloads).
- [ ] Are all experiments equally affected by this? Look at distributions of p-values.
| priority | investigate expression data p value cut off in the pipeline the p value cut off for expression data from expression atlas is but it is in the pipeline any p value above is converted to resulting in an association score of investigate whether we have always used in the pipeline look at data downloads are all experiments equally affected by this look at distributions of p values | 1 |
25,956 | 2,684,063,320 | IssuesEvent | 2015-03-28 16:30:01 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | Window size changes when one of the consoles is closed | 1 star bug duplicate imported Priority-Medium | _From [cca...@gmail.com](https://code.google.com/u/115607388065392232035/) on April 13, 2012 04:24:50_
Required information! OS version: WinXP SP3 x86 ConEmu version: 20120401в
Far version: far2 bis27 *Bug description* In some cases the ConEmu window spontaneously changes its width when one of several consoles is closed. *Steps to reproduction* 1. launch conemu+far /w from a shortcut
2. maximize with Alt-F9
3. launch a new console far /w via Win-W Enter
4. un-maximize with Alt-F9
5. close the second Far normally
observe the problem
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=522_ | 1.0 | Window size changes when one of the consoles is closed - _From [cca...@gmail.com](https://code.google.com/u/115607388065392232035/) on April 13, 2012 04:24:50_
Required information! OS version: WinXP SP3 x86 ConEmu version: 20120401в
Far version: far2 bis27 *Bug description* In some cases the ConEmu window spontaneously changes its width when one of several consoles is closed. *Steps to reproduction* 1. launch conemu+far /w from a shortcut
2. maximize with Alt-F9
3. launch a new console far /w via Win-W Enter
4. un-maximize with Alt-F9
5. close the second Far normally
observe the problem
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=522_ | priority | window size changes when one of the consoles is closed from on april required information os version winxp conemu version far version bug description in some cases the conemu window spontaneously changes its width when one of several consoles is closed steps to reproduction launch conemu far w from a shortcut maximize with alt launch a new console far w via win w enter un maximize with alt close the second far normally observe the problem original issue | 1 |
386,463 | 11,439,543,982 | IssuesEvent | 2020-02-05 07:34:00 | certificate-helper/TLS-Inspector | https://api.github.com/repos/certificate-helper/TLS-Inspector | closed | Mail Controller is not dismissed after sending feedback | bug medium priority user interface | **Affected Version:**
2.0.1
**Is this a Test Flight version or the App Store version?**
Both
**Device and iOS Version:**
All
**What steps will reproduce the problem?**
1. Fill out the contact form
2. Tap the send button
**What is the expected output?**
The email sheet goes away
**What do you see instead?**
The email sheet overstays its welcome
**Please provide any additional information below.**
| 1.0 | Mail Controller is not dismissed after sending feedback - **Affected Version:**
2.0.1
**Is this a Test Flight version or the App Store version?**
Both
**Device and iOS Version:**
All
**What steps will reproduce the problem?**
1. Fill out the contact form
2. Tap the send button
**What is the expected output?**
The email sheet goes away
**What do you see instead?**
The email sheet overstays its welcome
**Please provide any additional information below.**
| priority | mail controller is not dismissed after sending feedback affected version is this a test flight version or the app store version both device and ios version all what steps will reproduce the problem fill out the contact form tap the send button what is the expected output the email sheet goes away what do you see instead the email sheet overstays its welcome please provide any additional information below | 1 |
536,803 | 15,715,294,346 | IssuesEvent | 2021-03-28 00:26:55 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | Need clearer indicator that combat mode is on. | Area: UI Priority: 2-medium Type: Feature | Right now combat mode still basically locks up most interactions. It's not *that* bad anymore because there's a button now at least, but it could still be better.
Maybe like make the cursor red? Would require custom mouse cursors. | 1.0 | Need clearer indicator that combat mode is on. - Right now combat mode still basically locks up most interactions. It's not *that* bad anymore because there's a button now at least, but it could still be better.
Maybe like make the cursor red? Would require custom mouse cursors. | priority | need clearer indicator that combat mode is on right now combat mode still basically locks up most interactions it s not that bad anymore because there s a button now at least but it could still be better maybe like make the cursor red would require custom mouse cursors | 1 |
211,360 | 7,200,584,480 | IssuesEvent | 2018-02-05 19:33:21 | buttercup/buttercup-browser-extension | https://api.github.com/repos/buttercup/buttercup-browser-extension | closed | Injected Buttercup buttons change forms tab index | Priority: Medium Status: Available Type: Bug | The buttons that are injected into the page are changing forms' tab indexes. For example, if I press Tab after entering my username, the password field won't be focused; instead the Buttercup button will be focused. I think we should change the tab index on Buttercup buttons to negative values so they don't get focused automatically.

 | 1.0 | Injected Buttercup buttons change forms tab index - The buttons that are injected into the page are changing forms' tab indexes. For example, if I press Tab after entering my username, the password field won't be focused; instead the Buttercup button will be focused. I think we should change the tab index on Buttercup buttons to negative values so they don't get focused automatically.

 | priority | injected buttercup buttons change forms tab index the buttons that are injected into the page are changing forms tab indexes for example if i press tab after entering my username the password field won t be focused instead the buttercup button will be focused i think we should change the tab index on buttercup buttons to negative values so they don t get focused automatically | 1 |
683,944 | 23,400,417,215 | IssuesEvent | 2022-08-12 07:20:25 | tensorchord/envd | https://api.github.com/repos/tensorchord/envd | closed | feat(buildkit): Export cache to registry | priority/2-medium 💛 type/feature 💡 status/needs-discussion 🪧 difficulty/medium | ## Description
The build cache can be exported to the registry. We need to investigate the benefits of AI/ML scenarios. | 1.0 | feat(buildkit): Export cache to registry - ## Description
The build cache can be exported to the registry. We need to investigate the benefits of AI/ML scenarios. | priority | feat buildkit export cache to registry description the build cache can be exported to the registry we need to investigate the benefits of ai ml scenarios | 1 |
632,155 | 20,175,134,349 | IssuesEvent | 2022-02-10 13:55:42 | reconness/reconness-frontend | https://api.github.com/repos/reconness/reconness-frontend | closed | Integrate microservices for Marketplace features | bug priority: medium severity: minor integration | Services are in repository https://github.com/reconness/reconness
- [x] Install agent from Marketplace
- [ ] Uninstall agent installed from the Marketplace
- [x] List agents in Marketplace | 1.0 | Integrate microservices for Marketplace features - Services are in repository https://github.com/reconness/reconness
- [x] Install agent from Marketplace
- [ ] Uninstall agent installed from the Marketplace
- [x] List agents in Marketplace | priority | integrate microservices for marketplace features services are in repository install agent from marketplace uninstall agent installed from the marketplace list agents in marketplace | 1 |
712,559 | 24,499,131,105 | IssuesEvent | 2022-10-10 11:17:55 | EESSI/eessi-bot-software-layer | https://api.github.com/repos/EESSI/eessi-bot-software-layer | opened | Make sure the same container image is used | priority:low type:enhancement difficulty:medium | Make sure that the same container image is used throughout the whole job. That is, it should be downloaded first, then the downloaded container image should be (re-)used.
This should mainly improve the consistency of the build environment.
A further improvement might be to download all needed container images first (maybe even at the start of the bot, or by an admin beforehand). | 1.0 | Make sure the same container image is used - Make sure that the same container image is used throughout the whole job. That is, it should be downloaded first, then the downloaded container image should be (re-)used.
This should mainly improve the consistency of the build environment.
A further improvement might be to download all needed container images first (maybe even at the start of the bot, or by an admin beforehand). | priority | make sure the same container image is used make sure that the same container image is used throughout the whole job that is it should be downloaded first then the downloaded container image should be re used this should mainly improve the consistency of the build environment a further improvement might be to download all needed container images first maybe even at the start of the bot or by an admin beforehand | 1 |
278,525 | 8,643,357,929 | IssuesEvent | 2018-11-25 17:10:20 | sohara/broue-cli | https://api.github.com/repos/sohara/broue-cli | closed | Use ember-cli-deploy with lightning pack | (re)engineering priority:medium | Basically same deployment strategy as currently using but it's encapsulated in an addon.
| 1.0 | Use ember-cli-deploy with lightning pack - Basically same deployment strategy as currently using but it's encapsulated in an addon.
| priority | use ember cli deploy with lightning pack basically same deployment strategy as currently using but it s encapsulated in an addon | 1 |
153,209 | 5,886,997,136 | IssuesEvent | 2017-05-17 05:42:42 | ThoughtWorksInc/treadmill | https://api.github.com/repos/ThoughtWorksInc/treadmill | closed | Use the CLI to destroy a Treadmill cell | Done Feature-AWS Support Priority-High Role-Administrator Size-Medium (M) | So that:
I can later script this into some kind of automated flow.
Tasks:
Extend the existing CLI to destroy a cell.
"Assumption:
Story '5c4533' has been played." | 1.0 | Use the CLI to destroy a Treadmill cell - So that:
I can later script this into some kind of automated flow.
Tasks:
Extend the existing CLI to destroy a cell.
"Assumption:
Story '5c4533' has been played." | priority | use the cli to destroy a treadmill cell so that i can later script this into some kind of automated flow tasks extend the existing cli to destroy a cell assumption story has been played | 1 |
25,412 | 2,683,695,122 | IssuesEvent | 2015-03-28 06:54:05 | prikhi/pencil | https://api.github.com/repos/prikhi/pencil | closed | Program does not allow creating more than 12 columns in a table element | 1 star bug duplicate imported Priority-Medium | _From [salmaja@yandex.ru](https://code.google.com/u/salmaja@yandex.ru/) on July 18, 2014 16:53:41_
What steps will reproduce the problem? 1. Create a new file
2. Drag-n-drop the "table" element from Native UI
3. Add 13 columns there. The element will no longer look like a table. If you delete 1 column, the element will display properly.
Hope that you can fix it and the program will allow creating tables with 20 or 25 columns.
Version 2.0.5, Linux Mint 17
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=643_ | 1.0 | Program does not allow creating more than 12 columns in a table element - _From [salmaja@yandex.ru](https://code.google.com/u/salmaja@yandex.ru/) on July 18, 2014 16:53:41_
What steps will reproduce the problem? 1. Create a new file
2. Drag-n-drop the "table" element from Native UI
3. Add 13 columns there. The element will no longer look like a table. If you delete 1 column, the element will display properly.
Hope that you can fix it and the program will allow creating tables with 20 or 25 columns.
Version 2.0.5, Linux Mint 17
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=643_ | priority | program does not allow creating more than columns in a table element from on july what steps will reproduce the problem create a new file drag n drop the table element from native ui add columns there the element will no longer look like a table if you delete column the element will display properly hope that you can fix it and the program will allow creating tables with or columns version linux mint original issue | 1 |
558,462 | 16,534,164,869 | IssuesEvent | 2021-05-27 09:47:43 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Buddyboss platform does allow duplicate nickname when I edit a user through WP user edit page. | bug priority: medium | **Describe the bug**
Buddyboss platform does allow duplicate nickname when I edit a user through WP user edit page but it should not allow duplicate nickname since it breaks @mention & messages.
**To Reproduce**
Steps to reproduce the behavior:
1. Edit a user through WP user edit page and it will allow you to add duplicate nickname.
**Expected behavior**
It should not allow duplicate nickname from WP user edit page
| 1.0 | Buddyboss platform does allow duplicate nickname when I edit a user through WP user edit page. - **Describe the bug**
Buddyboss platform does allow duplicate nickname when I edit a user through WP user edit page but it should not allow duplicate nickname since it breaks @mention & messages.
**To Reproduce**
Steps to reproduce the behavior:
1. Edit a user through WP user edit page and it will allow you to add duplicate nickname.
**Expected behavior**
It should not allow duplicate nickname from WP user edit page
 | priority | buddyboss platform does allow duplicate nickname when i edit a user through wp user edit page describe the bug buddyboss platform does allow duplicate nickname when i edit a user through wp user edit page but it should not allow duplicate nickname since it breaks mention messages to reproduce steps to reproduce the behavior edit a user through wp user edit page and it will allow you to add duplicate nickname expected behavior it should not allow duplicate nickname from wp user edit page | 1 |
40,309 | 2,868,487,812 | IssuesEvent | 2015-06-05 19:07:25 | Pingus/pingus | https://api.github.com/repos/Pingus/pingus | closed | Editor should allow editing X/Y in ObjectProperties | auto-migrated Component-Editor Milestone-0.8.0 Priority-Medium Type-Enhancement | ```
Currently there is no way to see or edit X/Y in the editor, it should allow it
in the ObjectProperties dialog, just like Z index.
```
Original issue reported on code.google.com by `grum...@gmail.com` on 12 Oct 2011 at 1:26 | 1.0 | Editor should allow editing X/Y in ObjectProperties - ```
Currently there is no way to see or edit X/Y in the editor, it should allow it
in the ObjectProperties dialog, just like Z index.
```
Original issue reported on code.google.com by `grum...@gmail.com` on 12 Oct 2011 at 1:26 | priority | editor should allow editing x y in objectproperties currently there is no way to see or edit x y in the editor it should allow it in the objectproperties dialog just like z index original issue reported on code google com by grum gmail com on oct at | 1 |
173,506 | 6,525,795,675 | IssuesEvent | 2017-08-29 17:10:43 | minio/minio-java | https://api.github.com/repos/minio/minio-java | closed | Update minio-java to output new mint log format | priority: medium | `minio-java` tests are used in `mint` to test Minio server. `minio-java` logs need to be updated in the format below so that `mint` logs can be easily parsed.
```
{
"name":"mc", // SDK Name
"function":"testPresignGetObject", // Test function name
"description": "test function description (optional)", // Test function description
"args":"", // key value map, varName:value. Only arguments that devs may be interested in
"duration":"", // duration of the whole test
"status":"", // can be PASS, FAIL, NA
"alert":"failed to download pre-signed object(optional)", // error related, human readable message. Should be taken care of if present
"error":"stack-trace/exception message(only in case of failure)" // actual low level exception/error thrown by the program
}
``` | 1.0 | Update minio-java to output new mint log format - `minio-java` tests are used in `mint` to test Minio server. `minio-java` logs need to be updated in the format below so that `mint` logs can be easily parsed.
```
{
"name":"mc", // SDK Name
"function":"testPresignGetObject", // Test function name
"description": "test function description (optional)", // Test function description
"args":"", // key value map, varName:value. Only arguments that devs may be interested in
"duration":"", // duration of the whole test
"status":"", // can be PASS, FAIL, NA
"alert":"failed to download pre-signed object(optional)", // error related, human readable message. Should be taken care of if present
"error":"stack-trace/exception message(only in case of failure)" // actual low level exception/error thrown by the program
}
``` | priority | update minio java to output new mint log format minio java tests are used in mint to test minio server minio java logs need to be updated in the format below so that mint logs can be easily parsed name mc sdk name function testpresigngetobject test function name description test function description optional test function description args key value map varname value only arguments that devs may be interested in duration duration of the whole test status can be pass fail na alert failed to download pre signed object optional error related human readable message should be taken care of if present error stack trace exception message only in case of failure actual low level exception error thrown by the program | 1 |
696,772 | 23,915,840,634 | IssuesEvent | 2022-09-09 12:37:46 | vdjagilev/nmap-formatter | https://api.github.com/repos/vdjagilev/nmap-formatter | closed | Upgrade CodeQL to v2 | priority/medium type/other prop/pipeline | ```
Warning: CodeQL Action v1 will be deprecated on December 7th, 2022. Please upgrade to v2. For more information, see https://github.blog/changelog/2022-04-27-code-scanning-deprecation-of-codeql-action-v1/
```
| 1.0 | Upgrade CodeQL to v2 - ```
Warning: CodeQL Action v1 will be deprecated on December 7th, 2022. Please upgrade to v2. For more information, see https://github.blog/changelog/2022-04-27-code-scanning-deprecation-of-codeql-action-v1/
```
| priority | upgrade codeql to warning codeql action will be deprecated on december please upgrade to for more information see | 1 |
162,147 | 6,148,338,152 | IssuesEvent | 2017-06-27 17:36:45 | chaos/pdsh | https://api.github.com/repos/chaos/pdsh | closed | not user install | auto-migrated Priority-Medium Type-Enhancement | ```
Hello,
I have to install pdsh in a cluster, where I not have root permission, with a
particular user (userfoo).
But the application has to be used by all other user (myuser).
It is possible to install pdsh with these conditions?
I configure with --with-ssh --prefix=/opt/appli/pdsh/2.22
When I type pdsh, I get:
userfoo> module path "/opt/appli/pdsh/2.22/lib/pdsh" insicure
userfoo> "/opt/appli": World writable and sticky bit is not set
userfoo> Couldn't load any pdsh module
or
myuser> module path "/opt/appli/pdsh/2.22/lib/pdsh" insicure
myuser> module path "/opt/appli/pdsh/2.22/lib/pdsh": Owner not root, current
uid, or pdsh executable owner
userfoo> Couldn't load any pdsh module
```
Original issue reported on code.google.com by `man74...@gmail.com` on 5 Apr 2011 at 10:23
| 1.0 | not user install - ```
Hello,
I have to install pdsh in a cluster, where I not have root permission, with a
particular user (userfoo).
But the application has to be used by all other user (myuser).
It is possible to install pdsh with these conditions?
I configure with --with-ssh --prefix=/opt/appli/pdsh/2.22
When I type pdsh, I get:
userfoo> module path "/opt/appli/pdsh/2.22/lib/pdsh" insicure
userfoo> "/opt/appli": World writable and sticky bit is not set
userfoo> Couldn't load any pdsh module
or
myuser> module path "/opt/appli/pdsh/2.22/lib/pdsh" insicure
myuser> module path "/opt/appli/pdsh/2.22/lib/pdsh": Owner not root, current
uid, or pdsh executable owner
userfoo> Couldn't load any pdsh module
```
Original issue reported on code.google.com by `man74...@gmail.com` on 5 Apr 2011 at 10:23
| priority | not user install hello i have to install pdsh in a cluster where i not have root permission with a particular user userfoo but the application has to be used by all other user myuser it is possible to install pdsh with these conditions i configure with with ssh prefix opt appli pdsh when i type pdsh i get userfoo module path opt appli pdsh lib pdsh insicure userfoo opt appli world writable and sticky bit is not set userfoo couldn t load any pdsh module or myuser module path opt appli pdsh lib pdsh insicure myuser module path opt appli pdsh lib pdsh owner not root current uid or pdsh executable owner userfoo couldn t load any pdsh module original issue reported on code google com by gmail com on apr at | 1 |
498,275 | 14,404,944,479 | IssuesEvent | 2020-12-03 17:58:59 | SACOOP-PE/SIA-Analitica-PE | https://api.github.com/repos/SACOOP-PE/SIA-Analitica-PE | opened | Generate an alert bucket in the validator BDCC | medium priority | Generate a BDCC alerts bucket; these alerts will be kept separate from the errors bucket. | 1.0 | Generate an alert bucket in the validator BDCC - Generate a BDCC alerts bucket; these alerts will be kept separate from the errors bucket. | priority | generate an alert bucket in the validator bdcc generate a bdcc alerts bucket these alerts will be kept separate from the errors bucket | 1 |
380,391 | 11,259,861,967 | IssuesEvent | 2020-01-13 09:20:40 | OkunaOrg/okuna-api | https://api.github.com/repos/OkunaOrg/okuna-api | closed | Bug on switching community from private to public | bug priority:medium | > An Okuna user found a bug that occurs when switching a community from private to public. If you happened to deactivate the option for community members to invite new members into a private community and then switch the community to public, this setting is kept active, meaning people can't invite new members to a public community and the owner can't change this setting unless they switch back to private, adjust the setting accordingly, and switch back to public again.
> I could confirm and recreate this behavior in a test community I created for this purpose. (edited) | 1.0 | Bug on switching community from private to public - > An Okuna user found a bug that occurs when switching a community from private to public. If you happened to deactivate the option for community members to invite new members into a private community and then switch the community to public, this setting is kept active, meaning people can't invite new members to a public community and the owner can't change this setting unless they switch back to private, adjust the setting accordingly, and switch back to public again.
> I could confirm and recreate this behavior in a test community I created for this purpose. (edited) | priority | bug on switching community from private to public an okuna user found a bug that occurs when switching a community from private to public if you happened to deactivate the option for community members to invite new members into a private community and then switch the community to public this setting is kept active meaning people can t invite new members to a public community and the owner can t change this setting unless they switch back to private adjust the setting accordingly and switch back to public again i could confirm and recreate this behavior in a test community i created for this purpose edited | 1 |
775,129 | 27,219,922,197 | IssuesEvent | 2023-02-21 03:46:56 | Reyder95/Project-Vultura-3D-Unity | https://api.github.com/repos/Reyder95/Project-Vultura-3D-Unity | closed | Implement dropping and picking up dropped items | medium priority ready for development item | When a user drops an item (by dragging it off the hotbar or right clicking -> remove), the item should be dropped into a cargo box that gets left in space.
Perhaps additionally, when an item is dropped near a cargo box, that item gets placed in said cargo box. | 1.0 | Implement dropping and picking up dropped items - When a user drops an item (by dragging it off the hotbar or right clicking -> remove), the item should be dropped into a cargo box that gets left in space.
Perhaps additionally, when an item is dropped near a cargo box, that item gets placed in said cargo box. | priority | implement dropping and picking up dropped items when a user drops an item by dragging it off the hotbar or right clicking remove the item should be dropped into a cargo box that gets left in space perhaps additionally when an item is dropped near a cargo box that item gets placed in said cargo box | 1 |
803,881 | 29,192,978,624 | IssuesEvent | 2023-05-19 22:20:42 | CodeWithAloha/HIERR | https://api.github.com/repos/CodeWithAloha/HIERR | closed | Add select workshop question to the demographic survey | Medium Priority | This would be a dropdown with all of the workshop options.
This involves modifying the demographic survey migration code to include all of the options or finding a new way to implement the demographic survey. | 1.0 | Add select workshop question to the demographic survey - This would be a dropdown with all of the workshop options.
This involves modifying the demographic survey migration code to include all of the options or finding a new way to implement the demographic survey. | priority | add select workshop question to the demographic survey this would be a dropdown with all of the workshop options this involves modifying the demographic survey migration code to include all of the options or finding a new way to implement the demographic survey | 1 |
336,147 | 10,172,275,036 | IssuesEvent | 2019-08-08 10:16:34 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | Test: pmempool_transform/TEST[11, 14, 15, 18] fail with valgrind | Exposure: Medium OS: Windows Priority: 2 high Type: Bug |
# ISSUE: Test: pmempool_transform/TEST[11, 14, 15, 18] fail with valgrind
## Environment Information
- PMDK package version(s): src version 1.6+git99.gb19d5cc6b (from the logs below)
- OS(es) version(s): Fedora 30
- ndctl version(s): 65
- kernel version(s): 5.1.17-300.fc30.x86_64
## Please provide a reproduction of the bug:
```
./RUNTESTS pmempool_transform -s TEST11 -d force-enable -t all
./RUNTESTS pmempool_transform -s TEST15 -d force-enable -t all
./RUNTESTS pmempool_transform -s TEST11 -e force-enable -t all
./RUNTESTS pmempool_transform -s TEST14 -e force-enable -t all
./RUNTESTS pmempool_transform -s TEST15 -e force-enable -t all
./RUNTESTS pmempool_transform -s TEST18 -e force-enable -t all
```
## How often bug is revealed (always, often, rare): always
## Actual behavior:
```
./RUNTESTS pmempool_transform -s TEST11 -d force-enable -t all
pmempool_transform/TEST11: SETUP (all/pmem/debug/drd)
--74101:0: aspacem <<< SHOW_SEGMENTS: out_of_memory (177 segments)
--74101:0: aspacem 25 segment names in 25 slots
--74101:0: aspacem freelist is empty
--74101:0: aspacem (0,4,9) /usr/local/lib/valgrind/drd-amd64-linux
--74101:0: aspacem (1,48,8) /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool
--74101:0: aspacem (2,104,8) /usr/lib64/ld-2.29.so
--74101:0: aspacem (3,130,1) /tmp/vgdb-pipe-shared-mem-vgdb-74101-by-jenkins-on-localhost.localdomain
--74101:0: aspacem (4,207,8) /usr/local/lib/valgrind/vgpreload_core-amd64-linux.so
--74101:0: aspacem (5,265,8) /usr/local/lib/valgrind/vgpreload_drd-amd64-linux.so
--74101:0: aspacem (6,322,9) /home/jenkins/greg/pmdk/src/debug/libpmempool.so.1.0.0
--74101:0: aspacem (7,381,8) /home/jenkins/greg/pmdk/src/debug/libpmemblk.so.1.0.0
--74101:0: aspacem (8,439,8) /home/jenkins/greg/pmdk/src/debug/libpmemlog.so.1.0.0
--74101:0: aspacem (9,497,9) /home/jenkins/greg/pmdk/src/debug/libpmemobj.so.1.0.0
--74101:0: aspacem (10,555,8) /home/jenkins/greg/pmdk/src/debug/libpmem.so.1.0.0
--74101:0: aspacem (11,610,8) /usr/lib64/libndctl.so.6.14.0
--74101:0: aspacem (12,644,7) /usr/lib64/libdaxctl.so.1.2.1
--74101:0: aspacem (13,678,8) /usr/lib64/libdl-2.29.so
--74101:0: aspacem (14,707,8) /usr/lib64/libpthread-2.29.so
--74101:0: aspacem (15,741,9) /usr/lib64/libc-2.29.so
--74101:0: aspacem (16,769,9) /usr/lib64/libudev.so.1.6.13
--74101:0: aspacem (17,802,7) /usr/lib64/libuuid.so.1.3.0
--74101:0: aspacem (18,834,8) /usr/lib64/libkmod.so.2.3.3
--74101:0: aspacem (19,866,8) /usr/lib64/librt-2.29.so
--74101:0: aspacem (20,895,8) /usr/lib64/libgcc_s-9-20190503.so.1
--74101:0: aspacem (21,935,8) /usr/lib64/liblzma.so.5.2.4
--74101:0: aspacem (22,967,7) /usr/lib64/libz.so.1.2.11
--74101:0: aspacem (23,997,10) /dev/dax1.0
--74101:0: aspacem (24,1013,5) /dev/dax1.3
--74101:0: aspacem 0: RSVN 0000000000-00003fffff 4194304 ----- SmFixed
--74101:0: aspacem 1: file 0000400000-0000404fff 20480 r---- d=0xfd00 i=28113057 o=0 (1,48)
--74101:0: aspacem 2: file 0000405000-000045afff 352256 r-xT- d=0xfd00 i=28113057 o=20480 (1,48)
--74101:0: aspacem 3: file 000045b000-000047dfff 143360 r---- d=0xfd00 i=28113057 o=372736 (1,48)
--74101:0: aspacem 4: RSVN 000047e000-000047efff 4096 ----- SmFixed
--74101:0: aspacem 5: file 000047f000-0000480fff 8192 r---- d=0xfd00 i=28113057 o=516096 (1,48)
--74101:0: aspacem 6: file 0000481000-0000481fff 4096 rw--- d=0xfd00 i=28113057 o=524288 (1,48)
--74101:0: aspacem 7: anon 0000482000-0000488fff 28672 rw---
--74101:0: aspacem 8: RSVN 0000489000-0003ffffff 59m ----- SmFixed
--74101:0: aspacem 9: file 0004000000-0004000fff 4096 r---- d=0xfd00 i=25168806 o=0 (2,104)
--74101:0: aspacem 10: file 0004001000-0004020fff 131072 r-xT- d=0xfd00 i=25168806 o=4096 (2,104)
--74101:0: aspacem 11: file 0004021000-0004028fff 32768 r---- d=0xfd00 i=25168806 o=135168 (2,104)
--74101:0: aspacem 12: 0004029000-0004029fff 4096
--74101:0: aspacem 13: file 000402a000-000402afff 4096 r---- d=0xfd00 i=25168806 o=167936 (2,104)
--74101:0: aspacem 14: file 000402b000-000402bfff 4096 rw--- d=0xfd00 i=25168806 o=172032 (2,104)
--74101:0: aspacem 15: anon 000402c000-000402cfff 4096 rw---
--74101:0: aspacem 16: anon 000402d000-000402dfff 4096 rwx--
--74101:0: aspacem 17: RSVN 000402e000-000482cfff 8384512 ----- SmLower
--74101:0: aspacem 18: file 000482d000-000482dfff 4096 r---- d=0xfd00 i=17991823 o=0 (4,207)
--74101:0: aspacem 19: file 000482e000-000482efff 4096 r-xT- d=0xfd00 i=17991823 o=4096 (4,207)
--74101:0: aspacem 20: file 000482f000-000482ffff 4096 r---- d=0xfd00 i=17991823 o=8192 (4,207)
--74101:0: aspacem 21: file 0004830000-0004830fff 4096 r---- d=0xfd00 i=17991823 o=8192 (4,207)
--74101:0: aspacem 22: file 0004831000-0004831fff 4096 rw--- d=0xfd00 i=17991823 o=12288 (4,207)
--74101:0: aspacem 23: anon 0004832000-0004833fff 8192 rw---
--74101:0: aspacem 24: file 0004834000-000483afff 28672 r---- d=0xfd00 i=18008249 o=0 (5,265)
--74101:0: aspacem 25: file 000483b000-0004851fff 94208 r-xT- d=0xfd00 i=18008249 o=28672 (5,265)
--74101:0: aspacem 26: file 0004852000-0004856fff 20480 r---- d=0xfd00 i=18008249 o=122880 (5,265)
--74101:0: aspacem 27: file 0004857000-0004857fff 4096 r---- d=0xfd00 i=18008249 o=139264 (5,265)
--74101:0: aspacem 28: file 0004858000-0004858fff 4096 rw--- d=0xfd00 i=18008249 o=143360 (5,265)
--74101:0: aspacem 29: file 0004859000-000485efff 24576 r---- d=0xfd00 i=11224907 o=0 (6,322)
--74101:0: aspacem 30: file 000485f000-00048b1fff 339968 r-xT- d=0xfd00 i=11224907 o=24576 (6,322)
--74101:0: aspacem 31: file 00048b2000-00048d3fff 139264 r---- d=0xfd00 i=11224907 o=364544 (6,322)
--74101:0: aspacem 32: file 00048d4000-00048d4fff 4096 ----- d=0xfd00 i=11224907 o=503808 (6,322)
--74101:0: aspacem 33: file 00048d5000-00048d5fff 4096 r---- d=0xfd00 i=11224907 o=503808 (6,322)
--74101:0: aspacem 34: file 00048d6000-00048d6fff 4096 rw--- d=0xfd00 i=11224907 o=507904 (6,322)
--74101:0: aspacem 35: anon 00048d7000-00048dcfff 24576 rw---
--74101:0: aspacem 36: file 00048dd000-00048e1fff 20480 r---- d=0xfd00 i=11239161 o=0 (7,381)
--74101:0: aspacem 37: file 00048e2000-0004906fff 151552 r-xT- d=0xfd00 i=11239161 o=20480 (7,381)
--74101:0: aspacem 38: file 0004907000-0004914fff 57344 r---- d=0xfd00 i=11239161 o=172032 (7,381)
--74101:0: aspacem 39: file 0004915000-0004915fff 4096 r---- d=0xfd00 i=11239161 o=225280 (7,381)
--74101:0: aspacem 40: file 0004916000-0004916fff 4096 rw--- d=0xfd00 i=11239161 o=229376 (7,381)
--74101:0: aspacem 41: anon 0004917000-0004918fff 8192 rw---
--74101:0: aspacem 42: file 0004919000-000491dfff 20480 r---- d=0xfd00 i=11224899 o=0 (8,439)
--74101:0: aspacem 43: file 000491e000-000493efff 135168 r-xT- d=0xfd00 i=11224899 o=20480 (8,439)
--74101:0: aspacem 44: file 000493f000-000494bfff 53248 r---- d=0xfd00 i=11224899 o=155648 (8,439)
--74101:0: aspacem 45: file 000494c000-000494cfff 4096 r---- d=0xfd00 i=11224899 o=204800 (8,439)
--74101:0: aspacem 46: file 000494d000-000494dfff 4096 rw--- d=0xfd00 i=11224899 o=208896 (8,439)
--74101:0: aspacem 47: anon 000494e000-000494ffff 8192 rw---
--74101:0: aspacem 48: file 0004950000-0004956fff 28672 r---- d=0xfd00 i=11224903 o=0 (9,497)
--74101:0: aspacem 49: file 0004957000-00049a1fff 307200 r-xT- d=0xfd00 i=11224903 o=28672 (9,497)
--74101:0: aspacem 50: file 00049a2000-00049b9fff 98304 r---- d=0xfd00 i=11224903 o=335872 (9,497)
--74101:0: aspacem 51: file 00049ba000-00049bafff 4096 ----- d=0xfd00 i=11224903 o=434176 (9,497)
--74101:0: aspacem 52: file 00049bb000-00049bcfff 8192 r---- d=0xfd00 i=11224903 o=434176 (9,497)
--74101:0: aspacem 53: file 00049bd000-00049bdfff 4096 rw--- d=0xfd00 i=11224903 o=442368 (9,497)
--74101:0: aspacem 54: anon 00049be000-00049c2fff 20480 rw---
--74101:0: aspacem 55: file 00049c3000-00049c5fff 12288 r---- d=0xfd00 i=11239755 o=0 (10,555)
--74101:0: aspacem 56: file 00049c6000-0004a68fff 667648 r-xT- d=0xfd00 i=11239755 o=12288 (10,555)
--74101:0: aspacem 57: file 0004a69000-0004a71fff 36864 r---- d=0xfd00 i=11239755 o=679936 (10,555)
--74101:0: aspacem 58: file 0004a72000-0004a72fff 4096 r---- d=0xfd00 i=11239755 o=712704 (10,555)
--74101:0: aspacem 59: file 0004a73000-0004a73fff 4096 rw--- d=0xfd00 i=11239755 o=716800 (10,555)
--74101:0: aspacem 60: anon 0004a74000-0004a74fff 4096 rw---
--74101:0: aspacem 61: file 0004a75000-0004a7dfff 36864 r---- d=0xfd00 i=25170280 o=0 (11,610)
--74101:0: aspacem 62: file 0004a7e000-0004a8ffff 73728 r-xT- d=0xfd00 i=25170280 o=36864 (11,610)
--74101:0: aspacem 63: file 0004a90000-0004a97fff 32768 r---- d=0xfd00 i=25170280 o=110592 (11,610)
--74101:0: aspacem 64: file 0004a98000-0004a98fff 4096 r---- d=0xfd00 i=25170280 o=139264 (11,610)
--74101:0: aspacem 65: file 0004a99000-0004a99fff 4096 rw--- d=0xfd00 i=25170280 o=143360 (11,610)
--74101:0: aspacem 66: file 0004a9a000-0004a9bfff 8192 r---- d=0xfd00 i=25170278 o=0 (12,644)
--74101:0: aspacem 67: file 0004a9c000-0004a9dfff 8192 r-xT- d=0xfd00 i=25170278 o=8192 (12,644)
--74101:0: aspacem 68: file 0004a9e000-0004a9efff 4096 r---- d=0xfd00 i=25170278 o=16384 (12,644)
--74101:0: aspacem 69: file 0004a9f000-0004a9ffff 4096 r---- d=0xfd00 i=25170278 o=16384 (12,644)
--74101:0: aspacem 70: anon 0004aa0000-0004aa0fff 4096 rw---
--74101:0: aspacem 71: file 0004aa1000-0004aa1fff 4096 r---- d=0xfd00 i=25488751 o=0 (13,678)
--74101:0: aspacem 72: file 0004aa2000-0004aa3fff 8192 r-xT- d=0xfd00 i=25488751 o=4096 (13,678)
--74101:0: aspacem 73: file 0004aa4000-0004aa4fff 4096 r---- d=0xfd00 i=25488751 o=12288 (13,678)
--74101:0: aspacem 74: file 0004aa5000-0004aa5fff 4096 r---- d=0xfd00 i=25488751 o=12288 (13,678)
--74101:0: aspacem 75: file 0004aa6000-0004aa6fff 4096 rw--- d=0xfd00 i=25488751 o=16384 (13,678)
--74101:0: aspacem 76: file 0004aa7000-0004aacfff 24576 r---- d=0xfd00 i=25488945 o=0 (14,707)
--74101:0: aspacem 77: file 0004aad000-0004abbfff 61440 r-xT- d=0xfd00 i=25488945 o=24576 (14,707)
--74101:0: aspacem 78: file 0004abc000-0004ac1fff 24576 r---- d=0xfd00 i=25488945 o=86016 (14,707)
--74101:0: aspacem 79: file 0004ac2000-0004ac2fff 4096 r---- d=0xfd00 i=25488945 o=106496 (14,707)
--74101:0: aspacem 80: file 0004ac3000-0004ac3fff 4096 rw--- d=0xfd00 i=25488945 o=110592 (14,707)
--74101:0: aspacem 81: anon 0004ac4000-0004ac7fff 16384 rw---
--74101:0: aspacem 82: file 0004ac8000-0004ae9fff 139264 r---- d=0xfd00 i=25168814 o=0 (15,741)
--74101:0: aspacem 83: file 0004aea000-0004c36fff 1363968 r-xT- d=0xfd00 i=25168814 o=139264 (15,741)
--74101:0: aspacem 84: file 0004c37000-0004c82fff 311296 r---- d=0xfd00 i=25168814 o=1503232 (15,741)
--74101:0: aspacem 85: file 0004c83000-0004c83fff 4096 ----- d=0xfd00 i=25168814 o=1814528 (15,741)
--74101:0: aspacem 86: file 0004c84000-0004c87fff 16384 r---- d=0xfd00 i=25168814 o=1814528 (15,741)
--74101:0: aspacem 87: file 0004c88000-0004c89fff 8192 rw--- d=0xfd00 i=25168814 o=1830912 (15,741)
--74101:0: aspacem 88: anon 0004c8a000-0004c8ffff 24576 rw---
--74101:0: aspacem 89: file 0004c90000-0004c94fff 20480 r---- d=0xfd00 i=25183443 o=0 (16,769)
--74101:0: aspacem 90: file 0004c95000-0004caefff 106496 r-xT- d=0xfd00 i=25183443 o=20480 (16,769)
--74101:0: aspacem 91: file 0004caf000-0004cb8fff 40960 r---- d=0xfd00 i=25183443 o=126976 (16,769)
--74101:0: aspacem 92: file 0004cb9000-0004cb9fff 4096 ----- d=0xfd00 i=25183443 o=167936 (16,769)
--74101:0: aspacem 93: file 0004cba000-0004cbafff 4096 r---- d=0xfd00 i=25183443 o=167936 (16,769)
--74101:0: aspacem 94: file 0004cbb000-0004cbbfff 4096 rw--- d=0xfd00 i=25183443 o=172032 (16,769)
--74101:0: aspacem 95: file 0004cbc000-0004cbdfff 8192 r---- d=0xfd00 i=25169119 o=0 (17,802)
--74101:0: aspacem 96: file 0004cbe000-0004cc2fff 20480 r-xT- d=0xfd00 i=25169119 o=8192 (17,802)
--74101:0: aspacem 97: file 0004cc3000-0004cc3fff 4096 r---- d=0xfd00 i=25169119 o=28672 (17,802)
--74101:0: aspacem 98: file 0004cc4000-0004cc4fff 4096 r---- d=0xfd00 i=25169119 o=28672 (17,802)
--74101:0: aspacem 99: anon 0004cc5000-0004cc5fff 4096 rw---
--74101:0: aspacem 100: file 0004cc6000-0004cc8fff 12288 r---- d=0xfd00 i=25169145 o=0 (18,834)
--74101:0: aspacem 101: file 0004cc9000-0004cd7fff 61440 r-xT- d=0xfd00 i=25169145 o=12288 (18,834)
--74101:0: aspacem 102: file 0004cd8000-0004cdcfff 20480 r---- d=0xfd00 i=25169145 o=73728 (18,834)
--74101:0: aspacem 103: file 0004cdd000-0004cddfff 4096 r---- d=0xfd00 i=25169145 o=90112 (18,834)
--74101:0: aspacem 104: file 0004cde000-0004cdefff 4096 rw--- d=0xfd00 i=25169145 o=94208 (18,834)
--74101:0: aspacem 105: file 0004cdf000-0004ce0fff 8192 r---- d=0xfd00 i=25488951 o=0 (19,866)
--74101:0: aspacem 106: file 0004ce1000-0004ce4fff 16384 r-xT- d=0xfd00 i=25488951 o=8192 (19,866)
--74101:0: aspacem 107: file 0004ce5000-0004ce6fff 8192 r---- d=0xfd00 i=25488951 o=24576 (19,866)
--74101:0: aspacem 108: file 0004ce7000-0004ce7fff 4096 r---- d=0xfd00 i=25488951 o=28672 (19,866)
--74101:0: aspacem 109: file 0004ce8000-0004ce8fff 4096 rw--- d=0xfd00 i=25488951 o=32768 (19,866)
--74101:0: aspacem 110: file 0004ce9000-0004cebfff 12288 r---- d=0xfd00 i=25168754 o=0 (20,895)
--74101:0: aspacem 111: file 0004cec000-0004cfcfff 69632 r-xT- d=0xfd00 i=25168754 o=12288 (20,895)
--74101:0: aspacem 112: file 0004cfd000-0004d00fff 16384 r---- d=0xfd00 i=25168754 o=81920 (20,895)
--74101:0: aspacem 113: file 0004d01000-0004d01fff 4096 r---- d=0xfd00 i=25168754 o=94208 (20,895)
--74101:0: aspacem 114: file 0004d02000-0004d02fff 4096 rw--- d=0xfd00 i=25168754 o=98304 (20,895)
--74101:0: aspacem 115: file 0004d03000-0004d05fff 12288 r---- d=0xfd00 i=25169099 o=0 (21,935)
--74101:0: aspacem 116: file 0004d06000-0004d1dfff 98304 r-xT- d=0xfd00 i=25169099 o=12288 (21,935)
--74101:0: aspacem 117: file 0004d1e000-0004d28fff 45056 r---- d=0xfd00 i=25169099 o=110592 (21,935)
--74101:0: aspacem 118: file 0004d29000-0004d29fff 4096 ----- d=0xfd00 i=25169099 o=155648 (21,935)
--74101:0: aspacem 119: file 0004d2a000-0004d2afff 4096 r---- d=0xfd00 i=25169099 o=155648 (21,935)
--74101:0: aspacem 120: anon 0004d2b000-0004d2dfff 12288 rw---
--74101:0: aspacem 121: file 0004d2e000-0004d30fff 12288 r---- d=0xfd00 i=25169023 o=0 (22,967)
--74101:0: aspacem 122: file 0004d31000-0004d3efff 57344 r-xT- d=0xfd00 i=25169023 o=12288 (22,967)
--74101:0: aspacem 123: file 0004d3f000-0004d45fff 28672 r---- d=0xfd00 i=25169023 o=69632 (22,967)
--74101:0: aspacem 124: file 0004d46000-0004d46fff 4096 r---- d=0xfd00 i=25169023 o=94208 (22,967)
--74101:0: aspacem 125: anon 0004d47000-0004d4cfff 24576 rw---
--74101:0: aspacem 126: anon 0004d4d000-000514cfff 4194304 rwx-H
--74101:0: aspacem 127: file 000514d000-000514dfff 4096 rw--- d=0x006 i=16516 o=0 (23,997)
--74101:0: aspacem 128: file 000514e000-000514efff 4096 rw--- d=0x006 i=16519 o=0 (24,1013)
--74101:0: aspacem 129: file 000514f000-000514ffff 4096 rw--- d=0x006 i=16516 o=0 (23,997)
--74101:0: aspacem 130: ANON 0005150000-0057d4ffff 1324m rwx--
--74101:0: aspacem 131: 0057d50000-0057ffffff 2818048
--74101:0: aspacem 132: FILE 0058000000-0058000fff 4096 r---- d=0xfd00 i=18008248 o=0 (0,4)
--74101:0: aspacem 133: FILE 0058001000-005809bfff 634880 r-x-- d=0xfd00 i=18008248 o=4096 (0,4)
--74101:0: aspacem 134: file 005809c000-005809cfff 4096 r-x-- d=0xfd00 i=18008248 o=638976 (0,4)
--74101:0: aspacem 135: FILE 005809d000-00581befff 1187840 r-x-- d=0xfd00 i=18008248 o=643072 (0,4)
--74101:0: aspacem 136: FILE 00581bf000-0058267fff 692224 r---- d=0xfd00 i=18008248 o=1830912 (0,4)
--74101:0: aspacem 137: 0058268000-0058268fff 4096
--74101:0: aspacem 138: FILE 0058269000-005826bfff 12288 rw--- d=0xfd00 i=18008248 o=2523136 (0,4)
--74101:0: aspacem 139: ANON 005826c000-0058c5efff 9m rw---
--74101:0: aspacem 140: ANON 0058c5f000-007fc5efff 624m rwx--
--74101:0: aspacem 141: 007fc5f000-007fffffff 3805184
--74101:0: aspacem 142: file 0080000000-02f5dfffff 10078m rw--- d=0x006 i=16516 o=0 (23,997)
--74101:0: aspacem 143: file 02f5e00000-0f43bfefff 50397m rw--- d=0x006 i=16519 o=4096 (24,1013)
--74101:0: aspacem 144: ANON 0f43bff000-1001ffefff 3044m rwx--
--74101:0: aspacem 145: 1001fff000-1001ffffff 4096
--74101:0: aspacem 146: RSVN 1002000000-1002000fff 4096 ----- SmFixed
--74101:0: aspacem 147: ANON 1002001000-100268bfff 6860800 rwx--
--74101:0: aspacem 148: ANON 100268c000-100268dfff 8192 -----
--74101:0: aspacem 149: ANON 100268e000-100278dfff 1048576 rwx--
--74101:0: aspacem 150: ANON 100278e000-100278ffff 8192 -----
--74101:0: aspacem 151: FILE 1002790000-1002790fff 4096 rw--- d=0x02b i=14406966 o=0 (3,130)
--74101:0: aspacem 152: ANON 1002791000-10027e0fff 327680 rwx--
--74101:0: aspacem 153: 10027e1000-10027f6fff 90112
--74101:0: aspacem 154: ANON 10027f7000-1002816fff 131072 rwx--
--74101:0: aspacem 155: 1002817000-100288bfff 479232
--74101:0: aspacem 156: ANON 100288c000-1002bd8fff 3461120 rwx--
--74101:0: aspacem 157: 1002bd9000-1002c80fff 688128
--74101:0: aspacem 158: ANON 1002c81000-10033d5fff 7688192 rwx--
--74101:0: aspacem 159: 10033d6000-1003469fff 606208
--74101:0: aspacem 160: ANON 100346a000-103fd13fff 968m rwx--
--74101:0: aspacem 161: 103fd14000-103fffffff 3063808
--74101:0: aspacem 162: file 1040000000-12b5dfffff 10078m rw--- d=0x006 i=16516 o=0 (23,997)
--74101:0: aspacem 163: file 12b5e00000-1f03bfffff 50398m rw--- d=0x006 i=16519 o=0 (24,1013)
--74101:0: aspacem 164: ANON 1f03c00000-1ffe7fffff 4012m rwx--
--74101:0: aspacem 165: 1ffe800000-1ffe800fff 4096
--74101:0: aspacem 166: RSVN 1ffe801000-1ffeff8fff 8355840 ----- SmUpper
--74101:0: aspacem 167: anon 1ffeff9000-1fff000fff 32768 rw---
--74101:0: aspacem 168: ANON 1fff001000-1fffc00fff 12m rwx--
--74101:0: aspacem 169: 1fffc01000-1fffffffff 4190208
--74101:0: aspacem 170: RSVN 2000000000-7ffe930eefff 130938g ----- SmFixed
--74101:0: aspacem 171: ANON 7ffe930ef000-7ffe93110fff 139264 rw---
--74101:0: aspacem 172: RSVN 7ffe93111000-7ffe9312dfff 118784 ----- SmFixed
--74101:0: aspacem 173: ANON 7ffe9312e000-7ffe93130fff 12288 r----
--74101:0: aspacem 174: RSVN 7ffe93131000-ffffffffff5fffff 16383e ----- SmFixed
--74101:0: aspacem 175: ANON ffffffffff600000-ffffffffff600fff 4096 r-x--
--74101:0: aspacem 176: RSVN ffffffffff601000-ffffffffffffffff 9m ----- SmFixed
--74101:0: aspacem >>>
pmempool_transform/TEST11 failed with exit code 1.
Last 30 lines of drd11.log below (whole file has 95 lines).
pmempool_transform/TEST11 drd11.log ==74101== by 0x48932E5: pmempool_transformU (replica.c:2462)
pmempool_transform/TEST11 drd11.log ==74101== by 0x48933A5: pmempool_transform (replica.c:2492)
pmempool_transform/TEST11 drd11.log ==74101== by 0x40F164: pmempool_transform_func (in /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool)
pmempool_transform/TEST11 drd11.log ==74101== by 0x406AD3: main (in /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool)
pmempool_transform/TEST11 drd11.log client stack range: [0x1FFEFF9000 0x1FFF000FFF] client SP: 0x1FFEFFD4B0
pmempool_transform/TEST11 drd11.log valgrind stack range: [0x100268E000 0x100278DFFF] top usage: 12072 of 1048576
pmempool_transform/TEST11 drd11.log
pmempool_transform/TEST11 drd11.log ==74101==
pmempool_transform/TEST11 drd11.log ==74101== Valgrind's memory management: out of memory:
pmempool_transform/TEST11 drd11.log ==74101== newSuperblock's request for 4194304 bytes failed.
pmempool_transform/TEST11 drd11.log ==74101== 10,504,224,768 bytes have already been mmap-ed ANONYMOUS.
pmempool_transform/TEST11 drd11.log ==74101== Valgrind cannot continue. Sorry.
pmempool_transform/TEST11 drd11.log ==74101==
pmempool_transform/TEST11 drd11.log ==74101== There are several possible reasons for this.
pmempool_transform/TEST11 drd11.log ==74101== - You have some kind of memory limit in place. Look at the
pmempool_transform/TEST11 drd11.log ==74101== output of 'ulimit -a'. Is there a limit on the size of
pmempool_transform/TEST11 drd11.log ==74101== virtual memory or address space?
pmempool_transform/TEST11 drd11.log ==74101== - You have run out of swap space.
pmempool_transform/TEST11 drd11.log ==74101== - Valgrind has a bug. If you think this is the case or you are
pmempool_transform/TEST11 drd11.log ==74101== not sure, please let us know and we'll try to fix it.
pmempool_transform/TEST11 drd11.log ==74101== Please note that programs can take substantially more memory than
pmempool_transform/TEST11 drd11.log ==74101== normal when running under Valgrind tools, eg. up to twice or
pmempool_transform/TEST11 drd11.log ==74101== more, depending on the tool. On a 64-bit machine, Valgrind
pmempool_transform/TEST11 drd11.log ==74101== should be able to make use of up 32GB memory. On a 32-bit
pmempool_transform/TEST11 drd11.log ==74101== machine, Valgrind should be able to use all the memory available
pmempool_transform/TEST11 drd11.log ==74101== to a single process, up to 4GB if that's how you have your
pmempool_transform/TEST11 drd11.log ==74101== kernel configured. Most 32-bit Linux setups allow a maximum of
pmempool_transform/TEST11 drd11.log ==74101== 3GB per process.
pmempool_transform/TEST11 drd11.log ==74101==
pmempool_transform/TEST11 drd11.log ==74101== Whatever the reason, Valgrind cannot continue. Sorry.
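The drd11.log excerpt ends with valgrind's generic out-of-memory checklist (an rlimit on virtual memory, exhausted swap, tool overhead). A quick way to rule out the first two reasons on the test machine — the output values are machine-specific and not taken from this report:

```shell
# Reason 1 in valgrind's message: a limit on virtual memory / address space.
ulimit -v   # max virtual memory in KiB for this shell; "unlimited" rules it out
ulimit -a   # full rlimit listing

# Reason 2: exhausted swap.
free -m     # the Swap row shows total/used/free in MiB
```

Note that in this run the pool replicas sit on /dev/dax1.0 and /dev/dax1.3, and the aspacem dump shows those devices mapped at roughly 10 GiB and 50 GiB apiece (twice each), so drd's shadow-memory overhead on mappings of that size plausibly exceeds valgrind's addressable limit even with no rlimit in place.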
out11.log below.
pmem11.log below.
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:235 out_init] pid 74101: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:238 out_init] libpmem version 1.1
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 pmem11.log <libpmem>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [mmap.c:67 util_mmap_init]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [libpmem.c:56 libpmem_init]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [pmem.c:784 pmem_init]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:419 pmem_init_funcs]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:368 pmem_cpuinfo_to_funcs]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:372 pmem_cpuinfo_to_funcs] clflush supported
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:281 use_avx_memcpy_memset] avx supported
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:285 use_avx_memcpy_memset] PMEM_AVX not set or not == 1
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [pmem.c:216 pmem_has_auto_flush]
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [os_auto_flush_linux.c:114 check_domain_in_region] region_path: /sys/bus/nd/devices/region0
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [os_auto_flush_linux.c:59 check_cpu_cache] domain_path: /sys/bus/nd/devices/region0/persistence_domain
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:472 pmem_init_funcs] Flushing CPU cache
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:487 pmem_init_funcs] using clflush
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [init.c:501 pmem_init_funcs] using movnt SSE2
pmempool_transform/TEST11 pmem11.log <libpmem>: <3> [pmem_posix.c:107 pmem_os_init]
pmemblk11.log below.
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:235 out_init] pid 74101: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:238 out_init] libpmemblk version 1.1
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <3> [mmap.c:67 util_mmap_init]
pmempool_transform/TEST11 pmemblk11.log <libpmemblk>: <3> [libpmemblk.c:118 libpmemblk_init]
pmemlog11.log below.
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:235 out_init] pid 74101: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:238 out_init] libpmemlog version 1.1
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <3> [mmap.c:67 util_mmap_init]
pmempool_transform/TEST11 pmemlog11.log <libpmemlog>: <3> [libpmemlog.c:118 libpmemlog_init]
pmemobj11.log below.
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:235 out_init] pid 74101: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:238 out_init] libpmemobj version 2.4
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [mmap.c:67 util_mmap_init]
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [libpmemobj.c:52 libpmemobj_init]
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [obj.c:283 obj_init]
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [obj.c:183 obj_ctl_init_and_load] pop (nil)
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [ctl.c:424 ctl_load_config_from_string] ctl (nil) ctx (nil) cfg_string "fallocate.at_create=0;"
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [ctl.c:300 ctl_query] ctl (nil) ctx (nil) source 2 name fallocate.at_create type 1 arg 0x4d4f524
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [ctl.c:79 ctl_find_node] nodes 0x49bd900 name fallocate.at_create indexes 0x1ffefff588
pmempool_transform/TEST11 pmemobj11.log <libpmemobj>: <3> [set.c:124 util_remote_init]
Last 30 lines of pmempool11.log below (whole file has 357 lines).
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:67 device_dax_size] path "/dev/dax1.0"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1781 util_part_open] part 0x4d55708 minsize 0 create 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:131 util_file_exists] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:558 util_file_open] path "/dev/dax1.3" size 0x1ffefff408 minsize 0 flags 2
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:258 util_file_get_size] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:223 util_file_get_type] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:131 util_file_exists] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:67 device_dax_size] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3547 util_replica_open] set 0x4d554a0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3367 util_replica_open_local] set 0x4d554a0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 63413678080 req_align 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55688 addr 0x80000000 size 63413678080 offset 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55688 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55708 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55708 addr 0x2f5e00000 size 0 offset 4096 flags 17
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1130 util_replica_check_map_sync] set 0x4d554a0 repidx 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3488 util_replica_open_local] replica addr 0x80000000
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3547 util_replica_open] set 0x4d557b0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3367 util_replica_open_local] set 0x4d557b0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 63413682176 req_align 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55968 addr 0x1040000000 size 63413682176 offset 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55968 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d559e8 addr 0x12b5e00000 size 0 offset 0 flags 17
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1130 util_replica_check_map_sync] set 0x4d557b0 repidx 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3488 util_replica_open_local] replica addr 0x1040000000
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [transform.c:584 copy_replica_data_fw] set_in 0x4d554a0, set_out 0x4d557b0, repn 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [replica.c:2117 replica_get_pool_size] set 0x4d554a0, repn 0
rpmem11.log below.
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:235 out_init] pid 74069: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool.static-debug
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:238 out_init] librpmem version 1.2
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 rpmem11.log <librpmem>: <3> [librpmem.c:61 librpmem_init]
pmempool_transform/TEST11 rpmem11.log <librpmem>: <3> [librpmem.c:76 librpmem_fini]
RUNTESTS: stopping: pmempool_transform/TEST11 failed, TEST=all FS=any BUILD=debug
```
## Expected behavior:
Tests should pass.
## Details
[Logs.zip](https://github.com/pmem/issues/files/3467373/Logs.zip)
## Additional information about Priority and Help Requested:
Are you willing to submit a pull request with a proposed change? (Yes, No)
Requested priority: (Showstopper, High, Medium, Low)
Last 30 lines of pmempool11.log below (whole file has 357 lines).
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:67 device_dax_size] path "/dev/dax1.0"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1781 util_part_open] part 0x4d55708 minsize 0 create 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:131 util_file_exists] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:558 util_file_open] path "/dev/dax1.3" size 0x1ffefff408 minsize 0 flags 2
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:258 util_file_get_size] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:223 util_file_get_type] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:131 util_file_exists] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [file.c:67 device_dax_size] path "/dev/dax1.3"
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3547 util_replica_open] set 0x4d554a0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3367 util_replica_open_local] set 0x4d554a0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 63413678080 req_align 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55688 addr 0x80000000 size 63413678080 offset 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55688 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55708 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55708 addr 0x2f5e00000 size 0 offset 4096 flags 17
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1130 util_replica_check_map_sync] set 0x4d554a0 repidx 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3488 util_replica_open_local] replica addr 0x80000000
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3547 util_replica_open] set 0x4d557b0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3367 util_replica_open_local] set 0x4d557b0 repidx 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 63413682176 req_align 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d55968 addr 0x1040000000 size 63413682176 offset 0 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:364 util_map_hdr] part 0x4d55968 flags 1
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [mmap_posix.c:153 util_map_hint] len 4096 req_align 4096
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:439 util_map_part] part 0x4d559e8 addr 0x12b5e00000 size 0 offset 0 flags 17
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:1130 util_replica_check_map_sync] set 0x4d557b0 repidx 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [set.c:3488 util_replica_open_local] replica addr 0x1040000000
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [transform.c:584 copy_replica_data_fw] set_in 0x4d554a0, set_out 0x4d557b0, repn 0
pmempool_transform/TEST11 pmempool11.log <libpmempool>: <3> [replica.c:2117 replica_get_pool_size] set 0x4d554a0, repn 0
rpmem11.log below.
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:235 out_init] pid 74069: program: /home/jenkins/greg/pmdk/src/tools/pmempool/pmempool.static-debug
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:238 out_init] librpmem version 1.2
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:242 out_init] src version: 1.6+git99.gb19d5cc6b
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:250 out_init] compiled with support for Valgrind pmemcheck
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:255 out_init] compiled with support for Valgrind helgrind
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:260 out_init] compiled with support for Valgrind memcheck
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:265 out_init] compiled with support for Valgrind drd
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:270 out_init] compiled with support for shutdown state
pmempool_transform/TEST11 rpmem11.log <librpmem>: <1> [out.c:275 out_init] compiled with libndctl 63+
pmempool_transform/TEST11 rpmem11.log <librpmem>: <3> [librpmem.c:61 librpmem_init]
pmempool_transform/TEST11 rpmem11.log <librpmem>: <3> [librpmem.c:76 librpmem_fini]
RUNTESTS: stopping: pmempool_transform/TEST11 failed, TEST=all FS=any BUILD=debug
```
## Expected behavior:
Tests should pass.
## Details
[Logs.zip](https://github.com/pmem/issues/files/3467373/Logs.zip)
## Additional information about Priority and Help Requested:
Are you willing to submit a pull request with a proposed change? (Yes, No) <!-- check one if possible -->
Requested priority: (Showstopper, High, Medium, Low) <!-- check one if possible -->
| priority | test pmempool transform test fail with valgrind before creating new issue ensure that similar issue wasn t already created search note that if you do not provide enough information to reproduce the issue we may not be able to take action on your report remember this is just a minimal template you can extend it with data you think may be useful issue environment information pmdk package version s os es version s ndctl version s kernel version s please provide a reproduction of the bug runtests pmempool transform s d force enable t all runtests pmempool transform s d force enable t all runtests pmempool transform s e force enable t all runtests pmempool transform s e force enable t all runtests pmempool transform s e force enable t all runtests pmempool transform s e force enable t all how often bug is revealed always often rare always actual behavior runtests pmempool transform s d force enable t all pmempool transform setup all pmem debug drd aspacem show segments out of memory segments aspacem segment names in slots aspacem freelist is empty aspacem usr local lib valgrind drd linux aspacem home jenkins greg pmdk src tools pmempool pmempool aspacem usr ld so aspacem tmp vgdb pipe shared mem vgdb by jenkins on localhost localdomain aspacem usr local lib valgrind vgpreload core linux so aspacem usr local lib valgrind vgpreload drd linux so aspacem home jenkins greg pmdk src debug libpmempool so aspacem home jenkins greg pmdk src debug libpmemblk so aspacem home jenkins greg pmdk src debug libpmemlog so aspacem home jenkins greg pmdk src debug libpmemobj so aspacem home jenkins greg pmdk src debug libpmem so aspacem usr libndctl so aspacem usr libdaxctl so aspacem usr libdl so aspacem usr libpthread so aspacem usr libc so aspacem usr libudev so aspacem usr libuuid so aspacem usr libkmod so aspacem usr librt so aspacem usr libgcc s so aspacem usr liblzma so aspacem usr libz so aspacem dev aspacem dev aspacem rsvn smfixed aspacem file r d i o aspacem file r 
xt d i o aspacem file r d i o aspacem rsvn smfixed aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem rsvn smfixed aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem anon rwx aspacem rsvn smlower aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file d i o aspacem file r d i o aspacem file rw d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d 
i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem file rw d i o aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file d i o aspacem file r d i o aspacem anon rw aspacem file r d i o aspacem file r xt d i o aspacem file r d i o aspacem file r d i o aspacem anon rw aspacem anon rwx h aspacem file rw d i o aspacem file rw d i o aspacem file rw d i o aspacem anon rwx aspacem aspacem file r d i o aspacem file r x d i o aspacem file r x d i o aspacem file r x d i o aspacem file r d i o aspacem aspacem file rw d i o aspacem anon rw aspacem anon rwx aspacem aspacem file rw d i o aspacem file rw d i o aspacem anon rwx aspacem aspacem rsvn smfixed aspacem anon rwx aspacem anon aspacem anon rwx aspacem anon aspacem file rw d i o aspacem anon rwx aspacem aspacem anon rwx aspacem aspacem anon rwx aspacem aspacem anon rwx aspacem aspacem anon rwx aspacem aspacem file rw d i o aspacem file rw d i o aspacem anon rwx aspacem aspacem rsvn smupper aspacem anon rw aspacem anon rwx aspacem aspacem rsvn smfixed aspacem anon rw aspacem rsvn smfixed aspacem anon r aspacem rsvn smfixed aspacem anon r x aspacem rsvn ffffffffffffffff smfixed aspacem pmempool transform failed with exit code last lines of log below whole file has lines pmempool transform log by pmempool transformu replica c pmempool transform log by pmempool transform replica c pmempool transform log by pmempool transform func in home jenkins greg pmdk src tools pmempool pmempool pmempool transform log by main in home jenkins greg pmdk src tools pmempool pmempool pmempool transform log client stack range client sp pmempool transform log valgrind stack range top usage of pmempool transform log pmempool transform log pmempool 
transform log valgrind s memory management out of memory pmempool transform log newsuperblock s request for bytes failed pmempool transform log bytes have already been mmap ed anonymous pmempool transform log valgrind cannot continue sorry pmempool transform log pmempool transform log there are several possible reasons for this pmempool transform log you have some kind of memory limit in place look at the pmempool transform log output of ulimit a is there a limit on the size of pmempool transform log virtual memory or address space pmempool transform log you have run out of swap space pmempool transform log valgrind has a bug if you think this is the case or you are pmempool transform log not sure please let us know and we ll try to fix it pmempool transform log please note that programs can take substantially more memory than pmempool transform log normal when running under valgrind tools eg up to twice or pmempool transform log more depending on the tool on a bit machine valgrind pmempool transform log should be able to make use of up memory on a bit pmempool transform log machine valgrind should be able to use all the memory available pmempool transform log to a single process up to if that s how you have your pmempool transform log kernel configured most bit linux setups allow a maximum of pmempool transform log per process pmempool transform log pmempool transform log whatever the reason valgrind cannot continue sorry log below log below pmempool transform log pid program home jenkins greg pmdk src tools pmempool pmempool pmempool transform log libpmem version pmempool transform log src version pmempool transform log compiled with support for valgrind pmemcheck pmempool transform log compiled with support for valgrind helgrind pmempool transform log compiled with support for valgrind memcheck pmempool transform log compiled with support for valgrind drd pmempool transform log compiled with support for shutdown state pmempool transform log compiled with 
libndctl pmempool transform log pmempool transform log pmempool transform log pmempool transform log pmempool transform log pmempool transform log clflush supported pmempool transform log avx supported pmempool transform log pmem avx not set or not pmempool transform log pmempool transform log region path sys bus nd devices pmempool transform log domain path sys bus nd devices persistence domain pmempool transform log flushing cpu cache pmempool transform log using clflush pmempool transform log using movnt pmempool transform log log below pmempool transform log pid program home jenkins greg pmdk src tools pmempool pmempool pmempool transform log libpmemblk version pmempool transform log src version pmempool transform log compiled with support for valgrind pmemcheck pmempool transform log compiled with support for valgrind helgrind pmempool transform log compiled with support for valgrind memcheck pmempool transform log compiled with support for valgrind drd pmempool transform log compiled with support for shutdown state pmempool transform log compiled with libndctl pmempool transform log pmempool transform log log below pmempool transform log pid program home jenkins greg pmdk src tools pmempool pmempool pmempool transform log libpmemlog version pmempool transform log src version pmempool transform log compiled with support for valgrind pmemcheck pmempool transform log compiled with support for valgrind helgrind pmempool transform log compiled with support for valgrind memcheck pmempool transform log compiled with support for valgrind drd pmempool transform log compiled with support for shutdown state pmempool transform log compiled with libndctl pmempool transform log pmempool transform log log below pmempool transform log pid program home jenkins greg pmdk src tools pmempool pmempool pmempool transform log libpmemobj version pmempool transform log src version pmempool transform log compiled with support for valgrind pmemcheck pmempool transform log compiled with 
support for valgrind helgrind pmempool transform log compiled with support for valgrind memcheck pmempool transform log compiled with support for valgrind drd pmempool transform log compiled with support for shutdown state pmempool transform log compiled with libndctl pmempool transform log pmempool transform log pmempool transform log pmempool transform log pop nil pmempool transform log ctl nil ctx nil cfg string fallocate at create pmempool transform log ctl nil ctx nil source name fallocate at create type arg pmempool transform log nodes name fallocate at create indexes pmempool transform log last lines of log below whole file has lines pmempool transform log path dev pmempool transform log part minsize create pmempool transform log path dev pmempool transform log path dev size minsize flags pmempool transform log path dev pmempool transform log path dev pmempool transform log path dev pmempool transform log path dev pmempool transform log set repidx flags pmempool transform log set repidx flags pmempool transform log len req align pmempool transform log part addr size offset flags pmempool transform log part flags pmempool transform log len req align pmempool transform log part flags pmempool transform log len req align pmempool transform log part addr size offset flags pmempool transform log set repidx pmempool transform log replica addr pmempool transform log set repidx flags pmempool transform log set repidx flags pmempool transform log len req align pmempool transform log part addr size offset flags pmempool transform log part flags pmempool transform log len req align pmempool transform log part addr size offset flags pmempool transform log set repidx pmempool transform log replica addr pmempool transform log set in set out repn pmempool transform log set repn log below pmempool transform log pid program home jenkins greg pmdk src tools pmempool pmempool static debug pmempool transform log librpmem version pmempool transform log src version pmempool 
transform log compiled with support for valgrind pmemcheck pmempool transform log compiled with support for valgrind helgrind pmempool transform log compiled with support for valgrind memcheck pmempool transform log compiled with support for valgrind drd pmempool transform log compiled with support for shutdown state pmempool transform log compiled with libndctl pmempool transform log pmempool transform log runtests stopping pmempool transform failed test all fs any build debug expected behavior tests should pass details additional information about priority and help requested are you willing to submit a pull request with a proposed change yes no requested priority showstopper high medium low | 1 |
605,925 | 18,751,970,150 | IssuesEvent | 2021-11-05 04:03:41 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | closed | As a user, I can delete my comment on an article in the comments history screen | type : feature category : ui priority : medium | ## Why
When the users login the application successfully, they can delete their own comment from any article in the `Comments History` screen.
## Acceptance Criteria
- [ ] Add a delete comment button to the bottom right corner of the comment table view cell created from #29 .
- [ ] Reuse the delete comment icon from the UI layout task #91.
- [ ] Hide the button by default.
## Resources
- The sample comment table view cell UI layout with delete comment icon:
<img width="557" alt="Screen Shot 2021-08-21 at 17 24 46" src="https://user-images.githubusercontent.com/70877098/130320379-c3ff2158-c419-4def-ad68-72abe385d63a.png">
| 1.0 | As a user, I can delete my comment on an article in the comments history screen - ## Why
When the users login the application successfully, they can delete their own comment from any article in the `Comments History` screen.
## Acceptance Criteria
- [ ] Add a delete comment button to the bottom right corner of the comment table view cell created from #29 .
- [ ] Reuse the delete comment icon from the UI layout task #91.
- [ ] Hide the button by default.
## Resources
- The sample comment table view cell UI layout with delete comment icon:
<img width="557" alt="Screen Shot 2021-08-21 at 17 24 46" src="https://user-images.githubusercontent.com/70877098/130320379-c3ff2158-c419-4def-ad68-72abe385d63a.png">
| priority | as a user i can delete my comment on an article in the comments history screen why when the users login the application successfully they can delete their own comment from any article in the comments history screen acceptance criteria add a delete comment button to the bottom right corner of the comment table view cell created from reuse the delete comment icon from the ui layout task hide the button by default resources the sample comment table view cell ui layout with delete comment icon img width alt screen shot at src | 1 |
30,599 | 2,724,305,564 | IssuesEvent | 2015-04-14 17:05:51 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | Crux can't call REST services when they are defined on a module which name is part of the application domain | bug imported Milestone-M14-C2 Priority-Medium | _From [thi...@cruxframework.org](https://code.google.com/u/114650528804514463329/) on January 28, 2014 22:51:01_
What steps will reproduce the problem? 1. Create a REST service bound to URL http://app.mydomain.com/mydomain/rest/service 2. Create a RestProxy to invoke the service
3. Try to call it. What is the expected output? What do you see instead? Crux generate a request to http://app.rest/service , that is an invalid URL Please use labels and text to provide additional information.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=292_ | 1.0 | Crux can't call REST services when they are defined on a module which name is part of the application domain - _From [thi...@cruxframework.org](https://code.google.com/u/114650528804514463329/) on January 28, 2014 22:51:01_
What steps will reproduce the problem? 1. Create a REST service bound to URL http://app.mydomain.com/mydomain/rest/service 2. Create a RestProxy to invoke the service
3. Try to call it. What is the expected output? What do you see instead? Crux generate a request to http://app.rest/service , that is an invalid URL Please use labels and text to provide additional information.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=292_ | priority | crux can t call rest services when they are defined on a module which name is part of the application domain from on january what steps will reproduce the problem create a rest service bound to url create a restproxy to invoke the service try to call it what is the expected output what do you see instead crux generate a request to that is an invalid url please use labels and text to provide additional information original issue | 1 |
222,512 | 7,433,079,396 | IssuesEvent | 2018-03-26 05:42:29 | cyberFund/cyber-markets | https://api.github.com/repos/cyberFund/cyber-markets | closed | [stream-api] setup batch architecture for tickers | Priority: Medium Status: Revision Needed Type: Enhancement | Reorganize tickers stream process to batch
(array with all updated tickers) | 1.0 | [stream-api] setup batch architecture for tickers - Reorganize tickers stream process to batch
(array with all updated tickers) | priority | setup batch architecture for tickers reorganize tickers stream process to batch array with all updated tickers | 1 |
688,849 | 23,597,460,623 | IssuesEvent | 2022-08-23 20:45:40 | WordPress/openverse-frontend | https://api.github.com/repos/WordPress/openverse-frontend | closed | Incorrect text styles in metadata information | good first issue help wanted 🟨 priority: medium 🛠 goal: fix 🕹 aspect: interface | ## Description
In the single result view of audio content, the metadata information shown in the Audio information section has incorrect styles. The font size is `16px` and line-height `32px`, while it should follow the [metadata item component](https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=1043%3A3305) definitions.
## Reproduction
1. Search for anything and see audio results.
2. Click on any audio track to see its details.
3. See the Audio information section.
## Screenshots
<img src="https://user-images.githubusercontent.com/895819/181034199-d159de74-0a6f-4ea9-b356-971857555e3a.png" width="500">
## Resolution
- [ ] 🙋 I would be interested in resolving this bug.
| 1.0 | Incorrect text styles in metadata information - ## Description
In the single result view of audio content, the metadata information shown in the Audio information section has incorrect styles. The font size is `16px` and line-height `32px`, while it should follow the [metadata item component](https://www.figma.com/file/GIIQ4sDbaToCfFQyKMvzr8/Openverse-Design-Library?node-id=1043%3A3305) definitions.
## Reproduction
1. Search for anything and see audio results.
2. Click on any audio track to see its details.
3. See the Audio information section.
## Screenshots
<img src="https://user-images.githubusercontent.com/895819/181034199-d159de74-0a6f-4ea9-b356-971857555e3a.png" width="500">
## Resolution
- [ ] 🙋 I would be interested in resolving this bug.
| priority | incorrect text styles in metadata information description in the single result view of audio content the metadata information shown in the audio information section has incorrect styles the font size is and line height while it should follow the definitions reproduction search for anything and see audio results click on any audio track to see its details see the audio information section screenshots resolution 🙋 i would be interested in resolving this bug | 1 |
795,393 | 28,071,412,227 | IssuesEvent | 2023-03-29 19:22:58 | AY2223S2-CS2103T-W15-4/tp | https://api.github.com/repos/AY2223S2-CS2103T-W15-4/tp | closed | Enhance contact management feature | priority.medium | Allow editing and deleting contacts added to an internship application | 1.0 | Enhance contact management feature - Allow editing and deleting contacts added to an internship application | priority | enhance contact management feature allow editing and deleting contacts added to an internship application | 1 |
629,233 | 20,026,717,339 | IssuesEvent | 2022-02-01 22:14:30 | cse1110/andy | https://api.github.com/repos/cse1110/andy | closed | Create `run` method without security manager | enhancement Medium priority | Whenever running `andy` locally, there is no need for the `AndySecurityManager` to take over control of the permissions, and thus this should be disabled when running locally. For this, a method `runWithoutSecurityManager` should be created which does not add the `AddSecurityManagerStep` to the execution flow, this way no security manager is added to the `andy` process. | 1.0 | Create `run` method without security manager - Whenever running `andy` locally, there is no need for the `AndySecurityManager` to take over control of the permissions, and thus this should be disabled when running locally. For this, a method `runWithoutSecurityManager` should be created which does not add the `AddSecurityManagerStep` to the execution flow, this way no security manager is added to the `andy` process. | priority | create run method without security manager whenever running andy locally there is no need for the andysecuritymanager to take over control of the permissions and thus this should be disabled when running locally for this a method runwithoutsecuritymanager should be created which does not add the addsecuritymanagerstep to the execution flow this way no security manager is added to the andy process | 1 |
41,433 | 2,869,005,556 | IssuesEvent | 2015-06-05 22:31:39 | dart-lang/html | https://api.github.com/repos/dart-lang/html | closed | Incomplete HTML displaying errors incorrectly | Area-Polymer bug Fixed Priority-Medium | _Originally opened as dart-lang/sdk#16981_
*This issue was originally filed by LukeEC...@gmail.com*
_____
With a structure with insufficient end div tags
e.g.
<html>
<body>
<div>
<div>
</div>
</body>
A warning is displayed associated with \*the /body tag\* below:
"web/....html:163:1: Unexpected end tag (div). Missing end tag (body)."
It should be reporting that there was a missing end tag for (div) instead. This made fixing the difficult slightly tricky.
The page used polymer.
| 1.0 | Incomplete HTML displaying errors incorrectly - _Originally opened as dart-lang/sdk#16981_
*This issue was originally filed by LukeEC...@gmail.com*
_____
With a structure with insufficient end div tags
e.g.
<html>
<body>
<div>
<div>
</div>
</body>
A warning is displayed associated with \*the /body tag\* below:
"web/....html:163:1: Unexpected end tag (div). Missing end tag (body)."
It should be reporting that there was a missing end tag for (div) instead. This made fixing the difficult slightly tricky.
The page used polymer.
| priority | incomplete html displaying errors incorrectly originally opened as dart lang sdk this issue was originally filed by lukeec gmail com with a structure with insufficient end div tags e g lt html gt nbsp nbsp lt body gt nbsp nbsp nbsp nbsp lt div gt nbsp nbsp nbsp nbsp nbsp nbsp lt div gt nbsp nbsp nbsp nbsp nbsp nbsp lt div gt lt body gt a warning is displayed associated with the body tag below quot web html unexpected end tag div missing end tag body quot it should be reporting that there was a missing end tag for div instead this made fixing the difficult slightly tricky the page used polymer | 1 |
288,441 | 8,847,119,469 | IssuesEvent | 2019-01-08 00:06:48 | hackcambridge/hack-cambridge-website | https://api.github.com/repos/hackcambridge/hack-cambridge-website | closed | Attach QR code to tickets | Epic: Invitations Priority: Medium Type: Enhancement | On attendees tickets, we should generate a QR code of the application slug as well as the text. This would allow us to use a mobile app, to quickly scan and register people in the queue and get registration moving a lot quicker. In some cases, the QR code may not scan properly so having the application slug in text would allow for manual look up. The system still needs a little bit of thought but would greatly improve the registration process for everyone if done correctly. | 1.0 | Attach QR code to tickets - On attendees tickets, we should generate a QR code of the application slug as well as the text. This would allow us to use a mobile app, to quickly scan and register people in the queue and get registration moving a lot quicker. In some cases, the QR code may not scan properly so having the application slug in text would allow for manual look up. The system still needs a little bit of thought but would greatly improve the registration process for everyone if done correctly. | priority | attach qr code to tickets on attendees tickets we should generate a qr code of the application slug as well as the text this would allow us to use a mobile app to quickly scan and register people in the queue and get registration moving a lot quicker in some cases the qr code may not scan properly so having the application slug in text would allow for manual look up the system still needs a little bit of thought but would greatly improve the registration process for everyone if done correctly | 1 |
542,128 | 15,855,600,527 | IssuesEvent | 2021-04-08 00:16:59 | gw2efficiency/issues | https://api.github.com/repos/gw2efficiency/issues | closed | Add back Kibana alerts | 1-Type: Chore 2-Priority: B 3-Complexity: Low 4-Impact: Medium 5-Area: Other | Setup ElasticSearch on the old DB server, and then add back Kibana dashboards and alerts | 1.0 | Add back Kibana alerts - Setup ElasticSearch on the old DB server, and then add back Kibana dashboards and alerts | priority | add back kibana alerts setup elasticsearch on the old db server and then add back kibana dashboards and alerts | 1 |
39,640 | 2,857,859,758 | IssuesEvent | 2015-06-02 21:51:43 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | closed | View Dataset: Versions tab does not display file restriction info | Component: UX & Upgrade Priority: Medium Status: QA Type: Bug |
Not sure how important this is but I was testing the behavior of file restriction by version and changing file perms creates a new draft. However, these changes do not appear on the file versions tab/ differences so you can't determine why a new version was made. | 1.0 | View Dataset: Versions tab does not display file restriction info -
Not sure how important this is but I was testing the behavior of file restriction by version and changing file perms creates a new draft. However, these changes do not appear on the file versions tab/ differences so you can't determine why a new version was made. | priority | view dataset versions tab does not display file restriction info not sure how important this is but i was testing the behavior of file restriction by version and changing file perms creates a new draft however these changes do not appear on the file versions tab differences so you can t determine why a new version was made | 1 |
176,751 | 6,564,657,827 | IssuesEvent | 2017-09-08 03:13:12 | Railcraft/Railcraft | https://api.github.com/repos/Railcraft/Railcraft | closed | [1.10.2-10.1.2] Recipes containing creosote containers don't allow buckets of creosote | bug implemented inventory priority-medium | I glanced at the code a bit and I assume that the recipes are intended to accept them. Tried to find why the don't, but didn't find the end of that rabbithole.
Best guess: Bucket gets added only after the recipes are set up. | 1.0 | [1.10.2-10.1.2] Recipes containing creosote containers don't allow buckets of creosote - I glanced at the code a bit and I assume that the recipes are intended to accept them. Tried to find why the don't, but didn't find the end of that rabbithole.
Best guess: Bucket gets added only after the recipes are set up. | priority | recipes containing creosote containers don t allow buckets of creosote i glanced at the code a bit and i assume that the recipes are intended to accept them tried to find why the don t but didn t find the end of that rabbithole best guess bucket gets added only after the recipes are set up | 1 |
646,256 | 21,042,386,972 | IssuesEvent | 2022-03-31 13:25:26 | AY2122S2-CS2103-F09-2/tp | https://api.github.com/repos/AY2122S2-CS2103-F09-2/tp | closed | Refactor `FilterEventPredicate` into multiple specialized predicates | type.Task priority.Medium | Improves code quality by better adhering to Single Responsibility Principle, also reduces chance of null-related errors when certain predicates in `FilterEventPredicate` are not given. | 1.0 | Refactor `FilterEventPredicate` into multiple specialized predicates - Improves code quality by better adhering to Single Responsibility Principle, also reduces chance of null-related errors when certain predicates in `FilterEventPredicate` are not given. | priority | refactor filtereventpredicate into multiple specialized predicates improves code quality by better adhering to single responsibility principle also reduces chance of null related errors when certain predicates in filtereventpredicate are not given | 1 |
16,474 | 2,615,116,855 | IssuesEvent | 2015-03-01 05:42:02 | chrsmith/google-api-java-client | https://api.github.com/repos/chrsmith/google-api-java-client | closed | YouTube uploading | auto-migrated Priority-Medium Type-Sample | ```
Which API and version (e.g. Google Calendar Data API version 2)?
YouTube Data API version 2
What format (e.g. JSON, Atom)?
Atom
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
OAuth
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
Could you please expand the YouTube sample to demonstrate video uploading? I've
tried all uploading methods (Direct uploading, resumable uploading, uploading
without metadata) and always get a Bad Request. All other functionality
(accessing videos, deleting them, etc) is working, it is only uploading where
I'm having the problems. Uploading in this fashion (using
MultipartRelatedContent) works perfectly for me with Picasa. What is different
about YouTube?
```
Original issue reported on code.google.com by `jsjenkin...@gmail.com` on 1 Oct 2010 at 2:40
* Merged into: #16 | 1.0 | YouTube uploading - ```
Which API and version (e.g. Google Calendar Data API version 2)?
YouTube Data API version 2
What format (e.g. JSON, Atom)?
Atom
What Authentation (e.g. OAuth, OAuth 2, Android, ClientLogin)?
OAuth
Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)?
Java 6
External references, such as API reference guide?
Please provide any additional information below.
Could you please expand the YouTube sample to demonstrate video uploading? I've
tried all uploading methods (Direct uploading, resumable uploading, uploading
without metadata) and always get a Bad Request. All other functionality
(accessing videos, deleting them, etc) is working, it is only uploading where
I'm having the problems. Uploading in this fashion (using
MultipartRelatedContent) works perfectly for me with Picasa. What is different
about YouTube?
```
Original issue reported on code.google.com by `jsjenkin...@gmail.com` on 1 Oct 2010 at 2:40
* Merged into: #16 | priority | youtube uploading which api and version e g google calendar data api version youtube data api version what format e g json atom atom what authentation e g oauth oauth android clientlogin oauth java environment e g java android app engine java external references such as api reference guide please provide any additional information below could you please expand the youtube sample to demonstrate video uploading i ve tried all uploading methods direct uploading resumable uploading uploading without metadata and always get a bad request all other functionality accessing videos deleting them etc is working it is only uploading where i m having the problems uploading in this fashion using multipartrelatedcontent works perfectly for me with picasa what is different about youtube original issue reported on code google com by jsjenkin gmail com on oct at merged into | 1 |
127,969 | 5,041,458,356 | IssuesEvent | 2016-12-19 10:24:52 | serverless/serverless | https://api.github.com/repos/serverless/serverless | closed | Visualize plugin "depdendencies" | exp/medium kind/feature priority/P2 status/accepted | It would be great to have a command to visualize the order in which all the plugins run (to see which plugins hook into which other plugin hooks). This makes debugging / developing plugins way easier.
| 1.0 | Visualize plugin "depdendencies" - It would be great to have a command to visualize the order in which all the plugins run (to see which plugins hook into which other plugin hooks). This makes debugging / developing plugins way easier.
| priority | visualize plugin depdendencies it would be great to have a command to visualize the order in which all the plugins run to see which plugins hook into which other plugin hooks this makes debugging developing plugins way easier | 1 |
103,967 | 4,187,976,108 | IssuesEvent | 2016-06-23 19:10:24 | thompsct/SemGen | https://api.github.com/repos/thompsct/SemGen | closed | JSBrowser window covers SemGen menus | bug medium priority | SemGen menus from the menubar open behind the stage and any occluded items cannot be selected. | 1.0 | JSBrowser window covers SemGen menus - SemGen menus from the menubar open behind the stage and any occluded items cannot be selected. | priority | jsbrowser window covers semgen menus semgen menus from the menubar open behind the stage and any occluded items cannot be selected | 1 |
423,627 | 12,299,289,450 | IssuesEvent | 2020-05-11 12:07:14 | hotosm/tasking-manager | https://api.github.com/repos/hotosm/tasking-manager | opened | Project update: Changing default language should include empty field check | Component: Backend Component: Frontend Difficulty: Medium Priority: Critical Type: Bug | We have come across around 9 projects with `default_locale` data missing from project_info.
eg: project 8151 has default_locale set to `fr` but there is no matching details for the project in `project_info` table. This breaks down any project search activity because of null value.
This has to be handled in the codebase in the following way:
* Better error handling on empty locale values
* Prompt user to fill in default information details whenever default_locale in updated.
For a project, I updated the default_locale to point to a non-English language and then hit on save. This did not prompt me to fill in information for any of the description/instruction/title in the default_locale. Ideally it should and when the default_locale is `en` this check is present. It should be present for all languages in the frontend. | 1.0 | Project update: Changing default language should include empty field check - We have come across around 9 projects with `default_locale` data missing from project_info.
eg: project 8151 has default_locale set to `fr` but there is no matching details for the project in `project_info` table. This breaks down any project search activity because of null value.
This has to be handled in the codebase in the following way:
* Better error handling on empty locale values
* Prompt user to fill in default information details whenever default_locale in updated.
For a project, I updated the default_locale to point to a non-English language and then hit on save. This did not prompt me to fill in information for any of the description/instruction/title in the default_locale. Ideally it should and when the default_locale is `en` this check is present. It should be present for all languages in the frontend. | priority | project update changing default language should include empty field check we have come across around projects with default locale data missing from project info eg project has default locale set to fr but there is no matching details for the project in project info table this breaks down any project search activity because of null value this has to be handled in the codebase in the following way better error handling on empty locale values prompt user to fill in default information details whenever default locale in updated for a project i updated the default locale to point to a non english language and then hit on save this did not prompt me to fill in information for any of the description instruction title in the default locale ideally it should and when the default locale is en this check is present it should be present for all languages in the frontend | 1 |
286,739 | 8,792,503,262 | IssuesEvent | 2018-12-21 16:19:52 | ansible/awx | https://api.github.com/repos/ansible/awx | opened | inventory prompt string for workflows not 100% correct | component:ui priority:medium state:needs_devel type:enhancement | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
1. have a workflow with inventory prompt on launch
The note is " This inventory is applied to all job template nodes that prompt for an inventory. ".
However, it's also applied to child workflow nodes as well. Maybe "all workflow nodes" is better wording.
##### ENVIRONMENT
* AWX version: current
| 1.0 | inventory prompt string for workflows not 100% correct - ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
- UI
##### SUMMARY
1. have a workflow with inventory prompt on launch
The note is " This inventory is applied to all job template nodes that prompt for an inventory. ".
However, it's also applied to child workflow nodes as well. Maybe "all workflow nodes" is better wording.
##### ENVIRONMENT
* AWX version: current
| priority | inventory prompt string for workflows not correct issue type bug report component name ui summary have a workflow with inventory prompt on launch the note is this inventory is applied to all job template nodes that prompt for an inventory however it s also applied to child workflow nodes as well maybe all workflow nodes is better wording environment awx version current | 1 |
830,341 | 32,003,074,589 | IssuesEvent | 2023-09-21 13:24:01 | ubiquity/ubiquibot-telegram | https://api.github.com/repos/ubiquity/ubiquibot-telegram | opened | Topic name does not update on renames | Time: <1 Hour Priority: 2 (Medium) | Renamed `fundraising` to `partnerships` in https://t.me/UbiquityDAO/29361
> Here is your forum: Fundraising
What do you want to do? | 1.0 | Topic name does not update on renames - Renamed `fundraising` to `partnerships` in https://t.me/UbiquityDAO/29361
> Here is your forum: Fundraising
What do you want to do? | priority | topic name does not update on renames renamed fundraising to partnerships in here is your forum fundraising what do you want to do | 1 |
127,355 | 5,028,971,997 | IssuesEvent | 2016-12-15 19:47:19 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Need notification when party invite rescinded | priority: medium status: issue: help welcome now | When a party invitation is sent, a user with that option enabled gets an email. If the user is removed from the party, there's an email for that as well. There is, however, no email sent for if the invitation is rescinded before you join -- making it possible to get an email and sign in to accept, but find nothing.
This email should probably not be exactly the same as the removed from party email (it was the same in APIv2), but it should exist.
| 1.0 | Need notification when party invite rescinded - When a party invitation is sent, a user with that option enabled gets an email. If the user is removed from the party, there's an email for that as well. There is, however, no email sent for if the invitation is rescinded before you join -- making it possible to get an email and sign in to accept, but find nothing.
This email should probably not be exactly the same as the removed from party email (it was the same in APIv2), but it should exist.
| priority | need notification when party invite rescinded when a party invitation is sent a user with that option enabled gets an email if the user is removed from the party there s an email for that as well there is however no email sent for if the invitation is rescinded before you join making it possible to get an email and sign in to accept but find nothing this email should probably not be exactly the same as the removed from party email it was the same in but it should exist | 1 |
231,570 | 7,640,752,816 | IssuesEvent | 2018-05-08 00:32:01 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Producer/Stockpile linking problem | Medium Priority | So I have a shared workarea with pump jacks and a factory and a stockpile.
The petroleum project was started by a friend, but I inadvertantly removed the stockpile.
Because he has not relinked it the project is halted.
This is a bad workflow. It should not matter who links a project to its sources. | 1.0 | Producer/Stockpile linking problem - So I have a shared workarea with pump jacks and a factory and a stockpile.
The petroleum project was started by a friend, but I inadvertantly removed the stockpile.
Because he has not relinked it the project is halted.
This is a bad workflow. It should not matter who links a project to its sources. | priority | producer stockpile linking problem so i have a shared workarea with pump jacks and a factory and a stockpile the petroleum project was started by a friend but i inadvertantly removed the stockpile because he has not relinked it the project is halted this is a bad workflow it should not matter who links a project to its sources | 1 |
71,735 | 3,367,617,946 | IssuesEvent | 2015-11-22 10:19:05 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | need method for capturing non-filing part of title | Priority: Medium | _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on December 02, 2013 18:33:34_
Titles often begin with articles and other words that don't figure into sorting the title, such as "a", "an", and "the". MARC indicates the number of these characters in indicator1. The number "4" in the following example indicates the number of characters to skip (3 letters and a space) --
\<title nonfiling="4">The birds and the bees</title>
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=185_ | 1.0 | need method for capturing non-filing part of title - _From [pd...@virginia.edu](https://code.google.com/u/103686026181985548448/) on December 02, 2013 18:33:34_
Titles often begin with articles and other words that don't figure into sorting the title, such as "a", "an", and "the". MARC indicates the number of these characters in indicator1. The number "4" in the following example indicates the number of characters to skip (3 letters and a space) --
\<title nonfiling="4">The birds and the bees</title>
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=185_ | priority | need method for capturing non filing part of title from on december titles often begin with articles and other words that don t figure into sorting the title such as a an and the marc indicates the number of these characters in the number in the following example indicates the number of characters to skip letters and a space the birds and the bees original issue | 1 |
520,691 | 15,091,247,778 | IssuesEvent | 2021-02-06 14:44:20 | dcl-covid-19/mega-map | https://api.github.com/repos/dcl-covid-19/mega-map | closed | Transform the CalFresh Offices data to HSDS. | data transformation medium-priority | Use the variable mapping information provided under the "HSDS Mapping" tab of this sheet (https://docs.google.com/spreadsheets/d/1OyxTJCF4-sxivjsWHzTOpuYidAetQSjWzVCEDU45GZc/edit#gid=1127300923) to transform the data to HSDS format.
For the schedule data in particular, the HSDS schedule format has a "Weekday" field to be populated with "Monday" "Tuesday" Wednesday" "Thursday" "Friday" "Saturday" "Sunday" (can be separated by commas). For the days listed under "Weekday", show the times for which that service window opens under "opens_at" and closes under "closes_at". If there are additional service times that fall outside that consistent schedule, indicate the days of that service in "add_day" and the hours in "add_hours."
For example: To transform "M-F, 5:30-8:30pm; S & Su, 9am-12pm" to HSDS, you would do Weekday = ""Monday" "Tuesday" Wednesday" "Thursday" "Friday", opens_at = "5:30 PM" and closes_at = "8:30 PM". Add_day would be "Saturday" and "Sunday" and add_hour would be "9:00 AM - 12:00 PM". | 1.0 | Transform the CalFresh Offices data to HSDS. - Use the variable mapping information provided under the "HSDS Mapping" tab of this sheet (https://docs.google.com/spreadsheets/d/1OyxTJCF4-sxivjsWHzTOpuYidAetQSjWzVCEDU45GZc/edit#gid=1127300923) to transform the data to HSDS format.
For the schedule data in particular, the HSDS schedule format has a "Weekday" field to be populated with "Monday" "Tuesday" Wednesday" "Thursday" "Friday" "Saturday" "Sunday" (can be separated by commas). For the days listed under "Weekday", show the times for which that service window opens under "opens_at" and closes under "closes_at". If there are additional service times that fall outside that consistent schedule, indicate the days of that service in "add_day" and the hours in "add_hours."
For example: To transform "M-F, 5:30-8:30pm; S & Su, 9am-12pm" to HSDS, you would do Weekday = ""Monday" "Tuesday" Wednesday" "Thursday" "Friday", opens_at = "5:30 PM" and closes_at = "8:30 PM". Add_day would be "Saturday" and "Sunday" and add_hour would be "9:00 AM - 12:00 PM". | priority | transform the calfresh offices data to hsds use the variable mapping information provided under the hsds mapping tab of this sheet to transform the data to hsds format for the schedule data in particular the hsds schedule format has a weekday field to be populated with monday tuesday wednesday thursday friday saturday sunday can be separated by commas for the days listed under weekday show the times for which that service window opens under opens at and closes under closes at if there are additional service times that fall outside that consistent schedule indicate the days of that service in add day and the hours in add hours for example to transform m f s su to hsds you would do weekday monday tuesday wednesday thursday friday opens at pm and closes at pm add day would be saturday and sunday and add hour would be am pm | 1 |
768,768 | 26,979,655,884 | IssuesEvent | 2023-02-09 12:10:48 | GeorgesStavracas/mockups | https://api.github.com/repos/GeorgesStavracas/mockups | opened | Add source dialog | main window priority: medium | The current flow for adding sources is, at best, lacking. It should encourage reusage of sources.


TODO: references | 1.0 | Add source dialog - The current flow for adding sources is, at best, lacking. It should encourage reusage of sources.


TODO: references | priority | add source dialog the current flow for adding sources is at best lacking it should encourage reusage of sources todo references | 1 |
582,672 | 17,367,380,072 | IssuesEvent | 2021-07-30 09:11:52 | GIST-Petition-Site-Project/GIST-petition-web | https://api.github.com/repos/GIST-Petition-Site-Project/GIST-petition-web | closed | docs: Github template update | Priority: Medium Status: Reviewing Type: Feature/Document | ## Goal
- Write the details here. (Attach images if needed)
We are currently using an issue template that I used previously.
This updates my Github template to the latest version.
Labels will also be split up and managed in detail, and the templates will also be updated so that they can be written in detail.
## Checklist
- [ ] example
## References
- Write the details here.
[Reference repo](https://github.com/gimquokka/github-initial-settings) | 1.0 | docs: Github template update - ## Goal
- Write the details here. (Attach images if needed)
We are currently using an issue template that I used previously.
This updates my Github template to the latest version.
Labels will also be split up and managed in detail, and the templates will also be updated so that they can be written in detail.
## Checklist
- [ ] example
## References
- Write the details here.
[Reference repo](https://github.com/gimquokka/github-initial-settings) | priority | docs github template update goal write the details here attach images if needed we are currently using an issue template that i used previously this updates my github template to the latest version labels will also be split up and managed in detail and the templates will also be updated so that they can be written in detail checklist example references write the details here | 1 |
777,691 | 27,290,804,165 | IssuesEvent | 2023-02-23 16:28:21 | inlang/inlang | https://api.github.com/repos/inlang/inlang | closed | set up version and changelog management | type: improvement effort: low priority: medium | Managing the versions and changelogs of packages in a monorepo can be cumbersome. As much as possible should be automated and the developers should be guided to follow the strategy. See:
- https://inlang.com/documentation/breaking-changes
- [x] set up https://github.com/changesets/changesets
- [x] tie the packages `ide-extension`, `git-sdk`, `core` and `website` together as [fixed packages](https://github.com/changesets/changesets/blob/main/docs/fixed-packages.md)
- [x] set up Changesets Bot to check for changesets inside pull requests https://github.com/apps/changeset-bot (@samuelstroschein probably you need to do this, as it needs owner rights)
- [x] set up Changesets Github Action for creating release pull requests and tagging https://github.com/changesets/action
- [x] change [breaking changes](https://inlang.com/documentation/breaking-changes) to `Code organization`
- [x] update [contributing](https://inlang.com/documentation/contributing) and `Code organization` to reflect version and changelog management | 1.0 | set up version and changelog management - Managing the versions and changelogs of packages in a monorepo can be cumbersome. As much as possible should be automated and the developers should be guided to follow the strategy. See:
- https://inlang.com/documentation/breaking-changes
- [x] set up https://github.com/changesets/changesets
- [x] tie the packages `ide-extension`, `git-sdk`, `core` and `website` together as [fixed packages](https://github.com/changesets/changesets/blob/main/docs/fixed-packages.md)
- [x] set up Changesets Bot to check for changesets inside pull requests https://github.com/apps/changeset-bot (@samuelstroschein probably you need to do this, as it needs owner rights)
- [x] set up Changesets Github Action for creating release pull requests and tagging https://github.com/changesets/action
- [x] change [breaking changes](https://inlang.com/documentation/breaking-changes) to `Code organization`
- [x] update [contributing](https://inlang.com/documentation/contributing) and `Code organization` to reflect version and changelog management | priority | set up version and changelog management managing the versions and changelogs of packages in a monorepo can be cumbersome as much as possible should be automated and the developers should be guided to follow the strategy see set up tie the packages ide extension git sdk core and website together as set up changesets bot to check for changesets inside pull requests samuelstroschein probably you need to do this as it needs owner rights set up changesets github action for creating release pull requests and tagging change to code organization update and code organization to reflect version and changelog management | 1 |
527,198 | 15,325,776,763 | IssuesEvent | 2021-02-26 02:06:45 | otasoft/otasoft-api | https://api.github.com/repos/otasoft/otasoft-api | closed | implement RBAC | enhancement medium priority no-issue-activity | ## Feature Request
## Is your feature request related to a problem? Please describe.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
## Teachability, Documentation, Adoption, Migration Strategy
<!-- If you can, explain how users will be able to use this and possibly write out a version the docs. Maybe a screenshot or design? -->
https://docs.nestjs.com/security/authorization
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. --> | 1.0 | implement RBAC - ## Feature Request
## Is your feature request related to a problem? Please describe.
<!-- A clear and concise description of what the problem is. Ex. I have an issue when [...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. Add any considered drawbacks. -->
## Teachability, Documentation, Adoption, Migration Strategy
<!-- If you can, explain how users will be able to use this and possibly write out a version the docs. Maybe a screenshot or design? -->
https://docs.nestjs.com/security/authorization
## What is the motivation / use case for changing the behavior?
<!-- Describe the motivation or the concrete use case. --> | priority | implement rbac feature request is your feature request related to a problem please describe describe the solution you d like teachability documentation adoption migration strategy what is the motivation use case for changing the behavior | 1 |
674,944 | 23,071,487,892 | IssuesEvent | 2022-07-25 18:31:17 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] Condition pushdown for index scans | kind/enhancement area/ysql priority/medium | Jira Link: [DB-563](https://yugabyte.atlassian.net/browse/DB-563)
### Description
IndexScan and IndexOnlyScan are already push down scan keys, expressions like `<key> <op> <constant>` to DocDB, however the scan may have other conditions, such as more complex expressions, conditions on non-key columns which are currently evaluated by Postgres, after row is retrieved from the DocDB.
We would like to push these expressions down and save node to node traffic by filtering out rows locally.
Our IndexScans on secondary indexes make two DocDB requests: one to find index, and other to fetch the rows. Both can carry expression, but there is a difference: when scanning the index, only indexed and included columns are available to evaluate expression, but early filtering is more efficient. Hence optimizer should assign pushdown expressions to right requests.
| 1.0 | [YSQL] Condition pushdown for index scans - Jira Link: [DB-563](https://yugabyte.atlassian.net/browse/DB-563)
### Description
IndexScan and IndexOnlyScan are already push down scan keys, expressions like `<key> <op> <constant>` to DocDB, however the scan may have other conditions, such as more complex expressions, conditions on non-key columns which are currently evaluated by Postgres, after row is retrieved from the DocDB.
We would like to push these expressions down and save node to node traffic by filtering out rows locally.
Our IndexScans on secondary indexes make two DocDB requests: one to find index, and other to fetch the rows. Both can carry expression, but there is a difference: when scanning the index, only indexed and included columns are available to evaluate expression, but early filtering is more efficient. Hence optimizer should assign pushdown expressions to right requests.
| priority | condition pushdown for index scans jira link description indexscan and indexonlyscan are already push down scan keys expressions like to docdb however the scan may have other conditions such as more complex expressions conditions on non key columns which are currently evaluated by postgres after row is retrieved from the docdb we would like to push these expressions down and save node to node traffic by filtering out rows locally our indexscans on secondary indexes make two docdb requests one to find index and other to fetch the rows both can carry expression but there is a difference when scanning the index only indexed and included columns are available to evaluate expression but early filtering is more efficient hence optimizer should assign pushdown expressions to right requests | 1 |
826,842 | 31,714,475,197 | IssuesEvent | 2023-09-09 17:36:17 | BenWestgate/Bails | https://api.github.com/repos/BenWestgate/Bails | closed | Unable to create a new receiving address | bug help wanted priority: high priority: medium | After initial sync & importing a BAILS-created wallet: Under the Receive tab, I cannot create a new receiving address to send a small test transaction (button is grayed out). I tried filling out all the forms, changing from Bech32, closing the wallet, restarting Core, and restarting Tails, but still no luck. The Window > Receiving addresses window shows no addresses. | 2.0 | Unable to create a new receiving address - After initial sync & importing a BAILS-created wallet: Under the Receive tab, I cannot create a new receiving address to send a small test transaction (button is grayed out). I tried filling out all the forms, changing from Bech32, closing the wallet, restarting Core, and restarting Tails, but still no luck. The Window > Receiving addresses window shows no addresses. | priority | unable to create a new receiving address after initial sync importing a bails created wallet under the receive tab i cannot create a new receiving address to send a small test transaction button is grayed out i tried filling out all the forms changing from closing the wallet restarting core and restarting tails but still no luck the window receiving addresses window shows no addresses | 1 |
638,228 | 20,719,251,778 | IssuesEvent | 2022-03-13 05:22:39 | AY2122S2-CS2103T-W09-4/tp | https://api.github.com/repos/AY2122S2-CS2103T-W09-4/tp | closed | Assign tasks to specific student | type.Story priority.Medium | As a TA, I can assign tasks to a specific student so that I can allocate and track a task that is given to the student. | 1.0 | Assign tasks to specific student - As a TA, I can assign tasks to a specific student so that I can allocate and track a task that is given to the student. | priority | assign tasks to specific student as a ta i can assign tasks to a specific student so that i can allocate and track a task that is given to the student | 1 |
644,308 | 20,973,552,043 | IssuesEvent | 2022-03-28 13:30:12 | ever-co/ever-gauzy | https://api.github.com/repos/ever-co/ever-gauzy | closed | Feat: Upgrade Organization Design | type: enhancement ✨ scope: app priority: medium | ## Context ⚓️
We make progressive upgrading of design according issue #3984
> We use [nebular themes](https://akveo.github.io/nebular/)
> All design details are available in figma
## Goals ⭕️
You have to upgrade organization manage page.
- [x] [Main](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/main)

- [x] [Location](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/location)

- [x] [Settings](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/settings)

- [x] All components must be responsive. Reading is right to left for hebrew language 🚨.
| 1.0 | Feat: Upgrade Organization Design - ## Context ⚓️
We make progressive upgrading of design according issue #3984
> We use [nebular themes](https://akveo.github.io/nebular/)
> All design details are available in figma
## Goals ⭕️
You have to upgrade organization manage page.
- [x] [Main](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/main)

- [x] [Location](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/location)

- [x] [Settings](https://demo.gauzy.co/#/pages/organizations/edit/8bd9bda5-106e-4222-b15e-9bca00a00296/settings)

- [x] All components must be responsive. Reading is right to left for hebrew language 🚨.
| priority | feat upgrade organization design context ⚓️ we make progressive upgrading of design according issue we use all design details are available in figma goals ⭕️ you have to upgrade organization manage page all components must be responsive reading is right to left for hebrew language 🚨 | 1 |
675,693 | 23,101,799,255 | IssuesEvent | 2022-07-27 04:07:29 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | build for arm64 ubuntu18.04 fail | kind/bug area/docdb priority/medium community/request | Jira Link: [DB-1915](https://yugabyte.atlassian.net/browse/DB-1915)
I build and fail
branch v1.3.0
os ubuntu 18.04
platform arm64v8 (NVIDIA Jetson Nano)
gcc v7.4.0
cmake version 3.14.6
OSError: [Errno 2] No such file or directory
CMake Error at CMakeLists.txt:578 (message):
Thirdparty was built unsuccessfully, terminating.
All messages
--------------------------------------------------------------------------------
Building zlib (common)
--------------------------------------------------------------------------------
Bootstrapping /home/lijz/workspace/yugabyte-db/thirdparty/build/common/zlib-1.2.11 from /home/lijz/workspace/yugabyte-db/thirdparty/src/zlib-1.2.11
/home/lijz/workspace/yugabyte-db/thirdparty/build_definitions
Traceback (most recent call last):
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 786, in <module>
main()
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 783, in main
builder.run()
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 222, in run
self.build(BUILD_TYPE_COMMON)
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 601, in build
self.build_dependency(dep)
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 658, in build_dependency
with PushDir(self.create_build_dir_and_prepare(dep)):
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 745, in create_build_dir_and_prepare
subprocess.check_call(['rsync', '-a', src_dir + '/', build_dir])
File "/usr/lib/python2.7/subprocess.py", line 185, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 172, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
CMake Error at CMakeLists.txt:578 (message):
Thirdparty was built unsuccessfully, terminating.
| 1.0 | build for arm64 ubuntu18.04 fail - Jira Link: [DB-1915](https://yugabyte.atlassian.net/browse/DB-1915)
I build and fail
branch v1.3.0
os ubuntu 18.04
platform arm64v8 (NVIDIA Jetson Nano)
gcc v7.4.0
cmake version 3.14.6
OSError: [Errno 2] No such file or directory
CMake Error at CMakeLists.txt:578 (message):
Thirdparty was built unsuccessfully, terminating.
All messages
--------------------------------------------------------------------------------
Building zlib (common)
--------------------------------------------------------------------------------
Bootstrapping /home/lijz/workspace/yugabyte-db/thirdparty/build/common/zlib-1.2.11 from /home/lijz/workspace/yugabyte-db/thirdparty/src/zlib-1.2.11
/home/lijz/workspace/yugabyte-db/thirdparty/build_definitions
Traceback (most recent call last):
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 786, in <module>
main()
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 783, in main
builder.run()
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 222, in run
self.build(BUILD_TYPE_COMMON)
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 601, in build
self.build_dependency(dep)
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 658, in build_dependency
with PushDir(self.create_build_dir_and_prepare(dep)):
File "/home/lijz/workspace/yugabyte-db/thirdparty/yb_build_thirdparty_main.py", line 745, in create_build_dir_and_prepare
subprocess.check_call(['rsync', '-a', src_dir + '/', build_dir])
File "/usr/lib/python2.7/subprocess.py", line 185, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 172, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
CMake Error at CMakeLists.txt:578 (message):
Thirdparty was built unsuccessfully, terminating.
| priority | build for fail jira link i build and fail branch os ubuntu platform nvidia jetson nano gcc cmake version oserror no such file or directory cmake error at cmakelists txt message thirdparty was built unsuccessfully terminating all meassages building zlib common bootstrapping home lijz workspace yugabyte db thirdparty build common zlib from home lijz workspace yugabyte db thirdparty src zlib home lijz workspace yugabyte db thirdparty build definitions traceback most recent call last file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in main file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in main builder run file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in run self build build type common file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in build self build dependency dep file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in build dependency with pushdir self create build dir and prepare dep file home lijz workspace yugabyte db thirdparty yb build thirdparty main py line in create build dir and prepare subprocess check call file usr lib subprocess py line in check call retcode call popenargs kwargs file usr lib subprocess py line in call return popen popenargs kwargs wait file usr lib subprocess py line in init errread errwrite file usr lib subprocess py line in execute child raise child exception oserror no such file or directory cmake error at cmakelists txt message thirdparty was built unsuccessfully terminating | 1 |
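The traceback in the record above ends in `OSError: [Errno 2] No such file or directory` raised from inside `subprocess`: that pattern typically means the executable being launched (`rsync` here) is not installed, rather than a missing source file. A hedged, generic sketch of guarding for that up front (modern Python 3, where the error surfaces as `FileNotFoundError`, a subclass of `OSError`); this is not the yugabyte build script itself:

```python
import shutil
import subprocess

def run_with_check(cmd):
    """Fail with a clear message when the executable is missing,
    instead of a bare OSError/FileNotFoundError from subprocess."""
    exe = cmd[0]
    if shutil.which(exe) is None:
        raise RuntimeError(f"required tool '{exe}' is not installed")
    subprocess.check_call(cmd)

# A missing binary name triggers the explicit error path:
try:
    run_with_check(["definitely-not-a-real-tool-xyz", "--version"])
    outcome = "ran"
except RuntimeError:
    outcome = "missing"
```

In the reported build, installing `rsync` on the Jetson host would likely have been the actual fix; the sketch only shows how to surface that cause clearly.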
686,673 | 23,500,867,693 | IssuesEvent | 2022-08-18 08:18:30 | netdata/netdata-cloud | https://api.github.com/repos/netdata/netdata-cloud | closed | [Bug]: Refreshing page after a new Room creation re-directs to All nodes room | bug internal submit priority/medium cloud-frontend mgmt-navigation-team | ### Bug description
Refreshing the page while in another room re-directs to the All nodes room of the space
### Expected behavior
After a user refreshes the page, they should always land on the page they started the refresh action from.
### Steps to reproduce
1. Login to Netdata Cloud
2. Create a new Room
3. After room creation - refresh the page
4. User lands on All Nodes room in Home tab regardless on where he previously was.
### Screenshots
-
### Error Logs
-
### Desktop
OS: MacOS
Browser: Chrome
Browser Version: 103
### Additional context
- | 1.0 | [Bug]: Refreshing page after a new Room creation re-directs to All nodes room - ### Bug description
Refreshing the page while in another room re-directs to the All nodes room of the space
### Expected behavior
After a user refreshes the page, they should always land on the page they started the refresh action from.
### Steps to reproduce
1. Login to Netdata Cloud
2. Create a new Room
3. After room creation - refresh the page
4. User lands on All Nodes room in Home tab regardless on where he previously was.
### Screenshots
-
### Error Logs
-
### Desktop
OS: MacOS
Browser: Chrome
Browser Version: 103
### Additional context
- | priority | refreshing page after a new room creation re directs to all nodes room bug description refreshing the page while in another room re directs to the all nodes room of the space expected behavior after a user refreshes the page he should always land in the page he started the refresh action from steps to reproduce login to netdata cloud create a new room after room creation refresh the page user lands on all nodes room in home tab regardless on where he previously was screenshots error logs desktop os macos browser chrome browser version additional context | 1 |
59,125 | 3,103,335,753 | IssuesEvent | 2015-08-31 09:09:34 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Missing GeoSolutions brand into about window | enhancement Priority: Medium | Put GeoSolutions clickable logo into about window which redirects to [geosolutions](http://www.geo-solutions.it/) web site | 1.0 | Missing GeoSolutions brand into about window - Put GeoSolutions clickable logo into about window which redirects to [geosolutions](http://www.geo-solutions.it/) web site | priority | missing geosolutions brand into about window put geosolutions clickable logo into about window which redirects to web site | 1 |
57,846 | 3,084,058,521 | IssuesEvent | 2015-08-24 13:09:14 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | [zzxy] Search dialog: the "Video" category is now "Video and subtitles" | bug imported Priority-Medium wontfix | _From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on August 01, 2013 16:44:29_
(502-betta92, compared with 24)
* Search dialog: the "Video" category has become "Video and subtitles"; this is 1) inconvenient and 2) not very helpful, considering there is usually more than one source. As an option for those who want it, provide separate "Video" and "Video and subtitles" categories.
PS: In general, before downloading anything you need to fetch the file list and check there what else comes with the file. Or are you going to list all the accompanying data files for executables as well?...
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1138_ | 1.0 | [zzxy] Search dialog: the "Video" category is now "Video and subtitles" - _From [zzzxzzzy...@gmail.com](https://code.google.com/u/111612712877897236331/) on August 01, 2013 16:44:29_
(502-betta92, compared with 24)
* Search dialog: the "Video" category has become "Video and subtitles"; this is 1) inconvenient and 2) not very helpful, considering there is usually more than one source. As an option for those who want it, provide separate "Video" and "Video and subtitles" categories.
PS: In general, before downloading anything you need to fetch the file list and check there what else comes with the file. Or are you going to list all the accompanying data files for executables as well?...
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1138_ | priority | search dialog the video category is now video and subtitles from on august compared with search dialog the video category has become video and subtitles this is inconvenient and not very helpful considering there is usually more than one source as an option for those who want it provide separate video and video and subtitles categories ps in general before downloading anything you need to fetch the file list and check there what else comes with the file or are you going to list all the accompanying data files for executables as well original issue | 1 |
486,151 | 14,005,111,816 | IssuesEvent | 2020-10-28 17:59:47 | AY2021S1-CS2113-T13-1/tp | https://api.github.com/repos/AY2021S1-CS2113-T13-1/tp | closed | Tag Events | priority.Medium type.Story | Tag Events so user can just list out events for the following tags (focus on the important events) | 1.0 | Tag Events - Tag Events so user can just list out events for the following tags (focus on the important events) | priority | tag events tag events so user can just list out events for the following tags focus on the important events | 1 |
518,471 | 15,028,909,503 | IssuesEvent | 2021-02-02 04:15:18 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Manifest versions need to be updated | Area: Platform > Library Effort: Low Priority: Medium Status: Ready for Development Type: Enhancement stale | ## Description of bug
Dnn 5 introduced manifest extension numbering (dnn vs dnn5); this was continued up to Dnn 7.
The purpose of this was to allow modules to have a slightly different installer depending on the version of Dnn they where getting installed on:
.dnn would apply to all versions before 5
.dnn5 would override the .dnn if on Dnn5
.dnn6 would win over both .dnn and .dnn5, but only if on Dnn6
So on and so forth.
This was maintained up to Dnn7; we should make sure to bring this back for both dnn9 and dnn10, as it could be extremely useful to offset breaking-change issues for developers. For instance, a developer can have a single install with views/dlls compiled for 9 or 10 and just install the right ones depending on which version it is getting installed on.
## Additional context
I was chatting with David Poindexter @david-poindexter about this, we have no time to test right now, but he was saying there was something not working correctly with this, we should definitely run some scenarios to ensure it does in fact work, a quick look in the code looks fine, but we need testing on that.
## Affected version
<!--
Please add X in at least one of the boxes as appropriate. In order for an issue to be accepted, a developer needs to be able to reproduce the issue on a currently supported version. If you are looking for a workaround for an issue with an older version, please visit the forums at https://dnncommunity.org/forums
-->
* [x] 10.0.0 alpha build
* [x] 9.5.0 alpha build
* [x] 9.4.4 latest supported release | 1.0 | Manifest versions need to be updated - ## Description of bug
Dnn 5 introduced manifest extension numbering (dnn vs dnn5); this was continued up to Dnn 7.
The purpose of this was to allow modules to have a slightly different installer depending on the version of Dnn they where getting installed on:
.dnn would apply to all versions before 5
.dnn5 would override the .dnn if on Dnn5
.dnn6 would win over both .dnn and .dnn5, but only if on Dnn6
So on and so forth.
This was maintained up to Dnn7; we should make sure to bring this back for both dnn9 and dnn10, as it could be extremely useful to offset breaking-change issues for developers. For instance, a developer can have a single install with views/dlls compiled for 9 or 10 and just install the right ones depending on which version it is getting installed on.
## Additional context
I was chatting with David Poindexter @david-poindexter about this, we have no time to test right now, but he was saying there was something not working correctly with this, we should definitely run some scenarios to ensure it does in fact work, a quick look in the code looks fine, but we need testing on that.
## Affected version
<!--
Please add X in at least one of the boxes as appropriate. In order for an issue to be accepted, a developer needs to be able to reproduce the issue on a currently supported version. If you are looking for a workaround for an issue with an older version, please visit the forums at https://dnncommunity.org/forums
-->
* [x] 10.0.0 alpha build
* [x] 9.5.0 alpha build
* [x] 9.4.4 latest supported release | priority | manifest versions need to be updated description of bug dnn introduced manifest extension numbering dnn vs this was continued up to dnn the purpose of this was to allow modules to have a slightly different installer depending on the version of dnn they where getting installed on dnn would apply to all versions before would override the dnn if on would win over both dnn and but only if on so on and so forth this was maintained up to we should make sure to bring this back for both and as it could be extremely useful to offset breaking changes issues on developers for instance a developer can have a single install with views dlls compiple for or and just install the right ones depending on which version it is getting installed additional context i was chatting with david poindexter david poindexter about this we have no time to test right now but he was saying there was something not working correctly with this we should definitely run some scenarios to ensure it does in fact work a quick look in the code looks fine but we need testing on that affected version please add x in at least one of the boxes as appropriate in order for an issue to be accepted a developer needs to be able to reproduce the issue on a currently supported version if you are looking for a workaround for an issue with an older version please visit the forums at alpha build alpha build latest supported release | 1 |
30,475 | 2,723,850,795 | IssuesEvent | 2015-04-14 14:50:50 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | Accept arbitrary time format on Timer widget | CruxWidgetLibrary enhancement imported Milestone-3.0.1 Priority-Medium | _From [ge...@cruxframework.org](https://code.google.com/u/108728025643241132101/) on October 05, 2010 11:53:16_
Accept arbitrary time format on Timer widget. Would be great having an attribute to inform the format to use.
For example, pattern="mm:ss"
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=198_ | 1.0 | Accept arbitrary time format on Timer widget - _From [ge...@cruxframework.org](https://code.google.com/u/108728025643241132101/) on October 05, 2010 11:53:16_
Accept arbitrary time format on Timer widget. Would be great having an attribute to inform the format to use.
For example, pattern="mm:ss"
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=198_ | priority | accept arbitrary time format on timer widget from on october accept arbitrary time format on timer widget would be great having an attribute to inform the format to use for example pattern mm ss original issue | 1 |
93,701 | 3,908,312,099 | IssuesEvent | 2016-04-19 15:30:26 | cytoscape/cytoscape.js | https://api.github.com/repos/cytoscape/cytoscape.js | reopened | Improve default stylesheet | priority-2-medium | - Use haystack edges by default
- Improve compound stylings a bit
- Increase edge thickness
- Use single, grey colour for nodes and edges
- Include selection colour for convenience
- Include border on compound backgrounds
- Try to keep style minimal and performant while balancing convenience | 1.0 | Improve default stylesheet - - Use haystack edges by default
- Improve compound stylings a bit
- Increase edge thickness
- Use single, grey colour for nodes and edges
- Include selection colour for convenience
- Include border on compound backgrounds
- Try to keep style minimal and performant while balancing convenience | priority | improve default stylesheet use haystack edges by default improve compound stylings a bit increase edge thickness use single grey colour for nodes and edges include selection colour for convenience include border on compound backgrounds try to keep style minimal and performant while balancing convenience | 1 |
408,301 | 11,944,626,090 | IssuesEvent | 2020-04-03 03:04:12 | esteemapp/esteem-surfer | https://api.github.com/repos/esteemapp/esteem-surfer | opened | dpoll.xyz links support | medium priority | `/detail/@forykw/what-shall-i-do-with-1190-steem/`
Support for opening dpoll.xyz posts
1. allow voting within Esteem or
2. open as external link for dpoll | 1.0 | dpoll.xyz links support - `/detail/@forykw/what-shall-i-do-with-1190-steem/`
Support for opening dpoll.xyz posts
1. allow voting within Esteem or
2. open as external link for dpoll | priority | dpoll xyz links support detail forykw what shall i do with steem support for opening dpoll xyz posts allow voting within esteem or open as external link for dpoll | 1 |
318,071 | 9,673,987,912 | IssuesEvent | 2019-05-22 08:53:10 | canonical-web-and-design/www.ubuntu.com | https://api.github.com/repos/canonical-web-and-design/www.ubuntu.com | closed | Some pages are indexed multiple times with utm_* parameter URLs | Priority: Medium Type: Enhancement | 1\. From any page on ubuntu.com, [search for “infographic”](https://www.ubuntu.com/search?q=infographic).
What happens: The first two results are pages that do not contain the word “infographic” at all:
- [Get Ubuntu | Download | Ubuntu](http://www.ubuntu.com/download?utm_source=Infographic&utm_medium=Infographic&utm_campaign=FY17Q1_Ubuntu_Infographic&)
- [The leading operating system for PCs, IoT devices, servers ...](http://www.ubuntu.com/?utm_source=Infographic&utm_medium=Link&utm_campaign=FY16Q%24_Ubuntu_Infgraphic&)
What should happen: These two pages should not be returned as results.
The pages are returned apparently because they include “infographic” in their URLs:
- …`?utm_source=Infographic&utm_medium=Infographic&utm_campaign=FY17Q1_Ubuntu_Infographic&`
- …`?utm_source=Infographic&utm_medium=Link&utm_campaign=FY16Q%24_Ubuntu_Infgraphic&` [sic]
But the same two pages are indexed separately without these URL parameters, as shown in these example searches:
- “[get ubuntu downloads](https://www.ubuntu.com/search?q=get+ubuntu+downloads)” (first result)
- “[the leading operating system](https://www.ubuntu.com/search?q=the+leading+operating+system)” (first result)
So, probably the indexer is following these links from somewhere else on ubuntu.com, and not realizing that they return exactly the same pages as the equivalent URLs without any parameters.
Besides causing irrelevant search results, this indexing is probably distorting the statistics of how often people follow those links in their original location.
I can think of three ways to fix it, though maybe there are others:
- Exclude any URL containing `?` from being indexed
- Exclude any URL containing `?utm_` or `&utm_` from being indexed
- Include `<link rel="canonical"`… on any page containing `?utm_` or `&utm_` in its URL.
---
*Reported from: https://www.ubuntu.com/search?q=infographic* | 1.0 | Some pages are indexed multiple times with utm_* parameter URLs - 1\. From any page on ubuntu.com, [search for “infographic”](https://www.ubuntu.com/search?q=infographic).
What happens: The first two results are pages that do not contain the word “infographic” at all:
- [Get Ubuntu | Download | Ubuntu](http://www.ubuntu.com/download?utm_source=Infographic&utm_medium=Infographic&utm_campaign=FY17Q1_Ubuntu_Infographic&)
- [The leading operating system for PCs, IoT devices, servers ...](http://www.ubuntu.com/?utm_source=Infographic&utm_medium=Link&utm_campaign=FY16Q%24_Ubuntu_Infgraphic&)
What should happen: These two pages should not be returned as results.
The pages are returned apparently because they include “infographic” in their URLs:
- …`?utm_source=Infographic&utm_medium=Infographic&utm_campaign=FY17Q1_Ubuntu_Infographic&`
- …`?utm_source=Infographic&utm_medium=Link&utm_campaign=FY16Q%24_Ubuntu_Infgraphic&` [sic]
But the same two pages are indexed separately without these URL parameters, as shown in these example searches:
- “[get ubuntu downloads](https://www.ubuntu.com/search?q=get+ubuntu+downloads)” (first result)
- “[the leading operating system](https://www.ubuntu.com/search?q=the+leading+operating+system)” (first result)
So, probably the indexer is following these links from somewhere else on ubuntu.com, and not realizing that they return exactly the same pages as the equivalent URLs without any parameters.
Besides causing irrelevant search results, this indexing is probably distorting the statistics of how often people follow those links in their original location.
I can think of three ways to fix it, though maybe there are others:
- Exclude any URL containing `?` from being indexed
- Exclude any URL containing `?utm_` or `&utm_` from being indexed
- Include `<link rel="canonical"`… on any page containing `?utm_` or `&utm_` in its URL.
---
*Reported from: https://www.ubuntu.com/search?q=infographic* | priority | some pages are indexed multiple times with utm parameter urls from any page on ubuntu com what happens the first two results are pages that do not contain the word “infographic” at all what should happen these two pages should not be returned as results the pages are returned apparently because they include “infographic” in their urls … utm source infographic utm medium infographic utm campaign ubuntu infographic … utm source infographic utm medium link utm campaign ubuntu infgraphic but the same two pages are indexed separately without these url parameters as shown in these example searches “ first result “ first result so probably the indexer is following these links from somewhere else on ubuntu com and not realizing that they return exactly the same pages as the equivalent urls without any parameters besides causing irrelevant search results this indexing is probably distorting the statistics of how often people follow those links in their original location i can think of three ways to fix it though maybe there are others exclude any url containing from being indexed exclude any url containing utm or utm from being indexed include link rel canonical … on any page containing utm or utm in its url reported from | 1 |
152,318 | 5,844,464,849 | IssuesEvent | 2017-05-10 12:00:13 | buttercup/buttercup-core | https://api.github.com/repos/buttercup/buttercup-core | closed | Easy method to get a URL from an entry | Effort: Low Priority: Medium Stability: No change Status: Blocked Type: Enhancement | Provide some easy way to get the URL from an entry. URLs may be stored in meta as:
* "URL"
* "url"
* "Url" | 1.0 | Easy method to get a URL from an entry - Provide some easy way to get the URL from an entry. URLs may be stored in meta as:
* "URL"
* "url"
* "Url" | priority | easy method to get a url from an entry provide some easy way to get the url from an entry urls may be stored in meta as url url url | 1 |
458,834 | 13,182,631,764 | IssuesEvent | 2020-08-12 16:05:50 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | closed | Add detail map to save window in map viewer | Priority: Medium enhancement | ### Description
While in #4391 we are going to reuse SaveModal as it is from Dashboards and stories, with this issue we want to add the datail card for the map also inside the viewer.
| 1.0 | Add detail map to save window in map viewer - ### Description
While in #4391 we are going to reuse SaveModal as it is from Dashboards and stories, with this issue we want to add the datail card for the map also inside the viewer.
| priority | add detail map to save window in map viewer description while in we are going to reuse savemodal as it is from dashboards and stories with this issue we want to add the datail card for the map also inside the viewer | 1 |
828,684 | 31,838,874,346 | IssuesEvent | 2023-09-14 15:02:58 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | closed | Remote: Enhance `datetime`, `date` and `timedelta` conversion | enhancement priority: medium backwards incompatible effort: small | XML-RPC used by the Remote API supports only [some data types](https://en.wikipedia.org/wiki/XML-RPC#Data_types). Currently the Remote library converts everything it doesn't recognize to strings, but there are several issues:
- Some types, at least `datetime`, are converted to strings even though XML-RPC actually supports them natively.
- Some types are converted to strings even though some other type would be better. For example, `timedelta` should probably be converted to a float by using `timedelta.total_seconds()`. Also `date` could be converted to `datetime` that then would be handled automatically.
- It's possible that with some types we should do some more formatting than just call `str()`.
The main motivation for this issues is making it easier for remote servers to convert values not supported by XML-RPC back to appropriate types. As an example, see robotframework/PythonRemoteServer#84.
In practice we need to go through all types that Robot's argument conversion supports and see what's the best way to handle them. | 1.0 | Remote: Enhance `datetime`, `date` and `timedelta` conversion - XML-RPC used by the Remote API supports only [some data types](https://en.wikipedia.org/wiki/XML-RPC#Data_types). Currently the Remote library converts everything it doesn't recognize to strings, but there are several issues:
- Some types, at least `datetime`, are converted to strings even though XML-RPC actually supports them natively.
- Some types are converted to strings even though some other type would be better. For example, `timedelta` should probably be converted to a float by using `timedelta.total_seconds()`. Also `date` could be converted to `datetime` that then would be handled automatically.
- It's possible that with some types we should do some more formatting than just call `str()`.
The main motivation for this issues is making it easier for remote servers to convert values not supported by XML-RPC back to appropriate types. As an example, see robotframework/PythonRemoteServer#84.
In practice we need to go through all types that Robot's argument conversion supports and see what's the best way to handle them. | priority | remote enhance datetime date and timedelta conversion xml rpc used by the remote api supports only currently the remote library converts everything it doesn t recognize to strings but there are several issues some types at least datetime are converted to strings even though xml rpc actually supports them natively some types are converted to strings even though some other type would be better for example timedelta should probably be converted to a float by using timedelta total seconds also date could be converted to datetime that then would be handled automatically it s possible that with some types we should do some more formatting than just call str the main motivation for this issues is making it easier for remote servers to convert values not supported by xml rpc back to appropriate types as an example see robotframework pythonremoteserver in practice we need to go through all types that robot s argument conversion supports and see what s the best way to handle them | 1 |
628,764 | 20,013,491,373 | IssuesEvent | 2022-02-01 09:38:54 | google/flax | https://api.github.com/repos/google/flax | closed | Updating linen_examples/wmt | Priority: P2 - medium | This issue tracks updates to the `wmt` example to follow practices outlined in #231.
- [x] Port to linen API - once ported all subsequent changes should be done in `linen_examples/wmt`
- [x] Update `README.md`
- [x] Add `requirements.txt`
- [x] Update file structure
- [x] Use `ml_collections.ConfigDict`
- [ ] Add benchmark test
- [x] Add unit test for training/eval step
- [ ] Add Colab
- [x] Adhere to Google Python style
- [ ] Add mypy annotations
- [ ] Shorten/beautify training loop (consider using `clu` for this)
| 1.0 | Updating linen_examples/wmt - This issue tracks updates to the `wmt` example to follow practices outlined in #231.
- [x] Port to linen API - once ported all subsequent changes should be done in `linen_examples/wmt`
- [x] Update `README.md`
- [x] Add `requirements.txt`
- [x] Update file structure
- [x] Use `ml_collections.ConfigDict`
- [ ] Add benchmark test
- [x] Add unit test for training/eval step
- [ ] Add Colab
- [x] Adhere to Google Python style
- [ ] Add mypy annotations
- [ ] Shorten/beautify training loop (consider using `clu` for this)
| priority | updating linen examples wmt this issue tracks updates the wmt example to follow practices outlined in port to linen api once ported all subsequent changes should be done in linen examples wmt update readme md add requirements txt update file structure use ml collections configdict add benchmark test add unit test for training eval step add colab adhere to google python style add mypy annotations shorten beautify training loop consider using clu for this | 1 |
274,867 | 8,568,881,015 | IssuesEvent | 2018-11-11 03:08:09 | lidarr/Lidarr | https://api.github.com/repos/lidarr/Lidarr | closed | Calendar not working | Area: UI Priority: Medium Status: Accepted Type: Bug | <!--
Before opening a new issue, please ensure:
- You use the discord for support/questions
- You search for existing bugs/feature requests
- Remove extraneous template details
-->
## Support / Questions
Please use https://discord.gg/8Y7rDc9 for support. Support requests or questions will be redirected to discord and the issue will be closed.
<!--
Remove if not opening a bug report
-->
## Bug Report
### System Information/Logs
**Lidarr Version:**
0.4.0.552
**Operating System:**
Windows 1803
**.net Framework (Windows) or mono (macOS/Linux) Version:**
**Link to Log Files (debug or trace):**
TypeError: Cannot read property 'percentOfTracks' of undefined
in CalendarEvent
in Connect(CalendarEvent)
in div
in div
in CalendarDay
in CalendarDayConnector
in Connect(CalendarDayConnector)
in div
in CalendarDays
in Connect(CalendarDays)
in div
in div
in Calendar
in CalendarConnector
in Connect(CalendarConnector)
in Measure
in Measure
in div
in div
in div
in Scrollbars
in OverlayScroller
in PageContentBody
in Connect(PageContentBody)
in div
in DocumentTitle
in SideEffect(DocumentTitle)
in ErrorBoundary
in PageContent
in CalendarPage
in Connect(CalendarPage)
in Route
in Switch
in Switch
in AppRoutes
in div
in div
in Page
in PageConnector
in Connect(PageConnector)
in Route
in withRouter(Connect(PageConnector))
in Router
in ConnectedRouter
in Provider
in DocumentTitle
in SideEffect(DocumentTitle)
in App
**Browser (for UI bugs):** Chrome
### Additional Information
| 1.0 | Calendar not working - <!--
Before opening a new issue, please ensure:
- You use the discord for support/questions
- You search for existing bugs/feature requests
- Remove extraneous template details
-->
## Support / Questions
Please use https://discord.gg/8Y7rDc9 for support. Support requests or questions will be redirected to discord and the issue will be closed.
<!--
Remove if not opening a bug report
-->
## Bug Report
### System Information/Logs
**Lidarr Version:**
0.4.0.552
**Operating System:**
Windows 1803
**.net Framework (Windows) or mono (macOS/Linux) Version:**
**Link to Log Files (debug or trace):**
TypeError: Cannot read property 'percentOfTracks' of undefined
in CalendarEvent
in Connect(CalendarEvent)
in div
in div
in CalendarDay
in CalendarDayConnector
in Connect(CalendarDayConnector)
in div
in CalendarDays
in Connect(CalendarDays)
in div
in div
in Calendar
in CalendarConnector
in Connect(CalendarConnector)
in Measure
in Measure
in div
in div
in div
in Scrollbars
in OverlayScroller
in PageContentBody
in Connect(PageContentBody)
in div
in DocumentTitle
in SideEffect(DocumentTitle)
in ErrorBoundary
in PageContent
in CalendarPage
in Connect(CalendarPage)
in Route
in Switch
in Switch
in AppRoutes
in div
in div
in Page
in PageConnector
in Connect(PageConnector)
in Route
in withRouter(Connect(PageConnector))
in Router
in ConnectedRouter
in Provider
in DocumentTitle
in SideEffect(DocumentTitle)
in App
**Browser (for UI bugs):** Chrome
### Additional Information
| priority | calendar not working before opening a new issue please ensure you use the discord for support questions you search for existing bugs feature requests remove extraneous template details support questions please use for support support requests or questions will be redirected to discord and the issue will be closed remove if not opening a bug report bug report system information logs lidarr version operating system windows net framework windows or mono macos linux version link to log files debug or trace typeerror cannot read property percentoftracks of undefined in calendarevent in connect calendarevent in div in div in calendarday in calendardayconnector in connect calendardayconnector in div in calendardays in connect calendardays in div in div in calendar in calendarconnector in connect calendarconnector in measure in measure in div in div in div in scrollbars in overlayscroller in pagecontentbody in connect pagecontentbody in div in documenttitle in sideeffect documenttitle in errorboundary in pagecontent in calendarpage in connect calendarpage in route in switch in switch in approutes in div in div in page in pageconnector in connect pageconnector in route in withrouter connect pageconnector in router in connectedrouter in provider in documenttitle in sideeffect documenttitle in app browser for ui bugs chrome additional information | 1 |
437,432 | 12,597,644,208 | IssuesEvent | 2020-06-11 00:26:44 | 2-of-clubs/2ofclubs | https://api.github.com/repos/2-of-clubs/2ofclubs | closed | JWT | Backend: Security Priority: Medium | - [x] Setup JWT to secure endpoints (Middleware)
- [x] Add Refresh Token (httponly cookie)
- [x] Caching JWT in browser (In memory only) | 1.0 | JWT - - [x] Setup JWT to secure endpoints (Middleware)
- [x] Add Refresh Token (httponly cookie)
- [x] Caching JWT in browser (In memory only) | priority | jwt setup jwt to secure endpoints middleware add refresh token httponly cookie caching jwt in browser in memory only | 1 |
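The token flow this issue describes (middleware-verified access token, httponly refresh cookie, access token held only in browser memory) can be sketched with the standard library alone. This is an illustrative HMAC-signed token, not the project's actual JWT implementation; `SECRET` is a placeholder and a real deployment would use a proper JWT library.

```python
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # placeholder signing key, not a real secret

def sign(claims):
    """Serialize and HMAC-sign a claims dict into a token string."""
    body = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.hex() + "." + tag

def verify(token):
    """Return the claims if the token is authentic and unexpired, else None."""
    try:
        body_hex, tag = token.split(".")
        body = bytes.fromhex(body_hex)
    except ValueError:
        return None
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None
    claims = json.loads(body)
    if claims.get("exp", 0) < time.time():
        return None  # expired: the client falls back to the refresh cookie
    return claims
```

A middleware would call `verify()` on each request and reject the request when it returns `None`, prompting the client to redeem the httponly refresh cookie for a new in-memory token.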
55,731 | 3,074,285,823 | IssuesEvent | 2015-08-20 05:52:09 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | robotiumRC:Error: Bad component name: android.test.InstrumentationTestRunner | bug imported Priority-Medium wontfix | _From [bingyuan...@gmail.com](https://code.google.com/u/113742791707324738050/) on October 30, 2013 22:11:37_
What steps will reproduce the problem? 1. I wrote a class file, according to the example in the wiki:
package com.android.test.rc;
import java.util.Properties;
import com.jayway.android.robotium.remotecontrol.solo.Message;
import com.jayway.android.robotium.remotecontrol.solo.SoloTest;
public class MyTest extends SoloTest{
public static final String DEFAULT_AUT_APK = "E:\\android\\test\\NotePad\\bin\\NotePad.apk";
public static final String messengerApk="C:\\robotiumrc\\SAFSTCPMessenger\\bin\\SAFSTCPMessenger-debug.apk";
public static final String testRunnerApk="C:\\robotiumrc\\RobotiumTestRunner\\bin\\RobotiumTestRunner-debug.apk";
public static final String instrumentArg="android.test.InstrumentationTestRunner";
public MyTest(){
super();
}
public MyTest(String[] args){
super(args);
}
public MyTest(String messengerApk, String testRunnerApk, String instrumentArg){
super(messengerApk, testRunnerApk, instrumentArg);
}
public static void main(String[] args){
SoloTest soloTest = new MyTest(messengerApk,testRunnerApk,instrumentArg);
soloTest.setAUTApk(DEFAULT_AUT_APK);
soloTest.process();
}
protected void test(){
try{
String activityID = solo.getCurrentActivity();
Properties props = solo._last_remote_result;
String activityName = props.getProperty(Message.PARAM_NAME);
String activityClass = props.getProperty(Message.PARAM_CLASS);
System.out.println("CurrentActivity UID: "+ activityID);
System.out.println("CurrentActivity Class: "+ activityClass);
System.out.println("CurrentActivity Name: "+ activityName);
}catch(Exception e){
e.printStackTrace();
}
}
}
2. Run as a Java application:
Attempting to initialize Android Tools...
C:\Program Files\Android\android-sdk doesn't exist.
Setting Android Tools SDK Dir to E:\android\adt-bundle-windows-x86-20130917\adt-bundle-windows-x86-20130917\sdk
SoloTest INFO: Detected 1 device/emulators attached.
SoloTest DEBUG: INSTALLING E:\android\test\NotePad\bin\NotePad.apk
INSTALLING E:\android\test\NotePad\bin\NotePad.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@1a7789c
pkg: /data/local/tmp/NotePad.apk
Success
408 KB/s (65420 bytes in 0.156s)
ADB Install command successful.
SoloTest DEBUG: INSTALLING C:\robotiumrc\SAFSTCPMessenger\bin\SAFSTCPMessenger-debug.apk
INSTALLING C:\robotiumrc\SAFSTCPMessenger\bin\SAFSTCPMessenger-debug.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@6db33c
pkg: /data/local/tmp/SAFSTCPMessenger-debug.apk
Success
312 KB/s (34993 bytes in 0.109s)
ADB Install command successful.
SoloTest DEBUG: INSTALLING C:\robotiumrc\RobotiumTestRunner\bin\RobotiumTestRunner-debug.apk
INSTALLING C:\robotiumrc\RobotiumTestRunner\bin\RobotiumTestRunner-debug.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@1dafb4e
pkg: /data/local/tmp/RobotiumTestRunner-debug.apk
Success
395 KB/s (126654 bytes in 0.312s)
ADB Install command successful.
LAUNCHING android.test.InstrumentationTestRunner
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
usage: am [subcommand] [options]
usage: am start [-D] [-W] [-P \<FILE>] [--start-profiler \<FILE>]
[--R COUNT] [-S] [--opengl-trace]
[--user \<USER_ID> | current] \<INTENT>
am startservice [--user \<USER_ID> | current] \<INTENT>
am force-stop [--user \<USER_ID> | all | current] \<PACKAGE>
am kill [--user \<USER_ID> | all | current] \<PACKAGE>
am kill-all
am broadcast [--user \<USER_ID> | all | current] \<INTENT>
am instrument [-r] [-e \<NAME> \<VALUE>] [-p \<FILE>] [-w]
[--user \<USER_ID> | current]
[--no-window-animation] \<COMPONENT>
am profile start [--user \<USER_ID> current] \<PROCESS> \<FILE>
am profile stop [--user \<USER_ID> current] [<PROCESS>]
am dumpheap [--user \<USER_ID> current] [-n] \<PROCESS> \<FILE>
am set-debug-app [-w] [--persistent] \<PACKAGE>
am clear-debug-app
am monitor [--gdb \<port>]
am hang [--allow-restart]
am screen-compat [on|off] \<PACKAGE>
am to-uri [INTENT]
am to-intent-uri [INTENT]
am switch-user \<USER_ID>
am stop-user \<USER_ID>
am start: start an Activity. Options are:
-D: enable debugging
-W: wait for launch to complete
--start-profiler \<FILE>: start profiler and send results to \<FILE>
-P \<FILE>: like above, but profiling stops when app goes idle
-R: repeat the activity launch \<COUNT> times. Prior to each repeat,
the top activity will be finished.
-S: force stop the target app before starting the activity
--opengl-trace: enable tracing of OpenGL functions
--user \<USER_ID> | current: Specify which user to run as; if not
specified then run as the current user.
am startservice: start a Service. Options are:
--user \<USER_ID> | current: Specify which user to run as; if not
specified then run as the current user.
am force-stop: force stop everything associated with \<PACKAGE>.
--user \<USER_ID> | all | current: Specify user to force stop;
all users if not specified.
am kill: Kill all processes associated with \<PACKAGE>. Only kills.
processes that are safe to kill -- that is, will not impact the user
experience.
--user \<USER_ID> | all | current: Specify user whose processes to kill;
all users if not specified.
am kill-all: Kill all background processes.
am broadcast: send a broadcast Intent. Options are:
--user \<USER_ID> | all | current: Specify which user to send to; if not
specified then send to all users.
--receiver-permission \<PERMISSION>: Require receiver to hold permission.
am instrument: start an Instrumentation. Typically this target \<COMPONENT>
is the form \<TEST_PACKAGE>/<RUNNER_CLASS>. Options are:
-r: print raw results (otherwise decode REPORT_KEY_STREAMRESULT). Use with
[-e perf true] to generate raw output for performance measurements.
-e \<NAME> \<VALUE>: set argument \<NAME> to \<VALUE>. For test runners a
common form is [-e \<testrunner_flag> \<value>[,<value>...]].
-p \<FILE>: write profiling data to \<FILE>
-w: wait for instrumentation to finish before returning. Required for
test runners.
--user \<USER_ID> | current: Specify user instrumentation runs in;
...
_Original issue: http://code.google.com/p/robotium/issues/detail?id=545_ | 1.0 | robotiumRC:Error: Bad component name: android.test.InstrumentationTestRunner - _From [bingyuan...@gmail.com](https://code.google.com/u/113742791707324738050/) on October 30, 2013 22:11:37_
What steps will reproduce the problem? 1. I wrote a class file, according to the example in the wiki:
package com.android.test.rc;
import java.util.Properties;
import com.jayway.android.robotium.remotecontrol.solo.Message;
import com.jayway.android.robotium.remotecontrol.solo.SoloTest;
public class MyTest extends SoloTest{
public static final String DEFAULT_AUT_APK = "E:\\android\\test\\NotePad\\bin\\NotePad.apk";
public static final String messengerApk="C:\\robotiumrc\\SAFSTCPMessenger\\bin\\SAFSTCPMessenger-debug.apk";
public static final String testRunnerApk="C:\\robotiumrc\\RobotiumTestRunner\\bin\\RobotiumTestRunner-debug.apk";
public static final String instrumentArg="android.test.InstrumentationTestRunner";
public MyTest(){
super();
}
public MyTest(String[] args){
super(args);
}
public MyTest(String messengerApk, String testRunnerApk, String instrumentArg){
super(messengerApk, testRunnerApk, instrumentArg);
}
public static void main(String[] args){
SoloTest soloTest = new MyTest(messengerApk,testRunnerApk,instrumentArg);
soloTest.setAUTApk(DEFAULT_AUT_APK);
soloTest.process();
}
protected void test(){
try{
String activityID = solo.getCurrentActivity();
Properties props = solo._last_remote_result;
String activityName = props.getProperty(Message.PARAM_NAME);
String activityClass = props.getProperty(Message.PARAM_CLASS);
System.out.println("CurrentActivity UID: "+ activityID);
System.out.println("CurrentActivity Class: "+ activityClass);
System.out.println("CurrentActivity Name: "+ activityName);
}catch(Exception e){
e.printStackTrace();
}
}
}
2. Run as a Java application:
Attempting to initialize Android Tools...
C:\Program Files\Android\android-sdk doesn't exist.
Setting Android Tools SDK Dir to E:\android\adt-bundle-windows-x86-20130917\adt-bundle-windows-x86-20130917\sdk
SoloTest INFO: Detected 1 device/emulators attached.
SoloTest DEBUG: INSTALLING E:\android\test\NotePad\bin\NotePad.apk
INSTALLING E:\android\test\NotePad\bin\NotePad.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@1a7789c
pkg: /data/local/tmp/NotePad.apk
Success
408 KB/s (65420 bytes in 0.156s)
ADB Install command successful.
SoloTest DEBUG: INSTALLING C:\robotiumrc\SAFSTCPMessenger\bin\SAFSTCPMessenger-debug.apk
INSTALLING C:\robotiumrc\SAFSTCPMessenger\bin\SAFSTCPMessenger-debug.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@6db33c
pkg: /data/local/tmp/SAFSTCPMessenger-debug.apk
Success
312 KB/s (34993 bytes in 0.109s)
ADB Install command successful.
SoloTest DEBUG: INSTALLING C:\robotiumrc\RobotiumTestRunner\bin\RobotiumTestRunner-debug.apk
INSTALLING C:\robotiumrc\RobotiumTestRunner\bin\RobotiumTestRunner-debug.apk
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
ATTEMPTING ADB Install command: adb [Ljava.lang.String;@1dafb4e
pkg: /data/local/tmp/RobotiumTestRunner-debug.apk
Success
395 KB/s (126654 bytes in 0.312s)
ADB Install command successful.
LAUNCHING android.test.InstrumentationTestRunner
Checking for devices going offline...
Checking 1 for 'offline' status....
No 'offline' devices detected.
usage: am [subcommand] [options]
usage: am start [-D] [-W] [-P \<FILE>] [--start-profiler \<FILE>]
[--R COUNT] [-S] [--opengl-trace]
[--user \<USER_ID> | current] \<INTENT>
am startservice [--user \<USER_ID> | current] \<INTENT>
am force-stop [--user \<USER_ID> | all | current] \<PACKAGE>
am kill [--user \<USER_ID> | all | current] \<PACKAGE>
am kill-all
am broadcast [--user \<USER_ID> | all | current] \<INTENT>
am instrument [-r] [-e \<NAME> \<VALUE>] [-p \<FILE>] [-w]
[--user \<USER_ID> | current]
[--no-window-animation] \<COMPONENT>
am profile start [--user \<USER_ID> current] \<PROCESS> \<FILE>
am profile stop [--user \<USER_ID> current] [<PROCESS>]
am dumpheap [--user \<USER_ID> current] [-n] \<PROCESS> \<FILE>
am set-debug-app [-w] [--persistent] \<PACKAGE>
am clear-debug-app
am monitor [--gdb \<port>]
am hang [--allow-restart]
am screen-compat [on|off] \<PACKAGE>
am to-uri [INTENT]
am to-intent-uri [INTENT]
am switch-user \<USER_ID>
am stop-user \<USER_ID>
am start: start an Activity. Options are:
-D: enable debugging
-W: wait for launch to complete
--start-profiler \<FILE>: start profiler and send results to \<FILE>
-P \<FILE>: like above, but profiling stops when app goes idle
-R: repeat the activity launch \<COUNT> times. Prior to each repeat,
the top activity will be finished.
-S: force stop the target app before starting the activity
--opengl-trace: enable tracing of OpenGL functions
--user \<USER_ID> | current: Specify which user to run as; if not
specified then run as the current user.
am startservice: start a Service. Options are:
--user \<USER_ID> | current: Specify which user to run as; if not
specified then run as the current user.
am force-stop: force stop everything associated with \<PACKAGE>.
--user \<USER_ID> | all | current: Specify user to force stop;
all users if not specified.
am kill: Kill all processes associated with \<PACKAGE>. Only kills.
processes that are safe to kill -- that is, will not impact the user
experience.
--user \<USER_ID> | all | current: Specify user whose processes to kill;
all users if not specified.
am kill-all: Kill all background processes.
am broadcast: send a broadcast Intent. Options are:
--user \<USER_ID> | all | current: Specify which user to send to; if not
specified then send to all users.
--receiver-permission \<PERMISSION>: Require receiver to hold permission.
am instrument: start an Instrumentation. Typically this target \<COMPONENT>
is the form \<TEST_PACKAGE>/<RUNNER_CLASS>. Options are:
-r: print raw results (otherwise decode REPORT_KEY_STREAMRESULT). Use with
[-e perf true] to generate raw output for performance measurements.
-e \<NAME> \<VALUE>: set argument \<NAME> to \<VALUE>. For test runners a
common form is [-e \<testrunner_flag> \<value>[,<value>...]].
-p \<FILE>: write profiling data to \<FILE>
-w: wait for instrumentation to finish before returning. Required for
test runners.
--user \<USER_ID> | current: Specify user instrumentation runs in;
...
_Original issue: http://code.google.com/p/robotium/issues/detail?id=545_ | priority | robotiumrc error bad component name android test instrumentationtestrunner from on october what steps will reproduce the problem i writed a class file,according to example in wiki package com android test rc import java util properties import com jayway android robotium remotecontrol solo message import com jayway android robotium remotecontrol solo solotest public class mytest extends solotest public static final string default aut apk e android test notepad bin notepad apk public static final string messengerapk c robotiumrc safstcpmessenger bin safstcpmessenger debug apk public static final string testrunnerapk c robotiumrc robotiumtestrunner bin robotiumtestrunner debug apk public static final string instrumentarg android test instrumentationtestrunner public mytest super public mytest string args super args public mytest string messengerapk string testrunnerapk string instrumentarg super messengerapk testrunnerapk instrumentarg public static void main string args solotest solotest new mytest messengerapk testrunnerapk instrumentarg solotest setautapk default aut apk solotest process protected void test try string activityid solo getcurrentactivity properties props solo last remote result string activityname props getproperty message param name string activityclass props getproperty message param class system out println currentactivity uid activityid system out println currentactivity class activityclass system out println currentactivity name activityname catch exception e e printstacktrace run as java application: attempting to initialize android tools c program files android android sdk doesn t exist setting android tools sdk dir to e android adt bundle windows adt bundle windows sdk solotest info detected device emulators attached solotest debug installing e android test notepad bin notepad apk installing e android test notepad bin notepad apk checking for devices going 
offline checking for offline status no offline devices detected attempting adb install command adb ljava lang string pkg data local tmp notepad apk success kb s bytes in adb install command successful solotest debug installing c robotiumrc safstcpmessenger bin safstcpmessenger debug apk installing c robotiumrc safstcpmessenger bin safstcpmessenger debug apk checking for devices going offline checking for offline status no offline devices detected attempting adb install command adb ljava lang string pkg data local tmp safstcpmessenger debug apk success kb s bytes in adb install command successful solotest debug installing c robotiumrc robotiumtestrunner bin robotiumtestrunner debug apk installing c robotiumrc robotiumtestrunner bin robotiumtestrunner debug apk checking for devices going offline checking for offline status no offline devices detected attempting adb install command adb ljava lang string pkg data local tmp robotiumtestrunner debug apk success kb s bytes in adb install command successful launching android test instrumentationtestrunner checking for devices going offline checking for offline status no offline devices detected usage am usage am start am startservice am force stop am kill am kill all am broadcast am instrument am profile start am profile stop am dumpheap am set debug app am clear debug app am monitor am hang am screen compat am to uri am to intent uri am switch user am stop user am start start an activity options are d enable debugging w wait for launch to complete start profiler start profiler and send results to p like above but profiling stops when app goes idle r repeat the activity launch times prior to each repeat the top activity will be finished s force stop the target app before starting the activity opengl trace enable tracing of opengl functions user current specify which user to run as if not specified then run as the current user am startservice start a service options are user current specify which user to run as if not 
specified then run as the current user am force stop force stop everything associated with user all current specify user to force stop all users if not specified am kill kill all processes associated with only kills processes that are safe to kill that is will not impact the user experience user all current specify user whose processes to kill all users if not specified am kill all kill all background processes am broadcast send a broadcast intent options are user all current specify which user to send to if not specified then send to all users receiver permission require receiver to hold permission am instrument start an instrumentation typically this target is the form options are r print raw results otherwise decode report key streamresult use with to generate raw output for performance measurements e set argument to for test runners a common form is p write profiling data to w wait for instrumentation to finish before returning required for test runners user current specify user instrumentation runs in original issue | 1 |
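The "Bad component name" failure in the title follows from the `am instrument` usage text above: the component must be given as `<TEST_PACKAGE>/<RUNNER_CLASS>`, while only the runner class was passed here. A hedged sketch of building the full command (the package name in the usage example below is illustrative):

```python
def instrument_command(test_package,
                       runner_class="android.test.InstrumentationTestRunner"):
    """Build the `adb shell am instrument` argv with a well-formed component.

    `am` expects <TEST_PACKAGE>/<RUNNER_CLASS>; passing only the runner
    class, as the failing run above did, triggers "Bad component name".
    """
    component = "{0}/{1}".format(test_package, runner_class)
    return ["adb", "shell", "am", "instrument", "-w", component]
```

For example, `instrument_command("com.example.tests")` yields an argv ending in `com.example.tests/android.test.InstrumentationTestRunner`.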
176,666 | 6,562,200,098 | IssuesEvent | 2017-09-07 15:44:36 | edenlabllc/ehealth.api | https://api.github.com/repos/edenlabllc/ehealth.api | closed | Pharmacy registration | epic/legal-entity kind/user_story priority/medium project/reimbursement | As a Pharmacy Owner
I want to be able to register a legal_entity (pharmacy) in the eHealth system
So that I can start working on the reimbursement program
- [x] Update legal_entity (pharmacy) DM #699
- [x] Update Create legal entity WS #700
- [x] Update Get legal entity list WS #701
- [x] Update Get legal entity details WS #702
- [x] Update roles and scopes for new legal entity type
- [x] Add/Update dictionaries (client_types etc)
- [x] Improve validation rules (schema, dictionary validation etc)
- [x] Update deactivate legal entity WS #704 | 1.0 | Pharmacy registration - As a Pharmacy Owner
I want to be able to register a legal_entity (pharmacy) in the eHealth system
So that I can start working on the reimbursement program
- [x] Update legal_entity (pharmacy) DM #699
- [x] Update Create legal entity WS #700
- [x] Update Get legal entity list WS #701
- [x] Update Get legal entity details WS #702
- [x] Update roles and scopes for new legal entity type
- [x] Add/Update dictionaries (client_types etc)
- [x] Improve validation rules (schema, dictionary validation etc)
- [x] Update deactivate legal entity WS #704 | priority | pharmacy registration as a pharmacy owner i want be able to register legal entity pharmacy in ehealth system so that i can start working on reimbursement program update legal entity pharmacy dm update create legal entity ws update get legal entity list ws update get legal entity details ws update roles and scopes for new legal entity type add update dictionaries clinet types etc improve validation rules schema dictionary validation etc update deactivate legal entity ws | 1 |
791,385 | 27,862,018,782 | IssuesEvent | 2023-03-21 07:22:33 | AY2223S2-CS2113-T11-2/tp | https://api.github.com/repos/AY2223S2-CS2113-T11-2/tp | closed | Add CAP calculator for list | type.Story priority.Medium | - Hashmap already coded out in Grade class
- Check through taken mods and add grade and mc, if grade is SU then skip
- Formula to calculate overall cap
| 1.0 | Add CAP calculator for list - - Hashmap already coded out in Grade class
- Check through taken mods and add grade and mc, if grade is SU then skip
- Formula to calculate overall cap
| priority | add cap calculator for list hashmap already coded out in grade class check through taken mods and add grade and mc if grade is su then skip formula to calculate overall cap | 1 |
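The formula asked for in the last bullet is a credit-weighted average that skips SU'd modules. A sketch, assuming a NUS-style grade-point map; the real mapping lives in the `Grade` class hashmap mentioned above:

```python
# Hypothetical grade-point map; the actual values come from the Grade class.
GRADE_POINTS = {"A+": 5.0, "A": 5.0, "A-": 4.5, "B+": 4.0, "B": 3.5,
                "B-": 3.0, "C+": 2.5, "C": 2.0, "D+": 1.5, "D": 1.0, "F": 0.0}

def overall_cap(taken_mods):
    """taken_mods: iterable of (grade, mc) pairs; SU'd modules are skipped."""
    total_points = 0.0
    total_mcs = 0
    for grade, mc in taken_mods:
        if grade == "SU":
            continue  # excluded from both numerator and denominator
        total_points += GRADE_POINTS[grade] * mc
        total_mcs += mc
    return total_points / total_mcs if total_mcs else 0.0
```

For instance, an A (4 MC) and a B (4 MC) give (5.0*4 + 3.5*4) / 8 = 4.25.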
613,824 | 19,099,646,714 | IssuesEvent | 2021-11-29 20:48:34 | carbon-design-system/carbon-for-ibm-dotcom | https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom | closed | React: Change Link list React version to a wrapper of the Web Components version | package: react dev priority: medium adopter support icebox | _**This work is a follow-on to the development work done to create / change the same Web Components version. The Web Components development was managed under epic (#3369)**_
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
IBM.com Library developer
> I need to:
create/change the `Link list Reboot requested enhancement`
> so that I can:
provide the ibm.com adopter developers components they can use to build ibm.com web pages
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- This React component will be developed as a wrapper of the Web Component version.
- **This work can not begin until the beginning of 2021 due to a technical environment dependency.**
- **See the Epic for the Design and Functional specs information**
- Design QA testing issue (#3354)
- Prod QA testing issue (#3355)
#### Acceptance criteria
- [ ] Include README for the react component and corresponding styles
- [ ] Do not create knobs in Storybook that include JSON objects
- [ ] Break out Storybook stories into multiple variation stories, if applicable
- [ ] Create matching stable selectors in React component from the corresponding Web Component
- [ ] Create codesandbox example under `/packages/react/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] The Storybook link is added to the Design QA issue for their testing
- [ ] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when development is finished
| 1.0 | React: Change Link list React version to a wrapper of the Web Components version - _**This work is a follow-on to the development work done to create / change the same Web Components version. The Web Components development was managed under epic (#3369)**_
#### User Story
<!-- {{Provide a detailed description of the user's need here, but avoid any type of solutions}} -->
> As a `[user role below]`:
IBM.com Library developer
> I need to:
create/change the `Link list Reboot requested enhancement`
> so that I can:
provide the ibm.com adopter developers components they can use to build ibm.com web pages
#### Additional information
<!-- {{Please provide any additional information or resources for reference}} -->
- Story within Storybook with corresponding knobs
- Utilize Carbon
- This React component will be developed as a wrapper of the Web Component version.
- **This work can not begin until the beginning of 2021 due to a technical environment dependency.**
- **See the Epic for the Design and Functional specs information**
- Design QA testing issue (#3354)
- Prod QA testing issue (#3355)
#### Acceptance criteria
- [ ] Include README for the react component and corresponding styles
- [ ] Do not create knobs in Storybook that include JSON objects
- [ ] Break out Storybook stories into multiple variation stories, if applicable
- [ ] Create matching stable selectors in React component from the corresponding Web Component
- [ ] Create codesandbox example under `/packages/react/examples/codesandbox` and include in README
- [ ] Minimum 80% unit test coverage
- [ ] The Storybook link is added to the Design QA issue for their testing
- [ ] A comment is posted in the Prod QA issue, tagging Praveen and Chetan, when development is finished
| priority | react change link list react version to a wrapper of the web components version this work is a follow on to the development work done to create change the same web components version the web components development was managed under epic user story as a ibm com library developer i need to create change the link list reboot requested enhancement so that i can provide the ibm com adopter developers components they can use to build ibm com web pages additional information story within storybook with corresponding knobs utilize carbon this react component will be developed as a wrapper of the web component version this work can not begin until the beginning of due to a technical environment dependency see the epic for the design and functional specs information design qa testing issue prod qa testing issue acceptance criteria include readme for the react component and corresponding styles do not create knobs in storybook that include json objects break out storybook stories into multiple variation stories if applicable create matching stable selectors in react component from the corresponding web component create codesandbox example under packages react examples codesandbox and include in readme minimum unit test coverage the storybook link is added to the design qa issue for their testing a comment is posted in the prod qa issue tagging praveen and chetan when development is finished | 1 |
242,406 | 7,841,900,365 | IssuesEvent | 2018-06-18 21:09:48 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Need to do something about gif layers | Medium Priority Optimization | marmot has over 1GB of layer GIFs
these would only be like 10MB if they were mp4's or something similar - we should look into using FFMPEG or something similar | 1.0 | Need to do something about gif layers - marmot has over 1GB of layer GIFs
these would only be like 10MB if they were mp4's or something similar - we should look into using FFMPEG or something similar | priority | need to do something about gif layers marmot has over of layer gifs these would only be like if they were s or something similar we should look into using ffmpeg or something similar | 1 |
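The record above suggests re-encoding layer GIFs to MP4 with FFmpeg. A minimal sketch of that idea, only building the FFmpeg command line (the file names are hypothetical examples, not the project's actual pipeline):

```python
# Sketch: build an FFmpeg command to re-encode an animated GIF as MP4.
# File names below are hypothetical; nothing here is project-specific.

def gif_to_mp4_cmd(src: str, dst: str) -> list[str]:
    """Return an ffmpeg argv list converting a GIF to a compact MP4."""
    return [
        "ffmpeg",
        "-i", src,                 # input GIF
        "-movflags", "faststart",  # put moov atom first for streaming
        "-pix_fmt", "yuv420p",     # widely compatible pixel format
        dst,
    ]

cmd = gif_to_mp4_cmd("layer_0001.gif", "layer_0001.mp4")
print(" ".join(cmd))
```

Running the returned command requires FFmpeg to be installed; the sketch deliberately stops at command construction.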
210,910 | 7,196,504,005 | IssuesEvent | 2018-02-05 03:26:58 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [7.0.0 Beta][Steam] Whitelists, Blacklists, Adminlists etc linked to username | Medium Priority | Hey,
Till now, the whitelist, blacklist and adminlist were filled with the Eco username, which worked fine. Now, as Eco (seemingly) uses the Steam name to identify the user ingame, whitelists, blacklists and adminlists can be easily tricked.
Would be cool if this can be linked to the steamID64 (or the Eco username).
Cheers,
Niklas | 1.0 | [7.0.0 Beta][Steam] Whitelists, Blacklists, Adminlists etc linked to username - Hey,
Till now, the whitelist, blacklist and adminlist were filled with the Eco username, which worked fine. Now, as Eco (seemingly) uses the Steam name to identify the user ingame, whitelists, blacklists and adminlists can be easily tricked.
Would be cool if this can be linked to the steamID64 (or the Eco username).
Cheers,
Niklas | priority | whitelists blacklists adminlists etc linked to username hey till now the whitelist blacklist and adminlist where filled with the eco username which worked fine now as eco seems to use the steam name to identify the user ingame whitelists blacklists and adminlists can be easily tricked would be cool if this can be linked to the or the eco username cheers niklas | 1 |
154,930 | 5,939,843,275 | IssuesEvent | 2017-05-25 07:05:21 | pmem/issues | https://api.github.com/repos/pmem/issues | closed | unit tests: unicode_api/TEST0: SETUP (all/pmem/debug/memcheck) fails | Exposure: Low OS: Linux Priority: 3 medium State: To be verified Type: Bug | Found on revision: 992c9e04ef64f86849752367184dffc6a6fbbe97
> unicode_api/TEST0: SETUP (all/pmem/debug/memcheck)
> RUNTESTS: stopping: unicode_api/TEST0 failed, TEST=all FS=any BUILD=debug
> ../Makefile.inc:341: recipe for target 'TEST0' failed
> make[3]: *** [TEST0] Error 1
> make[3]: Target 'pcheck' not remade because of errors.
> make[3]: Leaving directory '/home/jenkins/workspace/nvml_tests_memcheck_ubuntu/src/test/unicode_api'
> Makefile:387: recipe for target 'unicode_api' failed
> make[2]: *** [unicode_api] Error 2 | 1.0 | unit tests: unicode_api/TEST0: SETUP (all/pmem/debug/memcheck) fails - Found on revision: 992c9e04ef64f86849752367184dffc6a6fbbe97
> unicode_api/TEST0: SETUP (all/pmem/debug/memcheck)
> RUNTESTS: stopping: unicode_api/TEST0 failed, TEST=all FS=any BUILD=debug
> ../Makefile.inc:341: recipe for target 'TEST0' failed
> make[3]: *** [TEST0] Error 1
> make[3]: Target 'pcheck' not remade because of errors.
> make[3]: Leaving directory '/home/jenkins/workspace/nvml_tests_memcheck_ubuntu/src/test/unicode_api'
> Makefile:387: recipe for target 'unicode_api' failed
> make[2]: *** [unicode_api] Error 2 | priority | unit tests unicode api setup all pmem debug memcheck fails found on revision unicode api setup all pmem debug memcheck runtests stopping unicode api failed test all fs any build debug makefile inc recipe for target failed make error make target pcheck not remade because of errors make leaving directory home jenkins workspace nvml tests memcheck ubuntu src test unicode api makefile recipe for target unicode api failed make error | 1 |
304,926 | 9,345,141,110 | IssuesEvent | 2019-03-30 04:26:48 | lbryio/lbry | https://api.github.com/repos/lbryio/lbry | closed | Claim update not able to use existing deposit sometimes | area: wallet needs: repro priority: medium type: bug | <!--
Thanks for reporting an issue to LBRY and helping us improve!
To make it possible for us to help you, please fill out below information carefully.
Before reporting any issues, please make sure that you're using the latest version.
- App: https://github.com/lbryio/lbry-desktop/releases
- Daemon: https://github.com/lbryio/lbry/releases
We are also available on Discord at https://chat.lbry.io
-->
## The Issue
We have a community member that is unable to update their claim, which has a deposit of 4000 LBC, as the update process is not able to use the existing claim balance. He's told that he needs to decrease the claim amount to 60 LBC, which is his balance. He's able to update the claim by lowering the bid amount, and then update again using the 4000 LBC. You can see the chain here:https://explorer.lbry.io/tx/e84ea3eab102d3e64e9928f2c97ba094885254d9a5732dc22b6aa61b6e291376#output-0
I am not able to reproduce locally in a similar scenario but haven't tried higher LBC amounts. The lookup happens here:
https://github.com/lbryio/lbry/blob/master/lbrynet/extras/daemon/Daemon.py#L2232
I originally thought this might be related to the RC wallet sync bug, but we re-synced from scratch and he still ran into the problem.
## System Configuration
<!-- For the app, this info is in the About section at the bottom of the Help page.
You can include a screenshot instead of typing it out -->
<!-- For the daemon, run:
curl 'http://localhost:5279' --data '{"method":"version"}'
and include the full output -->
- LBRY Daemon version:
- LBRY App version:
- LBRY Installation ID:
- Operating system:
## Anything Else
<!-- Include anything else that does not fit into the above sections -->
## Screenshots
<!-- If a screenshot would help explain the bug, please include one or two here -->
## Internal Use
### Acceptance Criteria
1.
2.
3.
### Definition of Done
- [ ] Tested against acceptance criteria
- [ ] Tested against the assumptions of user story
- [ ] The project builds without errors
- [ ] Unit tests are written and passing
- [ ] Tests on devices/browsers listed in the issue have passed
- [ ] QA performed & issues resolved
- [ ] Refactoring completed
- [ ] Any configuration or build changes documented
- [ ] Documentation updated
- [ ] Peer Code Review performed
| 1.0 | Claim update not able to use existing deposit sometimes - <!--
Thanks for reporting an issue to LBRY and helping us improve!
To make it possible for us to help you, please fill out below information carefully.
Before reporting any issues, please make sure that you're using the latest version.
- App: https://github.com/lbryio/lbry-desktop/releases
- Daemon: https://github.com/lbryio/lbry/releases
We are also available on Discord at https://chat.lbry.io
-->
## The Issue
We have a community member that is unable to update their claim, which has a deposit of 4000 LBC, as the update process is not able to use the existing claim balance. He's told that he needs to decrease the claim amount to 60 LBC, which is his balance. He's able to update the claim by lowering the bid amount, and then update again using the 4000 LBC. You can see the chain here:https://explorer.lbry.io/tx/e84ea3eab102d3e64e9928f2c97ba094885254d9a5732dc22b6aa61b6e291376#output-0
I am not able to reproduce locally in a similar scenario but haven't tried higher LBC amounts. The lookup happens here:
https://github.com/lbryio/lbry/blob/master/lbrynet/extras/daemon/Daemon.py#L2232
I originally thought this might be related to the RC wallet sync bug, but we re-synced from scratch and he still ran into the problem.
## System Configuration
<!-- For the app, this info is in the About section at the bottom of the Help page.
You can include a screenshot instead of typing it out -->
<!-- For the daemon, run:
curl 'http://localhost:5279' --data '{"method":"version"}'
and include the full output -->
- LBRY Daemon version:
- LBRY App version:
- LBRY Installation ID:
- Operating system:
## Anything Else
<!-- Include anything else that does not fit into the above sections -->
## Screenshots
<!-- If a screenshot would help explain the bug, please include one or two here -->
## Internal Use
### Acceptance Criteria
1.
2.
3.
### Definition of Done
- [ ] Tested against acceptance criteria
- [ ] Tested against the assumptions of user story
- [ ] The project builds without errors
- [ ] Unit tests are written and passing
- [ ] Tests on devices/browsers listed in the issue have passed
- [ ] QA performed & issues resolved
- [ ] Refactoring completed
- [ ] Any configuration or build changes documented
- [ ] Documentation updated
- [ ] Peer Code Review performed
| priority | claim update not able to use existing deposit sometimes thanks for reporting an issue to lbry and helping us improve to make it possible for us to help you please fill out below information carefully before reporting any issues please make sure that you re using the latest version app daemon we are also available on discord at the issue we have a community member that is unable to update their claim which has a deposit of lbc as the update process is not able to use the existing claim balance he s told that he needs to decrease the claim amount to lbc which is his balance he s able to update the claim by lowering the bid amount and then update again using the lbc you can see the chain here i am not able to reproduce locally in a similar scenario but haven t tried higher lbc amounts the lookup happens here i originally thought this might be related to the rc wallet sync bug but we re synced from scratch and he still ran into the problem system configuration for the app this info is in the about section at the bottom of the help page you can include a screenshot instead of typing it out for the daemon run curl data method version and include the full output lbry daemon version lbry app version lbry installation id operating system anything else screenshots internal use acceptance criteria definition of done tested against acceptance criteria tested against the assumptions of user story the project builds without errors unit tests are written and passing tests on devices browsers listed in the issue have passed qa performed issues resolved refactoring completed any configuration or build changes documented documentation updated peer code review performed | 1 |
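The accounting problem in the record above boils down to how many new funds a claim update should require: the deposit already locked in the claim ought to count toward the new bid. A minimal sketch of that rule (an illustration, not LBRY's actual code):

```python
# Sketch of the intended rule: an update should only need the difference
# between the new bid and the deposit already locked in the claim.

def extra_funds_needed(new_bid: float, existing_deposit: float) -> float:
    """LBC of spendable balance required to update a claim."""
    return max(0.0, new_bid - existing_deposit)

# Re-using a 4000 LBC deposit at the same bid needs no extra balance,
# so a 60 LBC wallet should still be able to update:
assert extra_funds_needed(4000.0, 4000.0) == 0.0
# Raising the bid to 4050 would need 50 LBC of new funds:
assert extra_funds_needed(4050.0, 4000.0) == 50.0
```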
442,181 | 12,741,231,488 | IssuesEvent | 2020-06-26 05:25:53 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1595] Claiming troubles | Category: Gameplay Priority: Medium Status: Fixed Week Task | Step to reproduce:
- place store on unowned property:

- claim this store:

- unclaim:

Same with the distribution station, but you can't use it at all, because if you don't have an owner you can't add items:

| 1.0 | [0.9.0 staging-1595] Claiming troubles - Step to reproduce:
- place store on unowned property:

- claim this store:

- unclaim:

Same with the distribution station, but you can't use it at all, because if you don't have an owner you can't add items:

| priority | claiming troubles step to reproduce place store on unowned property claim this store unclaim same with distribution station but you can use it at all because if you don t have an owner you can t add items | 1 |
161,627 | 6,132,412,790 | IssuesEvent | 2017-06-25 01:51:46 | MrFouss/IA51-Crowd-Project | https://api.github.com/repos/MrFouss/IA51-Crowd-Project | closed | Add steering fleeing algorithm | 1 - Ready To Do Priority: Medium Type: Enhancement |
<!---
@huboard:{"order":8.003200480032,"milestone_order":0.9983015290314845,"custom_state":""}
-->
| 1.0 | Add steering fleeing algorithm -
<!---
@huboard:{"order":8.003200480032,"milestone_order":0.9983015290314845,"custom_state":""}
-->
| priority | add steering fleeing algorithm huboard order milestone order custom state | 1 |
652,489 | 21,554,202,121 | IssuesEvent | 2022-04-30 05:48:23 | VoltanFr/memcheck | https://api.github.com/repos/VoltanFr/memcheck | closed | Bug in MarkdownEditor: display width | bug priority-medium complexity-low | In a MarkdownEditor, if you turn on the rendered preview and enter a non-breakable string in the input field so long that it goes beyond the screen width, the preview resizes with a width bigger than the screen's.
It may help to know that this doesn't show up in learn mode: check the difference in the CSS.
Seen in the authoring page on an Android phone, 02/2022. | 1.0 | Bug in MarkdownEditor: display width - In a MarkdownEditor, if you turn on the rendered preview and enter a non-breakable string in the input field so long that it goes beyond the screen width, the preview resizes with a width bigger than the screen's.
It may help to know that this doesn't show up in learn mode: check the difference in the CSS.
Seen in the authoring page on an Android phone, 02/2022. | priority | bug in markdowneditor display width in a markdowneditor if you turn on the rendered preview and enter a non breakable string in the input field so long that it goes beyond the screen width the preview resizes with a width bigger than the screen s it may help to know that this doesn t show up in learn mode check the difference in the css seen in the authoring page on an android phone | 1 |
436,768 | 12,553,512,613 | IssuesEvent | 2020-06-06 22:18:54 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | No document preview even with Imagick | bug priority: medium | **Describe the bug**
Documents have been enabled with the new update; uploads and displays work fine, but there is no live preview. My server provider states that I do have Imagick; however, it seems not to work. I have Imagick v6.7.2.
My hosting is with 20i.
**To Reproduce**
Steps to reproduce the behavior:
1. Upload a pdf
2. click the item
3. Only option is to download
4. No preview is shown
**Expected behavior**
A preview should show with Imagick
| 1.0 | No document preview even with Imagick - **Describe the bug**
Documents have been enabled with the new update; uploads and displays work fine, but there is no live preview. My server provider states that I do have Imagick; however, it seems not to work. I have Imagick v6.7.2.
My hosting is with 20i.
**To Reproduce**
Steps to reproduce the behavior:
1. Upload a pdf
2. click the item
3. Only option is to download
4. No preview is shown
**Expected behavior**
A preview should show with Imagick
| priority | no document preview even with imagick describe the bug documents have been enabled with new update uploads and displays fine but no live preview my server provider states i do have imagick however it seems to not work i have imagick v my hosting is with to reproduce steps to reproduce the behavior upload a pdf click the item only option is to download no preview is shown expected behavior a preview should show with imagick | 1 |
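Generating the kind of first-page preview the record above expects is commonly done through ImageMagick's CLI. A sketch that only builds the command (paths are hypothetical; actually running it also requires Ghostscript for PDF input):

```python
# Sketch: build an ImageMagick command that renders page 1 of a PDF to PNG.
# "input.pdf[0]" selects the first page; paths are hypothetical.

def pdf_preview_cmd(pdf_path: str, png_path: str, density: int = 150) -> list[str]:
    """Return a `convert` argv list rendering the first PDF page as a PNG."""
    return [
        "convert",
        "-density", str(density),  # render resolution in DPI
        f"{pdf_path}[0]",          # first page only
        png_path,
    ]

print(" ".join(pdf_preview_cmd("upload.pdf", "preview.png")))
```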
237,422 | 7,759,955,339 | IssuesEvent | 2018-06-01 02:47:39 | martchellop/Entretenibit | https://api.github.com/repos/martchellop/Entretenibit | closed | Link the Main and Select Pages | enhancement priority: medium | You should be able to do the search in the main page and be redirected to select page with the query the user put in. | 1.0 | Link the Main and Select Pages - You should be able to do the search in the main page and be redirected to select page with the query the user put in. | priority | link the main and select pages you should be able to do the search in the main page and be redirected to select page with the query the user put in | 1 |
47,997 | 2,990,107,740 | IssuesEvent | 2015-07-21 06:57:24 | jayway/rest-assured | https://api.github.com/repos/jayway/rest-assured | closed | Json: prettyPrint() throws exception for HashMap | bug imported Priority-Medium | _From [ir.tim....@gmail.com](https://code.google.com/u/105189091508678878966/) on February 27, 2013 10:11:06_
What steps will reproduce the problem? 1. I try to use prettyPrint() method for Json file that looks like this:
{
"data": [
{
"uid": 10,
"name": "abc"
}
]
} What is the expected output? What do you see instead? java.lang.NullPointerException: Cannot invoke method isArray() on null object.
[org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:77),
org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45),
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42),
org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:32),
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42),
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108),
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112),
groovy.json.JsonOutput.toJson(JsonOutput.groovy:115),
com.jayway.restassured.path.json.JsonPath.prettify(JsonPath.java:555),
com.jayway.restassured.path.json.JsonPath.prettyPrint(JsonPath.java:564), What version of the product are you using? On what operating system? Rest-Assured 1.7.2 Please provide any additional information below. I suppose the reason is that JsonPath for this is represented by HashMap ( json = {java.util.HashMap} {data=[{uid=10, name=abc}]} ).
So in prettify() method JsonOutput.toJson(json) is called, then in this part of the code
} else if (object instanceof Collection ||
object.class.isArray() ||
object.class returns null because it is HashMap ( http://groovy.codehaus.org/FAQ+-+Collections,+Lists,+etc.#FAQ-CollectionsListsetc-WhydoesmyMapsizeormyMapclassreturnnull - maybe explanation here?)
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=218_ | 1.0 | Json: prettyPrint() throws exception for HashMap - _From [ir.tim....@gmail.com](https://code.google.com/u/105189091508678878966/) on February 27, 2013 10:11:06_
What steps will reproduce the problem? 1. I try to use prettyPrint() method for Json file that looks like this:
{
"data": [
{
"uid": 10,
"name": "abc"
}
]
} What is the expected output? What do you see instead? java.lang.NullPointerException: Cannot invoke method isArray() on null object.
[org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:77),
org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45),
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42),
org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:32),
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42),
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108),
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112),
groovy.json.JsonOutput.toJson(JsonOutput.groovy:115),
com.jayway.restassured.path.json.JsonPath.prettify(JsonPath.java:555),
com.jayway.restassured.path.json.JsonPath.prettyPrint(JsonPath.java:564), What version of the product are you using? On what operating system? Rest-Assured 1.7.2 Please provide any additional information below. I suppose the reason is that JsonPath for this is represented by HashMap ( json = {java.util.HashMap} {data=[{uid=10, name=abc}]} ).
So in prettify() method JsonOutput.toJson(json) is called, then in this part of the code
} else if (object instanceof Collection ||
object.class.isArray() ||
object.class returns null because it is HashMap ( http://groovy.codehaus.org/FAQ+-+Collections,+Lists,+etc.#FAQ-CollectionsListsetc-WhydoesmyMapsizeormyMapclassreturnnull - maybe explanation here?)
_Original issue: http://code.google.com/p/rest-assured/issues/detail?id=218_ | priority | json prettyprint throws exception for hashmap from on february what steps will reproduce the problem i try to use prettyprint method for json file that looks like this data uid name abc what is the expected output what do you see instead java lang nullpointerexception cannot invoke method isarray on null object org codehaus groovy runtime nullobject invokemethod nullobject java org codehaus groovy runtime callsite pogometaclasssite call pogometaclasssite java org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java org codehaus groovy runtime callsite nullcallsite call nullcallsite java org codehaus groovy runtime callsite callsitearray defaultcall callsitearray java org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java org codehaus groovy runtime callsite abstractcallsite call abstractcallsite java groovy json jsonoutput tojson jsonoutput groovy com jayway restassured path json jsonpath prettify jsonpath java com jayway restassured path json jsonpath prettyprint jsonpath java what version of the product are you using on what operating system rest assured please provide any additional information below i suppose the reason is that jsonpath for this is represented by hashmap json java util hashmap data so in prettify method jsonoutput tojson json is called then in this part of the code else if object instanceof collection object class isarray object class returns null because it is hashmap maybe explanation here original issue | 1 |
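For comparison, the pretty-printed output the reporter expected for the JSON in the record above looks like this, sketched with Python's standard `json` module rather than REST Assured/Groovy:

```python
import json

# The reporter's document from the record above.
doc = {"data": [{"uid": 10, "name": "abc"}]}

# Pretty-printing here never depends on runtime type introspection,
# which is what tripped up Groovy's `object.class` on a HashMap.
pretty = json.dumps(doc, indent=4)
print(pretty)
```

The Groovy failure itself stems from `map.class` being interpreted as a key lookup on a HashMap (hence `null`); calling `getClass()` explicitly avoids that quirk.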
105,634 | 4,239,437,932 | IssuesEvent | 2016-07-06 09:27:35 | BugBusterSWE/MaaS | https://api.github.com/repos/BugBusterSWE/MaaS | closed | Password recovery | priority:medium Programmer | *Code where the problem is located*:
activity #37
*Problem description*:
Creation of the page for password recovery by a non-authenticated user.
Link task: [https://bugbusters.teamwork.com/tasks/7179453](https://bugbusters.teamwork.com/tasks/7179453) | 1.0 | Password recovery - *Code where the problem is located*:
activity #37
*Problem description*:
Creation of the page for password recovery by a non-authenticated user.
Link task: [https://bugbusters.teamwork.com/tasks/7179453](https://bugbusters.teamwork.com/tasks/7179453) | priority | password recovery code where the problem is located activity problem description creation of the page for password recovery by a non authenticated user link task | 1 |
722,875 | 24,877,011,805 | IssuesEvent | 2022-10-27 20:04:09 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Ability to register for flag change events | kind/enhancement area/docdb priority/medium | Jira Link: [DB-3541](https://yugabyte.atlassian.net/browse/DB-3541)
### Description
Add the ability to Register callbacks for runtime gFlag updates. | 1.0 | [DocDB] Ability to register for flag change events - Jira Link: [DB-3541](https://yugabyte.atlassian.net/browse/DB-3541)
### Description
Add the ability to Register callbacks for runtime gFlag updates. | priority | ability to register for flag change events jira link description add the ability to register callbacks for runtime gflag updates | 1 |
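The feature requested in the record above — registering callbacks for runtime flag updates — is essentially an observer pattern. A language-agnostic sketch in Python (illustrative names only, not YugabyteDB's gFlags API):

```python
# Sketch of an observer-style flag registry: callbacks registered for a
# flag fire whenever its value changes. Names are illustrative only.

from collections import defaultdict
from typing import Any, Callable

class FlagRegistry:
    def __init__(self) -> None:
        self._values: dict[str, Any] = {}
        self._callbacks: dict[str, list[Callable[[Any, Any], None]]] = defaultdict(list)

    def register_callback(self, flag: str, cb: Callable[[Any, Any], None]) -> None:
        """Subscribe cb(old_value, new_value) to updates of one flag."""
        self._callbacks[flag].append(cb)

    def set(self, flag: str, value: Any) -> None:
        """Update a flag at runtime and notify observers on real changes."""
        old = self._values.get(flag)
        self._values[flag] = value
        if old != value:
            for cb in self._callbacks[flag]:
                cb(old, value)

events = []
reg = FlagRegistry()
reg.register_callback("max_clock_skew_usec", lambda old, new: events.append((old, new)))
reg.set("max_clock_skew_usec", 500)
assert events == [(None, 500)]
```

Setting the same value again fires nothing, which keeps callbacks idempotent under repeated config pushes.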
351,832 | 10,524,470,012 | IssuesEvent | 2019-09-30 13:21:46 | conan-community/community | https://api.github.com/repos/conan-community/community | closed | [question] hooks: Eat your own dog food | complex: medium priority: medium stage: queue type: feature | ### Description of Problem, Request, or Question
Hi @conan-community/barbarians
Since Conan offers some nice hooks, I think we could add them to our CI jobs.
Some possible hooks to be included:
- [Conan Center Reviewer](https://github.com/conan-io/hooks/blob/master/hooks/conan-center_reviewer.py)
- [Attribute Checker](https://github.com/conan-io/hooks/blob/master/hooks/attribute_checker.py)
- [Binary Linter](https://github.com/conan-io/hooks/blob/master/hooks/binary_linter.py)
- [Github Updater](https://github.com/conan-io/hooks/blob/master/hooks/github_updater.py)
- [SPDX Checker](https://github.com/conan-io/hooks/blob/master/hooks/spdx_checker.py)
Should we include those hooks?
If yes, should we consider any warning as error?
related issue: https://github.com/bincrafters/community/issues/716
Regards!
| 1.0 | [question] hooks: Eat your own dog food - ### Description of Problem, Request, or Question
Hi @conan-community/barbarians
Since Conan offers some nice hooks, I think we could add them to our CI jobs.
Some possible hooks to be included:
- [Conan Center Reviewer](https://github.com/conan-io/hooks/blob/master/hooks/conan-center_reviewer.py)
- [Attribute Checker](https://github.com/conan-io/hooks/blob/master/hooks/attribute_checker.py)
- [Binary Linter](https://github.com/conan-io/hooks/blob/master/hooks/binary_linter.py)
- [Github Updater](https://github.com/conan-io/hooks/blob/master/hooks/github_updater.py)
- [SPDX Checker](https://github.com/conan-io/hooks/blob/master/hooks/spdx_checker.py)
Should we include those hooks?
If yes, should we consider any warning as error?
related issue: https://github.com/bincrafters/community/issues/716
Regards!
| priority | hooks eat your own dog food description of problem request or question hi conan community barbarians since conan offers some nice hooks i think we could add them to our ci jobs some possible hooks to be included should we include those hooks if yes should we consider any warning as error related issue regards | 1 |
706,844 | 24,285,743,827 | IssuesEvent | 2022-09-28 21:56:50 | projectdiscovery/dnsx | https://api.github.com/repos/projectdiscovery/dnsx | closed | big wordlist file not working. | Priority: Medium Status: Completed Type: Bug | Hello,
When I used best-dns-wordlist.txt from the Assetnote wordlists, I saw dnsx stop working, and the job was automatically killed.

| 1.0 | big wordlist file not working. - Hello,
When I used best-dns-wordlist.txt from the Assetnote wordlists, I saw dnsx stop working, and the job was automatically killed.

| priority | big wordlist file not working hello when i used best dns wordlist txt from assetnote wordlist i saw dnsx not working anymore and auto killed the job | 1 |
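A plausible cause of the kill in the record above is loading the entire multi-million-line wordlist into memory at once. Streaming it line by line keeps memory flat — a generic sketch (dnsx itself is written in Go; this Python illustration is not its actual code):

```python
import io

# Sketch: process a huge wordlist without holding it in memory.
# The in-memory file object below stands in for e.g. best-dns-wordlist.txt.

def stream_words(fp):
    """Yield one word at a time instead of reading the whole file."""
    for line in fp:
        word = line.strip()
        if word:
            yield word

demo = io.StringIO("www\nmail\n\napi\n")
count = sum(1 for _ in stream_words(demo))
print(count)  # → 3
```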
569,574 | 17,015,400,975 | IssuesEvent | 2021-07-02 11:16:10 | open62541/open62541 | https://api.github.com/repos/open62541/open62541 | closed | typeIndex in code | Component: Server Priority: Medium Type: Bug | <!--
!ATTENTION!
Please read the following page carefully and provide us with all the
information requested:
https://github.com/open62541/open62541/wiki/Writing-Good-Issue-Reports
Use Github Markdown to format your text:
https://help.github.com/articles/basic-writing-and-formatting-syntax/
Fill out the sections and checklist below (add text at the end of each line).
!ATTENTION!
--------------------------------------------------------------------------------
-->
## Description
ua_services_attribute.c line 1728
```
const UA_DataType *historyDataType = &UA_TYPES[UA_TYPES_HISTORYDATA];
UA_HistoryDatabase_readFunc readHistory = NULL;
switch(request->historyReadDetails.content.decoded.type->typeIndex)
```
error C2039: 'typeIndex': is not a member of 'UA_DataType'
## Background Information / Reproduction Steps
Used CMake options:
<!--
Include all CMake options here, which you modified or used for your build.
If you are using cmake-gui, go to "Tools > Show my Changes" and paste the content of "Command Line Options"
On the command line use `cmake -L` (or `cmake -LA` if you changed advanced variables)
-->
```bash
cmake -DUA_NAMESPACE_ZERO=<YOUR_OPTION> <ANY_OTHER_OPTIONS> ..
```
## Checklist
Please provide the following information:
- [x] open62541 Version (release number or git tag): master
- [ ] Other OPC UA SDKs used (client or server):
- [ ] Operating system:
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [x] Critical issue
| 1.0 | typeIndex in code - <!--
!ATTENTION!
Please read the following page carefully and provide us with all the
information requested:
https://github.com/open62541/open62541/wiki/Writing-Good-Issue-Reports
Use Github Markdown to format your text:
https://help.github.com/articles/basic-writing-and-formatting-syntax/
Fill out the sections and checklist below (add text at the end of each line).
!ATTENTION!
--------------------------------------------------------------------------------
-->
## Description
ua_services_attribute.c line 1728
```
const UA_DataType *historyDataType = &UA_TYPES[UA_TYPES_HISTORYDATA];
UA_HistoryDatabase_readFunc readHistory = NULL;
switch(request->historyReadDetails.content.decoded.type->typeIndex)
```
error C2039: 'typeIndex': is not a member of 'UA_DataType'
## Background Information / Reproduction Steps
Used CMake options:
<!--
Include all CMake options here, which you modified or used for your build.
If you are using cmake-gui, go to "Tools > Show my Changes" and paste the content of "Command Line Options"
On the command line use `cmake -L` (or `cmake -LA` if you changed advanced variables)
-->
```bash
cmake -DUA_NAMESPACE_ZERO=<YOUR_OPTION> <ANY_OTHER_OPTIONS> ..
```
## Checklist
Please provide the following information:
- [x] open62541 Version (release number or git tag): master
- [ ] Other OPC UA SDKs used (client or server):
- [ ] Operating system:
- [ ] Logs (with `UA_LOGLEVEL` set as low as necessary) attached
- [ ] Wireshark network dump attached
- [ ] Self-contained code example attached
- [x] Critical issue
| priority | typeindex in code attention please read the following page carefully and provide us with all the information requested use github markdown to format your text fill out the sections and checklist below add text at the end of each line attention description ua services attribute c line const ua datatype historydatatype ua types ua historydatabase readfunc readhistory null switch request historyreaddetails content decoded type typeindex error typeindex is not a member of ua datatype background information reproduction steps used cmake options include all cmake options here which you modified or used for your build if you are using cmake gui go to tools show my changes and paste the content of command line options on the command line use cmake l or cmake la if you changed advanced variables bash cmake dua namespace zero checklist please provide the following information version release number or git tag master other opc ua sdks used client or server operating system logs with ua loglevel set as low as necessary attached wireshark network dump attached self contained code example attached critical issue | 1 |
811,925 | 30,306,221,175 | IssuesEvent | 2023-07-10 09:39:15 | jpmorganchase/salt-ds | https://api.github.com/repos/jpmorganchase/salt-ds | closed | Salt ag grid theme doesn't style filter panel | type: bug 🪲 community priority: medium 😠 | ### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
"@salt-ds/ag-grid-theme": "1.1.6"
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel is tighter than before
- Buttons look like default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use below code in one of column def, then open the filter panel using triple dots menu on hovering header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match general Salt design language, button should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co. | 1.0 | Salt ag grid theme doesn't style filter panel - ### Package name(s)
AG Grid Theme (@salt-ds/ag-grid-theme)
### Package version(s)
"@salt-ds/ag-grid-theme": "1.1.6"
### Description
Out of box ag grid filter panel doesn't match overall styling
- Spacing within the panel is tighter than before
- Buttons look like default HTML buttons, instead of Salt ones
Raised by MTK
### Steps to reproduce
Use the code below in one of the column defs, then open the filter panel using the triple-dot menu when hovering over the header
filter: 'agTextColumnFilter',
filterParams: {
buttons: ['reset', 'apply'],
},
https://stackblitz.com/edit/salt-ag-grid-theme-g6m6gj?file=package.json,App.jsx
### Expected behavior
Spacing should match the general Salt design language, and buttons should look like Salt ones
### Operating system
- [X] macOS
- [ ] Windows
- [ ] Linux
- [ ] iOS
- [ ] Android
### Browser
- [X] Chrome
- [ ] Safari
- [ ] Firefox
- [ ] Edge
### Are you a JPMorgan Chase & Co. employee?
- [X] I am an employee of JPMorgan Chase & Co. | priority | salt ag grid theme doesn t style filter panel package name s ag grid theme salt ds ag grid theme package version s salt ds ag grid theme description out of box ag grid filter panel doesn t match overall styling spacing within the panel are tighter than before buttons looks default html buttons instead of salt ones raised by mtk steps to reproduce use below code in one of column def then open the filter panel using triple dots menu on hovering header filter agtextcolumnfilter filterparams buttons expected behavior spacing should match general salt design language button should look like salt ones operating system macos windows linux ios android browser chrome safari firefox edge are you a jpmorgan chase co employee i am an employee of jpmorgan chase co | 1 |
35,840 | 2,793,225,075 | IssuesEvent | 2015-05-11 09:30:26 | bounswe/bounswe2015group3 | https://api.github.com/repos/bounswe/bounswe2015group3 | closed | Searching: Semantic tagging and its importance | auto-migrated Priority-Medium Type-Task | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `umut.afa...@gmail.com` on 21 Feb 2015 at 12:36 | 1.0 | Searching: Semantic tagging and its importance - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
Please use labels and text to provide additional information.
```
Original issue reported on code.google.com by `umut.afa...@gmail.com` on 21 Feb 2015 at 12:36 | priority | searching semantic tagging and its importance what steps will reproduce the problem what is the expected output what do you see instead please use labels and text to provide additional information original issue reported on code google com by umut afa gmail com on feb at | 1 |
695,905 | 23,875,887,977 | IssuesEvent | 2022-09-07 19:00:52 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Do not add WAL entries larger than the amount that can be successfully replicated to tablet peers. | kind/bug area/docdb priority/medium 2.8 Backport Required 2.12 Backport Required 2.14 Backport Required | Jira Link: [DB-3183](https://yugabyte.atlassian.net/browse/DB-3183)
### Description
We have seen cases wherein large WAL entries are persisted on the tablet leader but cannot be successfully replicated to followers, leading to tablets getting into an unhealthy/unusable state. Example customer issue - https://yugabyte.zendesk.com/agent/tickets/3861
The fix in https://phabricator.dev.yugabyte.com/D16842 doesn't seem to handle the case wherein a single WAL entry is larger than the limit.
| 1.0 | [DocDB] Do not add WAL entries larger than the amount that can be successfully replicated to tablet peers. - Jira Link: [DB-3183](https://yugabyte.atlassian.net/browse/DB-3183)
### Description
We have seen cases wherein large WAL entries are persisted on the tablet leader but cannot be successfully replicated to followers, leading to tablets getting into an unhealthy/unusable state. Example customer issue - https://yugabyte.zendesk.com/agent/tickets/3861
The fix in https://phabricator.dev.yugabyte.com/D16842 doesn't seem to handle the case wherein a single WAL entry is larger than the limit.
| priority | do not add wal entries larger than the amount that can be successfully replicated to tablet peers jira link description we have seen cases where in large wal entries are persisted on the tablet leader but cannot be successfully replicated to followers leading to tablets getting into unhealthy unusable state example customer issue the fix in doesn t seem to handle the case where in a single wal is larger than the limit | 1 |
764,388 | 26,798,167,841 | IssuesEvent | 2023-02-01 13:23:25 | patternfly/patternfly-elements | https://api.github.com/repos/patternfly/patternfly-elements | closed | pfe-tabs inset implementation is incorrect | priority: medium bug | Incorrect implementation of inset.
Inset uses a [key, value] set of breakpoints to determine the amount of inset, not individually set as `sm|md|lg...` per tab set.
```
inset={{
default: 'insetNone',
md: 'insetSm',
xl: 'insetLg',
'2xl': 'inset2xl'
}}
``` | 1.0 | pfe-tabs inset implementation is incorrect - Incorrect implementation of inset.
Inset uses a [key, value] set of breakpoints to determine the amount of inset, not individually set as `sm|md|lg...` per tab set.
```
inset={{
default: 'insetNone',
md: 'insetSm',
xl: 'insetLg',
'2xl': 'inset2xl'
}}
``` | priority | pfe tabs inset implementation is incorrect incorrect implementation of inset inset uses a set of breakpoints to determine the amount of inset not individually set as sm md lg per tab set inset default insetnone md insetsm xl insetlg | 1 |
754,412 | 26,385,886,785 | IssuesEvent | 2023-01-12 12:14:55 | DwcJava/engine | https://api.github.com/repos/DwcJava/engine | closed | Introduce `HasDestroy` interface | Change: Medium Priority: High Type: Feature | Introduce a new `HasDestroy` interface with two methods [`destroy`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_destroy.htm) and [`isDestroyed`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_isdestroyed.htm) and implement in all controls.
1. The `destroy` method should check internally if a control is already destroyed.
2. If the method is called on a control which has not been attached to a panel yet, then the creation of the control should be skipped (track with a flag)
| 1.0 | Introduce `HasDestroy` interface - Introduce a new `HasDestroy` interface with two methods [`destroy`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_destroy.htm) and [`isDestroyed`](https://documentation.basis.cloud/BASISHelp/WebHelp/bbjobjects/SysGui/bbjcontrol/bbjcontrol_isdestroyed.htm) and implement in all controls.
1. The `destroy` method should check internally if a control is already destroyed.
2. If the method is called on a control which has not been attached to a panel yet, then the creation of the control should be skipped (track with a flag)
| priority | introduce hasdestroy interface introduce a new hasdestroy interface with two methods and and implement in all controls the destroy method should check internally if a control is already destroyed if the method is called on a control which has not been attached to panel yet then the creation of the control should be skipped track with a flag | 1 |
748,014 | 26,103,637,734 | IssuesEvent | 2022-12-27 10:21:48 | keepers-team/webtlo | https://api.github.com/repos/keepers-team/webtlo | closed | Regulate torrents separately per subsection and per torrent client | medium priority | > I'd like torrent management to be handled separately for different torrent clients
> or not by client, but by some other criterion (come up with one yourselves), as long as torrents can be grouped somehow: these I seed until they reach 3 seeds, and those only when their keeper leaves (and the seed count drops to 0).
> it would also be nice to allow a different regulation for each section\subsection, and if it could be done per torrent client as well, that would be perfect. | 1.0 | Regulate torrents separately per subsection and per torrent client - > I'd like torrent management to be handled separately for different torrent clients
> or not by client, but by some other criterion (come up with one yourselves), as long as torrents can be grouped somehow: these I seed until they reach 3 seeds, and those only when their keeper leaves (and the seed count drops to 0).
> it would also be nice to allow a different regulation for each section\subsection, and if it could be done per torrent client as well, that would be perfect. | priority | regulate torrents separately per subsection and per torrent client i d like torrent management to be handled separately for different torrent clients or not by client but by some other criterion come up with one yourselves as long as torrents can be grouped somehow these i seed until they reach seeds and those only when their keeper leaves and the seed count drops to it would also be nice to allow a different regulation for each section subsection and if it could be done per torrent client as well that would be perfect | 1 |
399,205 | 11,744,562,464 | IssuesEvent | 2020-03-12 08:01:38 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1447] Craft UI: need to check icons' size | Priority: Medium Status: Fixed | 1. Now this order icons look like eggs with pixels. We need to make them with proper size


2. There is some artifact

3. Can we change this effect for selected order?
Maybe some border effect, it's too much like this

| 1.0 | [0.9.0 staging-1447] Craft UI: need to check icons' size - 1. Now this order icons look like eggs with pixels. We need to make them with proper size


2. There is some artifact

3. Can we change this effect for selected order?
Maybe some border effect, it's too much like this

| priority | craft ui need to check icons size now this order icons look like eggs with pixels we need to make them with proper size there is somw artifact can we change this effect for selected order maybe some border effect its too much like this | 1 |
40,822 | 2,868,944,316 | IssuesEvent | 2015-06-05 22:06:44 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | pub starting to feel slow | enhancement Fixed Priority-Medium | <a href="https://github.com/dgrove"><img src="https://avatars.githubusercontent.com/u/2108507?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [dgrove](https://github.com/dgrove)**
_Originally opened as dart-lang/sdk#9027_
----
I've attached a timestamp log from a pub update on the web_ui package (version 0.4.1+3). There were no packages downloaded, but this is definitely slower than it should be (8 seconds with logging disabled, from my fast desktop on the fast Google network).
A non-verbose log looks like this:
[2013-03-08 16:45:10] Resolving dependencies...
[2013-03-08 16:45:18] Dependencies updated!
[2013-03-08 16:45:18] Some packages that were installed are not compatible with your SDK version 0.1.2+0.r19695.dgrove and may not work:
[2013-03-08 16:45:18] - 'web_ui' requires >=0.4.0+0.r19406
[2013-03-08 16:45:18] - 'html5lib' requires >=0.3.7+5.r18669
[2013-03-08 16:45:18] - 'csslib' requires >=0.3.7+5.r18717
[2013-03-08 16:45:18] - 'source_maps' requires >=0.3.7+5.r18669
[2013-03-08 16:45:18]
[2013-03-08 16:45:18] You may be able to resolve this by upgrading to the latest Dart SDK
[2013-03-08 16:45:18] or adding a version constraint to use an older version of a package.
______
**Attachment:**
[pub.out.detailed](https://storage.googleapis.com/google-code-attachments/dart/issue-9027/comment-0/pub.out.detailed) (221.89 KB) | 1.0 | pub starting to feel slow - <a href="https://github.com/dgrove"><img src="https://avatars.githubusercontent.com/u/2108507?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [dgrove](https://github.com/dgrove)**
_Originally opened as dart-lang/sdk#9027_
----
I've attached a timestamp log from a pub update on the web_ui package (version 0.4.1+3). There were no packages downloaded, but this is definitely slower than it should be (8 seconds with logging disabled, from my fast desktop on the fast Google network).
A non-verbose log looks like this:
[2013-03-08 16:45:10] Resolving dependencies...
[2013-03-08 16:45:18] Dependencies updated!
[2013-03-08 16:45:18] Some packages that were installed are not compatible with your SDK version 0.1.2+0.r19695.dgrove and may not work:
[2013-03-08 16:45:18] - 'web_ui' requires >=0.4.0+0.r19406
[2013-03-08 16:45:18] - 'html5lib' requires >=0.3.7+5.r18669
[2013-03-08 16:45:18] - 'csslib' requires >=0.3.7+5.r18717
[2013-03-08 16:45:18] - 'source_maps' requires >=0.3.7+5.r18669
[2013-03-08 16:45:18]
[2013-03-08 16:45:18] You may be able to resolve this by upgrading to the latest Dart SDK
[2013-03-08 16:45:18] or adding a version constraint to use an older version of a package.
______
**Attachment:**
[pub.out.detailed](https://storage.googleapis.com/google-code-attachments/dart/issue-9027/comment-0/pub.out.detailed) (221.89 KB) | priority | pub starting to feel slow issue by originally opened as dart lang sdk i ve attached a timestamp log from a pub update on the web ui package version there were no packages downloaded but this is definitely slower than it should be seconds with logging disabled from my fast desktop on the fast google network a non verbose log looks like this resolving dependencies dependencies updated some packages that were installed are not compatible with your sdk version dgrove and may not work web ui requires gt requires gt csslib requires gt source maps requires gt you may be able to resolve this by upgrading to the latest dart sdk or adding a version constraint to use an older version of a package attachment kb | 1 |
406,077 | 11,886,680,612 | IssuesEvent | 2020-03-27 22:38:44 | CMPUT301W20T01/boost | https://api.github.com/repos/CMPUT301W20T01/boost | closed | UC 02.06.01 notified if ride offer accepted | priority: medium risk: medium size: 1 | **Partial User Story:**
_US 05.03.01_
As a driver, I want to be notified if my ride offer was accepted.
**Rationale:**
- To make sure ride offer is accepted once
- To make sure ride request is accepted at the same time
- To prevent other drivers to see the accepted request
---
Notes:
Might implement MVC
Might update current status for both ride offers and ride request
Might update DB of available active request | 1.0 | UC 02.06.01 notified if ride offer accepted - **Partial User Story:**
_US 05.03.01_
As a driver, I want to be notified if my ride offer was accepted.
**Rationale:**
- To make sure ride offer is accepted once
- To make sure ride request is accepted at the same time
- To prevent other drivers to see the accepted request
---
Notes:
Might implement MVC
Might update current status for both ride offers and ride request
Might update DB of available active request | priority | uc notified if ride offer accepted partial user story us as a driver i want to be notified if my ride offer was accepted rationale to make sure ride offer is accepted once to make sure ride request is accepted at the same time to prevent other drivers to see the accepted request notes might implement mvc might update current status for both ride offers and ride request might update db of available active request | 1 |
303,873 | 9,311,391,897 | IssuesEvent | 2019-03-25 21:15:04 | forpdi/forpdi | https://api.github.com/repos/forpdi/forpdi | closed | Required-field alert is not shown when registering a risk | ForRisco enhancement mediumpriority | When I am registering a risk and click the save button without having filled in the fields, the system does not highlight the required fields in red.

 | 1.0 | Required-field alert is not shown when registering a risk - When I am registering a risk and click the save button without having filled in the fields, the system does not highlight the required fields in red.

 | priority | required field alert is not shown when registering a risk when i am registering a risk and click the save button without having filled in the fields the system does not highlight the required fields in red | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.