| added | created | id | metadata | source | text |
|---|---|---|---|---|---|
2025-04-01T06:37:26.129356
| 2022-12-21T13:04:24
|
1506268098
|
{
"authors": [
"sinlerdev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2325",
"repo": "Plothan/Vinum",
"url": "https://github.com/Plothan/Vinum/issues/10"
}
|
gharchive/issue
|
Add namespaces to Groups
Currently, tables that are passed to be tied to a key are tied as singular states, as opposed to creating a new namespace.
Namespaces are a very powerful feature for the centralized-state paradigm, since you sometimes have to divide states into places according to their relevance. However, how can we explicitly define that a key is a namespace key rather than a normal one? After all, there is nothing that helps developers know whether a key is a namespace key or a normal one.
However, namespaces can also be created by nesting groups and treating each group as a new namespace, which can look something like this (this depends on #9):
local centerGroup = Group({
Namespace1 = Group({
key1 = "hi"
}, AlwaysTrue)
}, AlwaysTrue)
This instead looks like a far superior solution to me, as it avoids implementing a new feature that risks hurting code understandability. Plus, we can actually attach a specific processor to each namespace!
As mentioned above, namespaces can be created as groups, which allows setting a new processor for each namespace, and it avoids designing a new API for native namespaces.
This is rejected.
|
2025-04-01T06:37:26.147859
| 2023-10-30T02:44:07
|
1967359962
|
{
"authors": [
"Eliauk-TiAmo",
"ej0cl6"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2326",
"repo": "PlusLabNLP/DEGREE",
"url": "https://github.com/PlusLabNLP/DEGREE/issues/19"
}
|
gharchive/issue
|
ACE-zh
Hello, dear author.
I encountered a problem with Chinese data processing. I tried to use bash ./scripts/process_ace05ep.sh to change the English content to Chinese, but an error occurred as shown in the following figure.
I have successfully preprocessed the English data. Can you provide some help? Thank you very much.
Hi, thanks for your interest in our work. For this work, we do not consider Chinese data. If you are interested in this part, you can check another work and its scripts: https://github.com/PlusLabNLP/X-Gear/tree/main/preprocessing
|
2025-04-01T06:37:26.283240
| 2015-07-23T19:07:49
|
96884962
|
{
"authors": [
"bradsokol",
"greatestape"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2327",
"repo": "Points/PyLCP",
"url": "https://github.com/Points/PyLCP/pull/44"
}
|
gharchive/pull-request
|
Support Python 2.7, 3.3 and 3.4
Uses the future package to support Python 2.7, 3.3 and 3.4 from a single code base.
Also tested with LCP code base with both unit and integration tests using Python 2.7.
GitHub says this branch has conflicts.
The conflict is with #43, which is also unmerged.
Not sure of the GitHub way of dealing with this. Accept PR #43, and then I can sort out the conflict and push again to this PR?
Done!
The decrease in coverage is due to Python 2/3 handling such as try/except on imports to handle differences in package names.
Let me know what you'd like me to do with this PR.
Ideally we'd be able to make coveralls look at the aggregate coverage across all different python versions, but I don't think that needs to block this pull request.
|
2025-04-01T06:37:26.320267
| 2016-02-16T13:49:35
|
133986351
|
{
"authors": [
"dariuszseweryn",
"mzgreen",
"uKL"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2328",
"repo": "Polidea/RxAndroidBle",
"url": "https://github.com/Polidea/RxAndroidBle/pull/3"
}
|
gharchive/pull-request
|
Simple BleClient mocking
RxBleClient can now be mocked like this:
rxBleClient = new RxBleClientMock.Builder()
.deviceMacAddress("AA:BB:CC:DD:EE:FF")
.deviceName("TestDevice")
.rssi(42)
.rxBleDeviceServices(
new RxBleClientMock.ServicesBuilder()
.addService(
UUID.fromString("00001234-0000-0000-8000-000000000000"),
new RxBleClientMock.CharacteristicsBuilder()
.addCharacteristic(
UUID.fromString("00002a29-0000-1000-8000-00805f9b34fb"),
"SomeData".getBytes(),
new RxBleClientMock.DescriptorsBuilder()
.addDescriptor(UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"), "SomeDescriptor".getBytes())
.build()
).build()
).build()
).build();
Code review fixes applied, now mocking client looks like this:
rxBleClient = new RxBleClientMock.Builder()
.deviceMacAddress("AA:BB:CC:DD:EE:FF")
.deviceName("TestDevice")
.scanRecord("ScanRecord".getBytes())
.rssi(42)
.addService(
UUID.fromString("00001234-0000-0000-8000-000000000000"),
new RxBleClientMock.CharacteristicsBuilder()
.addCharacteristic(
UUID.fromString("00002a29-0000-1000-8000-00805f9b34fb"),
"CharacteristicData".getBytes(),
new RxBleClientMock.DescriptorsBuilder()
.addDescriptor(
UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"),
"DescriptorData".getBytes()
).build()
).build()
).build();
Please verify again.
Looks good to me. What do you think @dariuszseweryn?
I'm not really convinced that mocking functionality should be introduced in this project. Maybe some kind of extension would be better (like rxjava and rxandroid -> rxandroidble and rxandroidblemock).
This is added as a separate module. If someone wants to use it, they will need to add an additional dependency (exactly like MockWebServer from OkHttp).
Please configure publishing settings in gradle.properties.
Fixes have been applied.
Good points @dariuszseweryn. Thanks. Should be ok now.
I've implemented mocking characteristic notifications. API looks like this:
rxBleClient = new RxBleClientMock.Builder()
//....
.notificationSource(characteristic_UUID, observable)
.build();
You can pass a subject as a notification source; then, when you call getNotification(characteristic_UUID), you will get a notification every time you call onNext() on your subject. See the updated tests.
@dariuszseweryn could you check the logic in createCharacteristicNotificationObservable especially the cache and share operators at the end? I've mimicked the behavior of RxBleConnectionImpl implementation but it's quite confusing so I'm not completely sure if I did it right.
I've implemented mocking connection status. API looks like this:
rxBleClient = new RxBleClientMock.Builder()
//....
.connectionStateSource(Subject<RxBleConnection.RxBleConnectionState>)
.build();
You can then subscribe to rxBleDevice.getConnectionState() to get notifications about the connection state. The subject passed as a parameter allows changing the current connection state - see the updated test.
Such functionality doesn't make sense. It will be changed in the future. You can ignore it.
I've added support for simulating device disconnection. Now you will get a CONNECTED status when you subscribe to RxBleConnection, and you can simulate a situation where the device has disconnected itself. You can do it by calling the rxBleClient.disconnect() method. The state will change to DISCONNECTED and a BleDisconnectedException error will be emitted.
LGTM, merging.
|
2025-04-01T06:37:26.340146
| 2024-12-02T15:47:26
|
2712416105
|
{
"authors": [
"arekgotfryd",
"max3poloski"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2329",
"repo": "Polymarket/agents",
"url": "https://github.com/Polymarket/agents/issues/28"
}
|
gharchive/issue
|
python scripts/python/cli.py does not work
Describe the bug
I don't really know what the problem is, but "python scripts/python/cli.py" doesn't work.
Look: (.venv) C:\Users\ALFA\Downloads\agents> python scripts/python/cli.py
Traceback (most recent call last):
File "C:\Users\ALFA\Downloads\agents\scripts\python\cli.py", line 4, in
from agents.polymarket.polymarket import Polymarket
ModuleNotFoundError: No module named 'agents'
What am I doing wrong?
Run
export PYTHONPATH="."
and you should be fine.
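The fix works because Python resolves absolute imports against sys.path, and entries from PYTHONPATH are added near the front of sys.path at interpreter startup; running the script from the repo root with PYTHONPATH="." therefore makes the repo's agents package importable. Here is a minimal, self-contained Python sketch of the failure and the fix (the package name below is hypothetical, standing in for the repo's agents package):

```python
# Standalone sketch (not code from the Polymarket/agents repo): why a script
# run from inside a repo raises ModuleNotFoundError, and how putting the
# project root on sys.path (what `export PYTHONPATH="."` does when run from
# the repo root) fixes it.
import os
import sys
import tempfile

root = tempfile.mkdtemp()

# Create a minimal package at the simulated project root. The name is
# hypothetical; in the issue it is the repo's `agents` package.
pkg_dir = os.path.join(root, "agents_demo_pkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("GREETING = 'hello from the project root'\n")

# Without the project root on sys.path, the import fails, which mirrors
# the ModuleNotFoundError reported above.
try:
    import agents_demo_pkg  # noqa: F401
    found_before = True
except ModuleNotFoundError:
    found_before = False

# Prepending the project root mirrors the effect of PYTHONPATH=".": its
# entries are placed ahead of the installed-package directories.
sys.path.insert(0, root)
import agents_demo_pkg

print(found_before, agents_demo_pkg.GREETING)
```

On Windows (as in the report above), the PowerShell/cmd equivalent of the suggested export is setting the PYTHONPATH environment variable to "." before invoking the script.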
|
2025-04-01T06:37:26.346267
| 2020-06-22T01:57:43
|
642692916
|
{
"authors": [
"justinfagnani",
"timonson"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2330",
"repo": "Polymer/lit-html",
"url": "https://github.com/Polymer/lit-html/issues/1175"
}
|
gharchive/issue
|
Question: Does html take fragments as arguments?
I could not find the answer anywhere. Unfortunately, the type of the values parameter of TemplateResult is values: readonly unknown[], which is not really self-explanatory. Therefore I am not sure whether I can pass DOM fragments to the html function, although I know that it works. But I would like to ask if this could somehow cause problems.
Example:
function createHtmlTemplate(html: string) {
var template = document.createElement("template")
template.innerHTML = html.trim()
return template
}
html`${createHtmlTemplate(`<p>Hello World</p>`).content}`
Is this a valid way to use html?
Thank you!
You can use DOM Nodes as values as documented in the supported data types section: https://lit-html.polymer-project.org/guide/template-reference#supported-data-types-for-text-bindings
Passing a Node will insert that node into the DOM, so make sure that the semantics of that are what you want. In the case of a <template> element, inserting the element itself will not cause anything to render because the template contents are stored as a separate document fragment. You probably want to return template.content.
You seem to be basically re-implementing unsafeHTML without the dirty-checking, though. What's your use case?
@justinfagnani thank you. I have one quick follow-up question before I close this issue: Do SVGs fall under DOM Nodes here?
|
2025-04-01T06:37:26.381157
| 2016-11-21T18:45:35
|
190802631
|
{
"authors": [
"TomK"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2331",
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/issues/4167"
}
|
gharchive/issue
|
Unable to query slotted nodes created by dom-repeat
When slotting a dom-repeat, I am unable to query the generated items.
Possible regression of https://github.com/Polymer/polymer/issues/2276
Live Demo
http://codepen.io/oridan/pen/JbWrxv?editors=1000
Expected Results
console log should show something like:
[x-items] [slot]
Actual Results
console log shows
[] [slot]
Browsers Affected
[x] Chrome
Versions
Polymer: 2.0-preview
webcomponents: v1
Closing after discussion in Slack. Thanks to @arthurevans' suggestion of stamping elements into light DOM as follows:
_attachDom: function (dom)
{
this.appendChild(dom);
},
|
2025-04-01T06:37:26.383470
| 2016-02-18T15:38:28
|
134613918
|
{
"authors": [
"azakus",
"devinivy",
"kaste"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2332",
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/pull/3439"
}
|
gharchive/pull-request
|
Refactorings around how computational expressions get their arguments
Do not set splices property on userland arrays. Fixes #2415, #2350
Use value sent through the notification system (#3179)
Don't mutate splices sent into userland. As reported #3239
Should fix #3179
I've added three more commits on top of that initial commit. It seemed appropriate to not open a PR for each of them. Anyhow, please review the commits individually. Overall it's a concise change, @azakus give it a P1 ;-)
This is a nice change :+1:
LGTM from @kevinpschaaf
|
2025-04-01T06:37:26.385170
| 2016-09-12T18:56:55
|
176455506
|
{
"authors": [
"kevinpschaaf",
"sorvell"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2333",
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/pull/3944"
}
|
gharchive/pull-request
|
Fixes #3938
Fixes #3938
Separate application of listeners and host attributes.
Adds _applyListeners and _ensureAttributes as override points.
The legacy Polymer implementation ensures attributes on the subclass before the superclass, so that a subclass gets first crack at what attribute value should exist (this matches the behavior of Polymer 1.0).
LGTM
|
2025-04-01T06:37:26.386668
| 2017-05-24T20:16:59
|
231158619
|
{
"authors": [
"maxknee"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2334",
"repo": "Polymer/web-component-tester",
"url": "https://github.com/Polymer/web-component-tester/pull/554"
}
|
gharchive/pull-request
|
Ensure that express and done are getting returned in prepare:webserver
[X] CHANGELOG.md has been updated
@justinfagnani Can you take a look at this please? Thanks!
|
2025-04-01T06:37:26.400485
| 2019-05-22T18:48:25
|
447289303
|
{
"authors": [
"lindner"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2335",
"repo": "PolymerLabs/arcs",
"url": "https://github.com/PolymerLabs/arcs/pull/3060"
}
|
gharchive/pull-request
|
Use CollectionStorageProvider for handle operations
Could be generalized to something more generic in the future.
Avoids any
Fixes #2340
Any issues with this?
|
2025-04-01T06:37:26.416822
| 2023-12-05T12:44:46
|
2026112876
|
{
"authors": [
"prashantasdeveloper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2336",
"repo": "PolymeshAssociation/polymesh-sdk",
"url": "https://github.com/PolymeshAssociation/polymesh-sdk/pull/1099"
}
|
gharchive/pull-request
|
feat: 🎸 Add procedures to clear and remove a MetadataEntry
Description
Add procedure to clear an Asset Metadata value
Add procedure to remove any Local Asset Metadata
Also adds an isModifiable method to MetadataEntry that returns whether the metadata entry can be modified
Breaking Changes
NA
JIRA Link
DA-885, DA-950, DA-951
Checklist
[ ] Updated the Readme.md (if required) ?
:tada: This PR is included in version 23.0.0-alpha.37 :tada:
The release is available on:
npm package (@alpha dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-alpha.1 :tada:
The release is available on:
npm package (@alpha dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-confidential-assets.1 :tada:
The release is available on:
npm package (@confidential-assets dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-beta.1 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:26.422434
| 2024-11-29T18:49:36
|
2705923247
|
{
"authors": [
"polymath-eric",
"prashantasdeveloper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2337",
"repo": "PolymeshAssociation/polymesh-sdk",
"url": "https://github.com/PolymeshAssociation/polymesh-sdk/pull/1387"
}
|
gharchive/pull-request
|
style: 💄 remove unused isV6 internal function
Description
Breaking Changes
JIRA Link
Checklist
[ ] Updated the Readme.md (if required) ?
/fast-forward
:tada: This PR is included in version 27.0.0-beta.2 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 27.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:37:26.425793
| 2024-09-10T20:57:57
|
2517797038
|
{
"authors": [
"polymath-eric",
"prashantasdeveloper"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2338",
"repo": "PolymeshAssociation/polymesh-subquery",
"url": "https://github.com/PolymeshAssociation/polymesh-subquery/pull/255"
}
|
gharchive/pull-request
|
feat: 🎸 ensure one vote per signer for multiSig proposals
Description
When a multiSig signer changes their vote, the vote record will now be updated instead of a second vote being inserted. Proposal approval/rejection counts will now be decremented in this case.
Breaking Changes
JIRA Link
DA-1289
Checklist
[ ] Updated the Readme.md (if required) ?
Draft for now since I still need to verify the behavior when a signer approves a proposal, but is removed before the proposal is executed.
I merged in 7.x into settlements-v2 (since the v2 one had merge conflicts with alpha). As a result when base branch was updated, I rebased this PR onto settlements-v2 to make it up to date @polymath-eric
/fast-forward
|
2025-04-01T06:37:26.434196
| 2022-09-06T13:09:42
|
1363263530
|
{
"authors": [
"F-OBrien",
"KisIrene",
"sgurin",
"yzrbl"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2339",
"repo": "PolymeshAssociation/polymesh-wallet",
"url": "https://github.com/PolymeshAssociation/polymesh-wallet/issues/249"
}
|
gharchive/issue
|
Cannot read properties of undefined (reading 'creator')
Hi, guys. Got an error.
And I think that is the reason why web3AccountsSubscribe is not working via @polkadot/extension-dapp, so I cannot handle switching between networks.
Could you please review?
@sgurin Currently the testnet runtime is on v5.0.2, while mainnet is on v4.1.2. The version of the wallet extension on the chrome store is not fully compatible with v5.0.2, as seen from your error.
This issue will be addressed in the next release. There is a development release of the wallet in the chrome store that will allow you to use testnet without this wallet error. See https://chrome.google.com/webstore/detail/polymesh-wallet/ihppiagplceklfpgloomoiehdjikkacp
IF YOU DO DECIDE TO USE THE DEVELOPMENT WALLET IT SHOULD ONLY BE USED WITH TESTNET
Ensure you disable or remove the mainnet wallet when using the dev wallet to avoid conflicts between the two extensions
Still the same issue on Development Extension Version
That's because the development wallet is not compatible with the current mainnet runtime. You need to use the development wallet with Testnet and the official release with Mainnet until the runtime on mainnet is updated to match testnet, at which point the official wallet will also be updated.
Thanks a lot
Hi there, Robinland dev lead here. We are an unofficial partner of Polymath/Polymesh and a friend of Vincent's, as well as partner of the tech solution team that sgurin@ is a part of.
Assuming this issue will be fully addressed in the next release, may I ask for a rough ETA for the next release, just so that we can better plan our launches that integrate with the Polymesh SDK? Thank you very much!
@yzrbl The mainnet update is on hold pending Ledger approval and the release of an updated hardware wallet app in Ledger Live. The official release on their end is taking longer than we expected, and the latest information is that it still may not be approved this month.
If you've any general questions about integration you can reach out to us on the Polymesh discord server.
@F-OBrien @yzrbl Hello guys. Could you let me know whether this issue with Ledger Live has been resolved and updated?
@KisIrene Yes. The Polymesh Ledger app was updated a couple of months ago. It can be downloaded from Ledger Live. The latest Ledger app version is v3.5.000003.0.
Thank you so much!!!
|
2025-04-01T06:37:26.452802
| 2024-01-21T17:31:18
|
2092684006
|
{
"authors": [
"PoojeshShetty"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2340",
"repo": "PoojeshShetty/chat-app-fe",
"url": "https://github.com/PoojeshShetty/chat-app-fe/issues/1"
}
|
gharchive/issue
|
Setup the base of the front end repo
To set up the base of the front-end repo, we would need to do the following things:
[x] Set up the docs folder where we will be noting down each decision
[x] Set up the tailwind css
[x] Set up eslint and prettier
[ ] Finalize with the wireframe / design for the application
Finalizing wireframe in a separate task
|
2025-04-01T06:37:26.453776
| 2023-07-27T14:51:49
|
1824549538
|
{
"authors": [
"seniorm0ment",
"starry-shivam"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2341",
"repo": "Pool-Of-Tears/GreenStash",
"url": "https://github.com/Pool-Of-Tears/GreenStash/issues/49"
}
|
gharchive/issue
|
[Feature Request] Ability To Hide "No Deadline Set" On Widget
An option to simply hide the "no deadline set" text on the widget when no deadline is set, would be nice. Just for a cleaner look.
Seems like a good idea, planning to work on widgets soon, will do it then
|
2025-04-01T06:37:26.582964
| 2023-11-03T07:07:29
|
1975566934
|
{
"authors": [
"anomit",
"xadahiya"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2342",
"repo": "PowerLoom/pooler",
"url": "https://github.com/PowerLoom/pooler/issues/52"
}
|
gharchive/issue
|
Integrate IPFS and web3 storage uploads
Is your feature request related to a problem?
Presently, snapshotters send the entire contents of each snapshot to the payload commit service in audit protocol over RabbitMQ. This is a huge overhead considering that this can take place for thousands of projects per epoch. This often causes high resource usage when there is a burst of large snapshots that are computed.
Describe the solution you'd like
Once the snapshots are computed and built, upload them from within the snapshot and aggregation workers themselves.
Describe alternatives you've considered
NA
Additional context
NA
Implemented in #53 and #55, closing.
|
2025-04-01T06:37:26.592309
| 2018-06-15T07:57:16
|
332679731
|
{
"authors": [
"PlagueHO",
"codecov-io",
"johlju"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2343",
"repo": "PowerShell/CertificateDsc",
"url": "https://github.com/PowerShell/CertificateDsc/pull/140"
}
|
gharchive/pull-request
|
Moved Code of Conduct to separate file - Fixes #139
Pull Request (PR) description
Moved Code of Conduct to separate file.
This Pull Request (PR) fixes the following issues:
Fixes #139
Task list:
[x] Change details added to Unreleased section of CHANGELOG.md?
[x] Added/updated documentation, comment-based help and descriptions in .schema.mof files where appropriate?
[ ] Examples appropriately updated?
[ ] New/changed code adheres to Style Guidelines?
[ ] Unit and (optional) Integration tests created/updated where possible?
@Johlju - would you mind reviewing? Should be just standard Code of Conduct change.
This change is
Codecov Report
Merging #140 into dev will increase coverage by <1%.
The diff coverage is 95%.
@@           Coverage Diff            @@
##             dev    #140    +/-   ##
======================================
+ Coverage     94%     94%    +<1%
======================================
  Files          5       5
  Lines        519     520      +1
  Branches       2       1      -1
======================================
+ Hits         491     493      +2
  Misses        26      26
+ Partials       2       1      -1
@johlju - doh! Yes I did. Forgot I had another outstanding branch on this repo! Will fix it tonight.
Actually, I'll fix up this one once #137 has been reviewed and merged.
This one should be good to go now @johlju - sorry about the mess up!
No worries - it happens :smiley:
Reviewed 3 of 3 files at r1.
Review status: :shipit: complete! all files reviewed, all discussions resolved
Comments from Reviewable
|
2025-04-01T06:37:26.603440
| 2021-07-07T14:22:03
|
938948589
|
{
"authors": [
"PaulHigin",
"jmcadams-r7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2344",
"repo": "PowerShell/PowerShell",
"url": "https://github.com/PowerShell/PowerShell/issues/15735"
}
|
gharchive/issue
|
Can't use New-PSSession to "https://ps.compliance.protection.outlook.com/powershell-liveid/"
Prerequisites
[X] Write a descriptive title.
[X] Make sure you are able to repro it on the latest released version
[X] Search the existing issues.
[X] Refer to the FAQ.
[X] Refer to Differences between Windows PowerShell 5.1 and PowerShell.
Steps to reproduce
$Username =
$Password =
$SecPassword = ConvertTo-SecureString $Password -AsPlainText -Force
$Office365URI = "https://ps.compliance.protection.outlook.com/powershell-liveid/"
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri $Office365URI -Credential $Credentials -Authentication Basic -AllowRedirection
Expected behavior
$Session is a valid session in exchange online
Actual behavior
New-PSSession: /powershell/Connection.ps1:19
Line |
19 | $Session = New-PSSession -ConfigurationName Microsoft.Exchange -Conne …
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| [ps.compliance.protection.outlook.com] Connecting to remote
| server ps.compliance.protection.outlook.com failed with the
| following error message : MI_RESULT_FAILED For more
| information, see the about_Remote_Troubleshooting Help topic.
Environment data
Name Value
---- -----
PSVersion 7.1.0
PSEdition Core
GitCommitId 7.1.0
OS Darwin 19.6.0 Darwin Kernel Version 19.6.0: Thu May 6 00:48:39 PDT 2021; root:xnu-6153.141.33~1/RELEASE_X86_64
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion <IP_ADDRESS>
WSManStackVersion 3.0
Visuals
NA
Some additional information. This worked up until Monday (July 5th 2021). We had this script suddenly fail across several organizations and we're not sure what changed.
WG-Remoting:
This ended up being a problem with Docker. I couldn't tell you the root cause. I had to uninstall and reinstall Docker to get it to work; a simple reset of Docker wasn't enough.
|
2025-04-01T06:37:26.614571
| 2024-06-27T07:51:34
|
2377427215
|
{
"authors": [
"SteveL-MSFT",
"kborowinski",
"mklement0"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2345",
"repo": "PowerShell/PowerShell",
"url": "https://github.com/PowerShell/PowerShell/issues/23993"
}
|
gharchive/issue
|
WindowsCompatibility module fails to import on PS 7.5.0-preview.3
Prerequisites
[X] Write a descriptive title.
[X] Make sure you are able to repro it on the latest released version
[X] Search the existing issues.
[X] Refer to the FAQ.
[X] Refer to Differences between Windows PowerShell 5.1 and PowerShell.
Steps to reproduce
WindowsCompatibility module fails to import on PS 7.5.0-preview.3 due to an extra trailing dot in the namespace in the WindowsCompatibility.psm1 file:
using namespace System.Management.Automation.
This works on PowerShell 7.4.3, so I guess we should allow for trailing dots in PS 7.5.0 as well.
Expected behavior
I should be able to import WindowsCompatibility module
Actual behavior
Import-Module WindowsCompatibility -Force
ParserError: The specified namespace in the 'using' statement contains invalid characters.
Import-Module: The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory.
Error details
Exception :
Type : System.Management.Automation.PSInvalidOperationException
ErrorRecord :
Exception :
Type : System.Management.Automation.ParentContainsErrorRecordException
Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program
Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory.
HResult : -2146233087
TargetObject : WindowsCompatibility
CategoryInfo : ResourceUnavailable: (WindowsCompatibility:String) [], ParentContainsErrorRecordException
FullyQualifiedErrorId : Modules_ModuleFileNotFound
TargetSite :
Name : LoadModuleManifest
DeclaringType : [Microsoft.PowerShell.Commands.ModuleCmdletBase]
MemberType : Method
Module : System.Management.Automation.dll
Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program
Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory.
InnerException :
Type : System.IO.FileNotFoundException
Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program
Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory.
HResult : -2147024894
Source : System.Management.Automation
HResult : -2146233079
StackTrace :
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleManifest(String moduleManifestPath, ExternalScriptInfo manifestScriptInfo, Hashtable data, Hashtable localizedData,
ManifestProcessingFlags manifestProcessingFlags, Version minimumVersion, Version maximumVersion, Version requiredVersion, Nullable`1 requiredModuleGuid, ImportModuleOptions& options,
Boolean& containedErrors) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 3120
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleManifest(String moduleManifestPath, ExternalScriptInfo manifestScriptInfo, Hashtable data, Hashtable localizedData,
ManifestProcessingFlags manifestProcessingFlags, Version minimumVersion, Version maximumVersion, Version requiredVersion, Nullable`1 requiredModuleGuid, ImportModuleOptions& options,
Boolean& containedErrors) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 3139
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModule(PSModuleInfo parentModule, String fileName, String moduleBase, String prefix, SessionState ss, Object privateData,
ImportModuleOptions& options, ManifestProcessingFlags manifestProcessingFlags, Boolean& found, Boolean& moduleFileFound) in
D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 5630
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingExtensions(PSModuleInfo parentModule, String moduleName, String fileBaseName, String extension, String moduleBase, String
prefix, SessionState ss, ImportModuleOptions options, ManifestProcessingFlags manifestProcessingFlags, Boolean& found, Boolean& moduleFileFound) in
D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 5505
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingMultiVersionModuleBase(String moduleBase, ManifestProcessingFlags manifestProcessingFlags, ImportModuleOptions
importModuleOptions, Boolean& found) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 455
at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingModulePath(PSModuleInfo parentModule, IEnumerable`1 modulePath, String name, SessionState ss, ImportModuleOptions options,
ManifestProcessingFlags manifestProcessingFlags, PSModuleInfo& module) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 381
at Microsoft.PowerShell.Commands.ImportModuleCommand.ImportModule_LocallyViaName(ImportModuleOptions importModuleOptions, String name) in
D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ImportModuleCommand.cs:line 824
TargetObject : WindowsCompatibility
CategoryInfo : ResourceUnavailable: (WindowsCompatibility:String) [Import-Module], PSInvalidOperationException
FullyQualifiedErrorId : Modules_ModuleFileNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand
InvocationInfo :
MyCommand : Import-Module
ScriptLineNumber : 1
OffsetInLine : 1
HistoryId : 11
Line : ipmo WindowsCompatibility -Force
Statement : ipmo WindowsCompatibility -Force
PositionMessage : At line:1 char:1
+ ipmo WindowsCompatibility -Force
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
InvocationName : ipmo
CommandOrigin : Internal
ScriptStackTrace : at <ScriptBlock>, <No file>: line 1
PipelineIterationInfo :
0
1
Environment data
Name Value
---- -----
PSVersion 7.5.0-preview.3
PSEdition Core
GitCommitId 7.5.0-preview.3-37-gec3840d6a1fffdbaca7173ededb4a4504b2f5b41
OS Microsoft Windows 10.0.19045
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion <IP_ADDRESS>
WSManStackVersion 3.0
Visuals
Note that the WindowsCompatibility module is obsolete (it was designed for PS v6, and is now archived; arguably, it should be hidden from the PowerShell Gallery or clearly marked as obsolete).
Its functionality has been folded into PowerShell itself, so there is no longer a need for it.
And, in general, I think it's preferable that something like using namespace System.Management.Automation. not be accepted.
@mklement0:
Yeah, it makes sense. I have a function that exports Scoop environment from online system to disconnected system, that needs to run some code from PS Core session on Windows PowerShell, and I was using Invoke-WinCommand to accomplish that. I'll refactor this code to use New-PSSession -UseWindowsPowerShell instead.
I also agree that the WindowsCompatibility module should be retired; the only concern I have is that there might be some code in the wild with using namespace statements that have an extra trailing dot, which will fail to work on PS 7.5.0 once it is released as stable. I guess it should be clearly stated in the release docs as a breaking change so users are aware.
It's surprising for that trailing period to be there, but I see it in the psm1 source. The WinCompat module is deprecated and the repo is archived and no longer supported. As noted, there are other supported solutions.
|
2025-04-01T06:37:26.619220
| 2019-05-30T12:56:02
|
450294081
|
{
"authors": [
"chuanjiao10",
"iSazonov",
"vector-sec"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2346",
"repo": "PowerShell/PowerShell",
"url": "https://github.com/PowerShell/PowerShell/issues/9766"
}
|
gharchive/issue
|
Powershell hangs or dies silently when running Read-Host -AsSecureString when invoked via Python subprocess
Steps to reproduce
import json, uuid, subprocess, base64
PSCODE = '''
[CmdletBinding()]
param
(
$payload = (Read-Host -AsSecureString)
)
Get-Host
'''
def base64_encode_powershell(input_string):
"""
Encodes input value in UTF-16 Little Endian, making it suitable for use with powershell's encoded command argument.
"""
byte_string = input_string.encode('utf-16-le')
encoded_data = base64.b64encode(byte_string)
return encoded_data
def create_ps_file(ps_name, ps_content):
ps_path = ps_name
with open(ps_path, 'w+') as file:
file.write(ps_content)
return ps_path
ps_args = {}
ps_args['URL'] = "https://google.com"
payload = base64_encode_powershell(json.dumps(ps_args))
ps_path = create_ps_file('ps_' + str(uuid.uuid4()).replace('-','') + '.ps1', PSCODE)
output = subprocess.Popen(["pwsh", ps_path], stdin = subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = output.communicate(payload)
print(stdout)
Expected behavior
stdout in the Python code should contain the output of Get-Host
Name : ConsoleHost
Version : 6.2.1
InstanceId : 5168dee8-2e5c-4967-9d78-b53372cf351d
UI : System.Management.Automation.Internal.Host.InternalHostUserI
nterface
CurrentCulture : en-US
CurrentUICulture : en-US
PrivateData : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
DebuggerEnabled : True
IsRunspacePushed : False
Runspace : System.Management.Automation.Runspaces.LocalRunspace
Actual behavior
stdout in the Python code is empty, as is stderr
Environment data
Name Value
---- -----
PSVersion 6.2.1
PSEdition Core
GitCommitId 6.2.1
OS Linux 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion <IP_ADDRESS>
WSManStackVersion 3.0
Note
The expected behavior is observed when using PowerShell 5 on a Windows system, using the exact same Python code (except for changing pwsh to powershell.exe and adding a -f to the args in the subprocess call).
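The hang here is most likely because Read-Host -AsSecureString expects an interactive console, which isn't available when stdin is a pipe. A minimal sketch of side-stepping the prompt by passing the payload as a named argument instead of via stdin (the -payload parameter name comes from the repro script; the script path is a placeholder):

```python
import base64
import json

def encode_powershell_argument(obj):
    """Encode a JSON-serializable value as base64 over UTF-16-LE,
    the byte order PowerShell expects for encoded input."""
    return base64.b64encode(json.dumps(obj).encode("utf-16-le")).decode("ascii")

# Build the argument list so the payload arrives as a named parameter
# and the script never falls back to the interactive Read-Host prompt.
payload = encode_powershell_argument({"URL": "https://google.com"})
argv = ["pwsh", "-File", "script.ps1", "-payload", payload]
```

The list can then be handed to subprocess.run(argv) with no stdin pipe at all.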
Suggest simplifying the Python code and giving the OS and Python version
Close as stale issue. Feel free to continue discussion.
|
2025-04-01T06:37:26.622171
| 2022-11-22T02:03:17
|
1458886589
|
{
"authors": [
"SteveL-MSFT",
"daxian-dbw",
"iSazonov",
"kilasuit",
"xtqqczze"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2347",
"repo": "PowerShell/PowerShell",
"url": "https://github.com/PowerShell/PowerShell/pull/18635"
}
|
gharchive/pull-request
|
Remove minor versions from PSCompatibleVersions
Supercedes #18346.
I'd be more willing to see this not remove all minor versions but instead leave the latest minor version
Language breaking change can be introduced only in new major version.
@kilasuit the @PowerShell/powershell-committee's position is 7.0 represents 7.x (docs will need to be updated) instead of perpetually updating this list with every yearly release of 7.x.
@SteveL-MSFT that's fair, just wanted to highlight this possible other viewpoint so that it's mentioned within this PR for full transparency to those coming to this in future
@xtqqczze Can you please fix the failing tests? Some tests need to be updated accordingly.
|
2025-04-01T06:37:26.626779
| 2019-03-27T11:32:28
|
425907076
|
{
"authors": [
"ykuijs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2348",
"repo": "PowerShell/SharePointDsc",
"url": "https://github.com/PowerShell/SharePointDsc/issues/1044"
}
|
gharchive/issue
|
[SPAppManagementServiceApp] Resource does not create proxy afterwards
Details of the scenario you tried and the problem that is occurring
When the Service App Proxy isn't created during deployment, it is never created at a later stage either
Verbose logs showing the problem
N/A
Suggested solution to the issue
Update resource to check for the presence of the proxy and create it when not present.
The DSC configuration that is used to reproduce the issue (as detailed as possible)
N/A
The operating system the target node is running
Win 2K16, PSv5.1
Version of SharePoint that is used (e.g. SharePoint 2016)
All
Version and build of PowerShell the target node is running
v5.1
Version of the DSC module that was used ('dev' if using current dev branch)
Dev
Will be included in my next bugfix PR
|
2025-04-01T06:37:26.628688
| 2016-05-17T19:26:36
|
155341249
|
{
"authors": [
"vors"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2349",
"repo": "PowerShell/platyPS",
"url": "https://github.com/PowerShell/platyPS/pull/86"
}
|
gharchive/pull-request
|
Add New-Markdown test for dynamic parameters
Courtesy to @dotps1 for the function with dynamic parameters example
--ff only merge
This change is
Merged in master with --ff
|
2025-04-01T06:37:26.639358
| 2017-04-06T12:16:29
|
219879436
|
{
"authors": [
"andy1547",
"johlju"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2350",
"repo": "PowerShell/xMySql",
"url": "https://github.com/PowerShell/xMySql/issues/19"
}
|
gharchive/issue
|
Get-MySqlPort should be skipped when a optional port parameter is supplied
Resources that perform Get-MySqlPort currently fail for me as my.ini is not located in the standard location.
Not really sure why it is even necessary to parse the port from the config; I assumed there's something I've overlooked, so I've created an optional my.ini path override parameter for the Get-MySqlPort function and the associated resources. Happy to submit a PR.
That helper function is used in several resources to get the port; if there is a better way to get the port then that should be used. Otherwise your workaround sounds promising. If you want to send in a PR then please do.
|
2025-04-01T06:37:26.641345
| 2015-08-13T20:45:10
|
100864319
|
{
"authors": [
"Iristyle",
"KarolKaczmarek",
"msftclas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2351",
"repo": "PowerShell/xRemoteDesktopSessionHost",
"url": "https://github.com/PowerShell/xRemoteDesktopSessionHost/pull/1"
}
|
gharchive/pull-request
|
Update errant file encodings
Remove smartquotes in MOF schema and ensure file is UTF8
Hi @Iristyle, I'm your friendly neighborhood Microsoft Pull Request Bot (You can call me MSBOT). Thanks for your contribution!
You've already signed the contribution license agreement. Thanks!
The agreement was validated by Microsoft and real humans are currently evaluating your PR.
TTYL, MSBOT;
Thanks for fixing encoding in this and other modules @Iristyle
|
2025-04-01T06:37:26.644126
| 2015-06-08T21:06:09
|
86331294
|
{
"authors": [
"KarolKaczmarek",
"TravisEz13",
"msftclas"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2352",
"repo": "PowerShell/xTimeZone",
"url": "https://github.com/PowerShell/xTimeZone/pull/6"
}
|
gharchive/pull-request
|
Update Schema to support singleton
Fix/Workaround for issue #2
Hi @TravisEz13, I'm your friendly neighborhood Microsoft Pull Request Bot (You can call me MSBOT). Thanks for your contribution!
It looks like you're a Microsoft contributor (Travis Plunk). If you're full-time, we DON'T require a Contribution License Agreement. If you are a vendor, please DO sign the electronic Contribution License Agreement. It will take 2 minutes and there's no faxing! https://cla.microsoft.com.
TTYL, MSBOT;
looks good
|
2025-04-01T06:37:26.699307
| 2024-02-04T08:48:10
|
2116962671
|
{
"authors": [
"Kvel2D"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2353",
"repo": "Praytic/youtd2",
"url": "https://github.com/Praytic/youtd2/issues/364"
}
|
gharchive/issue
|
DAMAGE event should be emitted only for damage caused by tower attacks
Any calls to do_spell_damage(), do_attack_damage() and AOE variations inside tower scripts should not emit DAMAGE event.
This behavior is according to original JASS code.
DAMAGE event should also be emitted for all instances of damage resulting from splash attacks.
Proof:
The tower in the screenshot is Ash Geyser. It has a splash attack and an ability which applies a debuff to creeps inside DAMAGE event callback. The screenshot shows that a creep was hit by splash attack (not as main target) and it received the debuff.
|
2025-04-01T06:37:26.710773
| 2016-10-04T16:58:33
|
180948574
|
{
"authors": [
"Rmohan06",
"benoitjchevalier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2354",
"repo": "PredixDev/px-vis-xy-chart",
"url": "https://github.com/PredixDev/px-vis-xy-chart/issues/6"
}
|
gharchive/issue
|
No padding between axis label and color legend bar
Should have a 3px space. Right now the first color bar sits directly next to the label.
fixed
|
2025-04-01T06:37:26.942641
| 2017-01-26T16:43:04
|
203421912
|
{
"authors": [
"PrinceOfAmber",
"xenoflot"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2355",
"repo": "PrinceOfAmber/Cyclic",
"url": "https://github.com/PrinceOfAmber/Cyclic/issues/250"
}
|
gharchive/issue
|
[Request] Automatic Fishing Net auto-eject
I can see that this has been partially looked at in #200 but I'm not sure how we might go about automatically extracting from the Fishing net. Putting a hopper under it does extract the contents but appears to halt production.
With cyclic 1.10.2-1.9.19 I'm unable to extract from the top face using Translocators (extracts rod, not loot), Transfer Nodes, nor ActuallyAddition's ESD. (edit: Tried using an AA PhantomFace too and it complains. I'm guessing your inventory isn't being made available)
Perhaps you could have the Net automatically eject to an inventory that's on the top face? That way we could throw a chest on top and then use automation to clear the chest.
With the current restriction on available faces, I'd have to choose between having a crafter auto-insert fresh rods, OR having something auto-extract the fish. I don't have EnderIO in this pack so I can't use one conduit for both :P
Perhaps you could set the net to halt when the rod is close to breaking so that I don't lose my Unbreaking/Luck rod? Alternatively ease the placement restrictions to expose 2 faces for automation.
Yeah, I could do an inventory on/off feature, so if it's off then it will always drop below as if it was full.
Letting rods break or not should be doable too.
I haven't thought about the faces having only one open; I'll look at it. Maybe 3 out of 4 sides.
By the way, if the fishing rod has Mending it shouldn't break in there; it should repair itself from the fishing XP.
Made a bunch of changes in the latest release, let me know what you think https://minecraft.curseforge.com/projects/cyclic/files/2374252
Wonderful! I'll update the pack next weekend and check out the changes. Thanks!
|
2025-04-01T06:37:26.952497
| 2020-05-26T21:21:56
|
625207392
|
{
"authors": [
"codecov-commenter",
"rlskoeser"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2356",
"repo": "Princeton-CDH/mep-django",
"url": "https://github.com/Princeton-CDH/mep-django/pull/643"
}
|
gharchive/pull-request
|
Address JSON view for member map
indexes member address information in Solr
new JSON view to return map and member data with the same filters used on the members list view
@thatbudakguy codefactor is helpfully flagging a couple of todos I left in the code — I think they depend on what we decide we need from this view, so I probably need input from you to resolve.
IDK how we want to handle merging this one — do we merge into develop when you're happy with it even though it's not really functional on its own?
Codecov Report
Merging #643 into develop will increase coverage by 0.00%.
The diff coverage is 99.07%.
@@ Coverage Diff @@
## develop #643 +/- ##
=========================================
Coverage 98.08% 98.09%
=========================================
Files 218 219 +1
Lines 11568 11671 +103
Branches 63 63
=========================================
+ Hits 11347 11449 +102
- Misses 221 222 +1
|
2025-04-01T06:37:26.961308
| 2021-07-01T23:04:15
|
935275878
|
{
"authors": [
"hepcat72"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2357",
"repo": "Princeton-LSI-ResearchComputing/tracebase",
"url": "https://github.com/Princeton-LSI-ResearchComputing/tracebase/issues/133"
}
|
gharchive/issue
|
Update the column headers in the peakgroups search output
BUG DESCRIPTION
Problem
Column headers in the peakgroups search results (both basic and advanced) use Caroline's Processed tissue data. They should use the terms from issue #107. They could also be informed by the peakgroups tab of customer-desired consolidated view formats
Steps to reproduce
Go to http://<IP_ADDRESS>:8000/DataRepo/search_peakgroups/ and search for tissue is brain
Current behavior
Names used in Caroline's Processed tissue data
Expected behavior
Use the terms from issue #107 - TraceBase Terms and Definitions.
Suggested Change
Simple edit of search_peakgroups and the basic search templates.
Comment
Change the column headers from Caroline's terms to agreed upon terms (issue #107)
Originally posted by @hepcat72 in https://github.com/Princeton-LSI-ResearchComputing/tracebase/pull/127#discussion_r662395704
ISSUE OWNER SECTION
Assumptions
List of assumptions made WRT the code
E.g. We will assume input is correct (explaining why there is no validation)
Requirements
List of conditions to be met for the feature
E.g. Every column/row must display a value, i.e. cannot be empty
Limitations
A list of things this work will not fix
E.g. Getting edge case X to work requires too much effort and will not be
fixed in this effort.
Affected/Changed Components
Files
Environment variables
External executables
Database tables
Cron job settings
Etc.
DESIGN
GUI Change description
Describe changes the user will see.
Code Change Description (Pseudocode optional)
Describe code changes planned for the fix.
Tests
A test should be planned for each requirement (above), where possible.
Dupe of #147
|
2025-04-01T06:37:26.966327
| 2019-03-25T21:06:08
|
425115576
|
{
"authors": [
"Golmote",
"RunDevelopment"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2358",
"repo": "PrismJS/prism",
"url": "https://github.com/PrismJS/prism/issues/1836"
}
|
gharchive/issue
|
What is the parent property in the environment of the wrap hook for?
I don't get what the parent property is for.
We don't have a plugin that is using it, and it doesn't even do what the name suggests: parent actually holds the nearest token array; i.e. in the case that the content of a token is a single token, parent will not be the parent token but the token stream of the parent.
Apart from that, I also don't see why it would be useful:
Assuming we wrap a token which is in a token stream: What can parent be used for?
We can alter tokens positioned before the current token, but that doesn't change the output of stringify because those tokens are stringified already. Changing the current token doesn't do anything either.
The only thing parent can be used for is to alter the items positioned after the current token. Btw. we can't add or remove items because stringify uses map.
So, what is it for?
Also, I don't see why the wrap hook should be able to modify the token stream anyway.
Added 11 May 2013
Because of the proximity in time between the commits, I suspect it was initially added for something related to the WPD plugin (20c0a1e96d6760aa2f0fad5b9a4f77d6f6b89434), but even there it is unused.
I believe it is safe to remove it, considering how unreliable it is anyway.
|
2025-04-01T06:37:26.976339
| 2018-05-03T06:00:13
|
319796297
|
{
"authors": [
"RJFares",
"dschuermann"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2359",
"repo": "PrivacyApps/html-textview",
"url": "https://github.com/PrivacyApps/html-textview/issues/136"
}
|
gharchive/issue
|
Replace compile with implementation
My project keeps giving me this error and fortunately android studio now points to the library which has this problem.
Please replace
dependencies {
compile 'com.android.support:support-annotations:25.0.0'
}
with "implementation" (and any other if there is)
Can you make a pull request and fix this?
Done.
I have never done this before so I hope I did it correctly :)
thanks!
|
2025-04-01T06:37:27.018339
| 2023-04-26T16:07:17
|
1685368637
|
{
"authors": [
"ipc103",
"manishapriya94"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2360",
"repo": "ProgramEquity/amplify",
"url": "https://github.com/ProgramEquity/amplify/issues/560"
}
|
gharchive/issue
|
Breakout 4: Token Vault workflows
Problem Statement:
Token scanning is important to ensure that secrets/tokens aren't exposed (very important for open source projects and the supply chain that depends on them). A repo can contain up to 100 secrets and an org can contain up to 500.
Tasks
*lean on maintainer for admin privileges
[ ] checkout the action for Hashicorp vault
[ ] create a .yml file in workflows
[ ] Set up a Vault instance,
[ ] Authentication with GitHub OIDC Token
[ ] Example usage to reference
2023-04-26
Vault is a Hashicorp service for storing secrets
OIDC workflows are used to generate short-lived auth tokens
This is preferable to generating a long-lived token with read:org
We have a Vault account already and can use that to generate a Vault instance
|
2025-04-01T06:37:27.215606
| 2024-09-12T01:57:07
|
2521148429
|
{
"authors": [
"JeferssonCL",
"JorgeHB69"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2361",
"repo": "Programming6-projects/LosCuriosos",
"url": "https://github.com/Programming6-projects/LosCuriosos/pull/9"
}
|
gharchive/pull-request
|
feat: Implementation of CRUD with JSON files (transports)
Pull Request
Description
Type of Change
[ ] ✨ New feature (non-breaking change which adds functionality)
[ ] 🛠️ Bug fix (non-breaking change which fixes an issue)
[ ] ❌ Breaking change (fix or feature that would cause existing functionality
to change)
[ ] 🧹 Code refactor
[ ] ✅ Build configuration change
[ ] 📝 Documentation
[ ] 🗑️ Chore
Task linked: CU-8689n1ev7 Implementation of CRUD with JSON files transports
|
2025-04-01T06:37:27.224436
| 2017-03-13T11:52:02
|
213744461
|
{
"authors": [
"daniel-j-h"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2362",
"repo": "Project-OSRM/node-osrm",
"url": "https://github.com/Project-OSRM/node-osrm/pull/304"
}
|
gharchive/pull-request
|
Documents roundtrip param
See https://github.com/Project-OSRM/osrm-backend/issues/3741
Now in the backend https://github.com/Project-OSRM/osrm-backend/pull/4185.
|
2025-04-01T06:37:27.238671
| 2024-09-22T19:02:43
|
2541213760
|
{
"authors": [
"Icesito68",
"Robotix22"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2363",
"repo": "Project-Silicium/Mu-Silicium",
"url": "https://github.com/Project-Silicium/Mu-Silicium/pull/1436"
}
|
gharchive/pull-request
|
Add Initial r9q Support
What Changed
Add support for r9q
Reason
As I don't think I'll have many more updates soon, I'll do the PR again.
Checklist
[x] Is what you changed Tested?
[x] Is the Source Code Cleaned?
Now the DSDT doesn't exist; please make a PR in the Silicium-ACPI repo.
|
2025-04-01T06:37:27.273625
| 2021-08-12T18:12:36
|
969312519
|
{
"authors": [
"brian-rose",
"kmpaul"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2364",
"repo": "ProjectPythia/pythia-foundations",
"url": "https://github.com/ProjectPythia/pythia-foundations/pull/104"
}
|
gharchive/pull-request
|
Wrap analytics id in quotes
This might get the google analytics working for the Foundations site.
I noticed that my other JupyterBook site (which has working analytics) wrapped the ID in double quotes:
https://github.com/brian-rose/ClimateLaboratoryBook/blob/87801650d5f2e374494d8ccfc955b2a3b3000254/_config.yml#L45
This is consistent with the JupyterBook config reference, which shows double quotes:
https://jupyterbook.org/customize/config.html
I tried using double quotes here, but prettier insisted on changing them to single quotes.
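For reference, a sketch of the quoted setting as the JupyterBook docs show it (the ID here is a placeholder):

```yaml
html:
  google_analytics_id: "UA-XXXXXXXXX-X"
```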
@kmpaul I suggest merging this and see if it get the analytics working. I'm not sure we have a way to test this without merging first.
If this doesn't work, then we might be able to wait for a fix:
https://github.com/executablebooks/jupyter-book/issues/1300
https://github.com/pydata/pydata-sphinx-theme/pull/439
I don't think this fixed anything, sadly. I can't be 100% sure, but I don't see any differences in the actual HTML generated for the page.
Ok! I suppose my Climate Laboratory book must be using an "old style" Google Analytics ID, though I'm not yet clear what that means.
Does it start with a UA-? Or a G-? The first is the old and the second is the new.
|
2025-04-01T06:37:27.292555
| 2017-02-27T11:30:03
|
210458192
|
{
"authors": [
"Suparna-Acharya",
"iroshni"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2365",
"repo": "Promact/trappist",
"url": "https://github.com/Promact/trappist/issues/7"
}
|
gharchive/issue
|
forget password issue
Question: In the login form, after a user types their username, clicks the forgot-password link, and confirms a new password, will the user return to the login page with the fields pre-filled with the new password, or do they have to enter all the information again?
When the user comes back to the login form after resetting the password via the forgot-password link, he/she has to fill in both details again.
|
2025-04-01T06:37:27.296366
| 2021-11-21T16:02:40
|
1059416880
|
{
"authors": [
"Tingtingyy",
"dizys"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2366",
"repo": "PromoSquad/promotions",
"url": "https://github.com/PromoSquad/promotions/issues/105"
}
|
gharchive/issue
|
Query BDD Test by productId
As a Developer
I need to describe the behavior of listing all promotions by productId
So that I can test the feature under DBB
Details & Assumptions:
Write the feature using Gherkin syntax that is understood by the behave tool
Acceptance Criteria:
When run tests using `behave`
Then I should see “list all promotions by productId query” scenario pass
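A sketch of what the feature file might look like in Gherkin (the step wording and table columns are illustrative, not the project's actual step definitions):

```gherkin
Feature: Query promotions by product id

  Scenario: list all promotions by productId query
    Given the following promotions exist
      | name    | product_id |
      | 10% off | 1001       |
    When I query promotions with product_id "1001"
    Then I should see "10% off" in the results
```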
IBM Cloud toolchain: Delivery Pipeline deployed promotions to prod, including commits 9fc2661199993d8990995905067cd7dac4192450, 594968bcce6b80cd84ae9d4c43896a7c0c1dd42a
|
2025-04-01T06:37:27.297850
| 2022-03-09T21:03:06
|
1164436454
|
{
"authors": [
"kripa528",
"tv547"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2367",
"repo": "PromotionsSquad/promotions",
"url": "https://github.com/PromotionsSquad/promotions/pull/32"
}
|
gharchive/pull-request
|
Updated routes.py file for descrip. & create promo
Updated the routes.py file to include our name and description. Also, updated the file within the Get Index function to include read promotion.
reviewed!
|
2025-04-01T06:37:27.338366
| 2016-09-28T12:14:07
|
179755750
|
{
"authors": [
"DaneEveritt",
"andrea2107"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2368",
"repo": "Pterodactyl/Daemon",
"url": "https://github.com/Pterodactyl/Daemon/issues/26"
}
|
gharchive/issue
|
ifconfig not found cs go server
I am trying to run a CS:GO server on the panel but it seems like it can't find ifconfig, so it will crash.
here's the log
http://hastebin.com/ukaqocacaw.sql
/cc @parkervcp
Oh, I solved this by adding apt-get install -y net-tools to the srcds Docker image, but now the CS:GO server gets stuck at assigning an IP (it fails)
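For anyone else hitting this, the workaround amounts to baking net-tools into the image; a hypothetical Dockerfile fragment (the base image tag is a placeholder):

```dockerfile
FROM ubuntu:16.04
# ifconfig lives in net-tools, which newer base images no longer ship by default
RUN apt-get update \
    && apt-get install -y --no-install-recommends net-tools \
    && rm -rf /var/lib/apt/lists/*
```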
I am unable to reproduce this on my system, and did not encounter any issues while downloading CS:GO though the container.
I am going to assume that these issues are most likely due to the method that you used to move files, and could be either permissions or missing files entirely.
|
2025-04-01T06:37:27.340602
| 2017-12-18T20:47:32
|
283014927
|
{
"authors": [
"DaneEveritt",
"odddellarobbia"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2369",
"repo": "Pterodactyl/Panel",
"url": "https://github.com/Pterodactyl/Panel/issues/816"
}
|
gharchive/issue
|
[7.0] Server Sub-User Issues.
Panel or Daemon: Panel
Version of Panel/Daemon: 7.0
Server's OS: Ubuntu 16.04
Your Computer's OS & Browser: Chrome, Win 10
Add Details Below:
Adding a sub-user to a server is non-functional; unsure about deleting a sub-user from a server. Adding and deleting sub-users/users from the panel itself is fine though.
Can you please clarify what the issue here is @odddellarobbia?
|
2025-04-01T06:37:27.347178
| 2020-07-31T17:23:02
|
670035580
|
{
"authors": [
"jfrankl",
"tgilcrest"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2370",
"repo": "PublicMapping/districtbuilder",
"url": "https://github.com/PublicMapping/districtbuilder/issues/245"
}
|
gharchive/issue
|
Improve UX for zooming with restricted geolevels
Description
We prevent selecting the blocks geolevel at zoom levels below a certain threshold. However, if you zoom in, select the blocks geolevel, and then zoom out, blocks are still selected but they are grayed out and we don't clearly explain what changed and what a user needs to do to continue working with blocks. The user can't select geounits on the map until they manually select an available geolevel. For example:
To remedy this, we want to make it more clear that the user needs to zoom back in to see and interact with blocks by showing them a message (see screenshot). We also want to not gray out the blocks geolevel so it's clear the blocks geolevel is still active.
AC:
When a user selects a restricted geolevel (i.e. Blocks) and zooms out past the min-zoom, we show a message that explains: Zoom in to work with [geolevel]. (See screenshot)
Also when this happens, the blocks geolevel button is still active and indicates blocks are still selected (rather than being grayed out).
When zoomed out, if a user selects a non-restricted geolevel the restricted geolevel button is deactivated and returns to a state like in #268 / #287.
If a user has a restricted geounit selected, the user can zoom out past the min-zoom (rather than be limited to it) and the selection is maintained (i.e. sidebar shows proposed change, if you zoom back in the selection is still there)
Screenshots
I edited this issue to reflect the new direction mentioned in this message: https://github.com/PublicMapping/districtbuilder/pull/270#issuecomment-669479824
Updated to capture the various nuances.
@jfrankl What do you think of Zoom in to work with blocks? It's a little more concise than Zoom in to view and select blocks.
@tgilcrest looks good to me.
|
2025-04-01T06:37:27.373010
| 2017-03-21T01:11:20
|
215600909
|
{
"authors": [
"adamwolf",
"estiens",
"evaldsurtans",
"joebowbeer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2371",
"repo": "PunchThrough/bean-sdk-node",
"url": "https://github.com/PunchThrough/bean-sdk-node/issues/7"
}
|
gharchive/issue
|
firmware_bundles are obsolete
The most recent commit (72eca92) updated firmware to rev<PHONE_NUMBER>00, whereas current Bean Loader apps require rev<PHONE_NUMBER>00.
In a possibly related issue, the bean program_firmware command does not fail-fast if the bean's firmware is more recent than the revision in the SDK's firmware_bundles. Instead it proceeds, but does not complete:
[...]
2017-03-21T00:39:41.915Z INFO All services have been setup!
Connected!
2017-03-21T00:39:41.946Z INFO Char read success(2a27): 2A
Programming device firmware: 884aea153e0a
2017-03-21T00:39:41.952Z INFO Begin update called
2017-03-21T00:39:41.975Z INFO Char read success(2a26):<PHONE_NUMBER>00 Img-B
2017-03-21T00:39:41.976Z INFO Comparing firmware versions: Bundle version<PHONE_NUMBER>00), Bean version<PHONE_NUMBER>00)
2017-03-21T00:39:41.979Z INFO Starting FW update for device Bean+(884aea153e0a)
2017-03-21T00:39:41.981Z INFO Begin FW @<PHONE_NUMBER>
2017-03-21T00:39:41.984Z INFO Triggered a notification on Identify char
The command appears to hang at this point and must be manually terminated.
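The missing fail-fast could be as simple as comparing the two firmware versions before starting the transfer. An illustrative sketch (Python with hypothetical helper and version numbers; the SDK itself is Node):

```python
def check_can_program(bundle_version: int, bean_version: int) -> None:
    """Abort the firmware update early if the Bean's firmware is newer
    than the firmware bundled with the SDK (hypothetical helper)."""
    if bean_version > bundle_version:
        raise RuntimeError(
            "Bean firmware ({}) is newer than the SDK bundle ({}); "
            "refusing to downgrade. Update the SDK's firmware_bundles "
            "instead.".format(bean_version, bundle_version)
        )

# Example: an up-to-date Bean paired with an outdated SDK bundle
try:
    check_can_program(bundle_version=100, bean_version=200)
except RuntimeError as e:
    print(e)
```

With such a check the `program_firmware` command would exit with a clear error instead of hanging after "Triggered a notification on Identify char".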
Another possibly related bug is that the bean program_sketch command also hangs. This has been reported several times recently in the forum:
2017-03-21T00:42:35.289Z INFO All services have been setup!
Connected!
2017-03-21T00:42:35.318Z INFO Char read success(2a27): 2A
Found sketch setLed for board Bean+
2017-03-21T00:42:35.328Z INFO No longer scanning...
2017-03-21T00:42:35.331Z INFO State transition: null -> STATE_INACTIVE
2017-03-21T00:42:35.331Z INFO Beginning sketch upload of sketch: setLed
2017-03-21T00:42:35.332Z INFO State transition: STATE_INACTIVE -> STATE_AWAIT_READY
2017-03-21T00:42:35.333Z INFO Sketch upload started!
Once again, the command appears to hang at this point and must be manually terminated.
Other commands that hang are: read_accel, read_ble_config, and read_device_info.
I wonder if the hangs are related to the firmware version? It seems unlikely that they are related to the BLE dongle, because the hangs have been reported when using several different approved dongles.
windows 10
python 2.7.13
node v6.10.0
bean 0.6.1
I forked and recompiled the latest version; with the latest firmware it is not possible to program the LightBlue Bean anymore. Same problem as described here: http://beantalk.punchthrough.com/t/cli-unable-to-upload-sketch-update-firmware-rename-bean/4187/14
Probably not working on all platforms (mine is Linux).
I tried to downgrade the firmware (using a reversed patch), but unfortunately the command also hangs:
./bean.sh program_firmware -n Bean
2017-04-01T19:59:35.606Z INFO Setting scan timeout: 15 seconds
2017-04-01T19:59:35.611Z INFO Starting to scan...
Found device with name/address: Bean/04a3169af315
2017-04-01T19:59:36.349Z INFO No longer scanning...
2017-04-01T19:59:36.350Z INFO Connecting to device: Bean
2017-04-01T19:59:36.883Z INFO Looking up services for device: Bean
2017-04-01T19:59:37.725Z INFO Found service: OAD Service / f000ffc004514000b000000000000000
2017-04-01T19:59:37.725Z INFO Found service: Generic Access / 1800
2017-04-01T19:59:37.725Z INFO Found service: Generic Attribute / 1801
2017-04-01T19:59:37.726Z INFO Found service: Device Information / 180a
2017-04-01T19:59:37.726Z INFO Found service: Serial Transport Service / a495ff10c5b14b44b5121370f02d74de
2017-04-01T19:59:37.726Z INFO Found service: Unknown / a495ff20c5b14b44b5121370f02d74de
2017-04-01T19:59:37.726Z INFO Found service: Battery Service / 180f
2017-04-01T19:59:37.726Z INFO Found service: Scan Parameters / 1813
2017-04-01T19:59:37.726Z INFO Found service: Human Interface Device / 1812
2017-04-01T19:59:37.726Z INFO Found service: Unknown / 03b80e5aede84b33a7516ce34ec4c700
2017-04-01T19:59:37.727Z INFO Service setup successfully: Generic Access
2017-04-01T19:59:37.727Z INFO Service setup successfully: Generic Attribute
2017-04-01T19:59:37.727Z INFO Service setup successfully: Human Interface Device
2017-04-01T19:59:37.727Z INFO Service setup successfully: Scan Parameters
2017-04-01T19:59:37.728Z INFO Setting up IDENTIFY and BLOCK notifications
2017-04-01T19:59:37.728Z INFO Service setup successfully: Device Information
2017-04-01T19:59:37.728Z INFO Setting up SERIAL notifications
2017-04-01T19:59:37.728Z INFO Service setup successfully: Unknown
2017-04-01T19:59:37.728Z INFO Service setup successfully: Battery Service
2017-04-01T19:59:37.729Z INFO Service setup successfully: Unknown
2017-04-01T19:59:37.848Z INFO Service setup successfully: OAD Service
2017-04-01T19:59:37.870Z INFO Service setup successfully: Serial Transport Service
2017-04-01T19:59:37.871Z INFO All services have been setup!
Connected!
2017-04-01T19:59:37.893Z INFO Char read success(2a27): 1E
Programming device firmware: 04a3169af315
2017-04-01T19:59:37.895Z INFO Begin update called
2017-04-01T19:59:37.915Z INFO Char read success(2a26):<PHONE_NUMBER>00 Img-B
2017-04-01T19:59:37.916Z INFO Comparing firmware versions: Bundle version<PHONE_NUMBER>00), Bean version<PHONE_NUMBER>00)
2017-04-01T19:59:37.916Z INFO Starting FW update for device Bean(04a3169af315)
2017-04-01T19:59:37.916Z INFO Begin FW @<PHONE_NUMBER>
2017-04-01T19:59:37.917Z INFO Triggered a notification on Identify char
The sort-of good news is that the Bean is not bricked, because it is still possible to program it using the OSX non-CLI tools.
The firmware in repo has been updated (20170406) but I'm still seeing the hangs for read_ble_config and read_device_info, etc.
windows 10
python 2.7.13
node v6.10.2
bean 0.6.2
Same problem; the CLI still gets stuck on Linux, Windows and OSX like this:
2017-04-01T19:59:37.893Z INFO Char read success(2a27): 1E
Programming device firmware: 04a3169af315
2017-04-01T19:59:37.895Z INFO Begin update called
2017-04-01T19:59:37.915Z INFO Char read success(2a26):<PHONE_NUMBER>00 Img-B
2017-04-01T19:59:37.916Z INFO Comparing firmware versions: Bundle version<PHONE_NUMBER>00), Bean version<PHONE_NUMBER>00)
2017-04-01T19:59:37.916Z INFO Starting FW update for device Bean(04a3169af315)
2017-04-01T19:59:37.916Z INFO Begin FW @<PHONE_NUMBER>
2017-04-01T19:59:37.917Z INFO Triggered a notification on Identify char
The only way to upload a sketch is through the OSX GUI uploader.
Hi folks! We are working through this issue with the Node loader. I'll make sure to update this issue when we make headway. Thanks!
Punch Through Tech Support suggested I do the following, and it works much better:
$ cd %AppData%\npm\node_modules\bean-sdk
$ npm install --save --save-exact<EMAIL_ADDRESS>
The problem is apparently that bean-sdk relies on noble 1.7.0 exactly, but the dependency in package.json is specified as ^1.7.0 instead of 1.7.0, so the latest noble version (1.8.0) is installed instead.
The fix downgrades bean-sdk's noble install to 1.7.0 and rewrites the dependency so that an update won't accidentally revert the fix.
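In package.json terms, the difference amounts to this (illustrative fragment):

```json
{
  "dependencies": {
    "noble": "1.7.0"
  }
}
```

Under npm's semver rules, `^1.7.0` allows any 1.x release at or above 1.7.0 (so 1.8.0 gets picked up), while the bare `1.7.0` pins that exact version.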
Yes, with latest update it is possible to program_sketch using CLI, thank you!
I am still running into hangs on Linux with program_sketch or program_firmware.
|
2025-04-01T06:37:27.388375
| 2020-07-12T06:50:44
|
655344012
|
{
"authors": [
"PurpleBooth"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2372",
"repo": "PurpleBooth/git-mit",
"url": "https://github.com/PurpleBooth/git-mit/pull/317"
}
|
gharchive/pull-request
|
fix: Add relates to to the duplicates list
This adds the relates-to trailer to the list of trailers we detect
duplicates from, but the root of the problem is that we sometimes insert
a trailer when we don't actually need to, because it's already there.
This also fixes that.
Relates-to: #220
#315
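The dedup check amounts to: only append a trailer if an identical trailer line is not already present in the message. A minimal illustrative sketch in Python (git-mit itself is Rust; names are hypothetical):

```python
def add_trailer(message: str, trailer: str) -> str:
    """Append a trailer line unless the exact same trailer already exists."""
    lines = [line.strip() for line in message.splitlines()]
    if trailer in lines:
        return message  # already there: do not insert a duplicate
    return message.rstrip("\n") + "\n\n" + trailer + "\n"

msg = "fix: something\n\nRelates-to: #220\n"
add_trailer(msg, "Relates-to: #220")  # returns msg unchanged
```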
|
2025-04-01T06:37:27.399897
| 2016-10-15T04:31:51
|
183188170
|
{
"authors": [
"Mechazawa",
"obskyr"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2373",
"repo": "Pushjet/Pushjet-Server-Api",
"url": "https://github.com/Pushjet/Pushjet-Server-Api/pull/18"
}
|
gharchive/pull-request
|
Return appropriate HTTP status codes on error
This pull request makes sure errors actually return non-OK HTTP status codes. Before, all requests (that didn't cause an actual uncaught exception server-side) returned 200. With this change, they instead return what they should according to the documentation and, for the undocumented ones, according to common sense.
With this pull request, the errors are mapped as following:
Error.NONE - 200 OK
Error.INVALID_CLIENT - 400 Bad request
Error.INVALID_SERVICE - 400 Bad request
Error.INVALID_SECRET - 400 Bad request
Error.DUPLICATE_LISTEN - 409 Conflict
Error.RATE_TOOFAST - 429 Too many requests
Error.SERVICE_NOTFOUND - 404 Not found
Error.ARGUMENT_MISSING - 400 Bad request
Error.INVALID_PUBKEY - 400 Bad request (Though this error is never used.)
Error.CONNECTION_CLOSING - 499 Client closed request (I wasn't sure about this one either, since it also goes unused.)
Error.NO_CHANGES - 400 Bad request
Pretty nice, eh? This also resolves #17.
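For illustration, this kind of mapping boils down to a lookup table consulted when building the response (a sketch with hypothetical error constants, not the actual Pushjet code):

```python
# Hypothetical error constants, mirroring the list above
class Error:
    NONE = 0
    INVALID_CLIENT = 1
    DUPLICATE_LISTEN = 2
    RATE_TOOFAST = 3
    SERVICE_NOTFOUND = 4

HTTP_STATUS = {
    Error.NONE: 200,              # OK
    Error.INVALID_CLIENT: 400,    # Bad request
    Error.DUPLICATE_LISTEN: 409,  # Conflict
    Error.RATE_TOOFAST: 429,      # Too many requests
    Error.SERVICE_NOTFOUND: 404,  # Not found
}

def status_for(error_code: int) -> int:
    # Unknown errors default to 400 Bad request
    return HTTP_STATUS.get(error_code, 400)
```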
Nice, I just need to make sure this is reflected in the docs from now on.
Error.CONNECTION_CLOSING WAS used when websockets were still in the API instead of a connector.
|
2025-04-01T06:37:27.401811
| 2015-01-27T15:35:33
|
55632785
|
{
"authors": [
"sean-hill",
"shaders"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2374",
"repo": "Pushwoosh/pushwoosh-sdk-samples",
"url": "https://github.com/Pushwoosh/pushwoosh-sdk-samples/issues/24"
}
|
gharchive/issue
|
Windows Phone notification payload
Hey just curious what the Windows Phone notification payload looks like.
iOS looks like
{
"aps": {
"sound": "default",
"alert": "push title"
},
u: '{key: value}' //user data
}
Android looks like
{
title: "push title"
userdata: '{key: value}' //user data
}
Windows Phone looks like
// ???
{
"content": "message",
"userdata": "{key: value}",
"onStart": false
}
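Since the user data (`u` / `userdata`) is a JSON-encoded string on every platform, a client can normalize the three shapes shown above into a common form. An illustrative sketch (field names taken from the examples above; not an official Pushwoosh API):

```python
import json

def normalize(payload: dict) -> tuple:
    """Return (message, userdata) from an iOS, Android or WP payload."""
    if "aps" in payload:                      # iOS
        message = payload["aps"].get("alert")
        raw = payload.get("u")
    elif "content" in payload:                # Windows Phone
        message = payload["content"]
        raw = payload.get("userdata")
    else:                                     # Android
        message = payload.get("title")
        raw = payload.get("userdata")
    userdata = json.loads(raw) if raw else {}
    return message, userdata
```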
|
2025-04-01T06:37:27.422317
| 2020-10-01T05:58:15
|
712515068
|
{
"authors": [
"debdutgoswami"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2376",
"repo": "Py-Contributors/awesomeScripts",
"url": "https://github.com/Py-Contributors/awesomeScripts/issues/50"
}
|
gharchive/issue
|
PDF Redaction
PDF Redaction -
what will change -
PDFs often contain sensitive information, so it is necessary to remove those details. The script will redact (annotate over) the sensitive information.
Instructions
Create a new folder for your script and file/folder name should be appropriate.
Create a README.md in your folder for program Instructions
add requirements.txt if needed
Please add/delete options that are not relevant.
[x] Adding New Code
[] Improving Code
[] Improving Documentation
[] Bug Fix
Programming Language
[x] Python
:star2: Star it :fork_and_knife:Fork it :handshake: Contribute to it!
Discord server - https://discord.gg/FXyh2S3
Happy Coding,
I would love to work on this
|
2025-04-01T06:37:27.440572
| 2018-10-02T00:52:52
|
365697800
|
{
"authors": [
"DanHickstein",
"MikhailRyazanov",
"stggh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2377",
"repo": "PyAbel/PyAbel",
"url": "https://github.com/PyAbel/PyAbel/issues/231"
}
|
gharchive/issue
|
dasch cached basis implementation
An implementation of @MikhailRyazanov's basex caching and slicing for the Dasch methods is in my PyAbel fork, dasch-cached-memory-basis.
This issue provides a discussion point on best PyAbel coding practices.
Implementation
@MikhailRyazanov's clever ideas:
a) caching the basis numpy array, for faster repeat transforms (rather than reading the basis file)
and
b) slicing the basis from a larger basis (cached or file)
are easy to implement:
a) add global variables _basis = None, as per basex, and a _method = None to identify the Dasch method, to abel.dasch.py
and
b) add slicing to the cached basis _basis[:cols, :cols], to extract from any larger basis.
# cache basis
_basis = None
_method = None
:
_basis = abel.tools.basis.get_bs_cached(method, cols, cols,
basis_dir=basis_dir,
cached_basis=(_basis, _method),
verbose=verbose)
_method = method
get_bs_cached() is common to all the basis methods, and was extracted to abel.tools.basis.py during coding of the abel.dasch.py methods.
abel.tools.basis.py became somewhat untidy when the linbasex method was included, since that method broke the simple method_basis_{cols}_{nbf}.npy naming scheme, as uniqueness required additional variables. In this fork, the use of abel.tools.basis.py requires the dasch global variables to be passed to abel.tools.get_bs_cached() as a tuple
if cached_basis is not None and cached_basis[0] is not None:
_basis, _method = cached_basis
if _basis.shape[0] >= cols and _method == method:
if verbose:
print('Using memory cached basis')
return _basis
:
basis_files = glob.glob(path_to_basis_files)
for bf in basis_files:
if int(bf.split('_')[-2]) >= cols: # relies on file order
if verbose:
print("Loading {:s} basis {:s}".format(method, bf))
D = np.load(bf)
# trim to size
return D[:cols, :nbf]
Q1: Given the mess caused by linbasex, is it better to move get_bs_cached() back into abel.dasch.py, as is the case for abel.basex.py?
Q2: Any alternative suggestions?
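Stripped of the PyAbel specifics, the cache-and-slice idea is: keep the largest basis built so far and serve smaller requests as slices of it. A minimal pure-Python sketch (plain lists stand in for the numpy array; all names are illustrative, not the actual PyAbel API):

```python
_basis = None   # cached basis, possibly larger than the current request
_method = None  # which Dasch method the cache belongs to

def get_basis(method, cols, build):
    """Return a cols x cols basis, slicing the memory cache when possible."""
    global _basis, _method
    if _basis is not None and _method == method and len(_basis) >= cols:
        # slice the larger cached basis down to the requested size
        return [row[:cols] for row in _basis[:cols]]
    _basis = build(cols)   # expensive step: generate (or load) the basis
    _method = method
    return _basis

# toy "basis generator": an n x n identity matrix
build = lambda n: [[1 if i == j else 0 for j in range(n)] for i in range(n)]
big = get_basis("three_point", 4, build)     # generates and caches a 4x4 basis
small = get_basis("three_point", 2, build)   # sliced from the cache, no rebuild
```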
Testing (as implemented in my fork)
test code gist
Generates two Dribinski sample quadrants, comparing the transform time for (0) generating the basis, (1) memory cached basis, and (2) re-reading the basis file. Then, for the smaller size quadrant, the transform time for a basis extracted from the larger basis (3) cached, (4) read from file.
Note that file reading is expensive: for a small image it is better to generate the basis than to read it.
python test-dasch.py
Dribinski image quadrant (251, 251) ========================================
(0) Generate basis ------------------------------
A suitable basis for 'three_point' was not found.
A new basis will be generated.
But don't worry, it will be saved to disk for future use.
Operator matrix saved for later use to,
./three_point_basis_251_251.npy
... in 15.2 ms
(1) Cached basis ------------------------------
Using memory cached basis
... in 0.7 ms
(2) Read basis from file ------------------------------
Loading three_point basis ./three_point_basis_251_251.npy
... in 1.3 ms
Dribinski image quadrant (2501, 2501) ========================================
(0) Generate basis ------------------------------
A suitable basis for 'three_point' was not found.
A new basis will be generated.
But don't worry, it will be saved to disk for future use.
Operator matrix saved for later use to,
./three_point_basis_2501_2501.npy
... in 1399.4 ms
(1) Cached basis ------------------------------
Using memory cached basis
... in 346.2 ms
(2) Read basis from file ------------------------------
Loading three_point basis ./three_point_basis_2501_2501.npy
... in 368.7 ms
[remove small size basis file three_point_basis_251_251.npy]
Back to (251, 251) image size ----------------------------------------
(3) Cached from larger basis cache ------------------------------
Using memory cached basis
... in 1.1 ms
(4) Read from larger basis file ------------------------------
Loading three_point basis ./three_point_basis_2501_2501.npy
... in 20.1 ms
clean up, remove basis file
I was also confused about basex having its own get_bs_basex_cached(), unlike all other methods. Why was it made so?
I think the simplest solution would be to keep all basis handling within the corresponding method's module rather than combining it all in one separate module. Especially since different methods might have quite different requirements and approaches.
We probably should compare how much code is shareable and how much is not.
My basex basis cropping is also more complicated. And currently, with the implementation of n ≠ nbf, I actually want to move from the n, nbf scheme to n, sigma, since it makes more sense to the user (can use the same kind of basis defined by the width sigma for any n and do not bother about calculating nbf, which is used only internally) and simplifies the cropping implementation.
Another question, related to https://github.com/PyAbel/PyAbel/issues/226#issuecomment-423258091: do we want separate caches for each "method" ("basex", "two_point", "onion_peeling", ...), such that they can coexist without evicting each other? Or the idea is to have caching only for repetitive calling of the same method (with constant parameters)?
I have implemented basex cache purging as basex_cleanup(), considering that there will be separate two_point_cleanup(), onion_peeling_cleanup() and so on, akin to basex_transform(), two_point_transform(), onion_peeling_transform... But if we want only one common cache, then it probably should be renamed to just cleanup(). And maybe even moved (with the cache variables) to transform.py?
moved (with the cache variables) to transform.py
That only works for abel.Transform(); direct calls to abel.method.method_transform() bypass transform.py.
I think each method should handle its own cache, and perhaps have abel.basis_purge(method) to purge unwanted basis variables.
@DanHickstein is normally the king of these types of decisions ;-)
Great discussion here!
I'm fine to have each method have its own cache.
I also like the idea of using your check of the consistency of cropped basis sets as a unit test. You can perhaps simply use very small images in order to make it quick.
In regards to @MikhailRyazanov's point about n, nbf, and sigma: shouldn't each saved basis set be labelled with all three labels? Of course, a certain n and sigma suggests a reasonable nbf, but can't any sigma be selected in principle?
The basis defined in the BASEX article has only one parameter σ, which defines the overall scaling. That is, the spacing between the maxima of ρk and ρk+1 is always σ. In principle, k in (14) can be non-integer, but then its projection (15) is not a finite expansion. So nbf is really uniquely determined by n and sigma (within maybe ±1, depending on its rounding when n/sigma is not an integer).
Also, as I understand, in all Dasch methods nbf = n always, so nbf there is also redundant. And linbasex basis needs more than 2 parameters. I don't think that a common naming scheme, except the {method}_basis_ prefix, can exist, and since now all methods will be handling their basis files internally, it would be natural to allow them to use their own naming conventions.
Completed with PR #232 . Closing.
Great work!
|
2025-04-01T06:37:27.664345
| 2020-12-20T10:45:17
|
771561280
|
{
"authors": [
"LER0ever",
"davidhewitt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2378",
"repo": "PyO3/pyo3",
"url": "https://github.com/PyO3/pyo3/issues/1330"
}
|
gharchive/issue
|
Did you use a virtualenv?: brew install, no.
Your Rust version (rustc --version): rustc 1.49.0-beta.4 (877c7cbe1 2020-12-10)
Your PyO3 version: 0.12
Have you tried using latest PyO3 master (replace version = "0.x.y" with git = "https://github.com/PyO3/pyo3")?: no
💥 Reproducing
This happens on my own project, but I can reproduce it with maturin's test-crate
install rust beta with rustup (stable does not yet have darwin aarch64 support)
install python3.9 aarch64 with native homebrew
1. Compile with System python 3.8
git clone https://github.com/PyO3/maturin.git
cd maturin/test-crates/pyo3-pure
cargo build
would produce the following error:
= note: Undefined symbols for architecture arm64:
"_PyExc_ValueError", referenced from:
_$LT$pyo3..exceptions..PyValueError$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h059c73bcf950bd88 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o)
"_PyExc_BaseException", referenced from:
_$LT$pyo3..exceptions..PyBaseException$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h82828fb355186050 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o)
"_PyList_Append", referenced from:
pyo3::types::list::PyList::append::_$u7b$$u7b$closure$u7d$$u7d$::h5630ce7dc966791a in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.4.rcgu.o)
"_PyExc_SystemError", referenced from:
_$LT$pyo3..exceptions..PySystemError$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h76492aefdb4928b0 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o)
"_PyList_New", referenced from:
pyo3::types::list::PyList::empty::h3e9f2e6039a3ef3a in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.4.rcgu.o)
"_PyErr_PrintEx", referenced from:
pyo3::err::PyErr::print::h12dca2eb6fa69d90 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.7.rcgu.o)
"_PyObject_SetAttr", referenced from:
...
...
The python3 binary is from Apple, in universal format with both x86_64 and arm64. file $(which python3) gives /usr/bin/python3: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e]
2. Compile with HomeBrew Python 3.9
basically the same process and same error
git clone https://github.com/PyO3/maturin.git
cd maturin/test-crates/pyo3-pure
PYO3_PYTHON=python3.9 cargo build
file $(which python3.9) output: /opt/homebrew/bin/python3.9: Mach-O 64-bit executable arm64
3. Compile with Rosetta under x86 works fine
As you're using macOS - you would need cargo rustc --release -- -C link-arg=-undefined -C link-arg=dynamic_lookup?
Or otherwise you could try building with maturin develop, which will include these flags for you.
@davidhewitt Thanks for the reply, that is indeed the problem.
I copied my cargo config ~/.cargo/config which contains the fix for x86_64:
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
Changing to [target.aarch64-apple-darwin] fixes the problem. Perhaps the documentation @ https://pyo3.rs/master/index.html#using-rust-from-python can be updated to include aarch64 for anyone else encountering this issue?
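i.e., the Apple Silicon entry becomes:

```toml
[target.aarch64-apple-darwin]
rustflags = [
    "-C", "link-arg=-undefined",
    "-C", "link-arg=dynamic_lookup",
]
```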
@davidhewitt Oh, and maturin develop currently does not work, because the platforms crate cannot correctly detect the triplet aarch64-apple-darwin.
$ maturin develop
💥 maturin failed
Caused by: Could guess the current platform
Ref: https://github.com/RustSec/platforms-crate/blob/master/src/platform.rs
Will open a separate issue to either maturin or platforms.
|
2025-04-01T06:37:27.667166
| 2020-06-05T18:49:10
|
631846307
|
{
"authors": [
"Alexander-N",
"kngwyu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2379",
"repo": "PyO3/pyo3",
"url": "https://github.com/PyO3/pyo3/pull/959"
}
|
gharchive/pull-request
|
User Guide: Rewrite parallelism chapter
I took a stab at rewriting the parallelism chapter because it seemed to suggest that you need to release the GIL in order to achieve parallelism within your Rust code and that allow_threads enables running of Python code in parallel. See #640 and #649 for discussion and thanks @Askannz for the nice example which I included.
Also included the previous PR #957 to make the benchmarks more comparable.
Closes #956
Hmm, it looks like we have a problem with the Travis integration.
https://travis-ci.org/github/PyO3/pyo3/builds/695173811
It looks like the tests are executed correctly, but GitHub fails to fetch the result.
Closed and reopened to retrigger CI.
Thanks!
|
2025-04-01T06:37:27.669605
| 2024-11-26T08:29:40
|
2693682620
|
{
"authors": [
"thomgeo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2380",
"repo": "PyPSA/linopy",
"url": "https://github.com/PyPSA/linopy/issues/385"
}
|
gharchive/issue
|
CPLEX: Warning, line 133208941: Name 'x10609872' does not exist.
Dear all,
I am getting several of these warnings when using linopy with PyPSA:
Warning, line 133208941: Name 'x10609872' does not exist.
It looks to me as if a constraint or variable that CPLEX expects is not available? The network is a modified pypsa-eur network, to which I added different system components.
Best,
Georg
Thanks Fabian! After your comment, I wanted to investigate whether this has to do with linopy/PyPSA releases that came after the latest PyPSA-Eur version (0.13), and indeed I managed to solve the issue now by downgrading PyPSA (and thereby linopy) to the lowest version that was mentioned in the environment.yaml.
Don't know if this is even worth following up on, but with the latest version I also got some mentions of repeating rows, which was usually an indicator that the run would fail (for example "Row 'c6164160' repeats.")
I will just leave it here, in case it might be helpful. But it might also just be a result of incompatible packages, so feel free to ignore it, of course :-)
|
2025-04-01T06:37:27.702774
| 2023-08-07T15:12:19
|
1839684295
|
{
"authors": [
"konbraphat51",
"wannaphong"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2381",
"repo": "PyThaiNLP/pythainlp",
"url": "https://github.com/PyThaiNLP/pythainlp/issues/831"
}
|
gharchive/issue
|
bug: romanize("ฤดู", "royin")
Description
These words occurred error when using romanize() by royin engine:
ฤดูใบไม้ผลิ
ฤดูร้อน
ฤดูหนาว
ฤดูใบไม้ร่วง
ฤดู
Expected results
no error
Current results
return select_romanize_engine(engine)(text)
File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 229, in romanize
romanized_words = [_romanize(word) for word in words]
File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 229, in
romanized_words = [_romanize(word) for word in words]
File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 213, in _romanize
word = _replace_consonants(word, consonants)
File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 197, in _replace_consonants
mod_chars.append(_CONSONANTS[consonants[j]][1])
IndexError: list index out of range
Steps to reproduce
romanize("ฤดู", "royin")
PyThaiNLP version
4.0.2
Python version
3.10.12
Operating system and version
windows11
More info
Runned by VSCode, conda environment
Possible solution
No response
Files
No response
@konbraphat51 Hello! This is not a bug: the "royin" engine of romanize() accepts only a single Thai syllable as input to produce correct results. If you want to input a whole word, you should use romanize(text, engine='thai2rom').
We will improve our document. Thank you for reporting!
Oh, got it. thank you!
|
2025-04-01T06:37:27.722526
| 2022-01-26T10:50:39
|
1114880674
|
{
"authors": [
"SkafteNicki",
"nishant42491",
"stancld"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2382",
"repo": "PyTorchLightning/metrics",
"url": "https://github.com/PyTorchLightning/metrics/pull/800"
}
|
gharchive/pull-request
|
new metric SCC for Images
What does this PR do?
Adds a new metric Spatial Correlation Coefficient for Images
part of #799
Before submitting
[x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)-Yes
[x] Did you read the contributor guideline, Pull Request section?-Yes
[ ] Did you make sure to update the docs?-No
[ ] Did you write any new necessary tests?-No
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
Hi @nishant42491, thanks for wanting to contribute.
However, I can only see that you have created a new file. Is that correct?
Yes, that's correct. Sorry for the delay: I was trying to understand how to calculate the SCC coefficient, as there are not many resources available online on the implementation of Laplacian filters. I am working with my college professors to understand more about the SCC metric; however, due to my college exams my progress has been slow. I will try to speed things up again. Really sorry for the delay.
Hi @nishant42491, do you have any updates here? :]
Yep, I have taken references from andrewkhalel's sewar repo containing information about the SCC metric, and have created an scc.py file with its implementation. However, I am not 100% sure that it's correct, as I still don't understand the SCC metric completely. I'll be more than happy to change anything that needs changing, and I would love any suggestions on how to proceed further. Thank you a lot for your patience with me; I really appreciate it :].
@SkafteNicki @stancld I have finished creating functional and class-based interfaces for the SCC metric, and they seem to be working properly. I have tested it against the sewar repo's SCC metric, and my metric gives the right outputs. However, I am stuck on implementing the test_scc.py file, as I don't know exactly what to write in the tests. I have tried using the sewar repo's SCC metric in the tests file, but it does not work with batched inputs, so I am not sure what to test my metric against. Any suggestions on how to move forward would really help me out. Thank you in advance for your help :]
Hi @nishant42491,
I tried to take a stab at finishing this PR. I have written tests and added some docs, but the tests fail with an assertion error when I try to compare against the implementation from sewar. Could you maybe help with getting this PR finished by providing at least an example of how to get this implementation to match the one from sewar?
I think the difference between the two metrics arises because the sewar package implements a high-pass filter whilst I have not; however, the difference did not seem significant whilst I was testing the metric.
@SkafteNicki should I try implementing a high pass filter for my metric too to try and make it pass the tests you have written?
@nishant42491 yes, we kind of need the two metrics to produce the same output, so we make sure that this implementation is doing the right thing.
Maybe the reason the tests are failing is that the input is random? did you test with some structured input?


@SkafteNicki I have received outputs 0.079 and 0.080 for 2 images from sewar's metric and my metric respectively.
Will try to implement the high-pass filter, which should solve the problem. @SkafteNicki Thanks for the help on the tests; I really appreciate it :]
@nishant42491 is it correct that the current differences in implementation are:
The input to sewar should be target, preds whereas your implementation takes the opposite order, preds, target (which is the same as the rest of our codebase)
The input to sewar is expected to be single images of shape [H, W, C] whereas your implementation takes in [B, C, H, W] (so the channel dimension should be moved when we compare)
Just trying to figure out why I cannot get the numbers to match (I know the high-pass filter is missing, but it should still be fairly close, as far as I understand).
Yep, those are the differences between the sewar implementation and my metric's implementation. For the inputs I have tested,
in both metrics the difference is slight, generally up to the 3rd decimal place.
One more thing: my default kernel size is 9, whilst sewar's kernel size is 8, so you have to set the sewar metric's kernel size to 9 explicitly to get similar results.
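For what it's worth, the high-pass filtering step discussed above can be sketched in plain NumPy. The 3x3 Laplacian-style kernel below is an illustrative assumption and may not match sewar's exact filter coefficients:

```python
import numpy as np

def high_pass_filter(img, kernel=None):
    """Apply a 2D high-pass filter by direct sliding-window correlation.

    The default 3x3 Laplacian-style kernel is an assumption for
    illustration; sewar's actual filter may use different coefficients.
    """
    if kernel is None:
        kernel = np.array([[-1.0, -1.0, -1.0],
                           [-1.0,  8.0, -1.0],
                           [-1.0, -1.0, -1.0]])
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A constant image has no high-frequency content, so the response is ~0
# (the kernel coefficients sum to zero).
flat = np.ones((8, 8))
print(np.allclose(high_pass_filter(flat), 0.0))  # True
```

Applying such a filter to both preds and target before correlating (and matching the window size, as noted above) is presumably what is needed to reproduce sewar's numbers.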
@nishant42491 I tried changing the implementation based on what you have told me but cannot get it to match. Below is the wrapped reference implementation from sewar that should do everything correctly:
sums over a batch of inputs
flips the input order
permutes the dimensions
changes the window size to 9
from sewar.full_ref import scc
def _reference_scc(preds, target, reduction):
val = 0.0
for p, t in zip(preds, target):
val += scc(t.permute(1, 2, 0).numpy(), p.permute(1, 2, 0).numpy(), ws=9)
val = val if reduction == "sum" else val / preds.shape[0]
return val
from torchmetrics.functional import spatial_correlation_coefficient
import torch
_ = torch.manual_seed(42)
BATCH_SIZE = 10
CHANNELS = 3
SIZE = 100
preds = torch.randint(0, 255, (BATCH_SIZE, CHANNELS, SIZE, SIZE)).float()
target = torch.randint(0, 255, (BATCH_SIZE, CHANNELS, SIZE, SIZE)).float()
print(spatial_correlation_coefficient(preds, target, reduction='sum').item())
print(_reference_scc(preds, target, reduction='sum'))
Can you find what I am doing wrong?
Your Implementation seems correct. I'm not quite sure why the outputs differ.
|
2025-04-01T06:37:27.763064
| 2021-11-08T15:39:28
|
1047600253
|
{
"authors": [
"SeanNaren",
"ananthsub",
"carmocca",
"daniellepintz",
"dlangerm",
"four4fish",
"justusschock",
"kaushikb11",
"t-vi",
"tchaton",
"williamFalcon",
"zippeurfou"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2383",
"repo": "PyTorchLightning/pytorch-lightning",
"url": "https://github.com/PyTorchLightning/pytorch-lightning/issues/10410"
}
|
gharchive/issue
|
[RFC] Future of gpus/ipus/tpu_cores with respect to devices
Proposed refactoring or deprecation
Currently we have two methods to specifying devices. Let's take GPUs for example:
The standard case that we've all grown used to and are mostly aware of.
trainer = Trainer(gpus=2)
Introduced in 1.5, tries to make the number of devices agnostic. This means if you specify accelerator='tpu' we automatically know to use 2 TPU cores.
trainer = Trainer(devices=2, accelerator='gpu')
Recently, it has come up in https://github.com/PyTorchLightning/pytorch-lightning/pull/10404#discussion_r744562512 that we may want to deprecate and prevent further device specific names from appearing in the Trainer (such as hpus).
Related conversation https://github.com/PyTorchLightning/pytorch-lightning/issues/9053#issuecomment-904239610
I see two options:
🚀 We keep both device specific arguments (gpus tpu_cores ipus for the Trainer) and devices
👀 We drop gpus tpu_cores ipus in the future and fully rely on devices. (Potentially this would likely be done in Lightning 2.0, instead of after 2 minor releases)
cc @kaushikb11 @justusschock @ananthsub @awaelchli
IMO we should follow the contributing guidelines: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md#main-core-value-one-less-thing-to-remember
Having multiple options in the public API to do the same thing is really confusing.
+1, totally agree
The current device-related flags are confusing. Multiple flags partially overlap and interfere with each other. When multiple flags are passed in, we prioritize some and ignore the others.
For example:
gpus=2, devices=3: devices will be ignored.
gpus=2, cpu=2, accelerator=cpu: what will happen? I think cpu with num_processes=2?
I prefer option 2: drop gpus, tpu_cores, ipus in the future and fully rely on devices.
And can we have devices be an int, not set to auto?
With this option, the accelerator flag covers the device type and devices (probably rename to devices_num?) the device count. It's also scalable for new device types like hpus.
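The precedence problem described above can be made concrete with a small resolver sketch. This is a hypothetical illustration of the kind of implicit priority being criticised, not Lightning's actual accelerator_connector logic:

```python
def resolve_devices(devices=None, gpus=None, tpu_cores=None, accelerator="auto",
                    cuda_available=False, tpu_available=False):
    """Hypothetical flag resolution: device-specific flags silently win
    over the generic `devices` argument, which is exactly the kind of
    surprising precedence the thread argues against.
    """
    if gpus is not None:          # gpus=2, devices=3 -> devices is ignored
        return ("gpu", gpus)
    if tpu_cores is not None:
        return ("tpu", tpu_cores)
    if accelerator == "auto":     # fall back on hardware detection
        if tpu_available:
            accelerator = "tpu"
        elif cuda_available:
            accelerator = "gpu"
        else:
            accelerator = "cpu"
    return (accelerator, devices if devices is not None else 1)

print(resolve_devices(devices=3, gpus=2))   # ('gpu', 2) -- devices ignored
print(resolve_devices(devices=2))           # ('cpu', 2) on a CPU-only box
```

With a single `devices` plus `accelerator` pair, the first branch (and the silent ignoring it causes) disappears entirely.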
I think going from this:
Trainer(gpus=2)
to
trainer = Trainer(devices=2, accelerator='gpu')
is a major step backwards in usability. now users have to dig into docs to understand how to use things. it definitely violates the "one-less-thing-to-remember" part of the API.
I guess, I'm just wondering why we're exploring this? I thought we were already pretty stable on the device API stuff
@williamFalcon The more kinds of accelerators we get, the more flags we will also have. Switching from Trainer(gpus=8) to Trainer(tpu_cores=8) also requires users to dig through the docs. Actually I find it easier to have Trainer(devices=2, accelerator='gpu'/'tpu') as the flags stay the same, it is easier to remember and also scaling better. So personally this would be the "one-less-thing-to-remember" for me.
Also I suspect, we would likely have the accelerator defaulting to 'auto' then which means that Trainer(devices=8) would run on gpu if available, on tpu if available and if no special accelerator is available it would fall back to cpu.
@williamFalcon As @justusschock shared, the previous approach doesn't scale well and makes exploration confusing.
Furthermore, the new API provides an auto as follows:
Trainer(devices="auto", accelerator="auto")
which would make the code runnable on every hardware without any code changes. Which isn't possible with the previous API.
To address the discoverability issue, isn't it common to import the Trainer and see what parameters are available? Isn't this more common than going to the docs to find the parameter?
I opened the issue as I felt it was important that, as a community, we come to an agreement, since the idea was floating around a few PRs (with inconsistent agreement). It's important to have one single direction here (especially as we introduce other accelerators). I do strongly disagree with removing gpus/tpu_cores/ipus/hpus/cpus from the Trainer, primarily for ease/discoverability reasons.
I think it would be beneficial to try get community votes on this, so maybe a post on our General slack channel is warranted?
Even something like gpus as lightning defines them today is ambiguous. PyTorch also supports AMD GPUs: https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/
But this isn't supported at all with Lightning because gpu assumes nvidia/cuda, whereas PyTorch's device allows for more backends.
In my head the accelerator in Lightning maps to the pytorch device being used. So why don't we preserve the same semantics PyTorch already offers for this to smoothen the transition and keep parity?
Adding to the conversation. In this issue, a user is requesting MIG instead of GPUs for A100 like a machine: https://github.com/PyTorchLightning/pytorch-lightning/issues/10529
@tchaton do we have a consensus to move forward with this issue?
Maybe a thought that is a bit different here.
Going back to @williamFalcon argument that:
Trainer(gpus=2) to Trainer(devices=2, accelerator='gpu') is more work for the user.
My question here is that, as far as I understand (correct me if I am wrong), you have CPU and then either gpu/tpu/hpu/...
That is why you can have something as follows:
Trainer(devices=2, accelerator='gpu') where accelerator is only one combination.
I also assume that our users "don't" want to care whether it is gpu/tpu/hpu... as their code will remain the same as long as it is not CPU, and even if it is CPU, PL helps make it seamless.
Finally, we can automatically detect what kind of accelerator you have available today with auto.
That being said, what if we had a "wrapper" around anything that is non-CPU, such that we can keep the same structure while making it "easy" for the users?
i.e. Trainer(cpus=2, xpus=2): this will automatically find whether x is gpu/tpu/hpu.
Then we allow the default to be auto, i.e. Trainer(cpus=null, xpus=null), or we could use -1 for example.
To take the pro-Accelerator argument to the extreme (also with the "fractional" devices), how about not splitting devices= and accelerator=?
If instantiating Accelerator all the time is too much of a hassle for @williamFalcon 's taste (I never liked the configuration part of tf sessions, either, and there is a good reason why PyTorch doesn't force you to do device = torch.Device("cuda") all over the thing but will just take "cuda"), how about:
Trainer(devices=2) # I want two of whatever is available (so GPUs > CPUs in preference, but only the same kind).
Occasions where "casual users" will have TPU GPU and IPU in the same box will be rare enough...
This is breaking because it would make "GPU if available" the default :( (though I never understood why it is not).
For more elaborate configs, one could have
Trainer(devices=Accelerator("cuda", 2))
My apologies for adding another color of shed, but to my mind, there are these cases we want to cater to:
The easy one! Needing to instantiate Accelerator is a bit more API for people to remember than just gpus=.... Personally, I have to concentrate really hard to know how many c and l to put in there, too.
The turbo power-user: Would it not be more consistent and flexible to have Accelerator as the single source of truth about what their thing trains on? I certainly like to consider all my clusters of 512 DGXes for training BERT in 30 seconds a single device...
The unknown future. I think we'll see a lot more blur to "thing the training runs on = n devices of type a" that the proposed API of devices=2, accelerator=... suggests.
Best regards
Thomas
To add to @t-vi comment,
I believe the accelerator could be set to 'auto' by default as it is quite unlikely there is an overlapping machine with both GPUs and TPUs available.
So the hardware is totally abstracted and this provides an experience closer to Jax with their auto platform detection
Trainer(gpus=8) or Trainer(tpu_cores=8) or Trainer(cpu=8) or Trainer(hpus=8) or Trainer(ipus=8) ...
would be replaced directly with:
Trainer(devices=2)
If a user has a machine with a GPU and wants to debug on CPU, he would simply add the accelerator to force the decision-making.
Trainer(devices=2, accelerator="cpu")
But the most critical point is:
I think we'll see a lot more blur to "thing the training runs on = n devices of type a" that the proposed API of devices=2, accelerator=... suggests.
I believe this API would need to provide a smarter hardware detection mechanism for MIG hardware.
Coming from both the High-Performance and embedded spaces, I'll weigh in here with some general thoughts.
Often with large clusters, we have models and/or datasets which can't fit on a single node. If the API says gpus=2 or tpus=2, what control do I have over where those devices are or which devices get used for which parts of the model? Should PTL support this type of deployment at all?
There are certain accelerators I might want to use which are only useful for inference but not training. FPGAs, for example, are really great for low-latency inference, but with the above API, do I have to instantiate a "Trainer" to use a device for inference? This makes little sense to me as a user. Is this something that PTL wants to support? If so, a rework is in order.
There is research being done on heterogeneous architectures which have GPUs, DSPs, etc. available on a single node. The distribution of work and communication between these devices is non-trivial and a scheduling nightmare, but it's not too far off. Virtualized communication technology like CXL and composable infrastructure like Liquid will enable these types of nodes to "exist" in a cloud or on-prem cluster. I think PTL should be forward thinking and have these types of setups in mind, especially if it is to be adopted by the research community as a usable tool.
A "device" as we think of it today (a GPU, a CPU) will likely be upended when in-memory processors come to the mainstream. (What is a "memory" device? How many "cores" does it have? It quickly loses any meaning). What about the Xilinx Versal architecture? It has many compute cores in a dynamic software-defined network fabric connected to an FPGA. It's one "device", but it's also many.
To the above points, I have a couple of suggestions:
The trainer should be agnostic to what it is executing on. It should be the object facilitating and orchestrating the training session (it is a trainer after all), but it shouldn't care what device is on the other end. If it does have knowledge of device-specifics, then as many of the users above pointed out, the API and argument count/complexity will explode if even just a few accelerators become mainstream and anything but basic training strategies are to be supported.
We should have an Accelerator API describing a device, its location, and its features. The average user shouldn't have to use this API at all, or even know it exists, and sane defaults should be set. However, it should be flexible enough to be used for cutting-edge device research. Where this accelerator API fits in the ecosystem is going to need to be decided by the community, but it shouldn't be passed to the trainer because if I have a device which is only for inference acceleration, then it makes no sense to create a trainer.
I am very interested to see where this discussion goes, and I apologize for the ramble.
This discussion has extended to other related points, but to give my opinion the original question, I fully agree with @tchaton's API vision here: https://github.com/PyTorchLightning/pytorch-lightning/issues/10410#issuecomment-972712672.
Where the original gpus, tpus, ... are deprecated and removed.
I don't think adding new options xpus=..., or devices=Accelerator("cuda", 2) should be in the cards anymore, as the new devices=..., format was just introduced in 1.5 and we would be once again deprecating this newly introduced functionality for a different thing. There's no clear winner here and we just need to choose one approach.
do I have to instantiate a "Trainer" to use a device for inference
it shouldn't be passed to the trainer because if I have a device which is only for inference acceleration, then it makes no sense to create a trainer.
Keep in mind that the Trainer has that name since it's been the core part of Lightning since the beginning, but it's way more than a "trainer" and could be thought of as an engine; for example, we have validate, test, and predict, which are split from the training procedure
A lot of great inputs! Let me start off by summarizing:
The current API was built when only GPUs were supported. Then TPUs were added. And now a few years later, we live in a world where more alternatives are starting to emerge. This is the current API.
Trainer(gpus=2)
Trainer(tpus=2)
But now, we live in a world where more than GPU|TPU devices are coming out (HPU, etc...). In this case, the proposal is to modify the API like so:
Trainer(devices=2, accelerator='tpu')
Well... we also introduced the 'auto' flag, so the actual default call would look like this:
Trainer(devices=2)
# because Trainer(devices=2, accelerator='auto') is the default
@t-vi also brought up the alternative that there could be a class in the event that configs get unwieldy
Trainer(accelerator=Accelerator("cuda", 2))
@dlangerm also brought up that in certain complex scenarios:
Multinode training (which we've supported from day 1 @dlangerm, and you specify the num_nodes argument). Today we already support selecting many configurations here, so I'm not sure what a relevant use case is.
Yes, PL already supports inference. We can think about configurations during inference needing a "Trainer" (@tchaton @carmocca maybe "Trainer" needs to be renamed in 2.0)
Heterogenous hardware is an awesome upcoming research challenge that we'll be excited to tackle next year. But today, it's premature until research matures a bit more.
In-memory processors also sound promising. If you know of a real use case, happy to collaborate on working out how to do something like that.
@dlangerm we do have an accelerator API (it's been there since 1.0)... it's just used internally and not exposed to the user.
Decision
So, with all that said, if there's broader community support for moving from:
Trainer(gpus=2)
to:
Trainer(devices=2, accelerator='tpu')
# default is auto
Trainer(devices=2)
Then I'm happy to back this option as it is more scalable and my only concern is "having to only remember one thing"...
So, I'd love to hear more from the community about the effect on usability.
if there are no major qualms about this and everyone's excited, let's roll it out for 2.0
cc @tchaton @carmocca @ananthsub @daniellepintz
I have a question about this; if we want to roll this out for 2.0 when can we start working on it? Could we start working on it now for example?
@tchaton @awaelchli @ananthsub What's you guys' thought on when will be the right time for 2.0? Should Accelerator Refactor and stable accelerator be part of the 2.0?
I think it's better to have big changes at once. I would prefer having the stable accelerator version and the flags addressed in the same release. It's easier to communicate with users and reduces future confusion.
Hey @daniellepintz @four4fish.
Yes, I agree with you both. I don't believe this change requires a Lightning 2.0 as this is a natural evolution of Lightning becoming hardware-agnostic directly at the Trainer level.
IMO, I would like to action this change for v1.6. @ananthsub @awaelchli @carmocca @kaushikb11 @justusschock Any thoughts on this ?
Agreed. We could go ahead with this change for v1.6, along with major Accelerator refactor.
Given the above decisions, is there a consensus on renaming Trainer to something more appropriate for the "Brain" or "Engine" that it has become?
If these changes are towards a hardware-agnostic API that can be used for either training or inference, Trainer will become very confusing. Even today, creating a Trainer instance to perform Trainer.predict is fairly unintuitive.
@dlangerm this is not really related to this issue, so I won't go into much detail here. Feel free to open a new issue for this discussion.
From my POV, we shouldn't rename the Trainer before 2.0.
Renaming flags is one thing (and we will need to have a pretty long deprecation cycle for those), but renaming major components such as the Trainer or LightningModule would be too much of a breaking change, since this could also break the API in all other places as well.
Hey @kaushikb11 I saw you assigned yourself to this issue. I was planning on working on the accelerator_connector refactor (https://github.com/PyTorchLightning/pytorch-lightning/issues/10422) which was blocked by this issue. Am I okay to proceed with accelerator_connector refactor or is that something you were planning on doing?
I am working on this in #11040 - do we also want to deprecate num_processes and num_nodes?
I think num_processes yes, but num_nodes we might still need in case of multi-node training
Got it, thanks!
|
2025-04-01T06:37:27.771261
| 2020-10-03T05:59:48
|
714014379
|
{
"authors": [
"SkafteNicki",
"aligholami",
"ananthsub",
"nathanpainchaud",
"tchaton",
"williamFalcon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2384",
"repo": "PyTorchLightning/pytorch-lightning",
"url": "https://github.com/PyTorchLightning/pytorch-lightning/issues/3813"
}
|
gharchive/issue
|
Calling module.log(...) within a callback fails
🐛 Bug
Calling pl_module.log(...) within a Callback fails, even though this is recommended by the documentation here: https://pytorch-lightning.readthedocs.io/en/latest/loggers.html#logging-from-a-callback
Error
File "my_callback_file.py", line XX, in on_validation_epoch_end
pl_module.log_dict(my_metrics_dict)
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 287, in log_dict
self.log(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 233, in log
self._results.log(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 171, in log
self.__set_meta(
File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 217, in __set_meta
_internal = self['meta']['_internal']
KeyError: '_internal'
python-BaseException
cc @nathanpainchaud
This is happening on master
Expected behavior
We can log from callbacks using the lightning module
Environment
Happening on PyTorch Lightning master
@ananthsub I just tried on master and cannot reproduce (I think it was solved yesterday, as I could reproduce 2 days ago).
@SkafteNicki The issue was created by @ananthsub based on a question/issue I initially raised in the slack. I'll try to see if the bug is now resolved on master for me (it was still present as of yesterday afternoon) and I'll update you here as soon as I can.
Hey @nathanpainchaud,
Did you manage to reliably reproduce this behaviour? And if yes, could you share the draft PR associated with this issue?
I will try to try it out too.
Best regards,
Thomas Chaton.
Hey @tchaton,
Thanks for the follow up! I opened the draft PR where I added a test that reproduces the behavior I'm getting.
If I can help in any other way to get this sorted, just let me know!
@tchaton Any updates on whether this is a feature that's planned to be supported, or on the contrary has been abandoned? I'm only asking because the issue has been labelled priority for a while, but every PR referring to it has been closed without getting merged :laughing:
Yes! That's been worked on these past weeks. Starting to be merged now (https://github.com/PyTorchLightning/pytorch-lightning/pull/4439)
I still have problems logging my validation metrics using pl_module.log() in the on_validation_end() hook. Any thoughts?
|
2025-04-01T06:37:27.773601
| 2020-07-30T13:11:24
|
668706788
|
{
"authors": [
"Borda",
"tgaddair"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2385",
"repo": "PyTorchLightning/pytorch-lightning",
"url": "https://github.com/PyTorchLightning/pytorch-lightning/pull/2764"
}
|
gharchive/pull-request
|
Horovod & py3.8
What does this PR do?
resolving Horovod as discussed in #2745
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
LGTM! Thanks @Borda!
Surprisingly simple. LGTM 😄
Yeah, we shall keep PRs as simple as possible so the review is quick...
|
2025-04-01T06:37:27.780470
| 2024-02-19T18:41:09
|
2142959892
|
{
"authors": [
"N-Wouda",
"leonlan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2386",
"repo": "PyVRP/PyVRP",
"url": "https://github.com/PyVRP/PyVRP/issues/478"
}
|
gharchive/issue
|
Stable version as default in docs
Is your feature request related to a problem? Please describe
Most users install PyVRP through PyPI and so they use the stable version. However, www.pyvrp.org shows the documentation for the latest development version by default, possibly confusing users when they find features in the docs that aren't released yet.
Describe the solution you'd like
www.pyvrp.org should default to the stable version of PyVRP.
www.pyvrp.org/dev should default to the latest development version of PyVRP.
I'll pick this up. We're getting many questions about stuff not working because of this 😅
We also have the "older" docs hosted in this repository: https://github.com/PyVRP/PyVRP.github.io. Those are available at https://pyvrp.github.io/v0.7.0/, and so one. It'd be nice if we can somehow merge all that together so that:
pyvrp.org -> latest stable
pyvrp.org/dev -> latest build on main
pyvrp.org/v<version> -> docs associated with version <version>
I don't really know how to do this. I do know statsmodels has things set up exactly this way. Maybe we can borrow some of their setup for our own purposes?
I'm assigning this for 0.9.0, since we really ought to solve this ASAP.
I'm looking into how statsmodels does it for the next hour and will make notes in this comment.
Alright I had hoped to work on this today, but the devcontainer stuff took a little longer and now it's getting late. There's always tomorrow :).
I don't want this to block a release of 0.9.0, but I'll try to finish it in the coming week.
Possibly interesting: https://github.com/jimporter/mike?tab=readme-ov-file
I feel like I keep postponing this, but at some point we really ought to pick this up 😆
|
2025-04-01T06:37:27.848172
| 2020-03-31T18:47:34
|
591338120
|
{
"authors": [
"leifliddy",
"waylan"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2387",
"repo": "Python-Markdown/markdown",
"url": "https://github.com/Python-Markdown/markdown/issues/929"
}
|
gharchive/issue
|
can't install version 3.2.x with python2
I'm not able to install version 3.2.1, due to importlib.util not being found.
I'm not sure exactly which module I need to install to meet this dependency.
I didn't experience this issue with version 3.1.x
[root@black markdown-3.2.1]# python2.7 setup.py
Traceback (most recent call last):
File "setup.py", line 39, in <module>
__version__, __version_info__ = get_version()
File "setup.py", line 32, in get_version
import importlib.util
ImportError: No module named util
As noted in the release notes, we dropped support for Python 2.7 in version 3.2. You have two options:
Update to Python 3.5 or later.
Continue to use Python-Markdown 3.1 with Python 2.7.
I suppose we could test and add an error message to the setup.py script. We did include an error message for those who have a develop install of the package. And pip will refuse to install 3.2 on Python 2.7 due to the meta-data mismatch.
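Such a guard could look roughly like this (a sketch, not the actual Python-Markdown setup.py):

```python
import sys

MIN_PYTHON = (3, 5)  # Python-Markdown 3.2 dropped Python 2.7 support

def check_python_version(version_info=None):
    """Exit with a readable message instead of an opaque ImportError
    when setup.py runs on an unsupported interpreter."""
    if version_info is None:
        version_info = sys.version_info
    if tuple(version_info[:2]) < MIN_PYTHON:
        raise SystemExit(
            "Python-Markdown 3.2 requires Python %d.%d or later. "
            "Pin 'Markdown<3.2' to keep using Python 2.7." % MIN_PYTHON
        )

check_python_version()  # run before any py3-only imports such as importlib.util
```

The key point is that the check must run before `import importlib.util`, so the user sees the intended message rather than the traceback above.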
Thanks for pointing that out. I'll continue using Python-Markdown 3.1 for now. It looks like transitioning to Python3 is going to be inevitable at some point, which is ultimately a good thing.
|
2025-04-01T06:37:27.851720
| 2024-04-01T22:35:26
|
2219196376
|
{
"authors": [
"ItayTheDar",
"copdips"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2388",
"repo": "PythonNest/PyNest",
"url": "https://github.com/PythonNest/PyNest/issues/50"
}
|
gharchive/issue
|
[Demo requested] Full example on one-to-many and many-to-many relationships
Hello,
I like pretty much the idea and the layout of PyNest.
Could you please provide a demo project that has users and products modules, with many-to-many and one-to-many relationships between them? Maybe two demos for the two relationships would be better.
And how user interacts with product ? by using services or something else ?
I asked for this because it seems that the examples given in the repo only show cases where user and product are independent.
Hello! Thank you for your words.
Basically, you were correct with your assumption that user and product will talk with each other using services. That way, the product service can inject the user service and perform query operations on the user assets. Nest is really about organising your code in small and decoupled Lego pieces that help you build a robust application.
I have a few examples I can share and I will upload them ASAP to the PythonNest organisation.
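As a rough illustration of the service-injection pattern described above (plain Python with hypothetical names, not PyNest's actual decorator API):

```python
class UserService:
    def __init__(self):
        self._users = {1: {"name": "alice", "product_ids": []}}

    def get_user(self, user_id):
        return self._users[user_id]

    def attach_product(self, user_id, product_id):
        self._users[user_id]["product_ids"].append(product_id)


class ProductService:
    # The user service is injected so product logic can query/update users
    # without owning their storage -- the decoupling described above.
    def __init__(self, user_service):
        self._user_service = user_service
        self._products = {}

    def create_for_user(self, user_id, product_id, name):
        self._products[product_id] = {"name": name, "owner": user_id}
        self._user_service.attach_product(user_id, product_id)
        return self._products[product_id]


users = UserService()
products = ProductService(users)  # constructor injection
products.create_for_user(1, 10, "widget")
print(users.get_user(1)["product_ids"])  # [10]
```

In a real PyNest project the framework wires the injection for you; the point here is only that the product module reaches user data through the user service, never directly.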
Hello @copdips
I have an awesome news for you!
I've just made public one of my PyNest projects, an application for managing stocks that demonstrates the powerful architecture of NestJS in Python.
This is the link to the project - https://github.com/ItayTheDar/Stockify
Thanks again for bringing this issue to our community!
awesome, thanks
|
2025-04-01T06:37:27.856786
| 2021-05-18T09:16:28
|
894163667
|
{
"authors": [
"RasmusBC59",
"astafan8",
"jenshnielsen"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2389",
"repo": "QCoDeS/Qcodes",
"url": "https://github.com/QCoDeS/Qcodes/pull/3023"
}
|
gharchive/pull-request
|
Keithley 2600 4probe
This PR adds a new mode to the doFastSweep function of the Keithley 26xx driver.
The new mode allows for 4-probe measurements.
@jenshnielsen
Agree with @astafan8 that we should not promote the Measurement but use the "new" dataset in the examples if possible. Otherwise this looks good
@astafan8 I have updated the notebook to use fastsweep in combination with do0d. We could also make a new function (do_fast_sweep) in the driver wrapping do0d, so performing the fastsweep stays a one-liner?
I have updated the notebook to use fastsweep in combination with do0d
perfect!!! thank you!
We could also make a new function (do_fast_sweep) in the driver wrapping do0d, so performing the fastsweep stays a one-liner?
no, that would be mixing concerns for negligible benefit -- do*d is already a one-liner-ish :) and the fact that doFastSweep exists as a method is just historical and shouldn't have been added in the first place.
|
2025-04-01T06:37:27.864293
| 2023-02-22T12:47:01
|
1595046565
|
{
"authors": [
"jakegrigsby",
"santoshatchi",
"steve3nto"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2390",
"repo": "QData/spacetimeformer",
"url": "https://github.com/QData/spacetimeformer/issues/66"
}
|
gharchive/issue
|
Average of different prediction horizons as a metric?
Hello Authors,
Could you please clarify the usage of the average of different prediction horizons as a benchmarking metric? Why was it used, and how to justify the validity of this?
I am doing a similar project and trying to report values at different horizons. My model is not getting values close to those reported in SOTA (top 5) models like yours. Could you please help with the intuition on reporting the average rather than individual horizons?
Thanks
Santosh
IIRC that's a convention inherited from Informer and the follow-up works to it that have come out since this repo's initial release and before its more recent versions. The accuracy of individual timesteps into the future can be arbitrary and hard to interpret. 1-step predictions are too easy, but distant predictions can be very difficult given a fixed-length context window which may be too short. In highly periodic domains some distant horizons can also be easy (such as 24 hours ahead in a dataset with clear daily periodicity like weather forecasting). So reporting every horizon's metric takes a lot of explaining and large tables, and can be misleading. Averaging gives a better sense of the model's performance over the entire duration we care about.
At a few points during this project I hacked together logging metrics for accuracy at each individual timestep as a sanity-check. In my experience you can expect a roughly linearly increasing error as you predict further into the future.
As far as replicating the results on these datasets in your own project, double check that you aren't counting missing datapoints in the metrics. This can make a huge difference and is something a lot of the literature (and early versions of this codebase) get wrong.
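The two practices above (averaging per-horizon error into one number, and masking missing datapoints so they don't distort the metric) can be sketched as follows; this is a minimal illustration, not the repo's actual logging code:

```python
def horizon_mae(y_true, y_pred):
    """Per-horizon and averaged MAE over a batch of forecasts.

    y_true, y_pred: lists of equal-length sequences (batch, horizon).
    Missing targets are given as None and masked out so they do not
    distort the metric (the common pitfall noted above). Assumes each
    horizon step has at least one observed target.
    """
    horizon = len(y_true[0])
    per_horizon = []
    for h in range(horizon):
        errs = [abs(p[h] - t[h]) for t, p in zip(y_true, y_pred) if t[h] is not None]
        per_horizon.append(sum(errs) / len(errs))
    # One averaged number, Informer-style; per-horizon values typically
    # grow roughly linearly with distance into the future.
    return per_horizon, sum(per_horizon) / horizon

per_h, avg = horizon_mae(
    [[1.0, 2.0, None], [0.0, 1.0, 2.0]],   # None = missing datapoint
    [[1.5, 2.0, 9.9], [0.0, 1.0, 3.0]],
)
# per_h == [0.25, 0.0, 1.0]; avg == 1.25 / 3
```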
I agree with Jake, averaging over the whole prediction horizon makes sense in order to compare single numbers as a metric.
It is a pity though that different benchmarks use different metrics.
For example, check here for PEMS-Bay:
https://paperswithcode.com/sota/traffic-prediction-on-pems-bay
They report RMSE (I guess this is averaged over the whole horizon)
and MAE @ 12 step (this is for a single prediction 12 steps into the future)
It would be good to have more standardized metrics.
In the paper there is no RMSE for PEMS-Bay. There is MAE, MSE and MAPE, but unfortunately PapersWithCode does not report those.
This is not a question, just a comment, sorry for the spam! 😁
Yeah the traffic datasets / literature is the main example where reporting multiple horizons is the default. The longest horizons are 12 timesteps so this can be feasible. Once you get longer than that it stops making sense to report arbitrary intervals in tables in my opinion. It would be interesting if the convention for reporting forecasting results was a plot of error over forecast duration for each dataset. That wasn't necessary at the time (2021) but I think this is probably what I would do if I were to redo this project today...
|
2025-04-01T06:37:27.900220
| 2019-07-14T13:09:01
|
467834540
|
{
"authors": [
"sayanarijit"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2391",
"repo": "QQuick/Transcrypt",
"url": "https://github.com/QQuick/Transcrypt/issues/653"
}
|
gharchive/issue
|
Maybe a @compile decorator?
I would like to discuss the possibility of adding something like a @compile decorator that compiles the Python functions/classes when the Python module is imported and reuses the compiled JS. So only the load time will be affected, not the execution time.
This way we don't have to worry about manually compiling Python into JS.
If you like this suggestion, I can work on it.
Code example:
# events.py
@compile
def show_alert(e):
e.preventDefault()
alert("boom!")
# index.py
from htmldoom import render, elements as e
from events import show_alert
render(e.a(onclick=show_alert)("Click me"))
# <a onclick="(e) => {e.preventDefault(), alert('boom!')}">Click me</a>
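A minimal sketch of the compile-once-at-import behaviour such a decorator could have; compile_to_js is a stand-in (Transcrypt exposes no such runtime API), purely to illustrate the caching idea:

```python
def compile_to_js(fn):
    # Stand-in: a real implementation would invoke the Transcrypt
    # compiler and return the generated JavaScript for `fn`.
    return f"function {fn.__name__}() {{ /* compiled */ }}"

def compile(fn):  # shadows the builtin `compile`; fine for a sketch
    """Compile once at import time and cache the JS on the function."""
    fn.js = compile_to_js(fn)
    return fn

@compile
def show_alert(e):
    ...

# show_alert.js now holds the compiled-JS string, reusable at render time
```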
Update:
Whether this is even possible, I'm not sure.
Closing this due to lack of interest.
|
2025-04-01T06:37:28.016949
| 2022-12-12T17:38:43
|
1492405670
|
{
"authors": [
"md-yaseeny",
"sinhasaurabh3104"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2392",
"repo": "QasimWani/LeetHub",
"url": "https://github.com/QasimWani/LeetHub/issues/448"
}
|
gharchive/issue
|
leethub not working at all for me. i have solved more than 20 questions but not a single question pushed to my github
leethub not working for me
I have solved more than 20 questions since I added the LeetHub extension. I don't know why, but not a single solution was pushed to GitHub.
Switch to the older version of LeetCode.
There will be an option, Revert to older version, in the drop-down menu.
|
2025-04-01T06:37:28.023784
| 2021-09-22T15:35:11
|
1004441869
|
{
"authors": [
"leepengcheng",
"ryantd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2393",
"repo": "Qihoo360/dgl-operator",
"url": "https://github.com/Qihoo360/dgl-operator/issues/21"
}
|
gharchive/issue
|
how to use distributed partitioning
I changed partitionMode to ParMETIS, but it seems to have no effect.
I changed partitionMode to ParMETIS, but it seems to have no effect.
@leepengcheng ParMETIS still in development, will be published before November as expected.
|
2025-04-01T06:37:28.025895
| 2024-10-25T18:06:03
|
2614805239
|
{
"authors": [
"abbycross",
"beckykd"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2394",
"repo": "Qiskit/documentation",
"url": "https://github.com/Qiskit/documentation/pull/2185"
}
|
gharchive/pull-request
|
IAM topics
Adding some of the IAM-related topics. There will be a lot of broken links and other issues until we solidify what all topics are being added and where.
Cleanup and move to another repo
|
2025-04-01T06:37:28.027568
| 2021-12-01T17:09:20
|
1068671029
|
{
"authors": [
"CLAassistant",
"catornow"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2395",
"repo": "Qiskit/qiskit-experiments",
"url": "https://github.com/Qiskit/qiskit-experiments/pull/550"
}
|
gharchive/pull-request
|
Small fix in DragCalAnalysis model descriptions
Summary
This PR adds a missing freq parameter to the DragCalAnalysis model descriptions and to the formulas in the docstring.
Details and comments
See code.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T06:37:28.030613
| 2022-02-28T20:20:00
|
1154511695
|
{
"authors": [
"coveralls",
"rathishcholarajan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2396",
"repo": "Qiskit/qiskit-ibm-runtime",
"url": "https://github.com/Qiskit/qiskit-ibm-runtime/pull/171"
}
|
gharchive/pull-request
|
Allow to trigger workflow that publishes the documentation manually
Summary
Details and comments
Fixes #
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 56.778%
Totals: Change from base Build<PHONE_NUMBER>: 0.0%; Covered Lines: 2044; Relevant Lines: 3600
💛 - Coveralls
|
2025-04-01T06:37:28.037711
| 2021-09-09T20:49:00
|
992618669
|
{
"authors": [
"CLAassistant",
"adekusar-drl",
"attp",
"rsln-s",
"woodsp-ibm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2397",
"repo": "Qiskit/qiskit-machine-learning",
"url": "https://github.com/Qiskit/qiskit-machine-learning/pull/209"
}
|
gharchive/pull-request
|
Batching of circuits to overcome memory issues when using statevector simulator
Summary
Currently the batch_size parameter is ignored if the statevector simulator is used. This leads to unreasonable memory use due to the size of the transpiled circuits, up to 1 TB of RAM for an 800-by-800 kernel matrix and 20 qubits (see qiskit-terra issue #6991). This pull request fixes this by transpiling and simulating circuits in batches, never storing all 800 circuits at once. The modification uses the batch_size parameter that is already used in the non-statevector case.
Details and comments
I had success by setting batch_size=50 (memory footprint down to <20 GB).
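The fix amounts to processing circuits in fixed-size chunks instead of materializing all of them at once; a minimal sketch of that chunking pattern (illustrative only, not the kernel's actual code):

```python
def batched(items, batch_size):
    """Yield successive fixed-size batches, so only one batch of
    (transpiled) circuits is ever held in memory at a time."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# e.g. kernel-matrix circuits processed batch_size at a time
batches = list(batched(list(range(7)), 3))
# batches == [[0, 1, 2], [3, 4, 5], [6]]
```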
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
@rsln-s Thanks a lot for your contribution. Could you please add a reno file with a bug fix description? Also, if possible add a test to cover this fix.
@attp could you please take a look at the changes?
Added reno file. The correctness of the output of the code is checked by tests already in place; testing the memory usage may be complicated within constraints of gh-actions.
The spell checker also fails on this transpiled word from the release note. If you add it to the .pylintdict file in the root, our custom dictionary, then since it's correctly spelled it should pass CI. (You will see the words are in lowercase in alphabetical order, so just add it appropriately.)
Updated the .pylintdict dictionary
@woodsp-ibm Do you have any comments?
Can I ask you to make a minor change to the docstring for batch_size in the constructor. It says
batch_size: Number of circuits to batch together for computation. Default 1000.
specifically default 1000 yet the code has
batch_size: int = 900,
So it should state the default is 900. I think it may have been 1000 at one point but was changed, if I recall correctly, to 900 to fit more with the limits around the provider.
In terms of testing, you say it's covered by the current unit tests. From what I can see there is nothing explicitly testing that aspect. Of course batch_size defaults to 900 and is in the main path. I am not sure anything we do in test is ever affected by the batch size since, from what I can see, the tests are pretty small. Having said that, if we had a test that dropped the batch size down, I am not sure how to test it given its behavior is internal to evaluate - the only way that comes to mind is hooking the quantum instance execute and checking the number of circuits is as expected along the way, in addition to the final result being as expected.
Looks good to me.
Regarding the batch_size docstring, you are correct @woodsp-ibm the default was originally 1000, and we changed it to 900 to match the backend limits and reduce the number of jobs sent. I must have missed updating the docstring in the PR.
Updated the docstring as requested by @woodsp-ibm.
Just a note. Locally I changed the QuantumInstance execute method to print the number of circuits it was passed and ran the kernel unit tests. It's quite a small number, so nowhere near the 900 limit - in fact it's not even in double digits. Anyway, I changed the default batch size in the kernel to a much smaller number and things worked, but not all numbers were limited - presumably the ones via statevector usage, as I was doing this from the main branch. Cloning your fork and doing the same from there, all the counts were limited - in fact I set it to 1 as a test and it printed all 1's and passed. So it seems it's working OK. It would be nice if the test did somehow test out the batch size, but maybe that could/should be raised as a separate issue. @adekusar-drl any thoughts here - you commented early on about a test around the fix.
@woodsp-ibm When I mentioned unit tests I did not have anything special on my mind. In general, your idea of setting batch size to 1 and then running a test on the statevector simulator make sense.
@rsln-s What do you think?
In general, your idea of setting batch size to 1 and then running a test on the statevector simulator make sense.
I had done it so it applied to the qasm mode as well - since it appeared not tested in general. While the final outcome can be checked as it is currently, what is more complicated is checking that the batch size is indeed limiting the number of internal computations (circuits). My only thought on how to do such a check, as I mentioned in an earlier comment, was to hook the QuantumInstance execute method on the instance used with the backend, such that the number of circuits given to the method could be fairly easily intercepted and checked - i.e. hook it, do whatever is needed to check, then call over to the original method that was hooked so that the circuit results from execute can be returned.
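The hooking idea can be done generically: wrap the instance's execute so each call records how many circuits it received before delegating to the original method. The sketch below uses a stand-in class rather than the real QuantumInstance, purely to illustrate the interception:

```python
def hook_execute(instance, seen_counts):
    """Wrap instance.execute so every call records len(circuits)
    before delegating to the original bound method."""
    original = instance.execute

    def wrapper(circuits, *args, **kwargs):
        seen_counts.append(len(circuits))
        return original(circuits, *args, **kwargs)

    instance.execute = wrapper  # instance attribute shadows the method
    return instance

class FakeQuantumInstance:  # stand-in for the real QuantumInstance
    def execute(self, circuits):
        return ["result"] * len(circuits)

counts = []
qi = hook_execute(FakeQuantumInstance(), counts)
qi.execute([1, 2, 3])
qi.execute([4])
# counts == [3, 1] -- a test could now assert each entry <= batch_size
```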
@woodsp-ibm I approve, merge this PR and open an issue to improve tests for QuantumKernel. Any thoughts?
|
2025-04-01T06:37:28.042685
| 2022-02-03T18:28:39
|
1123420425
|
{
"authors": [
"coveralls",
"mtreinish"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2398",
"repo": "Qiskit/qiskit-terra",
"url": "https://github.com/Qiskit/qiskit-terra/pull/7620"
}
|
gharchive/pull-request
|
Update git blame rev ignore list
Summary
In the recently merged #7615 we bumped the black version and constrained
it to a stable version to keep us on a fixed set of formatting rules but
also receiving bug fixes. In doing this some new formatting rules were
applied to the repo (mainly x ** y was changed to x**y) by the new
version of black. To reduce noise in the git blame this commit updates
the .git-blame-ignore-revs file (which was added after we started using
black in #6362) to include the sha1 for this commit. This means that
when running git blame on files this commit will be ignored (assuming
the local git environment is configured correctly).
Details and comments
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
3 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.005%) to 83.355%
Files with Coverage Reduction: qiskit/pulse/library/waveform.py: 3 new missed lines (89.36%)
Totals: Change from base Build<PHONE_NUMBER>: -0.005%; Covered Lines: 52236; Relevant Lines: 62667
💛 - Coveralls
|
2025-04-01T06:37:28.057214
| 2024-01-08T10:00:45
|
2070103144
|
{
"authors": [
"1ucian0",
"ShellyGarion",
"TsafrirA"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2399",
"repo": "Qiskit/qiskit",
"url": "https://github.com/Qiskit/qiskit/issues/11509"
}
|
gharchive/issue
|
RZXCalibrationBuilder fails for qubit pairs with GaussianSquareDrag
Environment
Qiskit version: 0.45 and 1.1.0.dev0+c99f325
Python version: 3.12
Operating system: Windows
What is happening?
RZXCalibrationBuilder fails for qubit pairs with GaussianSquareDrag, because it can't identify the native ECR direction.
How can we reproduce the issue?
The following code
from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder
from qiskit_ibm_provider import IBMProvider  # provider setup added for completeness
provider = IBMProvider()
backend = provider.get_backend("ibmq_kolkata")
instmap = backend.defaults().instruction_schedule_map
qubits = ...
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)
works if you set qubits=(4,7) and fails if you set qubits=(6,7).
The error originates here, and is caused by the filter counting comp tones which ignores anything but GaussianSquare or Waveform pulses. However, for some pairs (with the pair 6,7 being one of them) the pulses are reported as GaussianSquareDrag. The pulses have beta=0 so they are identical to GaussianSquare, but they are not counted towards the comp tones, which leads to the failure.
What should happen?
The code should run for both qubit pairs.
Any suggestions?
I suspect changing the allowed types in the filter would solve the issue, but I am not familiar enough with this piece of code, and what the backends might report for other qubits.
Counting GaussianSquareDrag with beta=0 is perhaps a safer option.
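A sketch of that suggestion: broaden the filter so a GaussianSquareDrag with beta == 0 also counts as a comp tone. The classes below are stand-ins that only mimic the relevant attributes of the Qiskit pulse types; this is not the actual filter code:

```python
class GaussianSquare:            # stand-in for qiskit.pulse.GaussianSquare
    pass

class GaussianSquareDrag:        # stand-in for qiskit.pulse.GaussianSquareDrag
    def __init__(self, beta):
        self.beta = beta

def counts_as_comp_tone(pulse):
    """Count plain GaussianSquare pulses, plus GaussianSquareDrag
    pulses whose beta is 0 (then identical in shape)."""
    if isinstance(pulse, GaussianSquare):
        return True
    if isinstance(pulse, GaussianSquareDrag):
        return pulse.beta == 0
    return False

pulses = [GaussianSquare(), GaussianSquareDrag(beta=0), GaussianSquareDrag(beta=0.2)]
num_comp_tones = sum(counts_as_comp_tone(p) for p in pulses)
# num_comp_tones == 2
```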
I was able to reproduce this issue on 0.46.
This works:
from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder
from qiskit_ibm_provider import IBMProvider
provider = IBMProvider()
backend = provider.get_backend("ibm_cusco")
instmap = backend.defaults().instruction_schedule_map
qubits = (6,7)
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)
qc.draw('text')
This, however, does not:
from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder
from qiskit_ibm_provider import IBMProvider
provider = IBMProvider()
backend = provider.get_backend("ibm_cusco")
instmap = backend.defaults().instruction_schedule_map
qubits = (4,7)
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)
qc.draw('text')
QiskitError: "Native direction cannot be determined: operation on qubits [4, 7] for the following instruction schedule map: ...
The class qiskit.transpiler.passes.calibration.rzx_builder.RZXCalibrationBuilder is deprecated as of Qiskit 1.3. It will be removed in Qiskit 2.0. The entire Qiskit Pulse package is being deprecated and will be moved to the Qiskit Dynamics repository: https://github.com/qiskit-community/qiskit-dynamics. Note that once removed, qiskit.transpiler.passes.calibration.rzx_builder.RZXCalibrationBuilder will have no alternative in Qiskit.
|
2025-04-01T06:37:28.066653
| 2023-12-13T18:20:29
|
2040245267
|
{
"authors": [
"coveralls",
"jakelishman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2400",
"repo": "Qiskit/qiskit",
"url": "https://github.com/Qiskit/qiskit/pull/11411"
}
|
gharchive/pull-request
|
Add missing parameter in standard-gate mapping
Summary
The XXPlusYYGate and XXMinusYYGate instances returned from get_standard_gate_name_mapping were missing their optional beta parameter.
Details and comments
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
16 unchanged lines in 4 files lost coverage.
Overall coverage decreased (-0.02%) to 87.552%
Files with Coverage Reduction:
crates/qasm2/src/expr.rs: 1 new missed line (93.76%)
qiskit/quantum_info/synthesis/two_qubit_decompose.py: 2 new missed lines (96.65%)
crates/qasm2/src/parse.rs: 6 new missed lines (97.6%)
crates/qasm2/src/lex.rs: 7 new missed lines (91.41%)
Totals: Change from base Build<PHONE_NUMBER>: -0.02%; Covered Lines: 59771; Relevant Lines: 68269
💛 - Coveralls
|
2025-04-01T06:37:28.074128
| 2024-09-30T10:24:35
|
2556214509
|
{
"authors": [
"alexanderivrii"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2401",
"repo": "Qiskit/qiskit",
"url": "https://github.com/Qiskit/qiskit/pull/13240"
}
|
gharchive/pull-request
|
Improve qubit tracking in HighLevelSynthesis
Summary
Fixes #13239. Now for both examples in the referenced issue, HighLevelSynthesis produces a circuit with 24 CX-gates and 45 U-gates.
In addition, this also improves clean ancilla detection on the following example:
inner1 = QuantumCircuit(4)
inner1.h(0)
inner1.cz(0, 2)
qc = QuantumCircuit(6)
qc.append(inner1.to_instruction(), [1, 2, 3, 4])
pass_ = HighLevelSynthesis(basis_gates=["cx", "u"])
qct = pass_(qc)
Even though the inner circuit inner1 is defined over the qubits 1, 2, 3, 4 in the main circuit qc, the qubits 1 and 3 in the inner circuit (corresponding to the qubits 2 and 4 in the main circuit) remain clean and can be used as clean ancilla qubits in the following gates.
Details and comments
This PR is based on numerous discussions with @Cryoris.
Recalling the second example from the referenced issue
inner = QuantumCircuit(6)
inner.mcx([0, 1, 2, 3, 4], 5)
custom_gate = inner.to_gate()
qc = QuantumCircuit(10)
qc.append(custom_gate, [3, 4, 5, 6, 7, 0])
basis_gates = ["u", "cx"]
tqc = HighLevelSynthesis(basis_gates=basis_gates)(qc)
the tricky part is that the recursive call to HighLevelSynthesis::_run on the inner circuit should have access to the clean "global" qubits outside of the circuit's definition. To tackle this, we introduce a class QubitContext which keeps the correspondence between the current DAG's qubits and the global qubits of the original circuit. The state of the global qubits (clean/dirty) is tracked by QubitTracker. When an internal synthesis algorithm (here for the internal MCX gate) checks how many clean/dirty ancilla qubits are available, it does so taking all of the global qubits into account.
However, this also means that synthesizing an internal DAG may output a DAG with more qubits. In particular (demonstrating possible complications) we can have an internal DAG with say 10 qubits that contains an MCX-gate over 8 qubits that can be synthesized by the appropriate synthesis algorithm to a circuit over 14 qubits (exploiting the global clean/dirty) qubits. Hence, our internal DAG must also grow in size (fortunately all the tracking is possible using the local-to-global correspondences of all the objects involved) which might mean that the DAG higher up in the chain might need to grow as well.
This PR also adds more HighLevelSynthesis tests exploring all these possible edge-cases.
In summary, this should handle any mixture of recursively defined circuits with annotated operations, custom gate definitions and HighLevelSynthesis plugins. For circuits containing control-flow ops, the extended functionality is not used (and will be delegated to another PR, if someone wants to tackle this). In other words, if we have a control-flow op over say 4 qubits in the bigger circuit, only these 4 qubits will be used when recursively processing the blocks in this op.
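The local-to-global bookkeeping described above can be sketched as a tiny mapping class; this is an illustration of the idea only, not the actual QubitContext implementation:

```python
class QubitContext:
    """Map a DAG's local qubit indices onto global circuit qubits,
    extending the mapping when a synthesized block borrows extra
    (clean/dirty) ancillas from outside its own definition."""

    def __init__(self, local_to_global):
        self.local_to_global = list(local_to_global)

    def to_global(self, local_index):
        return self.local_to_global[local_index]

    def add_ancilla(self, global_index):
        # The local DAG grows by one qubit; return its new local index.
        self.local_to_global.append(global_index)
        return len(self.local_to_global) - 1

ctx = QubitContext([3, 4, 5, 6, 7, 0])   # inner gate acts on these global qubits
new_local = ctx.add_ancilla(8)           # borrow a clean global qubit
# ctx.to_global(new_local) == 8
```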
Update: An additional observation is that this also improves the synthesis of open-controlled MCX gates. In Qiskit, the name of an open-controlled gate is not of the form "mcx", but rather of the form "mcx_o17", where "17" is (the integer representation of) the control state. When processing this gate, HLS would not immediately call a synthesis plugin (since "mcx_o17" is not in the list of gate names for which plugins exist), but recursively process the definition of this gate, which is a quantum circuit consisting of a layer of X-gates, a closed-controlled MCX gate (called "mcx"), and the inverse layer of X-gates. During this recursion, HLS would now indeed call the synthesis plugin for the internal MCX-gate, and with this PR it would be able to use the ancilla qubits in the main circuit and outside of the internal open-controlled gate's definition.
Note: in 7432b6b I have disabled one of the newly added MCMT tests, I am trying to decide whether it points to a real bug or HLS is accidentally doing something too smart.
Please note a small follow-up PR #13369 that ports QubitTracker and QubitContext to Rust.
|
2025-04-01T06:37:28.077568
| 2020-07-13T22:55:31
|
656202312
|
{
"authors": [
"coveralls",
"mtreinish"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2402",
"repo": "Qiskit/retworkx",
"url": "https://github.com/Qiskit/retworkx/pull/99"
}
|
gharchive/pull-request
|
Update coverage rustflags
This commit updates the flags used for generating coverage data with
grcov based on the latest README for grcov. [1] The last time these were
updated in #72 was to remove a flag that was being removed from rust and
causing the job to fail. However, that commit failed to add equivalent
flags which would perform the same functionality. This commit fixes that
oversight so we should have more reliable coverage collection.
[1] https://github.com/mozilla/grcov/blob/master/README.md
Coverage increased (+2.7%) to 88.235% when pulling b04d177b84b25ba180ec2987164c8f2f001a91fc on mtreinish:update-rustflags-for-coverage into a9dcb8d3f376b1971e48c68fbf6b2c4aa084c67a on Qiskit:master.
|
2025-04-01T06:37:28.092934
| 2018-05-15T09:53:19
|
323145950
|
{
"authors": [
"codecov-io",
"georgeconstantinou"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2403",
"repo": "QoboLtd/project-template-cakephp",
"url": "https://github.com/QoboLtd/project-template-cakephp/pull/540"
}
|
gharchive/pull-request
|
Bugfix for Search child items functionality (task #5956)
Updated the search child items logic to match the new parsed lists structure.
Codecov Report
Merging #540 into master will decrease coverage by 0.09%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #540 +/- ##
===========================================
- Coverage 27.44% 27.35% -0.1%
- Complexity 972 976 +4
===========================================
Files 88 88
Lines 3334 3345 +11
===========================================
Hits 915 915
- Misses 2419 2430 +11
Impacted Files (Coverage Δ / Complexity Δ):
...ent/Plugin/Search/Model/ChildListItemsListener.php: 0% <0%> (ø) / 21 <0> (ø) :arrow_down:
src/ScheduledJobs/Jobs/CakeShellJob.php: 0% <0%> (ø) / 8% <0%> (+4%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1597472...662d7ff. Read the comment docs.
|
2025-04-01T06:37:28.115169
| 2023-10-25T12:13:44
|
1961270931
|
{
"authors": [
"pavithraes"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2404",
"repo": "Quansight/ragna",
"url": "https://github.com/Quansight/ragna/issues/109"
}
|
gharchive/issue
|
Update examples
Add some minimal narration to the examples, and look into converting them into mkdocs-gallery-based examples to serve on the website.
These can live in the Tutorial or References section, under an "Examples" sub-section.
Also, make sure this works narratively with the tutorial docs (which are derived from the examples)
Closing as duplicate of #26 -- will track it there
|
2025-04-01T06:37:28.120501
| 2019-04-26T12:22:54
|
437647413
|
{
"authors": [
"AlexCatarino",
"jaredbroad"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2405",
"repo": "QuantConnect/Lean",
"url": "https://github.com/QuantConnect/Lean/pull/3136"
}
|
gharchive/pull-request
|
Adds Custom Data from US Energy Information Administration (eia.gov)
Description
Built new custom data class for US Energy Information Administration data. Accommodates hourly, daily, monthly, quarterly, and yearly resolutions.
Related Issue
Closes #3106
Motivation and Context
Adds new custom data class to expand current custom data capabilities and add functionality for users.
Requires Documentation Change
Not likely, although updates to custom data documentation will be needed if deemed necessary. Additionally, documentation updates may be needed for Python indicators since it has been found that
self.EMA returns the same as self.EMA.Current.Value, at least when formatted in logging. This will need further testing.
How Has This Been Tested?
Tested locally across numerous tickers and resolutions.
Tested in QuantConnect Cloud in backtesting and live mode.
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Non-functional change (xml comments/documentation/etc)
Checklist:
[x] My code follows the code style of this project.
[x] I have read the CONTRIBUTING document.
[x] All new and existing tests passed.
[x] My branch follows the naming convention bug-<issue#>-<description> or feature-<issue#>-<description>
We need to create a "No New Data" Signal for custom data types which can address the null return problem. We should also review all existing custom data (quandl) implementations to confirm they're returning OK.
|
2025-04-01T06:37:28.126512
| 2019-11-27T03:22:59
|
529090495
|
{
"authors": [
"RohanTalip",
"jaredbroad"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2406",
"repo": "QuantConnect/Lean",
"url": "https://github.com/QuantConnect/Lean/pull/3878"
}
|
gharchive/pull-request
|
Minor typo and grammar fixes to comments in PeriodCountConsolidatorBase
Description
Related Issue
Motivation and Context
Requires Documentation Change
How Has This Been Tested?
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] Refactor (non-breaking change which improves implementation)
[ ] Performance (non-breaking change which improves performance. Please add associated performance test and results)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[x] Non-functional change (xml comments/documentation/etc)
Checklist:
[x] My code follows the code style of this project.
[x] I have read the CONTRIBUTING document.
[ ] I have added tests to cover my changes.
[ ] All new and existing tests passed.
[ ] My branch follows the naming convention bug-<issue#>-<description> or feature-<issue#>-<description>
Thank you @RohanTalip
|
2025-04-01T06:37:28.128123
| 2023-01-06T17:25:51
|
1522904270
|
{
"authors": [
"DerekMelchin",
"pberto"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2407",
"repo": "QuantConnect/lean-cli",
"url": "https://github.com/QuantConnect/lean-cli/issues/261"
}
|
gharchive/issue
|
Data Feed Options Don't Include Alt Datasets
When we run lean live, the "Select a data feed" prompt should include alt datasets like Tiingo. It currently just shows
Since you guys are at it, it would be nice to also add TwelveData: https://twelvedata.com
|
2025-04-01T06:37:28.129278
| 2021-11-11T18:47:42
|
1051263963
|
{
"authors": [
"omidkrad"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2408",
"repo": "QuantConnect/lean-cli",
"url": "https://github.com/QuantConnect/lean-cli/issues/41"
}
|
gharchive/issue
|
Deploying algo using DLL
For security reasons, I want to deploy my algo in a .NET assembly rather than using a CS code project. Is there any way to do that? Even if I reference the DLL in the project and run it, that should be fine.
I'm closing this, as it's not supported.
|
2025-04-01T06:37:28.138482
| 2020-02-24T05:59:27
|
569652101
|
{
"authors": [
"jstac",
"mtiley",
"shlff"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2409",
"repo": "QuantEcon/lecture-source-py",
"url": "https://github.com/QuantEcon/lecture-source-py/pull/934"
}
|
gharchive/pull-request
|
about_py: excel link added and misc edits
fixes #796
Good PR, nice improvements and fixes. Please see the comment above.
Thanks, @mtiley . Nice work.
Hi @jstac , I kept @mtiley 's changes and modified the sentence as you instructed.
Thanks @shlff!
Perhaps a link to this article could be included somewhere --- a history of how Python became so popular.
https://www.welcometothejungle.com/en/articles/btc-python-popular
Thanks for the feedback @jstac, I've made those changes.
Nice work, thanks @mtiley
|
2025-04-01T06:37:28.140845
| 2018-05-21T14:01:49
|
324927303
|
{
"authors": [
"SylvainCorlay",
"wolfv"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2410",
"repo": "QuantStack/xtensor",
"url": "https://github.com/QuantStack/xtensor/pull/862"
}
|
gharchive/pull-request
|
fix random access view regression
thanks @PerretB for finding this bug!
fixes #856
Awesome, thanks!
|
2025-04-01T06:37:28.142149
| 2024-06-11T15:54:45
|
2346777954
|
{
"authors": [
"Andrew-S-Rosen",
"buildbot-princeton",
"superstar54"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2411",
"repo": "Quantum-Accelerators/quacc",
"url": "https://github.com/Quantum-Accelerators/quacc/pull/2233"
}
|
gharchive/pull-request
|
Fix a small typo in elastic
The default value from pymatgen is 0.06.
Can one of the admins verify this patch?
Thank you, @superstar54! I'll go ahead and merge this in.
|
2025-04-01T06:37:28.151555
| 2024-05-22T02:42:43
|
2309454330
|
{
"authors": [
"shrik3"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2412",
"repo": "QuarkContainer/Quark",
"url": "https://github.com/QuarkContainer/Quark/pull/1274"
}
|
gharchive/pull-request
|
epoll: define EpollEvent structure per linux API
the userData is sometimes defined as [i32; 2] in Quark code. This doesn't make sense to me because the data is used in one piece as a u64 anyways. But I may be wrong.
However, I don't see where this array is converted into a u64 — am I missing something? What was the initial idea behind defining the userData as [i32; 2]?
This is tested on both x86 and aarch64, I don't see regression so far. Also fixes #1230
CC @QuarkContainer
Note: @CharlyYu fixed a similar issue recently, but the EpollEvent struct is defined differently in more than 4 places depending on how it is used, and all of them require additional padding for aarch64. I don't want to have 8 struct definitions of the same thing, so I propose using one unified definition of this struct.
The Data field could be either a fd (i32), a u32, or a u64 data. If you use Data as i32 it will use the lower 4 bytes and has no conflict with the earlier defs.
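The layout equivalence discussed above can be illustrated with a small, hypothetical Python sketch (this is not Quark code): an [i32; 2] pair and a u64 are just two views of the same 8 bytes, so using the lower half as an fd conflicts with nothing.

```python
import struct

# Pack two little-endian i32 values ([i32; 2] layout) and read the same
# 8 bytes back as one u64 — no information is gained or lost either way.
fd, upper = 42, 0
as_pair = struct.pack("<ii", fd, upper)
(as_u64,) = struct.unpack("<Q", as_pair)
assert as_u64 == 42  # the fd occupies the lower 4 bytes

# Conversely, store a full u64 and read the lower half back as an i32.
(low, _high) = struct.unpack("<ii", struct.pack("<Q", 0xDEADBEEF00000007))
assert low == 7
```

This is why a single unified struct definition works for all three uses (fd, u32 token, u64 data): they only differ in how the caller interprets the one 8-byte field.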
|
2025-04-01T06:37:28.191460
| 2023-09-25T10:50:13
|
1911211851
|
{
"authors": [
"QuiiBz",
"ivanafanasyeu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2414",
"repo": "QuiiBz/next-international",
"url": "https://github.com/QuiiBz/next-international/issues/195"
}
|
gharchive/issue
|
Feedback for “Middleware configuration”
urlMappingStrategy: rewrite is not working
Please provide a reproduction. It works as expected in https://github.com/QuiiBz/next-international/tree/main/examples/next-app
|
2025-04-01T06:37:28.197104
| 2023-12-04T07:27:17
|
2023200608
|
{
"authors": [
"QuiiBz",
"cglacet",
"pajarrahmansyah",
"pontusab"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2415",
"repo": "QuiiBz/next-international",
"url": "https://github.com/QuiiBz/next-international/issues/300"
}
|
gharchive/issue
|
Flicker with client component
I have some flicker on refresh when using client components with translations. It's due to the Suspense boundary in I18nProviderClient — is there a way to fix this flicker?
https://github.com/QuiiBz/next-international/assets/655158/90f4363f-6dae-4e90-ad22-a8e29367b62e
layout.tsx:
"use client";
import { I18nProviderClient } from "@/locales/client";
import { ReactNode } from "react";
type ProviderProps = {
locale: string;
children: ReactNode;
};
export function Provider({ locale, children }: ProviderProps) {
return (
<I18nProviderClient locale={locale} fallback={<p>Loading...</p>}>
{children}
</I18nProviderClient>
);
}
"next-international": "^1.1.4",
"next": "14.0.3",
Sorry for the delay, I started a new job. Could you share a minimal reproduction? I cannot reproduce the issue with the example in the repo: https://github.com/QuiiBz/next-international/tree/main/examples/next-app
No worries at all! Congrats on the new job, well deserved! I will actually investigate other things like next-themes to really know what the problem is here. So I will close this, and if the problem still exists after Midday is open-source it will be much easier for me to showcase the issue.
Merry Christmas and happy new year!
I just confirmed that it was indeed the next-themes provider, moving it down under I18nProviderClient fixed the flicker issue!
Hello, doesn't this contradict this part of the documentation?
Move all your routes inside an app/[locale]/ folder. For Client Components, wrap the lowest parts of your app with I18nProviderClient inside a layout
Because the next-themes provider goes on the very top of your app. I'm having the same flicker issue.
That depends on whether next-themes can suspend too; next-international has a Suspense boundary internally, which might be why it works when next-themes is a child of it.
In my case I use NextUI and next-themes, and all of them should go inside the next-international provider — not only that, I think the same applies to every client component.
Just want to add a note for everyone: I was chasing this flicker for a full day, HAHA, thank god.
|
2025-04-01T06:37:28.316885
| 2023-01-13T19:31:48
|
1532807640
|
{
"authors": [
"MuntashirAkon",
"REAndroid"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2416",
"repo": "REAndroid/ARSCLib",
"url": "https://github.com/REAndroid/ARSCLib/issues/9"
}
|
gharchive/issue
|
Set indentation for attributes in new lines
There is an option to set indentation for child tags in XMLDocument, but no option to set indentation for new-line attributes; they are at present calculated in XMLElement#appendAttributesIndentText(Writer). It might be better to provide an option to set indentations instead of calculating them. For example, instead of providing an argument boolean newLineAttributes, you can provide int attributeIndentation whose negative value (e.g. -1) would imply no new line and any non-negative number would imply the amount of indentation.
Thanks.
I agree, there are a lot of other issues to fix in the com.reandroid.xml.* classes.
Any updates?
Sorry I don't know how I forgot this issue.
Keeping newLineAttributes, I have added setAttributesIndentScale(float indentScale) for elements, you can set negative value to pull back.
Keeping newLineAttributes, I have added setAttributesIndentScale(float indentScale) for elements, you can set negative value to pull back.
It only scales the calculated indentation, it does not set the indentation.
I feel like I missed your point. As of the last commit you can set the indentation to any position you like; I kept the newLineAttributes param to turn indentation on/off. The indentation for attributes must be calculated because it is anchored to the parent element.
Can you show me your goal with a screenshot/document? Or make a PR?
I have ended up writing a concrete implementation of XmlPullParser to handle this. As I said in #18, I don't think XML conversion should be part of the library as it can be handled quite easily using XmlSerializer and Transformer functions.
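For illustration, the sign convention proposed at the top of this issue (negative value means keep attributes on one line, non-negative value is the indent width) could look like this hypothetical Python sketch — format_attributes and its behavior are assumptions for illustration, not ARSCLib's API:

```python
def format_attributes(attrs, attribute_indentation):
    # Negative -> all attributes stay on the element's line;
    # non-negative -> each attribute goes on its own line, indented
    # by exactly attribute_indentation spaces (set, not calculated).
    pairs = [f'{key}="{value}"' for key, value in attrs.items()]
    if attribute_indentation < 0:
        return " " + " ".join(pairs)
    separator = "\n" + " " * attribute_indentation
    return separator + separator.join(pairs)
```

The point of the sketch is that the caller fully controls the indent width; nothing is derived from the parent element's position.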
|
2025-04-01T06:37:28.339342
| 2022-07-28T21:12:15
|
1321493533
|
{
"authors": [
"ElianHugh",
"benz0li",
"eitsupi",
"grantmcdermott",
"hermidalc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2418",
"repo": "REditorSupport/vscode-R",
"url": "https://github.com/REditorSupport/vscode-R/issues/1162"
}
|
gharchive/issue
|
Master installation script / convenience function
Is your feature request related to a problem? Please describe.
Setting up a fully-fledged R environment in VS Code currently involves several manual steps, as outlined in the README. Users must first install the languageserver package (from R), then add the REditorSupport extension (from VS Code), and then configure several other add-ons from other locations/sources (e.g. radian via pip in the terminal, httpd from R...).
In my experience, this startup rigmarole is tricky for newcomers and, possibly, even for fairly advanced users. (Example: I explicitly had to add radian to my $PATH on both Linux and Mac before it would work.) Once everything is set up, then VS Code genuinely makes for an excellent R IDE. But it's harder getting to that point than it needs to be IMO.
Describe the solution you'd like
One possible solution is to handle all of these installation requirements via a single convenience script or R function. I'm thinking something along the lines of arrow::install_arrow() and reticulate::install_python() (or even JILL and Homebrew).
Say this functionality was bundled as an R function; call it vscoder_setup(). Users could then pass arguments regarding the installation and configuration that they'd like... although I'd argue that the full enchilada of recommended steps—including the debugger plugin and radian—be installed as a default.
Describe alternatives you've considered
Apart from the current manual approach, none.
Additional context
#718 is somewhat related. Again, however, my goal is to provide a single convenience function that I can give to students and colleagues that sees them off to the races with minimal effort or assumed knowledge on their part.
Thanks for considering!
I think a generalized approach to doing this is to use VSCode Remote-Containers (or simply devcontainer CLI).
@grantmcdermott Another approach is building a Docker image with JupyterLab + R + code-server + ...
Log into https://demo.jupyter.b-data.ch with your GitHub account and start Image R (verse:latest) + code-server.
You may also run registry.gitlab.b-data.ch/jupyterlab/r/verse locally like the official Jupyter docker images: https://github.com/jupyter/docker-stacks#quick-start
See also https://gitlab.b-data.ch/explore/projects/topics/JupyterLab.
ℹ️ Multi-arch (linux/amd64, linux/arm64/v8) Docker images with code-sever + R | Python | Julia + ...
Thanks both. I agree the containerized options are nice and I personally make use of them a lot in my own workflow.
But again, I'm looking to accommodate users who will, typically, never have heard of Docker and might even be getting exposed to R and VS Code for the first time. (And me in the middle, trying to convince them to use both!) I'd love to make the setup+installation process for these users as simple as it is for, say, RStudio.
I think it would be fair to say that one of the main vscode-R goals, at present, is to drastically reduce the barrier to entry. This can be seen for instance in the defaulting of standard vsc-R settings, or the ability to toggle R options via extension settings.
RStudio's onboarding and new user experience is nothing short of exceptional and is something that can (and probably should) be used as a benchmark. @renkun-ken has a nice blogpost discussing this (unfortunately I can't find the link ATM).
As you've outlined, we need to do better with the setup phase. The python extension does a nice job of setting up upon initial installation, and perhaps we need to explore that more.
The selection of rPath and rTerm can also be confusing, and the cognitive overhead should hopefully be reduced in an upcoming PR.
Hopefully more to come in making this a more inviting experience :)
Hopefully the work the devs are doing to bring full multi-root support, so that R in VS Code operates just like Python, will make things better, because the Python side is really easy to get going and R should be the same.
And sorry, maybe I'm an idiot, but I've never gotten vscode-R to work. I follow the README and have all the requirements installed. I specify the rterm and rpath to fixed binary paths in my settings, but nothing works: you cannot attach an R terminal in VS Code, it doesn't do any syntax checking — really, it just doesn't seem connected to R and radian at all. I'm simply not understanding, because honestly, how can anyone do these simple steps wrong?
Is it because my R, radian, languageserver, and httpgd are in a single conda environment? I remember also trying to install these into system paths via my Linux dnf package manager, but it also didn't work at all. How did other people get things to work?
|
2025-04-01T06:37:28.348741
| 2023-09-18T10:31:53
|
1900639743
|
{
"authors": [
"dr-orlovsky",
"fedsten"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2419",
"repo": "RGB-WG/rgb-wallet",
"url": "https://github.com/RGB-WG/rgb-wallet/pull/104"
}
|
gharchive/pull-request
|
Handle chain information in explicit form in invoices
This is a breaking change since inside the invoice structure we make chain non-optional; plus we add new error cases and make set_chain return Result. Thus I am making this PR against v11 branch.
Closes #103
Any reason why it is non-optional instead of optional as discussed in #103?
@fedsten the parameter is optional as was discussed, thus the invoice string is backwards-compatible. However, for each given invoice we always know which network it belongs to (since we are defaulting to mainnet as discussed), thus we must have a non-optional field.
Got it, for clarifying.
In other words, a wallet will always know and be sure which network a given invoice belongs to. If the invoice has conflicting network information (like an address network that doesn't match the network provided in a parameter) it will error during the parse procedure. But each parsed invoice always knows which network it is valid for.
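The resolution rule described above — optional network parameter, default to mainnet, hard error on conflict — can be sketched in a hypothetical Python snippet (not the actual rgb-wallet code; the function name and string networks are illustrative assumptions):

```python
def resolve_invoice_network(param_network=None, address_network=None):
    # The network parameter is optional in the invoice string, but every
    # parsed invoice ends up with a concrete, non-optional network.
    network = param_network or "mainnet"  # default to mainnet
    if address_network is not None and address_network != network:
        # Conflicting information fails at parse time, never later.
        raise ValueError("invoice network conflicts with address network")
    return network
```

This mirrors why the in-memory field is non-optional even though the string parameter is optional: the default is applied exactly once, during parsing.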
@zoedberg wait, we do not have CI set up for this repo? Oh, I'll fix it.
I did this PR originally against the v0.10 branch, which used rust-bitcoin v0.30. Then I understood it's breaking, so I moved to v0.11, which doesn't depend on rust-bitcoin at all and has a different Address type (from bp-std) with a different API. Thus the issue. Will fix.
Closing in favor of solution from https://github.com/RGB-WG/rgb-std/pull/118
|
2025-04-01T06:37:28.479450
| 2024-07-28T15:46:30
|
2434000056
|
{
"authors": [
"NicEastvillage",
"VirxEC"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2439",
"repo": "RLBot/core",
"url": "https://github.com/RLBot/core/issues/38"
}
|
gharchive/issue
|
Let users make and pick presets for Psyonix bots
Some users like to run tournaments with Psyonix bots and sometimes change their name and appearance. In v4, the latter was technically possible, but a bit of a hassle to do. In v5 we should avoid hardcoding Psyonix bots too much. I propose the following features:
Users can pick specific Psyonix bot name+loadout
in core/match toml: Matches can have (Psyonix) bots with specified/overridden name and loadout
in GUI: TBD. Maybe it should be possible to override any bots' name and appearance by right/double clicking a bot added to the current match before match start.
Users can change the pool of names+loadouts used by Psyonix bots (less important, more advanced impl)
We don't want users to accidentally delete the standard Psyonix bot loadouts, but they should be able to define a pool of loadouts that rlbot picks from. It should be easy to include/exclude standard loadouts and new custom loadouts in this pool. One solution could be a file that includes paths to the included loadouts.
We could read config.toml for Psyonix bots and allow setting the name, which if matching a preset, would load that preset. Also if a looks config is defined, we could just use that instead of any preset. This would be easy to implement.
The “adding to the preset pool” could also be faked by the gui with the above system.
I have thought about this a bit more. We also have users who want to troll their friends by renaming Nexto. So ideally, we should find a general solution that works with both custom and Psyonix bots. I imagine some kind of override field in the match config now.
What is config.toml? The one corresponding to rlbot.cfg?
Also related: #40
So it would look something like this:
[[cars]]
config = "greg.toml"
team = 0
type = "psyonix"
skill = "allstar"
We have settled on something akin to what Virx described above. The match settings (rlbot.toml) can override any bot's name and loadout. This enables the GUI (and other match starter tools) to control how (Psyonix) bots appear when using that tool. By default, core will select randomly from the standard loadouts used in the game when spawning Psyonix bots.
|
2025-04-01T06:37:28.487531
| 2022-02-02T11:41:29
|
1121821787
|
{
"authors": [
"SteveQ2"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2441",
"repo": "RMCob/homebridge-wyze-connected-home-op",
"url": "https://github.com/RMCob/homebridge-wyze-connected-home-op/issues/23"
}
|
gharchive/issue
|
Error- Can’t find orphan to remove, also 429
This is a really great plugin. I’ve been using it since you first posted it many months ago. It took many attempts to get it up and running, but it has been worth all the effort. However, it has lots of stability issues. I have rebuilt it (a clean installation) about once a month. Now I have 2 instances running on 2 different windows 10 mini computers. They are both running but Probably not for long. The HomeBridge log is full of errors.
Steve Q
I reinstalled NPM and hb-service. The plugin is now working.
SteveQ2
|
2025-04-01T06:37:28.510860
| 2024-01-30T17:54:44
|
2108437618
|
{
"authors": [
"blakesweeney"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2442",
"repo": "RNAcentral/rnacentral-import-pipeline",
"url": "https://github.com/RNAcentral/rnacentral-import-pipeline/pull/179"
}
|
gharchive/pull-request
|
Cleanup rnc accession columns
This is the start of work to clean up the columns in rnc_accessions. Generally, this pull request stops writing and loading specific columns which do not appear to be useful. This should be safe to just run, but as far as I can tell tests are broken right now. Prior to running this pipeline, we must manually edit the rnc_update.update_rnc_accession function in the database to remove references to the columns cleaned up here. Otherwise it will break when loading things into the database. For reference, the columns are:
map
allele
ordinal
pseudogene
old_locus_tag
anticodon
division
common_name
classification
species
operon
The taxonomic columns are not needed but left in the entry object for now. There needs to be more careful work to remove them as we do use them for some logic within the pipeline.
I've been working through the database tests and have fixed some of them. Fixing those is going to take me a while. I can start tracking it but I suspect this is very slow going.
|
2025-04-01T06:37:28.568165
| 2023-03-12T18:32:59
|
1620465431
|
{
"authors": [
"hongxiayang",
"mpourasa",
"ppanchad-amd",
"tedtroxell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2443",
"repo": "ROCm/ROCm",
"url": "https://github.com/ROCm/ROCm/issues/1931"
}
|
gharchive/issue
|
Segmentation Fault Pytorch RX 6700xt
Hi,
I am trying to run a test on my RX 6700 XT but I face many problems.
I tried the workarounds from issues #1686 and #1687 but was not able to fix the problem.
I get a segmentation fault when I try to run a test inside a Docker image:
https://www.youtube.com/watch?v=HwGgzaz7ipQ
The other problem is that after having installed rocm and pytorch terminal returns cuda available = false. ( in docker image returns true)
I have used both Ubuntu 22 and 20 but still have the same problem. Is PyTorch compatible only with ROCm 5.2?
Also, in the docker image it returns cuda available but does not run the test.
Can anybody help?
Do you know if that version supports the 6700 xt? I have a 7900 xtx, which isn't supported and I can't use the gpu without it segfaulting, I would guess that it's a driver mismatch.
I followed the example below closely and got the same error again:
https://www.youtube.com/watch?v=IQSvz6jBCis&t=1072s
RX6700xt is compatible with the versions of the example
Looks like your gpu type is not in the official supported list:
https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html
Can you run
rocminfo | grep gfx
Looks like your gpu type is not in the official supported list: https://rocm.docs.amd.com/en/latest/release/gpu_os_support.html
Can you run
rocminfo | grep gfx
According to the following guide, the RX 6700 XT should be sufficient.
https://www.videogames.ai/2022/09/01/RX-6700s-Machine-Learning-ROCm.html
@mpourasa RX6700 XT is not officially supported in our latest ROCm 6.1.1. Thanks!
|
2025-04-01T06:37:28.569619
| 2024-09-27T20:06:07
|
2553654970
|
{
"authors": [
"samjwu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2444",
"repo": "ROCm/ROCm",
"url": "https://github.com/ROCm/ROCm/pull/3826"
}
|
gharchive/pull-request
|
Update 6.2.2 docs
from 6.2 release branch roc-6.2.x
ignoring spell check failure
hit docker hub rate limit of 100 pulls per 6 hours
|
2025-04-01T06:37:28.573950
| 2024-03-12T10:57:07
|
2181321804
|
{
"authors": [
"LakshmiKumar23",
"kiritigowda",
"rrawther",
"shobana-mcw",
"swetha097"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2445",
"repo": "ROCm/rocAL",
"url": "https://github.com/ROCm/rocAL/pull/114"
}
|
gharchive/pull-request
|
Generic name change - To maintain a generic name instead of "image"
This PR changes the names of image_info to sample_info to maintain a generic name for all use-cases (audio, video, image)
@LakshmiKumar23 : can you please review this PR?
Hi @rrawther,
In our last call we discussed renaming image to sample, which will be a common reference for image/audio/video. We are using this API to get info for other media too.
Can we replace sample with media / data? Please share your thoughts.
@swetha097 can you address the review comments? We need to merge this before we can review the audio PRs
@LakshmiKumar23 Addressed the comments.
@swetha097 @LakshmiKumar23 -- failing CI tests - http://math-ci.amd.com/blue/organizations/jenkins/main%2Fprecheckin%2FrocAL/detail/PR-114/2/pipeline/227
@swetha097 @LakshmiKumar23
[ 68%] Building CXX object rocAL/CMakeFiles/rocal.dir/source/meta_data/meta_node_resize_mirror_normalize.cpp.o
../../../rocAL/source/meta_data/bounding_box_graph.cpp: In member function 'virtual void BoundingBoxGraph::update_random_bbox_meta_data(pMetaDataBatch, pMetaDataBatch, DecodedDataInfo, CropImageInfo)':
../../../rocAL/source/meta_data/bounding_box_graph.cpp:76:36: error: expected primary-expression before '.' token
76 | auto crop_cords = CropImageInfo._crop_image_coords;
| ^
make[2]: *** [rocAL/CMakeFiles/rocal.dir/build.make:1060: rocAL/CMakeFiles/rocal.dir/source/meta_data/bounding_box_graph.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:909: rocAL/CMakeFiles/rocal.dir/all] Error 2
make: *** [Makefile:166: all] Error 2
@kiritigowda Fixed it.
@kiritigowda : this is ready to be merged. Is it still failing CI?
@kiritigowda CI passed on this
|
2025-04-01T06:37:28.580473
| 2024-08-22T18:15:35
|
2481438997
|
{
"authors": [
"amd-jnovotny",
"yhuiYH"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2446",
"repo": "ROCm/rocm-install-on-linux",
"url": "https://github.com/ROCm/rocm-install-on-linux/pull/274"
}
|
gharchive/pull-request
|
Cherry-pick to docs/6.2.1: Fix broken link in Ubuntu 20.04 tab of Docker support matrix
I made a typo in my local branch name. It's based on a checkout of upstream/docs/6.2.1. Sorry for the confusion!
We can close this, as it was addressed in https://github.com/ROCm/rocm-install-on-linux/pull/277
|
2025-04-01T06:37:28.583284
| 2022-05-11T02:03:45
|
1231902780
|
{
"authors": [
"kahmed10",
"qianqing13579"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:2447",
"repo": "ROCmSoftwarePlatform/AMDMIGraphX",
"url": "https://github.com/ROCmSoftwarePlatform/AMDMIGraphX/issues/1203"
}
|
gharchive/issue
|
Unknown hip compiler
Running resnet50 by using migraphx from commit 31906785f138d31908da3794079ff90bc321bd97 in rocm5.1.1 causes a failure:
root@bogon:/data_share/qq/Workspace/Git/migraphx/build# /opt/rocm/bin/migraphx-driver perf --model resnet50
Compiling ...
terminate called after throwing an instance of 'migraphx::version_1::exception'
what(): /data_share/qq/Workspace/Git/migraphx/src/targets/gpu/compile_hip.cpp:201: compile_hip_src: Unknown hip compiler: /opt/rocm
Aborted (core dumped)
the docker image I use:
docker pull rocm/pytorch:rocm5.1.1_ubuntu20.04_py3.7_pytorch_1.9.0
the docker image I use:
docker pull rocm/pytorch:rocm5.1.1_ubuntu20.04_py3.7_pytorch_1.9.0
How are you building MIGraphX?
I used the following steps:
git clone https://github.com/ROCmSoftwarePlatform/AMDMIGraphX.git
cd AMDMIGraphX; git checkout 3190678
rbuild build -d depend -B build_pyt_docker --cxx=/opt/rocm/llvm/bin/clang++
./build_pyt_docker/bin/migraphx-driver perf --model resnet50
I don't see the above error when I do this. For MI100 GPU.
|