Dataset columns: id (string, 4–10 chars), text (string, 4 chars–2.14M chars), source (string, 2 classes), created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30), added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06), metadata (dict)
332679731
Moved Code of Conduct to separate file - Fixes #139

Pull Request (PR) description: Moved Code of Conduct to separate file. This Pull Request (PR) fixes the following issues: Fixes #139

Task list:
- [x] Change details added to Unreleased section of CHANGELOG.md?
- [x] Added/updated documentation, comment-based help and descriptions in .schema.mof files where appropriate?
- [ ] Examples appropriately updated?
- [ ] New/changed code adheres to Style Guidelines?
- [ ] Unit and (optional) Integration tests created/updated where possible?

@Johlju - would you mind reviewing? Should be just standard Code of Conduct change.

Codecov Report: Merging #140 into dev will increase coverage by <1%. The diff coverage is 95%.

```
@@ Coverage Diff @@
##            dev   #140   +/-  ##
===================================
+ Coverage    94%    94%   +<1%
===================================
  Files         5      5
  Lines       519    520     +1
  Branches      2      1     -1
===================================
+ Hits        491    493     +2
  Misses       26     26
+ Partials      2      1     -1
```

@johlju - doh! Yes I did. Forgot I had another outstanding branch on this repo! Will fix it tonight. Actually, I'll fix up this one once #137 has been reviewed and merged. This one should be good to go now @johlju - sorry about the mess up!

No worries - it happens :smiley: Reviewed 3 of 3 files at r1. Review status: :shipit: complete! all files reviewed, all discussions resolved

Comments from Reviewable
gharchive/pull-request
2018-06-15T07:57:16
2025-04-01T06:37:26.592309
{ "authors": [ "PlagueHO", "codecov-io", "johlju" ], "repo": "PowerShell/CertificateDsc", "url": "https://github.com/PowerShell/CertificateDsc/pull/140", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
938948589
Can't use New-PSSession to "https://ps.compliance.protection.outlook.com/powershell-liveid/"

Prerequisites
- [X] Write a descriptive title.
- [X] Make sure you are able to repro it on the latest released version
- [X] Search the existing issues.
- [X] Refer to the FAQ.
- [X] Refer to Differences between Windows PowerShell 5.1 and PowerShell.

Steps to reproduce

```powershell
$Username =
$Password =
$SecPassword = ConvertTo-SecureString $Password -AsPlainText -Force
$Office365URI = "https://ps.compliance.protection.outlook.com/powershell-liveid/"
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri $Office365URI -Credential $Credentials -Authentication Basic -AllowRedirection
```

Expected behavior

$Session is a valid session in exchange online

Actual behavior

```
New-PSSession: /powershell/Connection.ps1:19
Line |
  19 | $Session = New-PSSession -ConfigurationName Microsoft.Exchange -Conne …
     | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | [ps.compliance.protection.outlook.com] Connecting to remote
     | server ps.compliance.protection.outlook.com failed with the
     | following error message : MI_RESULT_FAILED For more
     | information, see the about_Remote_Troubleshooting Help topic.
```

Environment data

```
Name                      Value
----                      -----
PSVersion                 7.1.0
PSEdition                 Core
GitCommitId               7.1.0
OS                        Darwin 19.6.0 Darwin Kernel Version 19.6.0: Thu May 6 00:48:39 PDT 2021; root:xnu-6153.141.33~1/RELEASE_X86_64
Platform                  Unix
PSCompatibleVersions      {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion      1.1.0.1
WSManStackVersion         3.0
```

Visuals: NA

Some additional information: this worked up until Monday (July 5th 2021). We had this script suddenly fail across several organizations and we're not sure what changed.

WG-Remoting: This ended up being a problem with docker. I couldn't tell you the root cause. I had to uninstall and reinstall docker to get it to work. A simple reset of docker wasn't enough.
gharchive/issue
2021-07-07T14:22:03
2025-04-01T06:37:26.603440
{ "authors": [ "PaulHigin", "jmcadams-r7" ], "repo": "PowerShell/PowerShell", "url": "https://github.com/PowerShell/PowerShell/issues/15735", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2377427215
WindowsCompatibility module fails to import on PS 7.5.0-preview.3

Prerequisites
- [X] Write a descriptive title.
- [X] Make sure you are able to repro it on the latest released version
- [X] Search the existing issues.
- [X] Refer to the FAQ.
- [X] Refer to Differences between Windows PowerShell 5.1 and PowerShell.

Steps to reproduce

WindowsCompatibility module fails to import on PS 7.5.0-preview.3 due to an extra trailing dot in the namespace in the WindowsCompatibility.psm1 file: `using namespace System.Management.Automation.`. This works on PowerShell 7.4.3, so I guess we should allow for trailing dots in PS 7.5.0 as well.

Expected behavior

I should be able to import the WindowsCompatibility module.

Actual behavior

Import-Module WindowsCompatibility -Force
ParserError: The specified namespace in the 'using' statement contains invalid characters.
Import-Module: The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory.

Error details

Exception : Type : System.Management.Automation.PSInvalidOperationException ErrorRecord : Exception : Type : System.Management.Automation.ParentContainsErrorRecordException Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory. 
HResult : -2146233087 TargetObject : WindowsCompatibility CategoryInfo : ResourceUnavailable: (WindowsCompatibility:String) [], ParentContainsErrorRecordException FullyQualifiedErrorId : Modules_ModuleFileNotFound TargetSite : Name : LoadModuleManifest DeclaringType : [Microsoft.PowerShell.Commands.ModuleCmdletBase] MemberType : Method Module : System.Management.Automation.dll Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory. InnerException : Type : System.IO.FileNotFoundException Message : The module to process 'WindowsCompatibility.psm1', listed in field 'ModuleToProcess/RootModule' of module manifest 'C:\Program Files\PowerShell\Modules\WindowsCompatibility\1.0.0\WindowsCompatibility.psd1' was not processed because no valid module was found in any module directory. 
HResult : -2147024894 Source : System.Management.Automation HResult : -2146233079 StackTrace : at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleManifest(String moduleManifestPath, ExternalScriptInfo manifestScriptInfo, Hashtable data, Hashtable localizedData, ManifestProcessingFlags manifestProcessingFlags, Version minimumVersion, Version maximumVersion, Version requiredVersion, Nullable`1 requiredModuleGuid, ImportModuleOptions& options, Boolean& containedErrors) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 3120 at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModuleManifest(String moduleManifestPath, ExternalScriptInfo manifestScriptInfo, Hashtable data, Hashtable localizedData, ManifestProcessingFlags manifestProcessingFlags, Version minimumVersion, Version maximumVersion, Version requiredVersion, Nullable`1 requiredModuleGuid, ImportModuleOptions& options, Boolean& containedErrors) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 3139 at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadModule(PSModuleInfo parentModule, String fileName, String moduleBase, String prefix, SessionState ss, Object privateData, ImportModuleOptions& options, ManifestProcessingFlags manifestProcessingFlags, Boolean& found, Boolean& moduleFileFound) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 5630 at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingExtensions(PSModuleInfo parentModule, String moduleName, String fileBaseName, String extension, String moduleBase, String prefix, SessionState ss, ImportModuleOptions options, ManifestProcessingFlags manifestProcessingFlags, Boolean& found, Boolean& moduleFileFound) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 5505 at 
Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingMultiVersionModuleBase(String moduleBase, ManifestProcessingFlags manifestProcessingFlags, ImportModuleOptions importModuleOptions, Boolean& found) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 455 at Microsoft.PowerShell.Commands.ModuleCmdletBase.LoadUsingModulePath(PSModuleInfo parentModule, IEnumerable`1 modulePath, String name, SessionState ss, ImportModuleOptions options, ManifestProcessingFlags manifestProcessingFlags, PSModuleInfo& module) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ModuleCmdletBase.cs:line 381 at Microsoft.PowerShell.Commands.ImportModuleCommand.ImportModule_LocallyViaName(ImportModuleOptions importModuleOptions, String name) in D:\DEVELOPMENT\PowerShellCore\src\System.Management.Automation\engine\Modules\ImportModuleCommand.cs:line 824 TargetObject : WindowsCompatibility CategoryInfo : ResourceUnavailable: (WindowsCompatibility:String) [Import-Module], PSInvalidOperationException FullyQualifiedErrorId : Modules_ModuleFileNotFound,Microsoft.PowerShell.Commands.ImportModuleCommand InvocationInfo : MyCommand : Import-Module ScriptLineNumber : 1 OffsetInLine : 1 HistoryId : 11 Line : ipmo WindowsCompatibility -Force Statement : ipmo WindowsCompatibility -Force PositionMessage : At line:1 char:1 + ipmo WindowsCompatibility -Force + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ InvocationName : ipmo CommandOrigin : Internal ScriptStackTrace : at <ScriptBlock>, <No file>: line 1 PipelineIterationInfo : 0 1 Environment data Name Value ---- ----- PSVersion 7.5.0-preview.3 PSEdition Core GitCommitId 7.5.0-preview.3-37-gec3840d6a1fffdbaca7173ededb4a4504b2f5b41 OS Microsoft Windows 10.0.19045 Platform Win32NT PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…} PSRemotingProtocolVersion 2.3 SerializationVersion 1.1.0.1 WSManStackVersion 3.0 Visuals Note that the WindowsCompatibility module is obsolete (it was designed for t 
PS v6, and is now archived; arguably, it should be hidden from the PowerShell Gallery or clearly marked as obsolete). Its functionality has been folded into PowerShell itself, so there is no longer a need for it. And, in general, I think it's preferable that something like `using namespace System.Management.Automation.` not be accepted.

@mklement0: Yeah, it makes sense. I have a function that exports a Scoop environment from an online system to a disconnected system, which needs to run some code from a PS Core session on Windows PowerShell, and I was using Invoke-WinCommand to accomplish that. I'll refactor this code to use New-PSSession -UseWindowsPowerShell instead. I also agree that the WindowsCompatibility module should be retired; the only concern I have is that there might be some code in the wild that has using namespace statements with an extra trailing dot that will fail to work on PS 7.5.0 once it is released as stable - I guess it should be clearly stated in the release docs as a breaking change so users are aware.

It's surprising for that trailing period to be there, but I see it in the psm1 source. The WinCompat module is deprecated and the repo is archived and no longer supported. As noted, there are other supported solutions.
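Code in the wild with the trailing-dot form could be found mechanically before upgrading. A minimal sketch (in Python, function and regex names are hypothetical, not part of any PowerShell tooling) of scanning module source for `using namespace` statements that end with a dot:

```python
import re

# Matches a `using namespace` statement whose namespace ends with a dot,
# the form PowerShell 7.5 rejects
# (e.g. `using namespace System.Management.Automation.`).
TRAILING_DOT = re.compile(r"^\s*using\s+namespace\s+\S*\.\s*$")

def find_trailing_dot_usings(source):
    """Return 1-based line numbers of `using namespace` lines ending in a dot."""
    return [
        i
        for i, line in enumerate(source.splitlines(), start=1)
        if TRAILING_DOT.match(line)
    ]
```

Running this over a module tree before moving to 7.5 would surface statements like the one in WindowsCompatibility.psm1 while leaving valid namespaces alone.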
gharchive/issue
2024-06-27T07:51:34
2025-04-01T06:37:26.614571
{ "authors": [ "SteveL-MSFT", "kborowinski", "mklement0" ], "repo": "PowerShell/PowerShell", "url": "https://github.com/PowerShell/PowerShell/issues/23993", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
450294081
Powershell hangs or dies silently when running Read-Host -AsSecureString when invoked via Python subprocess

Steps to reproduce

```python
import json, uuid, subprocess, base64

PSCODE = '''
[CmdletBinding()]
param (
    $payload = (Read-Host -AsSecureString)
)
Get-Host
'''

def base64_encode_powershell(input_string):
    """
    Encodes input value in UTF-16 Little Endian, making it suitable for use
    with powershell's encoded command argument.
    """
    byte_string = input_string.encode('utf-16-le')
    encoded_data = base64.b64encode(byte_string)
    return encoded_data

def create_ps_file(ps_name, ps_content):
    ps_path = ps_name
    with open(ps_path, 'w+') as file:
        file.write(ps_content)
    return ps_path

ps_args = {}
ps_args['URL'] = "https://google.com"
payload = base64_encode_powershell(json.dumps(ps_args))
ps_path = create_ps_file('ps_' + str(uuid.uuid4()).replace('-', '') + '.ps1', PSCODE)
output = subprocess.Popen(["pwsh", ps_path], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = output.communicate(payload)
print stdout
```

Expected behavior

stdout in the Python code should contain the output of Get-Host:

```
Name             : ConsoleHost
Version          : 6.2.1
InstanceId       : 5168dee8-2e5c-4967-9d78-b53372cf351d
UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture   : en-US
CurrentUICulture : en-US
PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
DebuggerEnabled  : True
IsRunspacePushed : False
Runspace         : System.Management.Automation.Runspaces.LocalRunspace
```

Actual behavior

stdout in the Python code is empty, as is stderr.

Environment data

```
Name                      Value
----                      -----
PSVersion                 6.2.1
PSEdition                 Core
GitCommitId               6.2.1
OS                        Linux 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018
Platform                  Unix
PSCompatibleVersions      {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion      1.1.0.1
WSManStackVersion         3.0
```

Note: The expected behavior is observed when using PowerShell 5 on a Windows system, using the exact same Python code (except for changing pwsh to powershell.exe and adding a -f to the args in the subprocess call).

Suggest to simplify python code and give os, python version

Close as stale issue. Feel free to continue discussion.
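The UTF-16-LE/Base64 step in the repro is the same encoding PowerShell expects for its -EncodedCommand argument; a self-contained Python 3 sketch of just that step (the helper name is illustrative, not from the issue):

```python
import base64

def encode_for_powershell(input_string):
    """Encode a string as UTF-16-LE Base64, the format PowerShell's
    -EncodedCommand parameter expects."""
    return base64.b64encode(input_string.encode("utf-16-le")).decode("ascii")

# Round-trip: decoding the Base64, then the UTF-16-LE, yields the original.
encoded = encode_for_powershell("Get-Host")
decoded = base64.b64decode(encoded).decode("utf-16-le")
```

Isolating this step makes it easy to verify the payload bytes are correct independently of how pwsh handles stdin.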
gharchive/issue
2019-05-30T12:56:02
2025-04-01T06:37:26.619220
{ "authors": [ "chuanjiao10", "iSazonov", "vector-sec" ], "repo": "PowerShell/PowerShell", "url": "https://github.com/PowerShell/PowerShell/issues/9766", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1458886589
Remove minor versions from PSCompatibleVersions

Supersedes #18346.

I'd be more willing to see this not remove all minor versions but instead leave the latest minor version.

A language breaking change can be introduced only in a new major version.

@kilasuit the @PowerShell/powershell-committee's position is that 7.0 represents 7.x (docs will need to be updated) instead of perpetually updating this list with every yearly release of 7.x.

@SteveL-MSFT that's fair, just wanted to highlight this possible other viewpoint so that it's mentioned within this PR for full transparency to those coming to this in future.

@xtqqczze Can you please fix the failing tests? Some tests need to be updated accordingly.
gharchive/pull-request
2022-11-22T02:03:17
2025-04-01T06:37:26.622171
{ "authors": [ "SteveL-MSFT", "daxian-dbw", "iSazonov", "kilasuit", "xtqqczze" ], "repo": "PowerShell/PowerShell", "url": "https://github.com/PowerShell/PowerShell/pull/18635", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
425907076
[SPAppManagementServiceApp] Resource does not create proxy afterwards

Details of the scenario you tried and the problem that is occurring: When the Service App Proxy isn't created during deployment, it is no longer created at a later stage.

Verbose logs showing the problem: N/A

Suggested solution to the issue: Update the resource to check for the presence of the proxy and create it when not present.

The DSC configuration that is used to reproduce the issue (as detailed as possible): N/A

The operating system the target node is running: Win 2K16, PSv5.1

Version of SharePoint that is used (e.g. SharePoint 2016): All

Version and build of PowerShell the target node is running: v5.1

Version of the DSC module that was used ('dev' if using current dev branch): Dev

Will be included in my next bugfix PR
gharchive/issue
2019-03-27T11:32:28
2025-04-01T06:37:26.626779
{ "authors": [ "ykuijs" ], "repo": "PowerShell/SharePointDsc", "url": "https://github.com/PowerShell/SharePointDsc/issues/1044", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
155341249
Add New-Markdown test for dynamic parameters

Courtesy to @dotps1 for the function with dynamic parameters example

--ff only merge

Merged in master with --ff
gharchive/pull-request
2016-05-17T19:26:36
2025-04-01T06:37:26.628688
{ "authors": [ "vors" ], "repo": "PowerShell/platyPS", "url": "https://github.com/PowerShell/platyPS/pull/86", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
219879436
Get-MySqlPort should be skipped when an optional port parameter is supplied

Resources that perform Get-MySqlPort currently fail for me as my.ini is not located in the standard location. I'm not really sure why it is even necessary to parse the port from the config; I assumed there's something I've overlooked, so I've created a my.ini path override optional parameter for the Get-MySqlPort function and the associated resources. Happy to submit a PR.

That helper function is used in several resources to get the port - if there is a better way to get the port then that should be used. Otherwise your workaround sounds promising. If you want to send in a PR then please do.
gharchive/issue
2017-04-06T12:16:29
2025-04-01T06:37:26.639358
{ "authors": [ "andy1547", "johlju" ], "repo": "PowerShell/xMySql", "url": "https://github.com/PowerShell/xMySql/issues/19", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
100864319
Update errant file encodings

Remove smartquotes in MOF schema and ensure file is UTF8

Hi @Iristyle, I'm your friendly neighborhood Microsoft Pull Request Bot (You can call me MSBOT). Thanks for your contribution! You've already signed the contribution license agreement. Thanks! The agreement was validated by Microsoft and real humans are currently evaluating your PR. TTYL, MSBOT;

Thanks for fixing encoding in this and other modules @Iristyle
gharchive/pull-request
2015-08-13T20:45:10
2025-04-01T06:37:26.641345
{ "authors": [ "Iristyle", "KarolKaczmarek", "msftclas" ], "repo": "PowerShell/xRemoteDesktopSessionHost", "url": "https://github.com/PowerShell/xRemoteDesktopSessionHost/pull/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
86331294
Update Schema to support singleton

Fix/Workaround for issue #2

Hi @TravisEz13, I'm your friendly neighborhood Microsoft Pull Request Bot (You can call me MSBOT). Thanks for your contribution! It looks like you're a Microsoft contributor (Travis Plunk). If you're full-time, we DON'T require a Contribution License Agreement. If you are a vendor, please DO sign the electronic Contribution License Agreement. It will take 2 minutes and there's no faxing! https://cla.microsoft.com. TTYL, MSBOT;

looks good
gharchive/pull-request
2015-06-08T21:06:09
2025-04-01T06:37:26.644126
{ "authors": [ "KarolKaczmarek", "TravisEz13", "msftclas" ], "repo": "PowerShell/xTimeZone", "url": "https://github.com/PowerShell/xTimeZone/pull/6", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2116962671
DAMAGE event should be emitted only for damage caused by tower attacks

Any calls to do_spell_damage(), do_attack_damage() and AOE variations inside tower scripts should not emit the DAMAGE event. This behavior is according to the original JASS code.

The DAMAGE event should also be emitted for all instances of damage resulting from splash attacks. Proof: the tower in the screenshot is Ash Geyser. It has a splash attack and an ability which applies a debuff to creeps inside the DAMAGE event callback. The screenshot shows that a creep was hit by the splash attack (not as the main target) and it received the debuff.
gharchive/issue
2024-02-04T08:48:10
2025-04-01T06:37:26.699307
{ "authors": [ "Kvel2D" ], "repo": "Praytic/youtd2", "url": "https://github.com/Praytic/youtd2/issues/364", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
180948574
No padding between axis label and color legend bar

Should have a 3px space. Right now the first color bar sits directly next to the label.

fixed
gharchive/issue
2016-10-04T16:58:33
2025-04-01T06:37:26.710773
{ "authors": [ "Rmohan06", "benoitjchevalier" ], "repo": "PredixDev/px-vis-xy-chart", "url": "https://github.com/PredixDev/px-vis-xy-chart/issues/6", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
203421912
[Request] Automatic Fishing Net auto-eject

I can see that this has been partially looked at in #200 but I'm not sure how we might go about automatically extracting from the Fishing Net. Putting a hopper under it does extract the contents but appears to halt production. With Cyclic 1.10.2-1.9.19 I'm unable to extract from the top face using Translocators (extracts rod, not loot), Transfer Nodes, nor Actually Additions' ESD. (edit: Tried using an AA PhantomFace too and it complains. I'm guessing your inventory isn't being made available.)

Perhaps you could have the Net automatically eject to an inventory that's on the top face? That way we could throw a chest on top and then use automation to clear the chest. With the current restriction on available faces, I'd have to choose between having a crafter auto-insert fresh rods, OR having something auto-extract the fish. I don't have EnderIO in this pack so I can't use one conduit for both :P

Perhaps you could set the net to halt when the rod is close to breaking so that I don't lose my Unbreaking/Luck rod? Alternatively, ease the placement restrictions to expose 2 faces for automation.

Yeah, I could do an inventory on/off feature, so if it's off then it will always drop below as if it was full. Letting rods break or not should be doable too. I haven't thought about the faces having only one open; I'll look at it, maybe 3 out of 4 sides. By the way, if the fishing rod has Mending it shouldn't break in there, it should repair itself from the fishing XP.

Made a bunch of changes in the latest release, let me know what you think: https://minecraft.curseforge.com/projects/cyclic/files/2374252

Wonderful! I'll update the pack next weekend and check out the changes. Thanks!
gharchive/issue
2017-01-26T16:43:04
2025-04-01T06:37:26.942641
{ "authors": [ "PrinceOfAmber", "xenoflot" ], "repo": "PrinceOfAmber/Cyclic", "url": "https://github.com/PrinceOfAmber/Cyclic/issues/250", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
625207392
Address JSON view for member map

- indexes member address information in Solr
- new JSON view to return map and member data with the same filters used on the members list view

@thatbudakguy codefactor is helpfully flagging a couple of todos I left in the code — I think they depend on what we decide we need from this view, so I probably need input from you to resolve. IDK how we want to handle merging this one — do we merge into develop when you're happy with it even though it's not really functional on its own?

Codecov Report: Merging #643 into develop will increase coverage by 0.00%. The diff coverage is 99.07%.

```
@@ Coverage Diff @@
##        develop    #643   +/-  ##
=========================================
  Coverage   98.08%  98.09%
=========================================
  Files         218     219      +1
  Lines       11568   11671    +103
  Branches       63      63
=========================================
+ Hits        11347   11449    +102
- Misses        221     222      +1
```
gharchive/pull-request
2020-05-26T21:21:56
2025-04-01T06:37:26.952497
{ "authors": [ "codecov-commenter", "rlskoeser" ], "repo": "Princeton-CDH/mep-django", "url": "https://github.com/Princeton-CDH/mep-django/pull/643", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
935275878
Update the column headers in the peakgroups search output

BUG DESCRIPTION

Problem: Column headers in the peakgroups search results (both basic and advanced) use Caroline's processed tissue data. They should use the terms from issue #107. They could also be informed by the peakgroups tab of customer-desired consolidated view formats.

Steps to reproduce: Go to http://127.0.0.1:8000/DataRepo/search_peakgroups/ and search for tissue is brain.

Current behavior: Names used in Caroline's processed tissue data.

Expected behavior: Use the terms from issue #107 - TraceBase Terms and Definitions.

Suggested Change: Simple edit of search_peakgroups and the basic search templates.

Comment: Change the column headers from Caroline's terms to agreed upon terms (issue #107).

Originally posted by @hepcat72 in https://github.com/Princeton-LSI-ResearchComputing/tracebase/pull/127#discussion_r662395704

ISSUE OWNER SECTION

Assumptions: List of assumptions made WRT the code. E.g. We will assume input is correct (explaining why there is no validation).

Requirements: List of conditions to be met for the feature. E.g. Every column/row must display a value, i.e. cannot be empty.

Limitations: A list of things this work will not fix. E.g. Getting edge case X to work requires too much effort and will not be fixed in this effort.

Affected/Changed Components: Files, environment variables, external executables, database tables, cron job settings, etc.

DESIGN

GUI Change description: Describe changes the user will see.

Code Change Description (Pseudocode optional): Describe code changes planned for the fix.

Tests: A test should be planned for each requirement (above), where possible.

Dupe of #147
gharchive/issue
2021-07-01T23:04:15
2025-04-01T06:37:26.961308
{ "authors": [ "hepcat72" ], "repo": "Princeton-LSI-ResearchComputing/tracebase", "url": "https://github.com/Princeton-LSI-ResearchComputing/tracebase/issues/133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
425115576
What is the parent property in the environment of the wrap hook for?

I don't get what the parent property is for. We don't have a plugin that is using it, and it doesn't even do what the name suggests: parent actually holds the nearest token array; i.e. in the case that the content of a token is a single token, then parent will not be the parent token but the token stream of the parent.

Apart from that, I also don't see why it would be useful. Assuming we wrap a token which is in a token stream: what can parent be used for? We can alter tokens positioned before the current token, but that doesn't change the output of stringify because those tokens are stringified already. Changing the current token doesn't do anything either. The only thing parent can be used for is to alter the items positioned after the current token. Btw. we can't add or remove items because stringify uses map. So, what is it for?

Also, I don't see why the wrap hook should be able to modify the token stream anyway.

Added 11 May 2013. Because of the proximity in time between the commits, I suspect it was initially added for something related to the WPD plugin (20c0a1e96d6760aa2f0fad5b9a4f77d6f6b89434), but even there it is unused. I believe it is safe to remove it, considering how unreliable it is anyway.
gharchive/issue
2019-03-25T21:06:08
2025-04-01T06:37:26.966327
{ "authors": [ "Golmote", "RunDevelopment" ], "repo": "PrismJS/prism", "url": "https://github.com/PrismJS/prism/issues/1836", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
319796297
Replace compile with implementation

My project keeps giving me this error, and fortunately Android Studio now points to the library which has this problem. Please replace

```
dependencies {
    compile 'com.android.support:support-annotations:25.0.0'
}
```

with "implementation" (and any others if there are).

Can you make a pull request and fix this?

Done. I have never done this before so I hope I did it correctly :)

thanks!
gharchive/issue
2018-05-03T06:00:13
2025-04-01T06:37:26.976339
{ "authors": [ "RJFares", "dschuermann" ], "repo": "PrivacyApps/html-textview", "url": "https://github.com/PrivacyApps/html-textview/issues/136", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1685368637
Breakout 4: Token Vault workflows

Problem Statement: Token scanning is important to ensure that secrets/tokens aren't exposed (very important for open source projects and the supply chain that depends on them). A repo can contain up to 100 secrets and an org up to 500.

Tasks (*lean on maintainer for admin privileges)
- [ ] checkout the action for Hashicorp vault
- [ ] create a .yml file in workflows
- [ ] Set up a Vault instance
- [ ] Authentication with GitHub OIDC Token
- [ ] Example usage to reference

2023-04-26

Vault is a Hashicorp service for storing secrets. OIDC workflows are used to generate short-lived auth tokens. This is preferable to generating a long-lived token with read:org. We have a Vault account already and can use that to generate a Vault instance.
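The workflow file the tasks above call for might look roughly like this. This is a hedged sketch: the Vault URL, role name, and secret path are placeholders, and the exact inputs should be checked against the hashicorp/vault-action documentation before use.

```yaml
name: fetch-secrets
on: [push]

permissions:
  id-token: write   # lets the job request a GitHub OIDC token
  contents: read

jobs:
  secrets:
    runs-on: ubuntu-latest
    steps:
      - name: Import secrets from Vault
        uses: hashicorp/vault-action@v2
        with:
          url: https://vault.example.org:8200   # placeholder Vault instance
          method: jwt                           # authenticate via the GitHub OIDC token
          role: amplify-ci                      # placeholder Vault role
          secrets: |
            secret/data/ci apiKey | API_KEY     # placeholder secret path/mapping
```

The `id-token: write` permission is what enables the short-lived OIDC flow, avoiding any long-lived token stored as a repo secret.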
gharchive/issue
2023-04-26T16:07:17
2025-04-01T06:37:27.018339
{ "authors": [ "ipc103", "manishapriya94" ], "repo": "ProgramEquity/amplify", "url": "https://github.com/ProgramEquity/amplify/issues/560", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2521148429
feat: Implementation of CRUD with JSON files (transports)

Pull Request Description

Type of Change
- [ ] ✨ New feature (non-breaking change which adds functionality)
- [ ] 🛠️ Bug fix (non-breaking change which fixes an issue)
- [ ] ❌ Breaking change (fix or feature that would cause existing functionality to change)
- [ ] 🧹 Code refactor
- [ ] ✅ Build configuration change
- [ ] 📝 Documentation
- [ ] 🗑️ Chore

Task linked: CU-8689n1ev7 Implementation of CRUD with JSON files transports
gharchive/pull-request
2024-09-12T01:57:07
2025-04-01T06:37:27.215606
{ "authors": [ "JeferssonCL", "JorgeHB69" ], "repo": "Programming6-projects/LosCuriosos", "url": "https://github.com/Programming6-projects/LosCuriosos/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
213744461
Documents roundtrip param

See https://github.com/Project-OSRM/osrm-backend/issues/3741

Now in the backend https://github.com/Project-OSRM/osrm-backend/pull/4185.
gharchive/pull-request
2017-03-13T11:52:02
2025-04-01T06:37:27.224436
{ "authors": [ "daniel-j-h" ], "repo": "Project-OSRM/node-osrm", "url": "https://github.com/Project-OSRM/node-osrm/pull/304", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
2541213760
Add Initial r9q Support

What Changed: Add support for r9q

Reason: As I don't think I'll have many more updates soon, I'll do the PR again.

Checklist
- [x] Is what you changed Tested?
- [x] Is the Source Code Cleaned?

Now the DSDT doesn't exist. Please make a PR in the Silicium-ACPI repo.
gharchive/pull-request
2024-09-22T19:02:43
2025-04-01T06:37:27.238671
{ "authors": [ "Icesito68", "Robotix22" ], "repo": "Project-Silicium/Mu-Silicium", "url": "https://github.com/Project-Silicium/Mu-Silicium/pull/1436", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
969312519
Wrap analytics id in quotes

This might get the Google Analytics working for the Foundations site. I noticed that my other JupyterBook site (which has working analytics) wrapped the ID in double quotes: https://github.com/brian-rose/ClimateLaboratoryBook/blob/87801650d5f2e374494d8ccfc955b2a3b3000254/_config.yml#L45

This is consistent with the JupyterBook config reference, which shows double quotes: https://jupyterbook.org/customize/config.html

I tried using double quotes here, but prettier insisted on changing them to single quotes.

@kmpaul I suggest merging this and seeing if it gets the analytics working. I'm not sure we have a way to test this without merging first. If this doesn't work, then we might be able to wait for a fix: https://github.com/executablebooks/jupyter-book/issues/1300 https://github.com/pydata/pydata-sphinx-theme/pull/439

I don't think this fixed anything, sadly. I can't be 100% sure, but I don't see any differences in the actual HTML generated for the page.

Ok! I suppose my Climate Laboratory book must be using an "old style" Google Analytics ID, though I'm not yet clear what that means.

Does it start with a UA-? Or a G-? The first is the old and the second is the new.
gharchive/pull-request
2021-08-12T18:12:36
2025-04-01T06:37:27.273625
{ "authors": [ "brian-rose", "kmpaul" ], "repo": "ProjectPythia/pythia-foundations", "url": "https://github.com/ProjectPythia/pythia-foundations/pull/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1229283838
:technologist: Create the PlayerShouldInvulnerable tag close #904 If the game mode gets changed by world or sacred-treasure processing after this runs, it seems like things could go wrong — is that okay? The player would of course still take damage within that tick, but honestly I think it's only the kind of problem that can be written off as "it was a bit late", so it's probably fine.
gharchive/pull-request
2022-05-09T07:13:29
2025-04-01T06:37:27.278146
{ "authors": [ "ChenCMD" ], "repo": "ProjectTSB/TheSkyBlessing", "url": "https://github.com/ProjectTSB/TheSkyBlessing/pull/908", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
1313440167
Testing: Babylon Controller User Story Remaining untested Babylon controllers: src/Components/Core/Presentation/Babylon/ Acceptance criteria [ ] Avatar/AvatarController [ ] LearningElement/LearningElementController Definition of Ready [ ] User story is small enough for the sprint [ ] User story is clearly understandable for everyone involved [ ] User story effort has been estimated [ ] User story has acceptance criteria [ ] User story adds value to the product or the development [ ] User story origin is known (stakeholder) [ ] User story is assigned to a release Definition of Done [ ] All acceptance criteria are fulfilled [ ] The implementation has been pushed [ ] The code conventions are followed [ ] A code review was carried out (or pair programming was used) [ ] Unit test coverage is above 92%, or as high as possible with valid reasons why 92% cannot be reached [ ] All tests pass [ ] Technical documentation within the implementation was created, if necessary (source code) [ ] Documentation of the approach was created, if necessary (ZenHub comment) [ ] There are no known bugs [ ] The implementation of the user story was successfully accepted by the product owner Please add your planning poker estimate with ZenHub @philgei
gharchive/issue
2022-07-21T15:28:08
2025-04-01T06:37:27.284441
{ "authors": [ "DerKatsche", "Lizardguard" ], "repo": "ProjektAdLer/2D_3D_AdLer", "url": "https://github.com/ProjektAdLer/2D_3D_AdLer/issues/242", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
210458192
Forgot password issue Question: On the login form, after a user types their username, clicks the forgot-password link, and confirms a new password, will they return to the login page with the fields already pre-filled (including the new password), or do they have to enter all the information again? When the user comes back to the login form after resetting the password via the forgot-password link, they have to fill in both details again.
gharchive/issue
2017-02-27T11:30:03
2025-04-01T06:37:27.292555
{ "authors": [ "Suparna-Acharya", "iroshni" ], "repo": "Promact/trappist", "url": "https://github.com/Promact/trappist/issues/7", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1059416880
Query BDD Test by productId As a Developer I need to describe the behavior of listing all promotions by productId So that I can test the feature under BDD Details & Assumptions: Write the feature using Gherkin syntax that is understood by the behave tool Acceptance Criteria: When I run the tests using `behave` Then I should see the "list all promotions by productId query" scenario pass IBM Cloud toolchain: Delivery Pipeline deployed promotions to prod, including commits 9fc2661199993d8990995905067cd7dac4192450, 594968bcce6b80cd84ae9d4c43896a7c0c1dd42a
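The service-side behavior such a scenario would exercise can be sketched in plain Python. This is a hypothetical helper — the real promotions service defines its own models and routes, and the function name here is invented purely for illustration:

```python
def list_promotions_by_product_id(promotions, product_id):
    """Return all promotion records whose 'product_id' matches.

    `promotions` is assumed to be an iterable of dicts, e.g. rows
    already fetched from the service's datastore.
    """
    return [p for p in promotions if p.get("product_id") == product_id]


# Example data that a BDD scenario background might load:
sample = [
    {"name": "BOGO", "product_id": 1},
    {"name": "10% off", "product_id": 2},
    {"name": "Free shipping", "product_id": 1},
]

matches = list_promotions_by_product_id(sample, 1)
print([p["name"] for p in matches])  # → ['BOGO', 'Free shipping']
```

A behave step implementation would then assert that the names returned by the service endpoint match the ones expected by the scenario table.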
gharchive/issue
2021-11-21T16:02:40
2025-04-01T06:37:27.296366
{ "authors": [ "Tingtingyy", "dizys" ], "repo": "PromoSquad/promotions", "url": "https://github.com/PromoSquad/promotions/issues/105", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1164436454
Updated routes.py file for descrip. & create promo Updated the routes.py file to include our name and description. Also, updated the Get Index function within the file to include read promotion. reviewed!
gharchive/pull-request
2022-03-09T21:03:06
2025-04-01T06:37:27.297850
{ "authors": [ "kripa528", "tv547" ], "repo": "PromotionsSquad/promotions", "url": "https://github.com/PromotionsSquad/promotions/pull/32", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
179755750
ifconfig not found on CS:GO server I am trying to run a CS:GO server on the panel, but it seems like it can't find ifconfig, so it crashes. Here's the log: http://hastebin.com/ukaqocacaw.sql /cc @parkervcp Oh, I solved this by adding apt-get install -y net-tools to the srcds Docker image, but now the CS:GO server gets stuck at assigning an IP (it fails). I am unable to reproduce this on my system, and did not encounter any issues while downloading CS:GO through the container. I am going to assume that these issues are most likely due to the method that you used to move files, and could be either permissions or missing files entirely.
gharchive/issue
2016-09-28T12:14:07
2025-04-01T06:37:27.338366
{ "authors": [ "DaneEveritt", "andrea2107" ], "repo": "Pterodactyl/Daemon", "url": "https://github.com/Pterodactyl/Daemon/issues/26", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
283014927
[7.0] Server Sub-User Issues. Panel or Daemon: Panel Version of Panel/Daemon: 7.0 Server's OS: Ubuntu 16.04 Your Computer's OS & Browser: Chrome, Win 10 Add Details Below: Adding a sub-user to a server is non-functional; unsure about deleting a sub-user from a server. Adding and deleting sub-users/users from the panel itself is fine, though. Can you please clarify what the issue here is @odddellarobbia?
gharchive/issue
2017-12-18T20:47:32
2025-04-01T06:37:27.340602
{ "authors": [ "DaneEveritt", "odddellarobbia" ], "repo": "Pterodactyl/Panel", "url": "https://github.com/Pterodactyl/Panel/issues/816", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
670035580
Improve UX for zooming with restricted geolevels Description We prevent selecting the blocks geolevel at zoom levels below a certain threshold. However, if you zoom in, select the blocks geolevel, and then zoom out, blocks are still selected but they are grayed out and we don't clearly explain what changed and what a user needs to do to continue working with blocks. The user can't select geounits on the map until they manually select an available geolevel. For example: To remedy this, we want to make it clearer that the user needs to zoom back in to see and interact with blocks by showing them a message (see screenshot). We also don't want to gray out the blocks geolevel, so it's clear the blocks geolevel is still active. AC: When a user selects a restricted geolevel (i.e. Blocks) and zooms out past the min-zoom, we show a message that explains: Zoom in to work with [geolevel]. (See screenshot) Also when this happens, the blocks geolevel button is still active and indicates blocks are still selected (rather than being grayed out). When zoomed out, if a user selects a non-restricted geolevel the restricted geolevel button is deactivated and returns to a state like in #268 / #287. If a user has a restricted geounit selected, the user can zoom out past the min-zoom (rather than be limited to it) and the selection is maintained (i.e. sidebar shows proposed change, and if you zoom back in the selection is still there) Screenshots I edited this issue to reflect the new direction mentioned in this message: https://github.com/PublicMapping/districtbuilder/pull/270#issuecomment-669479824 Updated to capture the various nuances. @jfrankl What do you think of Zoom in to work with blocks? It's a little more concise than Zoom in to view and select blocks. @tgilcrest looks good to me.
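The state logic described by these acceptance criteria can be modeled roughly as follows. This is a hypothetical sketch only — the actual DistrictBuilder frontend is a TypeScript/React app and structures this differently, and the geolevel names and zoom thresholds below are invented:

```python
def geolevel_ui_state(selected_level, zoom, min_zoom_for, restricted=("block",)):
    """Decide what the geolevel UI should show for the current zoom.

    `min_zoom_for` maps a geolevel name to the minimum zoom at which it
    is usable; the names and thresholds used here are illustrative.
    """
    is_restricted = selected_level in restricted
    too_far_out = zoom < min_zoom_for.get(selected_level, 0)
    return {
        # the selected geolevel button stays active even when zoomed out
        "button_active": True,
        # show "Zoom in to work with <geolevel>" only for a restricted
        # geolevel viewed below its minimum zoom
        "show_zoom_message": is_restricted and too_far_out,
        # map selection is disabled until the user zooms back in
        "selection_enabled": not (is_restricted and too_far_out),
    }


state = geolevel_ui_state("block", zoom=8, min_zoom_for={"block": 10})
print(state["show_zoom_message"])  # → True
```

The existing selection is left untouched by this state change, matching the criterion that a selected geounit survives zooming out and back in.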
gharchive/issue
2020-07-31T17:23:02
2025-04-01T06:37:27.347178
{ "authors": [ "jfrankl", "tgilcrest" ], "repo": "PublicMapping/districtbuilder", "url": "https://github.com/PublicMapping/districtbuilder/issues/245", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
215600909
firmware_bundles are obsolete The most recent commit (72eca92) updated firmware to rev 201611070000, whereas current Bean Loader apps require rev 201611160000. In a possibly related issue, the bean program_firmware command does not fail-fast if the bean's firmware is more recent than the revision in the SDK's firmware_bundles. Instead it proceeds, but does not complete: [...] 2017-03-21T00:39:41.915Z INFO All services have been setup! Connected! 2017-03-21T00:39:41.946Z INFO Char read success(2a27): 2A Programming device firmware: 884aea153e0a 2017-03-21T00:39:41.952Z INFO Begin update called 2017-03-21T00:39:41.975Z INFO Char read success(2a26): 201611160000 Img-B 2017-03-21T00:39:41.976Z INFO Comparing firmware versions: Bundle version (201611070000), Bean version (201611160000) 2017-03-21T00:39:41.979Z INFO Starting FW update for device Bean+(884aea153e0a) 2017-03-21T00:39:41.981Z INFO Begin FW @ 1490056782 2017-03-21T00:39:41.984Z INFO Triggered a notification on Identify char The command appears to hang at this point and must be manually terminated. Another possibly related bug is that the bean program_sketch command also hangs. This has been reported several times recently in the forum: 2017-03-21T00:42:35.289Z INFO All services have been setup! Connected! 2017-03-21T00:42:35.318Z INFO Char read success(2a27): 2A Found sketch setLed for board Bean+ 2017-03-21T00:42:35.328Z INFO No longer scanning... 2017-03-21T00:42:35.331Z INFO State transition: null -> STATE_INACTIVE 2017-03-21T00:42:35.331Z INFO Beginning sketch upload of sketch: setLed 2017-03-21T00:42:35.332Z INFO State transition: STATE_INACTIVE -> STATE_AWAIT_READY 2017-03-21T00:42:35.333Z INFO Sketch upload started! Once again, the command appears to hang at this point and must be manually terminated. Other commands that hang are: read_accel, read_ble_config, and read_device_info. I wonder if the hangs are related to the firmware version? 
It seems unlikely that they are related to BLE dongle because the hangs have been reported when using several different approved dongles. windows 10 python 2.7.13 node v6.10.0 bean 0.6.1 I forked and recompiled version - with latest firmware it is not possible to program lightblue bean anymore. Same problem as described here: http://beantalk.punchthrough.com/t/cli-unable-to-upload-sketch-update-firmware-rename-bean/4187/14 Not working on all platforms probably (mine linux) I tried to downgrade firmware (using reversed patch), but unfortunately command also hangs ./bean.sh program_firmware -n Bean 2017-04-01T19:59:35.606Z INFO Setting scan timeout: 15 seconds 2017-04-01T19:59:35.611Z INFO Starting to scan... Found device with name/address: Bean/04a3169af315 2017-04-01T19:59:36.349Z INFO No longer scanning... 2017-04-01T19:59:36.350Z INFO Connecting to device: Bean 2017-04-01T19:59:36.883Z INFO Looking up services for device: Bean 2017-04-01T19:59:37.725Z INFO Found service: OAD Service / f000ffc004514000b000000000000000 2017-04-01T19:59:37.725Z INFO Found service: Generic Access / 1800 2017-04-01T19:59:37.725Z INFO Found service: Generic Attribute / 1801 2017-04-01T19:59:37.726Z INFO Found service: Device Information / 180a 2017-04-01T19:59:37.726Z INFO Found service: Serial Transport Service / a495ff10c5b14b44b5121370f02d74de 2017-04-01T19:59:37.726Z INFO Found service: Unknown / a495ff20c5b14b44b5121370f02d74de 2017-04-01T19:59:37.726Z INFO Found service: Battery Service / 180f 2017-04-01T19:59:37.726Z INFO Found service: Scan Parameters / 1813 2017-04-01T19:59:37.726Z INFO Found service: Human Interface Device / 1812 2017-04-01T19:59:37.726Z INFO Found service: Unknown / 03b80e5aede84b33a7516ce34ec4c700 2017-04-01T19:59:37.727Z INFO Service setup successfully: Generic Access 2017-04-01T19:59:37.727Z INFO Service setup successfully: Generic Attribute 2017-04-01T19:59:37.727Z INFO Service setup successfully: Human Interface Device 2017-04-01T19:59:37.727Z INFO 
Service setup successfully: Scan Parameters 2017-04-01T19:59:37.728Z INFO Setting up IDENTIFY and BLOCK notifications 2017-04-01T19:59:37.728Z INFO Service setup successfully: Device Information 2017-04-01T19:59:37.728Z INFO Setting up SERIAL notifications 2017-04-01T19:59:37.728Z INFO Service setup successfully: Unknown 2017-04-01T19:59:37.728Z INFO Service setup successfully: Battery Service 2017-04-01T19:59:37.729Z INFO Service setup successfully: Unknown 2017-04-01T19:59:37.848Z INFO Service setup successfully: OAD Service 2017-04-01T19:59:37.870Z INFO Service setup successfully: Serial Transport Service 2017-04-01T19:59:37.871Z INFO All services have been setup! Connected! 2017-04-01T19:59:37.893Z INFO Char read success(2a27): 1E Programming device firmware: 04a3169af315 2017-04-01T19:59:37.895Z INFO Begin update called 2017-04-01T19:59:37.915Z INFO Char read success(2a26): 201611070000 Img-B 2017-04-01T19:59:37.916Z INFO Comparing firmware versions: Bundle version (201609290000), Bean version (201611070000) 2017-04-01T19:59:37.916Z INFO Starting FW update for device Bean(04a3169af315) 2017-04-01T19:59:37.916Z INFO Begin FW @ 1491076778 2017-04-01T19:59:37.917Z INFO Triggered a notification on Identify char Sort of good news is that Bean is not bricked, because it is possible to program it using OSX non-CLI tools The firmware in repo has been updated (20170406) but I'm still seeing the hangs for read_ble_config and read_device_info, etc. 
windows 10 python 2.7.13 node v6.10.2 bean 0.6.2 Same problem still stuck CLI on Linux, Windows and OSX like this: 2017-04-01T19:59:37.893Z INFO Char read success(2a27): 1E Programming device firmware: 04a3169af315 2017-04-01T19:59:37.895Z INFO Begin update called 2017-04-01T19:59:37.915Z INFO Char read success(2a26): 201611070000 Img-B 2017-04-01T19:59:37.916Z INFO Comparing firmware versions: Bundle version (201609290000), Bean version (201611070000) 2017-04-01T19:59:37.916Z INFO Starting FW update for device Bean(04a3169af315) 2017-04-01T19:59:37.916Z INFO Begin FW @ 1491076778 2017-04-01T19:59:37.917Z INFO Triggered a notification on Identify char Only way to upload sketch only through OSX GUI uploader Hi folks! We are working through this issue with the Node loader. I'll make sure to update this issue when we make headway. Thanks! On Tue, Apr 11, 2017 at 7:38 AM, evaldsurtans notifications@github.com wrote: Same problem still stuck CLI on Linux, Windows and OSX like this: 2017-04-01T19:59:37.893Z INFO Char read success(2a27): 1E Programming device firmware: 04a3169af315 2017-04-01T19:59:37.895Z INFO Begin update called 2017-04-01T19:59:37.915Z INFO Char read success(2a26): 201611070000 Img-B 2017-04-01T19:59:37.916Z INFO Comparing firmware versions: Bundle version (201609290000), Bean version (201611070000) 2017-04-01T19:59:37.916Z INFO Starting FW update for device Bean(04a3169af315) 2017-04-01T19:59:37.916Z INFO Begin FW @ 1491076778 2017-04-01T19:59:37.917Z INFO Triggered a notification on Identify char Only way to upload sketch only through OSX GUI uploader — You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub https://github.com/PunchThrough/bean-sdk-node/issues/7#issuecomment-293247059, or mute the thread https://github.com/notifications/unsubscribe-auth/AACLYdHQr1grfayqOE3d4_IFeZK5zeLHks5ru3RYgaJpZM4MjNgH . 
Punch Through Tech Support suggested I do the following, and it works much better: $ cd %AppData%\npm\node_modules\bean-sdk $ npm install --save --save-exact noble@1.7.0 The problem is apparently that bean-sdk relies on noble 1.7.0 exactly, but the dependency in package.json is specified as ^1.7.0 instead of 1.7.0, so the latest noble version (1.8.0) is installed instead. The fix downgrades bean-sdk's noble install to 1.7.0 and rewrites the dependency so that an update won't accidentally revert the fix. Yes, with the latest update it is possible to program_sketch using the CLI, thank you! I am still running into hangs on Linux with program_sketch or program_firmware
gharchive/issue
2017-03-21T01:11:20
2025-04-01T06:37:27.373010
{ "authors": [ "adamwolf", "estiens", "evaldsurtans", "joebowbeer" ], "repo": "PunchThrough/bean-sdk-node", "url": "https://github.com/PunchThrough/bean-sdk-node/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
655344012
fix: Add relates to to the duplicates list This adds the relates to trailer to the list of trailers we detect duplicates from, but the root of the problem is that we sometimes insert a trailer when we don't actually need to, because it's already there. This also fixes that. Relates-to: #220 #315
gharchive/pull-request
2020-07-12T06:50:44
2025-04-01T06:37:27.388375
{ "authors": [ "PurpleBooth" ], "repo": "PurpleBooth/git-mit", "url": "https://github.com/PurpleBooth/git-mit/pull/317", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
183188170
Return appropriate HTTP status codes on error This pull request makes sure errors actually return non-OK HTTP status codes. Before, all requests (that didn't cause an actual uncaught exception serverside) returned 200. With this change, they instead return what they should according to the documentation, and for the undocumented ones, according to common sense. With this pull request, the errors are mapped as follows: Error.NONE - 200 OK Error.INVALID_CLIENT - 400 Bad request Error.INVALID_SERVICE - 400 Bad request Error.INVALID_SECRET - 400 Bad request Error.DUPLICATE_LISTEN - 409 Conflict Error.RATE_TOOFAST - 429 Too many requests Error.SERVICE_NOTFOUND - 404 Not found Error.ARGUMENT_MISSING - 400 Bad request Error.INVALID_PUBKEY - 400 Bad request (Though this error is never used.) Error.CONNECTION_CLOSING - 499 Client closed request (I wasn't sure about this one either, since it also goes unused.) Error.NO_CHANGES - 400 Bad request Pretty nice, eh? This also resolves #17. Nice, I just need to make sure this is reflected in the docs from now on. Error.CONNECTION_CLOSING WAS used when websockets were still in the API instead of a connector. I should remove it some time.
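The mapping above can be expressed as a simple lookup table. The sketch below is illustrative only — the real Pushjet server (a Flask app) defines its Error values and response handling in its own modules, and the enum values here are placeholders:

```python
from enum import Enum


class Error(Enum):
    # Names follow the mapping above; the numeric values are placeholders.
    NONE = 0
    INVALID_CLIENT = 1
    INVALID_SERVICE = 2
    INVALID_SECRET = 3
    DUPLICATE_LISTEN = 4
    RATE_TOOFAST = 5
    SERVICE_NOTFOUND = 6
    ARGUMENT_MISSING = 7
    INVALID_PUBKEY = 8
    CONNECTION_CLOSING = 9
    NO_CHANGES = 10


HTTP_STATUS = {
    Error.NONE: 200,                # OK
    Error.INVALID_CLIENT: 400,      # Bad request
    Error.INVALID_SERVICE: 400,
    Error.INVALID_SECRET: 400,
    Error.DUPLICATE_LISTEN: 409,    # Conflict
    Error.RATE_TOOFAST: 429,        # Too many requests
    Error.SERVICE_NOTFOUND: 404,    # Not found
    Error.ARGUMENT_MISSING: 400,
    Error.INVALID_PUBKEY: 400,
    Error.CONNECTION_CLOSING: 499,  # Client closed request (nginx convention)
    Error.NO_CHANGES: 400,
}


def status_for(error):
    """Map an application error to its HTTP status, defaulting to 500."""
    return HTTP_STATUS.get(error, 500)


print(status_for(Error.RATE_TOOFAST))  # → 429
```

In a Flask handler, the JSON error body would then be returned alongside `status_for(err)` instead of an unconditional 200.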
gharchive/pull-request
2016-10-15T04:31:51
2025-04-01T06:37:27.399897
{ "authors": [ "Mechazawa", "obskyr" ], "repo": "Pushjet/Pushjet-Server-Api", "url": "https://github.com/Pushjet/Pushjet-Server-Api/pull/18", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
55632785
Windows Phone notification payload Hey, just curious what the Windows Phone notification payload looks like. iOS looks like { "aps": { "sound": "default", "alert": "push title" }, "u": "{key: value}" // user data } Android looks like { "title": "push title", "userdata": "{key: value}" // user data } Windows Phone looks like // ??? { "content": "message", "userdata": "{key: value}", "onStart": false }
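A sketch of how these per-platform payloads might be assembled in code. The field names are taken from the examples in this thread; actual Pushwoosh payloads may carry additional fields, and the helper function here is invented for illustration:

```python
import json


def build_payload(platform, message, userdata=None):
    """Build a per-platform push payload dict.

    Field names mirror the examples above; this is illustrative only.
    The user data is serialized to a JSON string, matching the quoted
    '{key: value}' user-data fields in the thread.
    """
    user = json.dumps(userdata or {})
    if platform == "ios":
        return {"aps": {"sound": "default", "alert": message}, "u": user}
    if platform == "android":
        return {"title": message, "userdata": user}
    if platform == "wp":  # Windows Phone
        return {"content": message, "userdata": user, "onStart": False}
    raise ValueError(f"unknown platform: {platform}")


print(build_payload("wp", "hello")["content"])  # → hello
```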
gharchive/issue
2015-01-27T15:35:33
2025-04-01T06:37:27.401811
{ "authors": [ "sean-hill", "shaders" ], "repo": "Pushwoosh/pushwoosh-sdk-samples", "url": "https://github.com/Pushwoosh/pushwoosh-sdk-samples/issues/24", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1197060762
🛑 Home Assistant is down In 503b115, Home Assistant (https://homeassistant.pxlbuzzard.com) was down: HTTP code: 502 Response time: 312 ms Resolved: Home Assistant is back up in e4999c3.
gharchive/issue
2022-04-08T09:18:15
2025-04-01T06:37:27.418095
{ "authors": [ "PxlBuzzard" ], "repo": "PxlBuzzard/upptime", "url": "https://github.com/PxlBuzzard/upptime/issues/450", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
712515068
PDF Redaction PDF Redaction - what will change - We generally find PDFs to contain sensitive information, so it is necessary to remove those details. The code will annotate the sensitive information. Instructions Create a new folder for your script; the file/folder name should be appropriate. Create a README.md in your folder with program instructions. Add requirements.txt if needed. Please add/delete options that are not relevant. [x] Adding New Code [] Improving Code [] Improving Documentation [] Bug Fix Programming Language [x] Python :star2: Star it :fork_and_knife:Fork it :handshake: Contribute to it! Discord server - https://discord.gg/FXyh2S3 Happy Coding, I would love to work on this
gharchive/issue
2020-10-01T05:58:15
2025-04-01T06:37:27.422317
{ "authors": [ "debdutgoswami" ], "repo": "Py-Contributors/awesomeScripts", "url": "https://github.com/Py-Contributors/awesomeScripts/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
365697800
dasch cached basis implementation An implementation of @MikhailRyazanov's basex caching and slicing, for the dasch methods is in my PyAbel fork dasch-cached-memory-basis This issue provides discussion point on the best PyAbel coding practices. Implementation @MikhailRyazanov's clever ideas: a) caching the basis numpy array, for faster repeat transforms (rather than reading the basis file) and b) slicing the basis from a larger basis (cached or file) Are easy to implement: a) add global variables _basis = None, as per basex, and a _method = None to identify the Dasch method, to abel.dasch.py and b) add slicing to the cached basis _basis[:cols, :cols], to extract from any larger basis. # cache basis _basis = None _method = None : _basis = abel.tools.basis.get_bs_cached(method, cols, cols, basis_dir=basis_dir, cached_basis=(_basis, _method), verbose=verbose) _method = method get_bs_cached() is common to all the basis methods, and was extracted to abel.tools.basis.py during coding of the abel.dasch.py methods. abel.tools.basis.py became somewhat untidy when the linbasex method was included, since that method broke the simple method_basis_{cols}_{nbf}.npy naming scheme, as uniqueness required additional variables. The use of abel.tools.basis.py requires the dasch global variables to be passed to abel.tools.get_bs_cached(), in this fork, as a tuple if cached_basis is not None and cached_basis[0] is not None: _basis, _method = cached_basis if _basis.shape[0] >= cols and _method == method: if verbose: print('Using memory cached basis') return _basis : basis_files = glob.glob(path_to_basis_files) for bf in basis_files: if int(bf.split('_')[-2]) >= cols: # relies on file order if verbose: print("Loading {:s} basis {:s}".format(method, bf)) D = np.load(bf) # trim to size return D[:cols, :nbf] Q1: Given the mess caused by linbasex is it better to return get_bs_cached() into abel.dasch.py, as is the case for abel.basex.py? Q2: Any alternative suggestions? 
Testing (as implemented in my fork) test code gist Generates two Dribinski sample quadrants, comparing the transform time for (0) generating the basis, (1) memory cached basis, and (2) re-reading the basis file. Then, for the smaller size quadrant, the transform time for a basis extracted from the larger basis (3) cached, (4) read from file. Note, that file reading is expensive, for a small image it is better to generate the basis, rather than read it. python test-dasch.py Dribinski image quadrant (251, 251) ======================================== (0) Generate basis ------------------------------ A suitable basis for 'three_point' was not found. A new basis will be generated. But don't worry, it will be saved to disk for future use. Operator matrix saved for later use to, ./three_point_basis_251_251.npy ... in 15.2 ms (1) Cached basis ------------------------------ Using memory cached basis ... in 0.7 ms (2) Read basis from file ------------------------------ Loading three_point basis ./three_point_basis_251_251.npy ... in 1.3 ms Dribinski image quadrant (2501, 2501) ======================================== (0) Generate basis ------------------------------ A suitable basis for 'three_point' was not found. A new basis will be generated. But don't worry, it will be saved to disk for future use. Operator matrix saved for later use to, ./three_point_basis_2501_2501.npy ... in 1399.4 ms (1) Cached basis ------------------------------ Using memory cached basis ... in 346.2 ms (2) Read basis from file ------------------------------ Loading three_point basis ./three_point_basis_2501_2501.npy ... in 368.7 ms [remove small size basis file three_point_basis_251_251.npy] Back to (251, 251) image size ---------------------------------------- (3) Cached from larger basis cache ------------------------------ Using memory cached basis ... in 1.1 ms (4) Read from larger basis file ------------------------------ Loading three_point basis ./three_point_basis_2501_2501.npy ... 
in 20.1 ms clean up, remove basis file I was also confused about basex having its own get_bs_basex_cached(), unlike all other methods. Why was it made so? I think, the simplest solution would be to keep all basis handling within the corresponding methods modules rather than combine them all in one separate module. Especially since different methods might have quite different requirements and approaches. On the other hand, there is some common code (this, however, is true for their ..._transform() methods as well). We probably should compare how much code is shareable and how much is not. My basex basis cropping is also more complicated. And currently, with the implementation of n ≠ nbf, I actually want to move from the n, nbf scheme to n, sigma, since it makes more sense to the user (can use the same kind of basis defined by the width sigma for any n and do not bother about calculating nbf, which is used only internally) and simplifies the cropping implementation. Another question, related to https://github.com/PyAbel/PyAbel/issues/226#issuecomment-423258091: do we want separate caches for each "method" ("basex", "two_point", "onion_peeling", ...), such that they can coexist without evicting each other? Or the idea is to have caching only for repetitive calling of the same method (with constant parameters)? I have implemented basex cache purging as basex_cleanup(), considering that there will be separate two_point_cleanup(), onion_peeling_cleanup() and so on, akin to basex_transform(), two_point_transform(), onion_peeling_transform... But if we want only one common cache, then it probably should be renamed to just cleanup(). And maybe even moved (with the cache variables) to transform.py? moved (with the cache variables) to transform.py That only works for abel.Transform(), direct calls abel.method.method_transform() bypass transform.py. I think each method should handle its own cache, and perhaps, have abel.basis_purge(method), to purge unwanted basis variables. 
@DanHickstein is normally the king of these types of decisions ;-) Great discussion here! I'm fine to have each method have it's own cache. I also like the idea of using your check of the consistency of cropped basis sets as a unit test. You can perhaps simple use very small images in order to make it quick. In regards to @MikhailRyazanov's point about n, nbf, and sigma: shouldn't each saved basis set be labelled with all three labels? Of course, a certain n and sigma suggests a reasonable nbf, but can't any sigma be selected in principle? The basis defined in the BASEX article has only one parameter σ, which defines the overall scaling. That is, the spacing between the maxima of ρk and ρk+1 is always σ. In principle, k in (14) can be non-integer, but then its projection (15) is not a finite expansion. So nbf is really uniquely determined by n and sigma (within maybe ±1, depending on its rounding when n/sigma is not an integer). Also, as I understand, in all Dasch methods nbf = n always, so nbf there is also redundant. And linbasex basis needs more than 2 parameters. I don't think that a common naming scheme, except the {method}_basis_ prefix, can exist, and since now all methods will be handling their basis files internally, it would be natural to allow them to use their own naming conventions. Completed with PR #232 . Closing. Great work!
gharchive/issue
2018-10-02T00:52:52
2025-04-01T06:37:27.440572
{ "authors": [ "DanHickstein", "MikhailRyazanov", "stggh" ], "repo": "PyAbel/PyAbel", "url": "https://github.com/PyAbel/PyAbel/issues/231", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
771561280
Compilation failed with M1 macOS, PyO3 0.12, Python 3 🌍 Environment Your operating system and version: macOS Big Sur 11.1 Your python version: Python 3.8 (system), Python 3.9 (homebrew), Python 3.9 (homebrew, Rosetta) How did you install python (e.g. apt or pyenv)? Did you use a virtualenv?: brew install, no. Your Rust version (rustc --version): rustc 1.49.0-beta.4 (877c7cbe1 2020-12-10) Your PyO3 version: 0.12 Have you tried using latest PyO3 master (replace version = "0.x.y" with git = "https://github.com/PyO3/pyo3")?: no 💥 Reproducing This happens on my own project, but I can reproduce it with maturin's test-crate install rust beta with rustup (stable does not yet have darwin aarch64 support) install python3.9 aarch64 with native homebrew 1. Compile with System python 3.8 git clone https://github.com/PyO3/maturin.git cd maturin/test-crates/pyo3-pure cargo build would produce the following error: = note: Undefined symbols for architecture arm64: "_PyExc_ValueError", referenced from: _$LT$pyo3..exceptions..PyValueError$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h059c73bcf950bd88 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o) "_PyExc_BaseException", referenced from: _$LT$pyo3..exceptions..PyBaseException$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h82828fb355186050 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o) "_PyList_Append", referenced from: pyo3::types::list::PyList::append::_$u7b$$u7b$closure$u7d$$u7d$::h5630ce7dc966791a in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.4.rcgu.o) "_PyExc_SystemError", referenced from: _$LT$pyo3..exceptions..PySystemError$u20$as$u20$pyo3..type_object..PyTypeInfo$GT$::type_object_raw::h76492aefdb4928b0 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.0.rcgu.o) "_PyList_New", referenced from: pyo3::types::list::PyList::empty::h3e9f2e6039a3ef3a in 
libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.4.rcgu.o) "_PyErr_PrintEx", referenced from: pyo3::err::PyErr::print::h12dca2eb6fa69d90 in libpyo3-a087e80261990d6e.rlib(pyo3-a087e80261990d6e.pyo3.1al8rrrd-cgu.7.rcgu.o) "_PyObject_SetAttr", referenced from: ... ... The python3 binary is from Apple, in universal format with both x86_64 and arm64. file $(which python3) gives /usr/bin/python3: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e] 2. Compile with HomeBrew Python 3.9 basically the same process and same error git clone https://github.com/PyO3/maturin.git cd maturin/test-crates/pyo3-pure PYO3_PYTHON=python3.9 cargo build file $(which python3.9) output: /opt/homebrew/bin/python3.9: Mach-O 64-bit executable arm64 3. Compile with Rosetta under x86 works fine As you're using macOS - you would need cargo rustc --release -- -C link-arg=-undefined -C link-arg=dynamic_lookup? Or otherwise you could try building with maturin develop, which will include these flags for you. @davidhewitt Thanks for the reply, that is indeed the problem. I copied my cargo config ~/.cargo/config which contains the fix for x86_64: [target.x86_64-apple-darwin] rustflags = [ "-C", "link-arg=-undefined", "-C", "link-arg=dynamic_lookup", ] Changing to [target.aarch64-apple-darwin] fixes the problem. Perhaps the documentation @ https://pyo3.rs/master/index.html#using-rust-from-python can be updated to include aarch64 for anyone else encountering this issue? @davidhewitt Oh and maturin develop currently does not work because of platforms crate cannot correctly detect the triplet aarch64-apple-darwin. $ maturin develop 💥 maturin failed Caused by: Could guess the current platform Ref: https://github.com/RustSec/platforms-crate/blob/master/src/platform.rs Will open a separate issue to either maturin or platforms.
gharchive/issue
2020-12-20T10:45:17
2025-04-01T06:37:27.664345
{ "authors": [ "LER0ever", "davidhewitt" ], "repo": "PyO3/pyo3", "url": "https://github.com/PyO3/pyo3/issues/1330", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
631846307
User Guide: Rewrite parallelism chapter

I took a stab at rewriting the parallelism chapter because it seemed to suggest that you need to release the GIL in order to achieve parallelism within your Rust code, and that allow_threads enables running Python code in parallel. See #640 and #649 for discussion, and thanks @Askannz for the nice example, which I included. Also included the previous PR #957 to make the benchmarks more comparable.

Closes #956

Hmm, it looks like we have a problem with the Travis integration: https://travis-ci.org/github/PyO3/pyo3/builds/695173811. It looks like the tests are executed correctly, but GitHub fails to fetch the result.

Closed and reopened to retrigger CI. Thanks!
gharchive/pull-request
2020-06-05T18:49:10
2025-04-01T06:37:27.667166
{ "authors": [ "Alexander-N", "kngwyu" ], "repo": "PyO3/pyo3", "url": "https://github.com/PyO3/pyo3/pull/959", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2693682620
CPLEX: Warning, line 133208941: Name 'x10609872' does not exist.

Dear all,

I am getting several of these warnings when using linopy with PyPSA:

```
Warning, line 133208941: Name 'x10609872' does not exist.
```

It looks to me like a constraint or variable that CPLEX expects is not available? The network is a modified pypsa-eur network, to which I added different system components.

Best, Georg

Thanks Fabian! After your comment, I wanted to investigate whether this has to do with linopy/PyPSA releases that came after the latest PyPSA-Eur version (0.13), and indeed I managed to solve the issue by downgrading PyPSA (and thereby linopy) to the lowest version mentioned in the environment.yaml.

Don't know if this is even worth following up on, but with the latest version I also got some mentions of repeating rows, which was usually an indicator that the run would fail (for example "Row 'c6164160' repeats."). I will just leave it here, in case it might be helpful. But it might also just be a result of incompatible packages, so feel free to ignore it, of course :-)
gharchive/issue
2024-11-26T08:29:40
2025-04-01T06:37:27.669605
{ "authors": [ "thomgeo" ], "repo": "PyPSA/linopy", "url": "https://github.com/PyPSA/linopy/issues/385", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1839684295
bug: romanize("ฤดู", "royin")

Description
These words raise an error when passed to romanize() with the royin engine: ฤดูใบไม้ผลิ, ฤดูร้อน, ฤดูหนาว, ฤดูใบไม้ร่วง, ฤดู

Expected results
No error.

Current results

```
    return select_romanize_engine(engine)(text)
  File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 229, in romanize
    romanized_words = [_romanize(word) for word in words]
  File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 229, in <listcomp>
    romanized_words = [_romanize(word) for word in words]
  File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 213, in _romanize
    word = _replace_consonants(word, consonants)
  File "C:\Users\brigh\anaconda3\envs\RomanDictionary\lib\site-packages\pythainlp\transliterate\royin.py", line 197, in _replace_consonants
    mod_chars.append(_CONSONANTS[consonants[j]][1])
IndexError: list index out of range
```

Steps to reproduce
romanize("ฤดู", "royin")

PyThaiNLP version: 4.0.2
Python version: 3.10.12
Operating system and version: windows11
More info: Run from VSCode, conda environment
Possible solution: No response
Files: No response

@konbraphat51 Hello! It is not a bug. romanize() with the "royin" engine must be given a single syllable to produce a correct result. If you want to input a whole word, you should use romanize(text, engine: str = 'thai2rom'). We will improve our documentation. Thank you for reporting!

Oh, got it. thank you!
gharchive/issue
2023-08-07T15:12:19
2025-04-01T06:37:27.702774
{ "authors": [ "konbraphat51", "wannaphong" ], "repo": "PyThaiNLP/pythainlp", "url": "https://github.com/PyThaiNLP/pythainlp/issues/831", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1114880674
new metric SCC for Images

What does this PR do?
Adds a new metric, Spatial Correlation Coefficient for Images, part of #799

Before submitting
- [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements) - Yes
- [x] Did you read the contributor guideline, Pull Request section? - Yes
- [ ] Did you make sure to update the docs? - No
- [ ] Did you write any new necessary tests? - No

PR review
Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

Did you have fun?
Make sure you had fun coding 🙃

Hi @nishant42491, thanks for wanting to contribute. However, I can only see that you have created a new file. Is that correct?

Yes, that's correct. Sorry for the delay; I was trying to understand how to calculate the SCC coefficient, as there are not many resources available online to understand the implementation of Laplacian filters. I am working with my college professors to understand more about the SCC metric; however, due to my college exams, my progress has been slow. I will try to speed things up again. Really sorry for the delay.

Hi @nishant42491, do you have any updates here? :]

Yep, I have taken references from andrewkhalels sewar repo containing information about the SCC metric and have created a scc.py file containing its implementation. However, I am not 100% sure that it's correct, as I still don't understand the SCC metric completely. I'll be more than happy to change anything that needs changing. I would love any suggestions on how to proceed further. Thank you a lot for your patience with me, I really appreciate it :].
@SkafteNicki @stancld I have finished creating a functional and class-based interface for the scc metric and they seem to be working properly. I have tested it against the sewar repo's SCC metric and my metric is giving me the right outputs. However, I am stuck on implementing the test_scc.py file, as I don't exactly know what to write in the tests file. I have tried using the sewar repo's scc metric in the tests file, but it does not work with batched inputs, so I am not sure what to test my metric against. Any suggestions on how to move forward will really help me out. Thank you in advance for your help :]

Hi @nishant42491, I tried to take a stab at finishing this PR. I've written tests and added some docs, but the tests fail with an assertion when I try to compare against the implementation from sewar. Could you maybe help with getting this PR finished, by providing at least an example of how to get this implementation to match the one from sewar?

I think the difference between the two metrics arises because the package in sewar has implemented a high-pass filter whilst I have not; however, the difference did not seem significant whilst I was testing the metric. @SkafteNicki should I try implementing a high-pass filter for my metric too, to try and make it pass the tests you have written?
@nishant42491 yes, we kind of need the two metrics to produce the same output, so we make sure that this implementation is doing the right thing. Maybe the reason the tests are failing is that the input is random? Did you test with some structured input?

![puppy](https://user-images.githubusercontent.com/79035403/165307101-b9628ea4-bf33-4c70-95c2-f16c83995358.jpeg)
![mud_turtle](https://user-images.githubusercontent.com/79035403/165307128-2da3fc20-14fe-4dc1-9f13-719b9ffd98bb.jpg)

@SkafteNicki I have received outputs 0.079 and 0.080 for 2 images for sewar's metric and my metric respectively. Will try and implement the high-pass filter, which should solve the problem. @SkafteNicki Thnx for the help on the tests, I really appreciate it :]

@nishant42491 is it correct that the current differences in implementation are:

- The input to sewar should be target, preds whereas your implementation takes the opposite order preds, target (which is the same as the rest of our codebase)
- The input to sewar is expected to be single images of shape [H, W, C] whereas your implementation takes in [B, C, H, W] (so the channel dimension should be moved when we compare)

Just trying to figure out why I cannot get the numbers to match (I know the high-pass filter is missing, but it should still be fairly close as to my understanding).

Yep, those are the differences between the sewar implementation and the metric's implementation.
For the inputs, I have tested; in both metrics the difference is slight, generally up to the 3rd decimal place. One more thing is that my default kernel size is 9, whilst sewar's kernel size is 8, so you have to set the sewar metric's kernel size to 9 explicitly to get similar results.

@nishant42491 tried changing the implementation based on what you have told me but cannot get it to match.
Below is shown the wrapped reference implementation from sewar that should do everything correctly:

- summarizes a batch of input
- flips the input
- permutes the dimensions
- changes the window size to 9

```python
from sewar.full_ref import scc

def _reference_scc(preds, target, reduction):
    val = 0.0
    for p, t in zip(preds, target):
        val += scc(t.permute(1, 2, 0).numpy(), p.permute(1, 2, 0).numpy(), ws=9)
    val = val if reduction == "sum" else val / preds.shape[0]
    return val

from torchmetrics.functional import spatial_correlation_coefficient
import torch

_ = torch.manual_seed(42)
BATCH_SIZE = 10
CHANNELS = 3
SIZE = 100
preds = torch.randint(0, 255, (BATCH_SIZE, CHANNELS, SIZE, SIZE)).float()
target = torch.randint(0, 255, (BATCH_SIZE, CHANNELS, SIZE, SIZE)).float()

print(spatial_correlation_coefficient(preds, target, reduction='sum').item())
print(_reference_scc(preds, target, reduction='sum'))
```

Can you find what I do wrong?

Your implementation seems correct. I'm not quite sure why the outputs differ.
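As background for the comparison in this thread: SCC is essentially a windowed Pearson correlation computed on high-pass-filtered images (this framing is my reading of the discussion, not taken from either implementation). A minimal pure-Python sketch of the correlation building block:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences.

    Returns 0.0 when either sequence has zero variance, mirroring the
    convention some image metrics use for flat windows (an assumption here).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0
    return cov / (var_x * var_y) ** 0.5
```

A full SCC would apply this per sliding window after high-pass filtering and aggregate the results; the implementations discussed above differ in window size (8 vs 9) and in whether the filter is applied, which is consistent with the small numeric discrepancies reported.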
gharchive/pull-request
2022-01-26T10:50:39
2025-04-01T06:37:27.722526
{ "authors": [ "SkafteNicki", "nishant42491", "stancld" ], "repo": "PyTorchLightning/metrics", "url": "https://github.com/PyTorchLightning/metrics/pull/800", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1047600253
[RFC] Future of gpus/ipus/tpu_cores with respect to devices

Proposed refactoring or deprecation

Currently we have two methods of specifying devices. Let's take GPUs for example:

1. The standard case that we've all grown used to and are mostly aware of:

```python
trainer = Trainer(gpus=2)
```

2. Introduced in 1.5, tries to make the number of devices agnostic. This means if you specify accelerator='tpu' we automatically know to use 2 TPU cores:

```python
trainer = Trainer(devices=2, accelerator='gpu')
```

Recently, it has come up in https://github.com/PyTorchLightning/pytorch-lightning/pull/10404#discussion_r744562512 that we may want to deprecate and prevent further device-specific names from appearing in the Trainer (such as hpus). Related conversation: https://github.com/PyTorchLightning/pytorch-lightning/issues/9053#issuecomment-904239610

I see two options:

🚀 We keep both the device-specific arguments (gpus, tpu_cores, ipus for the Trainer) and devices
👀 We drop gpus, tpu_cores, ipus in the future and fully rely on devices. (Potentially this would likely be done in Lightning 2.0, instead of after 2 minor releases)

cc @kaushikb11 @justusschock @ananthsub @awaelchli

IMO we should follow the contributing guidelines: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/.github/CONTRIBUTING.md#main-core-value-one-less-thing-to-remember Having multiple options in the public API to do the same thing is really confusing.

+1, totally agree

The current device-related flags are confusing. Multiple flags partially overlap and interfere with each other. When multiple flags are passed in, we prioritize some and ignore the others. For example: gpu=2, device=3: device will be ignored. gpu=2, cpu=2, accelerator=cpu: what will happen? I think cpu with num_process=2?

I prefer option 2: drop gpus, tpu_cores, ipus in the future and fully rely on devices. And can we have devices be an int, not default to auto? With this option, the accelerator flag is for the device type and devices (probably rename to devices_num?) is for the device number.
It's also scalable for new device types like hpus I think going from this: Trainer(gpus=2) to trainer = Trainer(devices=2, accelerator='gpu') is a major step backwards in usability. now users have to dig into docs to understand how to use things. it definitely violates the "one-less-thing-to-remember" part of the API. I guess, I'm just wondering why we're exploring this? I thought we were already pretty stable on the device API stuff @williamFalcon The more kinds of accelerators we get, the more flags we will also have. Switching from Trainer(gpus=8) to Trainer(tpu_cores=8) also requires users to dig through the docs. Actually I find it easier to have Trainer(devices=2, accelerator='gpu'/'tpu') as the flags stay the same, it is easier to remember and also scaling better. So personally this would be the "one-less-thing-to-remember" for me. Also I suspect, we would likely have the accelerator defaulting to 'auto' then which means that Trainer(devices=8) would run on gpu if available, on tpu if available and if no special accelerator is available it would fall back to cpu. @williamFalcon As @justusschock shared, the previous approach doesn't scale well and makes exploration confusing. Furthermore, the new API provides an auto as follows: Trainer(devices="auto", accelerator="auto") which would make the code runnable on every hardware without any code changes. Which isn't possible with the previous API. To address the discoverability issue, isn't it common to import the Trainer and see what parameters are available? Isn't this more common that going to the docs to find the parameter? I opened the issue as I felt it was important as a community we come to an agreement as the idea was floating around a few PRs (with inconsistent agreement). It's important to have one single direction here (especially as we introduce other accelerators). I do strongly disagree with removing gpus/tpu_cores/ipus/hpus/cpus from the Trainer for primarily ease/discoverability. 
I think it would be beneficial to try to get community votes on this, so maybe a post on our General slack channel is warranted?

Even something like gpus as lightning defines them today is ambiguous. PyTorch also supports AMD GPUs: https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/ But this isn't supported at all with Lightning because gpu assumes nvidia/cuda, whereas PyTorch's device allows for more backends. In my head the accelerator in Lightning maps to the pytorch device being used. So why don't we preserve the same semantics PyTorch already offers for this, to smoothen the transition and keep parity?

Adding to the conversation. In this issue, a user is requesting MIG instead of GPUs for an A100-like machine: https://github.com/PyTorchLightning/pytorch-lightning/issues/10529

@tchaton do we have a consensus to move forward with this issue?

Though maybe that is a bit different there. Going back to @williamFalcon's argument that going from Trainer(gpus=2) to Trainer(devices=2, accelerator='gpu') is more work for the user: my question here is that, as far as I understand (correct me if I am wrong), you have CPU and then either gpu/tpu/hpu/... That is why you can have something as follows: Trainer(devices=2, accelerator='gpu'), where accelerator is only one combination. I also assume that our users "don't" want to care whether it is gpu/tpu/hpu.., as their code will remain the same as long as it is not CPU, and even if it is CPU, PL helps you there to make it seamless. Finally, we can automatically detect what kind of accelerator you have available today with auto. That being said, what if we had a "wrapper" around anything that is non-CPU, such that we can keep the same structure while making it "easy" for the users, i.e. Trainer(cpus=2, xpus=2): this will automatically find whether x is gpu/tpu/hpu. Then we allow the default to be auto, i.e. Trainer(cpus=null, xpus=null), or we could use -1 for example.
To take the pro-Accelerator argument to the extreme (also with the "fractional" devices), how about not splitting devices= and accelerator=? If instantiating Accelerator all the time is too much of a hassle for @williamFalcon 's taste (I never liked the configuration part of tf sessions, either, and there is a good reason why PyTorch doesn't force you to do device = torch.Device("cuda") all over the thing but will just take "cuda"), how about: Trainer(devices=2) # I want two of whatever is available (so GPUs > CPUs in preference, but only the same kind. Occasions where "casual users" will have TPU GPU and IPU in the same box will be rare enough... This is breaking because it would make "GPU if available" the default :( (though I never understood why it is not). For more elaborate configs, one could have Trainer(devices=Accelerator("cuda", 2)) My apologies for adding another color of shed, but to my mind, there are these cases we want to cater to: The easy one! Needing to instantiate Accelerator is a bit more API for people to remember than just gpus=.... Personally, I have to concentrate really hard to know how many c and l to put in there, too. The turbopropower-user: Would it not be more consistent and flexible to have Accelerator as the single truth about what their thing trains on? I certainly like to consider all my clusters of 512 DGXes for training BERT in 30 seconds a single device... The unknown future. I think we'll see a lot more blur to "thing the training runs on = n devices of type a" that the proposed API of devices=2, accelerator=... suggests. Best regards Thomas To add to @t-vi comment, I believe the accelerator could be set to 'auto' by default as it is quite unlikely there is an overlapping machine with both GPUs and TPUs available. 
So the hardware is totally abstracted, and this provides an experience closer to Jax with their auto platform detection. Trainer(gpus=8) or Trainer(tpu_cores=8) or Trainer(cpu=8) or Trainer(hpus=8) or Trainer(ipus=8) ... would be replaced directly with:

```python
Trainer(devices=2)
```

If a user has a machine with a GPU and wants to debug on CPU, he would simply add the accelerator to force the decision-making:

```python
Trainer(devices=2, accelerator="cpu")
```

But the most critical point is:

> I think we'll see a lot more blur to "thing the training runs on = n devices of type a" that the proposed API of devices=2, accelerator=... suggests.

I believe this API would need to provide a smarter hardware detection mechanism for MIG hardware.

Coming from both the High-Performance and embedded spaces, I'll weigh in here with some general thoughts.

Often with large clusters, we have models and/or datasets which can't fit on a single node. If the API says gpus=2 or tpus=2, what control do I have over where those devices are or which devices get used for which parts of the model? Should PTL support this type of deployment at all?

There are certain accelerators I might want to use which are only useful for inference but not training. FPGAs, for example, are really great for low-latency inference, but with the above API, do I have to instantiate a "Trainer" to use a device for inference? This makes little sense to me as a user. Is this something that PTL wants to support? If so, a rework is in order.

There is research being done on heterogeneous architectures which have GPUs, DSPs, etc. available on a single node. The distribution of work and communication between these devices is non-trivial and a scheduling nightmare, but it's not too far off. Virtualized communication technology like CXL and composable infrastructure like Liquid will enable these types of nodes to "exist" in a cloud or on-prem cluster.
I think PTL should be forward-thinking and have these types of setups in mind, especially if it is to be adopted by the research community as a usable tool.

A "device" as we think of it today (a GPU, a CPU) will likely be upended when in-memory processors come to the mainstream. (What is a "memory" device? How many "cores" does it have? It quickly loses any meaning). What about the Xilinx Versal architecture? It has many compute cores in a dynamic software-defined network fabric connected to an FPGA. It's one "device", but it's also many.

To the above points, I have a couple of suggestions:

- The trainer should be agnostic to what it is executing on. It should be the object facilitating and orchestrating the training session (it is a trainer after all), but it shouldn't care what device is on the other end. If it does have knowledge of device specifics, then as many of the users above pointed out, the API and argument count/complexity will explode if even just a few accelerators become mainstream and anything but basic training strategies are to be supported.
- We should have an Accelerator API describing a device, its location, and its features. The average user shouldn't have to use this API at all, or even know it exists, and sane defaults should be set. However, it should be flexible enough to be used for cutting-edge device research. Where this accelerator API fits in the ecosystem is going to need to be decided by the community, but it shouldn't be passed to the trainer, because if I have a device which is only for inference acceleration, then it makes no sense to create a trainer.

I am very interested to see where this discussion goes, and I apologize for the ramble.

This discussion has extended to other related points, but to give my opinion on the original question, I fully agree with @tchaton's API vision here: https://github.com/PyTorchLightning/pytorch-lightning/issues/10410#issuecomment-972712672, where the original gpus, tpus, ... are deprecated and removed.
I don't think adding new options xpus=..., or devices=Accelerator("cuda", 2) should be in the cards anymore, as the new devices=... format was just introduced in 1.5 and we would be once again deprecating this newly introduced functionality for a different thing. There's no clear winner here and we just need to choose one approach.

> do I have to instantiate a "Trainer" to use a device for inference

> it shouldn't be passed to the trainer because if I have a device which is only for inference acceleration, then it makes no sense to create a trainer.

Keep in mind that the Trainer has that name since it's been the core part of Lightning since the beginning, but it's way more than a "trainer" and could be thought of as an engine; for example, we have validate, test, and predict, which are split from the training procedure.

A lot of great inputs! Let me start off by summarizing: The current API was built when only GPUs were supported. Then TPUs were added. And now, a few years later, we live in a world where more alternatives are starting to emerge. This is the current API:

```python
Trainer(gpus=2)
Trainer(tpus=2)
```

But now, we live in a world where more than GPU|TPU devices are coming out (HPU, etc...). In this case, the proposal is to modify the API like so:

```python
Trainer(devices=2, accelerator='tpu')
```

Well... we also introduced the 'auto' flag, so the actual default call would look like this:

```python
Trainer(devices=2)  # because Trainer(devices=2, accelerator='auto') is the default
```

@t-vi also brought up the alternative that there could be a class in the event that configs get unwieldy:

```python
Trainer(accelerator=Accelerator("cuda", 2))
```

@dlangerm also brought up certain complex scenarios: Multinode training (which we've supported from day 1 @dlangerm, and you specify the num_nodes argument). Today we already support selecting many configurations here, so I'm not sure what a relevant use case is. Yes, PL already supports inference.
We can think about configurations during inference needing a "Trainer" (@tchaton @carmocca maybe "Trainer" needs to be renamed in 2.0). Heterogeneous hardware is an awesome upcoming research challenge that we'll be excited to tackle next year, but today it's premature until research matures a bit more. In-memory processors also sound promising; if you know of a real use case, happy to collaborate on working out how to do something like that.

@dlangerm we do have an accelerator API (it's been there since 1.0)... it's just used internally and not exposed to the user.

Decision

So, with all that said, if there's broader community support for moving from:

```python
Trainer(gpus=2)
```

to:

```python
Trainer(devices=2, accelerator='tpu')  # default is auto
Trainer(devices=2)
```

then I'm happy to back this option, as it is more scalable and my only concern is "having to only remember one thing"... So, I'd love to hear more from the community about the effect on usability.
We could go ahead with this change for v1.6, along with major Accelerator refactor. Given the above decisions, is there a consensus on renaming Trainer to something more appropriate for the "Brain" or "Engine" that is has become? If these changes are towards a hardware-agnostic API that can be used for either training or inference, Trainer will become very confusing. Even today, creating a Trainer instance to perform Trainer.predict is fairly unintuitive. @dlangerm this is not really related to this issue, so I won't go into much detail here. Feel free to open a new issue for this discussion. From my POV, we shouldn't rename the Trainer before 2.0. Renaming flags is one thing (and we will need to have a pretty long deprecation cycle for those), but renaming the major components as the Trainer or LightningModule would be too much of a breaking change since this could also break the API in all other places as well. Hey @kaushikb11 I saw you assigned yourself to this issue. I was planning on working on the accelerator_connector refactor (https://github.com/PyTorchLightning/pytorch-lightning/issues/10422) which was blocked by this issue. Am I okay to proceed with accelerator_connector refactor or is that something you were planning on doing? I am working on this in #11040 - do we also want to deprecate num_processes and num_nodes? I think num_processes yes, but num_nodes we might still need in case of multi-node training Got it, thanks!
gharchive/issue
2021-11-08T15:39:28
2025-04-01T06:37:27.763064
{ "authors": [ "SeanNaren", "ananthsub", "carmocca", "daniellepintz", "dlangerm", "four4fish", "justusschock", "kaushikb11", "t-vi", "tchaton", "williamFalcon", "zippeurfou" ], "repo": "PyTorchLightning/pytorch-lightning", "url": "https://github.com/PyTorchLightning/pytorch-lightning/issues/10410", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
714014379
Calling module.log(...) within a callback fails 🐛 Bug Calling pl_module.log(...) within a Callback fails, even though this is recommended by the documentation here: https://pytorch-lightning.readthedocs.io/en/latest/loggers.html#logging-from-a-callback Error File "my_callback_file.py", line XX, in on_validation_epoch_end pl_module.log_dict(my_metrics_dict) File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 287, in log_dict self.log( File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 233, in log self._results.log( File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 171, in log self.__set_meta( File "/home/local/USHERBROOKE/pain5474/opt/miniconda3/envs/cav/lib/python3.8/site-packages/pytorch_lightning/core/step_result.py", line 217, in __set_meta _internal = self['meta']['_internal'] KeyError: '_internal' python-BaseException cc @nathanpainchaud This is happening on master Expected behavior We can log from callbacks using the lightning module Environment Happening on PyTorch Lightning master @ananthsub I just tried on master and cannot reproduce (I think it was solved yesterday, as I could reproduce 2 days ago). @SkafteNicki The issue was created by @ananthsub based on a question/issue I initially raised in the slack. I'll try to see if the bug is now resolved on master for me (it was still present as of yesterday afternoon) and I'll update you here as soon as I can. Hey @nathanpainchaud, Did you manage to reliably reproduce this behaviour? And if yes, could share the draft PR associated with this issue ? I will try to try it out too. Best regards, Thomas Chaton. Hey @tchaton, Thanks for the follow up! I opened the draft PR where I added a test that reproduces the behavior I'm getting. 
If I can help in any other way to get this sorted, just let me know! @tchaton Any updates on whether this is a feature that's planned to be supported, or on the contrary it's been abandoned? I'm only asking this question because the issue has been labelled priority for a while, but every PR referring to it has been closed without getting merged :laughing: Yes! that's been worked on in recent weeks. Starting to be merged now (https://github.com/PyTorchLightning/pytorch-lightning/pull/4439) I still have problems logging my validation metrics using pl_module.log() in the on_validation_end() hook. Any thoughts?
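The pattern discussed in this thread, a Callback hook that logs through the LightningModule, can be sketched without importing the framework; the hook name and the (trainer, pl_module) signature mirror pytorch_lightning.Callback, and the metric values below are placeholders of my own:

```python
class ValidationMetricsCallback:
    """Sketch of a Lightning-style callback; the hook name and signature
    follow pytorch_lightning.Callback (the framework is not imported here)."""

    def on_validation_epoch_end(self, trainer, pl_module):
        # pl_module is the LightningModule; the values are placeholders.
        metrics = {"val_accuracy": 0.91, "val_loss": 0.23}
        # Logging goes through the module, as the docs linked above recommend.
        pl_module.log_dict(metrics)
```

In a real run this class would subclass pytorch_lightning.Callback and be passed to the Trainer via callbacks=[...]; whether a given hook (e.g. on_validation_end) supports logging depends on the Lightning version, which is what the thread is about.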
gharchive/issue
2020-10-03T05:59:48
2025-04-01T06:37:27.771261
{ "authors": [ "SkafteNicki", "aligholami", "ananthsub", "nathanpainchaud", "tchaton", "williamFalcon" ], "repo": "PyTorchLightning/pytorch-lightning", "url": "https://github.com/PyTorchLightning/pytorch-lightning/issues/3813", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
668706788
Horovod & py3.8 What does this PR do? Resolving Horovod as discussed in #2745 PR review Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in GitHub issues there's a high chance it will not be merged. Did you have fun? Make sure you had fun coding 🙃 LGTM! Thanks @Borda! Surprisingly simple. LGTM 😄 yeah, we shall keep PRs as simple as possible so the review is quick...
gharchive/pull-request
2020-07-30T13:11:24
2025-04-01T06:37:27.773601
{ "authors": [ "Borda", "tgaddair" ], "repo": "PyTorchLightning/pytorch-lightning", "url": "https://github.com/PyTorchLightning/pytorch-lightning/pull/2764", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2142959892
Stable version as default in docs Is your feature request related to a problem? Please describe Most users install PyVRP through PyPI and so they use the stable version. However, www.pyvrp.org shows the documentation for the latest development version by default, possibly confusing users when they find features in the docs that aren't released yet. Describe the solution you'd like www.pyvrp.org should default to the stable version of PyVRP. www.pyvrp.org/dev should default to the latest development version of PyVRP. I'll pick this up. We're getting many questions about stuff not working because of this 😅 We also have the "older" docs hosted in this repository: https://github.com/PyVRP/PyVRP.github.io. Those are available at https://pyvrp.github.io/v0.7.0/, and so on. It'd be nice if we could somehow merge all that together so that: pyvrp.org -> latest stable pyvrp.org/dev -> latest build on main pyvrp.org/v<version> -> docs associated with version <version> I don't really know how to do this. I do know statsmodels has things set up exactly this way. Maybe we can borrow some of their setup for our own purposes? I'm assigning this for 0.9.0, since we really ought to solve this ASAP. I'm looking into how statsmodels does it for the next hour and will make notes in this comment. Alright, I had hoped to work on this today, but the devcontainer stuff took a little longer and now it's getting late. There's always tomorrow :). I don't want this to block a release of 0.9.0, but I'll try to finish it in the coming week. Possibly interesting: https://github.com/jimporter/mike?tab=readme-ov-file I feel like I keep postponing this, but at some point we really ought to pick this up 😆
gharchive/issue
2024-02-19T18:41:09
2025-04-01T06:37:27.780470
{ "authors": [ "N-Wouda", "leonlan" ], "repo": "PyVRP/PyVRP", "url": "https://github.com/PyVRP/PyVRP/issues/478", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
591338120
can't install version 3.2.x with python2 I'm not able to install version 3.2.1, due to importlib.util not being found. I'm not sure exactly which module I need to install to meet this dependency. I didn't experience this issue with version 3.1.x [root@black markdown-3.2.1]# python2.7 setup.py Traceback (most recent call last): File "setup.py", line 39, in <module> __version__, __version_info__ = get_version() File "setup.py", line 32, in get_version import importlib.util ImportError: No module named util As noted in the release notes, we dropped support for Python 2.7 in version 3.2. You have two options: Update to Python 3.5 or later. Continue to use Python-Markdown 3.1 with Python 2.7. I suppose we could test and add an error message to the setup.py script. We did include an error message for those who have a develop install of the package. And pip will refuse to install 3.2 on Python 2.7 due to the meta-data mismatch. Thanks for pointing that out. I'll continue using Python-Markdown 3.1 for now. It looks like transitioning to Python3 is going to be inevitable at some point, which is ultimately a good thing.
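The error message suggested above could be implemented as a small interpreter-version guard in setup.py. This is a hedged sketch; the function name and wording are mine, not Python-Markdown's actual code:

```python
import sys


def check_python_version(version_info=None):
    """Return an error message if the interpreter is too old for
    Markdown 3.2+, else None. Name and message text are illustrative."""
    if version_info is None:
        version_info = sys.version_info
    if version_info < (3, 5):
        return ("Python-Markdown 3.2+ requires Python 3.5 or later; "
                "on Python 2.7, pin the package with 'markdown<3.2'.")
    return None


# In setup.py this would run at import time, before any py3-only imports:
_msg = check_python_version()
if _msg is not None:
    sys.exit(_msg)
```

As noted in the thread, pip already refuses the install via the package metadata; a guard like this only helps people running setup.py directly.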
gharchive/issue
2020-03-31T18:47:34
2025-04-01T06:37:27.848172
{ "authors": [ "leifliddy", "waylan" ], "repo": "Python-Markdown/markdown", "url": "https://github.com/Python-Markdown/markdown/issues/929", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2219196376
[Demo requested] Full example on one-to-many and many-to-many relationships Hello, I like the idea and the layout of PyNest very much. Could you please provide a demo project that has users and products modules, with many-to-many and one-to-many relationships between them? Maybe two demos for the two relationships would be better. And how does a user interact with a product? By using services or something else? I asked for this because it seems that the examples given in the repo only show cases where user and product are independent. Hello! Thank you for your words. Basically, you were correct with your assumption that user and product will talk with each other using services. In that way, the product service can inject the user service and perform query operations on the user assets. Nest is really about organising your code in small and decoupled lego pieces that help you to build robust applications. I have a few examples I can share and I will upload them asap to the PythonNest organisation. Hello @copdips I have awesome news for you! I've just made public one of my PyNest projects, an application for managing stocks that demonstrates the powerful architecture of NestJS in Python. This is the link to the project - https://github.com/ItayTheDar/Stockify Thanks again for bringing this issue to our community! awesome, thanks
gharchive/issue
2024-04-01T22:35:26
2025-04-01T06:37:27.851720
{ "authors": [ "ItayTheDar", "copdips" ], "repo": "PythonNest/PyNest", "url": "https://github.com/PythonNest/PyNest/issues/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
894163667
Keithley 2600 4probe This PR adds a new mode to the doFastSweep function of the Keithley 26xx driver. The new mode allows for 4-probe measurements @jenshnielsen Agree with @astafan8 that we should not promote the Measurement but use the "new" dataset in the examples if possible. Otherwise this looks good @astafan8 I have updated the notebook to use fastsweep in combination with a do0d. We could also make a new function (do_fast_sweep) in the driver wrapping the do0d. So performing the fastsweep stays a one-liner?
I have updated the notebook to use fastsweep in combination with a do0d perfect!!! thank you! We could also make a new function (do_fast_sweep) in the driver wrapping the do0d. So performing the fastsweep stays a one-liner? no, that would be mixing concerns for negligible benefit -- do*d is already a one-liner-ish :) and the fact that doFastSweep exists as a method is just historical and shouldn't have been added in the first place.
2021-05-18T09:16:28
2025-04-01T06:37:27.856786
{ "authors": [ "RasmusBC59", "astafan8", "jenshnielsen" ], "repo": "QCoDeS/Qcodes", "url": "https://github.com/QCoDeS/Qcodes/pull/3023", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1595046565
Average of different prediction horizons as a metric? Hello Authors, Could you please clarify the usage of the average of different prediction horizons as a benchmarking metric? Why was it used, and how to justify the validity of this? I am doing a similar project and trying to report values at different horizons. My model is not getting values close to those reported in SOTA (top 5) models like yours. Could you please help with the intuition on reporting the average rather than individual horizons? Thanks Santosh IIRC that's a convention inherited by Informer and the follow-up works to it that have come out since this repo's initial release and before its more recent versions. The accuracy of individual timesteps into the future can be arbitrary and hard to interpret. 1-step predictions are too easy, but distant predictions can be very difficult given a fixed-length context window which may be too short. In highly periodic domains some distant horizons can also be easy (such as 24 hours ahead in a dataset with clear daily periodicity like weather forecasting). So reporting every horizon metric takes a lot of explaining, large tables, and can be misleading. Averaging gives a better sense of the model's performance over the entire duration we care about. At a few points during this project I hacked together logging metrics for accuracy at each individual timestep as a sanity check. In my experience you can expect a roughly linearly increasing error as you predict further into the future. As far as replicating the results on these datasets in your own project, double-check that you aren't counting missing datapoints in the metrics. This can make a huge difference and is something a lot of the literature (and early versions of this codebase) get wrong. I agree with Jake, averaging over the whole prediction horizon makes sense in order to compare single numbers as a metric. It is a pity though that different benchmarks use different metrics.
For example, check here for PEMS-Bay: https://paperswithcode.com/sota/traffic-prediction-on-pems-bay They report RMSE (I guess this is averaged over the whole horizon) and MAE @ 12 step (this is for a single prediction 12 steps into the future) It would be good to have more standardized metrics. In the paper there is no RMSE for PEMS-Bay. There is MAE, MSE and MAPE, but unfortunately PapersWithCode does not report those. This is not a question, just a comment, sorry for the spam! 😁 Yeah the traffic datasets / literature is the main example where reporting multiple horizons is the default. The longest horizons are 12 timesteps so this can be feasible. Once you get longer than that it stops making sense to report arbitrary intervals in tables in my opinion. It would be interesting if the convention for reporting forecasting results was a plot of error over forecast duration for each dataset. That wasn't necessary at the time (2021) but I think this is probably what I would do if I were to redo this project today...
gharchive/issue
2023-02-22T12:47:01
2025-04-01T06:37:27.864293
{ "authors": [ "jakegrigsby", "santoshatchi", "steve3nto" ], "repo": "QData/spacetimeformer", "url": "https://github.com/QData/spacetimeformer/issues/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
467834540
Maybe a @compile decorator? I would like to discuss the possibility of adding something like a @compile decorator that compiles the Python functions/classes when the Python module is imported and reuses the compiled JS. So only the load time will be affected, not the execution time. This way we don't have to worry about manually compiling Python into JS. If you like this suggestion, I can work on it. Code example:
# events.py
@compile
def show_alert(e):
    e.preventDefault()
    alert("boom!")

# index.py
from htmldoom import render, elements as e
from events import show_alert

render(e.a(onclick=show_alert)("Click me"))
# <a onclick="(e) => {e.preventDefault(), alert('boom!')}">Click me</a>
Update: I'm not sure if this is even possible. Closing this due to lack of interest.
gharchive/issue
2019-07-14T13:09:01
2025-04-01T06:37:27.900220
{ "authors": [ "sayanarijit" ], "repo": "QQuick/Transcrypt", "url": "https://github.com/QQuick/Transcrypt/issues/653", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
577268124
A warning in the digital-currency DataStruct resample, plus some good indicators that talib lacks QUANTAXIS PR 😇 Thank you for contributing to the QUANTAXIS project~ 🛠Please fill in the final information here~ Note: if this is not your first fork, please first PR the contents of the quantaxis library into your own project to update it, and only then open the PR against quantaxis. Please update the CHANGELOG before opening the PR. Must-read before opening a PR: please format the code to be submitted with the following commands: pip install https://github.com/google/yapf/archive/master.zip yapf -i --style .style.yapf <file> 🛠Main problem this PR solves🛠: A talib wrapper that passes parameters entirely as nparray, because nparray avoids the Series MultiIndex problem. Processing and gap-filling are both faster than with pandas, and converting to pd.DataFrame/pd.Series only takes one constructor call. I also introduced some good indicators that talib lacks but that take fewer than 10 lines of code to implement, sourced from the web (TradingView or MQ4/5). For example: Moving Average ADX Hull Moving Average Volume HMA (unfinished) Author: 阿财 Date: 2020/03/07 Hello @Rgveda! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found: In the file QUANTAXIS/QAAnalysis/QAAnalysis_signal.py: Line 39:1: E302 expected 2 blank lines, found 1 Line 58:80: E501 line too long (108 > 79 characters) Line 75:74: W291 trailing whitespace Line 118:80: E501 line too long (100 > 79 characters) Line 255:1: W293 blank line contains whitespace Line 256:80: E501 line too long (174 > 79 characters) Line 257:52: E231 missing whitespace after ',' Line 258:52: E231 missing whitespace after ',' Line 259:53: E231 missing whitespace after ',' Line 260:54: E231 missing whitespace after ',' Line 262:80: E501 line too long (91 > 79 characters) Line 263:80: E501 line too long (83 > 79 characters) Line 264:80: E501 line too long (84 > 79 characters) Line 271:80: E501 line too long (86 > 79 characters) Line 272:80: E501 line too long (86 > 79 characters) Line 273:80: E501 line too long (104 > 79 characters) Line 274:80: E501 line too long (104 > 79 characters) Line 278:80: E501 line too long (94 > 79 characters) Line 279:80: E501 line too long (95 > 79 characters) Line 280:80: E501 line too long (90 > 79 characters) Line 281:80: E501 line too long (90 > 79 characters) Line 285:80: E501 line too long (191 > 79 characters) Line 286:80: E501 line too long (191 > 79 characters) Line 287:80: E501 line too long (143 > 79 characters) Line
287:109: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 288:80: E501 line too long (143 > 79 characters) Line 288:109: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 289:80: E501 line too long (83 > 79 characters) Line 290:80: E501 line too long (83 > 79 characters) Line 291:80: E501 line too long (221 > 79 characters) Line 292:80: E501 line too long (226 > 79 characters) Line 294:80: E501 line too long (116 > 79 characters) Line 295:80: E501 line too long (116 > 79 characters) Line 297:44: E231 missing whitespace after ',' Line 309:1: W293 blank line contains whitespace Line 310:80: E501 line too long (105 > 79 characters) Line 311:46: E231 missing whitespace after ',' Line 312:46: E231 missing whitespace after ',' Line 313:47: E231 missing whitespace after ',' Line 314:48: E231 missing whitespace after ',' Line 316:80: E501 line too long (88 > 79 characters) Line 323:80: E501 line too long (80 > 79 characters) Line 324:80: E501 line too long (80 > 79 characters) Line 326:80: E501 line too long (98 > 79 characters) Line 327:80: E501 line too long (98 > 79 characters) Line 337:53: W291 trailing whitespace Line 338:23: E231 missing whitespace after ',' Line 338:35: E231 missing whitespace after ',' Line 338:57: E231 missing whitespace after ',' Line 340:43: E231 missing whitespace after ',' Line 342:80: E501 line too long (125 > 79 characters) Line 345:80: E501 line too long (85 > 79 characters) Line 347:80: E501 line too long (89 > 79 characters) Line 348:80: E501 line too long (83 > 79 characters) Line 349:80: E501 line too long (89 > 79 characters) Line 350:80: E501 line too long (99 > 79 characters) Line 351:80: E501 line too long (89 > 79 characters) Line 352:80: E501 line too long (83 > 79 characters) Line 353:80: E501 line too long (89 > 79 characters) Line 354:80: E501 line too long (99 > 79 characters) Line 355:80: E501 line too long (118 > 79 characters) Line 356:80: E501 line too long (119 > 79 characters) 
Line 357:80: E501 line too long (108 > 79 characters) Line 358:80: E501 line too long (108 > 79 characters) Line 361:1: E305 expected 2 blank lines after class or function definition, found 1 Line 362:1: E302 expected 2 blank lines, found 0 Line 370:1: W293 blank line contains whitespace Line 371:40: E231 missing whitespace after ',' Line 371:53: E231 missing whitespace after ',' Line 372:42: E231 missing whitespace after ',' Line 372:54: E231 missing whitespace after ',' Line 373:47: E231 missing whitespace after ',' Line 373:60: E231 missing whitespace after ',' Line 373:72: E231 missing whitespace after ',' Line 374:80: E501 line too long (126 > 79 characters) Line 375:1: W293 blank line contains whitespace Line 376:41: E231 missing whitespace after ',' Line 376:63: E231 missing whitespace after ',' Line 377:42: E231 missing whitespace after ',' Line 377:54: E231 missing whitespace after ',' Line 378:47: E231 missing whitespace after ',' Line 378:59: E231 missing whitespace after ',' Line 378:72: E231 missing whitespace after ',' Line 379:80: E501 line too long (126 > 79 characters) Line 381:80: E501 line too long (105 > 79 characters) Line 395:1: W293 blank line contains whitespace Line 400:1: W293 blank line contains whitespace Line 401:80: E501 line too long (145 > 79 characters) Line 404:80: E501 line too long (84 > 79 characters) Line 405:80: E501 line too long (84 > 79 characters) Line 406:1: W293 blank line contains whitespace Line 409:80: E501 line too long (100 > 79 characters) Line 411:80: E501 line too long (90 > 79 characters) Line 412:80: E501 line too long (90 > 79 characters) Line 413:80: E501 line too long (107 > 79 characters) Line 414:80: E501 line too long (107 > 79 characters) Line 423:80: E501 line too long (129 > 79 characters) Line 424:40: E231 missing whitespace after ',' Line 425:1: W293 blank line contains whitespace Line 429:80: E501 line too long (144 > 79 characters) Line 430:80: E501 line too long (146 > 79 characters) Line 432:59: 
E231 missing whitespace after ',' Line 433:35: E231 missing whitespace after ',' Line 436:1: W293 blank line contains whitespace Line 437:80: E501 line too long (100 > 79 characters) Line 442:52: E231 missing whitespace after ',' Line 443:52: E231 missing whitespace after ',' Line 444:52: E231 missing whitespace after ',' Line 445:55: E231 missing whitespace after ',' Line 446:55: E231 missing whitespace after ',' Line 447:62: E231 missing whitespace after ',' Line 448:80: E501 line too long (104 > 79 characters) Line 449:80: E501 line too long (104 > 79 characters) Line 450:22: W292 no newline at end of file In the file QUANTAXIS/QAIndicator/talib_numpy.py: Line 149:17: W291 trailing whitespace Line 154:80: E501 line too long (107 > 79 characters) Line 161:80: E501 line too long (146 > 79 characters) Line 172:47: W291 trailing whitespace Line 184:80: E501 line too long (105 > 79 characters) Line 190:80: E501 line too long (94 > 79 characters) Line 190:85: E203 whitespace before ',' Line 191:80: E501 line too long (81 > 79 characters) Line 192:80: E501 line too long (86 > 79 characters) Line 194:80: E501 line too long (84 > 79 characters) Line 197:23: W291 trailing whitespace Line 198:80: E501 line too long (82 > 79 characters) Line 200:80: E501 line too long (107 > 79 characters) Line 201:21: W292 no newline at end of file In the file QUANTAXIS_Test/QAAnalysis_Test/QASignal.py: Line 9:80: E501 line too long (137 > 79 characters) Line 10:58: E231 missing whitespace after ',' Line 11:13: E128 continuation line under-indented for visual indent Line 12:13: E128 continuation line under-indented for visual indent Line 13:13: E128 continuation line under-indented for visual indent Line 14:13: E128 continuation line under-indented for visual indent Line 15:5: E265 block comment should start with '# ' Line 16:5: E265 block comment should start with '# ' Line 22:80: E501 line too long (105 > 79 characters) Line 28:22: W291 trailing whitespace Line 40:9: E128 continuation 
line under-indented for visual indent Line 45:71: E231 missing whitespace after ',' Line 46:71: E231 missing whitespace after ',' Line 47:71: E231 missing whitespace after ',' Line 48:76: E231 missing whitespace after ',' Line 50:80: E501 line too long (82 > 79 characters) Line 52:80: E501 line too long (109 > 79 characters) Line 54:67: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 54:80: E501 line too long (101 > 79 characters) Line 55:80: E501 line too long (152 > 79 characters) Line 55:99: E251 unexpected spaces around keyword / parameter equals Line 55:101: E251 unexpected spaces around keyword / parameter equals Line 56:80: E501 line too long (157 > 79 characters) Line 56:99: E251 unexpected spaces around keyword / parameter equals Line 56:101: E251 unexpected spaces around keyword / parameter equals Line 58:80: E501 line too long (170 > 79 characters) Line 59:80: E501 line too long (120 > 79 characters) Line 60:80: E501 line too long (104 > 79 characters) Line 61:76: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 61:80: E501 line too long (110 > 79 characters) Line 63:80: E501 line too long (178 > 79 characters) Line 64:80: E501 line too long (99 > 79 characters) Line 65:80: E501 line too long (104 > 79 characters) Line 66:79: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 66:80: E501 line too long (113 > 79 characters) Line 68:80: E501 line too long (183 > 79 characters) Line 69:80: E501 line too long (99 > 79 characters) Line 70:80: E501 line too long (163 > 79 characters) Line 71:80: E501 line too long (104 > 79 characters) Line 72:79: E712 comparison to True should be 'if cond is True:' or 'if cond:' Line 72:80: E501 line too long (113 > 79 characters) Line 74:23: E251 unexpected spaces around keyword / parameter equals Line 74:25: E251 unexpected spaces around keyword / parameter equals Line 74:29: E231 missing whitespace after ',' Line 75:80: E501 line too long (102 > 79 
characters) Line 76:80: E501 line too long (112 > 79 characters) Line 76:92: E251 unexpected spaces around keyword / parameter equals Line 76:94: E251 unexpected spaces around keyword / parameter equals Line 76:105: E251 unexpected spaces around keyword / parameter equals Line 76:107: E251 unexpected spaces around keyword / parameter equals Line 77:80: E501 line too long (108 > 79 characters) Line 78:80: E501 line too long (112 > 79 characters) Line 78:92: E251 unexpected spaces around keyword / parameter equals Line 78:94: E251 unexpected spaces around keyword / parameter equals Line 78:105: E251 unexpected spaces around keyword / parameter equals Line 78:107: E251 unexpected spaces around keyword / parameter equals Line 79:80: E501 line too long (85 > 79 characters) Line 80:80: E501 line too long (91 > 79 characters) Line 81:80: E501 line too long (93 > 79 characters) Line 82:80: E501 line too long (93 > 79 characters) Line 83:80: E501 line too long (112 > 79 characters) Line 84:80: E501 line too long (112 > 79 characters) In the file QUANTAXIS_Test/QAAnalysis_Test/QASignal_ADXm_Test.py: Line 11:1: E302 expected 2 blank lines, found 1 Line 12:5: E265 block comment should start with '# ' Line 16:61: E231 missing whitespace after ',' Line 21:23: E251 unexpected spaces around keyword / parameter equals Line 21:25: E251 unexpected spaces around keyword / parameter equals Line 21:29: E231 missing whitespace after ',' Line 24:80: E501 line too long (180 > 79 characters) Line 25:80: E501 line too long (82 > 79 characters) Line 28:5: E741 ambiguous variable name 'l' Line 31:5: E265 block comment should start with '# ' Line 35:1: E305 expected 2 blank lines after class or function definition, found 1 In the file QUANTAXIS_Test/QAAnalysis_Test/QASignal_hull_MA_Test.py: Line 10:1: E302 expected 2 blank lines, found 1 Line 12:9: E128 continuation line under-indented for visual indent Line 24:23: E251 unexpected spaces around keyword / parameter equals Line 24:25: E251 
unexpected spaces around keyword / parameter equals Line 24:29: E231 missing whitespace after ',' Line 26:80: E501 line too long (180 > 79 characters) Line 28:5: E265 block comment should start with '# ' Line 29:5: E265 block comment should start with '# ' Line 34:5: E741 ambiguous variable name 'l' Line 34:6: E225 missing whitespace around operator Line 40:1: E305 expected 2 blank lines after class or function definition, found 1 Line 40:12: E225 missing whitespace around operator In the file QUANTAXIS_Test/QAFetch_Test/QAQuery_Advance_Test.py: Line 844:1: W293 blank line contains whitespace
gharchive/pull-request
2020-03-07T03:14:34
2025-04-01T06:37:28.012888
{ "authors": [ "Rgveda", "pep8speaks" ], "repo": "QUANTAXIS/QUANTAXIS", "url": "https://github.com/QUANTAXIS/QUANTAXIS/pull/1483", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1492405670
LeetHub not working at all for me. I have solved more than 20 questions but not a single question has been pushed to my GitHub Please review previous closed issues before filling out a new one! Duplicate issues will be closed without comment. Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Additional context Add any other context about the problem here. LeetHub is not working for me. I have solved more than 20 questions since I added the LeetHub extension. I don't know why, but not a single solution has been pushed to GitHub. Switch to the older version of LeetCode. There will be an option "Revert to older version" in the drop-down menu.
gharchive/issue
2022-12-12T17:38:43
2025-04-01T06:37:28.016949
{ "authors": [ "md-yaseeny", "sinhasaurabh3104" ], "repo": "QasimWani/LeetHub", "url": "https://github.com/QasimWani/LeetHub/issues/448", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1004441869
how to use distributed partitioning I changed partitionMode to ParMETIS, but it seems to have no effect. @leepengcheng ParMETIS is still in development; it will be published before November as expected.
gharchive/issue
2021-09-22T15:35:11
2025-04-01T06:37:28.023784
{ "authors": [ "leepengcheng", "ryantd" ], "repo": "Qihoo360/dgl-operator", "url": "https://github.com/Qihoo360/dgl-operator/issues/21", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2614805239
IAM topics Adding some of the IAM-related topics. There will be a lot of broken links and other issues until we solidify what all topics are being added and where. Cleanup and move to another repo
gharchive/pull-request
2024-10-25T18:06:03
2025-04-01T06:37:28.025895
{ "authors": [ "abbycross", "beckykd" ], "repo": "Qiskit/documentation", "url": "https://github.com/Qiskit/documentation/pull/2185", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1068671029
Small fix in DragCalAnalysis model descriptions Summary This PR adds a missing freq parameter to the DragCalAnalysis model descriptions and to the formulas in the docstring. Details and comments See code. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
gharchive/pull-request
2021-12-01T17:09:20
2025-04-01T06:37:28.027568
{ "authors": [ "CLAassistant", "catornow" ], "repo": "Qiskit/qiskit-experiments", "url": "https://github.com/Qiskit/qiskit-experiments/pull/550", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1154511695
Allow to trigger workflow that publishes the documentation manually Summary Details and comments Fixes # Pull Request Test Coverage Report for Build 1912399559 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 56.778% Totals Change from base Build 1912304137: 0.0% Covered Lines: 2044 Relevant Lines: 3600 💛 - Coveralls
gharchive/pull-request
2022-02-28T20:20:00
2025-04-01T06:37:28.030613
{ "authors": [ "coveralls", "rathishcholarajan" ], "repo": "Qiskit/qiskit-ibm-runtime", "url": "https://github.com/Qiskit/qiskit-ibm-runtime/pull/171", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
992618669
Batching of circuits to overcome memory issues when using statevector simulator Summary Currently batch_size parameter is ignored if the statevector simulator is used. This leads to unreasonable memory use due to the size of the transpiled circuits, up to 1TB of RAM for 800 by 800 kernel matrix and 20 qubits (see qiskit-terra, issue #6991). This pull request fixes this by transpiling and simulating circuits in batches, never storing the entire 800 circuits. The modification uses batch_size parameter that is already used in non-statevector case. Details and comments I had success by setting batch_size=50 (memory footprint down to <20 GB). Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it. @rsln-s Thanks a lot for your contribution. Could you please add a reno file with a bug fix description? Also, if possible add a test to cover this fix. @attp could you please take a look at the changes? Added reno file. The correctness of the output of the code is checked by tests already in place; testing the memory usage may be complicated within constraints of gh-actions. The spell checker also fails on this transpiled word from the release note. If you add it to .pylintdict file in the root, our custom dictionary, since its correctly spelt then it should pass CI. (You will see the words are in lowercase in alphabetic order so just add it appropriately) Updated the .pylintdict dictionary @woodsp-ibm Do you have any comments? Can I ask you to make a minor change to the docstring for batch_size in the constructor. It says batch_size: Number of circuits to batch together for computation. Default 1000. specifically default 1000 yet the code has batch_size: int = 900, So it should state the default is 900. 
I think it may have been 1000 at one point but was changed, if I recall correctly, to 900 to fit more with the limits around the provider. In terms of testing you say it's covered by the current unit tests. From what I can see there is nothing explicitly testing that aspect. Of course batch_size defaults to 900 and is in the main path. I am not sure anything we do in the tests is ever affected by the batch size since from what I can see the tests are pretty small. Having said that, if we had a test that dropped the batch size down I am not sure how to test it given its behavior is internal to evaluate - the only way that comes to mind is hooking the quantum instance execute and checking the number of circuits is as expected along the way, in addition to the final result being as expected. Looks good to me. Regarding the batch_size docstring, you are correct @woodsp-ibm the default was originally 1000, and we changed it to 900 to match the backend limits and reduce the number of jobs sent. I must have missed updating the docstring in the PR. Updated the docstring as requested by @woodsp-ibm. Just a note. Locally I changed the QuantumInstance execute method to print the number of circuits it was passed and ran the kernel unit tests. It's quite a small number so nowhere near the 900 limit - in fact it's not even in double digits. Anyway I changed the default batch size in the kernel to a much smaller number and things worked, but not all numbers were limited - presumably the ones via statevector usage, as I was doing this from the main branch. Cloning your fork and doing the same from there, all the counts were limited - in fact I set it to 1 as a test and it printed all 1's and passed. So it seems it's working ok. It would be nice if the test did somehow test out the batch size but maybe that could/should be raised as a separate issue. @adekusar-drl any thoughts here - you commented early on about a test around the fix.
@woodsp-ibm When I mentioned unit tests I did not have anything special in mind. In general, your idea of setting the batch size to 1 and then running a test on the statevector simulator makes sense. @rsln-s What do you think? I had done it so it applied to the qasm mode as well, since it appeared not to be tested in general. While the final outcome can be checked as currently, what is more complicated is to check that the batch size is indeed limiting the number of internal computations (circuits). My only thought on how to do such a check, as I mentioned in an earlier comment, was about hooking the QuantumInstance execute method on the instance used with the backend, such that the number of circuits given to the method could be fairly easily intercepted and checked - i.e. hook it, do whatever is needed to check, then call over to the original method that was hooked so that the circuit results from execute can be returned. @woodsp-ibm I approve; let's merge this PR and open an issue to improve tests for QuantumKernel. Any thoughts?
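The batching idea discussed in this PR can be sketched generically. The helper below is my own illustration, not the qiskit-machine-learning API (the name run_in_batches is hypothetical): the point is simply that only batch_size circuits are handed to the execute step at a time, so the full set of transpiled circuits is never held in memory at once.

```python
# My own generic sketch of the batching idea (run_in_batches is hypothetical,
# not the qiskit-machine-learning API): only `batch_size` circuits are handed
# to the execute step at a time, so the full set is never held in memory.

def run_in_batches(circuits, execute, batch_size=900):
    results = []
    for start in range(0, len(circuits), batch_size):
        batch = circuits[start:start + batch_size]  # transpile + simulate this chunk
        results.extend(execute(batch))
    return results

# Stand-in `execute` that just doubles each "circuit":
out = run_in_batches(list(range(10)), lambda batch: [c * 2 for c in batch], batch_size=3)
print(out)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Hooking the execute callable, as suggested in the review discussion, then amounts to passing a wrapper that records the size of each batch it receives before delegating to the real method.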
gharchive/pull-request
2021-09-09T20:49:00
2025-04-01T06:37:28.037711
{ "authors": [ "CLAassistant", "adekusar-drl", "attp", "rsln-s", "woodsp-ibm" ], "repo": "Qiskit/qiskit-machine-learning", "url": "https://github.com/Qiskit/qiskit-machine-learning/pull/209", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1123420425
Update git blame rev ignore list Summary In the recently merged #7615 we bumped the black version and constrained it to a stable version to keep us on a fixed set of formatting rules while still receiving bug fixes. In doing this, some new formatting rules were applied to the repo (mainly x ** y was changed to x**y) by the new version of black. To reduce noise in the git blame, this commit updates the .git-blame-ignore-revs file (which was added after we started using black in #6362) to include the sha1 for this commit. This means that when running git blame on files this commit will be ignored (assuming the local git environment is configured correctly, e.g. by pointing blame.ignoreRevsFile at the file). Details and comments
Pull Request Test Coverage Report for Build 1791123583:
- 0 of 0 changed or added relevant lines in 0 files are covered.
- 3 unchanged lines in 1 file lost coverage.
- Overall coverage decreased (-0.005%) to 83.355%.
Files with Coverage Reduction: qiskit/pulse/library/waveform.py - 3 new missed lines, now 89.36%.
Totals (change from base Build 1791028269: -0.005%): Covered Lines: 52236, Relevant Lines: 62667.
💛 - Coveralls
gharchive/pull-request
2022-02-03T18:28:39
2025-04-01T06:37:28.042685
{ "authors": [ "coveralls", "mtreinish" ], "repo": "Qiskit/qiskit-terra", "url": "https://github.com/Qiskit/qiskit-terra/pull/7620", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2070103144
RZXCalibrationBuilder fails for qubit pairs with GaussianSquareDrag Environment Qiskit version: 0.45 and 1.1.0.dev0+c99f325 Python version: 3.12 Operating system: Windows What is happening? RZXCalibrationBuilder fails for qubit pairs with GaussianSquareDrag, because it can't identify the native ECR direction. How can we reproduce the issue? The following code

from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder

backend = provider.get_backend("ibmq_kolkata")
instmap = backend.defaults().instruction_schedule_map
qubits = ...
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)

works if you set qubits=(4,7) and fails if you set qubits=(6,7). The error originates here, and is caused by the filter counting comp tones, which ignores anything but GaussianSquare or Waveform pulses. However, for some pairs (with the pair 6,7 being one of them) the pulses are reported as GaussianSquareDrag. The pulses have beta=0 so they are identical to GaussianSquare, but they are not counted towards the comp tones, which leads to the failure. What should happen? The code should run for both qubit pairs. Any suggestions? I suspect changing the allowed types in the filter would solve the issue, but I am not familiar enough with this piece of code and with what the backends might report for other qubits. Counting GaussianSquareDrag with beta=0 is perhaps a safer option. I was able to reproduce this issue on 0.46.
This works:

from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder
from qiskit_ibm_provider import IBMProvider

provider = IBMProvider()
backend = provider.get_backend("ibm_cusco")
instmap = backend.defaults().instruction_schedule_map
qubits = (6,7)
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)
qc.draw('text')

This, it does not:

from qiskit import QuantumCircuit
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import RZXCalibrationBuilder
from qiskit_ibm_provider import IBMProvider

provider = IBMProvider()
backend = provider.get_backend("ibm_cusco")
instmap = backend.defaults().instruction_schedule_map
qubits = (4,7)
qc = QuantumCircuit(8)
qc.rzx(np.pi/2, *qubits)
pass1 = RZXCalibrationBuilder(instmap)
qc = PassManager(pass1).run(qc)
qc.draw('text')

QiskitError: "Native direction cannot be determined: operation on qubits [4, 7] for the following instruction schedule map: ...

The class qiskit.transpiler.passes.calibration.rzx_builder.RZXCalibrationBuilder is deprecated as of Qiskit 1.3. It will be removed in Qiskit 2.0. The entire Qiskit Pulse package is being deprecated and will be moved to the Qiskit Dynamics repository: https://github.com/qiskit-community/qiskit-dynamics. Note that once removed, qiskit.transpiler.passes.calibration.rzx_builder.RZXCalibrationBuilder will have no alternative in Qiskit.
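The fix direction suggested in the issue, counting GaussianSquareDrag pulses whose beta is zero as comp tones, can be sketched in isolation. The classes below are stand-ins of my own, not the real qiskit.pulse types or the actual RZXCalibrationBuilder filter, so the exact predicate name and tolerance are assumptions:

```python
# Hypothetical sketch of the fix suggested in the issue (stand-in classes, not
# the real qiskit.pulse types or the actual RZXCalibrationBuilder code): when
# counting the comp tones, also accept a GaussianSquareDrag whose beta is
# numerically zero, since it is then identical to a GaussianSquare.

class GaussianSquare:  # stand-in for qiskit.pulse.GaussianSquare
    pass

class GaussianSquareDrag:  # stand-in for qiskit.pulse.GaussianSquareDrag
    def __init__(self, beta):
        self.beta = beta

def counts_as_comp_tone(pulse, tol=1e-9):
    if isinstance(pulse, GaussianSquare):
        return True
    if isinstance(pulse, GaussianSquareDrag):
        return abs(pulse.beta) < tol  # beta == 0: same envelope as GaussianSquare
    return False

print(counts_as_comp_tone(GaussianSquareDrag(beta=0.0)))  # True
print(counts_as_comp_tone(GaussianSquareDrag(beta=0.2)))  # False
```

With a predicate like this, the pair reported as GaussianSquareDrag with beta=0 would be counted, while genuinely DRAG-shaped tones would still be rejected.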
gharchive/issue
2024-01-08T10:00:45
2025-04-01T06:37:28.057214
{ "authors": [ "1ucian0", "ShellyGarion", "TsafrirA" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/issues/11509", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2040245267
Add missing parameter in standard-gate mapping Summary The XXPlusYYGate and XXMinusYYGate instances returned from get_standard_gate_name_mapping were missing their optional beta parameter. Details and comments
Pull Request Test Coverage Report for Build 7199428658:
- 0 of 0 changed or added relevant lines in 0 files are covered.
- 16 unchanged lines in 4 files lost coverage.
- Overall coverage decreased (-0.02%) to 87.552%.
Files with Coverage Reduction (new missed lines, %):
- crates/qasm2/src/expr.rs: 1, 93.76%
- qiskit/quantum_info/synthesis/two_qubit_decompose.py: 2, 96.65%
- crates/qasm2/src/parse.rs: 6, 97.6%
- crates/qasm2/src/lex.rs: 7, 91.41%
Totals (change from base Build 7198837138: -0.02%): Covered Lines: 59771, Relevant Lines: 68269.
💛 - Coveralls
gharchive/pull-request
2023-12-13T18:20:29
2025-04-01T06:37:28.066653
{ "authors": [ "coveralls", "jakelishman" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/pull/11411", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2556214509
Improve qubit tracking in HighLevelSynthesis Summary Fixes #13239. Now for both examples in the referenced issue, HighLevelSynthesis produces a circuit with 24 CX-gates and 45 U-gates. In addition, this also improves clean ancilla detection on the following example:

inner1 = QuantumCircuit(4)
inner1.h(0)
inner1.cz(0, 2)

qc = QuantumCircuit(6)
qc.append(inner1.to_instruction(), [1, 2, 3, 4])

pass_ = HighLevelSynthesis(basis_gates=["cx", "u"])
qct = pass_(qc)

Even though the inner circuit inner1 is defined over the qubits 1, 2, 3, 4 in the main circuit qc, the qubits 1 and 3 in the inner circuit (corresponding to the qubits 2 and 4 in the main circuit) remain clean and can be used as clean ancilla qubits in the following gates. Details and comments This PR is based on numerous discussions with @Cryoris. Recalling the second example from the referenced issue

inner = QuantumCircuit(6)
inner.mcx([0, 1, 2, 3, 4], 5)
custom_gate = inner.to_gate()

qc = QuantumCircuit(10)
qc.append(custom_gate, [3, 4, 5, 6, 7, 0])

basis_gates = ["u", "cx"]
tqc = HighLevelSynthesis(basis_gates=basis_gates)(qc)

the tricky part is that the recursive call to HighLevelSynthesis::_run on the inner circuit should have access to the clean "global" qubits outside of the circuit's definition. To tackle this, we introduce a class QubitContext which keeps the correspondence between the current DAG's qubits and the global qubits of the original circuit. The state of the global qubits (clean/dirty) is tracked by QubitTracker. When an internal synthesis algorithm (here for the internal MCX gate) checks how many clean/dirty ancilla qubits are available, it does so taking all of the global qubits into account. However, this also means that synthesizing an internal DAG may output a DAG with more qubits.
In particular (demonstrating possible complications), we can have an internal DAG with, say, 10 qubits that contains an MCX-gate over 8 qubits that can be synthesized by the appropriate synthesis algorithm to a circuit over 14 qubits (exploiting the globally clean/dirty qubits). Hence, our internal DAG must also grow in size (fortunately all the tracking is possible using the local-to-global correspondences of all the objects involved), which might mean that the DAG higher up in the chain might need to grow as well. This PR also adds more HighLevelSynthesis tests exploring all these possible edge-cases. In summary, this should handle any mixture of recursively defined circuits with annotated operations, custom gate definitions and HighLevelSynthesis plugins. For circuits containing control-flow ops, the extended functionality is not used (and will be delegated to another PR, if someone wants to tackle this). In other words, if we have a control-flow op over say 4 qubits in the bigger circuit, only these 4 qubits will be used when recursively processing the blocks in this op. Update: An additional observation is that this also improves the synthesis of open-controlled MCX gates. In Qiskit, the name of an open-controlled gate is not of the form "mcx", but rather of the form "mcx_o17", where "17" is (the integer representation of) the control state. When processing this gate, HLS would not immediately call a synthesis plugin (since "mcx_o17" is not in the list of gate names for which plugins exist), but recursively process the definition of this gate, which is a quantum circuit consisting of a layer of X-gates, a closed-controlled MCX gate (called "mcx"), and the inverse layer of X-gates. During this recursion, HLS would now indeed call the synthesis plugin for the internal MCX-gate, and with this PR it would be able to use the ancilla qubits in the main circuit and outside of the internal open-controlled gate's definition.
Note: in 7432b6b I have disabled one of the newly added MCMT tests; I am trying to decide whether it points to a real bug or whether HLS is accidentally doing something too smart. Please note a small follow-up PR #13369 that ports QubitTracker and QubitContext to Rust.
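The clean/dirty bookkeeping described above can be illustrated with a toy model. ToyQubitTracker below is my own sketch, not the real QubitTracker/QubitContext classes from this PR, so all names and signatures here are assumptions:

```python
# Illustrative toy version of the clean/dirty bookkeeping described above
# (ToyQubitTracker is my own sketch, not the real QubitTracker/QubitContext):
# every qubit starts clean, becomes dirty once a gate touches it, and a
# synthesis routine can ask which global qubits are still clean ancillas.

class ToyQubitTracker:
    def __init__(self, num_qubits):
        self.dirty = [False] * num_qubits

    def use(self, qubits):  # mark qubits dirty when an operation acts on them
        for q in qubits:
            self.dirty[q] = True

    def clean_ancillas(self, exclude=()):
        banned = set(exclude)
        return [q for q, d in enumerate(self.dirty) if not d and q not in banned]

tracker = ToyQubitTracker(6)
tracker.use([1, 3])  # e.g. h on qubit 1 and cz on (1, 3) inside the block
print(tracker.clean_ancillas(exclude=[1, 3]))  # [0, 2, 4, 5]
```

This mirrors the first example in the summary: the inner block acts on qubits 1 and 3 of the 6-qubit circuit, so the remaining untouched qubits stay available as clean ancillas for later gates.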
gharchive/pull-request
2024-09-30T10:24:35
2025-04-01T06:37:28.074128
{ "authors": [ "alexanderivrii" ], "repo": "Qiskit/qiskit", "url": "https://github.com/Qiskit/qiskit/pull/13240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
656202312
Update coverage rustflags This commit updates the flags used for generating coverage data with grcov, based on the latest README for grcov. [1] The last update to these flags, in #72, removed a flag that was being removed from Rust and causing the job to fail. However, that commit failed to add equivalent flags which would perform the same functionality. This commit fixes that oversight, so we should have more reliable coverage collection. [1] https://github.com/mozilla/grcov/blob/master/README.md Coverage increased (+2.7%) to 88.235% when pulling b04d177b84b25ba180ec2987164c8f2f001a91fc on mtreinish:update-rustflags-for-coverage into a9dcb8d3f376b1971e48c68fbf6b2c4aa084c67a on Qiskit:master.
gharchive/pull-request
2020-07-13T22:55:31
2025-04-01T06:37:28.077568
{ "authors": [ "coveralls", "mtreinish" ], "repo": "Qiskit/retworkx", "url": "https://github.com/Qiskit/retworkx/pull/99", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
323145950
Bugfix for Search child items functionality (task #5956) Updated the search child items logic to match the new parsed lists structure. Codecov Report Merging #540 into master will decrease coverage by 0.09%. The diff coverage is 0%.

@@            Coverage Diff            @@
##             master    #540     +/-  ##
===========================================
- Coverage     27.44%   27.35%   -0.1%
- Complexity      972      976     +4
===========================================
  Files            88       88
  Lines          3334     3345    +11
===========================================
  Hits            915      915
- Misses         2419     2430    +11

Impacted Files (Coverage Δ, Complexity Δ):
- ...ent/Plugin/Search/Model/ChildListItemsListener.php: 0% <0%> (ø), 21 <0> (ø) :arrow_down:
- src/ScheduledJobs/Jobs/CakeShellJob.php: 0% <0%> (ø), 8% <0%> (+4%) :arrow_up:

Continue to review the full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 1597472...662d7ff.
gharchive/pull-request
2018-05-15T09:53:19
2025-04-01T06:37:28.092934
{ "authors": [ "codecov-io", "georgeconstantinou" ], "repo": "QoboLtd/project-template-cakephp", "url": "https://github.com/QoboLtd/project-template-cakephp/pull/540", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1961270931
Update examples Add some minimal narration to the examples, and look into converting them into mkdocs-gallery based examples to serve on the website. These can live in the Tutorial or References section, under an "Examples" sub-section. Also, make sure this works narratively with the tutorial docs (which are derived from the examples). Closing as duplicate of #26 -- will track it there
gharchive/issue
2023-10-25T12:13:44
2025-04-01T06:37:28.115169
{ "authors": [ "pavithraes" ], "repo": "Quansight/ragna", "url": "https://github.com/Quansight/ragna/issues/109", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
437647413
Adds Custom Data from US Energy Information Administration (eia.gov) Description Built new custom data class for US Energy Information Administration data. Accommodates hourly, daily, monthly, quarterly, and yearly resolutions. Related Issue Closes #3106 Motivation and Context Adds new custom data class to expand current custom data capabilities and add functionality for users. Requires Documentation Change Not likely, although updates to custom data documentation will be needed if deemed necessary. Additionally, documentation updates may be needed for Python indicators since it has been found that self.EMA returns the same as self.EMA.Current.Value, at least when formatted in logging. This will need further testing. How Has This Been Tested? Tested locally across numerous tickers and resolutions. Tested in QuantConnect Cloud in backtesting and live mode. Types of changes [ ] Bug fix (non-breaking change which fixes an issue) [x] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) [ ] Non-functional change (xml comments/documentation/etc) Checklist: [x] My code follows the code style of this project. [x] I have read the CONTRIBUTING document. [x] All new and existing tests passed. [x] My branch follows the naming convention bug-<issue#>-<description> or feature-<issue#>-<description> We need to create a "No New Data" Signal for custom data types which can address the null return problem. We should also review all existing custom data (quandl) implementations to confirm they're returning OK.
gharchive/pull-request
2019-04-26T12:22:54
2025-04-01T06:37:28.120501
{ "authors": [ "AlexCatarino", "jaredbroad" ], "repo": "QuantConnect/Lean", "url": "https://github.com/QuantConnect/Lean/pull/3136", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
529090495
Minor typo and grammar fixes to comments in PeriodCountConsolidatorBase Description Related Issue Motivation and Context Requires Documentation Change How Has This Been Tested? Types of changes [ ] Bug fix (non-breaking change which fixes an issue) [ ] Refactor (non-breaking change which improves implementation) [ ] Performance (non-breaking change which improves performance. Please add associated performance test and results) [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to change) [x] Non-functional change (xml comments/documentation/etc) Checklist: [x] My code follows the code style of this project. [x] I have read the CONTRIBUTING document. [ ] I have added tests to cover my changes. [ ] All new and existing tests passed. [ ] My branch follows the naming convention bug-<issue#>-<description> or feature-<issue#>-<description> Thank you @RohanTalip
gharchive/pull-request
2019-11-27T03:22:59
2025-04-01T06:37:28.126512
{ "authors": [ "RohanTalip", "jaredbroad" ], "repo": "QuantConnect/Lean", "url": "https://github.com/QuantConnect/Lean/pull/3878", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1522904270
Data Feed Options Don't Include Alt Datasets When we run lean live, the "Select a data feed" prompt should include alt datasets like Tiingo. It currently just shows: (screenshot omitted) Since you guys are at it, it would be nice to also add TwelveData: https://twelvedata.com
gharchive/issue
2023-01-06T17:25:51
2025-04-01T06:37:28.128123
{ "authors": [ "DerekMelchin", "pberto" ], "repo": "QuantConnect/lean-cli", "url": "https://github.com/QuantConnect/lean-cli/issues/261", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1051263963
Deploying algo using DLL For security reasons, I want to deploy my algo in a .NET assembly rather than using a C# code project. Is there any way to do that? Even if I reference the DLL in the project and run it, that should be fine. I'm closing this, since it's not supported.
gharchive/issue
2021-11-11T18:47:42
2025-04-01T06:37:28.129278
{ "authors": [ "omidkrad" ], "repo": "QuantConnect/lean-cli", "url": "https://github.com/QuantConnect/lean-cli/issues/41", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
569652101
about_py: excel link added and misc edits fixes #796 Good PR, nice improvements and fixes. Please see comment above. Thanks, @mtiley. Nice work. Hi @jstac, I kept @mtiley's changes and modified the sentence as you instructed. Thanks @shlff! Perhaps a link to this article could be included somewhere --- a history of how Python became so popular. https://www.welcometothejungle.com/en/articles/btc-python-popular Thanks for the feedback @jstac, I've made those changes. Nice work, thanks @mtiley
gharchive/pull-request
2020-02-24T05:59:27
2025-04-01T06:37:28.138482
{ "authors": [ "jstac", "mtiley", "shlff" ], "repo": "QuantEcon/lecture-source-py", "url": "https://github.com/QuantEcon/lecture-source-py/pull/934", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
324927303
fix random access view regression thanks @PerretB for finding this bug! fixes #856 Awesome, thanks!
gharchive/pull-request
2018-05-21T14:01:49
2025-04-01T06:37:28.140845
{ "authors": [ "SylvainCorlay", "wolfv" ], "repo": "QuantStack/xtensor", "url": "https://github.com/QuantStack/xtensor/pull/862", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2346777954
Fix a small typo in elastic The default value from pymatgen is 0.06. Can one of the admins verify this patch? Thank you, @superstar54! I'll go ahead and merge this in.
gharchive/pull-request
2024-06-11T15:54:45
2025-04-01T06:37:28.142149
{ "authors": [ "Andrew-S-Rosen", "buildbot-princeton", "superstar54" ], "repo": "Quantum-Accelerators/quacc", "url": "https://github.com/Quantum-Accelerators/quacc/pull/2233", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2309454330
epoll: define EpollEvent structure per linux API the userData is sometimes defined as [i32; 2] in Quark code. This doesn't make sense to me, because the data is used in one piece as a u64 anyway. But I may be wrong. However, I don't see where this array is converted into a u64; am I missing something? What's the initial idea behind defining the userData as [i32; 2]? This is tested on both x86 and aarch64; I don't see a regression so far. Also fixes #1230 CC @QuarkContainer Note: @CharlyYu fixed a similar issue recently, but the EpollEvent struct is defined in more than 4 places, differently depending on how they are used, and they all require additional padding for aarch64. I don't want to have 8x struct definitions of the same thing, so I propose to use one unified definition of this struct. The Data field could hold either an fd (i32), a u32, or a u64. If you use Data as an i32 it will use the lower 4 bytes and has no conflict with the earlier defs.
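The layout difference this PR works around can be shown outside of Rust. The ctypes sketch below is my own illustration (not Quark code) of the Linux struct epoll_event: packed on x86-64, naturally aligned (and therefore padded) on aarch64, with one 64-bit data slot that can be viewed as an fd, a u32, or a u64:

```python
import ctypes

# struct epoll_event from <sys/epoll.h> is: uint32_t events; epoll_data_t data;
# On x86-64 Linux it is declared packed (12 bytes); on aarch64 it uses natural
# alignment, which inserts 4 bytes of padding before `data` (16 bytes total).
# Sizes below assume a 64-bit platform.

class EpollData(ctypes.Union):
    # epoll_data_t: one 64-bit payload, viewable as ptr / fd / u32 / u64
    _fields_ = [
        ("ptr", ctypes.c_void_p),
        ("fd", ctypes.c_int),
        ("u32", ctypes.c_uint32),
        ("u64", ctypes.c_uint64),
    ]

class EpollEventPacked(ctypes.Structure):
    _pack_ = 1  # matches the packed x86-64 layout
    _fields_ = [("events", ctypes.c_uint32), ("data", EpollData)]

class EpollEventAligned(ctypes.Structure):
    # matches the naturally aligned aarch64 layout
    _fields_ = [("events", ctypes.c_uint32), ("data", EpollData)]

print(ctypes.sizeof(EpollEventPacked))   # 12
print(ctypes.sizeof(EpollEventAligned))  # 16

# Storing an fd only touches the low 32 bits of the same 64-bit slot, which is
# why a single u64 Data field can cover the fd / u32 / u64 use cases at once.
ev = EpollEventPacked()
ev.data.fd = 7
```

This is exactly why a unified definition needs per-architecture padding: the field offsets differ even though the logical contents are identical.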
gharchive/pull-request
2024-05-22T02:42:43
2025-04-01T06:37:28.151555
{ "authors": [ "shrik3" ], "repo": "QuarkContainer/Quark", "url": "https://github.com/QuarkContainer/Quark/pull/1274", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1577111590
🛑 Biken is down In d35bb5e, Biken (https://biken.quentinp.me) was down: HTTP code: 0 Response time: 0 ms Resolved: Biken is back up in 83dc903.
gharchive/issue
2023-02-09T02:12:35
2025-04-01T06:37:28.181205
{ "authors": [ "QuentinPhilipp" ], "repo": "QuentinPhilipp/uptime", "url": "https://github.com/QuentinPhilipp/uptime/issues/157", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1911211851
Feedback for “Middleware configuration” urlMappingStrategy: rewrite is not working Please provide a reproduction. It works as expected in https://github.com/QuiiBz/next-international/tree/main/examples/next-app
gharchive/issue
2023-09-25T10:50:13
2025-04-01T06:37:28.191460
{ "authors": [ "QuiiBz", "ivanafanasyeu" ], "repo": "QuiiBz/next-international", "url": "https://github.com/QuiiBz/next-international/issues/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2023200608
Flicker with client component I have some flicker on refresh when using client components with translations; it's due to the suspense in I18nProviderClient. Is there a way to fix this flicker? https://github.com/QuiiBz/next-international/assets/655158/90f4363f-6dae-4e90-ad22-a8e29367b62e layout.tsx:

"use client";
import { I18nProviderClient } from "@/locales/client";
import { ReactNode } from "react";

type ProviderProps = {
  locale: string;
  children: ReactNode;
};

export function Provider({ locale, children }: ProviderProps) {
  return (
    <I18nProviderClient locale={locale} fallback={<p>Loading...</p>}>
      {children}
    </I18nProviderClient>
  );
}

"next-international": "^1.1.4", "next": "14.0.3" Sorry for the delay, I started a new job. Could you share a minimal reproduction? I cannot reproduce the issue with the example in the repo: https://github.com/QuiiBz/next-international/tree/main/examples/next-app No worries at all! Congrats on the new job, well deserved! I will actually investigate other things like next-themes to really know what the problem is here. So I will close this, and if the problem still exists after Midday is open-source it will be much easier for me to showcase the issue. Merry Christmas and happy new year! I just confirmed that it was indeed the next-themes provider; moving it down under I18nProviderClient fixed the flicker issue! Hello, doesn't this contradict this part of the documentation? "Move all your routes inside an app/[locale]/ folder. For Client Components, wrap the lowest parts of your app with I18nProviderClient inside a layout" Because the next-themes provider goes on the very top of your app. I'm having the same flicker issue. That depends on whether next-themes can suspend too or not; next-international has a suspense boundary internally, so that might be why it works when next-themes is a child of it. In my case I use next-ui and next-themes; all of them should be inside the next-international provider.
Not only that; I think it applies to every client component. Just want to add a note for everyone: I was looking for this flicker for a full day, HAHA, thank god.
gharchive/issue
2023-12-04T07:27:17
2025-04-01T06:37:28.197104
{ "authors": [ "QuiiBz", "cglacet", "pajarrahmansyah", "pontusab" ], "repo": "QuiiBz/next-international", "url": "https://github.com/QuiiBz/next-international/issues/300", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
310287166
Travis CI We need to configure Travis CI to run the project's tests automatically. We already have the .travis.yml in the project; all that's left is to set up the account. I left the tag there ready, for PHP 7 and up, just edit the .travis.yml. If you have any questions, I have a book on the subject: https://ci.mrprompt.com.br, or I can try to help. OK, leave it to me. Resolved: PHPUnit version 7.0 only runs with PHP 7.1, so I changed the PHPUnit version to 6.5.
gharchive/issue
2018-04-01T02:17:14
2025-04-01T06:37:28.200350
{ "authors": [ "Rctnet", "mrprompt" ], "repo": "QuilhaSoft/OpenCnabPHP", "url": "https://github.com/QuilhaSoft/OpenCnabPHP/issues/52", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
332048249
Bradesco cnab240 With @Rctnet's help I managed to generate the file to be homologated, but right away it failed with an "invalid file" error! I searched online and it seems this is due to the format the txt was saved in: it has to be saved with ANSI encoding. Before sending it for homologation again, I'm checking the fields of the Registro classes, but some doubts came up: Does Registro0 refer to the Header? Does Registro1 refer to the Trailer? Does Registro3P refer to Segment P? There are also Header_lote and Trailer_lote; which Registro classes do they correspond to? I updated Registro0 assuming it is the file Header, but I noticed that the Registro0 of cnab400 has a field called "identificacao_registro", and in this cnab240 I updated, that field is not in the file Header but in the batch Header. This batch Header that I'm updating, which Registro class does it refer to? Regarding the file encoding, on line 95 of exemploRemessa.php I decode from utf8 to save the file; that may be where you convert to ANSI. Yesterday I also published an update that allows changing the end-of-line character; see line 120 of https://github.com/QuilhaSoft/OpenCnabPHP/blob/master/src/resources/B748/remessa/cnab400/Registro0.php for how it's done (needed for the SICREDI bank).
@Rctnet Alterei os campos das classes Registro conforme doc do Bradesco e consegui gerar o arquivo para homologar porém sempre ta recusando o arquivo e dando a mensagem "O registro possui tamanho invalido". É um erro muito vago, não especifica onde é. Você sabe o que pode ser? Qual o tamanho que o arquivo precisa ter? Uma curiosidade, analisando outros arquivos de outros bancos notei que tinha a descrição "REMESSA-TESTE" porém na documentação do Bradesco não vi em nenhum momento menção a isso. O txt que gerei não tem essa descrição. O tamanho nesse caso é da linha, que deve ter 240 caracteres Posta um arquivo aqui para eu dar uma olhada, ou manda no meu email o arquivo que recebi esta sem quebra de liinha Isso que notei tbm, abri ele no notepad++ e só tem uma linha unica Quando gero os dados na tela, aparece a quebra de linha porém quando copio e salvo em txt dae acontece isso, mesmo salvando com codificação ANSI. pode ser na conversão para ANSI os caracteres de fim de linha são inválidos Depois de analisar os arquivos de teste na pasta Samples notei que tinha um comando de saida dos dados que trata o utf8, eu não estava tratando assim a codificação, foi uma falha minha. Depois que inclui esse comando resolveu o problema da quebra de linha, agora vou enviar esse arquivo pro banco pra ver o erro que dá. O comando que usei foi esse na saida dos dados. header("Content-Disposition: attachment;filename=" . $arquivo->getFileName() .";"); echo utf8_decode($arquivo->getText()); // observar a header do seu php para não gerar comflitos de codificação de caracteres Que pena, a empolgação durou pouco kkkkk Deu o mesmo erro :/ Segue arquivo que enviei para validar. REM_237_1234_1.txt
gharchive/issue
2018-06-13T15:29:34
2025-04-01T06:37:28.210325
{ "authors": [ "CristianoMCon", "Rctnet" ], "repo": "QuilhaSoft/OpenCnabPHP", "url": "https://github.com/QuilhaSoft/OpenCnabPHP/issues/77", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1532807640
Set indentation for attributes in new lines There is an option to set indentation for child tags in XMLDocument but no option to set indentation for new line attributes and they are at present calculated at XMLElement#appendAttributesIndentText(Writer). It might be better to provide an option to set indentations instead of calculating them. For example, instead of providing an argument boolean newLineAttributes, you can provide int attributeIndentation whose negative value (e.g. -1) would imply no new line and any positive number would imply the amount of indentation. Thanks. I agree , there a lot of other issues to fix on com.reandroid.xml.* classes. Any updates? Sorry I don't know how I forgot this issue. Keeping newLineAttributes, I have added setAttributesIndentScale(float indentScale) for elements, you can set negative value to pull back. Keeping newLineAttributes, I have added setAttributesIndentScale(float indentScale) for elements, you can set negative value to pull back. It only scales the calculated indentation, it does not set the indentation. I feel like i missed your point. As on the last commit you can set indentation to any position you like, I kept newLineAttributes param to turn off/on indentation. The indentation for attributes must be calculated bc it is anchored with parent element. Can you show me your goal with screenshoot/document ? Or make PR I have ended up writing a concrete implementation of XmlPullParser to handle this. As I said in #18, I don't think XML conversion should be part of the library as it can be handled quite easily using XmlSerializer and Transformer functions.
gharchive/issue
2023-01-13T19:31:48
2025-04-01T06:37:28.316885
{ "authors": [ "MuntashirAkon", "REAndroid" ], "repo": "REAndroid/ARSCLib", "url": "https://github.com/REAndroid/ARSCLib/issues/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1400050738
🛑 RED7 Staff - Portal is down In 826b475, RED7 Staff - Portal (https://portal.red7.ml) was down: HTTP code: 0 Response time: 0 ms Resolved: RED7 Staff - Portal is back up in e81a91a.
gharchive/issue
2022-10-06T17:23:00
2025-04-01T06:37:28.319440
{ "authors": [ "Creaous" ], "repo": "RED7Studios/status", "url": "https://github.com/RED7Studios/status/issues/893", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1321493533
Master installation script / convenience function Is your feature request related to a problem? Please describe. Setting up a fully-fledged R environment in VS Code currently involves several manual steps, as outlined in the README. Users must first install the languageserver package (from R), then add the REditorSupport extension (from VS Code), and then configure several other add-ons from other locations/sources (e.g. radian via pip in the terminal, httpgd from R...). In my experience, this startup rigmarole is tricky for newcomers and, possibly, even for fairly advanced users. (Example: I explicitly had to add radian to my $PATH on both Linux and Mac before it would work.) Once everything is set up, VS Code genuinely makes for an excellent R IDE. But it's harder getting to that point than it needs to be IMO. Describe the solution you'd like One possible solution is to handle all of these installation requirements via a single convenience script or R function. I'm thinking something along the lines of arrow::install_arrow() and reticulate::install_python() (or even JILL and Homebrew). Say this functionality was bundled as an R function; call it vscoder_setup(). Users could then pass arguments regarding the installation and configuration that they'd like... although I'd argue that the full enchilada of recommended steps—including the debugger plugin and radian—be installed as a default. Describe alternatives you've considered Apart from the current manual approach, none. Additional context #718 is somewhat related. Again, however, my goal is to provide a single convenience function that I can give to students and colleagues that sees them off to the races with minimal effort or assumed knowledge on their part. Thanks for considering! I think a generalized approach to doing this is to use VSCode Remote-Containers (or simply devcontainer CLI). @grantmcdermott Another approach is building a Docker image with JupyterLab + R + code-server + ... 
Log into https://demo.jupyter.b-data.ch with your GitHub account and start Image R (verse:latest) + code-server. You may also run registry.gitlab.b-data.ch/jupyterlab/r/verse locally like the official Jupyter docker images: https://github.com/jupyter/docker-stacks#quick-start See also https://gitlab.b-data.ch/explore/projects/topics/JupyterLab. ℹ️ Multi-arch (linux/amd64, linux/arm64/v8) Docker images with code-server + R | Python | Julia + ...
Hopefully more to come in making this a more inviting experience :) Hopefully the work the devs are doing to bring full multiroot support, so that R in VSCode operates just like Python, will make things better, because the Python side is really easy to get going, and R should be the same. And sorry, but maybe I'm an idiot, but I've never gotten vscode-R to work. I follow the README and have all the requirements installed. I specify the rterm and rpath to fixed binary paths in my settings, but nothing works: you cannot attach an R terminal in VSCode, it doesn't do any syntax checking; really, it just doesn't seem connected to R and radian at all. I'm simply not understanding, because honestly, how can anyone do these simple steps wrong? Is it because my R, radian, languageserver, httpgd are in a single conda environment? I remember also trying to install these into system paths via my Linux dnf package manager, but remember it also didn't work at all. How did other people get things to work?
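The one-shot setup helper requested in this thread could be sketched as a dry run that only plans the commands to execute (a hypothetical helper, not part of vscode-R or any released package; the function name, defaults, and package list are assumptions for illustration):

```python
def vscode_r_setup_plan(install_radian=True,
                        r_packages=("languageserver", "httpgd")):
    """Return the shell commands a one-shot setup helper would run.

    A dry-run sketch of the vscoder_setup() idea from the thread: it
    builds the command list instead of executing it, so it can be
    inspected (or fed to subprocess.run) by the caller.
    """
    cmds = []
    if install_radian:
        # radian is distributed on PyPI, hence pip.
        cmds.append(["pip", "install", "-U", "radian"])
    if r_packages:
        # The R-side dependencies are installed in one Rscript call.
        pkgs = ", ".join(f"'{p}'" for p in r_packages)
        cmds.append(["Rscript", "-e", f"install.packages(c({pkgs}))"])
    return cmds
```

The dry-run design keeps the helper transparent for newcomers: they can see exactly what would be installed before anything runs.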
gharchive/issue
2022-07-28T21:12:15
2025-04-01T06:37:28.339342
{ "authors": [ "ElianHugh", "benz0li", "eitsupi", "grantmcdermott", "hermidalc" ], "repo": "REditorSupport/vscode-R", "url": "https://github.com/REditorSupport/vscode-R/issues/1162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1900639743
Handle chain information in explicit form in invoices This is a breaking change since inside the invoice structure we make chain non-optional; plus we add new error cases and make set_chain return Result. Thus I am making this PR against the v0.11 branch. Closes #103 Any reason why it is non-optional instead of optional, as discussed in #103? @fedsten the parameter is optional as was discussed, thus the invoice string is backwards-compatible. However, for each given invoice we always know which network it belongs to (since we are defaulting to mainnet as discussed), thus we must have a non-optional field. Got it, thanks for clarifying. In other words, a wallet will always know and be sure which network a given invoice belongs to. If the invoice has conflicting network information (e.g. an address network that doesn't match the network provided in a parameter), it will error during the parse procedure. But each parsed invoice always knows which network it is valid for. @zoedberg wait, we do not have CI set up for this repo? Oh, I'll fix it. I did this PR originally against the v0.10 branch, which used rust-bitcoin v0.30. Then I understood it's breaking, so I moved to v0.11, which doesn't depend on rust-bitcoin at all and has a different Address type (from bp-std) with a different API. Thus the issue. Will fix. Closing in favor of the solution from https://github.com/RGB-WG/rgb-std/pull/118
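A hedged sketch (in Python, not the actual rgb-wallet Rust code; chain names and the function are illustrative) of the parse behaviour described in this thread: the encoded chain parameter stays optional for backwards compatibility, the parsed invoice always carries a concrete chain (defaulting to mainnet), and conflicting network information is a parse error.

```python
MAINNET = "bitcoin"  # illustrative chain identifier

def parse_invoice_chain(addr_chain, param_chain=None, default=MAINNET):
    """Resolve the chain a parsed invoice belongs to.

    addr_chain  -- network implied by an address in the invoice, or None.
    param_chain -- explicit chain parameter from the invoice string,
                   or None when absent (the backwards-compatible case).
    The parsed structure must end up with a concrete chain, so a
    missing parameter falls back to the default (mainnet), and a
    conflict between the two sources fails the parse.
    """
    declared = param_chain if param_chain is not None else default
    if addr_chain is not None and addr_chain != declared:
        raise ValueError(f"chain mismatch: address says {addr_chain}, "
                         f"invoice says {declared}")
    return declared
```

This mirrors the invariant from the discussion: optional on the wire, non-optional in the parsed structure.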
gharchive/pull-request
2023-09-18T10:31:53
2025-04-01T06:37:28.348741
{ "authors": [ "dr-orlovsky", "fedsten" ], "repo": "RGB-WG/rgb-wallet", "url": "https://github.com/RGB-WG/rgb-wallet/pull/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1943749483
🛑 CEOTR Home Page loads is down In b44df53, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 15b2de1 after 2 minutes.
gharchive/issue
2023-10-15T06:40:41
2025-04-01T06:37:28.430353
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/10712", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1878697053
🛑 CEOTR Home Page loads is down In 0c91271, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 4341d70 after 2 minutes.
gharchive/issue
2023-09-02T13:49:14
2025-04-01T06:37:28.432717
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/11561", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1971391838
🛑 CEOTR Home Page loads is down In 960d72c, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in a00fe7c after 2 minutes.
gharchive/issue
2023-10-31T22:34:07
2025-04-01T06:37:28.435098
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/15360", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1973620665
🛑 CEOTR Home Page loads is down In a685c2b, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 3602b4d after 2 minutes.
gharchive/issue
2023-11-02T06:51:57
2025-04-01T06:37:28.437493
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/15703", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1978026113
🛑 Sensor Tracker login is down In 4e69a74, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in df23ebd after 1 minute.
gharchive/issue
2023-11-05T23:31:56
2025-04-01T06:37:28.440155
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/16694", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2001237748
🛑 Sensor Tracker login is down In 2791adc, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in a2edb29 after 1 minute.
gharchive/issue
2023-11-20T02:21:26
2025-04-01T06:37:28.442774
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/20112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1817291984
🛑 Sensor Tracker login is down In aa7a1b0, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 52 ms Resolved: Sensor tracker login is back up in 26ce07e.
gharchive/issue
2023-07-23T20:50:05
2025-04-01T06:37:28.445140
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/2101", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1817586744
🛑 Sensor Tracker login is down In f70f816, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 46 ms Resolved: Sensor tracker login is back up in f3f2118.
gharchive/issue
2023-07-24T04:22:12
2025-04-01T06:37:28.447606
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/2159", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2013517078
🛑 Sensor Tracker login is down In d5aeb60, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in 4e71703 after 1 minute.
gharchive/issue
2023-11-28T03:04:07
2025-04-01T06:37:28.449946
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/21995", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1827673399
🛑 CEOTR Home Page loads is down In a06f603, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 7c43f68.
gharchive/issue
2023-07-29T21:26:56
2025-04-01T06:37:28.452319
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/3467", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1916272800
🛑 CEOTR Home Page loads is down In 615141c, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in c9fbbbb after 2 minutes.
gharchive/issue
2023-09-27T20:31:53
2025-04-01T06:37:28.454970
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/5826", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1923339947
🛑 CEOTR Home Page loads is down In 6b51877, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 1559d22 after 2 minutes.
gharchive/issue
2023-10-03T05:28:01
2025-04-01T06:37:28.457291
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7347", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1852438987
🛑 CEOTR Home Page loads is down In 86c96ec, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 1e74faf.
gharchive/issue
2023-08-16T03:19:49
2025-04-01T06:37:28.459654
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7466", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853801112
🛑 Sensor Tracker login is down In a0570d2, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 42 ms Resolved: Sensor tracker login is back up in 68f87ec.
gharchive/issue
2023-08-16T19:36:11
2025-04-01T06:37:28.462041
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7633", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1853960296
🛑 Sensor Tracker login is down In 2b8d234, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 107 ms Resolved: Sensor tracker login is back up in af2e687.
gharchive/issue
2023-08-16T22:00:50
2025-04-01T06:37:28.464492
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/7661", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1859493886
🛑 Sensor Tracker login is down In 91205f0, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 502 Response time: 48 ms Resolved: Sensor tracker login is back up in 62bf4a8 after 38 days, 3 hours, 14 minutes.
gharchive/issue
2023-08-21T14:23:28
2025-04-01T06:37:28.467083
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/8775", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1932146943
🛑 Sensor Tracker login is down In 4b2ccf3, Sensor Tracker login (https://stg.ceotr.ca/sensor_tracker) was down: HTTP code: 0 Response time: 0 ms Resolved: Sensor tracker login is back up in a1ed6c6.
gharchive/issue
2023-10-09T01:03:56
2025-04-01T06:37:28.469524
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/8979", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1936746611
🛑 CEOTR Home Page loads is down In fbb831b, CEOTR Home Page loads (https://stg.ceotr.ca CEOTR Home) was down: HTTP code: 0 Response time: 0 ms Resolved: CEOTR Home Page loads is back up in 1302d38 after 2 minutes.
gharchive/issue
2023-10-11T03:56:56
2025-04-01T06:37:28.471886
{ "authors": [ "RKTowse" ], "repo": "RKTowse/upptime", "url": "https://github.com/RKTowse/upptime/issues/9558", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }