1512961389 | Incomplete Description of BULK INSERT FORMAT
[Enter feedback here]
Incomplete Description of BULK INSERT FORMAT. Does not enumerate available file formats and the earliest supporting SQL Server version.
Document Details
⚠ Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.
ID: fe073190-9591-00f0-0d07-8f1fcf16bbc0
Version Independent ID: 769fb224-8abc-29b4-355d-bab3417be8c1
Content: Use a format file to bulk import data - SQL Server
Content Source: docs/relational-databases/import-export/use-a-format-file-to-bulk-import-data-sql-server.md
Service: sql
Sub-service: data-movement
GitHub Login: @rwestMSFT
Microsoft Alias: randolphwest
@ClarkFrazier thanks for your feedback. SQL Server versions would be 2016 and higher. Documentation relating to SQL Server 2005 - 2014 is available in the previous versions archive.
As for available file formats, the data formats are covered in Data formats for bulk import or bulk export (SQL Server).
If you have any more questions, please let us know.
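For readers who land on this thread: the non-XML (bcp-style) format file layout that the article covers can be sketched programmatically. This is a hedged illustration only — the column names, widths, and the "14.0" version line (which I believe corresponds to SQL Server 2017) are assumptions for the example, not details taken from the article.

```python
# Illustrative sketch: emit the text of a minimal non-XML (bcp-style) format
# file for a 3-column character-data file. All names and lengths below are
# hypothetical examples.

def make_format_file(columns, version="14.0"):
    """Build a non-XML format file: version line, column count, then one row
    per field (field order, host file type, prefix length, data length,
    terminator, server column order, column name, collation)."""
    lines = [version, str(len(columns))]
    for i, (name, length, term) in enumerate(columns, start=1):
        # SQLCHAR = character data in the host file; "" = default collation
        lines.append(f'{i} SQLCHAR 0 {length} "{term}" {i} {name} ""')
    return "\n".join(lines) + "\n"

fmt = make_format_file([
    ("PersonID", 7, ","),
    ("FirstName", 25, ","),
    ("LastName", 30, r"\r\n"),   # row terminator written literally
])
print(fmt)
```

The generated text could then be saved and referenced from a `FORMATFILE` argument; see the linked data-formats article for which file formats each SQL Server version supports.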
@ClarkFrazier -- thank you for your considered feedback.
@rwestMSFT -- thank you for clarifying.
I am closing this issue now. You are welcome to @ mention me for any followup. We hope to hear from you again.
| gharchive/issue | 2022-12-28T16:34:27 | 2025-04-01T04:32:47.821746 | {
"authors": [
"ClarkFrazier",
"WilliamAntonRohm",
"rwestMSFT"
],
"repo": "MicrosoftDocs/sql-docs",
"url": "https://github.com/MicrosoftDocs/sql-docs/issues/8435",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2027927195 | Add C# code to "Coding and Debugging the Script Component" page
This is a suggestion rather than an issue. Is it possible to add C# code to the page "Coding and Debugging the Script Component", https://learn.microsoft.com/en-us/sql/integration-services/extending-packages-scripting/data-flow-script-component/coding-and-debugging-the-script-component?view=sql-server-ver16,
and the rest of the SSIS Script Component pages?
@chanmmn -- thank you for your suggestion. Please consider these resources:
SQL Server on Microsoft Q&A
DBA Stack Exchange
Stack Overflow
@chugugrace -- please look into this issue.
@chanmmn We are closing this public issue and tracking it internally.
AB#214733
| gharchive/issue | 2023-12-06T07:59:23 | 2025-04-01T04:32:47.825390 | {
"authors": [
"WilliamAntonRohm",
"chanmmn",
"rwestMSFT"
],
"repo": "MicrosoftDocs/sql-docs",
"url": "https://github.com/MicrosoftDocs/sql-docs/issues/9535",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
326353346 | add comparison table
I feel like the overall T-SQL documentation on boolean expressions involving NULLs has been crying out for a very clear example of how NULLs work under the ANSI_NULLS setting. I've created a table that I think illustrates this well, and put it where I think it would be most appropriate, but it may be better suited to a different location.
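To make the idea concrete for readers of this thread, here is a small illustrative model (not T-SQL) of how comparisons behave under ANSI_NULLS ON, with Python's None standing in for SQL's UNKNOWN:

```python
# Sketch of three-valued comparison semantics under ANSI_NULLS ON:
# any comparison involving NULL evaluates to UNKNOWN (modeled as None),
# never True or False.

NULL = None

def ansi_eq(a, b):
    """SQL '=' under ANSI_NULLS ON: UNKNOWN if either side is NULL."""
    if a is NULL or b is NULL:
        return None
    return a == b

print(ansi_eq(1, 1))        # True
print(ansi_eq(1, NULL))     # None (UNKNOWN -- filtered out by WHERE)
print(ansi_eq(NULL, NULL))  # None (NULL = NULL is not True)
```

Rows whose predicate evaluates to UNKNOWN are excluded by WHERE, which is exactly the behavior a comparison table like the one proposed here helps readers see at a glance.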
@NReilingh : Thanks for your contribution! The author, @edmacauley, has been notified to review your proposed change.
@NReilingh Thanks for your contribution, great idea. #sign-off
| gharchive/pull-request | 2018-05-25T01:47:32 | 2025-04-01T04:32:47.826983 | {
"authors": [
"NReilingh",
"PRMerger9",
"edmacauley"
],
"repo": "MicrosoftDocs/sql-docs",
"url": "https://github.com/MicrosoftDocs/sql-docs/pull/658",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1247071115 | Update what-s-new-in-sql-server-2022.md
Please add the hyperlink to the Parameter sensitive plan optimization section of this documentation (line 91) - https://docs.microsoft.com/sql/relational-databases/performance/parameter-sensitivity-plan-optimization. Or the relative path is - ../relational-databases/performance/parameter-sensitivity-plan-optimization.md
@thesqlsith : Thanks for your contribution! The author(s) have been notified to review your proposed change.
@MikeRayMSFT,
Can you review the proposed changes?
IMPORTANT: When the changes are ready for publication, add a #sign-off comment to signal that the PR is ready for the review team to merge. Thanks.
Please assign this to me, and we'll handle this with an internal PR.
Please assign this to me, and we'll handle this with an internal PR.
Thanks. Should we close this PR?
Please assign this to me, and we'll handle this with an internal PR.
Thanks. Should we close this PR?
No need, the internal PR will handle it for us.
| gharchive/pull-request | 2022-05-24T20:08:10 | 2025-04-01T04:32:47.830806 | {
"authors": [
"PRMerger8",
"jborsecnik",
"rwestMSFT",
"thesqlsith"
],
"repo": "MicrosoftDocs/sql-docs",
"url": "https://github.com/MicrosoftDocs/sql-docs/pull/7610",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
336266110 | Words were mixed up hyphens instead of underscores
Just made it easier to understand by changing dashes (-) to dashes/hyphens (-) and then hyphens (-) to underscores (_).
@dbaduck : Thanks for your contribution! The author, @MashaMSFT, has been notified to review your proposed change.
Hi @dbaduck, thank you so much for your contribution! You're right, this is better. Thanks again :) #sign-off
| gharchive/pull-request | 2018-06-27T14:56:25 | 2025-04-01T04:32:47.832315 | {
"authors": [
"MashaMSFT",
"PRMerger16",
"dbaduck"
],
"repo": "MicrosoftDocs/sql-docs",
"url": "https://github.com/MicrosoftDocs/sql-docs/pull/828",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
353276743 | Unable to build project
I'm not sure if I missed anything on this example.
Here's the error.
Severity Code Description Project File Line Suppression State
Error The "ResolveLibraryProjectImports" task failed unexpectedly.
System.IO.FileNotFoundException: Could not load assembly 'MobileApp1, Version=0.0.0.0, Culture=neutral, PublicKeyToken='. Perhaps it doesn't exist in the Mono for Android profile?
File name: 'MobileApp1.dll'
at Java.Interop.Tools.Cecil.DirectoryAssemblyResolver.Resolve(AssemblyNameReference reference, ReaderParameters parameters)
at Java.Interop.Tools.Cecil.DirectoryAssemblyResolver.Resolve(String fullName)
at Xamarin.Android.Tasks.ResolveLibraryProjectImports.Extract(DirectoryAssemblyResolver res, ICollection`1 jars, ICollection`1 resolvedResourceDirectories, ICollection`1 resolvedAssetDirectories, ICollection`1 resolvedEnvironments)
at Xamarin.Android.Tasks.ResolveLibraryProjectImports.Execute()
at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext() MobileApp1.Android
Error CS0103 The name 'zipCodeEntry' does not exist in the current context MobileApp1 C:\Users\Now Corporation\VSProjects\MobileApp2\MobileApp1\MobileApp1\MobileApp1\WeatherPage.xaml.cs 18 Active
Error CS0103 The name 'zipCodeEntry' does not exist in the current context MobileApp1 C:\Users\Now Corporation\VSProjects\MobileApp2\MobileApp1\MobileApp1\MobileApp1\WeatherPage.xaml.cs 20 Active
Error CS0103 The name 'InitializeComponent' does not exist in the current context MobileApp1 C:\Users\Now Corporation\VSProjects\MobileApp2\MobileApp1\MobileApp1\MobileApp1\WeatherPage.xaml.cs 10 Active
Error CS0103 The name 'getWeatherBtn' does not exist in the current context MobileApp1 C:\Users\Now Corporation\VSProjects\MobileApp2\MobileApp1\MobileApp1\MobileApp1\WeatherPage.xaml.cs 22 Active
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 3b25d4cd-0c3f-e9a6-6256-ea54d0972d4b
Version Independent ID: babd49b7-8e18-13dc-99f2-6783495cb0dc
Content: Learn app-building basics with Xamarin.Forms in Visual Studio - Visual Studio
Content Source: docs/cross-platform/learn-app-building-basics-with-xamarin-forms-in-visual-studio.md
Product: visual-studio-dev15
GitHub Login: @charlespetzold
Microsoft Alias: chape
There is a link to the completed sample at the end of the page - you can view/download it from GitHub.
Same error with this https://github.com/xamarin/xamarin-forms-samples/tree/master/Weather. Something missing?
I was just able to build this sample successfully. Can you provide more information on what platform you are using and what Android SDKs you have installed. We recommend Visual Studio 2019 and the SDK for Android API Level 27 (8.1, Oreo) which is what this sample is currently set to use. Also ensure that you have restored NuGet packages, so that Xamarin.Forms and the Android support library packages are correctly installed.
Our testing has shown the sample working, so I'm going to close this issue. If you still have a problem please create a new issue with additional feedback either on the doc or the sample repo on GitHub.
Thanks
| gharchive/issue | 2018-08-23T08:29:26 | 2025-04-01T04:32:47.841947 | {
"authors": [
"conceptdev",
"patrickjopia",
"wandabwa2004"
],
"repo": "MicrosoftDocs/visualstudio-docs",
"url": "https://github.com/MicrosoftDocs/visualstudio-docs/issues/1429",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
692854866 | Build Tools in container doesn't detect container build failures
I'm experiencing silent failure problems when attempting to build a Docker container with Visual Studio build tools by following instructions on https://docs.microsoft.com/en-us/visualstudio/install/build-tools-container?view=vs-2019 . More specifically, docker reports Successfully built ... even when container building actually fails. This seems like an unnecessarily fragile setup to me.
Steps to reproduce:
Build a container with --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended;includeOptional without increasing the container disk size. This will lead to a build failure, despite Docker build reporting it as a success.
Build a container behind a proxy server. This will lead to a build failure, despite Docker build reporting it as a success.
Would it be possible to make "docker build" fail in the scenarios above?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: b91a4a43-5353-97cf-7532-7644aa3b497f
Version Independent ID: 858d9a5c-72f3-6a85-9270-0a0dd9e0438b
Content: Install Visual Studio Build Tools into a container
Content Source: docs/install/build-tools-container.md
Product: visual-studio-windows
Technology: vs-installation
GitHub Login: @ornellaalt
Microsoft Alias: ornella
Understood. I've now moved the suggestion over to https://developercommunity.visualstudio.com/idea/1175381/build-tools-in-container-doesnt-detect-container-b.html
| gharchive/issue | 2020-09-04T07:56:32 | 2025-04-01T04:32:47.848479 | {
"authors": [
"forderud"
],
"repo": "MicrosoftDocs/visualstudio-docs",
"url": "https://github.com/MicrosoftDocs/visualstudio-docs/issues/5819",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1282662004 | How to load the API in Swagger UI
Able to run the application. Would like to know if there is a way to load the API in Swagger UI.
Tried to set it up as per the instructions from the Swashbuckle section and used the same port mentioned in the article, but it didn't work. May I know if there's a way to do this?
I also tried the ASP.NET Core with Angular template from VS2022, but that didn't help either.
@rjankathi -- Raj, thank you for your feedback. Which docs.microsoft.com article are you following? Meanwhile, please consider these resources:
Visual Studio on Microsoft Q&A
Visual Studio MSDN Forum
Stack Overflow
Reddit
Hello sir @WilliamAntonRohm here is the article
Create an ASP.NET Core app with Angular in Visual Studio:
Please let me know if you need more info.
Thank you @rjankathi for your followup. Please provide more details on where you were stopped in the article steps. This repo is solely for documentation issues, and this may be implementation-related -- please try the help resources I mentioned above.
| gharchive/issue | 2022-06-23T16:26:35 | 2025-04-01T04:32:47.853126 | {
"authors": [
"WilliamAntonRohm",
"rjankathi"
],
"repo": "MicrosoftDocs/visualstudio-docs",
"url": "https://github.com/MicrosoftDocs/visualstudio-docs/issues/8199",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
832267317 | Repo sync for protected CLA branch
The pull request is created from master637514462482295989 to master to fix git push error for protected CLA branch
@ghogen : Thanks for your contribution! The author(s) have been notified to review your proposed change.
| gharchive/pull-request | 2021-03-15T23:04:11 | 2025-04-01T04:32:47.854171 | {
"authors": [
"PRMerger6",
"ghogen"
],
"repo": "MicrosoftDocs/visualstudio-docs",
"url": "https://github.com/MicrosoftDocs/visualstudio-docs/pull/6482",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
337753319 | Text duplication
The second and third paragraphs are almost identical. From the source, it looks like one is for TFS and one is for VSTS, but there is no indication for it in the rendered text.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 973bb49c-7fcb-a953-21ff-82b03f899e73
Version Independent ID: 507b2d8b-9560-9afb-4628-ae562832c7ce
Content: Share content by creating a wiki for your team project - VSTS & TFS
Content Source: docs/project/wiki/wiki-create-repo.md
Product: devops
GitHub Login: @KathrynEE
Microsoft Alias: kaelli
@tsahi - thank you for the feedback. You are correct that the text was duplicated. I'm correcting it now and the change should be live later today.
| gharchive/issue | 2018-07-03T05:44:55 | 2025-04-01T04:32:47.868126 | {
"authors": [
"KathrynEE",
"tsahi"
],
"repo": "MicrosoftDocs/vsts-docs",
"url": "https://github.com/MicrosoftDocs/vsts-docs/issues/1203",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
346696999 | Instructions out of date with new UI
The new UI changes in VSTS result in this article being out of date. The search bar no longer has a drop down to select where to search.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 219de2f0-b98c-3fcf-dc0c-7518e913c944
Version Independent ID: a7e93e0a-fece-4848-9f58-42aeac9700b4
Content: Search the Wiki - VSTS & TFS
Content Source: docs/project/wiki/search-wiki.md
Product: devops
GitHub Login: @KathrynEE
Microsoft Alias: kaelli
@AStoker - Thank you for your feedback. I've added an item to my backlog to update this article.
| gharchive/issue | 2018-08-01T17:32:35 | 2025-04-01T04:32:47.871953 | {
"authors": [
"AStoker",
"KathrynEE"
],
"repo": "MicrosoftDocs/vsts-docs",
"url": "https://github.com/MicrosoftDocs/vsts-docs/issues/1453",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
574918300 | Is it possible to clean the workspace in a Deployment job using Vm resources in an environment?
I've found this community post where someone asked about cleaning their agents in deployment jobs, and a "fix" was released that allowed putting workspace clean settings in a deployment job: https://developercommunity.visualstudio.com/solutions/770657/view.html
Using this syntax is valid, but this fix shipped before VM Environment agents were released. With that syntax, the workspace directory on the agents I'm deploying to doesn't appear to be cleaned. Additionally, none of the steps in a deployment job will accept workspace: clean: all. I would expect that I could specify it as a property of steps or maybe deploy, but both are invalid.
So first of all, this documentation for deployment jobs doesn't include the valid syntax of specifying a workspace: clean setting on a deployment job.
Second of all (and this may be partially product feedback) there appears to be no way currently to clean an agent that's part of an Environment. If that is not supported, the documentation should be updated to specify the workspace: clean settings for a deployment job, and also be explicit about that setting not carrying over to agents.
This same feedback could be left on the Deployments Job page as well: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
But at the very least if this schema page is the documentation for the full YAML Schema, it should include the valid schema of workspace: clean in a deployment job.
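For context, a hedged sketch of the job-level setting discussed here, applied to a deployment job targeting a VM environment (the job and environment names are hypothetical, and whether the cleanup actually reaches the environment's VM agents is precisely what this issue questions):

```yaml
jobs:
- deployment: DeployWeb              # hypothetical deployment job name
  environment:
    name: my-vm-environment          # hypothetical VM environment
    resourceType: VirtualMachine
  workspace:
    clean: all                       # the job-level setting discussed above
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo deploying
```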
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 7458d5b2-1bb7-7d87-7ce6-2b0c430b0155
Version Independent ID: edf5bbd3-a3f0-5426-e985-31290f8afd05
Content: YAML schema - Azure Pipelines
Content Source: docs/pipelines/yaml-schema.md
Product: devops
Technology: devops-cicd
GitHub Login: @steved0x
Microsoft Alias: sdanie
@FISHMANPET -- Peter, thank you for your question. You may find an answer here:
Azure DevOps Support Bot
Azure DevOps on Stack Overflow
@steved0x -- Steve, please look into this issue.
#reassign ramimsft
| gharchive/issue | 2020-03-03T20:22:46 | 2025-04-01T04:32:47.879728 | {
"authors": [
"FISHMANPET",
"WilliamAntonRohm",
"steved0x"
],
"repo": "MicrosoftDocs/vsts-docs",
"url": "https://github.com/MicrosoftDocs/vsts-docs/issues/7480",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
520195490 | Corrections for work item operations permissioning
PCA or TFA permissions will not take precedence over others for work item operations, such as deletion. Any Deny inherited from group membership will trump these Admin rights.
Edited all references.
@KimPlausible : Thanks for your contribution! The author(s) have been notified to review your proposed change.
| gharchive/pull-request | 2019-11-08T19:32:58 | 2025-04-01T04:32:47.881468 | {
"authors": [
"KimPlausible",
"PRMerger14"
],
"repo": "MicrosoftDocs/vsts-docs",
"url": "https://github.com/MicrosoftDocs/vsts-docs/pull/6312",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1325842993 | Environment list duplicate tags (resourceReferences)
According to the docs Environments - List
GET https://dev.azure.com/{organization}/{project}/_apis/distributedtask/environments/{environmentId}?expands=resourceReferences&api-version=7.1-preview.1
Should return the environment's info, including each resource's tags, which it does, but it repeats the same tag instead of enumerating them.
URI entered:
https://dev.azure.com/redacted/redacted/_apis/distributedtask/environments/62?expands=resourceReferences&api-version=7.1-preview.1
JSON returned:
{"resources":[{"tags":["app-nlb1","app-nlb1","app-nlb1","app-nlb1"],"id":123,"name":"WVM0TDAPPWB1","type":"virtualMachine"},{"tags":["app-nlb2","app-nlb2","app-nlb2","app-nlb2"],"id":124,"name":"WVM0TDAPPWB2","type":"virtualMachine"}],"id":62,"name":"Dev","description":"Automatically created environment","createdBy":"redacted","createdOn":"2022-06-03T15:01:25.7033333Z","lastModifiedBy":"redacted","lastModifiedOn":"2022-06-03T15:10:50.06Z","project":"redacted"}
As you can see, the tags are duplicated: (for ..WB1) app-nlb1, app-nlb1, app-nlb1, app-nlb1
But in reality, the tags are: (for ..WB1) app, app-nlb1, nlb1, node1
Expected result:
{"resources":[{"tags":["app","app-nlb1","nlb1","node1"],"id":123,"name":"WVM0TDAPPWB1","type":"virtualMachine"},{"tags":["app","app-nlb2","nlb2","node2"],"id":124,"name":"WVM0TDAPPWB2","type":"virtualMachine"}],"id":62,"name":"Dev","description":"Automatically created environment","createdBy":"redacted","createdOn":"2022-06-03T15:01:25.7033333Z","lastModifiedBy":"redacted","lastModifiedOn":"2022-06-03T15:10:50.06Z","project":"redacted"}
I tried:
api-version=7.1-preview.1
api-version=6.0-preview.1
Same results
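Until the API is fixed, a client-side workaround can at least collapse the duplicates. This sketch (the function name is mine, not part of any SDK) dedupes each resource's tags in a response payload shaped like the one above; note it cannot recover tags the API never returned (e.g. "app" or "node1"):

```python
# Workaround sketch: deduplicate each resource's "tags" list in an
# Environments API response, preserving the original order.

def dedupe_resource_tags(payload):
    for resource in payload.get("resources", []):
        seen = []
        for tag in resource.get("tags", []):
            if tag not in seen:
                seen.append(tag)
        resource["tags"] = seen
    return payload

sample = {"resources": [
    {"tags": ["app-nlb1", "app-nlb1", "app-nlb1", "app-nlb1"],
     "id": 123, "name": "WVM0TDAPPWB1", "type": "virtualMachine"},
]}
print(dedupe_resource_tags(sample)["resources"][0]["tags"])  # ['app-nlb1']
```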
@patware Thanks for reporting; that looks like a bug. I tried to reproduce it but didn't find any environment with tags. I will forward your feedback to the team owning this API, though.
| gharchive/issue | 2022-08-02T13:16:30 | 2025-04-01T04:32:47.886515 | {
"authors": [
"nechvatalp",
"patware"
],
"repo": "MicrosoftDocs/vsts-rest-api-specs",
"url": "https://github.com/MicrosoftDocs/vsts-rest-api-specs/issues/560",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
365556298 | update formatting, 'Run', 'F5' made bold
'F5' and 'Run' on line 52 bold to match the formatting on the rest of the page
:white_check_mark: Validation status: passed
| File | Status | Preview URL | Details |
|---|---|---|---|
| docs/train-model-vs-tools-ai.md | :white_check_mark: Succeeded | View | |
For more details, please refer to the build report.
Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
| gharchive/pull-request | 2018-10-01T17:08:58 | 2025-04-01T04:32:47.890286 | {
"authors": [
"blakephillips",
"opbld31"
],
"repo": "MicrosoftDocs/windows-ai-docs",
"url": "https://github.com/MicrosoftDocs/windows-ai-docs/pull/30",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
386499314 | Parameters, causes and troubleshooting
Add parameters description, possible causes and troubleshooting steps.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 64521c99-6b84-0b52-ef96-efccf3b46d0a
Version Independent ID: 418df85d-a861-a2ed-29f8-6d37d8bc1b1d
Content: Bug Check 0x18B SECURE_KERNEL_ERROR - Windows drivers
Content Source: windows-driver-docs-pr/debugger/bug-check-0x18b--secure-kernel-error.md
Product: windows-hardware
GitHub Login: @DOMARS
Microsoft Alias: domars
Note that this kind of non-documentation is worse than useless. If you don’t have resources to add documentation for 100s of defined bug checks, don’t define 100s of bug checks.
@LuigiBruno - I have opened an internal work item to track this work, so closing this issue for now.
As I have mentioned before, we use a process to prioritize the most frequent bug checks out of the hundreds that are currently defined. With our limited documentation team resources, it is likely that we will only be able to do this work for a small sub set of all of the possible stop codes.
Thank you for your interest in improving our stop code docs.
| gharchive/issue | 2018-12-01T21:26:27 | 2025-04-01T04:32:47.895151 | {
"authors": [
"DOMARS",
"LuigiBruno",
"tinus-github"
],
"repo": "MicrosoftDocs/windows-driver-docs",
"url": "https://github.com/MicrosoftDocs/windows-driver-docs/issues/1119",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
476072379 | What are the MBIM messages to perform the operation for a Consumer eUICC?
Hi Microsoft Experts,
As you know, for a Consumer eUICC, a user requires support for the following interactions with the eUICC:
Profile downloading and installation
Enable a profile
Disable a profile
Delete a profile
eUICC memory reset ( to delete all profiles)
Profile query list
Nicknaming
I don't see that Microsoft defines any dedicated MBIM CIDs for these operations; did I miss anything? Or can these operations be done with the CIDs present on this page? Please help clarify.
Thanks
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: c17a35ca-60a6-fd9f-e171-8413f8ea4604
Version Independent ID: 7516b9db-e539-19d7-9be6-1140f6141404
Content: MB low level UICC access - Windows drivers
Content Source: windows-driver-docs-pr/network/mb-low-level-uicc-access.md
Product: windows-hardware
Technology: windows-devices
GitHub Login: @duncanmacmichael
Microsoft Alias: dumacmic
Hi @lchen1234, good question. Between this page and the MB UICC application and file system access topic, that's what we currently have to offer for MBIM information related to UICC operations. Let me see what I can find out about your eUICC questions.
Hi @lchen1234, thanks for waiting. I've gotten an answer from the MBB team, which I'll summarize here:
The CIDs that we expose at the MBIM level are for facilitating Windows's low-level (APDU-level) communication with the eUICC module, meaning the logical operations you're looking for here are not supported at this low-level interface. Windows with eUICC support ships with built-in local profile assistant (LPA) functionality for these types of operations.
Basically, programming at this level is not appropriate for these user interactions; that is handled by higher-level functionality that is built into Windows. Therefore, there is nothing to document here as far as MBIM CIDs.
Hope this helps.
| gharchive/issue | 2019-08-02T08:56:27 | 2025-04-01T04:32:47.902723 | {
"authors": [
"duncanmacmichael",
"lchen1234"
],
"repo": "MicrosoftDocs/windows-driver-docs",
"url": "https://github.com/MicrosoftDocs/windows-driver-docs/issues/1700",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
331350802 | Missing entry for 0x61949
Received value of 0x61949 in bugcheck. This value is undocumented, what does it mean?
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: de955911-dfbf-33b3-778a-767a5a578a0d
Version Independent ID: 23fb5cbd-5f7d-f7ca-1622-5885020966c2
Content: Bug Check 0x1A MEMORY_MANAGEMENT
Content Source: windows-driver-docs-pr/debugger/bug-check-0x1a--memory-management.md
Product: windows-hardware
GitHub Login: @DOMARS
Microsoft Alias: domars
Planning to consolidate the bug check 0x1A memory issues and get with the dev team this week.
@LarryK348 - Information on the 0x61949 parameter is now included in the topic. Thanks for the input on the stop code topics.
| gharchive/issue | 2018-06-11T21:07:49 | 2025-04-01T04:32:47.907174 | {
"authors": [
"DOMARS",
"LarryK348"
],
"repo": "MicrosoftDocs/windows-driver-docs",
"url": "https://github.com/MicrosoftDocs/windows-driver-docs/issues/566",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
410436756 | Update update-a-code-signing-certificate.md
Updated page to include instructions for adding "additional certificates" as the process is the same.
Thanks, @SillyKeith
| gharchive/pull-request | 2019-02-14T18:39:57 | 2025-04-01T04:32:47.908433 | {
"authors": [
"EliotSeattle",
"SillyKeith"
],
"repo": "MicrosoftDocs/windows-driver-docs",
"url": "https://github.com/MicrosoftDocs/windows-driver-docs/pull/1326",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
284993864 | Fix for rule difference
The microsoft.com markdown rules are slightly different than the GitHub markdown rules. My last commit had a glitch on the live page. Sorry.
Thanks @timrprobocom yes, the rendering engines are different, no way you could know or test :)
Still, this process works amazingly well, and quickly, too. My compliments.
Thanks, we like it too. We're still actively migrating content into the Markdown/Git-based system, so let us know if you come across a page you want to update that doesn't have an edit button. A quick way to tell: if the page is on MSDN, it's the old system, and if it's on docs.ms.com, then it's on the new system.
| gharchive/pull-request | 2017-12-28T21:33:18 | 2025-04-01T04:32:47.910931 | {
"authors": [
"tedhudek",
"timrprobocom"
],
"repo": "MicrosoftDocs/windows-driver-docs",
"url": "https://github.com/MicrosoftDocs/windows-driver-docs/pull/300",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
489980004 | Planning worksheet
Can the planning worksheet be provided in some format other than pdf? Excel or Word would be far better alone, or perhaps some sort of simple web form.
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 5739fab2-c6db-a7a9-b0e7-2bf4b85912a7
Version Independent ID: 82c58053-c1d7-d938-1b0f-f3f7f70ada3b
Content: Planning a Windows Hello for Business Deployment
Content Source: windows/security/identity-protection/hello-for-business/hello-planning-guide.md
Product: w10
Technology: windows
GitHub Login: @mapalko
Microsoft Alias: mapalko
@officedocsbot assign @jvsam
As mentioned in the closed issue ticket https://github.com/MicrosoftDocs/windows-itpro-docs/issues/2726, comment https://github.com/MicrosoftDocs/windows-itpro-docs/issues/2726#issuecomment-466835220, modern versions of Word are able to open PDF files for editing.
Thank you for sharing that information @illfated.
@poortom1004 thank you for your feedback. At the moment, that is the only format available for the planning worksheet. Your understanding is appreciated.
How can I request that a word version be provided? Since there's been others asking for the same thing, there's clearly a desire from others for one to be provided. The format that is currently provided just doesn't make much sense.
How can I request that a word version be provided?
@mapalko is the request possible?
@poortom1004 I converted the Planning Worksheet that is on this documentation (pdf was last updated on 2017-07-05). Though these links are not on the doc, some users may stumble upon this specific issue that you have opened (Open and closed issues are available on the Feedback section of this documentation).
https://1drv.ms/w/s!AgqYDMeP_2-2ggzRcQpAUlyCapoU (doc)
https://1drv.ms/w/s!AgqYDMeP_2-2ggvMgGjgc2TnCdDv (docx)
https://1drv.ms/x/s!AgqYDMeP_2-2ghFSsNVcWv4Ku4wq (spreadsheet)
Disclaimer: The converted documents are not owned by Microsoft, please use at your own risk. The links may change without notice. Always refer to the official Planning Worksheet on the doc.
Hello again @poortom1004, it would appear that the pdf file is really the only available format at this time. I am sure the document author has taken note of your suggestion and when a new format becomes available from the Microsoft Download Center, it'll be added to the docs. We will now close this issue. Feel free to re-open or create another issue through the doc's feedback feature, if you have other suggestions or ideas to improve the quality of this documentation.
Thank you again for your feedback.
@officedocsbot close
This link used to point to an Excel sheet, maybe a year or two ago. Not everyone may be using the latest version of Word or have another tool to edit PDFs. Also, the original Excel file used to prepopulate certain cells in the sheet based on selections that were made in other cells. I should have the old version of the sheet somewhere and can find and upload it, if someone needs it.
I checked and found the excel sheet. Apparently, I was mistaken about the prepopulating of stuff. It is a manual entry for everything by referencing the information in this article, so basically any editable format will work I guess. This can be a suggestion though that if an excel sheet can be provided which does prepopulate items based on other selections, it'll make the job of filling this up and coming up with the required config, so much easier. More than that, it can eliminate the possibility of errors while doing this manually! :)
| gharchive/issue | 2019-09-05T20:17:31 | 2025-04-01T04:32:47.922085 | {
"authors": [
"illfated",
"jvsam",
"make4fun",
"poortom1004"
],
"repo": "MicrosoftDocs/windows-itpro-docs",
"url": "https://github.com/MicrosoftDocs/windows-itpro-docs/issues/4874",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
495668654 | The wording
"To see that the operation was performed, check “4663(S): An attempt was made to access an object.”" - I think the word "what" or "which" would be more appropriate here.
"To see what/which operation was performed, check “4663(S): An attempt was made to access an object.”"
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 2286d366-a0ef-e224-2b74-782a80a4820e
Version Independent ID: 36ebd64c-e558-26a2-658f-929b87c3990b
Content: 4656(S, F) A handle to an object was requested. (Windows 10)
Content Source: windows/security/threat-protection/auditing/event-4656.md
Product: w10
Technology: windows
GitHub Login: @Dansimp
Microsoft Alias: dansimp
Fair enough, but I think you might have missed the meaning of the preceding sentence. Please notice the text I have highlighted in bold :
This event shows that access was requested, and the results of the request, but it doesn’t show that the operation was performed.
To see that the operation was performed, check “4663(S): An attempt was made to access an object.”
" I think you might have missed the meaning of the preceding sentence." - yes, I have... Agree - please excuse me for the disturbance!
No problem at all. Happy to hear that you find it agreeable.
@e0i : May we get this issue ticket closed?
"May we get this issue ticket closed?" - sure!
Very well, then. Feel free to close this issue whenever you can find the [Close] or [Close and comment] button.
| gharchive/issue | 2019-09-19T09:04:54 | 2025-04-01T04:32:47.929863 | {
"authors": [
"MichaelFirsov",
"illfated"
],
"repo": "MicrosoftDocs/windows-itpro-docs",
"url": "https://github.com/MicrosoftDocs/windows-itpro-docs/issues/4971",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
866447878 | tags causing issue in localized articles
The following English articles contain tags which are not valid .md syntax.
https://docs.microsoft.com/en-us/windows/client-management/mdm/diagnosticlog-csp
https://docs.microsoft.com/en-us/windows/client-management/mdm/provisioning-csp
https://docs.microsoft.com/en-us/windows/client-management/mdm/proxy-csp
https://docs.microsoft.com/en-us/windows/client-management/mdm/win32appinventory-csp
https://docs.microsoft.com/en-us/windows/client-management/mdm/update-csp
When the articles are localized the pages appear messed up because of the tags
https://docs.microsoft.com/ja-jp/windows/client-management/mdm/diagnosticlog-csp
https://docs.microsoft.com/ja-jp/windows/client-management/mdm/provisioning-csp
https://docs.microsoft.com/ja-jp/windows/client-management/mdm/proxy-csp
https://docs.microsoft.com/ja-jp/windows/client-management/mdm/win32appinventory-csp
https://docs.microsoft.com/ja-jp/windows/client-management/mdm/update-csp
Can you please fix the English topics
See the related issue: https://github.com/MicrosoftDocs/windows-itpro-docs/issues/9426
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: a84b842f-150d-f006-0563-db5a0f01a64f
Version Independent ID: 7dd55cbe-abc2-9c33-3fef-843b87f26312
Content: DiagnosticLog CSP - Windows Client Management
Content Source: windows/client-management/mdm/diagnosticlog-csp.md
Product: w10
Technology: windows
GitHub Login: @ManikaDhiman
Microsoft Alias: dansimp
There you go. PR #9452 is available for copy review, comments, and suggestions.
| gharchive/issue | 2021-04-23T21:39:43 | 2025-04-01T04:32:47.938305 | {
"authors": [
"TinaMcN",
"illfated"
],
"repo": "MicrosoftDocs/windows-itpro-docs",
"url": "https://github.com/MicrosoftDocs/windows-itpro-docs/issues/9449",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
463446813 | Microsoft Edge/kiosk mode: broken relative links
Description:
The 4 images in the subsection "Supported configuration types" are supposed to be linked to their image files,
but the relative links have been broken by not containing the needed parent directory dots.
Proposed change:
Add the required parent directory dots to enable the links to point to the files as intended (even if not strictly needed to begin with).
It may not be easy to spot the changes due to the text layout and HTML linking format,
so a meta-diff for the 4 individual link changes should be more useful:
- |**Single-app**<p><a href="/images/Picture1.png" alt="Full-sized view single-app
+ |**Single-app**<p><a href="../images/Picture1.png" alt="Full-sized view single-a
- <p> <p><a href="/images/Picture2.png" alt="Full-sized view single-app
+ <p> <p><a href="../images/Picture2.png" alt="Full-sized view single-a
- | **Multi-app**<p><a href="/images/Picture5.png" alt="Full-sized view multi-app
+ | **Multi-app**<p><a href="../images/Picture5.png" alt="Full-sized view multi-a
- <p> <p><a href="/images/Picture6.png" alt="Full-sized view multi-app
+ <p> <p><a href="../images/Picture6.png" alt="Full-sized view multi-a
Issue ticket reference or closure:
Closes #4275
cc @jvsam (via #4285)
Hi @nenonix and @JohanFreelancer9 for copy review, as per process. Thank you.
Hi @eavena this pull request has been copy edited and is ready for your final review and approval. Please let us know if there are outstanding issues to be resolved prior to merge. Thank you. See: commits and changes.
Hi again @eavena, kindly review this pull request when time permits. Thank you.
Hi again @eavena how are you? Can you please review this PR and merge if there are no issues? Thank you.
Hi @AndreaBarr can you help us with this? Thank you in advance.
Thank you @AndreaBarr for your help.
| gharchive/pull-request | 2019-07-02T21:36:20 | 2025-04-01T04:32:47.943497 | {
"authors": [
"e0i",
"illfated",
"jvsam"
],
"repo": "MicrosoftDocs/windows-itpro-docs",
"url": "https://github.com/MicrosoftDocs/windows-itpro-docs/pull/4325",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
527549536 | Advanced hunting: Kusto query operators versus MT
Description:
As discussed in issue ticket #5463 (KQL table command list translated to German), at least half of the operator names in the table list of common
Kusto query language operators are being translated to German by Machine Translation, making them incorrect and unusable in localized pages.
Also, as pointed out by the MS Docs localization specialist team member Tina McNaboe (@TinaMcN), this also happens in localization to languages other than German, so a generic solution to block these terms from translation would be quite productive, saving time by avoiding more human-translated pages for each affected localization language.
Proposed change:
Add MD code tags (back ticks) around the operators in the table.
issue ticket closure or reference:
Ref. #5463 (it will take a week to confirm that the problem is solved)
@mypil : I assume that the follow-up of this PR belongs to you, based on the originating issue ticket #5463.
#sign-off
To see the visual side of the results from adding the back ticks to the keywords, use the Rich Text Diff view button:
https://github.com/MicrosoftDocs/windows-itpro-docs/pull/5513/files?short_path=88f67eb#diff-88f67ebecdc631b5ff3cd408140db290
@lomayor - PR is ready for final review.
Thank you.
cc: @kenwith
I have another PR that should fix this (and apply other updates). Holding for PM approval. Please keep this pending for now.
@Mypil, just confirmed that I can't publish my PR yet. Merging this for now. Note that the entire library needs to have these backticks added. I will apply those fixes in a later PR.
| gharchive/pull-request | 2019-11-23T12:06:33 | 2025-04-01T04:32:47.948799 | {
"authors": [
"illfated",
"lomayor",
"mypil"
],
"repo": "MicrosoftDocs/windows-itpro-docs",
"url": "https://github.com/MicrosoftDocs/windows-itpro-docs/pull/5513",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1200530139 | Migrate module compatibility list from PS-Docs
This article was created by Joey Aiello to document which modules were compatible with PowerShell 6+. This list needs belongs here in the Windows content and needs to be maintained by the Windows teams.
Docs Build status updates of commit 3d95150:
:white_check_mark: Validation status: passed
| File | Status | Preview URL | Details |
| --- | --- | --- | --- |
| docset/docs-conceptual/winserver2016-ps/module-compatibility.md | :white_check_mark: Succeeded | View (WindowsServer2016-ps) | |
| docset/docs-conceptual/winserver2016-ps/toc.yml | :white_check_mark: Succeeded | View (WindowsServer2016-ps) | |
| docset/docs-conceptual/winserver2019-ps/module-compatibility.md | :white_check_mark: Succeeded | View (WindowsServer2019-ps) | |
| docset/docs-conceptual/winserver2019-ps/toc.yml | :white_check_mark: Succeeded | View (WindowsServer2019-ps) | |
| docset/docs-conceptual/winserver2022-ps/module-compatibility.md | :white_check_mark: Succeeded | View (WindowsServer2022-ps) | |
| docset/docs-conceptual/winserver2022-ps/toc.yml | :white_check_mark: Succeeded | View (WindowsServer2022-ps) | |
For more details, please refer to the build report.
Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report.
For any questions, please:
- Try searching the docs.microsoft.com contributor guides
- Post your question in the Docs support channel
| gharchive/pull-request | 2022-04-11T21:03:25 | 2025-04-01T04:32:47.960113 | {
"authors": [
"opbld32",
"sdwheeler"
],
"repo": "MicrosoftDocs/windows-powershell-docs",
"url": "https://github.com/MicrosoftDocs/windows-powershell-docs/pull/2938",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1956214247 | fix an un-escaped asterisk in rename network adapter
fix the expression matching error
PR Summary
fix a simple typo / error in the rename network adapter file
PR Checklist
[x] Descriptive Title: This PR's title is a synopsis of the changes it proposes.
[x] Summary: This PR's summary describes the scope and intent of the change.
[x] Contributor's Guide: I have read the contributors guide.
[x] Style: This PR adheres to the style guide.
@tiburd
PR has been copyedited and is ready for final review, could you please check and merge? Thanks!
| gharchive/pull-request | 2023-10-23T02:07:52 | 2025-04-01T04:32:47.963547 | {
"authors": [
"scanum",
"yoinked-h"
],
"repo": "MicrosoftDocs/windows-powershell-docs",
"url": "https://github.com/MicrosoftDocs/windows-powershell-docs/pull/3674",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
416767774 | namespaces and usings incomplete for c++ example
copying the c++ example leads to a shitload of errors mostly related to certain things not being defined.
please add a comment with which namespaces and includes should be used
Document Details
⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.
ID: 4ce88b8b-ad32-bcc7-bb9c-834d3cc487e8
Version Independent ID: eef88d4b-6051-95fe-b180-868dd8ff4db7
Content: FileSavePicker Class (Windows.Storage.Pickers) - Windows UWP applications
Content Source: https://cpubwin.visualstudio.com/DefaultCollection/windows-uwp/_git/winrt-api-build?path=%2fwinrt-api-build%2fwindows.storage.pickers.filesavepicker.yml&version=GBlive&_a=contents
Product: uwp
Git Login: mcleanbyron
Microsoft Alias: mcleans
Thanks for leaving your feedback. The code snippets on this page are part of a broader code sample, you can find the required includes in the complete sample code. I'm going to update this page to point to that updated sample and I'll add a note about this to the page.
Thanks, @snellejelle99, and apologies for the long delay between updates.
That code example was C++/CX, and C++/CX has been superseded by C++/WinRT. Since there is a C++/WinRT version of that sample app, I've mentioned that in the topic, kept the link to it, and removed the obsolete C++/CX sample.
Thanks!
-Steve
Don't worry about it. It's only been three years. :")
| gharchive/issue | 2019-03-04T11:58:40 | 2025-04-01T04:32:47.969860 | {
"authors": [
"knicholasa",
"snellejelle99",
"stevewhims"
],
"repo": "MicrosoftDocs/winrt-api",
"url": "https://github.com/MicrosoftDocs/winrt-api/issues/958",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
692482199 | [Pen Events] Isolation of pointer IDs between top-level browsing contexts
To avoid supporting fingerprinting, the spec / explainer should ensure that web pages running in 2 top-level browser contexts (i.e. browser tabs) shouldn't be able to determine that they are on the same machine by the pointer IDs from pen events.
For instance, if the ID was something like 4729283, and that was the same for a single user's pen across all top-level browsing contexts, web pages in 2 tabs that both received events [0] from that pen could reasonably conclude that they are observing the actions of the same user.
Assigning simple ordinal IDs (1, 2, 3) is one way to avoid this problem.
[0] The explainer says that "any connected PenEventTarget will receive pen events only while its relevant window is active", which is good, as this prevents event timestamp fingerprinting -- for the ID fingerprinting scenario, the user would have to use the pen, then switch tabs, then use the same pen again.
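The ordinal-ID mitigation suggested above can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not text from any spec: the class and method names (`PointerIdAssigner`, `idFor`) and the hardware-handle strings are assumptions. Each top-level browsing context keeps its own counter, so the same physical pen maps to unrelated IDs in different contexts.

```javascript
// Hypothetical sketch: per-context ordinal pointer IDs (all names illustrative)
class PointerIdAssigner {
  constructor() {
    this.next = 1;           // context-local ordinal counter
    this.ids = new Map();    // hardware handle -> context-local id
  }
  idFor(hardwareHandle) {
    if (!this.ids.has(hardwareHandle)) {
      this.ids.set(hardwareHandle, this.next++);
    }
    return this.ids.get(hardwareHandle);
  }
}

const ctxA = new PointerIdAssigner(); // one assigner per top-level context
const ctxB = new PointerIdAssigner();

ctxA.idFor('touch-1');              // 1 — some other pointer seen first
console.log(ctxA.idFor('pen-42'));  // 2
console.log(ctxB.idFor('pen-42'));  // 1 — same pen, uncorrelated id
```

A page observing ID 2 in one tab and ID 1 in another can no longer conclude that the events came from the same device.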
Thank you for the issue. I do have the following text in the current explainer:
The pointerId property identifies the pen which triggered the event and should follow the rules for assigning a pointerId in accordance with the procedures in the PointerEvents spec.
I agree with what you've written, but I was trying to avoid writing very specific steps in the explainer. Do you feel that this issue is already adequately addressed in the PointerEvents spec? I think the issue is equally applicable there. Do you agree?
In looking through the PointerEvents spec for applicable language I found these two pieces of guidance in note text:
from here
Each active pointer should have the same id within the scope of the top-level browsing context (as defined by [HTML]). However, there is no such guarantee across multiple top-level browsing contexts.
and from here.
user agents MUST ensure that the pointerId that is assigned remains the same only for the lifetime of the current page, and that any new pointerId values are not predictable (e.g. generated randomly with cryptographically strong randomness), to minimize the possibility of users being uniquely fingerprinted and tracked across different pages.
Ah, I missed that last part of the PointerEvents spec -- I think that would mitigate my concerns, thanks.
| gharchive/issue | 2020-09-03T23:01:58 | 2025-04-01T04:32:47.974449 | {
"authors": [
"BoCupp-Microsoft",
"caraitto"
],
"repo": "MicrosoftEdge/MSEdgeExplainers",
"url": "https://github.com/MicrosoftEdge/MSEdgeExplainers/issues/393",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1084859049 | Change to new sentiment scoring
Lab/Demo: 22-Create-a-search-solution
Changes proposed in this pull request:
Change modify-search to retrieve sentiment label instead of sentiment score.
Change Python web page template to visualize sentiment label instead of score.
Change C# web page to visualize sentiment label instead of score.
Fixed
@GraemeMalcolm could you check if everything's ok now?
Looks good - merged!
| gharchive/pull-request | 2021-12-20T14:31:31 | 2025-04-01T04:32:47.997218 | {
"authors": [
"GraemeMalcolm",
"madiepev"
],
"repo": "MicrosoftLearning/AI-102-AIEngineer",
"url": "https://github.com/MicrosoftLearning/AI-102-AIEngineer/pull/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1325452601 | 04-create-dax-calculations-in-power-bi-desktop.md
English:
Quarter
Current Translation:
季
Suggested Translation:
Quarter
Description:
The name of calculated columns should be English. Global check
Hello contributor,
Thank you for your contribution! We are processing your suggestions and we will update you shortly.
Kind regards,
Microsoft Worldwide Learning Team
Running the bugfix tool and waiting for the result.
| gharchive/issue | 2022-08-02T08:07:39 | 2025-04-01T04:32:48.005718 | {
"authors": [
"HienThuThiDai",
"olprod",
"stonezy123"
],
"repo": "MicrosoftLearning/PL-300-Microsoft-Power-BI-Data-Analyst.zh-tw",
"url": "https://github.com/MicrosoftLearning/PL-300-Microsoft-Power-BI-Data-Analyst.zh-tw/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1161870492 | Create a provisioned account same database name twice
Module: 02
Lab/Demo: 01
Task: 00
Step: 00
Create a provisioned account
Description of issue
Repro steps:
Back in the Data Explorer pane, expand the nothroughputdb database node and then observe the requiredthroughputcontainer container node within the hierarchy. In the Data Explorer pane, expand New Container and then select New Database. In the New Database popup, enter the following values for each setting, and then select OK:
| Setting | Value |
| --- | --- |
| Database id | manualthroughputdb |
| Share throughput across containers | Select this option |
| Database throughput | Manual |
| RU/s | 400 |
Back in the Data Explorer pane, expand the manualthroughputdb database node and then observe the childcontainer container node within the hierarchy.
In the New Database popup, enter the following values for each setting, and then select OK:
We created the database manualthroughputdb in step 16, but then we are asked to create a database with the same name again in step 19.
manualthroughputdb cannot be created again.
@Usamawahabkhan, thank you for bringing this to our attention, label has been corrected to "Use existing"
| gharchive/issue | 2022-03-07T20:10:32 | 2025-04-01T04:32:48.012274 | {
"authors": [
"MScalopez",
"Usamawahabkhan"
],
"repo": "MicrosoftLearning/dp-420-cosmos-db-dev",
"url": "https://github.com/MicrosoftLearning/dp-420-cosmos-db-dev/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
414520296 | (maint) inc.header.php: removes duplicate code -> better parameter overview
This project is fantastic and I really want to push the code to a cleaner and more secure level.
It was really hard for me to get a quick overview of all possible commands.
The code was very verbose and full of duplicated patterns, that has grown naturally over time. My initial goal was to remove unnecessary sudo calls at least to playout_controls.sh.
I decided to do this small refactoring:
it checks GET and POST parameters and puts it into $urlparams variable
all redirects happens in one place
simple commands are defined in one array and handled directly in the logic under it
Next steps:
Simplify remaining if constructs. Move all system calls to an own file.
Hi @leachiM2k
thanks for the enthusiasm in talking about the project and diving into the project. Your perception is correct: "has grown naturally over time". And there are quite a few things that could do with an overhaul.
python for MPD / MPC control
cleanup of code (you are on to that!)
rethinking of global variable management (I made some progress there, but still have to test it)
Your work: have you tested all options and have it running locally? I will try to find some time this week to test it on my local Pi.
Hi @leachiM2k
merged it with the develop branch
Hey @MiczFlor
I've done a dry test with "request-to-command" matching and it seems to work as expected.
What do you exactly mean by "python for MPD / MPC"? Do you want to use python to control MPD instead of a bash script?
There is a lib out there that I use in python to control MPD directly. I use it for a connection of my phoniebox to Amazon Alexa. (Alexa -> MQTT -> Python -> MPD)
There is also a lib for PHP that would allow better parsing and maybe a little faster execution.
Yes, I thought about using python to control mpd. But please tell me about the php lib. I would prefer that.
I am a fan of bash commands. It‘s like the ultimate abstraction. In the light of adding video to Phoniebox, introducing python or php to control mpd directly feels like hard coding it into Phoniebox :)
For PHP there is a lib called "mpd.class.php". Unfortunately you'll find different versions out there, but all with the same concept. I think the most promising one is here: https://github.com/JSurf/mpd.class.php
For Python you can rely on "python-mpd2"
I like bash, but I think it's good for system tasks, like installation and file manipulation. It's really hard to understand the logic behind netcat, so here I would prefer a python script or service.
You should not worry about going away from bash to python. Python is already widely spread in the project. And it allows a more compact code than bash.
| gharchive/pull-request | 2019-02-26T09:48:33 | 2025-04-01T04:32:48.039425 | {
"authors": [
"MiczFlor",
"leachiM2k"
],
"repo": "MiczFlor/RPi-Jukebox-RFID",
"url": "https://github.com/MiczFlor/RPi-Jukebox-RFID/pull/500",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2151438009 | 🛑 headache.or.kr is down
In c0f5bca, headache.or.kr (https://www.headache.or.kr/index.php) was down:
HTTP code: 0
Response time: 0 ms
Resolved: headache.or.kr is back up in 890430b after 10 minutes.
| gharchive/issue | 2024-02-23T16:40:10 | 2025-04-01T04:32:48.044452 | {
"authors": [
"MigraineKR"
],
"repo": "MigraineKR/khs.status",
"url": "https://github.com/MigraineKR/khs.status/issues/177",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2341776255 | 🛑 migrainecollaborative.org is down
In 13a94bd, migrainecollaborative.org (https://migrainecollaborative.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: migrainecollaborative.org is back up in ebbe18f after 5 minutes.
| gharchive/issue | 2024-06-08T18:36:14 | 2025-04-01T04:32:48.047737 | {
"authors": [
"MigraineKR"
],
"repo": "MigraineKR/upptime",
"url": "https://github.com/MigraineKR/upptime/issues/1547",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2520357634 | 🛑 headachemigraine.org is down
In 7351e9a, headachemigraine.org (https://headachemigraine.org/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: headachemigraine.org is back up in 0ec7385 after 5 minutes.
| gharchive/issue | 2024-09-11T17:38:19 | 2025-04-01T04:32:48.050902 | {
"authors": [
"MigraineKR"
],
"repo": "MigraineKR/upptime",
"url": "https://github.com/MigraineKR/upptime/issues/1770",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
61063233 | lint.js file not found
I use latest version of Ternific with latest version (1.2) of Brackets, on Windows 7.
In the developer console, I get lot of errors about a missing file:
Failed to load resource: net::ERR_FILE_NOT_FOUND
/C:/Users/PhiLho/AppData/Roaming/Brackets/extensions/user/ternific/TernWorker.js:42 Uncaught NetworkError: Failed to execute 'importScripts' on 'WorkerGlobalScope': The script at 'file:///C:/Users/PhiLho/AppData/Roaming/Brackets/extensions/user/ternific/libs/tern/plugin/lint.js' failed to load.
and indeed I have no file of this name there.
Can I install it manually to fix that? Or change some setting to skip attempts to load it? Or, better, maybe you can fix the issue? :smile:
Thanks.
I upgraded to 0.8.0, and I still see the message.
There is an improvement, though: it is not repeated on each file change.
So this became a minor issue (I use another plugin for the linter).
@PhiLhoSoft Ah terribly sorry for no response at all! There have been changes around plugin loading which perhaps have resolved your issue. I really don't see this issue. If you are still running into this issue, could you share your .tern-project file?
Hey, I was wondering if I will ever receive an answer! ;-)
No worries, it is only a minor annoyance, and as said, reduced now.
But I fear it is still present in latest version (I just updated right now):
Failed to load resource: net::ERR_FILE_NOT_FOUND -> file:///C:/Users/plhoste/AppData/Roaming/Brackets/extensions/user/ternific/node_modules/tern/plugin/lint.js
Uncaught NetworkError: Failed to execute 'importScripts' on 'WorkerGlobalScope': The script at 'file:///C:/Users/plhoste/AppData/Roaming/Brackets/extensions/user/ternific/node_modules/tern/plugin/lint.js' failed to load. -> /C:/Users/plhoste/AppData/Roaming/Brackets/extensions/user/ternific/TernWorker.js:58
As I wrote, the file is just not there, at the indicated path. In this directory, I have only the following files:
requirejs.js
angular.js
commonjs.js
complete_strings.js
doc_comment.js
es_modules.js
modules.js
node.js
node_resolve.js
So it looks more like a problem of installation (I even removed and reinstalled the plugin) than of settings.
| gharchive/issue | 2015-03-13T13:25:29 | 2025-04-01T04:32:48.057266 | {
"authors": [
"MiguelCastillo",
"PhiLhoSoft"
],
"repo": "MiguelCastillo/Brackets-Ternific",
"url": "https://github.com/MiguelCastillo/Brackets-Ternific/issues/47",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
Hexagon: "Position" when aligning to the band center
When aligning by eye, the "Position" values are 1, 2, 3 ... -1, -2, -3 in each direction.
It is not displayed, but the position of the alignment point is zero.
The "Position" when aligning to the band center likewise has no zero at the moment,
but since a band exists at the alignment position, zero should be displayed as a "Position".
Fixed in v1.8.3.
| gharchive/issue | 2024-05-15T00:56:32 | 2025-04-01T04:32:48.061981 | {
"authors": [
"MihoHarusawa"
],
"repo": "MihoHarusawa/CraftBand",
"url": "https://github.com/MihoHarusawa/CraftBand/issues/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1954719177 | Feature request: Add an option to not apply gray background on top and bottom popup
Hello,
Thank you for the great package. Would it be possible to add an option to not gray the background (on top or bottom popup) so that they can be use as toast.
Hey!
Thank you very much for your comment. The feature has been implemented in the latest patch. Feel free to write other ideas, I'm open to new propositions :)
PS. You can find more SwiftUI packages here - https://github.com/Mijick, and the list is still growing.
Have a great day,
T.K.
| gharchive/issue | 2023-10-20T17:09:29 | 2025-04-01T04:32:48.063860 | {
"authors": [
"FulcrumOne",
"gpfister-oskey"
],
"repo": "Mijick/PopupView",
"url": "https://github.com/Mijick/PopupView/issues/56",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1211094003 | Actual Burn Rate Compensation
If a reactor in a group is falling short on burn rate, try to compensate by ramping up other reactors.
This feature is already implemented; future features such as #198 will extend it with more conditions.
| gharchive/issue | 2022-04-21T14:03:49 | 2025-04-01T04:32:48.067054 | {
"authors": [
"MikaylaFischler"
],
"repo": "MikaylaFischler/cc-mek-scada",
"url": "https://github.com/MikaylaFischler/cc-mek-scada/issues/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
822117470 | fromFormat as the opposite to toFormat
I can't find an easy way to parse a string that was created using toFormat. I'd like to see a fromFormat function that does that (based on current configuration) and additionally an option to configure BigNumber to automatically parse the strings passed to the constructor. Something like this: BigNumber.config({ AUTO_PARSE: true });
Does this exist yet? In my case this is only about ignoring the groupSeparator and reading decimalSeparator as '.'. But it would also have to handle everything else in BigNumber.Format.
Does this work for you?
function removeFormatting ({ fromNumberString, withDecimalPoint = '.' }) {
const rx = new RegExp(`[^${withDecimalPoint}\\d]*`, 'g')
return fromNumberString.replace(rx, '')
}
removeFormatting({ fromNumberString: '123 456 789.12345 6789' })
// 123456789.123456789
It works by creating a regular-expression that removes everything that isn't a digit or a decimal-point.
I don't want to enhance the BigNumber constructor so it can handle toFormat strings, but following shuckster a fromFormat function could be added using, for example
// Assumes '-' is only used as a minus sign.
BigNumber.fromFormat = (stringFromToFormat, formatObject) => {
const sep = formatObject.decimalSeparator;
const str = stringFromToFormat.replace(new RegExp(`[^${sep || '.'}\\d-]+`, 'g'), '');
return new BigNumber(sep ? str.replace(sep, '.') : str);
};
(If this function was internal to the library then obviously it would have access to the global FORMAT object and so it wouldn't need to be passed in.)
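A usage sketch of that helper's core string cleanup, with a hypothetical format object (the `BigNumber` construction itself is elided so the snippet runs without the library installed):

```javascript
// Reversing a toFormat-style string (format object values are assumptions)
const fmt = { decimalSeparator: ',', groupSeparator: ' ' };
const formatted = '1 234 567,89';

const cleaned = formatted
  .replace(new RegExp(`[^${fmt.decimalSeparator}\\d-]+`, 'g'), '') // drop group separators
  .replace(fmt.decimalSeparator, '.');                             // normalize decimal point

console.log(cleaned); // 1234567.89
// new BigNumber(cleaned) would then round-trip the toFormat output
```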
probably should be called fromFormat -> parse
parse('1000.000-e', {format: '', groupSeparator, ...etc})
parse('1000.000-e')
| gharchive/issue | 2021-03-04T13:08:47 | 2025-04-01T04:32:48.072207 | {
"authors": [
"MikeMcl",
"claudemartin",
"povilass",
"shuckster"
],
"repo": "MikeMcl/bignumber.js",
"url": "https://github.com/MikeMcl/bignumber.js/issues/286",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1856279965 | Weird behaviour when using a BackdropFilter
Love the package!
A backdrop filter inside a skeletonizer blurs the whole widget instead of only the background. Example picture:
It only blurs the whole widget when the skeletonizer is enabled; when disabled, everything works as expected.
Minimal sample code that demonstrates the issue:
Skeletonizer(
child: BackdropFilter(
filter: ImageFilter.blur(sigmaX: 10, sigmaY: 10),
child: Container(
decoration: BoxDecoration(
color: Colors.white.withOpacity(0.4),
),
width: 100,
height: 100,
child: Text(
'test test test test',
),
),
),
);
My current work around is to change the sigmaX/Y blur to 0 when the skeletonizer is enabled.
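In code, that workaround might look like the following sketch (the `loading` flag is assumed to come from the app's state, and the usual Flutter / `dart:ui` imports are assumed — this is not an official recommendation):

```dart
Skeletonizer(
  enabled: loading,
  child: BackdropFilter(
    // Skip the blur while skeletons are showing, to avoid blurring the
    // whole widget; restore it once real content has loaded.
    filter: loading
        ? ImageFilter.blur(sigmaX: 0, sigmaY: 0)
        : ImageFilter.blur(sigmaX: 10, sigmaY: 10),
    child: const SizedBox(width: 100, height: 100),
  ),
)
```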
@rienkkk I've been working on some fixes and just published a new release, I tried your code sample and it works just fine, can you confirm if you still have this issue with release 0.7.0?
0.7.0 fixed it! Thanks
| gharchive/issue | 2023-08-18T08:18:29 | 2025-04-01T04:32:48.079146 | {
"authors": [
"Milad-Akarie",
"rienkkk"
],
"repo": "Milad-Akarie/skeletonizer",
"url": "https://github.com/Milad-Akarie/skeletonizer/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1345088727 | Set timeout-minutes for CI
Most jobs complete in about 20 minutes, so anything above 60 minutes can probably be considered a failure. This avoids having to wait 6 hours for a failure to be reported.
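For reference, the change described here is a single key per job in the GitHub Actions workflow file; a minimal sketch, with the job and step names assumed:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 60   # fail after 1 hour instead of the 6-hour default
    steps:
      - uses: actions/checkout@v4
```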
Thanks!
| gharchive/pull-request | 2022-08-20T08:00:28 | 2025-04-01T04:32:48.080125 | {
"authors": [
"MilesCranmer",
"rikhuijzer"
],
"repo": "MilesCranmer/SymbolicRegression.jl",
"url": "https://github.com/MilesCranmer/SymbolicRegression.jl/pull/116",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
705583258 | Configuration Model
Currently Coda embeds conditional compilation logic using PPX mechanisms.
For example: src/lib/consensus/consensus.ml:
[%%import
"/src/config.mlh"]
module Intf = Intf
[%%if
consensus_mechanism = "proof_of_stake"]
include Proof_of_stake
[%%else]
[%%show
consesus_mechanism]
[%%error
"invalid value for \"consensus_mechanism\""]
[%%endif]
module Proof_of_stake = Proof_of_stake
Bazel allows us to eliminate much of this, or rather to move it out of program logic and into build logic, which (IMO) is where it belongs. In this example, consensus.ml could look like this:
module Intf = Intf
include Proof_of_stake
module Proof_of_stake = Proof_of_stake
The build logic would compile this only if consensus_mechanism - a build-time constant - were set to proof_of_stake.
A related example: src/lib/consensus/global_slot.mli, which contains:
[%%import "/src/config.mlh"]
...
[%%ifdef consensus_mechanism]
...some self-contained code defining module Checked...
[%%endif]
Here the conditional code could be moved to a module, say global_slot_consensus.ml. The %ifdef would be replaced with include global_slot_checked. The build logic would generate global_slot_checked.ml, copying global_slot_consensus.ml if the build-time var consensus_mechanism is non-null, otherwise generating and compiling an empty file.
Completely eliminating [%%ifdef consensus_mechanism] in this manner would involve substantial effort, but would have the effect of expressing the consensus/nonconsensus structure of the code on the surface, in the organization of the files (and the build code) rather than the organization of OCaml code within the files.
Other uses of %%ifdef would be simpler. In some cases, in which the conditional code is embedded in a complex expression, replacement may be less feasible.
Other PPX constructs can also be eliminated. For example, src/lib/genesis_constants/genesis_constants.ml contains a bunch of %%inject directives. They could easily be eliminated by having the build logic generate the file during the precompilation (bootstrap) phase of the build. The result in this case would be simpler code, and possibly faster compilation, since the PPX processing step would be eliminated (albeit replaced with more bootstrap processing).
Also, where conditional logic is unavoidable, config.mlh could be replaced by a Configuration module generated by the build system. So instead of [%%ifdef foo ...] we would have if Configuration.foo or something like that. An optimizing compiler could presumably see that Configuration.foo is a constant and optimize out the conditional, but I don't know if the OCaml compiler could do that.
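That generated Configuration module could come out of the same bootstrap step. A hedged sketch of the generator (Python for illustration; the function name and sample variables are hypothetical, not Coda's actual build code):

```python
def emit_configuration_ml(build_vars):
    """Render a Configuration module from build-time variables.

    Maps Python bools/ints/strings to OCaml `let` bindings, so that
    `[%%ifdef foo]` in program logic can become `if Configuration.foo`.
    """
    def ocaml_literal(v):
        if isinstance(v, bool):          # check bool before int
            return "true" if v else "false"
        if isinstance(v, int):
            return str(v)
        return '"%s"' % v
    lines = ["let %s = %s" % (k, ocaml_literal(v))
             for k, v in sorted(build_vars.items())]
    return "\n".join(lines) + "\n"
```

The build system would write the result to configuration.ml before compiling the rest of the tree.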
Related: #6013
Unrelatedly, we've been talking about trying to remove a lot of our conditional compilation anyway -- for example, at this point, we should just stick with Proof_of_stake as our one and only consensus mechanism.
Now that we depend on Snarky_base, which has no native dependencies, we should change the checked modules into functors that take a snarky interface, and then that optcomp can be removed too.
The compile config for selecting a consensus mechanism should probably just be removed now. It existed for historical reasons, to remain abstract over consensus entirely, but as development progressed and we officially settled on our consensus mechanism, this is essentially a noop in our build configuration. Every build profile always has to have this set to proof_of_stake for a successful build right now.
Closing this issue as it has been stale for a long time.
| gharchive/issue | 2020-09-21T13:08:41 | 2025-04-01T04:32:48.120161 | {
"authors": [
"bkase",
"mobileink",
"mrmr1993",
"nholland94",
"shimkiv"
],
"repo": "MinaProtocol/mina",
"url": "https://github.com/MinaProtocol/mina/issues/6073",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
791514496 | Block Log Storage Improvements
We're currently using a simple script that is deployed along each node in our infra that does the following:
Outputs block logs to a separate stream.
Breaks out each block into a new file named by the state hash of the block.
Attempts to upload to Google Cloud Storage if that state hash has not been uploaded yet.
This has a few issues:
If the first node to upload a block has a serialization error, we lose that particular block.
Traversing this data is tedious and requires lots of manual data digging.
Not very observable in its current iteration: no alerts if attempting to upload the block logs fails.
We should come up with improvements or alternatives to address these issues.
@figitaki to add some more details so we can decide on a direction
Here's the proposed directions:
Google Cloud Storage v2
The simplest option, which would not cover all the issues but would be the easiest to implement, would be to update our current solution to have each node store its own block logs in order to prevent data loss. We should also add more logging and reporting to the block log script to improve our observability into the health of the data.
This does not solve our problems with observability and recoverability.
Volume Mount
In order to make loading the block logs into an Archive node instance easier, we could simply write the files to a separate volume that could be attached to an archive container when we need to recover from this data. This would simply extend from the previous solution and would still not provide much improvement in the way of usability for checking data integrity / hygiene.
Postgres Database
Since we already have an existing Postgres instance for the archive node, we could move the block log data into a separate table in that Postgres database, which would allow us to query this data using SQL. While this would provide the greatest improvement to usability of this data, it would require the greatest lift, since we would need to update the recovery path in the actual archive node to support Postgres.
The block logs table would consist of a few metadata fields stateHash, hash, createdAt and the contents as a string.
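A sketch of that table and its write path (illustrative only; Python's sqlite3 is used here as a stand-in for Postgres — the column names are the metadata fields above, everything else is an assumption):

```python
import sqlite3

# Illustrative schema for the proposed block_logs table.
SCHEMA = """
CREATE TABLE IF NOT EXISTS block_logs (
    stateHash TEXT PRIMARY KEY,  -- state hash of the block
    hash      TEXT NOT NULL,
    createdAt TEXT NOT NULL,     -- when the log was captured
    contents  TEXT NOT NULL      -- raw block log as a string
);
"""

def store_block_log(conn, state_hash, block_hash, created_at, contents):
    # Keyed on stateHash, so re-uploads from other nodes cannot
    # clobber an already-stored copy of the same block.
    conn.execute(
        "INSERT OR IGNORE INTO block_logs VALUES (?, ?, ?, ?)",
        (state_hash, block_hash, created_at, contents),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
```

This keeps the first uploaded copy per state hash; a real implementation against Postgres would use INSERT ... ON CONFLICT DO NOTHING instead.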
@deepthiskumar can you please make a proper assignment when time allows.
@deepthiskumar Mina Foundation has arranged to have developers from Granola Systems (https://github.com/orgs/Granola-Team/) help with the archive node. Can @mxnkarou take this issue?
| gharchive/issue | 2021-01-21T21:42:14 | 2025-04-01T04:32:48.126184 | {
"authors": [
"aneesharaines",
"figitaki",
"robinbb",
"shimkiv"
],
"repo": "MinaProtocol/mina",
"url": "https://github.com/MinaProtocol/mina/issues/7596",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
514973819 | add stream_query_entities_fullmetadata
fix #148. The metadata is required to preserve the types.
Excellent! Thank you very much!
| gharchive/pull-request | 2019-10-30T20:49:59 | 2025-04-01T04:32:48.131209 | {
"authors": [
"MindFlavor",
"ctaggart"
],
"repo": "MindFlavor/AzureSDKForRust",
"url": "https://github.com/MindFlavor/AzureSDKForRust/pull/161",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2014092201 | Sankey Chart Card Not Displaying Solar Energy Generation Data
Environment
Home Assistant Version: [e.g., 2023.11.3]
Sankey Chart Card Version: [e.g., 1.18.0]
Describe the issue
I'm using the Sankey Chart Card to visualize my energy flow, including energy consumption and generation. While the card correctly shows consumption, it does not display the energy being generated by my solar panels despite the panels actively generating power.
Steps to Reproduce
Go to the Sankey Chart Card on my dashboard.
Observe the energy flow.
Notice that solar panel generation is missing from the display.
Expected behavior
I would expect the Sankey Chart to show all energy flows, including solar panel generation, as part of the energy distribution.
Actual behavior
The solar panel generation is not displayed on the Sankey Chart, although other energy flows are visible.
Additional information
Here's the code I'm using for the Sankey Chart Card:
height: 200
unit_prefix: ''
round: 1
min_box_height: 3
min_box_distance: 5
show_states: true
show_units: true
sections:
  - entities:
      - type: entity
        children:
          - sensor.total_energy_woonkamer
          - sensor.total_energy_keuken
          - sensor.total_energy_washok
        entity_id: sensor.kwh_meter_3_phase_active_power
        color_on_state: false
      - type: entity
        children:
          - sensor.total_energy_woonkamer
          - sensor.total_energy_keuken
          - sensor.total_energy_washok
        entity_id: sensor.p1_meter_active_power
        color_on_state: false
  - entities:
      - type: entity
        children:
          - sensor.0x70b3d52b60043bf0_power
        entity_id: sensor.total_energy_woonkamer
      - type: entity
        children:
          - sensor.0x70b3d52b6004245a_power
          - sensor.vaatwasser_power
        entity_id: sensor.total_energy_keuken
      - type: entity
        children:
          - sensor.0x70b3d52b600425fd_power
          - sensor.spiegel_power
        entity_id: sensor.total_energy_washok
    sort_group_by_parent: false
  - entities:
      - type: entity
        children: []
        entity_id: sensor.0x70b3d52b60043bf0_power
      - type: entity
        children: []
        entity_id: sensor.0x70b3d52b6004245a_power
      - type: entity
        children: []
        entity_id: sensor.vaatwasser_power
      - type: entity
        children: []
        entity_id: sensor.0x70b3d52b600425fd_power
      - type: entity
        children: []
        entity_id: sensor.spiegel_power
    sort_group_by_parent: false
type: custom:sankey-chart
show_names: true
wide: false
show_icons: true
Attached is a screenshot that clearly shows the solar panels are generating power, yet the Sankey Chart does not reflect this.
I suspect that the solar energy generation data is displayed as a negative value, which could be why it's not being visualized correctly on the Sankey Chart Card.
Only positive numbers are displayed. See min_state in the Readme
Thank you
| gharchive/issue | 2023-11-28T10:13:44 | 2025-04-01T04:32:48.137705 | {
"authors": [
"MindFreeze",
"filoor"
],
"repo": "MindFreeze/ha-sankey-chart",
"url": "https://github.com/MindFreeze/ha-sankey-chart/issues/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
665565223 | feat(ui): words details page
Commit message
feat(ui): new words details page
This is the details page for the new "multi-source entries". It is used for the /w/ route.
The final mockup is https://images-cdn.shimo.im/OC93UAsSIWabDS2m__original ; this PR only implements the Feng edition part. The UI for other sources will be added to the same details page later.
demo:
Feng edition + zingzeu_words: http://yngdieng-dev.mindong.asia/w/CgQ0RkY1
contrib: http://yngdieng-dev.mindong.asia/w/ICQ (the contrib definition details are not done yet)
Part of #261
For ease of understanding, diagrams of the components newly created in this PR are shown below:
Looks great. Seeing the contrib demo is thrilling.
Seeing the contrib demo is thrilling.
lol. it's taken way too long.
@insualk @rovinglight could you review this PR when you have time?
I am going to merge this PR. Feel free to ask more questions later.
| gharchive/pull-request | 2020-07-25T11:01:13 | 2025-04-01T04:32:48.142503 | {
"authors": [
"Guanchishan",
"ztl8702"
],
"repo": "MindongLab/yngdieng",
"url": "https://github.com/MindongLab/yngdieng/pull/440",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
11944676 | Multiworld support
Vault supports multiple worlds since version 1.2.24
This allows accounts to be addressed on a per-world basis, instead of only per-player.
Add support for multiple worlds to Gringotts core
Is this still being considered? We would like to use this in a multiworld environment but it does not work as expected. All items from all worlds will transact in the first vault created, regardless of world. The MultiWorldMoney plugin keeps separate balances straight and money still works per world as designed, but users get startled to see empty vaults in other worlds, plus they can't carry items physically between worlds by design.
Currently there isn't much active development going on. Pull requests will still be accepted happily.
| gharchive/issue | 2013-03-12T20:24:45 | 2025-04-01T04:32:48.173317 | {
"authors": [
"PockyMonstrosity",
"jastice"
],
"repo": "MinecraftWars/Gringotts",
"url": "https://github.com/MinecraftWars/Gringotts/issues/74",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1651538126 | Bounty: URL Parameter Toggles
~As the script is presently written, there are two lines of code script users have to alter in order to change the desired URL parameters. One line detects whether the desired URL Parameters are already present in the video link, while the other appends the URL parameters if they are absent.~
The Privacy Redirector script which I recommend use alongside this script features different toggles in the code to enable and disable not just individual redirects, but whether those redirects pull destinations from a Farside instance or from a pool of editable URLs maintained by the script user.
~At minimum, if anyone is capable of adding such a feature to this script without breaking it, it would be desirable for there to be a single pool of editable URL parameters that can then be applied to both the detect line and the append line so that users do not have to manually edit both lines of code for themselves.~
More complicated would be to recreate the toggles from the Privacy Redirector script so that users can merely toggle on/off desired URL parameters and have them applied to the detect and append lines of the code.
~Should the latter-most option be plausible,~ It may further be of use for users to fall back on a "single URL parameter pool" should the options of default toggles prove undesirable for whatever reason, most likely being reasons beyond this repository's scope such as the addition of new URL parameters to the Invidious project or the deprecation of pre-existing URL parameters.
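The single-pool idea above — one editable parameter string applied to both the detect line and the append line — can be sketched like this (Python for illustration only; the actual userscript is JavaScript, and the parameter values are just examples):

```python
# One editable pool of preferred URL parameters (example values only).
PREFERRED_PARAMS = "local=true&quality=dash"

def apply_preferences(url):
    """Append the preferred parameters unless they are already present."""
    if PREFERRED_PARAMS in url:          # detect step
        return url
    separator = "&" if "?" in url else "?"
    return url + separator + PREFERRED_PARAMS  # append step
```

With one shared string, users edit a single line instead of keeping the detect and append lines in sync by hand.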
After cloning this script to create a Wikipedia Classic Theme script, I stumbled upon another script for the same purpose that may offer the blueprints for both creating a single editable URL parameter string, and allowing for different parameters for the homepage and video URLs (resolving #5 ).
As of the latest update, there is now a single line of URL parameters for users to change.
Special thanks to the old-wikipedia-layout script for the guidance, as well as Privacy Redirector.
The change has also been applied to my own Wikipedia Classic Theme Script.
As toggles for applying different URL parameters is still desirable, I shall leave this issue open, albeit appropriately renamed.
With how drastically the script has matured, I'm less interested in recreating switches à la Privacy Redirector, but this remains an option regardless.
For the sake of usability, and the potential of this script being adapted into a web extension, this will be my next priority.
The code functions fine, and I'd rather not break it needlessly when the "fill in the blank" options work well enough.
If anyone wants to, they can make a pull request. Otherwise, I'm not gonna make this happen.
| gharchive/issue | 2023-04-03T07:57:28 | 2025-04-01T04:32:48.204845 | {
"authors": [
"MintMain21"
],
"repo": "MintMain21/Invidious-Preferences-Userscript",
"url": "https://github.com/MintMain21/Invidious-Preferences-Userscript/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1795477344 | 🛑 Ye is down
In 8ec3a96, Ye ($YE) was down:
HTTP code: 500
Response time: 9830 ms
Resolved: Ye is back up in 5ebface.
| gharchive/issue | 2023-07-09T16:45:50 | 2025-04-01T04:32:48.207264 | {
"authors": [
"Syrup"
],
"repo": "Miouwn/upl",
"url": "https://github.com/Miouwn/upl/issues/1256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1827154587 | 🛑 Ye is down
In 03f2320, Ye ($YE) was down:
HTTP code: 500
Response time: 11272 ms
Resolved: Ye is back up in d065512.
| gharchive/issue | 2023-07-28T22:34:31 | 2025-04-01T04:32:48.209298 | {
"authors": [
"Syrup"
],
"repo": "Miouwn/upl",
"url": "https://github.com/Miouwn/upl/issues/1867",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1888625084 | 🛑 Ye is down
In 14a7a0f, Ye ($YE) was down:
HTTP code: 500
Response time: 7110 ms
Resolved: Ye is back up in 4dcb0e4 after 8 minutes.
| gharchive/issue | 2023-09-09T07:30:11 | 2025-04-01T04:32:48.211288 | {
"authors": [
"Syrup"
],
"repo": "Miouwn/upl",
"url": "https://github.com/Miouwn/upl/issues/3338",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2701519627 | Add OpenStack standalone control plane Helm chart and templates
Resolves #592.
This PR introduces a Helm chart and associated templates to facilitate the deployment of standalone control plane on OpenStack within the Hybrid Multi-Cloud (HMC) platform.
Local Run:
ka get machines
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
openstack-dev-cp-0 openstack-dev openstack-dev-cp-0 openstack:///b52ccc71-566b-4487-a0b8-64b2c08f0e5b Running 8m29s v1.30.5+k0s.0
openstack-dev-cp-1 openstack-dev openstack-dev-cp-1 openstack:///4022efcb-da46-4a99-80f3-2eaf3681238e Running 8m29s v1.30.5+k0s.0
openstack-dev-cp-2 openstack-dev openstack-dev-cp-2 openstack:///87ae87e0-2ac5-4f06-b7ba-970d6dbcab3f Running 8m29s v1.30.5+k0s.0
openstack-dev-md-p27j6-6wfn4 openstack-dev openstack-dev-md-p27j6-6wfn4 openstack:///5c2c51c8-f2c0-48d3-9c8f-ed28db53112a Running 8m30s
openstack-dev-md-p27j6-w6qvx openstack-dev openstack-dev-md-p27j6-w6qvx openstack:///2df39a6d-2507-452f-9a0a-fb3e6a5d2052 Running 8m30s
clusterctl describe cluster openstack-dev -n hmc-system
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/openstack-dev True 5m26s
├─ClusterInfrastructure - OpenStackCluster/openstack-dev
├─ControlPlane - K0sControlPlane/openstack-dev-cp
│ └─3 Machines... True 7m32s See openstack-dev-cp-0, openstack-dev-cp-1, ...
└─Workers
└─MachineDeployment/openstack-dev-md True 110s
└─2 Machines... True 3m43s See openstack-dev-md-p27j6-6wfn4, openstack-dev-md-p27j6-w6qvx
k -n hmc-system get managedclusters.hmc.mirantis.com
NAME READY STATUS
openstack-dev True ManagedCluster is ready
@a13x5 @eromanova Would appreciate it if you could take a look at the PR and merge it soon.
| gharchive/pull-request | 2024-11-28T09:39:37 | 2025-04-01T04:32:48.216244 | {
"authors": [
"bnallapeta"
],
"repo": "Mirantis/hmc",
"url": "https://github.com/Mirantis/hmc/pull/696",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
325999579 | Fix tables column width
Add table column width
Reduce status column width, since it is only one icon
Looks fine, if the tables are still ok after that
pls attach screenshot
@naumvd95 It's not just one screenshot. Besides the bug fix (where PM/AM dropped below the main text), the main improvement is on the cluster details page. You can compare before/after by loading the update-table-layout branch (and running gulp build)
| gharchive/pull-request | 2018-05-24T07:38:59 | 2025-04-01T04:32:48.218109 | {
"authors": [
"ekhomyakova",
"katyafervent",
"naumvd95"
],
"repo": "Mirantis/kqueen-ui",
"url": "https://github.com/Mirantis/kqueen-ui/pull/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
331130274 | Set both mac and vlan during VF reconstruction
This change is
Reviewed 1 of 1 files at r1.
Review status: 0 of 2 LGTMs obtained, and 1 stale
| gharchive/pull-request | 2018-06-11T10:24:06 | 2025-04-01T04:32:48.220383 | {
"authors": [
"jellonek",
"pigmej"
],
"repo": "Mirantis/virtlet",
"url": "https://github.com/Mirantis/virtlet/pull/690",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
611234976 | How to run on Rockchip RK3399?
Hi, have you tried it on RK3399? Does libMNN with OpenCL work on RK3399? I did not successfully run it on RK3399.
Besides, will you release other face recognition models on MNN? For example, arcface (insightface) with a resnet50 or resnet101 backbone?
No RK3399 available; I think you need to use an MNN lib compiled on the RK3399.
Thank you. Will you release other face recognition models on MNN? For example, arcface (insightface) with a resnet50 or resnet101 backbone?
yes, but not now, I will provide later when I am free.
here are the solutions for RK3399 Ubuntu:
https://github.com/alibaba/MNN/issues/426#issuecomment-569717671
Tips:
In main.cpp, add the header: #include <dlfcn.h>
In int main(), add:
auto handle = dlopen("libMNN_CL.so", RTLD_NOW);
In CMakeLists.txt: target_link_libraries(MNN pthread ${CMAKE_DL_LIBS})
I succeeded on RK3399 Ubuntu 18.
First, you should build the right MNN lib version for your 3399 (Ubuntu or Android)
Hi @xuguozhi, do you have any link that explains how to build the right MNN lib version on RK3399? When building libMNN_CL.so, I first cloned MNN from https://github.com/alibaba/MNN, set the OpenCL flag to ON, and then tried to build the lib but got some errors at the end like this: undefined reference to `dlopen'. Should I modify something in the MNN source code?
Besides, I have an Android RK3399 but with a different board, not that Firefly one. Do you have any instructions for flashing Ubuntu 18 to eMMC? Thank you!
if you use Android, please follow this link to build from source: https://www.yuque.com/mnn/en/build_android
or
you can just use the prebuilt lib: https://github.com/alibaba/MNN/releases/download/0.2.1.9/MNN-Android-0.2.1.9.zip
version 0.2.1.9 works fine
Hi @xuguozhi, actually I built libMNN_CL.so on an Ubuntu 18 virtual machine. Did you get errors like undefined reference to `dlopen', `dlclose', ... when building libMNN_CL.so?
Hi @xuguozhi, actually I built libMNN_CL.so on an Ubuntu 18 virtual machine. Did you get errors like undefined reference to `dlopen', `dlclose', ... when building libMNN_CL.so?
No, I did not. I git-cloned the source code on the RK3399 board and built MNN on it.
| gharchive/issue | 2020-05-02T17:19:00 | 2025-04-01T04:32:48.234015 | {
"authors": [
"MirrorYuChen",
"sangdv",
"xuguozhi"
],
"repo": "MirrorYuChen/mnn_example",
"url": "https://github.com/MirrorYuChen/mnn_example/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
118165290 | Multi quote feature?
This feature already exists?
No. You can reply to multiple posts at once but not quote them.
Teach me please? At first I thought I needed to press select in the context menu and select the posts I want to reply to; but is that the only way to respond to multiple posts? Thank you.
It's the only way.
Not too different from the PC version, to be honest. Do you think it's possible to improve the reply system with my idea above?
You want a button to quote all selected posts?
Yes, but I wanted to know your opinion about it. You're the app developer.
I think it's not necessary. You can't select what to quote, so all posts will be quoted. It makes no sense because you'll reply to the entire post in any case, and you can do it with the multi-reply button.
Also there is no place for one more button in the action bar.
Closing this issue then.
Is it just me, or was this feature implemented silently? Because when I press select I have 3 buttons and one of them works to make multi-replies.
You can reply to multiple posts at once but not quote them.
It was implemented long ago.
| gharchive/issue | 2015-11-21T03:07:53 | 2025-04-01T04:32:48.239651 | {
"authors": [
"GiRaFa-SM",
"Mishiranu"
],
"repo": "Mishiranu/Dashchan",
"url": "https://github.com/Mishiranu/Dashchan/issues/21",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
708638223 | adminapi bug
1. The adminapi has a bug: it does not check for the YAML-file configuration mode, which causes all kinds of 500 errors on the APIs; you wouldn't know why without reading the source code.
2. Basically all the APIs call etcd.get(key… directly. In theory they should modify or read the YAML file instead, to support single-machine deployment without a database.
3. A plugin bug: configuring through the adminapi bypasses the dashboard, but the result is not what you would expect; it keeps redirecting.
4. When configuring nodes, CDN-style addressing is not supported. You must write an IP, so docker-compose or k8s must start a dedicated network and every service must have a static IP.
OK, let's ignore these for now; we'll try to reproduce them in 2.0.
| gharchive/issue | 2020-09-25T04:49:28 | 2025-04-01T04:32:48.240954 | {
"authors": [
"Miss-you"
],
"repo": "Miss-you/apisix-book",
"url": "https://github.com/Miss-you/apisix-book/issues/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2074930398 | Adding buttons capability to have Front plate Raw Image and Canvas Rounded Rect dynamically enabled OnHover events
This PR adds the capability for MRTK3 buttons to dynamically enable/disable the Front plate Raw Image and the Canvas Rounded Rect based on OnHover {Enter/Exit} events.
The motivation of this PR is the research done to find bottlenecks in MRTK3 buttons and increase the number of concurrent buttons in a scene before a significant framerate drop (below 60 FPS).
The impact of this change is that MRTK3 scenes will now be able to support ~84% as many buttons as MRTK2 scenes (up from the current ~46%), effectively doubling the number of buttons before a significant framerate drop (also documented).
The philosophy behind this code change is:
We retain the investment previously made in the improved visual appeal of MRTK3 buttons; nothing is erased.
We empower the user to decide which buttons should be rendered in the more performant mode and which should not.
Is it possible to make the docs and videos public?
Right now, it's required to log with a Microsoft account, and I don't have one.
I think we need to use another state than hovering. With hovering you miss the scenario where an interactor is in proximity of many buttons. In this case we should still show the front plate so that the visual indication is shown.
For example, this image shows the scenario I'm talking about (screen shot from video without your change):
Ok, so, which other state do you want me to use?
Is it possible to make the docs and videos public? Right now, it's required to log with a Microsoft account, and I don't have one.
I am afraid not. I removed them to avoid confusions.
I think we need to use another state than hovering. With hovering you miss the scenario where an interactor is in proximity of many buttons. In this case we should still show the front plate so that the visual indication is shown.
For example, this image shows the scenario I'm talking about (screen shot from video without your change):
Ok, so, which other state do you want me to use?
I think this is something we may need to create. How are proximity hover lights working today? I would look into what is controlling that; we can base front plate enablement on whatever is rendering the hover lights.
@keveleigh, @whebertML, @anonymous2585: Thank you all for your feedback and for taking the time to look into this PR. I cannot share the internal docs regarding the benchmarking we did and its findings because they are labeled as "Confidential". However, I can tell you in general terms what we did and what motivated this PR:
1 - We developed a benchmarking tool similar to MRTK2's PerformanceMeasurement tool and we used it to profile UI elements and try to find performance bottlenecks.
2 - Our experiments showed that if we disabled the Front plate Raw Image and Canvas Rounded Rect then we would double the amount of buttons that an MRTK3 scene could support in an HL2 device before falling below a 60 fps rate.
3 - We then thought of a way to give developers an easy way to disable such components without having to do it manually.
Based on the above, we created this PR so that developers don't lose any of the existing behaviors or visual appeal, yet have an easy way to activate the new behavior and fit more buttons in their UIs before a significant framerate drop occurs.
I hope this gives you more information on the motivation of this PR. Please feel free to reach out if you have any further comments or questions.
Update: I am moving this PR back to draft while I figure out why some tests are failing.
Re-opening this PR as all tests are now passing.
Just a comment to test if my animated gifs are visible:
First:
Second:
In the last commit I have:
Switched to an array of Components to be dynamically enabled/disabled based on proximity.
Added a new scene to demonstrate the new functionality. The new scene, CanvasExampleWithDynamicComponents.unity, is a clone of CanvasExample.unity scene but each Action Button, Hero Button, and List Item has dynamic canvas' CanvasElementRoundedRects and dynamic FrontPlate's RawImage Components as those were the target Components to make dynamic in the original PR.
Updated UnityTests accordingly.
Code looks great!
Can you please polish the PressableButton inspector window, and move the new events under the "MRTK Events" header?
Done in latest commit, Proximity-hover events are now under "MRTK Events" > "Proximity-Hover Events", for example:
I apologize for this... :(
After reviewing the inspector changes, I just discovered the TimedFlag in MRTK... https://github.com/MixedRealityToolkit/MixedRealityToolkit-Unity/blob/55e2f38f086b517fcfd562e6ab4d51062211079c/org.mixedrealitytoolkit.core/Utilities/TimedFlag.cs#L20C18-L20C27
The TimedFlag class has an Entered and Exited event. We should probably follow the MRTK pattern here. Please collapse LastProximityHoverEntered/Exitted and public bool IsProximityHovered into public TimedFlag IsProximityHovered . Sorry that I didn't know about this earlier.
Using the TimedFlag also means we no longer need the XRProximityHoverEvents.cs. Again sorry about the late breaking discovery.
No worries, I wasn't aware that existed either :) ... and better find it now and change it now than having to change it once it is in Prod and consumed by our customers. I made the change in the latest commit. I also updated Editor's Inspector and the TimedFlag property + events now appear under StatefulInteractable Events as shown next:
UXCore's dependencies need to be updated, since they now depend on a change to Core
Change UXCore's package.json dependency from:
"org.mixedrealitytoolkit.core": "3.1.0",
to
"org.mixedrealitytoolkit.core": "3.2.0",
Done in latest commit.
@keveleigh Thank you for the additional feedback. I've created this issue to track the implementation of your feedback.
| gharchive/pull-request | 2024-01-10T18:22:16 | 2025-04-01T04:32:48.296897 | {
"authors": [
"AMollis",
"anonymous2585",
"ms-RistoRK"
],
"repo": "MixedRealityToolkit/MixedRealityToolkit-Unity",
"url": "https://github.com/MixedRealityToolkit/MixedRealityToolkit-Unity/pull/611",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2416922631 | 🛑 Mailserver is down
In cae59c0, Mailserver (https://outlook.office.com) was down:
HTTP code: 417
Response time: 136 ms
Resolved: Mailserver is back up in 03dfb2e after 11 minutes.
| gharchive/issue | 2024-07-18T16:49:37 | 2025-04-01T04:32:48.348869 | {
"authors": [
"Mixtery"
],
"repo": "Mixtery/mixtery.github.io",
"url": "https://github.com/Mixtery/mixtery.github.io/issues/255",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
276601051 | java.lang.UnsupportedOperationException: Can't convert value at index 1 to dimension: type=0x1d
android.view.InflateException: Binary XML file line #0: Binary XML file line #0: Error inflating class com.mobidevelop.spl.widget.SplitPaneLayout
Caused by: android.view.InflateException: Binary XML file line #0: Error inflating class com.mobidevelop.spl.widget.SplitPaneLayout
Caused by: java.lang.reflect.InvocationTargetException
at java.lang.reflect.Constructor.newInstance0(Native Method)
at java.lang.reflect.Constructor.newInstance(Constructor.java:334)
at android.view.LayoutInflater.createView(LayoutInflater.java:647)
at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:790)
at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:730)
at android.view.LayoutInflater.inflate(LayoutInflater.java:492)
at android.view.LayoutInflater.inflate(LayoutInflater.java:423)
at com.ashomok.imagetotext.ocr_result.tab_fragments.TextFragment.onCreateView(TextFragment.java:63)
at android.app.Fragment.performCreateView(Fragment.java:2611)
at android.app.FragmentManagerImpl.moveToState(FragmentManager.java:1276)
at android.app.FragmentManagerImpl.addAddedFragments(FragmentManager.java:2415)
at android.app.FragmentManagerImpl.executeOpsTogether(FragmentManager.java:2194)
at android.app.FragmentManagerImpl.removeRedundantOperationsAndExecute(FragmentManager.java:2148)
at android.app.FragmentManagerImpl.execPendingActions(FragmentManager.java:2049)
at android.app.FragmentManagerImpl.executePendingTransactions(FragmentManager.java:798)
at android.support.v13.app.FragmentPagerAdapter.finishUpdate(FragmentPagerAdapter.java:151)
at android.support.v4.view.ViewPager.populate(ViewPager.java:1236)
at android.support.v4.view.ViewPager.populate(ViewPager.java:1084)
at android.support.v4.view.ViewPager.onMeasure(ViewPager.java:1614)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.support.design.widget.CoordinatorLayout.onMeasureChild(CoordinatorLayout.java:714)
at android.support.design.widget.HeaderScrollingViewBehavior.onMeasureChild(HeaderScrollingViewBehavior.java:91)
at android.support.design.widget.AppBarLayout$ScrollingViewBehavior.onMeasureChild(AppBarLayout.java:1361)
at android.support.design.widget.CoordinatorLayout.onMeasure(CoordinatorLayout.java:784)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.support.v7.widget.ContentFrameLayout.onMeasure(ContentFrameLayout.java:139)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:685)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.LinearLayout.measureChildBeforeLayout(LinearLayout.java:1514)
at android.widget.LinearLayout.measureVertical(LinearLayout.java:806)
at android.widget.LinearLayout.onMeasure(LinearLayout.java:685)
at android.view.View.measure(View.java:22002)
at android.view.ViewGroup.measureChildWithMargins(ViewGroup.java:6580)
at android.widget.FrameLayout.onMeasure(FrameLayout.java:185)
at com.android.internal.policy.DecorView.onMeasure(DecorView.java:721)
at android.view.View.measure(View.java:22002)
at android.view.ViewRootImpl.performMeasure(ViewRootImpl.java:2410)
at android.view.ViewRootImpl.measureHierarchy(ViewRootImpl.java:1498)
at android.view.ViewRootImpl.performTraversals(ViewRootImpl.java:1751)
at android.view.ViewRootImpl.doTraversal(ViewRootImpl.java:1386)
at android.view.ViewRootImpl$TraversalRunnable.run(ViewRootImpl.java:6733)
at android.view.Choreographer$CallbackRecord.run(Choreographer.java:911)
at android.view.Choreographer.doCallbacks(Choreographer.java:723)
at android.view.Choreographer.doFrame(Choreographer.java:658)
at android.view.Choreographer$FrameDisplayEventReceiver.run(Choreographer.java:897)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6541)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.Zygote$MethodAndArgsCaller.run(Zygote.java:240)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:767)
Caused by: java.lang.UnsupportedOperationException: Can't convert value at index 1 to dimension: type=0x1d
at android.content.res.TypedArray.getDimensionPixelSize(TypedArray.java:730)
at com.mobidevelop.spl.widget.SplitPaneLayout.extractAttributes(SplitPaneLayout.java:86)
at com.mobidevelop.spl.widget.SplitPaneLayout.<init>(SplitPaneLayout.java:74)
... 63 more
Test running failed: Instrumentation run failed due to 'Process crashed.'
text_fragment.xml
<?xml version="1.0" encoding="utf-8"?>
<com.mobidevelop.spl.widget.SplitPaneLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:spl="http://schemas.android.com/apk/res-auto"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
spl:orientation="vertical"
spl:splitterBackground="@color/colorAccent"
spl:splitterPosition="33%"
spl:splitterSize="12dip"
android:id="@+id/split_pane_layout">
<include layout="@layout/split_pane_content" />
</com.mobidevelop.spl.widget.SplitPaneLayout>
Any advice?
Incorporate this project in yours instead:
https://github.com/CodeNinj4/android-split-pane-layout
Thank you, MikiSoft. It helped.
| gharchive/issue | 2017-11-24T12:17:11 | 2025-04-01T04:32:48.382182 | {
"authors": [
"MikiSoft",
"bieliaievays"
],
"repo": "MobiDevelop/android-split-pane-layout",
"url": "https://github.com/MobiDevelop/android-split-pane-layout/issues/13",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1899777212 | Use a combination of PR number and run count for the connector version.
Fixes #1294
Updates the build version to a combination of the PR number and the run number (which increments on every PR run). The build version winds up being 0.0.<prnumber>.<runnumber>.
Change the artifact upload to only upload that specific zip file, instead of a wildcard of everything in the release folder.
I am waiting for the checks to run :-)
| gharchive/pull-request | 2023-09-17T13:03:54 | 2025-04-01T04:32:48.383999 | {
"authors": [
"neilenns",
"tigert"
],
"repo": "MobiFlight/MobiFlight-Connector",
"url": "https://github.com/MobiFlight/MobiFlight-Connector/pull/1295",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
80548597 | Cannot build android with cca using the com.google.cordova.admob plugin and chrome.identity
I'm opening this issue here as well, since I got no answer from the admob plugin developer yet. Issue link
I'm using Google cca to build a Cordova app, but after adding the chrome.identity plugin and Cordova admob pro, I can't build the app anymore. The terminal error says: UNEXPECTED TOP-LEVEL EXCEPTION: com.android.dex.DexException: Multiple dex files define Lcom/google/android/gms/actions/ReserveIntents; (Full error log below)
System Details
Windows
cca v0.7.0
If you need anything else please let me know.
Replication Steps
Install cca
In a terminal: cca create app
cd app
cca platform add android
Open app/www/manifest.json and in the permissions array add: identity e.g. "permissions": ["<all_urls>", "identity"]
cca build android Builds fine.
cca plugin add com.google.cordova.admob
cca build android Fails
Full error log from terminal
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':dexArmv7Debug'.
> com.android.ide.common.internal.LoggedErrorException: Failed to run command:
C:\Users\TsoumiAc\AppData\Local\Android\android-sdk\build-tools\22.0.1\dx.bat --dex --no-optimize --output C:\Users\TsoumiAc\Desktop\app\platforms\android\build\intermediates\dex\armv7\debug --input-list=C:\Users\TsoumiAc\Desktop\app\platforms\android\build\intermediates\tmp\dex\armv7\debug\inputList.txt
Error Code:
2
Output:
UNEXPECTED TOP-LEVEL EXCEPTION:
com.android.dex.DexException: Multiple dex files define Lcom/google/android/gms/actions/ReserveIntents;
at com.android.dx.merge.DexMerger.readSortableTypes(DexMerger.java:596)
at com.android.dx.merge.DexMerger.getSortedTypes(DexMerger.java:554)
at com.android.dx.merge.DexMerger.mergeClassDefs(DexMerger.java:535)
at com.android.dx.merge.DexMerger.mergeDexes(DexMerger.java:171)
at com.android.dx.merge.DexMerger.merge(DexMerger.java:189)
at com.android.dx.command.dexer.Main.mergeLibraryDexBuffers(Main.java:454)
at com.android.dx.command.dexer.Main.runMonoDex(Main.java:303)
at com.android.dx.command.dexer.Main.run(Main.java:246)
at com.android.dx.command.dexer.Main.main(Main.java:215)
at com.android.dx.command.Main.main(Main.java:106)
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 24.141 secs
C:\Users\TsoumiAc\Desktop\app\platforms\android\cordova\node_modules\q\q.js:126
throw e;
^
Error code 1 for command: cmd with args: /s /c "C:\Users\TsoumiAc\Desktop\app\platforms\android\gradlew cdvBuildDebug -b C:\Users\TsoumiAc\Desktop\app\platforms\android\build.gradle -Dorg.gradle.daemon=true"
ERROR building one of the platforms: Error: C:\Users\TsoumiAc\Desktop\app\platforms\android\cordova\build.bat: Command failed with exit code 1
You may not have the required environment or OS to build this project
Error: C:\Users\TsoumiAc\Desktop\app\platforms\android\cordova\build.bat: Command failed with exit code 1
    at ChildProcess.whenDone (C:\Users\TsoumiAc\AppData\Roaming\npm\node_modules\cca\node_modules\cordova\node_modules\cordova-lib\src\cordova\superspawn.js:131:23)
at ChildProcess.emit (events.js:110:17)
at maybeClose (child_process.js:1008:16)
at Process.ChildProcess._handle.onexit (child_process.js:1080:5)
Hi @Knorcedger,
I've seen this a number of times. Assuming you're using Cordova 5, you should fork the admob plugin and replace this line in plugin.xml:
<dependency id="com.google.playservices" url="https://github.com/floatinghotpot/google-play-services" commit="r19" />
with something like this
<framework src="com.google.android.gms:play-services-ads:+" />
Then install the admob plugin from your local drive or pushed fork.
Please note I haven't tested the above, but this is how I've solved a number of dex issues with other plugins.
Good luck!
I tried the suggested framework, but unfortunately the build errors out...
* What went wrong:
A problem occurred configuring root project 'android'.
> Could not resolve all dependencies for configuration ':_debugCompile'.
> Could not find any version that matches com.google.android.gms:play-services-ads:+.
Searched in the following locations:
https://repo1.maven.org/maven2/com/google/android/gms/play-services-ads/maven-metadata.xml
https://repo1.maven.org/maven2/com/google/android/gms/play-services-ads/
Required by:
:android:unspecified
Sounds like you don't have the "Google Repository" installed. You can add it via the Android SDK Manager. (type android into a terminal)
Yes, you are correct.
After installing following extras in Android SDK Manager, the build is successful.
Android Support Repository
Android Support Library
Google Play services
Google Repository
Thanks!
Thank you guys for the help :)
| gharchive/issue | 2015-05-25T14:09:35 | 2025-04-01T04:32:48.396928 | {
"authors": [
"Knorcedger",
"agrieve",
"floatinghotpot",
"gbenvenuti"
],
"repo": "MobileChromeApps/cordova-plugin-chrome-apps-identity",
"url": "https://github.com/MobileChromeApps/cordova-plugin-chrome-apps-identity/issues/3",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
981408857 | new rule: no locked platforms
Describe the new validation rule
The GTFS specification says:
No locked platforms: Each platform (location_type=0 or empty) or boarding area (location_type=4) must be connected to at least one entrance/exit (location_type=2) via some chain of pathways. Stations not allowing a pathway to the outside of the station from a given platform are rare.
_https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#pathwaystxt_
Proposed solution
Build the chain of pathways from/to each platform (location_type=0 or empty) or boarding area (location_type=4), then evaluate whether an entrance/exit (location_type=2) is included in this pathway chain. If not, this is an error.
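Sketched as code, the proposed check is a reachability test over the pathway graph. The snippet below is illustrative only (the validator itself is implemented in Java; the `Location` type and field names here are made up for the sketch):

```swift
struct Location {
    let id: String
    let locationType: Int  // 0/empty = platform, 2 = entrance/exit, 4 = boarding area
}

// Returns true if some chain of pathways (treated as undirected edges)
// connects the given platform to at least one entrance/exit.
func reachesEntranceExit(from platformId: String,
                         pathways: [(from: String, to: String)],
                         locations: [String: Location]) -> Bool {
    // Build an undirected adjacency list from the pathway edges.
    var adjacency: [String: [String]] = [:]
    for edge in pathways {
        adjacency[edge.from, default: []].append(edge.to)
        adjacency[edge.to, default: []].append(edge.from)
    }
    // Breadth-first search starting at the platform.
    var visited: Set<String> = [platformId]
    var queue = [platformId]
    while let current = queue.first {
        queue.removeFirst()
        if locations[current]?.locationType == 2 { return true }
        for next in adjacency[current, default: []] where !visited.contains(next) {
            visited.insert(next)
            queue.append(next)
        }
    }
    return false  // no entrance/exit reachable: generate the error notice
}
```

Running this for every platform (location_type=0 or empty) and boarding area (location_type=4), and reporting an error whenever it returns false, matches the rule as quoted above.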
Error vs warning
This should be an error (violates a requirement of the GTFS spec).
I have an implementation of that rule at Google and I will open-source it soon.
| gharchive/issue | 2021-08-27T16:29:27 | 2025-04-01T04:32:48.401005 | {
"authors": [
"aababilov",
"lionel-nj"
],
"repo": "MobilityData/gtfs-validator",
"url": "https://github.com/MobilityData/gtfs-validator/issues/975",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2548048133 | feat: flex - added forbidden_same_day_booking_field_value notice
Summary
This PR introduces a new validation rule that triggers an ERROR severity notice when the following conditions are met:
booking_type = 1 (Same-day booking)
Any of the following fields are present in the booking rule:
prior_notice_last_day
prior_notice_last_time
prior_notice_start_time
prior_notice_service_id
Expected Behavior
A validation notice will be generated if the conditions listed above are met, flagging the presence of restricted fields in a same-day booking rule.
Example using this test dataset.
Please make sure these boxes are checked before submitting your pull request - thanks!
[x] Run the unit tests with gradle test to make sure you didn't break anything
[ ] Add or update any needed documentation to the repo
[x] Format the title like "feat: [new feature short description]". Title must follow the Conventional Commit Specification(https://www.conventionalcommits.org/en/v1.0.0/).
[x] Linked all relevant issues
[x] Include screenshot(s) showing how this pull request works and fixes the issue(s)
Yay for acceptance tests! @tzujenchanmbd After taking a look at the new error, I think we missed an exception in the spec:
Looks like prior_notice_start day is only Forbidden for booking_type=1 if prior_notice_duration_max is defined. In the feed above, prior_notice_duration_max does not exist, so it should not trigger the error.
Can you confirm before @cka-y updates the logic?
seems like the logic you're mentioning @emmambd is also referenced in https://github.com/MobilityData/gtfs-validator/issues/1827
I think the fix here is to just remove prior_notice_start_time as a forbidden field and have https://github.com/MobilityData/gtfs-validator/issues/1827 and https://github.com/MobilityData/gtfs-validator/issues/1832 manage this use case.
LGTM. It seems #1827, #1832, and #1833 can handle prior_notice_start_time well. We can remove it here.
Notice name, description, and columns displayed also lgtm!
Looks like this has been approved twice - just want to be explicit that the logic change above (removing prior_notice_start_time as a forbidden field) has to be made before this should be merged.
it has now been addressed @emmambd
| gharchive/pull-request | 2024-09-25T13:45:40 | 2025-04-01T04:32:48.410611 | {
"authors": [
"cka-y",
"emmambd",
"tzujenchanmbd"
],
"repo": "MobilityData/gtfs-validator",
"url": "https://github.com/MobilityData/gtfs-validator/pull/1847",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2568619533 | Function Calling support
Hi,
I couldn't find any information in the docs about function calling. Does LightLLM support it or do I need to prompt and parse manually?
@floschne Yes, you need to do it yourself for now. We will support this feature in the future. However, the schedule is not very certain; we are currently improving basic performance and support for constrained output.
Totally understandable, thanks for your answer and your work in general :-)
| gharchive/issue | 2024-10-06T11:50:24 | 2025-04-01T04:32:48.423756 | {
"authors": [
"floschne",
"hiworldwzj"
],
"repo": "ModelTC/lightllm",
"url": "https://github.com/ModelTC/lightllm/issues/552",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
261896327 | Design Aplayer like Zing Mp3 HTML5 Player?
https://mp3.zing.vn/bai-hat/Noi-Thuong-Nhau-Thi-Dung-Lam-Trai-Tim-Em-Dau-Bich-Phuong/ZW80COFO.html
Could APlayer be designed like the Zing Mp3 player, with support for .lrc-style karaoke?
Feel free to make pull requests, thanks.
| gharchive/issue | 2017-10-01T06:46:29 | 2025-04-01T04:32:48.429927 | {
"authors": [
"DIYgod",
"tuannvbg"
],
"repo": "MoePlayer/APlayer",
"url": "https://github.com/MoePlayer/APlayer/issues/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
526639174 | Hide WebTorrent native controls (overlap with Dplayer)
隐藏WebTorrent原生控制条,避免与Dplayer控制条重叠。
环境:Chrome 78.0.3904.97, DPlayer.min.js cb95daa, WebTorrent 0.107.17
Before (when paused):
After (when paused):
I'm having the same issue
| gharchive/pull-request | 2019-11-21T14:44:28 | 2025-04-01T04:32:48.431975 | {
"authors": [
"JacobWennebro",
"tutugreen"
],
"repo": "MoePlayer/DPlayer",
"url": "https://github.com/MoePlayer/DPlayer/pull/639",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2055777598 | 🛑 SearxNG is down
In 715b42f, SearxNG (https://searx.melashri.me) was down:
HTTP code: 500
Response time: 1130 ms
Resolved: SearxNG is back up in 7750e3d after 8 minutes.
| gharchive/issue | 2023-12-25T16:41:48 | 2025-04-01T04:32:48.444597 | {
"authors": [
"MohamedElashri"
],
"repo": "MohamedElashri/monitor",
"url": "https://github.com/MohamedElashri/monitor/issues/672",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
55535054 | Update FizzBuzz.scala
Cambiando el inicio del ciclo a 0
Hola,
Estás tratando de mezclar los branches equivocados. Debes mandar tus cambios a MontealegreLuis:angelrosales en lugar de MontealegreLuis:master.
Para cambiar al branch correcto debes cerrar este PR y abrir uno nuevo.
Gracias
| gharchive/pull-request | 2015-01-26T20:46:18 | 2025-04-01T04:32:48.582856 | {
"authors": [
"MontealegreLuis",
"angelrosales"
],
"repo": "MontealegreLuis/algoritmos",
"url": "https://github.com/MontealegreLuis/algoritmos/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1644610828 | 🛑 Waddle Penguins Island - API is down
In b123750, Waddle Penguins Island - API (https://api.waddlepenguins.me/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Waddle Penguins Island - API is back up in 02b7a00.
| gharchive/issue | 2023-03-28T20:25:51 | 2025-04-01T04:32:48.593065 | {
"authors": [
"chriskermit"
],
"repo": "MoonlightStudiosInt/status",
"url": "https://github.com/MoonlightStudiosInt/status/issues/622",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
146516123 | 17.37 & 38 Missing from guide
Missing from guide
Dang, do my commits just auto merge?
| gharchive/pull-request | 2016-04-07T05:52:24 | 2025-04-01T04:32:48.594043 | {
"authors": [
"Squawk09"
],
"repo": "Mooophy/Cpp-Primer",
"url": "https://github.com/Mooophy/Cpp-Primer/pull/419",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
771437357 | Add a color picking feature and some minor improvements for Dialog UI (Closes: #5)
Description
Adds a new color-picking feature for players to choose their preferred color, along with some minor GUI changes to the Dialog UI required by the added feature. Also adds some more constants, code documentation, and other minor fixes/improvements.
Notes for reviewer
@HyperionCoding, take a look at the code and functionality. Let me know if there are any inconsistencies or anything else to improve.
Some improvements should or could be made before merging new features.
Add a clearer marker for which colour is currently picked. Maybe a colour-changing marker next to the name field
Allow any color to be picked (QColorDialog). It was mentioned in issue #5 that similar colours can get mixed but this will be solved when claiming visuals are changed to borders.
Maybe add colour buttons programmatically. This would allow additional features like remembering previously used colours.
| gharchive/pull-request | 2020-12-19T19:11:59 | 2025-04-01T04:32:48.596527 | {
"authors": [
"HyperionCoding",
"Moppa5"
],
"repo": "Moppa5/pirkanmaan-valloitus",
"url": "https://github.com/Moppa5/pirkanmaan-valloitus/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
374660069 | All Groups Lost
All of sudden whilst starting FF the tabs have been empty.
Trying to import tabs from Local Backup - All date-based item opens with error - The list was empty!
Please advice
Usually, when the tabs and the backup are empty, it is because the browser swiped off all your data. Unfortunately, I don't know the reason.
I have planned to change my backend code soon for a more stable version.
I apologize for the inconvenience.
| gharchive/issue | 2018-10-27T15:57:43 | 2025-04-01T04:32:48.619268 | {
"authors": [
"GThib",
"Morikko"
],
"repo": "Morikko/sync-tab-groups",
"url": "https://github.com/Morikko/sync-tab-groups/issues/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1946339996 | Issue with Leprechaun/Log Basket
Love the plugin! I noticed today while doing forestry that the plugin is not able to accurately track gp gained when logs go into an open basket (sometimes it does though?). Also, when I bank logs with the leprechaun it subtracts the logs from GP gained.
Huh, that is strange. Are you using a log basket or do you have the forestry basket? It should track logs in the log basket properly.
I hadn't thought of what happens when banking with the leprechaun but I can definitely make a fix for that.
Just using the normal log basket with it open. It doesn't seem to be a consistent problem however. On my laptop Runelite I don't have the item charges plugin installed, but I do on my PC, and haven't experienced the issue while playing on my PC yet. That may just be coincidence though, idk if there's cross-talk between the two plugins.
That is very weird, I'll see if I can reproduce it as well. Let me know if you are able to figure out under what circumstances it happens.
oh good find, I'll open a bug for that as well.
Ok I've got some info on the log basket situation. For me, it seems like the GP/hr plugin is not reliant on the item charges improved plug-in as I initially thought, however there are still some weird interactions between GP/hr and the log basket. For one, if you use the check option when right clicking the log basket and the number is less than it previously was (i.e. banked with leprechaun) it will incorrectly subtract from the profit calculation. Additionally, if the log basket thinks it is full (i.e. item charge of 28 but not actually full), any logs cut and added to the basket aren't added to the profit calculation. Lastly, similar to the seed vault deposits, depositing with the leprechaun also subtracts from the profit calculation.
I realize this is a bit long-winded so if you want to follow up on discord or something let me know.
Ah, that definitely makes sense, so essentially the tool leprechaun is causing all sorts of issues!
Man, that is going to be a pain to test. If you're able to get a screenshot of the message in chat when a tool leprechaun banks your logs (if it exists), that would be super helpful for me to deploy a fix. If there is a separate message for when you bank your inventory and when you use your log basket on him, I would need that too.
| gharchive/issue | 2023-10-17T01:10:41 | 2025-04-01T04:32:48.646326 | {
"authors": [
"MosheBenZacharia",
"Teichnician"
],
"repo": "MosheBenZacharia/GP-Per-Hour",
"url": "https://github.com/MosheBenZacharia/GP-Per-Hour/issues/14",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
225728941 | Modal Functionality
This PR introduces modal functionality into the image carousel. Selecting to "Expand" an image will pop open an image with this image content. Styling happens by adding a modal-active class to the body, where the elements are styled within this state class.
Todo
[x] Add arrow, close and expand icons
[ ] Add "close" functionality
[x] ESC to close functionality
[ ] Selecting pagination forces modal
Upon further review, I don't think the specific icons should be added here, but in the project itself. This would keep it from coupling too tightly to a specific style.
I'll continue to add "Close" and "Forced Pagination".
| gharchive/pull-request | 2017-05-02T15:18:47 | 2025-04-01T04:32:48.651256 | {
"authors": [
"tdrach"
],
"repo": "MotelIs/vue-carousel",
"url": "https://github.com/MotelIs/vue-carousel/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
372221629 | Highlighting makes non-bold text invisible
Browser: Chrome 69.0.3497.100
Operating System: Windows 10
URL of the problem page:
All
N/A
When I highlight text with this activated, the text vanishes unless it's bold. If it IS bold, then it highlights with the selected color (as seen in screenshots).
Hi @adagar!
Because every site can add custom css, I would need a link to the article to fix this problem. Please share this with me.
From the main Medium site - I looked at this article (https://medium.com/starts-with-a-bang/the-universe-has-a-speed-limit-and-it-isnt-the-speed-of-light-543b7523b54f) and it has no problems:
Oh, ok! I was seeing this issue here:
https://medium.freecodecamp.org/why-you-should-have-practice-hours-as-a-developer-ee0f2d0293a2
Ahh ok, it's the self-highlight that's not working. Okay, I'll work on that 😸
| gharchive/issue | 2018-10-20T15:19:09 | 2025-04-01T04:32:48.662637 | {
"authors": [
"Mottie",
"adagar"
],
"repo": "Mottie/Darker-Medium",
"url": "https://github.com/Mottie/Darker-Medium/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
88641478 | Better working version I believe
I think this may work better
Agreed, will work out the bugs with this system and elaborate.
| gharchive/pull-request | 2015-06-16T07:11:36 | 2025-04-01T04:32:48.683001 | {
"authors": [
"Camoleopard",
"Moulberry"
],
"repo": "Moulberry/moulberry.github.io",
"url": "https://github.com/Moulberry/moulberry.github.io/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
90725926 | Artifactory requires authentification for some artifacts
I got the following error on Artifactory (for meta-server) today:
Could not resolve all dependencies for configuration ':compile'.
Could not download javax.inject.jar (org.glassfish.hk2.external:javax.inject:2.4.0-b10)
Could not HEAD 'http://artifactory.terasology.org/artifactory/virtual-repo-live/org/glassfish/hk2/external/javax.inject/2.4.0-b10/javax.inject-2.4.0-b10.jar'. Received status code 401 from server: Unauthorized
I changed the artifact resolution order (Maven Central first, then Artifactory) to make it work. The Artifactory config looks fine, I wonder what causes this. Maybe the lookup of external artifacts?
Huh, that's peculiar. I don't recall having set up authentication being required anywhere.
In the past, on publishing artifacts, I've gotten errors that sound authentication-related (permission denied), but that has been when trying to publish a second identical version of a release artifact to a release-only repo (unlike snapshots, those don't get accepted with additional version increments)
Occasionally, if some repo is unavailable, we may have hit odd errors where a download of something not in our Artifactory is nonetheless attempted because it isn't available elsewhere (with something down). Could that be it? Maybe Maven Central just had a hiccup the first time, then worked the second time, and the order change was just a distraction.
I was able to reproduce the issue for several hours yesterday, but it seems to work today. Maybe Artifactory was just redirecting the request to Maven Central, which - for some reason - requested authentication for that particular artifact. Closing until it happens again.
Just ran into a very similar thing that could be the same underlying cause. I fired up a new Artifactory instance on a docker container (which just acts as a local caching passthrough to jcenter). Then our Jenkins server started up and ran a load of builds all at once. Some of them failed due to 401 errors. My best hunch is that they were all hitting Artifactory at the same time for the same dependencies and perhaps Artifactory clashed for the local cache files - i.e. it was downloading them from jcenter then trying to write into its cache, but perhaps finding that the file was locked because another request was doing the same thing. Just a theory, but interested to know if that would explain it.
The last thing my build system needs is spurious failures, so I'm keen to understand and eradicate this.
Very good idea @alwaysthecritic ! Especially since we might end up with more and more builds hitting ours, although the listed error above was for a jar that wouldn't be getting updated in our instance - in this case it was probably just proxying forward to jcenter or Maven Central. Makes sense too, although I wonder if Artifactory itself should improve its handling in a case like that. Are you on the latest version?
It may well have been that this issue for us was just a hiccup, but always good to get more details and ideas for future notice.
| gharchive/issue | 2015-06-24T16:28:42 | 2025-04-01T04:32:48.688402 | {
"authors": [
"Cervator",
"alwaysthecritic",
"msteiger"
],
"repo": "MovingBlocks/Terasology",
"url": "https://github.com/MovingBlocks/Terasology/issues/1793",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
197481462 | Add javadocs and sometimes comment
Contains
Add javadocs and sometimes comment
Can one of the admins please verify this patch?
Nice job - merged!
| gharchive/pull-request | 2016-12-24T23:49:50 | 2025-04-01T04:32:48.689923 | {
"authors": [
"Cervator",
"GooeyHub",
"gkaretka"
],
"repo": "MovingBlocks/Terasology",
"url": "https://github.com/MovingBlocks/Terasology/pull/2704",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
771202785 | SPM Moya includes ReactiveSwift and RxSwift even when not selected for the target
When adding Moya with SPM it also include ReactiveSwift and RxSwift even when not selected
Any updates?
| gharchive/issue | 2020-12-18T22:21:05 | 2025-04-01T04:32:48.691052 | {
"authors": [
"ardavydov",
"atrbx5"
],
"repo": "Moya/Moya",
"url": "https://github.com/Moya/Moya/issues/2116",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
199454759 | Post request with parameters in both body and url
I am not entirely sure if that is an issue but please feel free to prove me wrong.
What I am trying to achieve is to make a POST request to an API. The first thing is that I have to provide the TOKEN for the API, encodend in the url like:
http://awesomeapi.com?token=mytoken
Unfortunately the API I am calling, is made in a way so that it accepts the token only if it's a URL encoded parameter.
Next thing is I have to send a parameters in a JSON body like :
{
param1: value,
param2: value
}
So in general I have to make a POST request and provide both URL encoded parameters and body parameters.
While researching an answer I came across Issue 431, where the author is trying to achieve the same thing, but I cannot see a straightforward answer to his question.
Reading the docs makes me think that having parameters in the request body and as URL-encoded parameters is of course possible in separate requests, but having them combined in one is not, or at least not documented yet. Am I right about that? If not, can you provide an example of how this can be done?
You can change the used ParameterEncoding. An idea could be to create a ParameterEncoding like:
public struct TokenEncoding: ParameterEncoding {
public let token: String
private let urlEncoding = URLEncoding(destination: .methodDependent)
init(token: String) {
self.token = token
}
public func encode(_ urlRequest: URLRequestConvertible, with parameters: Parameters?) throws -> URLRequest {
// use URLEncoding to encoding parameters
// use something like URLEncoding's implementation to add token to URL.
// URLEncoding can be found here: https://github.com/Alamofire/Alamofire/blob/master/Source/ParameterEncoding.swift#L70
}
}
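For reference, the commented-out body of that sketch could be filled in roughly like this (a sketch only; it assumes the API expects the query parameter to be named `token` and uses Foundation's `URLComponents` to rewrite the URL):

```swift
public func encode(_ urlRequest: URLRequestConvertible, with parameters: Parameters?) throws -> URLRequest {
    // First let Alamofire's URLEncoding handle the regular parameters.
    var request = try urlEncoding.encode(urlRequest, with: parameters)
    // Then append the token to the URL's query string.
    guard let url = request.url,
          var components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
        return request
    }
    components.queryItems = (components.queryItems ?? []) + [URLQueryItem(name: "token", value: token)]
    request.url = components.url
    return request
}
```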
As @bjarkehs suggests, writing a custom ParameterEncoding could be a great idea. However, I might prefer searching for a token parameter in parameters to URL encode and then encoding the rest as JSON, as opposed to taking the token as an initializer argument. However, that's a personal style and use case decision.
Another option would be to create a PluginType to do this. It would work similarly to AccessTokenPlugin so take a look here for docs and here for implementation.
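As an illustration of the plugin approach, a minimal sketch might look like this (`QueryTokenPlugin` is a made-up name; `prepare(_:target:)` is the `PluginType` hook that lets you modify the request before it is sent):

```swift
struct QueryTokenPlugin: PluginType {
    let token: String

    // Called by Moya right before the request is sent; we append the token
    // to the URL query and leave the JSON body untouched.
    func prepare(_ request: URLRequest, target: TargetType) -> URLRequest {
        var request = request
        guard let url = request.url,
              var components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
            return request
        }
        components.queryItems = (components.queryItems ?? []) + [URLQueryItem(name: "token", value: token)]
        request.url = components.url
        return request
    }
}

// Usage (sketch): let provider = MoyaProvider<MyAPI>(plugins: [QueryTokenPlugin(token: "mytoken")])
```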
I'll close this issue since it's been inactive for a while. Feel free to reopen if you still have questions, @kristiyandobrev
@pedrovereza I just came across a similar issue again, so I am considering re-opening the issue, since I am uncertain whether it's possible to make, for example, a POST or PUT request to api/resource/:id and pass a JSON body out of the box, without any additional implementation, as was suggested above.
@kristiyandobrev I think what you're looking for is what @scottrhoyt suggested:
I might prefer searching for a token parameter to in parameters to url encode and then encode the rest as JSON.
Basically, if you have an endpoint like:
enum API {
    case example(id: String, name: String, lastName: String)
}
In order to make a request to api/resource/:id passing name and lastName as a JSON body, you have to:
Encode id in the URL
public var path: String {
    switch self {
    case .example(let id, _, _):
        return "api/resource/\(id)"
    }
}
Pass name and lastName as parameters
public var parameters: [String: Any]? {
    switch self {
    case .example(_, let name, let lastName):
        return ["name": name, "lastName": lastName]
    }
}
Encode the parameters as JSON
public var parameterEncoding: Moya.ParameterEncoding {
    switch self {
    case .example:
        return JSONEncoding.default
    }
}
@kristiyandobrev were you able to make the request as you wanted?
Yes. Thanks for the help.
I want to call this https://www.blaalb.com/restApi/testApi/myOrders?number=1&month=24
enum JokerService {
    case getMyOrders(number: Int, month: Int)
}
var path: String {
    switch self {
    case .getMyOrders:
        return "/myOrders"
    }
}
var method: Moya.Method {
    switch self {
    case .getMyOrders:
        return .get
    }
}
var task: Task {
    switch self {
    case .getMyOrders(let number, let month):
        let p = ["number": number,
                 "month": month]
        return .requestParameters(parameters: p, encoding: URLEncoding.default)
    }
}
Can you please help with the comment by @AhmAbdallah? It will help many others as well.
@zohairhadi you should be able to do that more easily now, with the Task type we have: Task.requestCompositeParameters(bodyParameters:bodyEncoding:urlParameters), which allows you to combine URL-encoded parameters with another type (data / parameters).
| gharchive/issue | 2017-01-08T23:13:14 | 2025-04-01T04:32:48.702397 | {
"authors": [
"AhmAbdallah",
"bjarkehs",
"kristiyandobrev",
"pedrovereza",
"scottrhoyt",
"sunshinejr",
"zohairhadi"
],
"repo": "Moya/Moya",
"url": "https://github.com/Moya/Moya/issues/909",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
69251876 | Current day highlighted text overlay bug.
The gif below describes it much better than I can, but essentially, when I toggle between tabs the currently highlighted day produces a text overlay bug of sorts. I'm currently searching for a solution, but I thought I'd see if you have any quick fixes first.
Hello @FrancisBaileyH,
This bug is fixed in the pre-release of the 1.1.0 version. You can get it here. Note that it's under active maintenance, so more changes are to come.
In order to fix your issue, simply move the frame-update commit calls to the viewDidLayoutSubviews() method. It's fixed in the demo project. Also, the actual source code lives in the following directory: CVCalendar-develop / CVCalendar Demo / CVCalendar.
Hope it helps.
P.S. If it works for you let me know so I can close the issue.
Regards,
Eugene
Hi @Mozharovsky,
Moving the commitCalendarViewUpdate() and commitMenuViewUpdate() calls into the viewDidLayoutSubviews() method did not work, unfortunately. Instead, the text overlay bug is now permanently visible, no matter what action is taken.
At this point, we're close to ready for our release and don't want to upgrade the library just yet.
Was there any other code you changed that may have fixed the bug on your end?
Are you sure you got the CVCalendar folder from CVCalendar Demo/CVCalendar/ and not the CVCalendar/ folder from the root? There are two CVCalendar folders in the develop branch. I had the same problems as you, but I can't reproduce them anymore, however hard I try.
@martjemeyer I haven't actually updated the library, I was just trying out the quick fix that @Mozharovsky had mentioned.
Okay, I've implemented the new library in a branch and it does indeed fix the issue. Just curious, before I close this issue, if anyone noticed the menuView doesn't appear. Is there something different I need to do?
Adding the following fixed the issue on my side:
// MARK: - CVCalendarMenuViewDelegate
extension yourViewController: CVCalendarMenuViewDelegate {
// firstWeekday() has been already implemented.
}
I must say I used Eugene's extension way of initializing the calendar from his updated demo. But I think you can add it after the last closing curly bracket of the class.
@martjemeyer, thanks, I implemented the delegate protocol and then found out you also have to hook up the menuView delegate in the main storyboard.
| gharchive/issue | 2015-04-18T01:27:03 | 2025-04-01T04:32:48.710677 | {
"authors": [
"FrancisBaileyH",
"Mozharovsky",
"martjemeyer"
],
"repo": "Mozharovsky/CVCalendar",
"url": "https://github.com/Mozharovsky/CVCalendar/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1699126716 | Add a command to find your avatar
Could be !findme or !whereami, to show where your avatar is on the overlay.
Make the avatar bigger for a short amount of time
or
Make the avatar blink
or
Make the avatar say something
Implemented in https://github.com/MrEliptik/twitch_avatars_overlay/commit/9d40160f003bcb9746329d37f06a0c42a492f402
| gharchive/issue | 2023-05-07T16:02:49 | 2025-04-01T04:32:48.737703 | {
"authors": [
"MrEliptik"
],
"repo": "MrEliptik/twitch_avatars_overlay",
"url": "https://github.com/MrEliptik/twitch_avatars_overlay/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314760088 | Update dependencies and beautify generated code
This includes some changes I made while using this project to learn more about annotation processors.
A generated provider from a project of mine now looks like this:
public final class RelationViewModelProvider {
@NonNull
public static RelationViewModel get(@NonNull FragmentActivity activity, String entryId) {
return ViewModelProviders
.of(activity, new RelationViewModelFactory(entryId))
.get(RelationViewModel.class);
}
@NonNull
public static RelationViewModel get(@NonNull Fragment fragment, String entryId) {
return ViewModelProviders
.of(fragment, new RelationViewModelFactory(entryId))
.get(RelationViewModel.class);
}
private static final class RelationViewModelFactory implements ViewModelProvider.Factory {
private final String entryId;
RelationViewModelFactory(String entryId) {
this.entryId = entryId;
}
@NonNull
@Override
public <T extends ViewModel> T create(@NonNull Class<T> modelClass) {
return (T) new RelationViewModel(entryId);
}
}
}
As you see, this now contains the Factory. I did that as we may implement the upcoming incremental annotation processing in Gradle 4.7, which requires outputting one file per input file.
I also used the real parameter names in the Factory, as that looks better than "var1" and "p1".
Wow, this is great! I'm so so sorry it took me this long to provide a response.
Will merge shortly. Thanks a bunch!
| gharchive/pull-request | 2018-04-16T18:02:56 | 2025-04-01T04:32:48.740260 | {
"authors": [
"MrHadiSatrio",
"rubengees"
],
"repo": "MrHadiSatrio/Alfred",
"url": "https://github.com/MrHadiSatrio/Alfred/pull/16",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
851399605 | Jschoudt 2.2.10 personal
Fixup my fiasco 8)
Man, I need more coffee.
| gharchive/pull-request | 2021-04-06T12:59:44 | 2025-04-01T04:32:48.787674 | {
"authors": [
"jschoudt"
],
"repo": "MrPrimate/vtta-tokenizer",
"url": "https://github.com/MrPrimate/vtta-tokenizer/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1357204478 | crash at login
My server crashes when a player closes the client window while joining.
2022-08-31 13:32:21: ERROR[Main]: ServerError: AsyncErr: Lua: Runtime error from mod '' in callback environment_Step(): /home/nik/minetest/bin/../mods/edit_skin/init.lua:101: attempt to index local 'skin' (a nil value)
2022-08-31 13:32:21: ERROR[Main]: stack traceback:
2022-08-31 13:32:21: ERROR[Main]: /home/nik/minetest/bin/../mods/edit_skin/init.lua:101: in function 'compile_skin'
2022-08-31 13:32:21: ERROR[Main]: /home/nik/minetest/bin/../mods/edit_skin/init.lua:123: in function 'update_player_skin'
2022-08-31 13:32:21: ERROR[Main]: /home/nik/minetest/bin/../mods/edit_skin/init.lua:172: in function 'func'
2022-08-31 13:32:21: ERROR[Main]: /home/nik/minetest/bin/../builtin/common/after.lua:20: in function </home/nik/minetest/bin/../builtin/common/after.lua:5>
2022-08-31 13:32:21: ERROR[Main]: /home/nik/minetest/bin/../builtin/game/register.lua:431: in function </home/nik/minetest/bin/../builtin/game/register.lua:417>
Are you using 3D armor?
Line 170 and following:
-- Needed for 3D Armor + sfinv
if minetest.global_exists("armor") then
minetest.after(0.01, function() edit_skin.update_player_skin(player) end)
end
This code is missing an ObjectRef validity check.
Please try it again. I think I have resolved this issue. I was able to reproduce the crash. After fixing the issue, I can't reproduce the crash.
yeah it works! thank you for the quick response
| gharchive/issue | 2022-08-31T11:34:25 | 2025-04-01T04:32:48.789734 | {
"authors": [
"MrRar",
"Niklp09"
],
"repo": "MrRar/edit_skin",
"url": "https://github.com/MrRar/edit_skin/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
327076590 | not all fonts working
Hey there,
thanks for this tool! I want to switch our project from pdfmake to this tool, but I have problems with some fonts.
The other fonts are working...
Maybe some idea? thx!
Are you using the latest version of jsPDF?
1.4.0
Have you tried using jspdf.debug.js?
npm version
Did you have the fonts base64 encoded?
4/6 fonts are working, and yes :-), base64 encoded.
thx!
Heya
Can you provide a generated PDF so that I can check it?
sorry! my notification email here at GitHub was wrong (/old)
here is an example repo: https://github.com/crazyx13th/test-jsPDF-fonts
(quick'n'dirty mixture of old-project and typescript-removing and and and :-) )
next problem is that the Chinese (CN) version looks wrong too
left: ok,
right: wrong (I don't understand why it looks like a handwriting style)
Thx!
hope you can fix this (and the Hebrew text :-) ) THX 👍
hey there,
another example
Arial-Bold.ttf is ok
ArialNarrow-Bold.ttf failed (see below)
thx!
Having this issue as well
Downloaded a font named "Roboto" (https://fonts.google.com/specimen/Roboto?selection.family=Roboto) in TTF format, then base64-encoded it, but I get deformed text in the output PDF.
code used https://pastebin.com/TS8VDXHg
generated file
test (52).pdf
how it looks
Duplicate of #1902
| gharchive/issue | 2018-05-28T16:39:50 | 2025-04-01T04:32:48.799872 | {
"authors": [
"arasabbasi",
"crazyx13th",
"sweetheatmn"
],
"repo": "MrRio/jsPDF",
"url": "https://github.com/MrRio/jsPDF/issues/1780",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
961530986 | Error Proxy Connection Detected
"Proxy Connection Detected" shows when creating an openccc account. This can be fixed by using an undetectable Chrome browser, as is done in the repo linked below; please edit this repo to support undetectable Chrome.
https://github.com/AmmeySaini/Edu-Mail-Generator
still not working even with this undetectable chrome
Still not working; I get the same error.
Fixed
| gharchive/issue | 2021-08-05T07:29:50 | 2025-04-01T04:32:48.801679 | {
"authors": [
"AaradhySahu",
"LogicKey",
"MrStark-XD",
"turtldoves669"
],
"repo": "MrStark-XD/Edu-Mail-Generator",
"url": "https://github.com/MrStark-XD/Edu-Mail-Generator/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
165834710 | Issues
1. EWT "Advanced LUA Unlock": gatherbot does not show [line 34]
2. Overlays: drawing a line or circle causes a lot of lag
3. Fishingbot not working [IsObjectCreatedBy error?]
4. What is the DH ID?
5. Spell casting lags.
1- EWT problem was fixed
2- that's normal
4- DH? Demon hunter? 577 = Havoc, 581 = Vengeance
5- Lags? Can you try the new version to see if that helps?
Still Checking (3)
cast a spell → GCD → delay 200~500 → cast next spell
Try the new update, should fix it by default
| gharchive/issue | 2016-07-15T17:08:02 | 2025-04-01T04:32:48.809456 | {
"authors": [
"MrTheSoulz",
"gongmang1"
],
"repo": "MrTheSoulz/NerdPack",
"url": "https://github.com/MrTheSoulz/NerdPack/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1650550880 | chore(main): release 1.1.2
:robot: I have created a release beep boop
1.1.2 (2023-04-01)
Bug Fixes
zip binaries in github release (b108084)
This PR was generated with Release Please. See documentation.
:robot: Release is at https://github.com/Mubashwer/git-mob/releases/tag/v1.1.2 :sunflower:
| gharchive/pull-request | 2023-04-01T16:28:04 | 2025-04-01T04:32:48.901659 | {
"authors": [
"Mubashwer"
],
"repo": "Mubashwer/git-mob",
"url": "https://github.com/Mubashwer/git-mob/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2469612674 | MudAutocomplete shows NoItemsTemplate on getting AutoFocus
Things to check
[X] I have searched the existing issues for this bug
[X] To rule out a caching problem I made sure the bug also happens in an incognito tab
Bug type
Component
Component name
MudAutocomplete
What happened?
I have a dialog that contains a MudFocusTrap with a MudAutocomplete that gets focus automatically (first element of the form).
When the dialog opens, the MudAutocomplete shows the NoItemsTemplate, even though the user has not entered anything.
This did not happen before, only after upgrading to v7.
Expected behavior
The MudAutocomplete should not show the NoItemsTemplate, only after the user enters something - like it worked before v7
Reproduction link
https://try.mudblazor.com/snippet/QaQIkCbUcoxjyvvy
Reproduction steps
Open the linked example
Open the dialog
MudAutocomplete shows NoItemsTemplate
Relevant log output
No response
Version (bug)
7.x.x
Version (working)
6.9.2
What browsers are you seeing the problem on?
Firefox, Chrome, Edge, Safari
On which operating systems are you experiencing the issue?
Windows
Pull Request
[ ] I would like to do a Pull Request
Code of Conduct
[X] I agree to follow this project's Code of Conduct
When the autocomplete gets focus, it displays the available values to the user. As the autocomplete has no SearchFunc, no values are found.
The bug would rather be in v6.
If you want to show the dropdown only after the user enter a first character, you can set the property MinCharacters.
Thanks for the quick reply, but my issue is not that it shows "No items found" but rather that it does so on opening the dialog.
The example does not have an OnSearch, because for this example it does not need one.
In my production environment, it has an OnSearch, but the Autocomplete still shows the dropdown on opening the dialog, unlike before.
Also, the OnSearch does not get called, because if it did, it would show some elements, not "No items found" (tested in my production environment).
I will update the example, or provide a second one on Monday though for clarity
In my production environment, it has an OnSearch, but the Autocomplete still shows the dropdown on opening the dialog, unlike before.
I think it is the expected behavior, so previously it wasn't working as expected (disclaimer: I am not the author, just a user).
@ScarletKuro, do you have an opinion?
@ScarletKuro, do you have an opinion?
I also don't know, I feel like this is expected. @Mr-Technician was the author of this PR: https://github.com/MudBlazor/MudBlazor/pull/4692
@ScarletKuro So it sounds like something changed in v7?
This seems like normal behavior to me - the autocomplete was focused, and as it had no items to display, it displayed the NoItemsTemplate. My only caveat is that maybe the NoItemsTemplate should only show if there is a SearchFunc? Then again, no items is no items whether there is a SearchFunc or not.
I checked on v6.21.0 and it behaves the same way, so it's not something that was introduced in v7.
Then I downgraded to the author's version, v6.9.2, and it's still working the same way.
I was checking without the dialog and MudFocusTrap, though.
I think this could be changed to only occur with OpenOnFocus if it doesn't already?
I checked on v6.21.0 and it behaves the same way
Ah my bad... I completely missed this part in the migration guide, even though it was marked as a warning box...
Closing this issue; OpenOnFocus="false" reverts to our preferred behaviour!
Thanks guys!
| gharchive/issue | 2024-08-16T06:44:17 | 2025-04-01T04:32:48.917048 | {
"authors": [
"MarioMatschgi",
"Mr-Technician",
"ScarletKuro",
"danielchalmers",
"vernou"
],
"repo": "MudBlazor/MudBlazor",
"url": "https://github.com/MudBlazor/MudBlazor/issues/9640",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2714323403 | 🧪 Testing of third-party device implementations
Anyone implementing the QDMI device specification will be interested in whether it has been implemented correctly, so that when the shared library is distributed, one can expect it to work.
It would be good to set up infrastructure (tests + documentation) on how to best set this up and provide some support in doing so.
Best case scenario, we get a setup where passing a set of tests implies (with high probability) that the device implementation is compliant with the interface definition.
In principle, I see two (non-exclusive) options here:
Testing of (open-source, git-hosted) downstream projects as part of this repository. We could check out the respective repository, build the device, and hook it into the example QDMI driver to run the tests. This naturally only works if the repository for the device implementation is open source.
Providing more elaborate testing as part of the QDMI device template, potentially including a GitHub Workflow to test the device in CI. This even allows private device implementations to properly test their code.
For both options, the biggest question is how much we can reliably test without knowing any details about the device. We could definitely perform a lot of sanity checks, e.g., correct error behaviour as specified in the interface.
I believe it should also be possible to test the query interface without knowing too much about the device, e.g., that certain properties must be provided.
The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.
For the second option, it would be good to unify the tests being added to the template and the existing tests for the example devices to avoid too much code duplication.
Regarding this:
The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.
I completely agree. We should avoid testing actual circuit execution on quantum devices as part of the CI pipeline. For instance, considering the LRZ's devices, if a provider wanted to test a specific device hosted here, such tests likely wouldn't be feasible due to restrictions on granting remote access. Additionally, many providers would likely be uncomfortable allowing access to their devices directly from a public platform such as GitHub.
This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.
@freetonik @kukushechkin: I believe this is in line with the concerns you raised during our last meeting. Could you kindly share any feedback you may have?
Regarding this:
The control interface, i.e., circuit execution, is probably much tougher to reliably and regularly test as it requires submission of actual jobs to the device, which might not be available or intended for use in this fashion. I'll create a separate issue for this kind of testing that might be run semi-regularly.
I completely agree. We should avoid testing actual circuit execution on quantum devices as part of the CI pipeline. For instance, considering the LRZ's devices, if a provider wanted to test a specific device hosted here, such tests likely wouldn't be feasible due to restrictions on granting remote access. Additionally, many providers would likely be uncomfortable allowing access to their devices directly from a public platform such as GitHub.
Fully agree 👍🏼
This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.
Short answer: yes. And definitely not a silly question.
@ystade and I had a pretty good discussion today about general testing of device implementations and how we could provide that as part of the QDMI repository and/or the QDMI device implementation template. @ystade can elaborate a little more on the things we discussed.
While working on this, we should keep in mind that there are tests a device implementer might want to run in CI (general spec compliance, query interface tests, etc.), while there are others that should only be run on demand and locally (e.g., functional control interface tests).
Note that the current functional tests that are part of the repository already "fully" test the control interface of the example devices. This is only possible, because the example backends just return random values for their job results. However, this already provides a good blueprint for such kinds of tests that could also be part of device implementations.
On the IQM side we are thinking about not doing open-source development of the QDMI device implementation, as it would make it harder to track compatibility between shipped control software and that library. So let's assume there isn't necessary a repo on GitHub.
I think there are several valuable areas for testing:
The control interface.
Requires program submission, but it does not require real program execution, so on the other side there can be a mock. This feels more like integration testing against a higher-level component using the QDMI device library (something called Submitter that @echavarria-lrz mentioned when we talked?). Having such a tool with a test suite representing "the latest release version of MQSS" will allow us to test our dev changes for QDMI on our side.
The other way around, dev changes of the MQSS can be verified against the latest shipped QDMI device implementation, for example received with the IQM QC Control Software package.
Programs spec support.
A set of reference QIR programs for the supported device capabilities would be ideal, so any QDMI device provider can keep verifying not just QDMI but the underlying software/hardware. While the standard exists, how QDMI device capabilities map to QIR is what should be tested.
@kukushechkin I really appreciate your feedback.
I'll leave the technical details about the Submitter to @mnfarooqi, but to provide more context for this discussion, a Submitter is effectively a QDMI client that offloads circuits from the compiler to a target device.
This bullet I believe was already addressed by @burgholzer in this issue: https://github.com/Munich-Quantum-Software-Stack/QDMI/issues/113#issue-2714531589
In addition to @kukushechkin 's points:
This raises an interesting (perhaps silly) question: could we design a test framework that can be included within the repository, allowing it to be cloned and tested locally? Such a solution could validate whether a QDMI device works correctly in the local environment and with access to the actual hardware.
Device maintainers like ourselves would need versioned, reproducible packages of such tests, so that we can a) easily and reliably run tests while developing, and b) set up pipelines, including in our private internal repos, for regression testing. Ideally, it should be just part of the MQSS SDK, so that we can validate "MQSS version X is compatible with IQM Software version Y", and keep records of these compatibility mappings.
Short answer: yes. And definitely not a silly question.
@ystade and I had a pretty good discussion today about general testing of device implementations and how we could provide that as part of the QDMI repository and/or the QDMI device implementation template. @ystade can elaborate a little more on the things we discussed.
The following is more on the technical side: as discussed with @burgholzer, the example devices provided in the QDMI repository are tested end-to-end, meaning that they are not tested independently and individually but rather from a client through a driver. Hence, all tests contained in QDMI right now rely on an implementation of the driver.
However, device maintainers may want to test their device independently without starting a driver. In particular, those individual tests should be provided together with the template that can be exported from QDMI with a specified prefix. At the same time, those tests can also be used to test the included example devices.
Still, we do not want to duplicate too much code, while the tests of the devices have to deal with the custom prefixes that device implementations use. To this end, we want to implement individual device tests in the top-level test directory; while building the project, they are instantiated with the respective prefix to be compatible with the corresponding device implementation.
To summarise, those tests that allow testing the device independently from any driver can also be used to implement a validation check whether a device complies with the specification.
I'll leave the technical details about the Submitter to @mnfarooqi, but to provide more context for this discussion, a Submitter is effectively a QDMI client that offloads circuits from the compiler to a target device.
QDMI is the interface for MQSS to connect to devices. MQSS itself is a collection of different components, which can be used in different combinations depending on what the hosting site needs. To test a device implementation against an MQSS component, e.g. the Submitter, tests can be provided in the component's repo.
Device maintainers like ourselves would need versioned, reproducible packages of such test, so that we can a) easily and reliably run tests while developing, and b) set up pipelines, including in our private internal repos, for regression testing. Ideally, it should be just part of the MQSS SDK, so that we can validate "MQSS version X is compatible with IQM Software version Y", and keep records of these compatibility mappings.
Regarding MQSS compatibility, my understanding is that you can claim that "MQSS version X is compatible with IQM Software version Y" if both (MQSS and IQM software) are compatible with a QDMI version Z.
@burgholzer @ystade
Has QDMI v1.0 been released? I don't see a release or version tag in the repo.
@burgholzer @ystade Has QDMI v1.0 been released? I don't see a release or version tag in the repo.
Just briefly commenting on this. I will come back to the other comments in this thread at a later point in time.
v1 has not been officially released. Given some open issues that require breaking changes in the interface (#117, #118) and some issues that add features that almost seem necessary (#108, #109, #115), I would argue that we should get these resolved first before officially marking this as v1.
Since we are currently not expecting any further substantial (breaking) changes, we feel comfortable realizing this by the end of the year.
| gharchive/issue | 2024-12-03T08:27:44 | 2025-04-01T04:32:48.994037 | {
"authors": [
"burgholzer",
"echavarria-lrz",
"freetonik",
"kukushechkin",
"mnfarooqi",
"ystade"
],
"repo": "Munich-Quantum-Software-Stack/QDMI",
"url": "https://github.com/Munich-Quantum-Software-Stack/QDMI/issues/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
fix(ShareButtonsComponent): fix showCount not propagated to inner `ShareButtonComponent` (typo)
Coverage remained the same at 97.698% when pulling e56c4d89b6ee680ea37201d9940352837514c8ac on tinesoft:fix_sb_component into 24c480dffa72fa3915826f1aced5899ab4f81e97 on MurhafSousli:master.
| gharchive/pull-request | 2017-02-12T19:47:16 | 2025-04-01T04:32:49.005705 | {
"authors": [
"coveralls",
"tinesoft"
],
"repo": "MurhafSousli/ng2-sharebuttons",
"url": "https://github.com/MurhafSousli/ng2-sharebuttons/pull/63",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2140426630 | First lightning transaction did not open channel
I created an invoice, and paid it via lightning. I can see the payment completed. I can see the transaction in the activity,
But there isn't a channel open: no balance, and I'm not able to send.
What should be done to resolve the issue?
Can you submit logs and screenshots? Otherwise, we aren't able to help you.
Something went wrong with the channel. The logs also do not go very far back. Are you using this wallet on multiple devices at the same time?
mutiny-logs.txt
Yes. I imported the state to a different browser, hoping it would refresh properly or something. Here's the log from the original browser.
mutiny-logs.txt
Yes, I imported the state to a different browser, hoping it would reset. Here's the log from the original browser
The logs aren't going back far enough to see what the original issue is. We'll reach out to the LSP to see if the channel appears on their end and what the status of it is.
node: 03a9f16b52e71baeac05102fa112302f3d20890052dee4151703ce51c99d8147f8
Does anything show up in the channel view in the settings?
The channel page is empty
Are you using the original browser to check this? Is there anything if you go to admin tools and look at the channel list there?
It's never been the case previously that channels just disappear on the mutiny wallet side. It shows that the channel exists on the LSP side. So I'm very unsure what happened to cause this and if it's the result of bouncing between multiple devices.
One problem I notice is that there seems to be multiple "node managers" that have been initialized and created, but the channel does/should exist. We'll have to spend some time thinking about this problem and how to reproduce/solve it. Though any additional information you can provide will go a long way. We'll hopefully have something additional later this week.
@bronco1 as @TonyGiorgio said, this seems to a weird bug where multiple versions of the channel manager were spun up. It seems the one that has the channel is not the one that is saved to your local storage. This should be fixable.
In your state file you'll have something like this
{"nodes":{"bd2d8dba-a5e1-4415-a58b-fc767b16d1db":{"child_index":0,"lsp": "..."},"archived":false}},"version":3}}
what you will need to do is replace the uuid (here in my case it's bd2d8dba-a5e1-4415-a58b-fc767b16d1db) with 4223775b-2702-4a65-88d8-8af82cc33e65 and you'll want to increase the version number.
If you edit that and import the new state file that should hopefully resolve the issue.
Thank you both, this did resolve my issue.
Great, thank you for reporting! The next version should prevent this from happening to other people, it is apparently a rare edge case.
I've subscribed to Mutiny+. looking forward to your future updates..
Thank you for the support!
| gharchive/issue | 2024-02-17T18:31:21 | 2025-04-01T04:32:49.049864 | {
"authors": [
"TonyGiorgio",
"benthecarman",
"bronco1"
],
"repo": "MutinyWallet/mutiny-web",
"url": "https://github.com/MutinyWallet/mutiny-web/issues/889",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
59518864 | Layout is broken with empty table
Need to display properly when there is no data and <tbody> is empty
Fixed in fixtable-angular (originally thought the bug would require changes to _circulateStyles() itself but the issue was it being called too soon in the directive)
| gharchive/issue | 2015-03-02T18:09:15 | 2025-04-01T04:32:49.074124 | {
"authors": [
"michaelmcauley"
],
"repo": "MyPureCloud/fixtable-core",
"url": "https://github.com/MyPureCloud/fixtable-core/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1388035125 | Add api
Hey, I would like to tinker around a bit with controlling the savestate programatically.
For that it would be really cool to be able to tweak certain things at runtime via an API I can send requests to.
Could you expose such an api?
API? Can you describe a bit more about what you want to achieve?
Poker is a tool that runs when needed. Users don't usually keep it running for an extended period of time. I don't see a point in adding an API call to it when most of the time the app is not running.
If you would like to tweak things inside the game memory, perhaps you should look at how sys-botbase/usb-botbase works.
They run in the background of your Switch and allow you to manipulate game memory with different commands.
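For the use case described here, a client would talk to such a background sysmodule over a plain TCP socket. The sketch below is illustrative only: the IP address is obviously your own console's, and while sys-botbase is known to speak a line-based ASCII protocol, the exact command names and argument syntax should be checked against its README rather than taken from this example.

```python
import socket

SWITCH_IP = "192.168.1.50"   # assumption: your console's LAN address
BOTBASE_PORT = 6000          # assumption: the default sys-botbase port

def build_command(name: str, *args: str) -> bytes:
    # Line-based ASCII protocol; commands are terminated with \r\n.
    # Command names/arguments here are illustrative, not authoritative.
    return (" ".join((name, *args)) + "\r\n").encode("ascii")

def send_command(cmd: bytes) -> None:
    # Opens a short-lived connection and ships one command.
    with socket.create_connection((SWITCH_IP, BOTBASE_PORT), timeout=5) as s:
        s.sendall(cmd)

# Build (but don't send) a hypothetical memory-write command:
cmd = build_command("poke", "0x12345678", "0xDEADBEEF")
```

A Twitch-chat integration could then map chat triggers to `send_command` calls, which sidesteps the need for Poker itself to expose an API.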
I have interaction with an audience of livestreams in mind. E.g. for a chaos mod or randomized creative challenges, or just things being manipulatable via Twitch chat. To write such software, it would need an API or webhook where I can send executable commands to set up things in a specific way.
| gharchive/issue | 2022-09-27T16:24:36 | 2025-04-01T04:32:49.076654 | {
"authors": [
"KevinCCucumber",
"MyShiLingStar"
],
"repo": "MyShiLingStar/ACNHPokerCore",
"url": "https://github.com/MyShiLingStar/ACNHPokerCore/issues/40",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
update total_supply for better readability
update total_supply from 10000000000000000 to 10_000_000_000 * 100_000 for better readability
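Digit-group underscores like these are purely cosmetic to the parser (Move shares this literal syntax with Python and Rust), and a refactored literal can be sanity-checked by evaluating both forms. A quick check, using Python's identical `_` syntax:

```python
# Underscores in numeric literals are ignored by the parser:
assert 1_234_567 == 1234567

# A product of grouped literals can be checked against its expanded form:
old = 10_000_000_000 * 100_000
assert old == 1_000_000_000_000_000
```

Evaluating both sides like this is a cheap way to confirm a readability refactor didn't silently change an on-chain constant.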
Thank you, closing this as this is an on chain dependency previously published
| gharchive/pull-request | 2024-10-10T10:55:58 | 2025-04-01T04:32:49.095428 | {
"authors": [
"JasonRUAN",
"leecchh"
],
"repo": "MystenLabs/deepbookv3",
"url": "https://github.com/MystenLabs/deepbookv3/pull/293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1434371551 | Add benchmarking workflow to CI
Closing #181. PR #209 should be merged first.
This PR adds benchmarking to the CI.
Running all benchmarks takes ~1 hour, so we only do it on publish, but it may also be triggered manually.
We use cargo criterion to run benchmarks and generate reports:
- In order to keep a history of benchmarks, we first check out the old reports from gh-pages.
- Then we use cargo criterion to run all benchmarks with the most recent version of fastcrypto and generate reports.
- The new reports (including history) are pushed to gh-pages. See https://mystenlabs.github.io/fastcrypto/benchmarks/criterion/reports/ for an incomplete report (not all benchmarks have been run yet), and https://mystenlabs.github.io/fastcrypto/benchmarks/criterion/reports/Verify/Ed25519/history.html for an example of how the history of a benchmark is shown.
The results are stored as JSON files under benchmarks/history with the latest commit hash as the filename. This allows analysis of historic data and comparison of versions using various analysis tools.
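A minimal sketch of the kind of analysis this enables — assuming, for illustration, that each `benchmarks/history/<commit>.json` maps a benchmark ID to a mean time in nanoseconds (the real cargo-criterion output format is richer than this):

```python
import json
from pathlib import Path

def load_history(history_dir: str) -> dict[str, dict[str, float]]:
    """Map commit hash -> {benchmark id -> mean time in ns}."""
    history = {}
    for f in Path(history_dir).glob("*.json"):
        history[f.stem] = json.loads(f.read_text())
    return history

def compare(history, base: str, head: str) -> dict[str, float]:
    """Relative change per benchmark shared between two commits."""
    base_run, head_run = history[base], history[head]
    return {
        bench: (head_run[bench] - base_run[bench]) / base_run[bench]
        for bench in base_run.keys() & head_run.keys()
    }

# In-memory example with hypothetical commit hashes and timings:
history = {
    "abc123": {"Verify/Ed25519": 50_000.0},
    "def456": {"Verify/Ed25519": 55_000.0},
}
delta = compare(history, "abc123", "def456")  # ~10% slower on the head commit
```

In a real workflow, `load_history("benchmarks/history")` would replace the in-memory dict.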
This restructures the gh-pages such that documentation is put under "docs" and benchmarks under "benchmarks".
For CI requirements, and to save some CI cycles (and eventually cost), I'd prefer to:
- reduce the sampling size to the minimum accepted (even 10-20 might suffice vs. the default)
- set some frequency, or enable it only when triggered by a PR author
... until we have a better budget and latency estimation plan.
I'll fix the sample size and adjust the significance level also.
Regarding the frequency I think running it only on publish makes sense, both because it doesn't happen that often and because it only makes sense to re-run the benchmarks when we have made significant changes. What do you think about that, @kchalkias?
will there be any alert or warning if performance degrades significantly?
overall looks great! only computing benchmarks on publishing makes sense.
Not as it is now. But there's a Github action (https://github.com/marketplace/actions/continuous-benchmark) which allows this. I think it would give some false positives since the benchmarks fluctuate quite a lot, even when they are run locally. But we could definitely look into it, if you think it would make sense.
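One simple way to reduce those false positives — sketched here as an assumption about how such a check could work, not what the continuous-benchmark action actually does — is to only flag a regression when the slowdown exceeds a noise band tuned to the observed run-to-run variance:

```python
def regressed(base_ns: float, head_ns: float,
              noise_tolerance: float = 0.10) -> bool:
    """Flag a regression only if the slowdown exceeds the noise band.

    The 10% default is an assumption; criterion results can fluctuate
    by a few percent between runs, so the band should be tuned against
    observed variance to avoid false positives.
    """
    return (head_ns - base_ns) / base_ns > noise_tolerance

small = regressed(50_000, 51_500)   # 3% slower -> within noise
large = regressed(50_000, 62_500)   # 25% slower -> flagged as a regression
```

A CI job could run such a check against the previous entry in `benchmarks/history` and fail (or just warn) only on the flagged cases.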
| gharchive/pull-request | 2022-11-03T09:45:33 | 2025-04-01T04:32:49.101498 | {
"authors": [
"jonas-lj"
],
"repo": "MystenLabs/fastcrypto",
"url": "https://github.com/MystenLabs/fastcrypto/pull/211",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |