| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, lengths 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, lengths 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:39:36.713853
| 2021-01-11T07:52:32
|
783149600
|
{
"authors": [
"alexdima",
"shskwmt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8515",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/114127"
}
|
gharchive/pull-request
|
Fixes 113603: Change reason for moveWordCommand to CursorChangeReason.Explicit
This PR fixes https://github.com/microsoft/vscode/issues/113603
https://user-images.githubusercontent.com/17052177/104156845-456d8900-542d-11eb-957b-8a8e01d50dee.mp4
Thank you!
|
2025-04-01T06:39:36.715382
| 2021-03-09T03:36:41
|
825251885
|
{
"authors": [
"life-droid",
"yume-chan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8516",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/118502"
}
|
gharchive/pull-request
|
[sql] grammar update, added window functions
This PR fixes #118500 Syntax support for SQL WINDOW functions.
Apologies as I am unsure how to test the changes.
Note:
https://github.com/microsoft/vscode/blob/120a9f6476553edf3275bfa23df5e3245cfa0146/extensions/sql/syntaxes/sql.tmLanguage.json#L2-L5
|
2025-04-01T06:39:36.717236
| 2021-06-22T00:46:52
|
926721774
|
{
"authors": [
"rzhao271"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8517",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/126861"
}
|
gharchive/pull-request
|
Add single level of categorization #126089
This PR affects #126089
Here's how the toc currently looks
Testing GHPRI
Testing extension
|
2025-04-01T06:39:36.719346
| 2022-04-14T00:25:16
|
1203907089
|
{
"authors": [
"amanasifkhalid",
"jrieken"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8518",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/147422"
}
|
gharchive/pull-request
|
Fix #146166: Snippet transform preserves existing camel/Pascal case
This PR fixes #146166 so that existing camel/Pascal case within a snake_case or kebab-case name is preserved. For example, transforming stub_PathServiceToken to camel case now produces stubPathServiceToken, rather than stubPathservicetoken. Corresponding tests have been added to demonstrate this new behavior.
However, this adjusted implementation introduces some ambiguity I'd like to discuss. In the previous camel case implementation, the name portland-OR-temp would be transformed into portlandOrTemp. Now, it would become portlandORTemp, preserving the upper-case OR. Is this behavior desirable? I assumed this approach would best preserve user intent, but I understand preferences for the consistency of the former approach.
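For illustration only (this is not the actual VS Code implementation), a minimal TypeScript sketch of a camel-case transform that preserves existing upper-case runs inside a snake_case or kebab-case name could look like this:
// Hypothetical helper, not the PR's code: split on '_' / '-', keep each part's
// existing casing, and upper-case only the first letter of every part after the first.
function toCamelCasePreserving(name: string): string {
  const parts = name.split(/[-_]/).filter(part => part.length > 0);
  return parts
    .map((part, i) => (i === 0 ? part : part.charAt(0).toUpperCase() + part.slice(1)))
    .join('');
}
// toCamelCasePreserving('stub_PathServiceToken') === 'stubPathServiceToken'
// toCamelCasePreserving('portland-OR-temp') === 'portlandORTemp'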
Thanks @amanasifkhalid
|
2025-04-01T06:39:36.727762
| 2022-06-23T08:18:03
|
1282004514
|
{
"authors": [
"Athayde3586",
"bpasero",
"jrieken"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8519",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/152953"
}
|
gharchive/pull-request
|
merge editor - add setting for layout
This PR is for:
https://github.com/microsoft/vscode/issues/150813
https://github.com/microsoft/vscode/issues/150812
This also puts "Merge Editor" quite prominently under the "Text Editor"
The setting is currently not resource scoped; not sure, that is maybe overkill, though it could be added later.
No worries, we can discuss (in the end it's just a swap of services mainly).
I think I tried to model this after some existing settings we have that are similar:
But I do not have arguments for or against. The only thing to note is that resource scoped setting (if we ever wanted to allow the user to control layout per file) would require us to use a setting.
No and I'll tell u this if it's mine tell them to release it
closing in favour of https://github.com/microsoft/vscode/pull/153108
|
2025-04-01T06:39:36.729730
| 2024-12-06T18:07:19
|
2723645421
|
{
"authors": [
"aeschli",
"alyssacabading"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8520",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/235514"
}
|
gharchive/pull-request
|
Added more documentation for better readability
Addresses Issue #234442. Added more JSDoc-style comments in the src/server-main file to enhance code readability. Implemented documentation on the following parts: the shouldSpawnCli conditional, the parseRange function, and the prompt function. This provides consistent documentation throughout the file and helps future contributors understand seemingly complex code.
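As a hedged illustration of the kind of comment being added (hypothetical code, not the actual src/server-main source), a JSDoc-annotated range parser might look like this:
/**
 * Hypothetical example only: parses a "min-max" range string such as "3000-4000"
 * into its numeric bounds, returning undefined when the input does not match.
 */
function parseRange(input: string): { start: number; end: number } | undefined {
  const match = /^(\d+)-(\d+)$/.exec(input);
  if (!match) {
    return undefined;
  }
  return { start: parseInt(match[1], 10), end: parseInt(match[2], 10) };
}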
@microsoft-github-policy-service agree
Thank you for the PR, but we'd rather leave the code as is; I think it's clear what it does without comments.
|
2025-04-01T06:39:36.736661
| 2021-10-04T18:45:11
|
1015511151
|
{
"authors": [
"Masamune3210",
"albertopasqualetto",
"denelon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8521",
"repo": "microsoft/winget-cli",
"url": "https://github.com/microsoft/winget-cli/issues/1547"
}
|
gharchive/issue
|
Key to request for administrator privileges
Description of the new feature/enhancement
It would be good to have a feature that requests administrator privileges without opening a new terminal window.
Proposed technical implementation details (optional)
With a button and also a keyboard shortcut.
@albertopasqualetto I'm not sure how this would work. There is no "native" Windows "sudo" equivalent. What you are asking for might be better requested from https://github.com/Microsoft/Terminal.
As a quick fix, scoop has a sudo you can download that works reasonably well, but yeah I agree Terminal is probably the better idea, if anything because it would be shell-wide rather than a specific program
Sorry, I completely mistook the GitHub project! Thanks
|
2025-04-01T06:39:36.738343
| 2023-09-10T06:06:01
|
1889003130
|
{
"authors": [
"Trenly",
"yujiahan2018"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8522",
"repo": "microsoft/winget-cli",
"url": "https://github.com/microsoft/winget-cli/issues/3609"
}
|
gharchive/issue
|
I hope that I can change the default installation location
Description of the new feature / enhancement
I would like to be able to configure the JSON file so that winget installs software to a specified drive, and not install to C: by default.
Proposed technical implementation details
I would like to be able to configure the JSON file so that winget installs software to a specified drive, and not install to C: by default.
Duplicate of #3608
|
2025-04-01T06:39:36.743188
| 2023-05-11T12:29:05
|
1705731686
|
{
"authors": [
"JamieSabbatella",
"mdanish-kh",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8523",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/106520"
}
|
gharchive/pull-request
|
New version: Amazon.WorkspacesClient version 5.9.0
[x] Have you signed the Contributor License Agreement?
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[x] Does your manifest conform to the 1.4 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
@JamieSabbatella Just an fyi: The PR has been manually validated and is waiting on you to resolve the comments before merge.
Yes - sorry about these Internal-Error-Dynamic-Scan errors. They're caused by Automated Validation pipeline interference, and the package did pass our alternate Manual Validation pipeline.
|
2025-04-01T06:39:36.747464
| 2023-06-07T21:41:47
|
1746725799
|
{
"authors": [
"stephengillie",
"superusercode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8524",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/109156"
}
|
gharchive/pull-request
|
TimBrogden.Keysticks version <IP_ADDRESS>
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.4 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: codeflow:open?pullrequest=https://github.com/microsoft/winget-pkgs/pull/109156&drop=dogfoodAlpha
This package seems to install the installer. Is it a nested installer?
|
2025-04-01T06:39:36.751915
| 2023-08-11T07:57:10
|
1846375873
|
{
"authors": [
"hjkl950217"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8525",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/116399"
}
|
gharchive/pull-request
|
New version: Ndd version 2.7.0
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.4 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: codeflow:open?pullrequest=https://github.com/microsoft/winget-pkgs/pull/116399&drop=dogfoodAlpha
/AzurePipelines run
@wingetbot
|
2025-04-01T06:39:36.756337
| 2023-11-15T04:35:01
|
1994023706
|
{
"authors": [
"matbech",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8526",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/126734"
}
|
gharchive/pull-request
|
SmartSoft.SmartFTP version 10.0.3185.0
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.5 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
Hi @matbech,
Apps and Features Entries were omitted on this version. Should they be included?
Thank you for pointing this out. It seems the latest version of wingetcreate removed the UpgradeCode field if it wasn't under the AppsAndFeaturesEntries node.
|
2025-04-01T06:39:36.760316
| 2023-12-08T05:00:29
|
2031931621
|
{
"authors": [
"SpecterShell",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8527",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/129683"
}
|
gharchive/pull-request
|
New package: sigoden.Dufs version 0.38.0
[x] Have you signed the Contributor License Agreement?
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[x] Does your manifest conform to the 1.5 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
@wingetbot waivers Add Validation-Executable-Error
|
2025-04-01T06:39:36.770884
| 2024-02-16T21:47:32
|
2139532428
|
{
"authors": [
"Nbc66",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8529",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/139610"
}
|
gharchive/pull-request
|
New package: Nbc66.VTFEditReloaded version <IP_ADDRESS>
[x] Have you signed the Contributor License Agreement?
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[x] Does your manifest conform to the 1.6 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
@microsoft-github-policy-service agree
Hi, is there anything else I need to do to be able to get this package inside of winget?
Automatic Validation ended with:
No errors to post.
(Automated response - build 791.)
Automatic Validation ended with:
No errors to post.
(Automated response - build 791.)
So what's the issue then?
Manual Validation ended with:
@wingetbot waivers Add Validation-Executable-Error
@stephengillie I updated the manifest with the requested changes
|
2025-04-01T06:39:36.776209
| 2024-04-02T23:27:05
|
2221616534
|
{
"authors": [
"stephengillie",
"vedantmgoyal9"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8530",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/147407"
}
|
gharchive/pull-request
|
OpenRCT2.OpenRCT2 version 0.4.10
Checklist for Pull Requests
[ ] Have you signed the Contributor License Agreement?
[ ] Is there a linked Issue?
Manifests
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.6 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
Automatic Validation ended with:
2024-04-02T23:31:38.9813405Z ##[error]Manifest Validation Failed
2024-04-02T23:31:44.0645527Z ##[section]Finishing: Validate Manifest
(Automated response - build 876.)
|
2025-04-01T06:39:36.778486
| 2024-04-16T16:11:08
|
2246427342
|
{
"authors": [
"SpecterShell",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8531",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/149478"
}
|
gharchive/pull-request
|
New version: Hugo.Hugo version 0.125.0
Created by 🥟 Dumplings in workflow run #4445.
Logs
Updated: 0.124.1 → 0.125.0
Submitting WinGet manifests
Microsoft Reviewers: Open in CodeFlow
Spaces detected in version number.
(Automated response - build 876)
|
2025-04-01T06:39:36.781172
| 2024-05-09T12:16:50
|
2287568722
|
{
"authors": [
"SpecterShell",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8532",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/152827"
}
|
gharchive/pull-request
|
New version: Tencent.TencentDocs version 3.5.1
Created by 🥟 Dumplings in workflow run #4995.
Logs
Updated: 3.5.0 → 3.5.1
Submitting WinGet manifests
Microsoft Reviewers: Open in CodeFlow
I'm having trouble finding a link to this version - has it been released yet?
It is released in their update channel.
|
2025-04-01T06:39:36.785590
| 2024-07-03T17:49:56
|
2389156727
|
{
"authors": [
"drpebcak",
"hackean-msft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8533",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/161375"
}
|
gharchive/pull-request
|
chore: add vcredist dep to gptscript installer
Checklist for Pull Requests
[x] Have you signed the Contributor License Agreement?
[ ] Is there a linked Issue?
Manifests
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[x] Does your manifest conform to the 1.6 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
/azp run
|
2025-04-01T06:39:36.789947
| 2024-10-10T21:58:52
|
2579931119
|
{
"authors": [
"Trenly"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8534",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/180915"
}
|
gharchive/pull-request
|
Update manifests/e/Eugeny/Tabby/1.0.141/Eugeny.Tabby.locale.en-US.yaml
Fixes an issue where this package has multiple different monikers
Microsoft Reviewers: Open in CodeFlow
[Policy] Reset Labels
|
2025-04-01T06:39:36.791284
| 2024-10-28T04:09:04
|
2617242739
|
{
"authors": [
"Exorcism0666",
"ItzLevvie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8535",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/186130"
}
|
gharchive/pull-request
|
Remove version: NumeRe.NumeRe version <IP_ADDRESS>9
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
Close with reason: URL works;
|
2025-04-01T06:39:36.792487
| 2024-11-01T04:01:53
|
2628266893
|
{
"authors": [
"Exorcism0666",
"ItzLevvie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8536",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/187653"
}
|
gharchive/pull-request
|
Remove version: REALiX.HWiNFO version 8.04
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
Close with reason: URL works;
|
2025-04-01T06:39:36.794304
| 2024-11-12T03:13:56
|
2650886529
|
{
"authors": [
"Exorcism0666",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8537",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/190634"
}
|
gharchive/pull-request
|
Remove version: WinSCP.WinSCP version 5.19.4
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
URL: https://sourceforge.net/projects/winscp/files/WinSCP/5.19.4/WinSCP-5.19.4-Setup.exe/download
OK
(Automated message - build 923)
|
2025-04-01T06:39:36.798223
| 2021-09-01T08:46:56
|
984880223
|
{
"authors": [
"vedantmgoyal2009"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8538",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/26551"
}
|
gharchive/pull-request
|
ARP: Amazon.Chime version 4.39.10232.1
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.0 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
|
2025-04-01T06:39:36.801886
| 2022-02-15T18:05:35
|
1139036995
|
{
"authors": [
"ImJoakim",
"OfficialEsco",
"zachcarp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8539",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/46470"
}
|
gharchive/pull-request
|
New version: Couchbase.ServerEnterprise version 7.0.7031
[X] Have you signed the Contributor License Agreement?
[X] Have you checked that there aren't other open pull requests for the same manifest update/change?
[X] Does your manifest conform to the 1.0 schema?
Microsoft Reviewers: Open in CodeFlow
Should probably add a ShortDescription too.
@wingetbot waivers Add Policy-Test-2.7
|
2025-04-01T06:39:36.805538
| 2022-05-20T20:06:51
|
1243570429
|
{
"authors": [
"ItzLevvie",
"JanDeDobbeleer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8540",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/61433"
}
|
gharchive/pull-request
|
JanDeDobbeleer.OhMyPosh version 7.87.0
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.1 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
@wingetbot run
|
2025-04-01T06:39:36.807082
| 2022-08-28T12:41:32
|
1353342362
|
{
"authors": [
"russellbanks"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8541",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/74966"
}
|
gharchive/pull-request
|
ARP Entries: Azul.Zulu.13.JDK version 13.35.51
Pull request has been automatically created using Add-ARPEntries
Removed DisplayVersion because it matches https://github.com/microsoft/winget-pkgs/pull/74965
|
2025-04-01T06:39:36.810714
| 2022-12-27T15:36:48
|
1511914932
|
{
"authors": [
"SpecterShell",
"mdanish-kh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8542",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/92368"
}
|
gharchive/pull-request
|
[Zip][WiX] New package: Uzero.ScanScan version 0.2.7
[X] Have you signed the Contributor License Agreement?
[X] Have you checked that there aren't other open pull requests for the same manifest update/change?
[X] Does your manifest conform to the 1.4 schema?
Microsoft Reviewers: Open in CodeFlow
@msftbot .zip
|
2025-04-01T06:39:36.811976
| 2023-01-17T17:41:24
|
1536785472
|
{
"authors": [
"junaga"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8543",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/93987"
}
|
gharchive/pull-request
|
up whatsapp.whatsapp description
null
Microsoft Reviewers: Open in CodeFlow
@microsoft-github-policy-service agree
|
2025-04-01T06:39:36.816491
| 2020-07-16T07:45:19
|
657943795
|
{
"authors": [
"kennykerr",
"rylev",
"uhuntu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8544",
"repo": "microsoft/winrt-rs",
"url": "https://github.com/microsoft/winrt-rs/pull/262"
}
|
gharchive/pull-request
|
Re-implement try_download to prevent twice download.
As I mentioned in #260, there was a bug in the cargo winrt install procedure: without setting the curl nobody option when doing a header request, it would spend a lot of time downloading more than just the header. This patch fixes it.
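The actual change is in Rust using cURL, but the idea (a body-less header probe first, then a single full download) can be sketched in TypeScript; the function name and URL handling are placeholders:
// Sketch only: probe with a HEAD request (no body transferred), then download
// the body exactly once from the resolved URL.
async function tryDownload(url: string): Promise<ArrayBuffer> {
  const probe = await fetch(url, { method: 'HEAD', redirect: 'follow' });
  const resolved = probe.url; // final URL after redirects
  const res = await fetch(resolved);
  if (!res.ok) {
    throw new Error(`download failed: ${res.status}`);
  }
  return res.arrayBuffer();
}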
@uhuntu how are you testing this to confirm before/after that the network usage is satisfactory? Also, remember to run cargo fmt on the code to allow the CI build to pass.
About cargo fmt, it's pretty strange that on my machine it doesn't produce the same result as CI does. Anyway, I formatted it the way CI suggests.
About the testing, you can see the output below after the fix:
C:\Users\hunt\source\repos\winrt-rs>cargo winrt install --verbose
Resolving winrt_macros
Installing 0 nuget dependencies
Resolving winrt
Installing 3 nuget dependencies
Fetching KennyKerr.Windows.TestWinRT
Requesting https://www.nuget.org/api/v2/package/KennyKerr.Windows.TestWinRT/1.0.17
Requesting https://globalcdn.nuget.org/packages/kennykerr.windows.testwinrt.1.0.17.nupkg
Retrieved data from https://globalcdn.nuget.org/packages/kennykerr.windows.testwinrt.1.0.17.nupkg
Starting extraction of 'KennyKerr.Windows.TestWinRT'
Searching zip file: _rels\.rels
Searching zip file: KennyKerr.Windows.TestWinRT.nuspec
Searching zip file: lib\uap10.0\TestComponent.winmd
Found winmd file: "TestComponent.winmd"
Searching zip file: runtimes\win10-x86\native\TestComponent.dll
Found dll "TestComponent.dll" with arch "win10-x86" at path runtimes\win10-x86\native\TestComponent.dll
Searching zip file: runtimes\win10-x64\native\TestComponent.dll
Found dll "TestComponent.dll" with arch "win10-x64" at path runtimes\win10-x64\native\TestComponent.dll
Searching zip file: [Content_Types].xml
Searching zip file: package\services\metadata\core-properties\0e77aaa72e4f4184a4a9efbcbb14f1ca.psmdcp
Searching zip file: .signature.p7s
Fetching Microsoft.AI.MachineLearning
Requesting https://www.nuget.org/api/v2/package/Microsoft.AI.MachineLearning/1.3.0
Requesting https://globalcdn.nuget.org/packages/microsoft.ai.machinelearning.1.3.0.nupkg
Retrieved data from https://globalcdn.nuget.org/packages/microsoft.ai.machinelearning.1.3.0.nupkg
Before the fix, you should never see "Retrieved data from https://globalcdn.nuget.org/packages/microsoft.ai.machinelearning.1.3.0.nupkg"
@tim-weis I re-submitted as you wished.
@uhuntu this looks good! Thanks for being patient. I think the only thing to do is what @tim-weis said - map the errors from cURL to Error::DownloadError
@rylev @tim-weis Thanks for being patient too :) I re-submitted the code.
Re-submitted.
|
2025-04-01T06:39:36.825950
| 2023-04-03T19:58:54
|
1652686573
|
{
"authors": [
"abannachGrafana",
"charlesritchea",
"oxc"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8545",
"repo": "microsoft/wslg",
"url": "https://github.com/microsoft/wslg/issues/1029"
}
|
gharchive/issue
|
Repeating keys running IntelliJ in WSLg
Windows build number:
22621.0
Your Distribution version:
22.04
Your WSL versions:
WSL version: <IP_ADDRESS>
Kernel version: <IP_ADDRESS>
WSLg version: 1.0.49
MSRDC version: 1.2.3770
Direct3D version: 1.608.2-61064218
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22621.1485
Steps to reproduce:
Install jetbrains-toolbox directly to wsl, install IntelliJ and use it with a large project
WSL logs:
No response
WSL dumps:
No response
Expected behavior:
No response
Actual behavior:
After a while keys will randomly start repeating (particularly bad with IdeaVIM!)
I remember this exact same behaviour when I used to use Hyper-V, making me think whatever common input library/protocol is to blame
I'm also running Chrome Remote Desktop on top of this, maybe that is a factor as well
I'm pretty sure it's from whatever common library is used by both WSLg and Hyper-V because I also experienced this with Hyper-V hosted Linux. I think I tried to bring it up there too years ago and obviously was never fixed
On Fri, Jan 19, 2024, 9:18 AM Adam Bannach @.***> wrote:
I just started running into this as well. I realize this is an old thread. I've noticed when typing in the terminal sometimes it will miss a key and then keys will just stay repeating until I hit another key.
Did you ever find the root cause?
Interesting. As I was thinking about this, I realized that I turned on the "Quick accent" tool in PowerToys, which listens for a held-down key to prompt for the accented character. I just turned it off, so I'll monitor for a bit, but so far I'm thinking that may have been my problem.
Following up, I haven't had an incident since disabling the "Quick accent" tool. It's worth looking into any services that are running that may be monitoring keyboard input. Not a solution, but maybe a clue.
I don't even have PowerToys installed, but still experiencing the same issue.
It gets massively worse if there is more than one WSLg window open, at that point I can basically not type anymore because keys will be just repeating endlessly, until I close all but one window again.
|
2025-04-01T06:39:36.827960
| 2019-05-30T16:43:36
|
450400813
|
{
"authors": [
"TiagoBrenck",
"jmprieur"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8546",
"repo": "microsoftgraph/csharp-teams-sample-graph",
"url": "https://github.com/microsoftgraph/csharp-teams-sample-graph/pull/29"
}
|
gharchive/pull-request
|
Fixed swapped cache methods
As an urgent hotfix, I have fixed the swapped implementations of UserTokenCacheAfterAccessNotification and UserTokenCacheBeforeAccessNotification.
Jean-Marc suggested improving our .NET 4.x cache providers according to the latest changes in microsoft-authentication-extensions-for-dotnet, but since that will require more time, I would like to do this quick hotfix first to prevent developers from cloning the wrong class as soon as possible.
@Jackson-Woods FYI (and to watch for other cases, although @TiagoBrenck has searched in all GitHub for this error)
|
2025-04-01T06:39:36.832393
| 2024-11-22T15:41:19
|
2683644165
|
{
"authors": [
"PenguinCats",
"SteveMutungi254"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8547",
"repo": "microsoftgraph/entra-powershell",
"url": "https://github.com/microsoftgraph/entra-powershell/pull/1229"
}
|
gharchive/pull-request
|
Fix RequiredResourceAccess cannot be an array issue in Set-AzureADMSApplication
Why we have this PR
When using Set-EntraApplication -ApplicationId $application.Id -RequiredResourceAccess $requiredResourceAccess, the $requiredResourceAccess could be an array. For example:
$requiredResourceAccess = @(
@{resourceAppId = '00000003-0000-0000-c000-000000000000'
resourceAccess = @(
@{
id = 'c79f8feb-a9db-4090-85f9-90d820caa0eb'
type = 'Scope'
},
@{
id = '9a5d68dd-52b0-4cc2-bd40-abcf44ac3a30'
type = 'Role'
})
},
@{resourceAppId = '11111111-0000-0000-c000-000000000000'
resourceAccess = @(
@{
id = '11111111-a9db-4090-85f9-90d820caa0eb'
type = 'Scope'
},
@{
id = '11111111-52b0-4cc2-bd40-abcf44ac3a30'
type = 'Role'
})
}
)
Set-EntraApplication -ApplicationId $application.Id -RequiredResourceAccess $requiredResourceAccess
In the current logic, the input array is converted into a single JSON string. However, Set-EntraApplication actually calls the Update-MgApplication command, which requires the input -RequiredResourceAccess <IMicrosoftGraphRequiredResourceAccess[]>. Reference here. This single string cannot be converted into IMicrosoftGraphRequiredResourceAccess[], causing the following error:
How to fix in the PR
We convert the parameter into string[]; each JSON string can then be converted into IMicrosoftGraphRequiredResourceAccess correctly.
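The module itself is PowerShell, but the serialization difference the fix relies on is easy to sketch in TypeScript (types and names here are illustrative only): serializing the whole array yields a single string that cannot bind to an array-typed parameter, while serializing per element yields a string[] whose entries can each be converted individually.
interface RequiredResourceAccess {
  resourceAppId: string;
  resourceAccess: { id: string; type: 'Scope' | 'Role' }[];
}

// Before (problematic): one JSON string for the entire array.
function serializeWhole(items: RequiredResourceAccess[]): string {
  return JSON.stringify(items);
}

// After (the fix's idea): one JSON string per element.
function serializePerElement(items: RequiredResourceAccess[]): string[] {
  return items.map(item => JSON.stringify(item));
}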
Thank you so much, @PenguinCats, for your valuable contribution! We truly appreciate it and are currently reviewing your suggestion.
@PenguinCats: I have run some tests, and the new change is working perfectly. Thank you for the contribution.
Thank you for the quick review.
@KenitoInc: Build pipeline also passed.
|
2025-04-01T06:39:36.846146
| 2019-07-17T15:39:27
|
469294171
|
{
"authors": [
"Jopie64",
"Mr-alaa",
"VinodRavichandran",
"Warrior1st",
"guerital",
"ksikorsk",
"mwerghemmi",
"rugt0r"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8548",
"repo": "microsoftgraph/microsoft-graph-comms-samples",
"url": "https://github.com/microsoftgraph/microsoft-graph-comms-samples/issues/102"
}
|
gharchive/issue
|
Getting participants in a call using the API
Hi
This is related to a post I made on Stackoverflow, but it didn't gain much traction and this seems like a better place for it.
When I try to request the list of participants in a call (as detailed on https://docs.microsoft.com/en-us/graph/api/call-list-participants?view=graph-rest-beta) the response suggests there are none. For example:
I place a call to the bot, and the calling endpoint receives a notification of an incoming call with id 471f0300-401f-4c4a-9967-3cee9a052519. The bot answers the call with a POST on:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519/answer
The bot subsequently receives a message on the calling endpoint that the call has been established. I can query the graph about this call by making a GET:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519
I get a response with code 200 and the details of the call in progress, so the call is clearly valid and accessible. However, if I attempt to get the list of participants with this GET:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519/participants
I get a response with code 200, but the body contains the following:
{"@odata.context":"https://graph.microsoft.com/beta/$metadata#app/calls('471f0300-401f-4c4a-9967-3cee9a052519')/participants","value":[]}
Shouldn't this contain the list of participants in the call? Is this implemented, and if so, is there an example of this working?
Thanks.
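For reference, the GET described above can be sketched in TypeScript as follows; the access token is a placeholder and this is only an illustration of the request shape, not sample code from the repository:
// Sketch only: query a call's participants with an application access token.
async function getParticipants(callId: string, accessToken: string): Promise<unknown[]> {
  const res = await fetch(
    `https://graph.microsoft.com/beta/app/calls/${callId}/participants`,
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) {
    throw new Error(`request failed: ${res.status}`);
  }
  const body = await res.json();
  return body.value; // the empty array described above
}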
--
Edit:
Further to this, I am not getting a notification of the roster being updated when the bot invites participants. Assuming the bot invites someone to join the call with the above call Id, I issue a POST to:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519/participants/invite
The response contains the Id of a comms operation, and when the invitee accepts, the calling endpoint receives a message that the comms operation completed. I do not however get an update to the participants resource as detailed in this piece of documentation. I can't see that I'm doing anything wrong here. Any ideas?
Another issue is that I can't get the participant resource when I use the myParticipantId field which is returned when I GET the call. Surely I should at least be able to get this?
Example assuming the callId is 471f0300-401f-4c4a-9967-3cee9a052519:
I perform a GET on:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519
I get back some JSON which includes the following:
"myParticipantId": "64e965cf-3672-4a58-accd-6fdd162be212"
I assumed I would be able to get the participant resource by doing the following:
https://graph.microsoft.com/beta/app/calls/471f0300-401f-4c4a-9967-3cee9a052519/participants/64e965cf-3672-4a58-accd-6fdd162be212
But instead I get a 404 with "Participant not found" as the message.
The inability to get participants is becoming a real problem for me, I'd like to try out audio routing groups but once again it seems I need to provide participant Ids. Does anyone else have this working?
Running the incident bot sample I see that a GET on calls/{id}/participants does actually work and I get a list of participants. That's when I follow the instructions in README.md and schedule a meeting, join it myself, then post on https://{botUrl}/joinCall with the meeting ID so the bot can join.
Can we only get participants from a pre-existing meeting the bot has called into? Or is there a way for the bot to get the participants in a call it has created?
I see this note on the documentation:
"Important: When a call is escalated from peer-to-peer to multiparty, not all multiparty features are available. Specifically, the bot will not receive roster updates."
So we don't get roster update notifications for a call, but does this mean that we cannot access the roster at all? And if so, why? Tracking participants seems like basic functionality.
@VinodRavichandran can you discuss the above subtleties? It looks like gaps with escalation to me. Any plan to address these?
@zhengni-msft Can you please evaluate this. Is it just documentation update or fixes?
Hi, I am now getting roster updates on escalated calls via the calling endpoint, as well as being able to request the list of participants on escalated calls doing beta/app/calls/{id}/participants. This is what we were after, thanks.
@rugt0r
We're running into the same problem as you described in the first post of this thread:
Unable to get a list of participants of an incoming call. Also querying myParticipantId fails like you mentioned.
Since this issue is now closed, did you find a way to mitigate this?
Hi @zhengni-msft, @VinodRavichandran,
Is support for communications/calls/{id}/audioRoutingGroups/{id} still planned for the Graph API?
This is a very important feature for call center scenarios.
Any chance of getting this GA this year?
Many thanks!
Any news on this bug?
I tried it all and I got:
"code": "BadRequest",
"message": "Resource not found for the segment 'app'.",
Anyone figured this out? I tried using the beta version as suggested but I still get value: [].
|
2025-04-01T06:39:36.911035
| 2021-08-24T20:40:47
|
978466383
|
{
"authors": [
"Neutrino-Sunset"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8549",
"repo": "microsoftgraph/microsoft-graph-explorer-v4",
"url": "https://github.com/microsoftgraph/microsoft-graph-explorer-v4/issues/1077"
}
|
gharchive/issue
|
Photo request works in Graph Explorer gives 404 in Postman and Angular
I can't see where best to post this, so I'll post it here and I'm sure someone will move it to where it is supposed to go if it's in the wrong place.
The following two images sum up the problem I've found.
Calling https://graph.microsoft.com/beta/users/f3091fbc-12a9-4689-b20f-c2f963bfe5fd/photo/$value works in the Graph Explorer, calling the same url in Postman (and in my Angular app) fails returning 404.
I don't see how that can possibly be right. The resource clearly exists. Calling https://graph.microsoft.com/beta/me also works fine in both Postman and from Angular.
AB#10856
It doesn't work using your own Postman collection either.
Steps to reproduce:
Follow the steps to fork and setup auth for the Microsoft Graph Postman collection as described here
Log in using a personal Microsoft account
Verify v1.0 endpoints are called correctly.
Duplicate the /v1.0/me/photo/$value endpoint and adjust the url to /beta/me/photo/$value
Send the request.
Expected
The account's profile photo should be returned, as it is when using the Graph Explorer
Actual
404 error is returned
After further investigation I have discovered more details relating to this issue.
If I have a personal Microsoft account with a profile picture and use the Graph Explorer and the /beta/me/photo/$value api I retrieve the profile picture just fine.
If I use the same personal account to login to an application registered with Azure AD, I do not get the profile picture. But in Azure AD I can load a profile picture to the account's guest listing and then the Graph api does return the profile picture.
It would seem that using the Graph Explorer an attempt to retrieve an account's profile picture attempts to get the profile picture from the account directly, but if you use the Graph API to get the profile picture in the context of being logged in to an Azure AD registered application it only gets the profile picture from AD, and if there isn't a profile picture associated with the account's guest listing in AD then the Graph API doesn't return the profile picture from the account itself.
I don't know why it would work like that, but it's completely useless. Obviously the average guest user of an Azure AD application isn't going to have login access to the Azure portal to manipulate their guest profile listing there.
This really should just work.
I've worked out the problem.
If you have an application registered in Azure as multi-tenant and you authenticate using the common tenant (i.e. with MSAL 2.0 this would be your authority setting https://login.microsoftonline.com/common), then the Graph /beta/me/photo/$value API correctly retrieves the user's photo directly from their account.
If however you have an application registered as single tenant, so that your identity authority is https://login.microsoftonline.com/{tenant-id}, then the Graph /beta/me/photo/$value API only retrieves the photo from the account's guest listing in Azure.
That seems like a bug in the Graph API to me. In the case of a guest account used in a single tenant application, the account isn't defined in Azure, only referenced there, and while the directory admins can manage aspects of the account's permissions in Azure, the user's actual account is the sole source of truth regarding the user's profile information, and that's what the Graph API should be returning for any setting not overridden in the Azure directory.
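Assuming the @azure/msal-browser package (client and tenant IDs are placeholders), the authority difference described above looks roughly like this:
import { PublicClientApplication } from '@azure/msal-browser';

// Multi-tenant app registration: authenticate against the "common" authority.
const multiTenantApp = new PublicClientApplication({
  auth: {
    clientId: '<your-client-id>',
    authority: 'https://login.microsoftonline.com/common',
  },
});

// Single-tenant app registration: authenticate against a specific tenant.
const singleTenantApp = new PublicClientApplication({
  auth: {
    clientId: '<your-client-id>',
    authority: 'https://login.microsoftonline.com/<tenant-id>',
  },
});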
|
2025-04-01T06:39:36.923646
| 2021-12-15T05:31:59
|
1080584030
|
{
"authors": [
"anonymh",
"jasonjoh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8550",
"repo": "microsoftgraph/nodejs-webhooks-sample",
"url": "https://github.com/microsoftgraph/nodejs-webhooks-sample/issues/351"
}
|
gharchive/issue
|
Ngrok does not work over network
So I have a problem with ngrok. I've worked with nexphisher and blackeye, and it worked only on devices that are connected to the same network as me. I'm using Kali Linux on an Oracle VM, Ubuntu (latest version). Can anyone help?
I'm sorry but I'm not sure how I can help. The ngrok docs have an Ask a question button on the page that emails their support team.
|
2025-04-01T06:39:36.928592
| 2022-05-12T21:46:36
|
1234513074
|
{
"authors": [
"FractalMind",
"peter-mw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8551",
"repo": "microweber/microweber",
"url": "https://github.com/microweber/microweber/issues/863"
}
|
gharchive/issue
|
How to implement variants in Microweber?
Hi, the variants are not finished yet; we started work on them but they are hidden for now until we finish them.
Originally posted by @peter-mw in https://github.com/microweber/microweber/issues/771#issuecomment-942108793
Good day, as posted before we are in the process of deciding how our app will interface with Microweber to publish our variants.
For now, we are looking at 3 solutions but they all have their problems:
(current) Using multiple items to emulate variants(Not true variants, just different items, supports translations)
Using custom_fields (No translation available, custom_fields don't change prices: Red or Blue products will always have the same price)
Using variants (Not implemented yet, support translations?)
Without putting pressure on anyone, does someone know when variants will be finished? Two weeks, months, years? And will they be compatible with multi-language translation?
Or is there any way to make custom_fields change the price to emulate variants? And finally, when will custom_fields support translation?
I know, it's a lot of questions about Microweber's roadmap but any clues will help us a lot! Thank you for your time :)
Hi, we have started some work on the variants; it's on the roadmap for the next versions.
Maybe next month will be ready
Awesome thank you. Any idea if it will support translations?
|
2025-04-01T06:39:36.953059
| 2023-11-11T09:58:10
|
1988870736
|
{
"authors": [
"mienaiyami",
"pein0saga"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8552",
"repo": "mienaiyami/yomikiru",
"url": "https://github.com/mienaiyami/yomikiru/issues/301"
}
|
gharchive/issue
|
"Navigate to page" not working properly (testing with cbz file)
Type of installation
Setup (.exe)
Type of reader (if reader related)
image
Steps to reproduce
Navigate to page with mouse or f shortcut
✔️ Expected Behavior
Navigate to page
❌ Actual Behavior
Cannot Navigate to correct page.
If I want to jump to 188, it will only jump to 39; if 288, only 90
I just checked this on a cbz with 250 pages and it was working without any issue.
Could you share some more details?
Note that when you click and move the cursor while it is scrolling to the page, it will cancel the scroll and you will be on the page where you started dragging the mouse.
|
2025-04-01T06:39:36.967879
| 2020-09-29T21:01:26
|
711448608
|
{
"authors": [
"BDisp",
"tig"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8553",
"repo": "migueldeicaza/gui.cs",
"url": "https://github.com/migueldeicaza/gui.cs/issues/927"
}
|
gharchive/issue
|
Window should support not having a frame/title
Window is currently hard coded to have a frame (and title).
There are cases where this frame is un-wanted, especially in scenarios where the user wants max screen real estate.
Suggestion is to
Add a bool HasFrame property (default true). Can't be Frame because that's already a member.
Logic throughout would need to change (see all places where 1 + exists).
If padding = 0 and HasFrame is false then X and Y would be 0, 0.
If padding = 1 and HasFrame is false then X and Y would be 1, 1.
I just realized this has an easy work around:
var win = new Window("title")
{
X = -1,
Y = -1,
Width = Dim.Fill(-1),
Height = Dim.Fill(-1)
};
Any reasons why this won't work? (I tested it and it seems fine).
Only the line in the title is printed. Even using an empty string it is also printed because the margin isn't handled for the top. So the top horizontal line is always printed. Maybe if Y == -1 && title.IsEmpty don't print the line. And if Y == -1 && !title.IsEmpty only print the title and suppress the line.
I'm not sure what you mean. In another test, I modified Generic to:
class MyScenario : Scenario {
public override void Setup ()
{
// Put your scenario code here, e.g.
var button = new Button ("Press me!") {
X = Pos.Center (),
Y = Pos.Center (),
};
button.Clicked += () => MessageBox.Query (20, 7, "Hi", "Neat?", "Yes", "No");
Win.X = -1;
Win.Y = -1;
Win.Height = Dim.Fill (-1);
Win.Width = Dim.Fill (-1);
Win.Add (button);
}
}
So it seems to work fine. What am I missing?
I tested in the Example project and since it has a MenuBar and a StatusBar the horizontal top line is printed.
Closing. Won't fix.
|
2025-04-01T06:39:37.005460
| 2018-04-03T06:36:12
|
310711372
|
{
"authors": [
"miguelmota",
"siitao"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8554",
"repo": "miguelmota/cointop",
"url": "https://github.com/miguelmota/cointop/issues/2"
}
|
gharchive/issue
|
refresh?
Does it support automatic refresh?
@siitao yes, it polls the CoinMarketCap API every minute
@siitao btw just added ctrl-r keyboard shortcut to force refresh
|
2025-04-01T06:39:37.017801
| 2023-11-12T15:08:29
|
1989443611
|
{
"authors": [
"mik-jozef"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8555",
"repo": "mik-jozef/lr-parser-typescript",
"url": "https://github.com/mik-jozef/lr-parser-typescript/pull/4"
}
|
gharchive/pull-request
|
Version 1
A major rewrite. Among the changes are:
TokenKinds are replaced by strings.
Uses the minimal instead of the classic LR parsing algorithm to generate the parser table.
This resulted in a reduction of Hyloa's parser table from more than 100 000 states to 472 (!), making me believe there must be a bug somewhere, but none has been found so far.
Renamed SyntaxTreeClass#rule to "pattern".
Proper documentation added.
Internal changes:
Better organization of code.
From now on, I'll do reasonably-sized MRs only.
LGTM, thank ye very much!
|
2025-04-01T06:39:37.027145
| 2023-03-10T11:03:48
|
1618797109
|
{
"authors": [
"HerringtonDarkholme",
"Mubashwer",
"mikaelmello"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8556",
"repo": "mikaelmello/inquire",
"url": "https://github.com/mikaelmello/inquire/issues/100"
}
|
gharchive/issue
|
Need ability to inject stdout and stderr writers to prompt
Is your feature request related to a problem? Please describe.
I want to test the output of a multiselect prompt in unit tests.
Describe the solution you'd like
I want to be able to pass impl std::io::Write for both stdout and stderr into the multiselect prompt.
Describe alternatives you've considered
The alternative is to test the multiselect prompt output in integration tests only.
Additional context
I am building a CLI app and I want to inject writers for stdout and stderr into the components as dependencies to make them more testable.
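The crate is Rust, but the injection idea is language-agnostic; here is a minimal TypeScript sketch (all names hypothetical) of passing a writer-like sink instead of writing to stdout directly:
// Hypothetical sketch: the prompt writes through an injected sink, so tests can
// capture the output instead of reading the real stdout.
interface OutputSink {
  write(text: string): void;
}

class MultiSelectPrompt {
  constructor(private readonly out: OutputSink, private readonly options: string[]) {}

  render(): void {
    this.options.forEach((option, i) => this.out.write(`[ ] ${i + 1}. ${option}\n`));
  }
}
// In production, pass a sink backed by process.stdout; in tests, pass a sink
// that appends to a string buffer and assert on the buffer.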
+1 for this. Another alternative might be a programmatic API to answer inquire's prompt?
Designing such API might be challenging and fun.
+100, I certainly want to do something in this direction, especially because inquire itself has awful test coverage.
I'm currently taking a look at what's pending in order to prioritize them, this is likely going to be at the top so keep watching :)
|
2025-04-01T06:39:37.033709
| 2016-06-02T04:32:02
|
158057373
|
{
"authors": [
"hankditton",
"intinig"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8557",
"repo": "mikamai/ruby-lol",
"url": "https://github.com/mikamai/ruby-lol/issues/53"
}
|
gharchive/issue
|
Rate limiting
It would be nice to build automatic rate-limiting into the client
Hey hank,
we thought about it for a while, but we didn't want to start adding random features to a library that is meant to be just a dumb client (you'll see we're also moving away from rich classes to parse responses).
If you have a good modular solution to this please feel free to submit it though :)
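As a sketch of what a small, modular rate limiter around the client could look like (TypeScript, hypothetical names, not a proposal for the actual implementation):
// Hypothetical sketch: allow at most `limit` calls within any `windowMs` window,
// waiting when the window is full.
class RateLimiter {
  private timestamps: number[] = [];

  constructor(private readonly limit: number, private readonly windowMs: number) {}

  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now();
      this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
      if (this.timestamps.length < this.limit) {
        this.timestamps.push(now);
        return;
      }
      const waitMs = this.windowMs - (now - this.timestamps[0]);
      await new Promise(resolve => setTimeout(resolve, waitMs));
    }
  }
}
// Usage: const limiter = new RateLimiter(10, 10_000);
// await limiter.acquire(); // then perform the API request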
I'm working on an implementation that follows how caching is implemented closely, does that sound good?
Sure, let's see it :)
PR submitted, #54
|
2025-04-01T06:39:37.051481
| 2017-08-09T17:01:23
|
249097383
|
{
"authors": [
"mike-levenick",
"motnoslo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8558",
"repo": "mike-levenick/mut",
"url": "https://github.com/mike-levenick/mut/issues/5"
}
|
gharchive/issue
|
Trying to update Mobile Device Name for an IOS device
Is the Mobile Device Name the "general" section of the device inventory, the same as the "Device Name" in the "attribute to update" dropdown in The MUT?
I tried to update the Mobile Device name from iPad to a actual name. The MUT seems to be ok with it, but afterwards the Mobile Device name is still iPad. First time I'm using the Mut, so I assume I'm doing stupid, unless the device name and mobile device name are different. I'm saving as a Windows .csv too.
Any ideas?
Hello,
The only way to update Device Name for iPads in the JDS is via the "enforce mobile device name" checkbox in the JSS.
What the MUT does, is it generates an MDM command, which will then tell the device to change the name.
In order for this to work, you have to have a supervised device, the device has to have MDM communication, and it has to be turned on and on network.
You can check in the management tab for the device, and see if it has a pending "Set Device Name" command.
If you got a 201 - success, it sounds like the command was likely generated correctly, and the name may just update the next time the device talks to the JSS.
Hmmmm, seems like it might be a JAMF issue. I have "enforce mobile device name" checked. It is a supervised device, and it is communicating. When I went into the "edit general" section, I changed the name to Mickey Mouse and hit save. I go back out and it still shows iPad; edit General, it shows Mickey Mouse, hit save. I close the device page, search for the device again, "edit general", and it's back to iPad.
Do you have a configuration profile scoped to that device to block changing device name via the restrictions payload?
That will stop even admins from changing the name, and if you have that set up, it will behave similarly to what you're describing.
Nope. Just checked the config profile; no block on changing the name.
Curiouser and curiouser.
If you go to the inventory record for the device and go to the management tab, does it have a pending Set Device Name command? If you go to the history tab for management history, did that command ever go through? Is it being generated and failing? Being generated and succeeding but then flipping back? Not being generated at all?
Hi Mike,
My apologies for disappearing, but as school approaches I'm fighting many fires.
I went ahead and ran a .csv file with about 75 devices to change the device name. MUT had a mix of OKs and Faileds, but on the devices that I was looking at at the time, the device name was changed. I need to check the serial for a failed device and see if its device name has changed. Does the device have to be on to come back with an OK? Not all the devices were on at that time.
Sounds good. What is the error number on the failures? Is it 404 or something else?
Make sure you have MDM communication to a supervised device, which is turned on and on-network, and you do not have a config profile scoped to the device which restricts name changes.
Other than that, depending on what the error number is, we can do other troubleshooting steps. You can also enable Advanced Debugging from the menu bar.
I checked two of the many devices that had failed, but they were not on at the time I ran MUT. It makes sense that they failed at runtime, but when I turned them on, the device name had changed. The fails were 400's.
I think we're good.
Thanks again!
Sounds great. I'll close 'er up.
|
2025-04-01T06:39:37.101518
| 2012-01-20T21:49:00
|
2917346
|
{
"authors": [
"CrabDude",
"alunny",
"dscape",
"mikeal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8559",
"repo": "mikeal/request",
"url": "https://github.com/mikeal/request/issues/160"
}
|
gharchive/issue
|
Basic Auth Fail
Where curl succeeds:
curl -L<EMAIL_ADDRESS>
request fails:
var request = require('request'),
    url = require('url'),
    uri = url.parse('https://XXXX:XXXX@build.phonegap.com/api/v1/apps/63320/android');
console.log(uri);
request({url: uri}, function(err, res) {
  console.log(res);
});
{
...
body: '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InvalidArgument</Code><Message>Unsupported Authorization Type</Message><ArgumentValue>Basic YWRhbS5jcmFidHJlZUBwYWxtLmNvbTpuZXdwYXNzd29yZA==</ArgumentValue><ArgumentName>Authorization</ArgumentName><RequestId>1099F84E8B9EC52C</RequestId><HostId>OleJHmIQkg76G98H8xngear3Ojo7fjpc7Id0GMoCqJvD5JKt/pilSnY4LFV1ylCf</HostId></Error>'
...
}
Obviously this could be a build.phonegap.com issue, but at first blush it seems to be more an Auth header issue.
can you give me the verbose curl output?
https://gist.github.com/1649844
hrm.... that looks like exactly what we do. BTW, you don't need to pre-parse the URI.
try this.
var r = request(url)
r.on('response', function () {console.log(r.headers.authorization)})
Compare that with the header you can see in the curl debug output.
Whoops. Looks like I didn't include 80% of the curl output. Updated the gist:
https://gist.github.com/1649844
Result (from your snippet):
Basic YWRhbS5jcmFidHJlZUBwYWxtLmNvbTpuZXdwYXNzd29yZA==
Header dump from res object:
_header: 'GET /android.phonegap/slicehost-production/apps/63320/PhoneGap_GettingStarted-debug.apk HTTP/1.1\r\nCookie: _okapi_2_session=BAh7ByIbd2FyZGVuLnVzZXIucGVyc29uLmtleVsIIgtQZXJzb25bBmkCCIQiIiQyYSQxMCRsUWVjMUFjVzdWUm9UTE1CWkF5RTEuIg9zZXNzaW9uX2lkIiU2NjI1ZjA2MDljMTgzYjBmODJkOTRkMmRiYTQxZjE1ZA%3D%3D--1cabde9030b443daf316daddf1b2507bec795771\r\nauthorization: Basic YWRhbS5jcmFidHJlZUBwYWxtLmNvbTpuZXdwYXNzd29yZA==\r\ncontent-length: 0\r\nhost: s3.amazonaws.com\r\nConnection: keep-alive\r\n\r\n',
looks like they match, doesn't seem to be an auth issue.
@alunny having auth issues to build.phonegap with request, thoughts?
I'll dig into this more over the weekend. I just wanted to get it out there to maybe get a lead on where to look or see if it was a known issue with a workaround.
That request (to download an Android binary) redirects from build.phonegap.com to a URL hosted by S3, with headers like the following:
Location: http://s3.amazonaws.com/android.phonegap/slicehost-production/apps/99999/SomeApp-release.apk
Status: 302
It looks like request is forwarding the build.phonegap.com auth headers to s3.amazonaws.com, which doesn't like them.
Not sure what the spec suggests in this case - maybe there's a header we should be returning in these cases.
A quick look here - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.8 - suggests request is at fault. A different host on a different protocol should not be considered the same realm. This matches my intuition - there's no reason to think the auth credentials for one host should be valid for a different one.
hrm....
There are lots of places request, and other HTTP clients, disregard the RFC behavior because it's unexpected behavior for a client you would use in code.
I'm wondering if this is one of those cases. If you set a user/pass in a programming client, do you expect it to drop those options on redirect?
For now, the fix for @CrabDude is to run with redirects:false and grab the location and make a new request without auth.
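A rough sketch of that workaround (an illustration, not code from the thread; the followRedirect option name and the example URL are assumptions):
var request = require('request');

// Hypothetical authenticated URL for illustration only.
var authedUrl = 'https://user:pass@build.phonegap.com/api/v1/apps/63320/android';

// Stop request from following the redirect itself, so the Basic auth header
// is not forwarded to the S3 host.
request({ url: authedUrl, followRedirect: false }, function (err, res) {
  if (err) throw err;
  var location = res.headers.location;
  if (res.statusCode >= 300 && res.statusCode < 400 && location) {
    // Fetch the redirect target manually, without credentials.
    request({ url: location }, function (err2, res2) {
      if (err2) throw err2;
      console.log(res2.statusCode);
    });
  }
});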
I'd argue that dropping the user/pass for a redirect to a different host is the expected behavior - it's certainly what web browsers do. I'm not sure of a use case where you would want to assume the auth mechanism for host A is the same as the auth mechanism for host B.
That said, the workaround is trivial enough that further bikeshedding can be postponed :)
FWIW, behavior consistent with curl is certainly consistent with the law of least surprise.
I'm inclined to agree. We can just blow away the header if host !== newhost when we blow away the host header.
Dude I totally use newpassword as my password too!!! :)
Lol. Damn. Did I leave the password in there again?? Or are you just recognizing the hash?
Basic auth is just a hash, it's only secure talking to phonegap because it's https and the whole socket is encrypted :)
|
2025-04-01T06:39:37.103797
| 2021-05-21T02:35:30
|
897622162
|
{
"authors": [
"Alan-Liang",
"mikecao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8560",
"repo": "mikecao/umami",
"url": "https://github.com/mikecao/umami/issues/679"
}
|
gharchive/issue
|
Crypto: synchronous password hashing?
https://github.com/mikecao/umami/blob/5ecaf5587b6bb5968a4b0c53c110bf807dbe9fed/lib/crypto.js#L42-L48
Why would we use hashSync and compareSync in asynchronous functions? They block the event loop and can lead to reduced I/O efficiency.
It's not that big a deal performance-wise, but yes, these should be async calls.
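For illustration, a minimal sketch (not the actual umami code) of what the async variants could look like, assuming a bcrypt implementation such as bcryptjs whose hash() and compare() return promises when no callback is passed:
const bcrypt = require('bcryptjs');

const SALT_ROUNDS = 10;

// Asynchronous hashing avoids blocking the event loop for the whole computation.
async function hashPassword(password) {
  return bcrypt.hash(password, SALT_ROUNDS);
}

async function checkPassword(password, hash) {
  return bcrypt.compare(password, hash);
}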
|
2025-04-01T06:39:37.109168
| 2014-07-23T17:11:36
|
38550490
|
{
"authors": [
"cbillen",
"jeremy"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8561",
"repo": "mikel/mail",
"url": "https://github.com/mikel/mail/pull/754"
}
|
gharchive/pull-request
|
Forcing evaluation to integer prevents infinite looping with ruby-units
If you use ruby-units in your project, the division of 1/2 will return a fraction; in turn, this will yield an infinite loop in this method, causing any call to mail to fail and freeze the system. By simply ensuring we're casting to an integer, we avoid any potential conflicts.
Fixed by #795
|
2025-04-01T06:39:37.123293
| 2020-11-11T20:07:04
|
741036124
|
{
"authors": [
"Asken",
"olegtar83"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8562",
"repo": "mikeobrien/HidLibrary",
"url": "https://github.com/mikeobrien/HidLibrary/issues/123"
}
|
gharchive/issue
|
Trying to read report
var devices = HidLibrary.HidDevices.Enumerate(1848);
device = devices.FirstOrDefault();
device.OpenDevice();
device.ReadReport(ReadReportCallback);
Gives me System.PlatformNotSupportedException: 'Operation is not supported on this platform.'
I'm on win 10 running .Net 5
I used the source and then it's something else :)
Hi, I got the same issue. Can you share how you fixed it?
I hardly remember but I think I used the source from git rather than the nuget package
Nah... it didn't work. Did you rewrite the BeginInvoke() method? Could you check?
|
2025-04-01T06:39:37.135960
| 2020-07-15T14:26:41
|
657394151
|
{
"authors": [
"JakubMosakowski",
"mikepenz"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8563",
"repo": "mikepenz/FastAdapter",
"url": "https://github.com/mikepenz/FastAdapter/pull/912"
}
|
gharchive/pull-request
|
Add expandAllOnPath method to ExpandableExtension
#911 Added method for expanding all items on a path. Updated the example just to show how to use this method.
I'm not sure what the use cases are for the similar method expandIncludeParents. I mean, is there any way to get the position of a collapsed element?
thank you so much
|
2025-04-01T06:39:37.174946
| 2021-07-02T14:59:00
|
935847333
|
{
"authors": [
"B4nan",
"sojeda"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8564",
"repo": "mikro-orm/mikro-orm",
"url": "https://github.com/mikro-orm/mikro-orm/issues/2001"
}
|
gharchive/issue
|
Using DTO Class in constructor.
Describe the bug
I need help: is it possible to use a DTO class in an entity constructor? I am looking for an ORM that allows it, because I have entities with many parameters and it is cleaner this way.
Stack trace
node_1 | [NestWinston] Error 7/2/2021, 2:36:42 PM Method: GET; Path: /123456/concepts; Error: Cannot read property 'storeId' of undefined; - {"service":"xxxx","stack":["Typ
eError: Cannot read property 'storeId' of undefined\n at new Concept (/usr/src/app/src/modules/concepts/entities/concept.entity.ts:89:31)\n at EntityFactory.createEntity (
/usr/src/app/node_modules/@mikro-orm/core/entity/EntityFactory.js:69:20)\n at EntityFactory.create (/usr/src/app/node_modules/@mikro-orm/core/entity/EntityFactory.js:34:77)\n
at SqlEntityManager.find (/usr/src/app/node_modules/@mikro-orm/core/EntityManager.js:100:52)\n at processTicksAndRejections (node:internal/process/task_queues:93:5)\n
at ConceptService.getAll (/usr/src/app/src/modules/concepts/concept.service.ts:13:33)\n at ConceptController.findAll (/usr/src/app/src/modules/concepts/concept.controller.ts:
33:22)"]}
To Reproduce
Steps to reproduce the behavior:
Create an entity with a constructor.
export interface StoreConceptPort {
storeId: string;
}
import {
IsNotEmpty,
} from 'class-validator';
import { UseCaseValidatableDto } from '@test/common/dto/use-case-validatable-dto';
import { Exclude, Expose, plainToClass } from 'class-transformer';
import { StoreConceptPort } from<EMAIL_ADDRESS>import { ServiceType } from<EMAIL_ADDRESS>
@Exclude()
export class StoreConceptDto
extends UseCaseValidatableDto
implements StoreConceptPort
{
@Expose()
@IsNotEmpty()
storeId: string;
public static async new(payload: {
storeId: string;
}): Promise<StoreConceptDto> {
const adapter: StoreConceptDto = plainToClass(StoreConceptDto, payload);
await adapter.validate();
return adapter;
}
}
import {
Embedded,
Entity,
EntityRepositoryType,
Index,
ManyToOne,
Property,
Unique,
} from '@mikro-orm/core';
import { ConceptRepository } from<EMAIL_ADDRESS>import { BaseEntity } from '@test/common/entities/base-entity';
import { StoreConceptPort } from<EMAIL_ADDRESS>
@Entity({ tableName: 'concepts' })
export class Concept extends BaseEntity {
[EntityRepositoryType]?: ConceptRepository;
@Index()
@Property({
name: 'store_id',
type: 'varchar',
})
storeId: string;
constructor(conceptDto: StoreConceptPort) {
super();
this.storeId = conceptDto.storeId;
}
}
Find Entity with repository
import { Concept } from './entities/concept.entity';
import { ServiceType } from<EMAIL_ADDRESS>import { EntityRepository, Repository } from '@mikro-orm/core';
@Repository(Concept)
export class ConceptRepository extends EntityRepository<Concept> {
public async findOneByStore(id, storeId): Promise<Concept> {
return await this.findOneOrFail({
storeId: { $eq: storeId },
id: { $eq: id },
});
}
}
And use it.
See error: "TypeError: Cannot read property 'storeId' of undefined..."
Expected behavior
The model is expected to hydrate well
Additional context
Versions
node: 15.2.0
typescript: 4.2.3
mikro-orm: 4.5.7
your-driver: ?
That stack trace is from a different call - you can see it starts with findAll(), not findOne(). The ORM will never use the entity constructor to create an instance, so unless you are using the forceEntityConstructor flag, this is not even possible to happen; the instance is created via Object.create() instead. And if you are using that flag, you need to ensure your entity ctors all have optional parameters, as they can never be filled in.
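To illustrate that last point, a minimal sketch (my own example, not from the issue) of an entity constructor that stays compatible with forceEntityConstructor because its parameter is optional:
@Entity({ tableName: 'concepts' })
export class Concept extends BaseEntity {
  @Property({ name: 'store_id', type: 'varchar' })
  storeId!: string;

  // The DTO is optional, so the ORM can instantiate the class without it.
  constructor(conceptDto?: StoreConceptPort) {
    super();
    if (conceptDto) {
      this.storeId = conceptDto.storeId;
    }
  }
}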
|
2025-04-01T06:39:37.189202
| 2020-03-19T19:15:42
|
584639572
|
{
"authors": [
"B4nan",
"osayfun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8565",
"repo": "mikro-orm/mikro-orm",
"url": "https://github.com/mikro-orm/mikro-orm/issues/415"
}
|
gharchive/issue
|
Question about Mongo Partial Index
How can this index be achieved?
db.collectionName.createIndex(
  { username: 1 },
  { unique: true, partialFilterExpression: { username: { $exists: true } } }
);
Currently not possible but should be easy to add.
|
2025-04-01T06:39:37.234227
| 2016-08-31T16:38:10
|
174321125
|
{
"authors": [
"cjpatoilo",
"jkinley-sds"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8566",
"repo": "milligram/milligram",
"url": "https://github.com/milligram/milligram/issues/116"
}
|
gharchive/issue
|
Flex Modifiers don't seem to be working
How do I use .row-wrap, .col-bottom, etc? I have tried numerous ways and they do not seem to work. Also, I don't see the breakpoints working either. I apologize if this is my ignorance, but I was unable to see these things identified in the docs. I hope this helps...
Version info
Milligram:
https://cdnjs.cloudflare.com/ajax/libs/milligram/1.1.0/milligram.css
Other (e.g. normalize.css, node.js, npm, bower, browser, operating system) (if applicable):
NA
Test case
http://codepen.io/jkinley/pen/PGYXvJ
Steps to reproduce
I have tried numerous ways to get modifiers like .row-wrap .col-top, etc to work and I am lost. I have looked at the Scss and tried several classes and they do not seem to work.
I have reviewed the docs and there are no examples of how to use these advanced modifiers.
Expected behavior
.row-wrap should invoke flexbox to wrap.
.col-bottom should make flex items align to the bottom.
Actual behavior
the classes don't have an effect.
Hi @jkinley-sds
Sorry if this has caused any problems; the next version of Milligram will be available with this issue fixed. #88
|
2025-04-01T06:39:37.679194
| 2022-03-06T10:36:42
|
1160575531
|
{
"authors": [
"andebor",
"gra-moore"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8567",
"repo": "mimiro-io/datahub-cli",
"url": "https://github.com/mimiro-io/datahub-cli/issues/87"
}
|
gharchive/issue
|
Add support for accessing security endpoints
The new datahub security endpoints allow the registration of clients, their public keys and also ACLs.
It should be possible to access these endpoints from mim
mim security client store
mim security client get
mim security acls store
mim security acls get
This is implemented
|
2025-04-01T06:39:37.707673
| 2024-06-24T11:00:16
|
2369931994
|
{
"authors": [
"Divi",
"sebastianMindee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8568",
"repo": "mindee/mindee-api-php",
"url": "https://github.com/mindee/mindee-api-php/issues/69"
}
|
gharchive/issue
|
Composer: missing binary file in "bin" directory
Prerequisites
Put an X between the brackets on this line if you have done all of the following:
[X] Reproduced the problem or exposed a new need
[X] Checked the GitHub existing issues
Description
When doing a composer install, the binary mindee is missing in the /bin directory.
There is a warning message after the composer install
Skipped installation of bin bin/mindee for package mindee/mindee: file not found in package
Steps to Reproduce
run a composer install
Expected behavior:
Creating a file mindee in bin/mindee.
Actual behavior:
Missing file, seems to be in the root directory.
Reproduces how often:
100%
Versions
mindee/mindee => 1.8.0
PHP 8.3.6
Well spotted! Will be fixed shortly.
|
2025-04-01T06:39:37.785761
| 2020-07-18T21:23:04
|
660389032
|
{
"authors": [
"mariux",
"soerenmartius"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8569",
"repo": "mineiros-io/pre-commit-hooks",
"url": "https://github.com/mineiros-io/pre-commit-hooks/pull/23"
}
|
gharchive/pull-request
|
Don't pass single filenames to golangci-lint so it doesn't run in parallel
By default, pre-commit passes a list of modified files to the called hook. This means that e.g. all modified *.go files are processed as a list and then piped one-by-one to the target pre-commit hook. golangci-lint comes with a lockfile by default and is not meant to run multiple instances in parallel. Instead of passing each modified file to golangci-lint, the recommended approach is to just run it without passing any explicit files. This won't affect local environments either since it comes with internal caching.
This PR will fix occurring issues in CI such as https://github.com/mineiros-io/terraform-aws-route53/pull/33.
Question: This will break partial commits, as files that are not committed would need to validate... doesn't it? A commit should only validate the actual committed changes, and it should validate only those, as the current workdir could validate fine but the commit could fail.
Just deep-dived and you are actually right. Setting pass_filenames to false will run golangci-lint run --fix which will basically run recursively over all directories. What I didn't understand yet is that setting pass_filenames to true (which is the default value) should just pass a list of files to the hook. The official golangci-lint documentation advises us to run the multilinter in a similar manner, e.g. golangci-lint run dir1 dir2/... dir3/file1.go. In https://github.com/mineiros-io/pre-commit-hooks/blob/master/pre_commit_hooks/go/golangci-lint.sh#L6 we run the provided command but still pre-commit seems to start multiple instances of golangci-lint.
how does pre-commit pass the files? so what is $@ ?
maybe https://pre-commit.com/#hooks-require_serial is what you need?
When running pre-commit run -a it seems that pre-commit invokes golangci-lint two times. $@ looks like this:
test/private_hosted_zone_test.go test/multiple_domains_same_records_test.go test/failover_routing_test.go test/multiple_domains_different_records_test.go
and
test/delegation_set_test.go test/basic_routing_test.go test/weighted_routing_test.go
See the screenshot:
In addition to that, when setting require_serial to true, it seems that golangci-lint is being invoked by pre-commit without passing an explicit list of files as shown in the screenshot.
run -a just checks all files... so yes, it is not passing a list of changed files, but this is expected then.
In a CI environment, we always check all files since we don't have any state between the CI runs. Locally though, it should only check the affected files if that makes sense to you? The question is more why setting require_serial: true doesn't invoke any files at all.
Updated the config, should work now.
In a CI environment, this will always run the linter on all files as it has been before. In a local environment, golangci-lint will create its own file and checksum tree and decide internally which files (Go packages) the linters should be applied to.
|
2025-04-01T06:39:37.788347
| 2022-08-01T15:57:19
|
1324635933
|
{
"authors": [
"GreatWyrm",
"mworzala"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8570",
"repo": "minestommmo/JointMMO",
"url": "https://github.com/minestommmo/JointMMO/pull/7"
}
|
gharchive/pull-request
|
Farming Interactions
This PR sets up several block interactions meant to mimic Vanilla behavior relating to crop and farmland behavior.
Closes #3.
One general comment: please try to use nullability annotations (NotNull, Nullable, UnknownNullability); it lets IntelliJ & NullAway give better hints.
|
2025-04-01T06:39:37.871422
| 2017-08-18T06:35:03
|
251152370
|
{
"authors": [
"cynici",
"mingchen"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8571",
"repo": "mingchen/django-cas-ng",
"url": "https://github.com/mingchen/django-cas-ng/pull/134"
}
|
gharchive/pull-request
|
Allow relative CAS_SERVER_URL without protocol and hostname
I run multiple instances of the Django application and roll out updates rapidly by directing live traffic at my load-balancer. By allowing settings.CAS_SERVER_URL to be relative, the Django application can figure out the correct protocol (http/https) and hostname (e.g. test.example.com, live.example.com, etc.) on-the-fly based on HTTP request headers without the need for restarting the application. Furthermore, the same application can even serve multiple domain names.
Thank you for contribution!
|
2025-04-01T06:39:37.881641
| 2024-08-31T02:20:32
|
2498560291
|
{
"authors": [
"EanLee",
"RaZer0k"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8574",
"repo": "mini-software/MiniExcel",
"url": "https://github.com/mini-software/MiniExcel/issues/667"
}
|
gharchive/issue
|
write formula to Excel, formula treated as string
Excel Type
[x] XLSX
[ ] XLSM
[ ] CSV
[ ] OTHER
Upload Excel File
MiniExcel Version
1.34.1
Description
When you directly output formulas to an Excel file using MiniExcel, and then save it as a new file, the cells containing the formulas are treated as regular data, and the formulas themselves do not work. The formulas only become active after you manually enter the cells, edit the formulas, and press Enter.
Also, I can't find the relevant settings or instructions for this in the readme.
var data = new List<Dictionary<string, object>>
{
new Dictionary<string, object>
{
["Institution"] = "Institution",
["Created"] = "Created",
["Formula"] = "Formula",
},
new Dictionary<string, object>
{
["Institution"] = "BMC Inc.",
["Created"] = "2021-01-01",
["Formula"] = $"=SUM(A2:B2)"
},
};
MiniExcel.SaveAs("test.xlsx", data, printHeader: false, overwriteFile: true, excelType: ExcelType.XLSX);
Functionality waiting for Pull Request.
PR 679: Formula Support
|
2025-04-01T06:39:37.883774
| 2019-04-24T19:33:24
|
436869940
|
{
"authors": [
"elotroalex"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8575",
"repo": "minicomp/wax",
"url": "https://github.com/minicomp/wax/issues/39"
}
|
gharchive/issue
|
create different classes for animated and static galleries
to solve the problem with vertical spacing
create only function specific class for this, ex. min-height, no-min-height
|
2025-04-01T06:39:37.885312
| 2019-09-29T17:31:53
|
499945966
|
{
"authors": [
"RileyChing",
"regstuff"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8576",
"repo": "minimaxir/gpt-2-keyword-generation",
"url": "https://github.com/minimaxir/gpt-2-keyword-generation/issues/7"
}
|
gharchive/issue
|
Choose from a list of keywords
Hello,
Thanks for this. It's working quite well, but is there a way to get it to choose only from a pre-defined list of keywords?
Hello, I would like to ask whether you have made any progress on this idea. I am also very interested and would like to ask you for advice.
Looking forward to your reply, thanks a lot!!!
|
2025-04-01T06:39:37.889064
| 2023-06-29T19:38:40
|
1781382776
|
{
"authors": [
"keyboardAnt",
"minimaxir"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8577",
"repo": "minimaxir/simpleaichat",
"url": "https://github.com/minimaxir/simpleaichat/issues/38"
}
|
gharchive/issue
|
Support unsafe function schemas
LLMs for code are capable of reasoning beyond just what is merely executable [^1][^2]. Therefore, I suggest allowing users to provide free-form function schemas that aren't necessarily strictly following the JSON Schema format.
[^1]: Souza, Beatriz, and Michael Pradel. "LExecutor: Learning-Guided Execution." arXiv preprint arXiv:2302.02343 (2023).
[^2]: https://www.youtube.com/watch?v=YIYlkCbIxqc&t=2664s
Function input implementation is handled on a model-by-model basis, which is why the current schema code is in chatgpt.py.
@minimaxir, what do you mean by "model-by-model basis"?
This implementation of structured I/O to ChatGPT is specific to ChatGPT.
Okay, but why not allow users to provide schemas that violate the JSON Schema format?
No point, unless there is evidence that it actually improves generation quality.
To keep the library simple, I am not adding things for the sake of adding things.
|
2025-04-01T06:39:37.891498
| 2018-11-19T13:23:37
|
382212821
|
{
"authors": [
"Pryanga306",
"cheriimoya"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8578",
"repo": "mininet/mininet",
"url": "https://github.com/mininet/mininet/issues/843"
}
|
gharchive/issue
|
Missing file irange in mininet.util
from mininet.topo import Topo prompts an error
This line raises an error when run, and it was found that irange, which is often called by Topo, is missing from mininet.util. How can this be resolved so that Topo can be used to create a topology?
this can be closed i think
|
2025-04-01T06:39:37.896993
| 2024-02-12T20:43:23
|
2130951482
|
{
"authors": [
"haorenfsa",
"harshavardhana",
"mahbubzulkarnain",
"simondeziel"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8579",
"repo": "minio/minio-go",
"url": "https://github.com/minio/minio-go/issues/1931"
}
|
gharchive/issue
|
minio/minio-go/v7@v7.0.67/utils.go:627:67: undefined: tls.CertificateVerificationError with Go 1.19.
https://github.com/minio/minio-go/pull/1921 and more specifically https://github.com/minio/minio-go/commit/76a41461fe5124fb9b646615c6abafcd1d41c7c2 caused minio-go to no longer build with Go 1.19.
https://endoflife.date/go go1.19 is EOLed
@harshavardhana I guess a mention on https://github.com/minio/minio-go/releases/tag/v7.0.67 would have avoided the surprise on our side. I'm fine with dropping support for EOL Go versions but was thrown off by go 1.17 in the go.mod.
go1.17 was kept for migration purposes; it was a mistake when using the new types.
Agree! We ran into this issue as well. If we're still going to use go1.17 in go.mod, then we'd better fix it; otherwise, better to update it to 1.20.
Upgrading the Go version to 1.20 solved it for me.
|
2025-04-01T06:39:38.052170
| 2024-09-25T15:46:37
|
2548351081
|
{
"authors": [
"joelstobart-moj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8580",
"repo": "ministryofjustice/calculate-release-dates-api",
"url": "https://github.com/ministryofjustice/calculate-release-dates-api/pull/855"
}
|
gharchive/pull-request
|
Consolidate configurations
Change application files to be more meaningful
Remove excess ones
Update files to reflect the new default (SDS40 and what is live) rather than the historic SDS at 50
Update test cases where there are specific issues
Add documentation to explain how developers should use the application profiles going forward.
Make all operational commencement dates consistent in all files (some reflect speculative future dates, that have now passed).
NOTE: There is still an outstanding test case failure to be addressed. Pushed up for a partial review.
|
2025-04-01T06:39:38.069480
| 2022-10-13T10:05:53
|
1407518167
|
{
"authors": [
"SteveLinden",
"davidkelliott",
"ewastempel",
"julialawrence",
"seanprivett"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8581",
"repo": "ministryofjustice/modernisation-platform",
"url": "https://github.com/ministryofjustice/modernisation-platform/issues/2399"
}
|
gharchive/issue
|
OIDC - Complete current OIDC
Outstanding OIDC conversions:
Modernisation Platform repo:
terraform/
environments (Excluding the files in environments directory as those run off a different set of keys)
core-vpc
core-network-services
bootstrap
delegate-access
single-sign-on
secure-baseline
github
pagerduty
modernisation-platform-account
Modernisation Platform AMI Builds
modernisation-platform
teams
I think terratests in modules should continue using the testing-test ci credentials since adding an OIDC provider in the testing-test account would prevent OIDC module from being tested.
https://github.com/ministryofjustice/modernisation-platform/issues/2040#issuecomment-1275821857 <-- core-security-production
picking up the core-vpc account
OIDC refactor implementation for the core-vpc account: https://github.com/ministryofjustice/modernisation-platform/pull/2551
I'm looking at delegate access and the environments as they both run off privileged keys in the root account
`Modernisation Platform AMI Builds
[ ] modernisation-platform
[ ] teams`
Should be split off into a different story.
OIDC refactor implementation for the core-vpc-test-deployment (I have missed it in the previous PR)
and the core-network-services-deployment workflows: https://github.com/ministryofjustice/modernisation-platform/pull/2560
OIDC refactor implementation for the modernisation-platform-account: https://github.com/ministryofjustice/modernisation-platform/pull/2567
https://github.com/ministryofjustice/modernisation-platform/pull/2571 for single-sign-on and secure-baselines
Moved delegate-access and environments out to https://github.com/ministryofjustice/modernisation-platform/issues/2568
Moved the github one to #2570 (https://github.com/ministryofjustice/modernisation-platform/pull/2570)
|
2025-04-01T06:39:38.072507
| 2022-10-20T13:01:05
|
1416583254
|
{
"authors": [
"AntonyBishop"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8582",
"repo": "ministryofjustice/operations-engineering",
"url": "https://github.com/ministryofjustice/operations-engineering/issues/1670"
}
|
gharchive/issue
|
Slackbot to open frequently used sites ie GH users, GH team, pingdom users, from the ask channel
Background
We run lots of services. We often need to open multiple sites to do some tasks. It would be nice to be able to open a link via the #ask channel to reduce hunting for various links/bookmarks. It would also reduce the number of clicks.
Approach
Investigate options that might be available.
Acceptance Criteria
[ ] Options for review by team
Reference
How to write good user stories
Looked at the Tefter app. It would do this; however, we won't proceed with that option as there is a cost.
Added links to a shortcut bar in the main slack channel. Revisit this need if we need something more sophisticated.
Done
|
2025-04-01T06:39:38.082155
| 2022-10-26T16:58:29
|
1424382912
|
{
"authors": [
"codecov-commenter",
"kate-49",
"mattmachell"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8583",
"repo": "ministryofjustice/opg-sirius-supervision-workflow",
"url": "https://github.com/ministryofjustice/opg-sirius-supervision-workflow/pull/242"
}
|
gharchive/pull-request
|
Sw 5910 pagination tests
Changed pagination element from Jen's design to follow GDS component standards
https://design-system.service.gov.uk/components/pagination/#:~:text=Components-,Pagination,-Help users navigate
Codecov Report
Base: 47.52% // Head: 47.73% // Increases project coverage by +0.20% :tada:
Coverage data is based on head (a0d97a6) compared to base (c035eb4).
Patch coverage: 100.00% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## main #242 +/- ##
==========================================
+ Coverage 47.52% 47.73% +0.20%
==========================================
Files 13 13
Lines 505 507 +2
==========================================
+ Hits 240 242 +2
Misses 240 240
Partials 25 25
Impacted file: internal/sirius/get_page_details.go, coverage Δ: 92.59% <100.00%> (+0.28%) :arrow_up:
Hi Kate, took a look at this today and fixed the CSS issue on the pagination buttons.
Turns out there was a class on the button that needed to be on the parent <li> element, govuk-pagination__item--current. Also changed the template logic to just not render the next/prev when it doesn't need to, rather than use visibility:hidden.
One more scenario to look at on Monday and make sure is tested.
Locally if I filter by 50 per page and go to page 2, I can't go back to page 1.
http://localhost:8888/supervision/workflow/?change-team=13&page=1&tasksPerPage=50&xsrfToken=&assignTeam=0&assignCM=&tasksPerPage=50
|
2025-04-01T06:39:38.085854
| 2020-02-18T15:15:52
|
566962638
|
{
"authors": [
"GemTay",
"coveralls"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8584",
"repo": "ministryofjustice/opg-use-an-lpa",
"url": "https://github.com/ministryofjustice/opg-use-an-lpa/pull/234"
}
|
gharchive/pull-request
|
removed govuk link on viewer home page
Purpose
To remove the link to the LPA service on the viewer home page as, from research, some users are unnecessarily following the link, which takes them away from the service and interrupts their journey.
Fixes UML-484
Approach
Removed the link so that 'lasting power of attorney' is just text
Checklist
[x] I have performed a self-review of my own code
[ ] I have updated documentation (Confluence/GitHub wiki/tech debt doc) where relevant
[ ] I have added tests to prove my work
[ ] The product team have tested these changes
Coverage remained the same at 78.523% when pulling 33116c8ebfd63c288c42b4a807d49f8d95437d86 on UML-484-Remove-lpa-link-on-viewer-start-page into 38f35dfce6471e146257ed9dba3b341387c567b1 on master.
|
2025-04-01T06:39:38.154199
| 2023-07-22T02:20:33
|
1816544264
|
{
"authors": [
"smjmoj",
"staff-infrastructure-moj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8585",
"repo": "ministryofjustice/staff-device-shared-services-infrastructure",
"url": "https://github.com/ministryofjustice/staff-device-shared-services-infrastructure/pull/77"
}
|
gharchive/pull-request
|
Update Terraform terraform-aws-modules/vpc/aws to v5
This PR contains the following updates:
Package: terraform-aws-modules/vpc/aws (source)
Type: module
Update: major
Change: 2.78.0 -> 5.1.0
Release Notes
terraform-aws-modules/terraform-aws-vpc (terraform-aws-modules/vpc/aws)
v5.1.0
Compare Source
Features
Add support for creating a security group for VPC endpoint(s) (#962) (802d5f1)
v5.0.0
Compare Source
⚠ BREAKING CHANGES
Bump Terraform AWS Provider version to 5.0 (#941)
Features
Bump Terraform AWS Provider version to 5.0 (#941) (2517eb9)
4.0.2 (2023-05-15)
Bug Fixes
Add dns64 routes (#924) (743798d)
4.0.1 (2023-04-07)
Bug Fixes
Add missing private subnets to max subnet length local (#920) (6f51f34)
v4.0.2
Compare Source
v4.0.1
Compare Source
v4.0.0
Compare Source
⚠ BREAKING CHANGES
Support enabling NAU metrics in "aws_vpc" resource (#838)
Features
Support enabling NAU metrics in "aws_vpc" resource (#838) (44e6eaa)
v3.19.0
Compare Source
Features
Add public and private tags per az (#860) (a82c9d3)
Bug Fixes
Use a version for to avoid GitHub API rate limiting on CI workflows (#876) (2a0319e)
3.18.1 (2022-10-27)
Bug Fixes
Update CI configuration files to use latest version (#850) (b94561d)
v3.18.1
Compare Source
v3.18.0
Compare Source
Features
Added ability to specify CloudWatch Log group name for VPC Flow logs (#847) (80d6318)
v3.17.0
Compare Source
Features
Add custom subnet names (#816) (4416e37)
3.16.1 (2022-10-14)
Bug Fixes
Prevent an error when VPC Flow log log_group and role is not created (#844) (b0c81ad)
v3.16.1
Compare Source
v3.16.0
Compare Source
Features
Add IPAM IPv6 support (#718) (4fe7745)
v3.15.0
Compare Source
Features
Add IPAM IPv4 support (#716) (6eddcad)
3.14.4 (2022-09-05)
Bug Fixes
Remove EC2-classic deprecation warnings by hardcoding classiclink values to null (#826) (736931b)
3.14.3 (2022-09-02)
Bug Fixes
Allow security_group_ids to take null values (#825) (67ef09a)
3.14.2 (2022-06-20)
Bug Fixes
Compact CIDR block outputs to avoid empty diffs (#802) (c3fd156)
3.14.1 (2022-06-16)
Bug Fixes
Declare data resource only for requested VPC endpoints (#800) (024fbc0)
v3.14.4
Compare Source
v3.14.3
Compare Source
v3.14.2
Compare Source
v3.14.1
Compare Source
v3.14.0
Compare Source
Features
Change to allow create variable within specific vpc objects (#773) (5913d7e)
v3.13.0
Compare Source
Features
Made it clear that we stand with Ukraine (acb0ae5)
v3.12.0
Compare Source
Features
Added custom route for NAT gateway (#748) (728a4d1)
3.11.5 (2022-01-28)
Bug Fixes
Addresses persistent diff with manage_default_network_acl (#737) (d247d8e)
3.11.4 (2022-01-26)
Bug Fixes
Fixed redshift_route_table_ids outputs (#739) (7c8df92)
3.11.3 (2022-01-13)
Bug Fixes
Update tags for default resources to correct spurious plan diffs (#730) (d1adf74)
3.11.2 (2022-01-11)
Bug Fixes
Correct for_each map on VPC endpoints to propagate endpoint maps correctly (#729) (19fcf0d)
3.11.1 (2022-01-10)
Bug Fixes
update CI/CD process to enable auto-release workflow (#711) (57ba0ef)
v3.11.5
Compare Source
v3.11.4
Compare Source
v3.11.3
Compare Source
v3.11.2
Compare Source
v3.11.1
Compare Source
v3.11.0
Compare Source
feat: Add tags to VPC flow logs IAM policy (#706)
v3.10.0
Compare Source
fix: Enabled destination_options only for VPC Flow Logs on S3 (#703)
v3.9.0
Compare Source
feat: Added timeout block to aws_default_route_table resource (#701)
v3.8.0
Compare Source
feat: Added support for VPC Flow Logs in Parquet format (#700)
docs: Fixed docs in simple-vpc
chore: Updated outputs in example (#690)
Updated pre-commit
v3.7.0
Compare Source
feat: Add support for naming and tagging subnet groups (#688)
v3.6.0
Compare Source
feat: Added device_name to customer gateway object. (#681)
v3.5.0
Compare Source
fix: Return correct route table when enable_public_redshift is set (#337)
v3.4.0
Compare Source
fix: Update the terraform to support new provider signatures (#678)
v3.3.0
Compare Source
docs: Added ID of aws_vpc_dhcp_options to outputs (#669)
fix: Fixed mistake in separate private route tables example (#664)
fix: Fixed SID for assume role policy for flow logs (#670)
v3.2.0
Compare Source
feat: Added database_subnet_group_name variable (#656)
v3.1.0
Compare Source
chore: Removed link to cloudcraft
chore: Private DNS cannot be used with S3 endpoint (#651)
chore: update CI/CD to use stable terraform-docs release artifact and discoverable Apache2.0 license (#643)
v3.0.0
Compare Source
refactor: remove existing vpc endpoint configurations from base module and move into sub-module (#635)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
VPC community module requires code to be refactored.
Suggest pinning module locally to current version.
2.78.0 -> 5.1.0
|
2025-04-01T06:39:38.162051
| 2023-12-11T20:45:25
|
2036496668
|
{
"authors": [
"bearthatcares"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8586",
"repo": "minmatarfleet/minmatar.org",
"url": "https://github.com/minmatarfleet/minmatar.org/issues/6"
}
|
gharchive/issue
|
When logging in with Discord you should be redirected back to where you started
Click into corporations and try and apply to a corporation, you'll be forced to log in. This passes a next parameter, but we lose it with all the redirects. We should push this into the django session (or preserve it somehow) so that we can redirect back to the original page after logging in.
Completed
|
2025-04-01T06:39:38.167092
| 2020-02-23T20:59:48
|
569557494
|
{
"authors": [
"eryno",
"tonyc",
"unsay"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8587",
"repo": "minnestar/sessionizer",
"url": "https://github.com/minnestar/sessionizer/issues/245"
}
|
gharchive/issue
|
Ansible 2.2+ cannot run playbook
Steps to Reproduce
Install the latest version of Ansible (2.9)
Force vagrant to reprovision by running vagrant provision
Expected Behavior
Ansible will successfully run through the playbook
Actual Behavior
Ansible fails with the message ERROR! 'sudo' is not a valid attribute for a Play.
Details
sudo has been deprecated since Ansible 2.0 and was removed in 2.2. It was replaced with become. (See this Stack Overflow post for more details.)
Output
Vagrant/Ansible error:
eryn@eryn-XPS-13-9370:~/Code/sessionizer/vagrant$ vagrant provision
==> default: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.9.4).
Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode
default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/home/eryn/Code/sessionizer/vagrant/.vagrant/provisioners/ansible/inventory -v ansible/development.yml
Using /etc/ansible/ansible.cfg as config file
ERROR! 'sudo' is not a valid attribute for a Play
The error appears to be in '/home/eryn/Code/sessionizer/vagrant/ansible/development.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Set up base development environment
^ here
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Version data:
eryn@eryn-XPS-13-9370:~/Code/sessionizer/vagrant$ ansible --version
ansible 2.9.4
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/eryn/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Oct 8 2019, 14:14:10) [GCC 5.4.0 20160609]
The solution to this is to use become: instead of sudo:. I can submit a PR if there's interest in updating the project to work with newer versions of Ansible.
It sounds like the general feeling on Slack is to move towards Docker over Vagrant, but if you've got a quick PR handy that allows the VM to start up properly, I don't see a reason not to do it!
We've done away with Vagrant/Docker and went back to the basics, a plain-ol' RoR app w/PostgreSQL dependency.
refs #316
|
2025-04-01T06:39:38.248385
| 2022-12-07T19:35:18
|
1482689473
|
{
"authors": [
"TheQuantumPhysicist",
"azarovh",
"iljakuklic"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8591",
"repo": "mintlayer/mintlayer-core",
"url": "https://github.com/mintlayer/mintlayer-core/pull/586"
}
|
gharchive/pull-request
|
Rework undo for DeltaDataCollection v2
This is a continuation of work started in #577, but as a separate PR to avoid painful rebase. This time based on master.
I'll try to formulate the problem once more. I need deltas and undos to be associative, meaning a sequence like (DB <- Delta) <- Undo should be equal to DB <- (Delta <- Undo), where <- is the merge operation. The previous approach, where undo didn't contain information and only had an Erase operation, didn't satisfy this property.
This PR implements the undo operation through a separate delta type DataDeltaUndo, which can be applied to a delta from another collection or to a DB. Thus the information is not lost for undo, allowing us to flush a delta to the DB at any moment and still be able to undo it later.
Moreover, in an arbitrary chain of deltas and undos they can be merged in different orders and still produce the same result, e.g.:
Delta + Delta + Undo + Undo
(Delta + Delta) + (Undo + Undo)
Delta + ((Delta + Undo) + Undo)
Delta + (Delta + (Undo + Undo))
What was the original motivation for the separation into three types? I may be missing something.
The idea to distinguish DeltaDataUndo from DeltaData was introduced to make the code more explicit. Indeed undo can be represented by a simple delta, but that would make user intention unclear. For example, we want to be able to apply additional rules for undo (right now applying delta over undo is forbidden for simplicity).
I agree that it looks duplicated, maybe an undo wrapper over DeltaData is better. Will play with it.
Regarding DeltaDataCollection. It's just a container for deltas. A map of arbitrary keys to either DeltaData or DeltaDataUndo. It is also useful because it encapsulates the rules of undo creation: undo is only created on delta+delta merge (not undo merge or delta+undo).
Wondering if we can get away with something a bit simpler.
I believe that your main concern is the reimplementation of operations for undo. Otherwise, the current approach is more restrictive in terms of how undo can be created and used.
The idea to distinguish DeltaDataUndo from DeltaData was introduced to make the code more explicit. Indeed undo can be represented by a simple delta, but that would make user intention unclear. For example, we want to be able to apply additional rules for undo (right now applying delta over undo is forbidden for simplicity). I agree that it looks duplicated, maybe an undo wrapper over DeltaData is better. Will play with it.
Regarding DeltaDataCollection. It's just a container for deltas. A map of arbitrary keys to either DeltaData or DeltaDataUndo. It is also useful because it encapsulates the rules of undo creation: undo is only created on delta+delta merge (not undo merge or delta+undo).
Sorry I meant whether DeltaData, DeltaDataUndo and DeltaMapElement could be somehow streamlined. I don't have a problem with DeltaDataCollection, that one just slipped in by accident.
I gave the issue of composition in presence of errors some thought and this is what I ended up with. Please let me know your opinion.
Specification
Before diving into the implementation, let me first lay down what we want to achieve accompanied with some high-level observations. Hopefully, we will be able to derive the correct implementation from that later on. Let us ignore for the moment the issue of representing missing values. For now, assume we are working with a value of type T that is always present and we only have the Modify operation, no Create or Delete. This apparent limitation will be reviewed later.
Fundamentally, a delta Delta<T> represents a function of type Fn(T) -> Result<T>. That is, a function that takes some data point of type T and either returns a new data point of type T or fails. We can find a concrete data type to use to represent deltas efficiently later. For now, to lay out semantics of deltas, how should deltas behave when applied to a value and how should they compose together, it is instructive to think of them as if they were implemented as the function type above.
The Delta type has a method Delta::apply taking Self and a T and producing a Result<T> which actually turns the delta into its function representation |x| my_delta.apply(x) : impl Fn(T) -> Result<T>. This is basically your combine_delta_with_data. The apply function for the "primitive" create/modify/delete operations is obvious enough, it just performs given operation.
Let us now look into how should deltas combine. Since deltas represent functions, combining deltas should behave like function composition (i.e. applying one function after another). However, the usual notion of function composition compose(f, g) { |x| f(g(x)) } does not quite cut it, because the result of f is a Result<T> and g expects a T as its argument. We need a modified version of function composition for fallible functions which looks something like compose_fallible(f, g) { |x| Ok(f(g(x)?)?) }. Here we take f and g satisfying Fn(T) -> Result<T> and we get one function that also satisfies Fn(T) -> Result<T>.
So for delta composition to be well behaved, the following three should all be equivalent functions:
|x| Ok(d1.apply(d0.apply(x)?)?)
|x| combine(d0, d1).apply(x)
compose_fallible(|x1| d1.apply(x1), |x0| d0.apply(x0))
In the above, the convention that d0, d1, etc are deltas Delta<T> whereas x, x0, x1, etc are values of type T.
Since function composition is associative (including the fallible version, assuming no side effects), making delta composition in effect implement function composition will make delta composition associative too once it is applied to data.
Implementation
Here is one representation of the Delta type that seems to satisfy the above requirements:
enum Delta<T> {
    Modify(T, T), // Require the old value to be equal to the first component, update it to the second
    Mismatch,     // There was an old/new mismatch somewhere along the way
}
The semantics are defined in terms of apply as follows:
fn apply(self, x: T) -> Option<T> {
    match self {
        Delta::Modify(old, new) if x == old => Some(new),
        _ => None, // we have Delta::Mismatch or the old value does not match
    }
}
Using Option in place of Result since there is only one error state. Supporting multiple error types is a bit tricky as discussed below.
The delta composition goes like this:
fn combine<T>(d0: Delta<T>, d1: Delta<T>) -> Delta<T> {
    match (d0, d1) {
        (Delta::Modify(x0, x1), Delta::Modify(x2, x3)) if x1 == x2 => Delta::Modify(x0, x3),
        _ => Delta::Mismatch, // either one of deltas is a mismatch or the equality check above fails
    }
}
I think this satisfies the specification outlined in the previous section. Proof left as an exercise for the reader :laughing:
Handling missing values
The above does not handle creating and deleting values. That functionality is easily recovered by using Delta<Option<T>>.
This is different from a no-op delta (which is not representable with the current iteration of the Delta type, see below)
Note that since Delta<U> has various desirable properties (associativity of the combine operation) for all U, Delta<Option<T>> has them too by setting U = Option<T>.
Undo
The notion of inverting something is captured by the mathematical structure group. To make Delta into a group, we need two extra ingredients: the identity element and the inverse operation.
Identity
We add the Noop arm as the identity element:
enum Delta<T> {
Noop,
Modify(T, T),
Mismatch,
}
The Delta::Noop behaves like the identity function when applied to data element x. Composing Noop with any other delta d gives d (no matter whether noop is lhs or rhs).
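A sketch of how combine might pick up the new arm (again just an illustration, assuming T: PartialEq):

fn combine<T: PartialEq>(d0: Delta<T>, d1: Delta<T>) -> Delta<T> {
    match (d0, d1) {
        // Noop is the identity element on either side.
        (Delta::Noop, d) | (d, Delta::Noop) => d,
        (Delta::Modify(x0, x1), Delta::Modify(x2, x3)) if x1 == x2 => Delta::Modify(x0, x3),
        _ => Delta::Mismatch,
    }
}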
It may be sensible to keep separate types for deltas with the noop arm and for deltas without it. When put in a map, the noop case can be represented by the key not being present. The new Delta is isomorphic to Option<OldDelta>.
Inverse
Inverse has to satisfy combine(d.inverse(), d) == combine(d, d.inverse()) == Delta::Noop. The following does that:
Noop => Noop,
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Distinguishing "forward" and "backward" deltas at type level
As was also touched upon in one of the other comments, it may be useful to distinguish at the type level whether a delta applies new changes or undoes previously applied changes.
My preferred solution would be to add a phantom type that captures whether the intention is for the delta to be applied forward or backward. Inverse turns the forward annotation to backward and vice versa. The combine operation requires the directions to be the same but does not care otherwise. It is not clear to me what the resulting annotation should be when mixing directions, that's why I suggest for combine to work only on deltas going in the same direction. This could be revisited later.
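A rough sketch of what that could look like; the marker names Forward/Backward and the Directed wrapper are assumptions for illustration, not anything already in the codebase:

use std::marker::PhantomData;

struct Forward;
struct Backward;

// A delta tagged with the direction it is meant to be applied in.
struct Directed<T, Dir> {
    delta: Delta<T>,
    _dir: PhantomData<Dir>,
}

// Combining is only offered for deltas tagged with the same direction.
fn combine_directed<T: PartialEq, Dir>(
    d0: Directed<T, Dir>,
    d1: Directed<T, Dir>,
) -> Directed<T, Dir> {
    Directed { delta: combine(d0.delta, d1.delta), _dir: PhantomData }
}

An inverse on this wrapper would then map Directed<T, Forward> to Directed<T, Backward> and vice versa.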
Tradeoffs
There are two things that I identified that we lose by employing this approach.
Less granular errors
Due to a more general representation, the various error cases (like double creation, modifying non-existing value, deleted value mismatch, etc) have been collapsed into one. Is this a serious issue?
It seems we could change Mismatch to Mismatch(expected_value, actual_value) for richer error reporting, and the original error cases (like deleting a non-existing value, etc.) could be extracted from that. However, it gets more complicated once inverses enter the scene. Since we want the first error encountered to be propagated, and inverting changes which error is encountered first, we would have to track two errors: one for the case when we apply the operations in the specified order and one for the backwards order. At least all errors in the middle can still be dropped. Not sure it's worth investing the effort and extra complexity into this.
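For what it's worth, the richer variant could look roughly like this (a sketch only; it records just the first mismatch seen in the forward direction and does not show the two-error bookkeeping for inverses):

enum Delta<T> {
    Modify(T, T),
    // What the delta expected to find vs. what was actually there.
    Mismatch { expected: T, actual: T },
}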
Early exit less convenient
Using Result or Option allows us to use the ? operator when something goes wrong to exit early. When working with deltas, we often want to exit early as soon as we encounter a Delta::Mismatch. A nice principled way to do it is to implement std::ops::Try for Delta but it is currently unstable. An ugly option is to use a Result<(T, T), MismatchError> to represent Delta or have Delta easily convertible to it, so ? can be used on it directly.
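A sketch of that conversion route, on the original two-arm Delta and with a hypothetical MismatchError type:

struct MismatchError;

impl<T> Delta<T> {
    // Turn the delta into something `?` understands.
    fn into_result(self) -> Result<(T, T), MismatchError> {
        match self {
            Delta::Modify(old, new) => Ok((old, new)),
            Delta::Mismatch => Err(MismatchError),
        }
    }
}

Inside a function returning Result<_, MismatchError> one could then write let (old, new) = d.into_result()?;.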
The combine operation requires both arguments to be evaluated no matter what, which may result in inefficiency. We can provide a version of combine that evaluates rhs lazily:
fn combine_lazy<T>(d0: Delta<T>, d1: impl FnOnce() -> Delta<T>) -> Delta<T> {
if matches!(d0, Delta::Mismatch) { return Delta::Mismatch }
combine(d0, d1())
}
// use combine_lazy(a, || expensive_operation(b, c)) instead of combine(a, expensive_operation(b, c))
I don't particularly like any of these options apart from the one that requires unstable features.
Conclusion
I hope I have not missed something obvious. This seems to work in my head :rofl:.
There are a number of advantages to the use of well-established mathematical structures. They are well researched, so for example any result that someone has proven about groups also applies to deltas if we make deltas into a group. For some reason I suspect the crypto people here are familiar with groups in particular. Also the well-established structures tend to be very well behaved, predictable and composable.
Inverse has to satisfy combine(d.inverse(), d) == combine(d, d.inverse()) == Delta::Noop. The following does that:
Noop => Noop,
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Turns out this is not correct. The Modify case does not satisfy the inverse law as it gives a Modify(old, old) instead of a Noop. The Modify(old, old) is correct according to the original spec where we require applying deltas one by one to be equivalent to combining deltas and applying the combined delta.
There is something called Inverse semigroup which seems to fit the use case here much better. It has a weaker notion of inverse where combine(combine(d, d.inverse()), d) == d. The following inverse operation, formulated on the original Delta without the extra identity element, satisfies that:
Modify(old, new) => Modify(new, old),
Mismatch => Mismatch,
Also we don't have to artificially introduce the identity element to make this formulation work, so the whole thing becomes somewhat simpler.
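A quick check of the weaker law on the Modify case, assuming #[derive(Clone, Debug, PartialEq)] on Delta and an inverse method implementing the arms above:

let d = Delta::Modify(1, 2);
let i = d.clone().inverse(); // Modify(2, 1)

// combine(combine(d, d.inverse()), d) == d
assert_eq!(combine(combine(d.clone(), i), d.clone()), d);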
I like the formulation and I think it's worth giving a try. I'd like to expand on this and say that the elements you mentioned and the semi-group seem to form a symmetric/hermitian transformation in Hilbert space. One can see that a data (even empty data) is a column vector (A, B) that represents the data of an account from state A to state B. The transformation function is a hermitian/symmetric matrix multiplication that moves the data among these states. If we represent the data in these states, they're guaranteed to be reversible. I think if we want to go nuts we can have a full mathematical formulation of this.
Added more tests and some fixes. New tests fail which shows that the current approach has problems.
Closing this PR in favor of #613. All comments are addressed there.
|
2025-04-01T06:39:38.255309
| 2023-09-18T15:29:28
|
1901200723
|
{
"authors": [
"HernandoR",
"mintyfrankie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8592",
"repo": "mintyfrankie/brilliant-CV",
"url": "https://github.com/mintyfrankie/brilliant-CV/issues/13"
}
|
gharchive/issue
|
live version Switching
Is your feature request related to a problem? Please describe.
Mostly, if I updated my CV, I would like to update all languages and compile them all.
Switch behavior by source editing seems unnatural
Describe the solution you'd like
Pass the desired languages by cmdline args, or change the varLanguage into a list
Describe alternatives you've considered
Furthermore, it feels more reasonable to nest the language entry in the file, i.e. like the metadata.typ.
I realize that it is a language limitation that each compile can only output one PDF, and I also wonder whether the cmdline can pass args to the template.
A very interesting proposal, I see your point here.
I will be happy to implement this if Typst expands this functionality: https://github.com/typst/typst/issues/295
In the meanwhile, you might want to write a simple bash script that changes the variable in the metadata.typ with commands like sed and compiles the file, such that you could execute the script once and get the PDF for all versions.
I made a minor change to support this feature. Please have a look at my repo for the use of multi-version support, and my repo of template for information.
Let me know if you'd like to pull my changes, I'll raise a PR
I took a look but I don't quite understand your repo -- so is it that you added a varVersion and that's all?
You can submit a PR if you want though. It would be clearer to review altogether.
Follow up discussion will be in https://github.com/mintyfrankie/brilliant-CV-Submodule/pull/9
|
2025-04-01T06:39:38.282471
| 2015-02-12T12:37:11
|
57454856
|
{
"authors": [
"lazytesting"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8593",
"repo": "mirabeau-nl/WbTstr.Net",
"url": "https://github.com/mirabeau-nl/WbTstr.Net/issues/7"
}
|
gharchive/issue
|
Kill chrome driver before running a test
Running a test with the local chrome driver will fail when the previous test is interrupted.
To make the test process more stable, first kill the chrome driver before running the test.
Solved in commit 0f13b38a360107459bd57d155634675e73a12528 in a slightly different way.
Each instance of the driver will get a unique process name; this way it's possible to start multiple instances of the Chrome Driver.
|
2025-04-01T06:39:38.284358
| 2020-08-05T08:13:11
|
673347174
|
{
"authors": [
"hiraku-wfs",
"mozoko"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8594",
"repo": "miraclelinux/meta-emlinux",
"url": "https://github.com/miraclelinux/meta-emlinux/pull/74"
}
|
gharchive/pull-request
|
libpcap: Add patch to fix build target dependency
because the debian patch replaced the 'libpcap.so' target with the '$(SHARDLIB)' target to set the soname to 0.8.
Here are details about libpcap soname:
https://people.debian.org/~rfrancoise/libpcap-faq.html
Thanks!
Could you propose this fix also to meta-debian?
|
2025-04-01T06:39:38.288662
| 2021-07-02T13:56:16
|
935794910
|
{
"authors": [
"chgl",
"uklft"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8595",
"repo": "miracum/ahd2fhir",
"url": "https://github.com/miracum/ahd2fhir/pull/11"
}
|
gharchive/pull-request
|
Compose Settings from multiple BaseSettings classes
to approach this issue: https://github.com/miracum/ahd2fhir/blob/master/ahd2fhir/kafka_setup.py#L48
Brilliant, thank you so much. Looks much cleaner! I guess technically it's a breaking change though, since the external configuration needs to be changed (including the readme.md). But personally, I am all for it.
Could you also update the readme.md's section on Kafka with the new settings names as part of this PR? https://github.com/miracum/ahd2fhir#kafka-settings
Other than that looks good to me.
Hey, glad that you like it! I'll update the README.
I noticed that pydantic's BaseSettings supports only one level of prefix nesting (see here).
A fix would be to set the full prefix on the second-level objects:
class KafkaProducerSettings(BaseSettings):
compression_type: str = "gzip"
class Config:
env_prefix = "kafka_producer_"
Seems reasonable to me!
|
2025-04-01T06:39:38.309295
| 2023-01-09T10:05:13
|
1525271998
|
{
"authors": [
"mirceanton"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8596",
"repo": "mirceanton/mirceanton.com",
"url": "https://github.com/mirceanton/mirceanton.com/issues/40"
}
|
gharchive/issue
|
Implement cloud functions for sending email
Currently, the Contact page uses netlify forms to send emails. A more vendor-neutral option would be preferable.
With the rise of cloud functions, I think a simple javascript function to send mail using nodemailer or something like that could be easily implemented. This could then be run by one of the many hosting providers, as most of them offer free functions (Netlify, Vercel, Supabase offers functions too)
The added value of having an edge function handling emails as opposed to netlify forms:
automated response back to sender
discord notification
custom email templates
Expected workflow:
user submits contact request
an email from <EMAIL_ADDRESS> is sent to the user to notify them that the request went through
an email is sent to <EMAIL_ADDRESS> with the details
(optional) a discord notification sent to me about it
user is redirected to some thank you page with a button to go back to the site
|
2025-04-01T06:39:38.319698
| 2020-06-09T21:01:35
|
635755550
|
{
"authors": [
"endrebak",
"golobor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8597",
"repo": "mirnylab/bioframe",
"url": "https://github.com/mirnylab/bioframe/pull/41"
}
|
gharchive/pull-request
|
Minor improvements in efficiency
To make the overlap faster in general, you should add a flag for whether you want the indexes lexsorted in the end. This will be the biggest speed win.
In general, my PR is not going to improve the efficiency much, but for the case where you overlap a few intervals against a giant reference, I think my implementation will be much faster. This is because you avoid concatenating and transposing all the self-overlaps. If you check the overlap of a few intervals against a full gencode GFF, 99.99% of the results will be self overlaps.
Another possible win is to exclude all the intervals in the subject that come before the first query start or after the last query end. In general, this is not going to improve the speed much (or at all) but for the cases where you overlap a few hundred intervals against a giant interval set, it might matter a lot. This I did not do, btw, but you'd just:
keep = (ends2 >= starts1.min()) & (starts2 <= ends1.max())
starts2, ends2 = starts2[keep], ends2[keep]
I don't care if you accept this PR or not, I was mostly playing around with the code to understand the algorithm better :) I like it a lot.
oh, just saw that. I'm going to look into it right now! (hopefully i didn't reinvent the wheel with my most recent commit)
Following your observation and advice, I introduced the flag to make sorting optional. Thank you!!
Re: self-overlaps, I had to work on that too, because a major slowdown that @mimakaev ran into (he reported it on slack). Please, take a look at the latest commits in the develop branch!
|
2025-04-01T06:39:38.336858
| 2020-08-14T11:35:22
|
679098244
|
{
"authors": [
"maarcingebala",
"neerajgupta2407"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8598",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/6001"
}
|
gharchive/issue
|
Insufficient Stock Error for Digital Product Type
What I'm trying to achieve
I want to check out a digital type of product irrespective of the country of the shipping address and without worrying about inventory.
Currently I want to offer an online E-Learning course which users can buy across the world.
I don't want to set up any warehouse or its inventory, as that doesn't make sense for a Digital Product Type.
Now when I am checking out on Storefront, it's throwing an error saying "Not enough Quantity" (the Saleor core checkout is failing due to an InsufficientStock exception in check_stock_quantity(variant, country, quantity)).
I have also verified these boolean fields: product_type.is_digital = True, product_type.is_shipping_required = False
Steps to reproduce the problem
Create Digital Product Type.
Create Product with Digital Product type and don't assign quantity on any warehouse.
Checkout the product on Storefront
What I expected to happen
I should be able to check out without any InsufficientStock exception for digital product types.
Screenshots
https://prnt.sc/tzj3vw
System information
Operating system:
Linux
ProductVariant model has a field track_inventory. Can you see what value is set for that field in your product variant? When it's set to false than the quantity checks should be bypassed in checkout and it would work as you expect. This value defaults to true at the model level, but maybe it should be changed in case of digital products.
Hi @maarcingebala, yes, track_inventory is set to false.
The problem I was able to figure out is that the InsufficientStock exception is thrown before the variant.track_inventory check.
Current code : function: saleor/warehouse/availability.py: check_stock_quantity
stocks = Stock.objects.get_variant_stocks_for_country(country_code, variant)
if not stocks:
raise InsufficientStock(variant). ### Exception is thrown here..
if variant.track_inventory and quantity > _get_available_quantity(stocks):
raise InsufficientStock(variant)
This should fix the problem:
if variant.track_inventory:
stocks = Stock.objects.get_variant_stocks_for_country(country_code, variant)
if not stocks:
raise InsufficientStock(variant)
if quantity > _get_available_quantity(stocks):
raise InsufficientStock(variant)
Kindly suggest whether this change should fix it or whether it requires more changes.
At a first glance, it looks like it should solve the issue but it would need to be tested. If we apply this change and creating an order works well for digital products, we would have to also test that order fulfillment works well and it is not trying to decrease stock for these products (it shouldn't but it will be good to test).
Can you provide a PR with this fix and maybe some test for that? If not I'll add it to our tasks list but cannot promise when it will be tackled.
@maarcingebala I will create the PR with some tests.
@maarcingebala Created PR: https://github.com/mirumee/saleor/pull/6018
For order fulfillment, the test case saleor/order/tests/test_fulfullments_actions.py::test_create_fulfillments_with_variant_without_inventory_tracking_and_without_stock is passing successfully, so I haven't created any new test case.
Great! We'll take a look at the changes and will get back if anything more is needed.
|
2025-04-01T06:39:38.340481
| 2016-11-25T11:13:51
|
191679184
|
{
"authors": [
"krzysztofwolski",
"mikeres0",
"patrys"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8599",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/633"
}
|
gharchive/issue
|
Consider requiring PostgreSQL
We don't want to rush it but the team seems to be convinced that requiring PostgreSQL may be a good idea. It would allow us to use hstore type columns instead of the current generic JSON field (which really is a text field and cannot be filtered or aggregated upon).
Compatibility: We require Django 1.8 and that's when most of django.contrib.postgres appeared. Not sure if there are good reasons to use Saleor with another database given that we perform most optimizations working with PostgreSQL. One downside is being unable to test locally with SQLite which is often expected. With good Docker support we should be able to make it a non-issue.
Requiring PostgreSQL for deployment would be great, but as for losing the ability to use SQLite in my development environment I'm not so keen.
Have you tried to use docker to get a working PostgreSQL instance for your project?
I haven't yet, I haven't needed to deploy anywhere.
I mean to try and use Docker for local development. docker-compose is a great tool that can get you started in no time.
@mikeres0 For dev I like to run postgres in docker. For example:
docker run --name saleor-postgres -p 6432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_USER=postgres -e POSTGRES_DB=saleor -d postgres
It creates working db on port 6432. If I mess it up or just want to have clean DB, I remove container and create new one.
@patrys @krzysztofwolski I'll give this a go guys, cheers
|
2025-04-01T06:39:38.350054
| 2017-11-16T08:03:56
|
274426044
|
{
"authors": [
"codecov-io",
"maarcingebala",
"patrys",
"zaro"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8600",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/pull/1318"
}
|
gharchive/pull-request
|
Fix very slow category index render with many discounts for #1314
This PR has some improvements for #1314
Codecov Report
Merging #1318 into master will increase coverage by 0.02%.
The diff coverage is 100%.
@@ Coverage Diff @@
## master #1318 +/- ##
=========================================
+ Coverage 70.37% 70.4% +0.02%
=========================================
Files 117 117
Lines 6158 6164 +6
Branches 786 788 +2
=========================================
+ Hits 4334 4340 +6
Misses 1661 1661
Partials 163 163
Impacted Files
Coverage Δ
saleor/discount/models.py
89.36% <100%> (+0.35%)
:arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 86d0f84...bb86cdc. Read the comment docs.
This looks like a candidate for prefetch_related instead of a secondary caching layer. With prefetch_related, calls to relation.all() become very cheap.
OK, I'll experiment and see what happens. The thing is the real issue is not the SQL queries, at least according to django debug toolbar. It shows SQL queries take ~200ms. All this time is spent creating python objects or something like that.
The first call to sale.categories.all() will make a second DB query to actually fetch the categories. Subsequent calls to sale.categories.all() will use this already fetched data, and there will be no queries to the database.
But, in this particular case when there are a lot of discounts, products and product variants even using this cached data to construct new QuerySet objects adds up. The algorithm calculating the discounts for products has exponential time which is the main reason why there is such a drastic slowdown with not so much data.
This is also confirmed by the fact that once I added caching to (products/categories).all() the next thing that popped up was the pgettext call translating the message in NotApplicable constructor:
raise NotApplicable(
    pgettext(
        'Voucher not applicable',
        'Discount not applicable for this product'))
I'm closing this PR for now, as we're rebuilding the storefront using GraphQL and React and we want to tackle optimization at the API level.
|
2025-04-01T06:39:38.354090
| 2020-07-08T16:55:40
|
653451849
|
{
"authors": [
"MarchingVoxels",
"misa-j"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8601",
"repo": "misa-j/social-network",
"url": "https://github.com/misa-j/social-network/issues/3"
}
|
gharchive/issue
|
Trouble setting up the project
Steps to reproduce on Ubuntu L18.04:
use git to clone the project locally
create variables.env in root directory, setting host to localhost:3000(since my nodejs starts at 3000)
change the window.location.hostname variable to localhost:3000 as well
change the proxy address to localhost:3000
This results in a bad request in Ubuntu with NodeJS v10 and a Request Header Too Long error in windows NodeJS v14
Am I doing something wrong?
The project is based on create-react-app and it's running on port 3000, so you can't have that and nodejs on the same port.
Also, HOST should be your local IP on port 5000 if you choose that port for nodejs (it's already running on port 5000)
HOST=http://<IP_ADDRESS>:5000
and put same thing for window.location.hostname
and you should run npm run dev in root directory
HI, I have solved the issue by correcting my MongoDB connection URL. Thank you for clarifying
|
2025-04-01T06:39:38.373859
| 2015-10-06T15:25:48
|
110035108
|
{
"authors": [
"driesdesmet",
"mishbahr"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8602",
"repo": "mishbahr/djangocms-forms",
"url": "https://github.com/mishbahr/djangocms-forms/issues/14"
}
|
gharchive/issue
|
Can't use filefield with S3 storage backend
Using this setting:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
I'm getting
NotImplementedError at /forms/submit/
This backend doesn't support absolute paths.
I wonder if it's really necessary to return the filepath in the handle_uploaded_files() function, since a StoredUploadedFile field is saved in form.cleaned_data for later sending and saving.
From what I can remember, I needed the files later to be used as an attachment for sending email submissions via email. See: https://github.com/mishbahr/djangocms-forms/blob/master/djangocms_forms/forms.py#L357
You seem to approach the uploaded files from the fields, while when using a FormView as you do, the files are available in form.files already. Any reason why you took this approach?
I'm more than happy for you to refactor bits.
Just looking through the code.. can't really remember why I did it this way.
I slightly refactored the email attachment logic last night. Now we get the files form request.FILES . Please try this, and let me know.
— Mishbah
Hi mishbahr. Thank you for your work. I was doing the same this morning, and since I'm trying to improve on my programming skill, I would appreciate it if you have a look at it. I've made a new pull request, PR #15, but it doesn't automerge with the changes you've done.
I think my patch particularly solves my problem I had with an S3 storage class which doesn't provide a .path method. So in my case, your solution wouldn't work. I'm going to comment on your code to open up discussion. Again, I'm not an experienced contributor, but I'd like to very much improve. Let me know what you think.
Hmm. looks like we both did very similar thing. I removed the bit where I was trying to access the file path.. and as for attachments.. I'm looping through request.FILES instead of form.files as per doc here: https://docs.djangoproject.com/en/1.8/topics/http/file-uploads/#basic-file-uploads
Please try using my latest commit and see if it works with S3 Storage. If not .. I'll try to resolve the merge conflict. Thanks
UPDATE: I have just tried my code using django-storages-redux==1.3 with:
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
It uploads to the S3 bucket just fine. However, the email attachment seems to be 0 bytes :-(
Yes. I fixed this. One moment.
It's because Django's file object returns an empty bytestring when the file object is not open yet. I'm not sure if this is expected behaviour. It should probably raise an exception rather than just return a zero-length bytestring. The #16 PR runs for me now.
btw, I wondered why you create a hash for file names at all. Is that so that they don't get overwritten? Doesn't Django create a new filename anyway if it already exists? I also noticed that you generate a hash only once for every file field, which means filenames uploaded within the same form would have the same hash. Intended?
Just committed fully working code :-)
As far as using a hash in the filename goes, it's not based on the filename but rather just a UUID appended to the filename. So I'm hoping it will be unique per file.
If you'd like it to be unique per file, you should probably put it within the for loop for every file field in handle_uploaded_files. No?
btw, I've sent you a google chat invitation. Thought that would make talking easier. Couldn't see your name in the django-cms chatroom either. My nickname is TrioTorus.
You are correct!
Fixed in 88dda1fea8f61c61686e40058f58307ab58d0d31
Thanks
Happy to help. I'm delighted you made this plugin, and now it works with S3!
I'll push it to PyPi :-)
|
2025-04-01T06:39:38.421682
| 2023-09-23T01:25:24
|
1909660245
|
{
"authors": [
"PatP15",
"makaimann"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8603",
"repo": "mit-ll-trusted-autonomy/pyquaticus",
"url": "https://github.com/mit-ll-trusted-autonomy/pyquaticus/pull/17"
}
|
gharchive/pull-request
|
Pass obs normalizer to defend policy
Update the train_against_easy.py script to pass the observation normalizer to the DefendGen function correctly.
Haha I literally just made a branch to address this 5 minutes ago!
|
2025-04-01T06:39:38.428276
| 2023-09-18T18:57:26
|
1901560474
|
{
"authors": [
"julius-heitkoetter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8604",
"repo": "mit-submit/A2rchi",
"url": "https://github.com/mit-submit/A2rchi/pull/45"
}
|
gharchive/pull-request
|
changing environment variable to correct name
The huggingface token should be under HUGGING_FACE_HUB_TOKEN, not HF_TOKEN for huggingface authentication. Changes are made to reflect this.
This PR is not complete. I missed a couple changes. Will tear this down and add a new one.
|
2025-04-01T06:39:38.449347
| 2024-03-11T22:29:23
|
2180372073
|
{
"authors": [
"bilalag"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8605",
"repo": "mitchellkrogza/Phishing.Database",
"url": "https://github.com/mitchellkrogza/Phishing.Database/issues/841"
}
|
gharchive/issue
|
[FALSE-POSITIVE] saudi-k.com
Domains or links
Please list any domains and links listed here which you believe are a false positive.
saudi-k.com
More Information
How did you discover your web site or domain was listed here?
Website was hacked
Have you requested removal from other sources?
Please include all relevant links to your existing removals / whitelistings.
sophos
avira
webroot
eset
Netcraft
:exclamation:
We understand being listed on a Phishing Database like this can be frustrating and embarrassing for many web site owners. The first step is to remain calm. The second step is to rest assured one of our maintainers will address your issue as soon as possible. Please make sure you have provided as much information as possible to help speed up the process.
Send a Pull Request for faster removal
Users who understand github and creating Pull Requests can assist us with faster removals by sending a PR to mitchellkrogza/phishing repository, on the falsepositive.list file
https://github.com/mitchellkrogza/phishing/blob/main/falsepositive.list
Please include the same above information to help speed up the whitelisting process.
Dear Team,
Please update on this request.
Thank you.
@funilrys @mitchellkrogza
|
2025-04-01T06:39:38.451339
| 2018-04-22T11:50:55
|
316572651
|
{
"authors": [
"funilrys",
"xxcriticxx"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8606",
"repo": "mitchellkrogza/Ultimate.Hosts.Blacklist",
"url": "https://github.com/mitchellkrogza/Ultimate.Hosts.Blacklist/issues/301"
}
|
gharchive/issue
|
menshealth.com
Match found in list.34.hosts.ubuntu101.co.za.domains:
menshealth.com
www.menshealth.com
As this domain is also blocked by a famous list, I choose not to whitelist this.
|
2025-04-01T06:39:38.457301
| 2017-07-04T14:25:56
|
240436156
|
{
"authors": [
"justinanderson",
"mpvartak",
"soellingeraj"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8607",
"repo": "mitdbg/modeldb",
"url": "https://github.com/mitdbg/modeldb/issues/258"
}
|
gharchive/issue
|
host=localhost not possible with docker mac
Not sure if anyone else has experienced this same issue.
I am using the docker-compose modeldb installation on a mac, and unable to connect the docker-machine to localhost. Here's another forum discussing the issue: https://forums.docker.com/t/using-localhost-for-to-access-running-container/3148
As a workaround, I used this script in the [path to modeldb directory] that finds and replaces all the instances of localhost in the routing files.
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference.conf
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference-docker.conf
sed -i -e 's/localhost/<docker-machine address>/g' server/src/main/resources/reference-test.conf
sed -i -e 's/localhost/<docker-machine address>/g' client/syncer.json
sed -i -e 's/localhost/<docker-machine address>/g' frontend/util/check_thrift.js
sed -i -e 's/localhost/<docker-machine address>/g' frontend/util/thrift.js
sed -i -e 's/localhost/<docker-machine address>/g' client/python/modeldb/basic/Structs.py
For <docker-machine address>, do: $ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Running tcp://<IP_ADDRESS>:2376 v17.05.0-ce
After that I was able to add new projects to the docker/modeldb.
I can branch and submit a pull request. Or, is there a better way? Please advise.
@justinanderson
Hi @soellingeraj, it sounds like you're using the older Docker Toolbox instead of Docker for Mac. If your system supports it, I recommend switching to Docker for Mac. It doesn't use virtualbox anymore and so removes many of the headaches like port forwarding involved with Docker development on a Mac.
If you use Docker for Mac, it will forward all exposed container ports to localhost automatically. docker-compose up should leave you with a web server reachable at http://localhost:3000/ and a docker-machine ls that looks like this:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
If you need to run Docker Toolbox, I suggest setting up port forwarding for the virtualbox VM used by docker-machine, as detailed in this Stack Overflow answer.
thank you. that worked.
you can close this one.
|
2025-04-01T06:39:38.466144
| 2016-07-27T15:37:32
|
167885633
|
{
"authors": [
"cle1000",
"mhils"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8608",
"repo": "mitmproxy/mitmproxy",
"url": "https://github.com/mitmproxy/mitmproxy/pull/1441"
}
|
gharchive/pull-request
|
Integrate mitmproxy contentviews
first PR for this issue, can you tell me more about the content generator?
Content views:
https://github.com/mitmproxy/mitmproxy/blob/d97fe767dc7b8ea47f0e170c6f002c506f606d57/mitmproxy/contentviews.py#L621-L632
https://github.com/mitmproxy/mitmproxy/blob/d97fe767dc7b8ea47f0e170c6f002c506f606d57/mitmproxy/contentviews.py#L117-L135
The content generator yields lists of (style, text) tuples, where each list represents a single line.
In a nutshell, a generator is a sequence you can iterate over. The main difference compared to a list is that generators are computed lazily. See: https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/
>>> import pprint
>>> from mitmproxy import contentviews
>>> view = contentviews.ViewURLEncoded()
>>> description, generator = view(b"foo=42&bar=43&baz=44")
>>> description
'URLEncoded form'
>>> generator
<generator object format_dict at 0x041DE1E0>
>>> lines = list(generator)
>>> pprint.pprint(lines)
[[('header', 'foo: '), ('text', '42')],
[('header', 'bar: '), ('text', '43')],
[('header', 'baz: '), ('text', '44')]]
The style is normally just "text", but it can also be "highlight", "offset" or "header". This is probably best solved by giving the span a matching className.
I am struggling with the sticky contentview options; currently it's not working, but the rest should be fine
Can you resolve the merge conflicts on this please?
This looks good. Let's get this in a mergable state ASAP so that we can move that into master and do smaller iterations there! :smiley:
|
2025-04-01T06:39:38.486301
| 2021-09-21T09:57:48
|
1002143718
|
{
"authors": [
"arslanashraf7",
"asadiqbal08"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8609",
"repo": "mitodl/mitxonline-theme",
"url": "https://github.com/mitodl/mitxonline-theme/pull/24"
}
|
gharchive/pull-request
|
asadiqbal08/Update xPro references
fixes: #10
Updated .html and .scss from xPro references.
@arslanashraf7 I guess there is another story for that https://github.com/mitodl/mitxonline-theme/issues/12
@arslanashraf7 I guess for point#1, there is another story for that #12
@asadiqbal08 , I didn't notice that ticket exists. But point#2,3 I think still belongs to this PR.
|
2025-04-01T06:39:38.488944
| 2022-02-25T19:24:53
|
1150775071
|
{
"authors": [
"pdpinch",
"rhysyngsun"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8610",
"repo": "mitodl/mitxonline",
"url": "https://github.com/mitodl/mitxonline/issues/443"
}
|
gharchive/issue
|
Epic: Support Program Requirements
We already have some initial support for programs in the form of a Program and ProgramEnrollment and want to expand on that to support ecommerce specifically for MicroMasters programs more thoroughly.
[x] A Program should be able to specify required Courses
[x] A Program should be able to specify elective Courses
[x] Program progress / completion tracking
[x] Program Certificates
Far Future
[ ] A Course should be able to specify prerequisite Courses
I checked all the boxes except for the "far future" one.
|
2025-04-01T06:39:38.531974
| 2018-10-11T02:33:28
|
438057449
|
{
"authors": [
"ArtificialErmine",
"qq854051086"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8611",
"repo": "mitre/adversary",
"url": "https://github.com/mitre/adversary/issues/10"
}
|
gharchive/issue
|
The RPC server is unavailable.
Hello,
I like this project very much. I don't know how to solve this problem. I hope you can help me.
look at it
As you can see, I opened the RPC service, but I still couldn't perform this operation.
Another problem is that my server uses redhunter, which prompts when I load pstools that the permissions aren't enough, and I've tried to modify the permissions, but it doesn't work.
If there is a systemic language problem, I think I was wrong from the beginning.
Look forward to your reply
If this issue still exists after upgrading to the most recent version of Caldera, feel free to re-open this issue or create a new one. Until then, this issue is being closed due to lack of activity.
|
2025-04-01T06:39:38.540385
| 2022-12-16T14:55:48
|
1500367721
|
{
"authors": [
"wjakob"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8612",
"repo": "mitsuba-renderer/drjit-core",
"url": "https://github.com/mitsuba-renderer/drjit-core/pull/52"
}
|
gharchive/pull-request
|
Switch to a node-based IR
Building on PR #41, this change refactors the logic in op.cpp, eval_llvm.cpp and eval_cuda.cpp to switch to a node-based IR.
In particular:
The very large generic function jit_var_new_op() was transformed into a set of operation-specific functions like jit_var_add(), jit_var_fma(). These produce an abstract IR representations, whose codegen-specific bits are now part of the corresponding backends. Splitting the generic function leads to code that is easier to understand and likely easier to compile as well.
The somewhat wordy pattern jit[c]_var_new_x was changed to the shorter jit[c]_var_x, throughout the codebase where x is usually an action.
It is now impossible to create statement-type variables (string-based IR) through the public API. However, internally, there are still a bunch of places that create such variables (CUDA textures, printf, ray tracing, ...). Eventually it will be nice to get rid of these as well, but let's do it in another PR (this one is already way too large)
The PR rolls in a few other minor changes:
Dr.Jit must know the LLVM version to generate the right set of intrinsics (which change between versions). To do this properly, it now tries harder to detect the version. When all else fails, it uses a fallback that resolves symbols known to only exist in specific LLVM versions, which is good enough for inferring the major version.
It removes the managed and managed-read-only CUDA memory types that we never used. They are only poorly supported on Windows and have a bunch of performance pitfalls.
It improves the formatting of some routines (dr.whos(), dr.graphviz()) that had bit-rotted somewhat.
Analogous to the LLVM backend, the CUDA and OptiX backends were reorganized into parts related to dynamic API resolution ({cuda,optix}_api.{h,cpp}) and the rest (_core.cpp)
I renamed eval_{cuda,llvm}.cpp to {cuda,llvm}_eval.cpp to follow the naming convention of the other files.
There was quite a bit of complex code in the memory allocation cache for propagating allocations between different threads that is no longer needed now that each device has a central queue. All of that could be removed, which will hopefully make it easier to support additional backends in the future.
It turns out that LLVM still had a per-thread queue, which wasn't consistent with how the CUDA backend works. I changed it so that there is also a global queue that different threads submit to. (The minor performance benefits of a per-thread queue are outweighted by the difficulty of getting it to work correctly when mitsuba loads scenes in parallel)
(Updated.)
I did an interactive rebase to squash some sub-commits and integrated your feedback. I added one more minor change (https://github.com/mitsuba-renderer/drjit-core/pull/52/commits/a3c7e5bef28e06dfe79e42dba3917645992241ab) to change the printf_async operation from textual IR into abstract IR.
|
2025-04-01T06:39:38.555018
| 2023-01-06T13:46:34
|
1522620438
|
{
"authors": [
"ddalcino",
"vkuznetsovgn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8613",
"repo": "miurahr/aqtinstall",
"url": "https://github.com/miurahr/aqtinstall/issues/632"
}
|
gharchive/issue
|
ERROR: Detected a package name collision
This bug happened to me when I added qthttpserver to an installation. Without qthttpserver everything installs fine.
List cmd:
$ aqt list-qt linux desktop --modules 6.4.2 gcc_64
debug_info qt3d qt5compat qtcharts qtconnectivity qtdatavis3d qthttpserver qtimageformats qtlanguageserver qtlottie qtmultimedia qtnetworkauth qtpdf qtpositioning qtquick3d qtquick3dphysics qtquicktimeline qtremoteobjects qtscxml qtsensors qtserialbus qtserialport qtshadertools qtspeech qtvirtualkeyboard qtwaylandcompositor qtwebchannel qtwebengine qtwebsockets qtwebview
Install cmd:
$ aqt install-qt -O /opt/Qt linux desktop 6.4.2 gcc_64 --m qt5compat qtcharts qthttpserver qtimageformats qtlottie qtmultimedia qtquicktimeline qtspeech qtwebchannel qtwebsockets qthttpserver
INFO : aqtinstall(aqt) v3.1.0 on Python 3.10.6 [CPython GCC 11.3.0]
WARNING : Specified Qt version is unknown: 6.4.2.
ERROR : Detected a package name collision
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 177, in run
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 400, in run_install_qt
qt_archives: QtArchives = retry_on_bad_connection(
File "/usr/local/lib/python3.10/dist-packages/aqt/helper.py", line 165, in retry_on_bad_connection
return function(base_url)
File "/usr/local/lib/python3.10/dist-packages/aqt/installer.py", line 401, in
lambda base_url: QtArchives(
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 300, in init
self._get_archives()
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 365, in _get_archives
self.get_archives_base(f"qt{self.version.major}{self._version_str()}{self._arch_ext()}", self._target_packages())
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 361, in _target_packages
target_packages.add(module, package_names)
File "/usr/local/lib/python3.10/dist-packages/aqt/archives.py", line 94, in add
assert package_name not in self._packages_to_modules, "Detected a package name collision"
AssertionError: Detected a package name collision
ERROR : aqtinstall(aqt) v3.1.0 on Python 3.10.6 [CPython GCC 11.3.0]
Working dir: /
Arguments: ['/usr/local/bin/aqt', 'install-qt', '-O', '/opt/Qt', 'linux', 'desktop', '6.4.2', 'gcc_64', '--m', 'qt5compat', 'qtcharts', 'qthttpserver', 'qtimageformats', 'qtlottie', 'qtmultimedia', 'qtquicktimeline', 'qtspeech', 'qtwebchannel', 'qtwebsockets', 'qthttpserver'] Host: uname_result(system='Linux', node='4a4d7c6111e3', release='5.15.0-53-generic', version='#59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022', machine='x86_64')
===========================PLEASE FILE A BUG REPORT===========================
You have discovered a bug in aqt.
Please file a bug report at https://github.com/miurahr/aqtinstall/issues
Please remember to include a copy of this program's output in your report.
This is definitely a CLI bug, but there's an easy workaround. You have the module qthttpserver listed twice in your command. If you remove one of the duplicates, your command should work properly.
#633 should fix this, if you want to try it. You should be able to put qthttpserver into the module list as many times as you want and it should still work.
The CI runs are failing right now due to an issue with tox; I'm not sure what the problem is, but it exists in the master branch as well.
Hi!
I tried the fix and yes it is working now.
|
2025-04-01T06:39:38.556371
| 2022-02-19T20:52:22
|
1144843634
|
{
"authors": [
"ddalcino",
"miurahr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8614",
"repo": "miurahr/aqtinstall",
"url": "https://github.com/miurahr/aqtinstall/pull/491"
}
|
gharchive/pull-request
|
Add explanation of --config flag in CLI docs
This adds the -c | --config flag to the CLI section of the documentation.
This problem was discovered while investigating #488. It does not attempt to fix that issue.
Thank you for the fix. merged.
|
2025-04-01T06:39:38.619861
| 2018-04-10T15:44:32
|
312981793
|
{
"authors": [
"DMRobertson",
"miyuchina"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8615",
"repo": "miyuchina/mistletoe",
"url": "https://github.com/miyuchina/mistletoe/issues/32"
}
|
gharchive/issue
|
Links to URLs ending in brackets are truncated
URLs like https://en.wikipedia.org/wiki/Set_(mathematics) which end with a bracket seem to get truncated by Mistletoe. It seems to grab a URL by searching for the first ) character, omitting brackets and any following characters. For instance:
$ mistletoe
mistletoe [version 0.5.4] (interactive)
Type Ctrl-D to complete input, or Ctrl-C to exit.
>>> [link](https://en.wikipedia.org/wiki/Set_(mathematics))
...
<p><a href="https://en.wikipedia.org/wiki/Set_%28mathematics">link</a>)
</p>
I'm not sure if this is a serious issue---after all, we can encode the bracket in the URL as %29. For what it's worth, GitHub's flavour of markdown seems to be happy with URLs ending in (at least one) bracket. The string [link](https://en.wikipedia.org/wiki/Set_(mathematics)) results in link, and all is well.
Ah, should be fixed in 2706db3. The problem was caused by a lazy matching character in the regex, because I was trying to solve the problem of multiple links in a paragraph. Disallowing whitespaces (as I should have done from the beginning, derp..) in URLs bypasses this problem.
mistletoe [version 0.5.5] (interactive)
Type Ctrl-D to complete input, or Ctrl-C to exit.
>>> [link](https://en.wikipedia.org/wiki/Set_(mathematics))
...
<p><a href="https://en.wikipedia.org/wiki/Set_%28mathematics%29">link</a>
</p>
This fix will be included in the next pypi release, so please use the dev branch if you need it working now. Thanks for the bug report!
|
2025-04-01T06:39:38.624345
| 2016-08-24T22:34:16
|
173073978
|
{
"authors": [
"ntucker",
"taion"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8616",
"repo": "mjackson/history",
"url": "https://github.com/mjackson/history/issues/353"
}
|
gharchive/issue
|
Question: why would you need to store state that isn't in the url?
Should the user expect to get a different page when they visit a url each time?
Sure. See for example the Pinterest example in React Router.
|
2025-04-01T06:39:38.643008
| 2023-08-21T15:02:14
|
1859570494
|
{
"authors": [
"CameronMacG",
"mjennings061"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8617",
"repo": "mjennings061/viking-log-keeper",
"url": "https://github.com/mjennings061/viking-log-keeper/issues/18"
}
|
gharchive/issue
|
Track metadata about the day
As a stats keeper, I want to track details about the day, so that I can provide context to the launch outputs.
For example, tracking what is submitted in the VGS Ops Data Return (stats) for each day (weather, staffing, awards)
Will be worth adding the MT state as well, but unsure if we are reporting this back on the GUR anymore?
|
2025-04-01T06:39:38.648330
| 2018-06-19T07:21:51
|
333547931
|
{
"authors": [
"klimov-rv",
"lefuturiste",
"ngarnier"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8618",
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/issues/1246"
}
|
gharchive/issue
|
Command line error: No input files found
Hello!
The compilation works but the watch does not.
My version :
"mjml": "^4.0.5"
I'm on windows with Git Bash.
hey @lefuturiste, thanks for reporting. It's a known issue, we're going to fix it very soon in v4.1.0.
I'm closing this one to the benefit of the already-opened issue: https://github.com/mjmlio/mjml/issues/1171
Oh, now I know that I must have the file "index.mjml" in the directory before entering this command?
|
2025-04-01T06:39:38.656791
| 2017-07-21T13:21:22
|
244666976
|
{
"authors": [
"Evensier",
"dalefish",
"iRyusa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8619",
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/issues/752"
}
|
gharchive/issue
|
MJ-column issue when mobile rendering is used
Hello there,
I've been trying to setup a template with some buttons with a text overlay, and I came up with the following code.
<mj-column width="25%">
<mj-image width="133px" src="http://static.evensi.com/business-square.png" />
<mj-text align="center"><h3 class="footer-interest-text-box"><a href="https://twitter.com/RecastAI" style="margin-top: -70px;" class="footer-interest-text">Business</a></h3></mj-text>
</mj-column>
The css classes declared at the top are properly used, but the issue I'm experiencing is related to the layout in the mobile view. The 25% width of the column is somewhat ignored and not reflected in the final HTML. As a result of this I have the screen rendering a list of boxes (I have 6 of them as per the above), piled up vertically in a non-elegant way.
Anybody that can help me figuring out what's wrong?
Thanks
Hi @Evensier
You should take a look at https://mjml.io/documentation/#mjml-group it keeps columns inside mj-group as inline in mobile
I'm closing this issue, feel free to reopen if it doesn't work for you
I've tried that already, even before posting this issue, but things are getting overlapped with the design above.
mj-table could work for this, if I'm understanding the issue correctly
You might need to do 2 groups of 50/50:
mj-group
mj-column x2 50%
mj-group
mj-column x2 50%
The image could be too wide to allow you 4 columns on mobile.
Ok sort of getting what I need, although the mj-group doesn't expose any css-class property. Any idea on how to add some style there?
mj-group does support css-class like every other element in the body
@iRyusa true, but for whatever reason the MJML compiler adds some inline style that overrides the class behaviour.
Any idea on how to avoid that?
I've created a fiddle here with part of the code I've written. Would love some advice to understand what I'm doing wrong
https://jsfiddle.net/syLdxd1e/
Thanks for the help.
|
2025-04-01T06:39:38.658282
| 2024-02-28T11:06:06
|
2158700425
|
{
"authors": [
"dargmuesli",
"iRyusa"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:8620",
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/pull/2838"
}
|
gharchive/pull-request
|
[FIX] Watch wrong files on CLI
Should fix #2823 soon
@iRyusa mind releasing a patch for this? Thank you for your work! :pray:
Should be available in 5.x branch
|