| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:34:44.374795
| 2024-03-28T18:03:59
|
2213827874
|
{
"authors": [
"Pavan8104",
"VSCodeTriageBot"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8626",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/209047"
}
|
gharchive/issue
|
@vscode please help me. I am facing an issue running C++ code on my Mac M2; I am sharing the compiler issue below, and I have all the required compilers installed. After this, VS Code takes me to the debug console.
Does this issue occur when all extensions are disabled?: Yes/No
VS Code Version:
OS Version:
Steps to Reproduce:
We closed this issue because it is a question about using VS Code rather than an issue or feature request. Please search for help on StackOverflow, where the community has already answered thousands of similar questions. You may find their guide on asking a new question helpful if your question has not already been asked. See also our issue reporting guidelines.
Happy Coding!
|
2025-04-01T04:34:44.378486
| 2024-05-17T17:26:47
|
2303264335
|
{
"authors": [
"VSCodeTriageBot",
"rzhao271",
"studgeek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8627",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/212979"
}
|
gharchive/issue
|
Provide way to view the current value of Settings
Currently there is no one place a user can view their settings. They need to click across User, Remote, and Workspace. It would be very helpful if there was a way to view all the current values.
For example, the Settings view could show the inherited value in addition to the default value: when in the Workspace scope, instead of just saying "Modified in User", the view could show something like "Modified in User to 100".
Getting the current value of a setting requires more context than those three scopes so I'm unsure how implementable the "one place a user can view their settings" idea is
https://code.visualstudio.com/docs/getstarted/settings#_settings-precedence
For showing inherited values, I'd consider that a duplicate of https://github.com/microsoft/vscode/issues/58038.
For showing values in other scopes from a current scope, that sounds more realistic to me, but I wouldn't be surprised if there was a performance impact from loading setting values in other scopes, and I'm not sure how more complicated settings such as object settings would be rendered.
This issue has been closed automatically because it needs more information and has not had recent activity. See also our issue reporting guidelines.
Happy Coding!
|
2025-04-01T04:34:44.398423
| 2019-03-07T18:21:38
|
418449458
|
{
"authors": [
"Starwort",
"andyhasit",
"cdmihai"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8628",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/69968"
}
|
gharchive/issue
|
Support multiple sets of editor groups
Issue Type: Feature Request
Many times I need to work on different aspects of a repository. Each aspect would require a different set of editor groups. Ideally, I would open up each aspect in its own editor group and switch between editor groups when I switch between aspects. But today this is not possible.
By one editor group I understand all the horizontal / vertical splits on the screen. So a set of editor groups would really be a collection of multiple VS Code "screen layouts".
Examples of aspects / editor groups:
build code editor group. This is all the relevant files of the build engine
real code editor group. All the sources
test results editor group. After I run unit tests, some xml files get generated
For each aspect I have a different optimal editor group arrangement with its own vertical / horizontal splits.
Implementation ideas:
support opening multiple vscode instances over the same directory, so I can have different editor groups in each one. Although this is not ideal, as the editor groups take time to setup and losing one instance would waste my setup effort
introduce the concept of editor group sets, where I press a button / shortcut which then "zooms out" and shows me all the editor groups, maybe a preview box for each group, which I could also name, and then have all of them persisted between vscode instances.
Similar concept in other tools:
TMUX, which persists multiple terminal tabs, where each terminal tab has multiple vertical / horizontal panes.
session buddy, a chrome extension, which allows one to save, name, and somewhat switch between multiple sets of tabs
VS Code version: Code 1.31.1 (1b8e8302e405050205e69b59abb3559592bb9e60, 2019-02-12T02:20:54.427Z)
OS version: Windows_NT x64 10.0.17763
I also think this would be incredibly useful.
A simple workaround in the meantime is to create multiple workspaces for the same project. Just use Save Workspace As to create a clone; you can then use Open Workspace (even better with a keyboard shortcut like Ctrl+Alt+P) to switch between them. If you save your .code-workspace files in the same directory without too many other things in there it works quite well (a minimal sketch of such a file follows below).
The only downsides are:
The CPU might skyrocket whenever you switch workspaces.
If you make changes to the folder list or settings then you need to manually replicate those across .code-workspace files.
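For illustration, a minimal sketch of what one of those cloned workspace files might contain; the file name, folder path, and window.title value are hypothetical and not taken from the issue (VS Code workspace files accept // comments):
// aspect-build.code-workspace (hypothetical clone for the "build" aspect)
{
  "folders": [
    { "path": "." }
  ],
  "settings": {
    // helps tell the cloned workspaces apart at a glance
    "window.title": "build aspect"
  }
}
Each clone keeps its own editor layout, so reopening it with Open Workspace brings back that aspect's splits independently of the other clones.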
I would also like this; Terminal group layers are already very useful and I'd make heavy use of an equivalent for editor groups
|
2025-04-01T04:34:44.412691
| 2019-07-05T08:21:22
|
464521999
|
{
"authors": [
"hidehiko-t",
"kabirz",
"nufsty2",
"roblourens",
"saeedizadi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8629",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/76675"
}
|
gharchive/issue
|
Can't connect to remote SSH in 1.36.0
VSCode Version: 1.36.0
OS Version: windows 10
Steps to Reproduce:
Only the first remote SSH connection succeeds; after restarting the local VS Code it can't connect again. (1.35.1 did not have this issue.)
Remote processes as below:
ps axu |grep vscode
huiping 18794 0.0 0.0 14264 3244 ? S 15:37 0:00 sh /home/huiping/.vscode-server/bin/0f3794b38477eea13fb47fbe15a42798e6129338/server.sh --enable-remote-auto-shutdown --port=0
huiping 18802 0.4 0.2<PHONE_NUMBER>8 ? Sl 15:37 0:01 /home/huiping/.vscode-server/bin/0f3794b38477eea13fb47fbe15a42798e6129338/node /home/huiping/.vscode-server/bin/0f3794b38477eea13fb47fbe15a42798e6129338/out/vs/server/main.js --enable-remote-auto-shutdown --port=0
huiping 20941 0.0 0.0 15968 1012 pts/0 S+ 15:44 0:00 grep --color=auto vscode
If I kill process 18802, I can connect to the remote again.
Local VS Code log as below:
Found existing installation at /home/huiping/.vscode-server/bin/0f3794b38477eea1
3fb47fbe15a42798e6129338...
Found running server...
Reminder: You may only use this software with Visual Studio family products,
as described in the license (https://go.microsoft.com/fwlink/?linkid=2077057)
Checking server status with wget
http://<IP_ADDRESS>/: 2019-07-05 15:51:41 ERROR 503: Service Unavailable.
de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
"install" terminal command done
Received install output: de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
Server status check failed - waiting and retrying 28
Found existing installation at /home/huiping/.vscode-server/bin/0f3794b38477eea1
3fb47fbe15a42798e6129338...
Found running server...
Reminder: You may only use this software with Visual Studio family products,
as described in the license (https://go.microsoft.com/fwlink/?linkid=2077057)
Checking server status with wget
http://<IP_ADDRESS>/: 2019-07-05 15:51:45 ERROR 503: Service Unavailable.
de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
"install" terminal command done
Received install output: de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
Server status check failed - waiting and retrying 29
Found existing installation at /home/huiping/.vscode-server/bin/0f3794b38477eea1
3fb47fbe15a42798e6129338...
Found running server...
Reminder: You may only use this software with Visual Studio family products,
as described in the license (https://go.microsoft.com/fwlink/?linkid=2077057)
Checking server status with wget
http://<IP_ADDRESS>/: 2019-07-05 15:51:49 ERROR 503: Service Unavailable.
de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
"install" terminal command done
Received install output: de46cd64-f440-4ee0-88bb-3ceeae0b9919##28##
Server status check failed - waiting and retrying 30
Is it going to be resolved soon?
I have the same problem!
I've got the same issue, and the workaround works: run "ps axu | grep vscode" and kill the vscode-server processes.
Client: Windows 10 + VSCode 1.36.0
Server: Ubuntu 18.04 + VSCode 1.36.0
I solved the problem.
Add "no_proxy=<IP_ADDRESS>,localhost" to ~/.wgetrc and this issue no longer occurs.
I solved the problem.
Add "no_proxy=<IP_ADDRESS>,localhost" to ~/.wgetrc, this issue will not exist.
It works, thank you!
I solved the problem.
Add "no_proxy=<IP_ADDRESS>,localhost" to ~/.wgetrc, this issue will not exist.
Didn't help
Duplicate of https://github.com/microsoft/vscode-remote-release/issues/895
|
2025-04-01T04:34:44.434716
| 2019-11-23T03:35:44
|
527503752
|
{
"authors": [
"corpseboat",
"egamma"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8630",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/85436"
}
|
gharchive/issue
|
Rename symbol on variable "r" in python3 causes raw string identifier to be changed as well.
Issue Type: Bug
Enter in
r"test"
r = 0
F2 or right click to rename symbol r in the second line to "test". This results in
rtest"test"
test = 0
This should only change the symbol indicated, instead of appending 'test' to the prefix for the string.
VS Code version: Code 1.39.2 (6ab598523be7a800d7f3eb4d92d7ab9a66069390, 2019-10-15T15:35:18.241Z)
OS version: Windows_NT x64 10.0.17763
System Info
| Item | Value |
|---|---|
| CPUs | AMD Ryzen 7 2700 Eight-Core Processor (16 x 3194) |
| GPU Status | 2d_canvas: enabled; flash_3d: enabled; flash_stage3d: enabled; flash_stage3d_baseline: enabled; gpu_compositing: enabled; multiple_raster_threads: enabled_on; native_gpu_memory_buffers: disabled_software; oop_rasterization: disabled_off; protected_video_decode: unavailable_off; rasterization: enabled; skia_deferred_display_list: disabled_off; skia_renderer: disabled_off; surface_synchronization: enabled_on; video_decode: enabled; viz_display_compositor: disabled_off; webgl: enabled; webgl2: enabled |
| Load (avg) | undefined |
| Memory (System) | 15.93GB (9.76GB free) |
| Process Argv | |
| Screen Reader | no |
| VM | 0% |
Extensions (4)
| Extension | Author (truncated) | Version |
|---|---|---|
| vscode-django | bat | 0.19.0 |
| vscode-markdownlint | Dav | 0.32.0 |
| todo-tree | Gru | 0.0.162 |
| python | ms- | 2019.11.50794 |
Similar (but not identical) bug found in f-string handling:
f"test"
f = 0
maps to
test"test"
test = 0
In this case, the renamed 'f' is replaced, rather than appended, but this is still unwanted behavior.
Please file the issue against the Python extension.
|
2025-04-01T04:34:44.437053
| 2019-12-18T22:34:14
|
539962078
|
{
"authors": [
"bbangert"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8631",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/87287"
}
|
gharchive/issue
|
VS Code Online has no way to sign git commits safely
VS Code Online is a nice/fast way to have an environment ready to code with. Unfortunately for git repos requiring signed commits, there's no obvious or easy way to sign commits without putting your private GPG keyfiles on the VS Online environment.
One approach I've used in the past (which was not obvious and was tricky to set up) was to use SSH agent forwarding to a remote environment, and forward the GPG agent socket over it as well. This works fine with VS Code and SSH remote environments.
It doesn't appear that the terminal in VS Code Online using an Azure environment is using SSH, or if it is, it's not documented how it's configured, so I can't set up a similar forwarding config in my .ssh/config.
Being able to sign one's commits when using VS Code Online seems like a pretty obvious thing one should be able to do, ideally with ease.
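For context, a minimal sketch of the SSH-based forwarding approach mentioned above; the host name and socket paths are illustrative assumptions, and the real paths come from gpgconf --list-dir agent-socket / agent-extra-socket on each machine:
# ~/.ssh/config on the local machine (paths are examples, not from this issue)
Host remote-dev
    HostName remote.example.com
    # expose the local gpg-agent's "extra" socket as the remote agent socket
    RemoteForward /home/me/.gnupg/S.gpg-agent /Users/me/.gnupg/S.gpg-agent.extra
The remote sshd typically also needs StreamLocalBindUnlink yes so a stale socket file doesn't block the forward.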
Sorry for the noise, just found the right repo.
Closing as dupe of MicrosoftDocs/vsonline#119
|
2025-04-01T04:34:44.445333
| 2020-02-20T11:34:00
|
568237416
|
{
"authors": [
"georges-dib",
"joaomoreno"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8632",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/91072"
}
|
gharchive/issue
|
Problem with staging and committing changes
I went through the file and staged the changes that I wanted. When I clicked stage, nothing changed in the line highlight on the left, but in the changes panel, the file appeared in the staged section. I went and continued staging changes in the same file. Clicking on the staged file showed that there are no differences in the code at all.
I thought it was a UI issue, so I committed and pushed, and nothing was pushed in the commit.
VS code v: 1.42.1
Can you reproduce? If so, can you show me the output of Git: Show Git Output?
Hi,
I cannot reproduce that exact scenario because when I closed the window and reopened it, it worked fine. However when I make some changes in my file, and I click on the diff, VS shows me my current changes, AND the changes of the previously merged commit (doesn't necessarily have to be mine). When I stage the file, it shows me the correct changes only without the previous changes.
It's a very weird behaviour but it has been going on for some time now, and when I close and reopen the window it works fine.
Show me the output of Git: Show Git Output, once it comes back, thanks!
The commit in the image is not mine. I don't know if this is what you want.
I got screenshots for my previous comment, the changes in the working tree are NOT ONLY mine (first image):
In the index after staging, they are the correct changes (2nd image):
No, I want the contents of F1, Git: Show Git Output.
Hello,
Sorry for the delay, I was away for some time; maybe this is what you're looking for. Let me know if I can help you further :)
git rev-list --left-right CoreDev...refs/remotes/origin/CoreDev
git for-each-ref --format %(refname) %(objectname) --sort -committerdate
git remote --verbose
Failed to watch ref 'c:\Users\Alain\Desktop\ui_v2.git\refs\remotes\origin\CoreDev', is most likely packed.
Error: ENOENT: no such file or directory, watch 'c:\Users\Alain\Desktop\ui_v2.git\refs\remotes\origin\CoreDev'
at FSWatcher.start (internal/fs/watchers.js:165:26)
at Object.watch (fs.js:1270:11)
at Object.t.watch (c:\Users\Alain\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\git\dist\main.js:1:592073)
at E.updateTransientWatchers (c:\Users\Alain\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\git\dist\main.js:1:123277)
at l.fire (c:\Users\Alain\AppData\Local\Programs\Microsoft VS Code\resources\app\out\vs\workbench\services\extensions\node\extensionHostProcess.js:48:561)
at R.updateModelState (c:\Users\Alain\AppData\Local\Programs\Microsoft VS Code\resources\app\extensions\git\dist\main.js:1:142135)
git config --get commit.template
|
2025-04-01T04:34:44.448536
| 2020-02-25T19:14:58
|
570774509
|
{
"authors": [
"kkimatx",
"roblourens"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8633",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/91448"
}
|
gharchive/issue
|
VS Code ipynb notebook: selects every cell or scrolls to bottom with no input
VSCode Version: 1.42.1
OS Version: Darwin x64 19.2.0
Steps to Reproduce:
When I am browsing between tabs in VS code, multiple things happen with the VS Code notebook:
If I go back to a Notebook tab, it will auto scroll to the bottom with no input.
Every other open notebook will have every cell selected, with a cursor active in every cell.
Not sure exactly what triggers it, but this happens frequently.
Does this issue occur when all extensions are disabled?: Yes/No
Please file this issue on the vscode-python extension's repo: https://github.com/microsoft/vscode-python/issues
|
2025-04-01T04:34:44.451723
| 2020-03-11T09:34:18
|
579124697
|
{
"authors": [
"FIAPT",
"blarzHernandez",
"roblourens",
"sanket-bhalerao"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8634",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/92459"
}
|
gharchive/issue
|
Remote SSH uses hostname instead of username
VSCode Version: 1.43.0
OS Client: Windows 10
Remote: Ubuntu
After upgrading the Remote-SSH extension from version 0.49.0 to 0.50.0, the extension uses the local host's username instead of the remote username when connecting to the remote.
possible duplicate of #92439
Additional comment and possible temporary fix: The ssh connection still works after switching the extension back to version 0.49.0
I just downgraded it to 0.49.0 and it worked. Thanks
Dupe of https://github.com/microsoft/vscode-remote-release/issues/2512
|
2025-04-01T04:34:44.452852
| 2020-04-23T22:47:03
|
605921983
|
{
"authors": [
"connor4312"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8635",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/96016"
}
|
gharchive/issue
|
Localization for renderers
We don't really have a good story for this for webviews right now either. We should figure out what to do here.
This should still be an eventual thing, but closing it for now as it's just being snowplowed.
|
2025-04-01T04:34:44.460078
| 2020-04-26T22:05:44
|
607125096
|
{
"authors": [
"isidorn",
"joanmarie",
"jvesouza"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8636",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/96210"
}
|
gharchive/issue
|
Screen reader: Improvement in the labels of some items present in the status bar.
Issue Type: Feature Request
When I request the contents of the status bar, the following is read by Orca:
'Open a remote window eita (Git) - master* No Problems Select a connection No Notifications Tweet Feedback Select Language Mode Select End of Line Sequence Select Encoding Select Indentation Go to Line/Column If you are not using a Screen Reader, please change the setting editor.accessibilitySupport to "off".'
In some cases what is being presented is the action to be taken rather than the current value. An example is the row and column where the cursor is: instead of reading the row and column numbers, Orca reads "Go to Line/Column".
From @joanmarie
I've not taken a look yet, but from your description it sounds like we want Isidor to add an aria-label value like "Ln 7, Col 16" on the element which has a title value of "Go to Line/Column."
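To illustrate the suggestion, a minimal sketch; the selector and the literal values are hypothetical and not VS Code's actual implementation:
// The title keeps the action; aria-label carries the value a screen reader should announce.
const item = document.querySelector<HTMLElement>('.statusbar-item.cursor-position'); // hypothetical selector
if (item) {
  item.title = 'Go to Line/Column';                // what activating the item does
  item.setAttribute('aria-label', 'Ln 7, Col 16'); // what Orca should read out
}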
CC @isidorn
VS Code version: Code - Insiders 1.45.0-insider (a250df703de955a38aed427a917bce8278ab3331, 2020-04-24T09:57:20.729Z)
OS version: Linux x64 5.6.6-arch1-1
@jvesouza thanks for filing this!
I have pushed a fix, so now it should be better. Please try it out from Tuesday and let us know how it behaves for you.
It really improved a lot. Perhaps it is a matter of preference, but it seems to me that each piece of information should end with a period. In my opinion this would facilitate the understanding of the information.
Something like the following:
Remote master*. 7 Errors. Select a connection. Notifications. Tweet Feedback. Prettier. JavaScript. LF. UTF-8. Spaces: 2. Ln 2, Col 1. Screen Reader Optimized.
Based on the example above, I was in doubt if the information 'Prettier JavaScript' is a single piece of information or if it is two pieces of information.
@jvesouza: I just committed a change to Orca to insert the pause breaks. If your punctuation level is not set to "all," Orca should pause.
Related aside: The reason why the punctuation level matters is that Orca currently accomplishes the pause by inserting a period. And hearing "dot" when all we wanted was a pause is annoying. Changing Orca so that pauses are not periods and thus will work no matter what the punctuation level is will be a slightly bigger change that I don't yet have time for. So for now, Orca doesn't insert pauses when the punctuation level is "all."
I personally think the place to address this need is in the screen readers rather than in the apps (like VSCode). We cannot count on every web app with a status bar to end the status bar items with a period. Thus the problem is bigger than VSCode.
Furthermore, I don't believe the other screen readers are taking advantage of the ARIA code role yet. As a result, I think those users will have to set their punctuation level to all in VSCode. Which means they might start hearing "dot" in between each status bar item.
José: Please pull Orca master and see if the experience is better for you. Thanks!!
@joanmarie It looks great! thank you.
@joanmarie absolutely agree with you.
Screen readers should announce a pause between structured items (since these are different DOM nodes it makes sense to have a pause in between). Adding punctuation on the VS Code side does not make a lot of sense, as you said, since that might break other screen readers.
Thanks for fixing it that fast!
Per José's comment adding verified label.
|
2025-04-01T04:34:44.469568
| 2020-04-30T11:37:09
|
609857135
|
{
"authors": [
"Tyriar",
"alexr00",
"tonyobanon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8637",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/issues/96684"
}
|
gharchive/issue
|
Code Completion sometimes take too long to resolve
Issue Type: Bug
Code completion now has an issue where it takes longer than expected to resolve. This is a very recent issue that I began noticing, some days ago.
VS Code version: Code 1.44.2 (ff915844119ce9485abfe8aa9076ec76b5300ddd, 2020-04-16T17:07:18.473Z)
OS version: Darwin x64 18.7.0
System Info
| Item | Value |
|---|---|
| CPUs | Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz (4 x 2300) |
| GPU Status | 2d_canvas: enabled; flash_3d: enabled; flash_stage3d: enabled; flash_stage3d_baseline: enabled; gpu_compositing: enabled; metal: disabled_off; multiple_raster_threads: enabled_on; oop_rasterization: disabled_off; protected_video_decode: unavailable_off; rasterization: enabled; skia_renderer: disabled_off_ok; video_decode: enabled; viz_display_compositor: enabled_on; viz_hit_test_surface_layer: disabled_off_ok; webgl: enabled; webgl2: enabled |
| Load (avg) | 2, 2, 2 |
| Memory (System) | 16.00GB (1.66GB free) |
| Process Argv | |
| Screen Reader | no |
| VM | 0% |
Extensions (7)
| Extension | Author (truncated) | Version |
|---|---|---|
| mustache | daw | 1.1.1 |
| vscode-eslint | dba | 2.1.5 |
| xml | Dot | 2.5.0 |
| gitlens | eam | 10.2.1 |
| dotenv | mik | 1.0.1 |
| vscode-docker | ms- | 1.1.0 |
| LiveServer | rit | 5.6.1 |
This is probably caused by one of your installed extensions taking a long time to provide the completion. Can you try disabling your installed extensions and see if it still happens?
@tonyobanon what language are you using?
|
2025-04-01T04:34:44.471170
| 2021-02-09T20:35:32
|
804904318
|
{
"authors": [
"rzhao271"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8638",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/116230"
}
|
gharchive/pull-request
|
Bump Emmet
This PR fixes #115854, fixes #115839
Upstream changes:
https://github.com/microsoft/vscode-emmet-helper/compare/861aa7083d420a5bcdccf1b07d1aafe2d65a568e..ee3480d188fc622a187a70c04629e9a74ca262f6
|
2025-04-01T04:34:44.473253
| 2021-07-02T10:31:46
|
935641589
|
{
"authors": [
"bpasero"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8639",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/127870"
}
|
gharchive/pull-request
|
editors - make the check functions more robust (#124222)
This PR fixes #124222
Pushed a second commit to reduce the overall change a bit given it is late in endgame. Specifically I restored isEditorInput for callers outside of editor.ts since that check was fast to compute.
Merging this in for main and leaving https://github.com/microsoft/vscode/pull/127875 for a review for release/1.58
|
2025-04-01T04:34:44.474390
| 2022-12-01T05:38:23
|
1470713251
|
{
"authors": [
"joaomoreno",
"kimjihae3"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8640",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/167797"
}
|
gharchive/pull-request
|
readme-contribute
Thanks, but I don't feel this brings added value to our README.md entrypoint.
|
2025-04-01T04:34:44.476692
| 2024-10-11T14:01:10
|
2581462544
|
{
"authors": [
"deepak1556",
"gurusura"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8641",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/231119"
}
|
gharchive/pull-request
|
fix: use the mime package from snap to generate db
Refs https://github.com/microsoft/vscode/issues/230454#issuecomment-2407101935
Any updates on this crash? Still having the same issue:
Version: 1.94.2
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: <IP_ADDRESS>-electron.0
OS: Linux x64 6.8.0-47-generic snap
|
2025-04-01T04:34:44.477668
| 2024-11-05T17:40:56
|
2636091698
|
{
"authors": [
"Tyriar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8642",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/233115"
}
|
gharchive/pull-request
|
Add terminal/command contributable menu
Part of #231188
The problem with this is that we want the terminal and command API objects available when the commands are executed in the extension host.
|
2025-04-01T04:34:44.479029
| 2020-04-10T06:50:32
|
597729084
|
{
"authors": [
"bpasero",
"isidorn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8643",
"repo": "microsoft/vscode",
"url": "https://github.com/microsoft/vscode/pull/94873"
}
|
gharchive/pull-request
|
Recovery: 94726
This PR fixes #94726
Suggested fix is to pin the editor when it is being opened in the background. We used to behave like that in the past.
Code changes look good and I have verified this fixes the issue -> approved
Even though this is a regression I do not think it is that breaking especially since there is a workaround (disable preview). So can be included in the recovery but does not have to be imho
|
2025-04-01T04:34:44.493942
| 2021-08-02T11:37:24
|
958053457
|
{
"authors": [
"emtenet",
"riverar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8644",
"repo": "microsoft/windows-rs",
"url": "https://github.com/microsoft/windows-rs/issues/1017"
}
|
gharchive/issue
|
Overloaded members IDCompositionVisual2::SetOffsetX assigned to wrong vtable slots?
Whilst playing around with this crate I encountered an unexplained E_INVALIDARG whilst calling IDCompositionVisual2::SetOffsetX(&self, offsetx: f32)
I prepared a few short self contained projects to compare this behaviour with other compilers and crates.
See the following Gist.
My guess is that the call to:
HRESULT IDCompositionVisual2::SetOffsetX(float)
uses the wrong vtable slot and ends up calling this instead:
HRESULT IDCompositionVisual2::SetOffsetX(IDCompositionAnimation*)
When a float of 0.0 is passed it returns E_INVALIDARG, complaining about a null animation pointer.
When other float values are passed it crashes.
The winapi crate seems to have the two overloaded SetOffsetX members ordered in vtable slots opposite to this crate.
Microsoft Windows 10 Pro
10.0.19043 N/A Build 19043
rustc 1.52.1 (9bc8c42bb 2021-05-09)
Microsoft Visual Studio Community 2019 Version 16.10.4
Hm, at a glance, not seeing anything wrong with the bindings. What version of the crate are you using? Can you create a minimum repro project so we can run it through the debugger?
This is from dcomp.h:
DECLARE_INTERFACE_IID_(IDCompositionVisual, IUnknown, "4d93059d-097b-4651-9a60-f0f25116e2f3")
{
// Changes the value of OffsetX property
STDMETHOD(SetOffsetX)(THIS_
float offsetX
) PURE;
// Animates the value of the OffsetX property.
STDMETHOD(SetOffsetX)(THIS_
_In_ IDCompositionAnimation* animation
) PURE;
// ...
The generated bindings look like:
pub struct IDCompositionVisual(::windows::IUnknown);
impl IDCompositionVisual {
pub unsafe fn SetOffsetX(&self, offsetx: f32) -> ::windows::Result<()> {
(::windows::Interface::vtable(self).3)(
::windows::Abi::abi(self),
::std::mem::transmute(offsetx),
)
.ok()
}
pub unsafe fn SetOffsetX2<'a>(
&self,
animation: impl ::windows::IntoParam<'a, IDCompositionAnimation>,
) -> ::windows::Result<()> {
(::windows::Interface::vtable(self).4)(
::windows::Abi::abi(self),
animation.into_param().abi(),
)
.ok()
}
// ...
I just realized you said IDCompositionVisual2, my mistake. Let me review how that's handled.
Looks okay there too. A repro at this point would be handy, thanks!
impl IDCompositionVisual2 {
pub unsafe fn SetOffsetX(&self, offsetx: f32) -> ::windows::Result<()> {
(::windows::Interface::vtable(self).3)(
::windows::Abi::abi(self),
::std::mem::transmute(offsetx),
)
.ok()
}
pub unsafe fn SetOffsetX2<'a>(
&self,
animation: impl ::windows::IntoParam<'a, IDCompositionAnimation>,
) -> ::windows::Result<()> {
(::windows::Interface::vtable(self).4)(
::windows::Abi::abi(self),
animation.into_param().abi(),
)
.ok()
}
// ...
I have prepared some sample projects in the following repository https://github.com/emtenet/set-offset-x
Start with the rust project in the issue directory.
I have updated it to use the latest 0.18.0 version of the windows crate.
My guess is that methods that are overloaded (same name, different parameter types) are not laid out in the vtable in the same order that they appear in the C++ header files.
This issue is looking at the two overloaded methods named SetOffsetX. In the header file they are in the following order:
SetOffsetX(float)
SetOffsetX(IDCompositionAnimation*)
Compare that with how the winapi crate places those two methods in the vtable, they are reversed.
https://github.com/retep998/winapi-rs/blob/0.3/src/um/dcomp.rs#L163-L169
My sample repository contains two additional projects implementing equivalent logic to the issue project.
The ok_rust project uses the winapi crate and runs successfully.
The third project ok_cpp is written in C++ using Visual Studio 2019 and also runs successfully.
I have compared the assembly output of the three sample projects to confirm that the project using the winapi crate matches the C++ project in the vtable slot they use to call the SetOffsetX(float) method. I can provide further detail on how I did this comparison if you need it.
Agreed. Looks like an upstream SDK/metadata bug. Nice find @emtenet!
I filed a bug on win32metadata for you. https://github.com/microsoft/win32metadata/issues/600.
Will close this out as the crate is working properly but feel free to ping us if you have any further issues/questions/etc.
0:001> dps dcomp!Windows::UI::Composition::InteropVisual::Api::`vftable'
00007ff8`4ddd6c20 00007ff8`4dcc2390 dcomp![...]::QueryInterface
00007ff8`4ddd6c28 00007ff8`4dcd8f00 dcomp![...]::AddRef
00007ff8`4ddd6c30 00007ff8`4dcbe6f0 dcomp![...]::Release
-00007ff8`4ddd6c38 00007ff8`4dd61980 dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetX
// ^^^^^^^^^^^^^^^^^ slot 3 = SetOffsetX(IDCompositionAnimation*)
00007ff8`4ddd6c40 00007ff8`4dcce450 dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetX
00007ff8`4ddd6c48 00007ff8`4dd61a20 dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetY
00007ff8`4ddd6c50 00007ff8`4dc96d00 dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetY
00007ff8`4ddd6c58 00007ff8`4dccf970 dcomp!Windows::UI::Composition::InteropVisual::Api::SetTransform
00007ff8`4ddd6c60 00007ff8`4dd61de0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetTransform
00007ff8`4ddd6c68 00007ff8`4dd61fe0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetTransformParent
00007ff8`4ddd6c70 00007ff8`4dcd15e0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetEffect
00007ff8`4ddd6c78 00007ff8`4dcd5c30 dcomp!Windows::UI::Composition::InteropVisual::Api::SetBitmapInterpolationMode
00007ff8`4ddd6c80 00007ff8`4dcd6850 dcomp!Windows::UI::Composition::InteropVisual::Api::SetBorderMode
00007ff8`4ddd6c88 00007ff8`4dd617a0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetClip
00007ff8`4ddd6c90 00007ff8`4dcd4690 dcomp!Windows::UI::Composition::InteropVisual::Api::SetClip
00007ff8`4ddd6c98 00007ff8`4dcc85f0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetContent
0:001> x /v dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetX*
prv func 00007ff8`4dcce450 6c dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetX (void)
-pub func 00007ff8`4dd61980 0 dcomp!Windows::UI::Composition::InteropVisual::Api::SetOffsetX (public: virtual long __cdecl Windows::UI::Composition::InteropVisual::Api::SetOffsetX(struct IDCompositionAnimation *))
// ^^^^^^^^^^^^^^^^^
@emtenet We grab metadata periodically and check it in @ https://github.com/microsoft/windows-rs/tree/master/crates/reader/default
I'll check up on this particular interface tomorrow; @kennykerr and I were recently chatting about testing this fix too, so good timing.
If you'd like to try the latest metadata in the meantime, and see if it Just Works™️, grab the latest winmd from https://github.com/microsoft/win32metadata/raw/master/scripts/BaselineWinmd/Windows.Win32.winmd.
|
2025-04-01T04:34:44.503016
| 2022-12-07T11:39:29
|
1481704049
|
{
"authors": [
"XYZliang",
"denelon",
"eabase"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8645",
"repo": "microsoft/winget-cli",
"url": "https://github.com/microsoft/winget-cli/issues/2752"
}
|
gharchive/issue
|
“winget install -l” The specified location is different from the installation location
Brief description of your issue
When I run
winget install Tencent.TencentDocs -l "E:\Program Files\TencentDocs"
to install TencentDocs, the software is incorrectly installed into the directory "E:\Program" instead of the specified "E:\Program Files\TencentDocs".
There is only an empty TencentDocs folder under the E:\Program Files folder
I've tried uninstalling and reinstalling many times, and that seems to happen when I install this particular program. No abnormalities have been found in other installation programs.
I don't know if the problem is in winget or the package, but I decided to raise the issue.
Steps to reproduce
Just run winget install Tencent.TencentDocs -l "E:\Program Files\TencentDocs" in cmd.
Expected behavior
TencentDocs will be installed to the directory "E:\Program Files\TencentDocs"
Actual behavior
Under E:\Program Files, only a new folder TencentDocs was created, but no program was installed and the folder is empty. The program is installed to E:\Program instead.
Environment
Windows Package Manager v1.3.2691
Copyright (c) Microsoft Corporation. All rights reserved.
Windows: Windows.Desktop v10.0.22621.900
System Architecture: X64
Package: Microsoft.DesktopAppInstaller v1.18.2691.0
Logs: %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\DiagOutputDir
Links
----------------------------------------------------------------------------
Privacy Statement https://aka.ms/winget-privacy
License Agreement https://aka.ms/winget-license
Third Party Notices https://aka.ms/winget-3rdPartyNotice
Homepage https://aka.ms/winget
Windows Store Terms https://www.microsoft.com/en-us/storedocs/terms-of-sale
Yes, what are your terminal character-encoding settings? Also try using PowerShell for better terminal support. Clearly you need to escape the space somehow. What about using single quotes?
Thank you for your suggestion, I tried to install the software with this problem using powershell, but the problem persists.
Also, both my cmd and PowerShell encoding is 936, which is the default Simplified Chinese (GB2312). After I switched to UTF-8 (65001) encoding, the problem persisted as well.
I also tried using single quotes instead of double quotes, but still no solution. Interestingly, this bug only occurs with some specific software, such as Motrix, Termius, uTools, Tencent Docs.
I wonder whether this is related to the syntax. The string may need to be passed as a nested quotation to preserve the space.
Sorry I don't seem to understand what you mean, can you briefly say how one should try to fix it? I reinstalled my Windows system and tried to install all the software via winget, but there are more and more packages with this problem and it is only those few specific packages that show this exception.
From Microsoft Learn:
-l, --location
Location to install to (if supported).
It's possible these packages do not support passing the location through to the installer. If they do support this mechanism, then it's likely related to the syntax used to specify the location.
Thanks for your reply. When I install the software that shows this install-path problem into another folder without spaces, for example "D:\ProgramFiles", everything works fine, so these packages should support passing the location through to the installer. As you said, it seems that the space in the specified install location triggers some kind of syntax bug.
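For reference, a hedged sketch of the quoting variants discussed above (the package ID is the one from this issue; whether the path actually reaches the installer still depends on the package):
cmd, plain double quotes: winget install Tencent.TencentDocs --location "E:\Program Files\TencentDocs"
PowerShell, nested quotes so the inner quotes survive argument parsing: winget install Tencent.TencentDocs --location '"E:\Program Files\TencentDocs"'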
|
2025-04-01T04:34:44.506254
| 2023-05-05T23:02:13
|
1698291096
|
{
"authors": [
"alecazam",
"mdanish-kh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8646",
"repo": "microsoft/winget-cli",
"url": "https://github.com/microsoft/winget-cli/issues/3226"
}
|
gharchive/issue
|
winget install cygwin succeeds, but uninstall fails
Brief description of your issue
Can winget please not allow the installation of apps that do not uninstall. I was going to try this out, it broke builds, and then I thought I'd be able to easily uninstall it from winget.
Steps to reproduce
>winget install cygwin
> where mkdir
C:\cygwin64\bin\mkdir.exe
>winget uninstall cygwin
No installed package found matching input criteria.
Expected behavior
I expect to be able to uninstall anything winget installs with the same command. I don't see anything in Add/Remove Programs, and there's no un/installer for me to run since winget did the install.
Actual behavior
I can't remove cygwin at all, and now have to find the tedious uninstallation steps online.
Environment
Windows Package Manager v1.4.10173
Copyright (c) Microsoft Corporation. All rights reserved.
Windows: Windows.Desktop v10.0.19045.2846
System Architecture: X64
Package: Microsoft.DesktopAppInstaller v1.19.10173.0
Logs: %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\DiagOutputDir
User Settings: %LOCALAPPDATA%\Packages\Microsoft.DesktopAppInstaller_8wekyb3d8bbwe\LocalState\settings.json
Links
---------------------------------------------------------------------------
Privacy Statement https://aka.ms/winget-privacy
License Agreement https://aka.ms/winget-license
Third Party Notices https://aka.ms/winget-3rdPartyNotice
Homepage https://aka.ms/winget
Windows Store Terms https://www.microsoft.com/en-us/storedocs/terms-of-sale
Duplicate of https://github.com/microsoft/winget-pkgs/issues/103866
|
2025-04-01T04:34:44.511177
| 2023-07-27T02:14:09
|
1823492739
|
{
"authors": [
"AsciiWolf",
"CoolPlayLin",
"Jvr2022"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8647",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/issues/113268"
}
|
gharchive/issue
|
[Package Request]: CW Skimmer
How can we help?
I would like someone else to build the manifest.
Please read and ensure the following
[X] The installer meets the above requirements
Please provide the following information
Download Page Url: https://www.dxatlas.com/Download.asp
Publisher: Afreet Software
Package Name: CW Skimmer
Description: Multi-channel CW decoder and analyzer.
Package Version: 2.1
Installer URL: https://www.dxatlas.com/CwSkimmer/Files/CwSkimmer.zip
i have done
#113292
Hi @Jvr2022
i have done
#113292
Add
Resolve #113268
to link this issue with your PR
Hi @Jvr2022
i have done
#113292
Add
Resolve #113268
into your PR's body so that you can link this issue with your PR
Done
|
2025-04-01T04:34:44.515811
| 2023-07-12T22:34:14
|
1801853306
|
{
"authors": [
"BrandonWanHuanSheng",
"denelon",
"upintheairsheep"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8648",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/111978"
}
|
gharchive/pull-request
|
New package: Samsung.RecoverySolution version <IP_ADDRESS>
[ ] Have you signed the Contributor License Agreement?
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.4 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: codeflow:open?pullrequest=https://github.com/microsoft/winget-pkgs/pull/111978&drop=dogfoodAlpha
#111634
Supporting custom hardware in validation is "Not Planned".
https://github.com/microsoft/winget-pkgs/issues/199504
|
2025-04-01T04:34:44.521180
| 2023-12-05T16:50:44
|
2026674043
|
{
"authors": [
"DaCosySheeep",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8649",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/129316"
}
|
gharchive/pull-request
|
New version: DaCosySheeep.FactDownloader version 1.3.0
[x] Have you signed the Contributor License Agreement?
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[x] Does your manifest conform to the 1.5 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
Microsoft Reviewers: Open in CodeFlow
Automatic Validation ended with:
Exception during executable launch operation System.AggregateException: One or more errors occurred. (An error occurred trying to start process 'C:\Users\Validator\AppData\Local\Programs\Fact downloader\Fact downloader.exe' with working directory 'D:\TOOLS'. The specified executable is not a valid application for this OS platform.) ---> System.ComponentModel.Win32Exception (216): An error occurred trying to start process 'C:\Users\Validator\AppData\Local\Programs\Fact downloader\Fact downloader.exe' with working directory 'D:\TOOLS'. The specified executable is not a valid application for this OS platform. --- End of inner exception stack trace --- at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
@wingetbot waivers Add Validation-Executable-Error
|
2025-04-01T04:34:44.524033
| 2024-01-08T20:17:43
|
2071155160
|
{
"authors": [
"Exorcism0666"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8650",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/133302"
}
|
gharchive/pull-request
|
New package: UweSieber.UsbTreeView version 3.8.9
Pull request has been created with Komac v1.11.0 :rocket:
Resolve: #133297
Microsoft Reviewers: Open in CodeFlow
For moderators
This account is automated by Github Actions and the source code was created by CoolPlayLin. If you have any questions about any pull request, don't hesitate to ping @Exorcism0666, I'll get a notification.
[!important]
Please carefully review this Pull Request before merging. If it is a Pull Request for removing incorrect content and the URLs turn out to be issue-free upon manual checking, please close this Pull Request directly. (It is best to inform Exorcism0666 of the closure.)
Have a nice day!
|
2025-04-01T04:34:44.529321
| 2024-06-03T20:19:50
|
2331978035
|
{
"authors": [
"Portable-Apps-For-Windows",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8651",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/156419"
}
|
gharchive/pull-request
|
New package: Portable-Windows-Apps.KeePass 2.54
Checklist for Pull Requests
[ ] Have you signed the Contributor License Agreement?
[ ] Is there a linked Issue?
Manifests
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[ ] This PR only modifies one (1) manifest
[ ] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.6 schema?
Note: <path> is the name of the directory containing the manifest you're submitting.
@microsoft-github-policy-service agree
Hi @Portable-Apps-For-Windows,
This package already exists as DominikReichl.KeePass. Please search for a software package before submitting a PR for it.
Unfortunately, we only accept packages that are hosted by the developer. The developer's URL for this package would be https://sourceforge.net/projects/keepass/files/KeePass 2.x/2.54/KeePass-2.54.zip/download.
|
2025-04-01T04:34:44.533784
| 2024-07-07T16:24:31
|
2394120956
|
{
"authors": [
"damn-good-b0t",
"hackean-msft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8652",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/161825"
}
|
gharchive/pull-request
|
New version: h3poteto.fedistar version 1.9.8
Pull request has been created with WinGet Updater :rocket:
Microsoft Reviewers: Open in CodeFlow
/azp run
|
2025-04-01T04:34:44.537347
| 2024-07-22T14:42:04
|
2423059059
|
{
"authors": [
"Legorooj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8653",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/164110"
}
|
gharchive/pull-request
|
New version: getzola.zola version 0.19.1
Checklist for Pull Requests
[ ] Have you signed the Contributor License Agreement?
[x] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[x] Does your manifest conform to the 1.6 schema?
Microsoft Reviewers: Open in CodeFlow
@microsoft-github-policy-service agree
|
2025-04-01T04:34:44.539586
| 2024-10-20T01:10:29
|
2599800645
|
{
"authors": [
"Exorcism0666",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8654",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/184500"
}
|
gharchive/pull-request
|
Remove version: qBittorrent.qBittorrent.Qt6 version 5.0.0
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
URL: https://sourceforge.net/projects/qbittorrent/files/qbittorrent-win32/qbittorrent-5.0.0/qbittorrent_5.0.0_lt20_qt6_x64_setup.exe/download
Status Code: 200
(Automated message - build 897)
Close with reason: Package still available.;
|
2025-04-01T04:34:44.541232
| 2024-11-15T02:01:10
|
2660505726
|
{
"authors": [
"Exorcism0666",
"ItzLevvie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8655",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/191683"
}
|
gharchive/pull-request
|
Remove version: WikidPad.WikidPad version 2.2
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
Close with reason: URL works;
|
2025-04-01T04:34:44.546656
| 2024-11-26T21:46:21
|
2696205904
|
{
"authors": [
"Trenly",
"stephengillie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8656",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/194404"
}
|
gharchive/pull-request
|
Remove: OpenDesignAlliance.ODAFileConverter version 23.8
Package now behind authentication
Checklist for Pull Requests
[x] Have you signed the Contributor License Agreement?
[x] Is there a linked Issue?
Manifests
[ ] Have you checked that there aren't other open pull requests for the same manifest update/change?
[x] This PR only modifies one (1) manifest
[x] Have you validated your manifest locally with winget validate --manifest <path>?
[ ] Have you tested your manifest locally with winget install --manifest <path>?
[ ] Does your manifest conform to the 1.6 schema?
Note: <path> is the directory's name containing the manifest you're submitting.
Resolves #194300
Microsoft Reviewers: Open in CodeFlow
URL: https://download.opendesign.com/guestfiles/Demo/ODAFileConverter_QT5_vc16_amd64dll_23.8.msi
Response status code does not indicate success: 405 (Method Not Allowed).
(Automated message - build 899)
|
2025-04-01T04:34:44.547900
| 2024-12-11T02:29:32
|
2731661262
|
{
"authors": [
"Exorcism0666",
"Trenly"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8657",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/197649"
}
|
gharchive/pull-request
|
Remove version: RoyQu.RedPanda-C++ version 2.20
[Automated] It returns code over 400 in all urls
Microsoft Reviewers: Open in CodeFlow
Close with reason: Duplicate;
|
2025-04-01T04:34:44.551496
| 2020-05-19T17:45:27
|
621168084
|
{
"authors": [
"KevinLaMS",
"SafetyPanda",
"superusercode"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8658",
"repo": "microsoft/winget-pkgs",
"url": "https://github.com/microsoft/winget-pkgs/pull/302"
}
|
gharchive/pull-request
|
add transmission client
Microsoft Reviewers: Open in CodeFlow
Looks like you are missing the Sha256 value, SafetyPanda. You can generate the value by downloading the installer and running winget hash.
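For example, a hedged sketch (the installer file name is hypothetical): run winget hash .\installer-file.exe against the downloaded installer and copy the printed SHA256 value into the manifest's InstallerSha256 field.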
3.0 just released, so this version is now outdated.
|
2025-04-01T04:34:44.569569
| 2022-08-11T15:59:42
|
1336169991
|
{
"authors": [
"cjwgriesel",
"ericfrederich",
"hideyukn88",
"onomatopellan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8659",
"repo": "microsoft/wslg",
"url": "https://github.com/microsoft/wslg/issues/803"
}
|
gharchive/issue
|
Uncontrollable keyboard repeat when using NEDIT
Environment
Microsoft Windows [Version 10.0.19044.1889]
Ubuntu 20.04.4 LTS
NAME STATE VERSION
Ubuntu-20.04 Running 2
I was using NEDIT with Ubuntu and WSL 2. After a while I would get uncontrollable key repeats. It only occurs if I have held a key down long enough for key repeat to kick in, and then it is uncontrollable. It only happens sometimes, but when it does it's a nightmare!!
Cut and paste from an input file with uncontrollable keyboard repeat. Most of the time, starting another instance of NEdit will stop it, but sometimes that is not enough...
= 42.45 * 40.0 * 20.0 * 17.0 * 20.0 *21111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
NEdit released by Debian (1:5.7-2)
@cjwgriesel, thanks for reporting the issue. I have been trying to reproduce this issue at my end, but no luck. Do you experience this issue with nedit only? Thanks!
Hello Hideyuki,
It sometimes happens when using NEdit, and when it does it is extremely irritating. I have only experienced this with NEdit. When it happens I try to start another NEdit instance, and if successful that stops it, but afterwards it is ever so sensitive to starting uncontrolled repeat display of a character again. I never experienced this on Linux, only Ubuntu under WSL2.
Charles
Microsoft Windows [Version 10.0.19044.1889]
Remember WSLg will only work on Windows 11.
For what it's worth, I was a long-time NEdit user on Solaris, RHEL 4, 5, 6, etc. Eventually I stopped using it because it was difficult to get working on modern Linux distros. It always looked bad, didn't play well with fonts, etc. I'm pretty sure that's an abandoned project.
Hello,
I had to recompile NEdit and it works fine on my personal Linux and Unix systems, but I did have to use Open Motif to compile it. It is powerful and enables me to do what I need to do. Under WSL2 it suffers from this extremely annoying keyboard repeat problem. I am using it with WSL2 and X11, as it is one of the optional editors offered on our systems, not the newer experimental Wayland. Personally I use it on Linux with no problems, and I have also used it under various Unix distros with no problems. I note that other software also suffers from keyboard repeat problems under WSL2. This issue did occur under WSL1. At some point in the future our systems will be upgraded from Windows 10 to Windows 11, so hopefully this issue will disappear? NEdit is in the public domain and there are so many users, I am sure it will survive... or maybe I will become extinct... sorry, I am biased against Microsoft because of past behaviours.
I must apologise for not replying earlier to the last email as I have been very busy.
Regards,
Charles JW Griesel
Hello,
NEdit has morphed into XNEdit, forked from NEdit 5.7. I did not know that, so I will take a look at it; it is actively being developed and supported.
Regards,
Charles JW Griesel
@cjwgriesel, if this can be easily reproducible at your end, would you please help us by taking core dump while the key repeat is happening following steps at https://github.com/microsoft/wslg/issues/803#issuecomment-1248582944, and share with us, thanks!
|
2025-04-01T04:34:44.571210
| 2018-11-02T22:00:09
|
376981059
|
{
"authors": [
"gavinbarron"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8660",
"repo": "microsoftgraph/dotnetcore-console-sample",
"url": "https://github.com/microsoftgraph/dotnetcore-console-sample/pull/4"
}
|
gharchive/pull-request
|
Added instructions on permissions to read users and changed the output written to the console to display the UPN of the user returned
I see you have added most of my changes already, closing
|
2025-04-01T04:34:44.572197
| 2024-05-30T12:25:37
|
2325553755
|
{
"authors": [
"v-pughosh",
"v-uansari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8661",
"repo": "microsoftgraph/entra-powershell",
"url": "https://github.com/microsoftgraph/entra-powershell/pull/797"
}
|
gharchive/pull-request
|
Add-EntraAdministrativeUnitMember
Example tested and working.
Example tested and working.
Examples tested and working on dev machine
|
2025-04-01T04:34:44.572993
| 2024-07-10T08:22:42
|
2400084787
|
{
"authors": [
"KenitoInc",
"emmanuel-karanja"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8662",
"repo": "microsoftgraph/entra-powershell",
"url": "https://github.com/microsoftgraph/entra-powershell/pull/910"
}
|
gharchive/pull-request
|
Corrected Get-EntraDomainFederationSettings spelling
Corrected the spelling for 'Get-EntraDomainFederationSettings' which had been misspelled as 'Fedration'.
Fixed by #908
|
2025-04-01T04:34:44.589017
| 2021-12-10T13:00:13
|
1076823663
|
{
"authors": [
"1fabi0",
"nikskhubani"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8663",
"repo": "microsoftgraph/microsoft-graph-comms-samples",
"url": "https://github.com/microsoftgraph/microsoft-graph-comms-samples/issues/513"
}
|
gharchive/issue
|
Bot does not receive update for sharing "Live Presentation"
Describe the issue
When in a Teams call, we can share the screen OR a Live Presentation, as in this screenshot
Sharing screen works fine and we receive correct input on our notification input (vbssParticipant.Direction == MediaDirection.SendOnly)
However, when a participant shares a presentation live, we basically get an update from the call but NOTHING seems to change in the update. Please see logs below.
Code Snippet
NA
Expected behaviour
We need an indication from the call about the Presentation Live update and also we need to understand how to subscribe for those buffers.
Graph SDK (please complete the following information):
Doesn't matter I believe. but can provide if it does :)
Call ID
1a205500-755b-4f81-b97a-c73861eae5ef
Logs
When the live presentation started
{<EMAIL_ADDRESS>"#microsoft.graph.commsNotifications", "value": [ {<EMAIL_ADDRESS>"#microsoft.graph.commsNotification", "changeType": "updated", "resource": "/app/calls/1a205500-755b-4f81-b97a-c73861eae5ef/participants", "resourceUrl": "/communications/calls/1a205500-755b-4f81-b97a-c73861eae5ef/participants", "resourceData": [ {<EMAIL_ADDRESS>"#microsoft.graph.participant", "info": {<EMAIL_ADDRESS>"#microsoft.graph.participantInfo", "identity": {<EMAIL_ADDRESS>"#microsoft.graph.identitySet", "application": {<EMAIL_ADDRESS>"#microsoft.graph.identity", "id": "fea13c03-9c0b-48a9-91f9-9a6dada5f70f", "displayName": "<Hidden>", "tenantId": "17702a9f-84fd-48c4-ade1-7557dc48148c", "identityProvider": "AAD" } }, "endpointType": "default", "endpointId": "2ffa29e5-581b-415e-adb7-1e9c6eb918e8", "clientVersion": "<Hidden>", "participantId": "00fb639f-89f2-47ce-b2f0-fd9a3fe22ba8", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/120604635/k3/120604734/replacement?rt=00fb639f89f247ceb2f0fd9a3fe22ba8&rc=eyJydGlkIjoiMjg6ZmVhMTNjMDMtOWMwYi00OGE5LTkxZjktOWE2ZGFkYTVmNzBmIiwicnRycyI6IkVudGVycHJpc2VQcm94eSIsInJ0cGZzIjp7InJlcXVlc3RlZEludGVyYWN0aXZpdHlMZXZlbCI6ImludGVyYWN0aXZlSGlnaFByaW9yaXR5In19&i=7&e=637745299055035764" }, "mediaStreams": [ {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "audio", "label": "main-audio", "sourceId": "414", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "video", "label": "main-video", "sourceId": "415", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "videoBasedScreenSharing", "label": "applicationsharing-video", "sourceId": "425", "direction": "receiveOnly", "serverMuted": false } ], "isMuted": false, "isInLobby": false, "publishedStates": [], "meetingRole": "presenter", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/120604635/k3/120604734/replacement?rt=00fb639f89f247ceb2f0fd9a3fe22ba8&rc=eyJydGlkIjoiMjg6ZmVhMTNjMDMtOWMwYi00OGE5LTkxZjktOWE2ZGFkYTVmNzBmIiwicnRycyI6IkVudGVycHJpc2VQcm94eSIsInJ0cGZzIjp7InJlcXVlc3RlZEludGVyYWN0aXZpdHlMZXZlbCI6ImludGVyYWN0aXZlSGlnaFByaW9yaXR5In19&i=7&e=637745299055035764", "id": "00fb639f-89f2-47ce-b2f0-fd9a3fe22ba8" }, {<EMAIL_ADDRESS>"#microsoft.graph.participant", "info": {<EMAIL_ADDRESS>"#microsoft.graph.participantInfo", "identity": {<EMAIL_ADDRESS>"#microsoft.graph.identitySet", "user": {<EMAIL_ADDRESS>"#microsoft.graph.identity", "id": "dbee4e5c-954f-4540-aa0c-3849a5cfc961", "displayName": "Nitin", "tenantId": "17702a9f-84fd-48c4-ade1-7557dc48148c", "identityProvider": "AAD" } }, "endpointType": "default", "endpointId": "620ef4e9-ffff-ffff-5ff1-165db40c9133", "platformId": "27", "clientVersion": "CallSignalingAgent (27/<IP_ADDRESS>270//;release_yangsun/2634239_backportCloudURLsR42.2<IP_ADDRESS>;releases/CL2021.R42)", "participantId": "6b4baff7-0834-422a-9d6e-c157f1a54424", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/10/k3/296/replacement?rt=6b4baff70834422a9d6ec157f1a54424&rc=eyJydGlkIjoiODpvcmdpZDpkYmVlNGU1Yy05NTRmLTQ1NDAtYWEwYy0zODQ5YTVjZmM5NjEiLCJydGxpIjoiZW4tZ2IiLCJydHJzIjoiRW50ZXJwcmlzZVByb3h5IiwicnRwZnMiOnsicmVxdWVzdGVkSW50ZXJhY3Rpdml0eUxldmVsIjoiYWx3YXlzSW50ZXJhY3RpdmUifX0%253d&i=7&e=637745299055035764" }, "mediaStreams": [ {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "audio", "label": 
"main-audio", "sourceId": "201", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "video", "label": "main-video", "sourceId": "202", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "videoBasedScreenSharing", "label": "applicationsharing-video", "sourceId": "212", "direction": "receiveOnly", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "data", "label": "data", "sourceId": "213", "direction": "sendReceive", "serverMuted": false } ], "isMuted": false, "isInLobby": false, "publishedStates": [], "meetingRole": "organizer", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/10/k3/296/replacement?rt=6b4baff70834422a9d6ec157f1a54424&rc=eyJydGlkIjoiODpvcmdpZDpkYmVlNGU1Yy05NTRmLTQ1NDAtYWEwYy0zODQ5YTVjZmM5NjEiLCJydGxpIjoiZW4tZ2IiLCJydHJzIjoiRW50ZXJwcmlzZVByb3h5IiwicnRwZnMiOnsicmVxdWVzdGVkSW50ZXJhY3Rpdml0eUxldmVsIjoiYWx3YXlzSW50ZXJhY3RpdmUifX0%253d&i=7&e=637745299055035764", "id": "6b4baff7-0834-422a-9d6e-c157f1a54424" } ] } ] }
When presentation stopped
{<EMAIL_ADDRESS>"#microsoft.graph.commsNotifications", "value": [ {<EMAIL_ADDRESS>"#microsoft.graph.commsNotification", "changeType": "updated", "resource": "/app/calls/1a205500-755b-4f81-b97a-c73861eae5ef/participants", "resourceUrl": "/communications/calls/1a205500-755b-4f81-b97a-c73861eae5ef/participants", "resourceData": [ {<EMAIL_ADDRESS>"#microsoft.graph.participant", "info": {<EMAIL_ADDRESS>"#microsoft.graph.participantInfo", "identity": {<EMAIL_ADDRESS>"#microsoft.graph.identitySet", "application": {<EMAIL_ADDRESS>"#microsoft.graph.identity", "id": "fea13c03-9c0b-48a9-91f9-9a6dada5f70f", "displayName": "<Hidden>", "tenantId": "17702a9f-84fd-48c4-ade1-7557dc48148c", "identityProvider": "AAD" } }, "endpointType": "default", "endpointId": "2ffa29e5-581b-415e-adb7-1e9c6eb918e8", "clientVersion": ""<Hidden>", "participantId": "00fb639f-89f2-47ce-b2f0-fd9a3fe22ba8", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/120604635/k3/120604734/replacement?rt=00fb639f89f247ceb2f0fd9a3fe22ba8&rc=eyJydGlkIjoiMjg6ZmVhMTNjMDMtOWMwYi00OGE5LTkxZjktOWE2ZGFkYTVmNzBmIiwicnRycyI6IkVudGVycHJpc2VQcm94eSIsInJ0cGZzIjp7InJlcXVlc3RlZEludGVyYWN0aXZpdHlMZXZlbCI6ImludGVyYWN0aXZlSGlnaFByaW9yaXR5In19&i=7&e=637745299055035764" }, "mediaStreams": [ {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "audio", "label": "main-audio", "sourceId": "414", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "video", "label": "main-video", "sourceId": "415", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "videoBasedScreenSharing", "label": "applicationsharing-video", "sourceId": "425", "direction": "receiveOnly", "serverMuted": false } ], "isMuted": false, "isInLobby": false, "publishedStates": [], "meetingRole": "presenter", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/120604635/k3/120604734/replacement?rt=00fb639f89f247ceb2f0fd9a3fe22ba8&rc=eyJydGlkIjoiMjg6ZmVhMTNjMDMtOWMwYi00OGE5LTkxZjktOWE2ZGFkYTVmNzBmIiwicnRycyI6IkVudGVycHJpc2VQcm94eSIsInJ0cGZzIjp7InJlcXVlc3RlZEludGVyYWN0aXZpdHlMZXZlbCI6ImludGVyYWN0aXZlSGlnaFByaW9yaXR5In19&i=7&e=637745299055035764", "id": "00fb639f-89f2-47ce-b2f0-fd9a3fe22ba8" }, {<EMAIL_ADDRESS>"#microsoft.graph.participant", "info": {<EMAIL_ADDRESS>"#microsoft.graph.participantInfo", "identity": {<EMAIL_ADDRESS>"#microsoft.graph.identitySet", "user": {<EMAIL_ADDRESS>"#microsoft.graph.identity", "id": "dbee4e5c-954f-4540-aa0c-3849a5cfc961", "displayName": "Nitin", "tenantId": "17702a9f-84fd-48c4-ade1-7557dc48148c", "identityProvider": "AAD" } }, "endpointType": "default", "endpointId": "620ef4e9-ffff-ffff-5ff1-165db40c9133", "platformId": "27", "clientVersion": "CallSignalingAgent (27/<IP_ADDRESS>270//;release_yangsun/2634239_backportCloudURLsR42.2<IP_ADDRESS>;releases/CL2021.R42)", "participantId": "6b4baff7-0834-422a-9d6e-c157f1a54424", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/10/k3/296/replacement?rt=6b4baff70834422a9d6ec157f1a54424&rc=eyJydGlkIjoiODpvcmdpZDpkYmVlNGU1Yy05NTRmLTQ1NDAtYWEwYy0zODQ5YTVjZmM5NjEiLCJydGxpIjoiZW4tZ2IiLCJydHJzIjoiRW50ZXJwcmlzZVByb3h5IiwicnRwZnMiOnsicmVxdWVzdGVkSW50ZXJhY3Rpdml0eUxldmVsIjoiYWx3YXlzSW50ZXJhY3RpdmUifX0%253d&i=7&e=637745299055035764" }, "mediaStreams": [ {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "audio", "label": 
"main-audio", "sourceId": "201", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "video", "label": "main-video", "sourceId": "202", "direction": "sendReceive", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "videoBasedScreenSharing", "label": "applicationsharing-video", "sourceId": "212", "direction": "receiveOnly", "serverMuted": false }, {<EMAIL_ADDRESS>"#microsoft.graph.mediaStream", "mediaType": "data", "label": "data", "sourceId": "213", "direction": "sendReceive", "serverMuted": false } ], "isMuted": false, "isInLobby": false, "publishedStates": [], "meetingRole": "organizer", "replacementLink": "https://cc-jpea-04.cc.skype.com/cc/v1/callParticipant/f262994d-4c97-4ba7-b7af-8f8e021fb514/10/k3/296/replacement?rt=6b4baff70834422a9d6ec157f1a54424&rc=eyJydGlkIjoiODpvcmdpZDpkYmVlNGU1Yy05NTRmLTQ1NDAtYWEwYy0zODQ5YTVjZmM5NjEiLCJydGxpIjoiZW4tZ2IiLCJydHJzIjoiRW50ZXJwcmlzZVByb3h5IiwicnRwZnMiOnsicmVxdWVzdGVkSW50ZXJhY3Rpdml0eUxldmVsIjoiYWx3YXlzSW50ZXJhY3RpdmUifX0%253d&i=7&e=637745299055035764", "id": "6b4baff7-0834-422a-9d6e-c157f1a54424" } ] } ] }
As this is shared via an iframe, the bot doesn't have access to it. To avoid missing things in recordings you can disable share to stage (this is what the feature with the iframe is called) in the settings of your Teams tenant.
Thank you @1fabi0 for your reply
Is there any article from Microsoft which explains what you said about iFrame for a live presentation? That is just to keep our clients updated on why it cannot be done?
Right now I can't find anything that states it directly, but you can already infer this from these two articles:
https://techcommunity.microsoft.com/t5/microsoft-teams-blog/beyond-sharing-your-screen-interactive-collaboration-with-apps/ba-p/2709595
https://support.microsoft.com/en-us/office/sshare-powerpoint-slides-in-a-teams-meeting-fc5a5394-2159-419c-bc59-1f64c1f4e47
There may also be comments from Microsoft employees confirming this in the issues. They may have documentation somewhere that explains it, but it may be internal, or I would have to search the whole web for it.
Thank you @1fabi0 for a quick response again 🙏
The link you shared for support.microsoft is broken, here is correct one.
Also I see what you mean, they have a note "Meeting recordings won't capture any videos or animations embedded in PowerPoint Live presentations." in that article which says 1000 words :)
I got my answer so closing this ticket.
|
2025-04-01T04:34:44.703255
| 2019-06-11T18:05:00
|
454829815
|
{
"authors": [
"ksikorsk",
"saraswat40"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8664",
"repo": "microsoftgraph/microsoft-graph-comms-samples",
"url": "https://github.com/microsoftgraph/microsoft-graph-comms-samples/issues/81"
}
|
gharchive/issue
|
Error while trying to publish the audio video bot sample
I am trying to publish the AudioVideo sample as described here: https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/master/Samples/V1.0Samples/LocalMediaSamples/AudioVideoPlaybackBot
I am on Deploy, Step 4 but I get the following error:
Error CS0234 The type or namespace name 'Common' does not exist in the namespace 'Sample' (are you missing an assembly reference?) FrontEnd C:\Users\admin\source\repos\microsoft-graph-comms-samples\Samples\V1.0Samples\LocalMediaSamples\AudioVideoPlaybackBot\FrontEnd\Bot\Bot.cs 23 Active
I don't have a background in c# development and I don't know how to troubleshoot this. Please help.
Most of my problems were related to Windows 7. Once I moved to Windows 10, the solution published successfully to the Azure Cloud Service. For some reason the cloud service is not reachable. I don't know why (yet).
I am able to send a POST request to my application at the joinCall endpoint, but consistently get this error:
Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException: AADSTS7000215: Invalid client secret is provided.
We have regenerated/rebuilt/republished the build several times but not able to get past this issue.
Did you put a break point to ensure that the client secret is as expected? Depending on the sample you may have to "escape" your secret just so it is stored/read correctly.
Yes, this one gets deployed in Azure and runs in Azure so I'm not sure if the client secret is as expected. But we are aware of issues with '+' and '/' characters in the secret and I tried with secrets without those characters.
I expect that the error is accurate, and the secret is getting somehow adjusted. You can attach your debugger to the Azure deployed code (you have to publish a Debug build so that breakpoints are accurate). Alternatively you can run locally if you have that set up.
If neither of the above work for you, try some online xml encoder and save the encoded secret within the config file.
I generated another secret last night without the '+' and '/' characters and it didn't seem to have done anything. But as of few hours ago, I no longer get this error. I get a call Id in response as expected. Now I am trying to change the role to viewer I get this error:
Status Code: NotFound
Microsoft.Graph.Communications.Core.Exceptions.ServiceException: Code: generalException
Message: Unexpected exception returned from the service.
Status Code: NotFound
Scenario Id: 0c7e8c37-0700-43a0-a340-af4bad280830
---> Microsoft.Graph.Communications.Core.Exceptions.ServiceException: Code: generalException
Message: Unexpected exception returned from the service.
Status Code: NotFound
at Microsoft.Graph.Communications.Client.Transport.GraphAuthClient.d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Graph.Communications.Client.Transport.GraphAuthClient.d__42.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Graph.Communications.Client.Transport.GraphClientWrapper.<ValidateAndWrapAsync>d__121.MoveNext()
--- End of inner exception stack trace ---
at Microsoft.Graph.Communications.Client.Transport.GraphClientWrapper.d__121.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Graph.Communications.Client.Transport.GraphClientWrapper.<SendAsync>d__112.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Graph.Communications.Calls.StatefulCall.d__27.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Sample.AudioVideoPlaybackBot.FrontEnd.Bot.Bot.d__23.MoveNext() in C:\temp2\microsoft-graph-comms-samples\Samples\V1.0Samples\LocalMediaSamples\AudioVideoPlaybackBot\FrontEnd\Bot\Bot.cs:line 134
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Sample.AudioVideoPlaybackBot.FrontEnd.Http.ChangeScreenSharingRoleController.d__0.MoveNext() in C:\temp2\microsoft-graph-comms-samples\Samples\V1.0Samples\LocalMediaSamples\AudioVideoPlaybackBot\FrontEnd\Http\Controllers\ChangeScreenSharingRoleController.cs:line 47
Can you provide a full (with redacted personal information) http trace of the request? You should be able to get the logs using https://service.com/logs/0c7e8c37-0700-43a0-a340-af4bad280830 endpoint.
Another problem is that even though joinCall returns a callid, the bot is not in the call. I don't see another participant in the participants view either. I will post the logs shortly.
The error is that you are trying to change screen sharing role for a call that does not exist:
[LogProtocolMessage,LoggingMessageHandler.cs(374) 9df204c3-5f5b-48f1-9429-5f5c6f845cfa 1438098] || activityid=9df204c3-5f5b-48f1-9429-5f5c6f845cfa || ltid=1438098 || msgid= || request.msgid=125ce2c1-2a40-4b02-afe8-5b16e632df61 || statuscode=NotFound || fid= || local.msgid=6350b0fc-0fae-4208-9881-79c8e7a3aa98 ||
Response to incoming:: POST /app/calls/431f0500-0d6d-47c7-b2f2-e58f3c860149/Microsoft.Graph.changeScreenSharingRole
404 Not Found
Client-Request-ID: 125ce2c1-2a40-4b02-afe8-5b16e632df61
Scenario-ID: 0c7e8c37-0700-43a0-a340-af4bad280830
X-Microsoft-Skype-Chain-ID: 9df204c3-5f5b-48f1-9429-5f5c6f845cfa
OData-Version: 4.0
Date: Tue, 18 Jun 2019 17:50:13 GMT
Server: Microsoft-HTTPAPI/2.0
Content-Length: 53
Content-Type: application/json; odata.metadata=minimal
{
"error": {
"code": "9997",
"message": "Call not found."
}
}
Here are the logs:
$>2019-06-18T19:34:30.8224571Z Info: LoggingMessageHandler.cs:167 SendAsync
TransactionDirection: Incoming
TraceType: HttpResponse
ResponseTime: 230
request: POST https://audiovideo.cloudapp.net/calls/441f0500-4361-4a26-8722-65fb3c1b7b3a/changeRole
response: 404 NotFound
headers:
Transfer-Encoding: chunked
request-id: 603a0aaa-b699-4add-ac3d-8fca3d85870b
client-request-id: a7664838-d4b5-4447-8683-6e17de118757
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"West US 2","Slice":"SliceC","Ring":"4","ScaleUnit":"000","RoleInstance":"AGSFE_IN_4","ADSiteName":"WUS2"}}
scenario-id: 370a43a2-fa21-4f28-8a8a-cd6fe6b46e0e
Duration: 213.2369
Strict-Transport-Security: max-age=31536000
Cache-Control: private
Date: Tue, 18 Jun 2019 19:34:30 GMT
Content-Type: text/plain; charset=utf-8
$>2019-06-18T19:34:30.8224571Z Error: GraphAuthClient.cs:114 SendHttpRequestAsync
ScenarioId: 370a43a2-fa21-4f28-8a8a-cd6fe6b46e0e
AppId: b22ad71e-224e-48b2-a1e0-15d2a20dadcf
AppName: AudioVideoPlaybackBot
CallId: 441f0500-4361-4a26-8722-65fb3c1b7b3a
TenantId: xxxxxxx
TransactionDirection: Outgoing
TraceType: HttpResponse
ResponseTime: 226
request: POST https://graph.microsoft.com/beta/app/calls/441f0500-4361-4a26-8722-65fb3c1b7b3a/microsoft.graph.changeScreenSharingRole
response: 404 NotFound
headers:
Transfer-Encoding: chunked
request-id: 603a0aaa-b699-4add-ac3d-8fca3d85870b
client-request-id: a7664838-d4b5-4447-8683-6e17de118757
x-ms-ags-diagnostic: {"ServerInfo":{"DataCenter":"West US 2","Slice":"SliceC","Ring":"4","ScaleUnit":"000","RoleInstance":"AGSFE_IN_4","ADSiteName":"WUS2"}}
scenario-id: 370a43a2-fa21-4f28-8a8a-cd6fe6b46e0e
Duration: 213.2369
Strict-Transport-Security: max-age=31536000
Cache-Control: private
Date: Tue, 18 Jun 2019 19:34:30 GMT
Content-Type: application/json
{
"error": {
"code": "generalException",
"message": "Unexpected exception returned from the service.\r\nStatus Code: NotFound"
},
"responseHeaders": [
{
"key": "Transfer-Encoding",
"value": [
"chunked"
]
},
{
"key": "request-id",
"value": [
"603a0aaa-b699-4add-ac3d-8fca3d85870b"
]
},
{
"key": "client-request-id",
"value": [
"a7664838-d4b5-4447-8683-6e17de118757"
]
},
{
"key": "x-ms-ags-diagnostic",
"value": [
"{"ServerInfo":{"DataCenter":"West US 2","Slice":"SliceC","Ring":"4","ScaleUnit":"000","RoleInstance":"AGSFE_IN_4","ADSiteName":"WUS2"}}"
]
},
{
"key": "scenario-id",
"value": [
"370a43a2-fa21-4f28-8a8a-cd6fe6b46e0e"
]
},
{
"key": "Duration",
"value": [
"213.2369"
]
},
{
"key": "Strict-Transport-Security",
"value": [
"max-age=31536000"
]
},
{
"key": "Cache-Control",
"value": [
"private"
]
},
{
"key": "Date",
"value": [
"Tue, 18 Jun 2019 19:34:30 GMT"
]
}
],
"statusCode": 404,
"message": "Code: generalException\r\nMessage: Unexpected exception returned from the service.\r\nStatus Code: NotFound\r\n",
"data": {},
"targetSite": {
"Name": "MoveNext",
"AssemblyName": "Microsoft.Graph.Communications.Client, Version=<IP_ADDRESS>1, Culture=neutral, PublicKeyToken=31bf3856ad364e35",
"ClassName": "Microsoft.Graph.Communications.Client.Transport.GraphAuthClient+d__5",
"Signature": "Void MoveNext()",
"Signature2": "System.Void MoveNext()",
"MemberType": 8,
"GenericArguments": null
},
"stackTrace": " at Microsoft.Graph.Communications.Client.Transport.GraphAuthClient.d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at Microsoft.Graph.Communications.Client.Transport.GraphAuthClient.d__4`2.MoveNext()",
"source": "Microsoft.Graph.Communications.Client",
"hResult": -2146233088
}
Yes, I thought since the call was from yesterday I should try to create a new meeting and a new call. The logs above are from the new call I just created.
Have you created a call first? Do you have the logs for the call creation?
Looks like our service cannot reach your bot:
Request
POST https://audiovideo.cloudapp.net:10100/api/calling HTTP/1.1
Authorization: (redacted)
X-Microsoft-Skype-Chain-ID: adfd1055-8701-4fdd-9c09-cea0bf0c6c7b
Scenario-ID: 370a43a2-fa21-4f28-8a8a-cd6fe6b46e0e
Accept: application/json
User-Agent: Microsoft-Skype/3.0,(Calling/1.0)
X-Microsoft-Skype-Message-ID: c7984298-b3a4-4a61-8fef-c86d65cca91d
Content-Type: application/json; charset=utf-8
{
<EMAIL_ADDRESS>"#microsoft.graph.commsNotifications",
"value": [
{
<EMAIL_ADDRESS>"#microsoft.graph.commsNotification",
"changeType": "updated",
"resource": "/app/calls/441f0500-4361-4a26-8722-65fb3c1b7b3a",
"resourceData": {
<EMAIL_ADDRESS>"#microsoft.graph.call",
"state": "establishing",
"chatInfo": {
<EMAIL_ADDRESS>"#microsoft.graph.chatInfo",
"threadId"<EMAIL_ADDRESS> "messageId": "0"
},
"meetingInfo": "(redacted)"
}
}
]
}
Response
<no response>
Cannot establish media either... and so call is terminated. There may be something wrong with your bot deployment.
Request
HTTP/1.1 500
X-Microsoft-Skype-Processing-Instance: Microsoft.Skype.MediaController.FrontEnd_IN_4
X-Microsoft-Skype-Message-ID: 6f1580e3-8a9c-4afe-8822-bb4bd0acf5b0
X-Microsoft-Skype-Chain-ID: adfd1055-8701-4fdd-9c09-cea0bf0c6c7b
X-Microsoft-Skype-Proxy-Instance: Microsoft.Skype.MediaController.FrontEnd_IN_9
Date: Tue, 18 Jun 2019 19:31:03 GMT
Server: Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0
Content-Length: 209
Content-Type: application/json; charset=utf-8
{
"addParticipantResponse": null,
"debugContent": {
"correlationId": "adfd1055-8701-4fdd-9c09-cea0bf0c6c7b"
},
"reason": "MP resources creation failure",
"mpUri": "net.tcp://audiovideo.cloudapp.net:20100/MediaProcessor"
}
The bot deployment is handled by visual studio and visual studio doesn't show any errors. We didn't change the source code or any configuration files except those mentioned in the readme. Also port 443 is reachable. I'm not sure how to troubleshoot this.
Go to azure... make sure your application is running and no errors are reported. Ensure that the URI matches that of your configuration.
Looks like your SSL cert is invalid:
The diagnose-and-solve-problems link didn't help. We are using a self-signed certificate because we have not been able to generate a CA-signed certificate. Before the solution is deployed, we don't have a machine running on that domain name, so it is difficult to get the certificate signed by the CA. After the solution is deployed, I have tried to enable IIS, but for some reason IIS is not accessible from an external IP address even with the Windows firewall disabled.
Generally clients have their own domain, add a CName entry to resolve the *.cloudapp.net domain, and create an SSL cert for their own domain. I've even heard of cases where the cert was managed by Let's Encrypt. I'm not sure why you need to access IIS to create a cert signed by CA.
Create a cloud service (classic) in Azure. Get your "Site URL" from Azure portal, this will be your DNS name and CN name for later configuration, for example: bot.contoso.com.
Set up SSL certificate and upload to the cloud service
Create a wildcard certificate for your service. This certificate should not be a self-signed certificate. For instance, if your bot is hosted at bot.contoso.com, create the certificate for *.contoso.com.
Upload the certificate to the cloud service.
Copy the thumbprint for later.
https://github.com/microsoftgraph/microsoft-graph-comms-samples/blob/master/Samples/V1.0Samples/LocalMediaSamples/AudioVideoPlaybackBot/README.md
Thanks for your help with this. I got the sample to work after creating my own domain and a signed SSL.
|
2025-04-01T04:34:45.476431
| 2024-01-12T02:12:00
|
2077938329
|
{
"authors": [
"phausleitner"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8665",
"repo": "microweber/microweber",
"url": "https://github.com/microweber/microweber/issues/1051"
}
|
gharchive/issue
|
Can't create category
Hello. I'd like to create some categories for posts, but when I try, I can't, because a parent category or site is required for the tag...
I can't supply a parent category because I can't create a category, and I can't supply a parent site because the search for sites in that field doesn't work (nor does typing the full name of the current page manually).
Using version [2.0.5.]
Thanks
In 2.0.7 it seems to be working
In 2.0.9 the problem returns... I don't really get the logic; why is an existing category or page needed for a new category? :/
I think I got it... A Dynamic page is needed first for that. Personally it's quite strange to me; I will play with it.
|
2025-04-01T04:34:45.497414
| 2023-05-11T17:24:23
|
1706245557
|
{
"authors": [
"Ultima14",
"mienaiyami",
"zeoint"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8666",
"repo": "mienaiyami/yomikiru",
"url": "https://github.com/mienaiyami/yomikiru/issues/134"
}
|
gharchive/issue
|
epub continue reading
Description of the new feature / enhancement
Just as manga has a page number in Continue Reading, epub should have a progress indicator in the same way.
For example, I closed a manga chapter at page 5/30.
When I open it again using the continue reading tab, the app will start the chapter at page 5.
Now,
When I close an epub file at 30% and access it again using continue reading, it shows 30% on the page indicator, but it always starts at the top of the screen.
So, I request for it to work the same way as for manga.
So, when I close an epub file at x%, it should start from x% and not from the top.
Scenario when this would be used?
While opening epub's from continue reading tab.
Supporting information
No response
I was planning to add this, but it's not the same as using a page number. You can check yourself: when you change the size or the font size, the progress changes even though the same content is in view.
I'm planning to use the same system as bookmarks, which goes against the logic of the app.
I recommend you use the bookmark feature until then, as it records the content at the top of the screen instead of the progress.
In the case of Manga Reader, the page number is stored at the highest level and is updated every time the page changes, so on closing the reader/window the app is able to get this page number and update the history.
But in the epub reader, to auto-scroll to the exact line/paragraph I need to store its special ID (its CSS querySelector), and since that is not as simple as a plain page number it's not practical to update it on each scroll. After the reader closes, I can't check which line was at the top of the reader, so this is hard to add.
With bookmarks, the reader stays open when adding a bookmark, so it works there.
The solution might not be that hard, but I haven't had much time to work on it.
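For illustration only, a minimal TypeScript sketch of one possible approach (not the app's actual code; the data-id attribute and the reader element are assumptions):

function captureTopElementId(reader: HTMLElement): string | null {
  // elementFromPoint returns the topmost element at the given viewport coordinates,
  // so sampling just below the reader's top edge gives the line currently "on top"
  const rect = reader.getBoundingClientRect();
  const el = document.elementFromPoint(rect.left + rect.width / 2, rect.top + 1);
  return el?.closest<HTMLElement>("[data-id]")?.dataset.id ?? null;
}

function restoreTopElement(reader: HTMLElement, id: string): void {
  // scroll the previously captured element back to the top of the view
  reader.querySelector<HTMLElement>(`[data-id="${id}"]`)?.scrollIntoView({ block: "start" });
}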
I've the same problem. It doesn't always work when I exit the epub.
Please report if this is still an issue after v2.15.0.
Not working for me, it starts at the top of the chapter instead of the percentage
@Ultima14 Will be helpful if you can record and show, or provide that epub to test. Are you on latest version, v2.15.4?
|
2025-04-01T04:34:45.536702
| 2019-07-24T14:18:27
|
472310750
|
{
"authors": [
"miguelcobain",
"rahst12"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8667",
"repo": "miguelcobain/jalphabet",
"url": "https://github.com/miguelcobain/jalphabet/issues/6"
}
|
gharchive/issue
|
Next Letter?
Is there a way to ask the Alphabet, I'm at Letter 'D', give me the next letter: 'E'?
Or, if I made a custom alphabet of '0123456789abcdefghijklmnopqrstuvwxyz', could I keep calling a getNext() type function to go from 009 -> 00a, and ultimately, 00z to 010?
I guess the only way to do that at the moment is:
convert the word to a long (using the toLong() method),
add 1
do a parseLong again
repeat until word is the one you desire
It would be great to have arithmetic operations on words directly. I don't maintain this library anymore. :/
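For what it's worth, a purely hypothetical Java sketch of that workaround (the Alphabet type and the exact toLong/parseLong signatures are assumptions, not the library's verified API):

public final class NextWord {
    // step 1: word -> long; step 2: add 1; step 3: long -> word
    public static String next(Alphabet alphabet, String word) {
        long value = alphabet.toLong(word);
        return alphabet.parseLong(value + 1);
    }
}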
|
2025-04-01T04:34:45.558958
| 2019-04-09T22:30:36
|
431224566
|
{
"authors": [
"ghxstdev",
"miguelpruivo",
"whigger85"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8668",
"repo": "miguelpruivo/plugins_flutter_file_picker",
"url": "https://github.com/miguelpruivo/plugins_flutter_file_picker/issues/68"
}
|
gharchive/issue
|
Multiple file extensions when using getMultiFilePath()
Hey, I've seen 2 issues before me, #42 and #32 .
The questions are both closed but I can't seem to find a mention of this feature being added in the changelog.
For clarification, I'm looking to pick multiple files of a pre-defined extension (for example .jpg and .mp4).
Maybe I have missed it somewhere, if so, apologies.
Hi. If you want to pick images or videos you can use FileType.IMAGE or FileType.VIDEO respectively. However, picking both images (jpg) and video (mp4) at the same time isn't currently supported, so for that I suggest you use FileType.ALL, which lets you pick anything, and then quickly check whether any of the picked files are not jpg/mp4.
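As a rough illustration (how the paths are obtained, e.g. via getMultiFilePath with FileType.ALL, is assumed here):

// keep only the picked files whose extension is jpg/jpeg/mp4
List<String> keepSupported(List<String> paths) {
  const allowed = {'jpg', 'jpeg', 'mp4'};
  return paths
      .where((p) => allowed.contains(p.split('.').last.toLowerCase()))
      .toList();
}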
Thank you for the suggestion.
Thank you for the clarification, I will follow your advice and use FileType.ALL for the time being, I hope to see multiple file extensions implemented in the future. If I may, I would like to suggest the solution discussed in #32 or a delimiter such as 'jpg|mp4' so it doesn't break existing code.
Thank you for the great work.
I will take this into consideration if possible for both platforms.
As a note, I always try to avoid making a feature only available to a specific platform, to keep the experience as seamless as possible. Android, in general, is more flexible and supports more options than iOS; however, it is also more tricky to handle for the available scenarios.
So if I sometimes don't add a requested feature, it may be because it isn't supported by one platform even if it's supported by the other.
Moved to #99.
It would be great to define multiple file extensions in combination with the "FileType.CUSTOM" parameter when selecting a single file. Of course you can check the selected file type afterwards, but it would be a better user experience if the user wouldn't even be able to select an unsupported file type in the first place.
@whigger85 thank you for the input, it surely will be added in a future update if possible.
|
2025-04-01T04:34:45.573177
| 2023-10-15T19:39:03
|
1944025273
|
{
"authors": [
"OmarHegazy93",
"mikaelacaron"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8669",
"repo": "mikaelacaron/Basic-Car-Maintenance",
"url": "https://github.com/mikaelacaron/Basic-Car-Maintenance/pull/177"
}
|
gharchive/pull-request
|
FEATURE - implement realtime alert functionality
What it Does
Wires the implemented UI for the realtime alert system to the server so that it gets updated each time a new alert arrives
Closes #107
Describe what your change does
Add functionality to the realtime alert system by:
listening to the server for newly added alerts
adding a query to get only ONE alert with isOn: true; if there are previously viewed alerts, appending to the query to exclude all alerts whose ids match the persisted ids (a rough sketch of this query shape follows below)
if the alert is new, presenting it and saving its id locally so that it gets filtered out the next time new alerts are received
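Purely as an illustration of that query shape, a hypothetical Swift sketch (the collection name, field names, and helper signature are assumptions, not the app's actual code):

import FirebaseFirestore

// Listen for one active alert, skipping ids the user has already seen.
func listenForNewAlert(excluding viewedIDs: [String],
                       onAlert: @escaping (DocumentSnapshot) -> Void) -> ListenerRegistration {
    var query: Query = Firestore.firestore()
        .collection("alerts")
        .whereField("isOn", isEqualTo: true)
        .limit(to: 1)
    if !viewedIDs.isEmpty {
        // Firestore's notIn filter accepts at most 10 values; a real implementation
        // would need to handle longer lists (for example by filtering client-side).
        query = query.whereField(FieldPath.documentID(), notIn: viewedIDs)
    }
    return query.addSnapshotListener { snapshot, _ in
        guard let doc = snapshot?.documents.first else { return }
        onAlert(doc)
    }
}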
How I Tested
Install the app and run it for the first time
An alert should appear on the screen
Dismiss it
Reopen the app
Notice the behavior
Notes
When running the app for the second time, there will be an error from Firebase in the console as shown in the screenshot below. As per the answer here, @mikaelacaron should open that link so that it gets created automatically afterwards. (Please let me know if there is something else I should do about it.)
Screenshot
Add a screenshot of your new feature! OR show a screen recording of it in action. (On the simulator press Cmd + R to record, and the stop button when done). This video should be no longer than 30 seconds.
showing \n\n rather than as line breaks
the title is clipped
|
2025-04-01T04:34:45.581871
| 2016-05-01T09:37:45
|
152190932
|
{
"authors": [
"mike820324"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8670",
"repo": "mike820324/microProxy",
"url": "https://github.com/mike820324/microProxy/issues/37"
}
|
gharchive/issue
|
Change config file without reload proxy
Currently, we will need to restart the proxy whenever we change the config file.
Not sure if it is possible to hot-reload the modified config file without restarting the proxy.
It is just a nice-to-have feature, but not necessary.
After some offline discussion, this feature will not be implemented. Closed
|
2025-04-01T04:34:45.612387
| 2024-09-10T20:41:34
|
2517748106
|
{
"authors": [
"maltfield"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8671",
"repo": "mikegleasonjr/ansible-role-firewall",
"url": "https://github.com/mikegleasonjr/ansible-role-firewall/issues/43"
}
|
gharchive/issue
|
⚠ SECURITY ISSUE: using ipset breaks persistance
This ticket is a bug report for an issue where this ansible role will leave a server with no firewall rules on boot if a user happens to add a rule with ipset to their firewall rules on Debian.
⚠ WARNING: Currently, if a user adds a firewall rule that uses the -m set module to this ansible role's config (firewall_v4_default_rules or firewall_v6_default_rules, etc), then on the next server reboot the server will come up without any firewall rules, leaving the server very exposed and at risk!
Problem
This module installs iptables-persistent but not ipset-persistent. As a result, when a system (whose firewall is managed by this ansible role) reboots, the iptables.v4.generated script will exit with an error if a user has used this ansible role to define an iptables rule using the -m set module:
iptables vA.B.C (legacy): Set example.com doesn't exist.
The result is that the firewall table will be completely empty on boot, leaving the server very exposed and at risk!
ipset what?
What is the ipset module? I'll quote from their website:
IP sets are a framework inside the Linux kernel, which can be administered by the ipset utility. Depending on the type, an IP set may store IP addresses, networks, (TCP/UDP) port numbers, MAC addresses, interface names or combinations of them in a way, which ensures lightning speed when matching an entry against a set.
If you want to
* store multiple IP addresses or port numbers and match against the collection by iptables at one swoop;
* dynamically update iptables rules against IP addresses or ports without performance penalty;
* express complex IP address and ports based rulesets with one single iptables rule and benefit from the speed of IP sets
then ipset may be the proper tool for you.
Using ipset is very useful to be able to create iptables rules that match against DNS addresses, for example.
steps to reproduce
To reproduce this issue, first install ipset and create a simple ipset.
# install prereqs
apt install ipset dnsutils
ipset create ipv4-github.com hash:ip family inet
# add each resolved address individually (ipset add accepts one entry at a time)
for ip in $(dig -t A +short "github.com" | grep -v '\.$'); do
  ipset add ipv4-github.com "${ip}"
done
# verify
ipset list ipv4-github.com
Update your ansible role variables
firewall_v4_default_rules:
  001 default policies:
    - -P INPUT ACCEPT
    - -P OUTPUT ACCEPT
    - -P FORWARD DROP
  002 allow loopback:
    - -A INPUT -i lo -s <IP_ADDRESS>/8 -d <IP_ADDRESS>/8 -j ACCEPT
  003 block github:
    - -A INPUT -m set --match-set ipv4-github.com src -j DROP
Reboot
reboot
The system will come-up without any iptables rules :(
Solution
The solution to this problem appears to be very simple: just ensure ipset-persistent is installed.
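For example, a minimal sketch on Debian/Ubuntu (assuming the stock netfilter-persistent plugin packages are what's in use):

# install the ipset persistence plugin alongside iptables-persistent
sudo apt install ipset-persistent
# save the current sets so they are restored before the iptables rules at boot
sudo netfilter-persistent save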
See also:
https://stackoverflow.com/questions/35873976/rules-with-ipset-after-restart
https://dhtar.com/make-ipset-and-iptables-configurations-persistent-in-debianubuntu.html
I've created a PR to fix this
https://github.com/mikegleasonjr/ansible-role-firewall/pull/44
|
2025-04-01T04:34:45.622755
| 2016-02-16T17:00:42
|
134036018
|
{
"authors": [
"dylanbeattie",
"hnilsen",
"linde12",
"mhogerheijde",
"mikekelly",
"nerdyglasses",
"tomq42"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8672",
"repo": "mikekelly/hal-browser",
"url": "https://github.com/mikekelly/hal-browser/issues/77"
}
|
gharchive/issue
|
Hal Browser demo doesn't work
Attempting to use
http://haltalk.herokuapp.com/explorer/browser.html
doesn't work.
If I just use GET /users, I get an internal server error.
Attempting to POST /users to create a new user leads to a 404.
Yep, it needs some love. I did try and fix the dependencies but it was taking too long with no results, I intend to rewrite it if/when I get time.
OK, no problem.
Thing is there's not a lot of doc about HAL. Specifically how collections are
represented, which is slightly unusual IMO. So I was trying to find some
examples. I eventually found some examples elsewhere (apigility I think).
Thanks.
On 17 February 2016 at 17:24 Mike Kelly<EMAIL_ADDRESS>wrote:
Yep, it needs some love. I did try and fix the dependencies but it was taking too
long with no results, I intend to rewrite it if/when I get time.
—
Reply to this email directly or view it on GitHub
https://github.com/mikekelly/hal-browser/issues/77#issuecomment-185311858 .
@tomq42 - have you seen the docs at http://phlyrestfully.readthedocs.org/en/latest/halprimer.html#collections ? Some very useful examples in there of how to model collections in HAL.
Yes, but it was so weird that I don't think I understood it originally. But I
think I understand the example a bit better now.
On 18 February 2016 at 10:04 Dylan Beattie<EMAIL_ADDRESS>wrote:
@tomq42 https://github.com/tomq42 - have you seen the docs at
http://phlyrestfully.readthedocs.org/en/latest/halprimer.html#collections ?
Some very useful examples in there of how to model collections in HAL.
—
Reply to this email directly or view it on GitHub
https://github.com/mikekelly/hal-browser/issues/77#issuecomment-185635948 .
Would be nice to see the demo page fixed indeed!
Just dropping in to give my support :-)
Hey yeah. I'd like to help too if there is something I can do.
Hi all. So the rails app is a write-off, it uses a very old database driver. It was something I hacked together in a couple of hours anyway.
I think the best course of action is for someone to create a a new implementation of the hal-talk API. I don't have the bandwidth to do this at the moment. If someone wants to take this on and put it on a public github repo, I'll setup the heroku app so that it auto-deploys from it.
I can dump the existing data, and extract the link relation docs... so if someone does want to take this one let me know and I'll do that
So we're talking about having a small app that implements a few API calls backed by some sort of persistence layer, right? The front-end itself is just (statically served) Javascript, right?
What about the API documentation? That's served (statically) by the backend as well right?
Do you have an opinion on the Tech-stack that will be used?
Not really, preferably something simple and lightweight. Ruby or node.js or similar. It doesn't matter too much to be honest.
I think basically the API just needs some use cases that demonstrates key concepts in a HAL API, ie: traversing links (read, write, delete), embedding resources, etc
I picked "talk" just because it's a relatively simple read/write domain
All right, coming weekend is a busy one for me, so I'll try to make a start next week/weekend. I'll keep you posted.
|
2025-04-01T04:34:45.642394
| 2014-10-27T19:38:31
|
46949601
|
{
"authors": [
"docwhat",
"jrstarke",
"miketheman"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8673",
"repo": "miketheman/nginx",
"url": "https://github.com/miketheman/nginx/issues/293"
}
|
gharchive/issue
|
Can't setup logformats to be used in nginx.conf
If you want custom logformats for access and error logs, you can't do it via the node['nginx']['access_log_options'] and node['nginx']['error_log_options'] attributes because the conf.d directory isn't included until after the logs are defined in nginx.conf.
I was trying to get all my logs to be in JSON format.
I'm not sure what the right solution is, unless you create a special "formats.d" folder or something that gets included ASAP in nginx.conf.
Why not just create the log formats in a file like conf.d/log_formats.conf. You could generate this in a custom chef recipe using a template.
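For instance, a hypothetical conf.d/log_formats.conf with a JSON access-log format could look like this (illustrative only; escape=json requires nginx 1.11.8 or newer):

log_format json_combined escape=json
  '{"time":"$time_iso8601",'
  '"remote_addr":"$remote_addr",'
  '"request":"$request",'
  '"status":$status,'
  '"body_bytes_sent":$body_bytes_sent,'
  '"request_time":$request_time}';

As noted in the next comment, though, this still doesn't help for the error log because of the include ordering.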
@jrstarke it won't work for the error log because it is included after the main config. See the original comment.
Solved by #302
|
2025-04-01T04:34:45.657723
| 2022-02-14T16:35:00
|
1137554485
|
{
"authors": [
"david-broman",
"dlunde"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8674",
"repo": "miking-lang/miking-dppl",
"url": "https://github.com/miking-lang/miking-dppl/pull/87"
}
|
gharchive/pull-request
|
Update rootppl/compile.mc compatibility
Minor PR that updates rootppl/compile.mc after https://github.com/miking-lang/miking/pull/540.
When I compile with the latest version of Miking, I get the following error with the patch when running make clean and make in dppl.
I'm getting the same error (but I didn't get it at the time this PR was created). @larshum Do you know what might have gone wrong here? It's weird to me that it gives this error in miking-dppl, but not when running the tests in miking.
|
2025-04-01T04:34:45.677244
| 2024-01-26T16:49:10
|
2102531723
|
{
"authors": [
"B4nan",
"NeelDigonto",
"jsprw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8675",
"repo": "mikro-orm/mikro-orm",
"url": "https://github.com/mikro-orm/mikro-orm/issues/5174"
}
|
gharchive/issue
|
How to use weighted FTS with multiple fields of similar weight
How to use weighted FTS with multiple fields of similar weight?
For example how to achieve something like this?
BEGIN;
CREATE TABLE log(
id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
level TEXT NOT NULL,
message TEXT NOT NULL,
traceid TEXT NOT NULL,
document TSVECTOR GENERATED ALWAYS AS (
setweight(to_tsvector('english', level), 'A') ||
setweight(to_tsvector('english', traceid), 'A') ||
setweight(to_tsvector('english', message), 'B')
) STORED
);
CREATE INDEX log_document_idx ON log USING GIN (document);
COMMIT;
This query has set both level and traceid to weight A.
Reference:
https://www.postgresql.org/docs/current/textsearch-controls.html
I have gone over the MikroORM documentation.
type WeightedFullTextValue = {
A?: string | null | undefined;
B?: string | null | undefined;
C?: string | null | undefined;
D?: string | null | undefined;
}
export type WeightedFullTextValue = {
[K in FullTextWeight]?: string | null;
};
Here I can only set one column for each weighted level.
If the requested feature is not possible in the short term, I am also fine with stored procedures, and I would even prefer that to the onUpdate way of doing this, since I could see separate UPDATE queries being sent to the db with the onUpdate method.
I tried the following with STORED procedures.
@Entity()
export class Tenant {
@PrimaryKey({ type: 'integer' })
id!: number;
@Property({ type: 'text', nullable: false })
name: string = '';
@Property({ type: 'text', nullable: false })
description = '';
@Property({ type: 'integer', nullable: false })
maxActiveUsers = 20;
@Property({ type: 'timestamptz' })
createdAt = new Date();
@Property({ type: 'timestamptz', onUpdate: () => new Date() })
updatedAt = new Date();
@Index({ type: "GIN" })
@Property({
columnType: `TSVECTOR GENERATED ALWAYS AS setweight(to_tsvector('english', level), 'A') ||
setweight(to_tsvector('english', traceid), 'A') ||
setweight(to_tsvector('english', message), 'B') STORED NOT`,
type: 'unknown',
nullable: true
})
searchableTitle!: string;
}
The last NOT is there since drizzle puts a null attribute due to nullable: true.
I kept the nullable: true since it's a stored procedure and I don't intend to populate that field, but only query it.
But it fails with
[query] begin
[query] insert into "Tenant" ("name", "description", "maxActiveUsers", "createdAt", "updatedAt") values ('yayma', 'tomdo', 20, '2024-01-26T16:21:11.933Z', '2024-01-26T16:21:11.933Z') returning "id" [took 2 ms, 1 row affected]
[query] update "Tenant" set "searchableTitle" = to_tsvector('simple', '''tomdo'':2B ''yayma'':1A'), "updatedAt" = '2024-01-26T16:21:11.937Z' where "id" = 1 returning "searchableTitle" [took 2 ms]
[query] rollback
node:internal/process/promises:289
triggerUncaughtException(err, true /* fromPromise */);
^
DriverException: update "Tenant" set "searchableTitle" = to_tsvector('simple', '''tomdo'':2B ''yayma'':1A'), "updatedAt" = '2024-01-26T16:21:11.937Z' where "id" = 1 returning "searchableTitle" - column "searchableTitle" can only be updated to DEFAULT
I tried the generated field too, but something similar happened there too.
Can you give some alternative for getting it to work?
any idea on this @jsprw?
@B4nan
Also, with something simplier and documented.
Lets say
@Entity()
export class Tenant {
@PrimaryKey({ type: 'integer' })
id!: number;
@Property({ type: 'text' })
name: string = '';
@Property({ type: 'text' })
description = '';
@Property({ type: 'integer' })
maxActiveUsers = 20;
@Property({ type: 'timestamptz' })
createdAt = new Date();
@Property({ type: 'timestamptz', onUpdate: () => new Date() })
updatedAt = new Date();
@Index({ type: 'fulltext' })
@Property({
type: FullTextType,
onCreate: (tenant: Tenant) => ({ A: tenant.name, B: tenant.description }),
onUpdate: (tenant: Tenant) => ({ A: tenant.name, B: tenant.description })
})
searchableTitle!: WeightedFullTextValue;
When I do something like
const rnum = Math.floor(Math.random() * 1_000_000)
const tenant = new Tenant();
tenant.name = `org-${rnum}`;
tenant.description = `comment-${rnum}`;
await lem.persist(tenant).flush();
We get
[query] begin
[query] insert into "Tenant" ("name", "description", "maxActiveUsers", "createdAt", "updatedAt", "searchableTitle") values ('org-274573', 'comment-274573', 20, '2024-01-26T20:13:05.418Z', '2024-01-26T20:13:05.418Z', setweight(to_tsvector('simple', 'org-274573'), 'A') || setweight(to_tsvector('simple', 'comment-274573'), 'B')) returning "id" [took 9 ms, 1 row affected]
[query] commit
But by adding another find query, the update statement is kind of messed up?
const [items, total] = await lem.findAndCount(
Tenant,
{},
{
limit: 10,
offset: 0,
},
);
console.log(items, total);
const rnum = Math.floor(Math.random() * 1_000_000)
const tenant = new Tenant();
tenant.name = `org-${rnum}`;
tenant.description = `comment-${rnum}`;
await lem.persist(tenant).flush();
[query] begin
[query] insert into "Tenant" ("name", "description", "maxActiveUsers", "createdAt", "updatedAt", "searchableTitle") values ('org-694169', 'comment-694169', 20, '2024-01-26T20:17:31.031Z', '2024-01-26T20:17:31.031Z', setweight(to_tsvector('simple', 'org-694169'), 'A') || setweight(to_tsvector('simple', 'comment-694169'), 'B')) returning "id" [took 6 ms, 1 row affected]
[query] update "Tenant" set "searchableTitle" = case when ("id" = 7) then setweight(to_tsvector('simple', 'org-364372'), 'A') || setweight(to_tsvector('simple', 'comment-364372'), 'B') when ("id" = 1) then setweight(to_tsvector('simple', 'org-424799'), 'A') || setweight(to_tsvector('simple', 'comment-424799'), 'B') when ("id" = 2) then setweight(to_tsvector('simple', 'org-863365'), 'A') || setweight(to_tsvector('simple', 'comment-863365'), 'B') when ("id" = 3) then setweight(to_tsvector('simple', 'org-267250'), 'A') || setweight(to_tsvector('simple', 'comment-267250'), 'B') when ("id" = 4) then setweight(to_tsvector('simple', 'org-20977'), 'A') || setweight(to_tsvector('simple', 'comment-20977'), 'B') when ("id" = 5) then setweight(to_tsvector('simple', 'org-769056'), 'A') || setweight(to_tsvector('simple', 'comment-769056'), 'B') when ("id" = 6) then setweight(to_tsvector('simple', 'org-289869'), 'A') || setweight(to_tsvector('simple', 'comment-289869'), 'B') when ("id" = 8) then setweight(to_tsvector('simple', 'org-274573'), 'A') || setweight(to_tsvector('simple', 'comment-274573'), 'B') else "searchableTitle" end, "updatedAt" = case when ("id" = 7) then '2024-01-26T20:17:31.036Z' when ("id" = 1) then '2024-01-26T20:17:31.036Z' when ("id" = 2) then '2024-01-26T20:17:31.036Z' when ("id" = 3) then '2024-01-26T20:17:31.036Z' when ("id" = 4) then '2024-01-26T20:17:31.036Z' when ("id" = 5) then '2024-01-26T20:17:31.036Z' when ("id" = 6) then '2024-01-26T20:17:31.037Z' when ("id" = 8) then '2024-01-26T20:17:31.037Z' else "updatedAt" end where "id" in (7, 1, 2, 3, 4, 5, 6, 8) returning "searchableTitle" [took 6 ms, 8 rows affected]
[query] commit
Why is this updating the searchableTitle field of all the entities? Except for the new entity, everything is unchanged.
For these kinds of complex data types, stored procedures would be much better I guess.
So, if possible, I just want MikroORM to know that I will provide a generated column / stored procedure to the db and the db will populate the field on its own.
So MikroORM should only touch this field while migrating and doing search queries.
Is this something it can do, or is it better to let MikroORM handle it?
But anyway, this approach is kind of unstable: if many rows have the same data, then the db searchableTitle field becomes a mess.
For example if I just remove the onUpdate but keep the onCreate field.
In your example, I see you're using properties that are not defined in your entity.
However, when you add these properties to the entity, you can combine them into a string for weighted value A. You could combine them with spaces in between. If I'm not mistaken the two statements below yield the same result.
select setweight(to_tsvector('english', level), 'A') || setweight(to_tsvector('english', traceid), 'A');
select setweight(to_tsvector('english', level || ' ' || traceid), 'A');
---standalone test
select setweight(to_tsvector('english', 'HELLO'), 'A') || setweight(to_tsvector('english', 'NAME'), 'A');
select setweight(to_tsvector('english', 'HELLO' || ' ' || 'NAME'), 'A');
@Entity()
export class Tenant {
@PrimaryKey({ type: 'integer' })
id!: number;
@Property({ type: 'text', nullable: false })
name: string = '';
@Property({ type: 'text', nullable: false })
level: string = '';
@Property({ type: 'text', nullable: false })
traceid: string = '';
@Property({ type: 'text', nullable: false })
description = '';
@Property({ type: 'integer', nullable: false })
maxActiveUsers = 20;
@Property({ type: 'timestamptz' })
createdAt = new Date();
@Property({ type: 'timestamptz', onUpdate: () => new Date() })
updatedAt = new Date();
@Index({ type: "fulltext" })
@Property({
type: new FullTextType('english'),
nullable: true,
onUpdate: (tenant: Tenant) => ({ A: [tenant.name, tenant.traceid, tenant.level].join(' '), B: tenant.description })
})
searchableTitle!: string;
}
Can you check whether the above example works for you?
For these kinds of complex data types, stored procedures would be much better, I guess.
Yes, but in this example you can control the input to the tsvector from the JS backend (e.g. add additional search terms), while with stored procedures the logic is quite hidden and you can only access database fields.
So MikroORM should only touch this field while migrating and when running search queries.
We have been experiencing this as well. It occurs because the ORM is not able to compare the values (the input value is a lot different from the output value) and therefore considers the value always dirty once you have changed it. When the entity is removed and re-added to the UoW, it works fine. As you can see in the insert query, nothing special happens there.
Why the field looks so messed up is not caused by the queries above your screenshot. If you execute those functions in a single select, you will get the expected result of the to_tsvector function.
@jsprw I tried it
@Entity()
export class Tenant {
  @PrimaryKey({ type: 'integer' })
  id!: number;

  @Property({ type: 'text' })
  name: string = '';

  @Property({ type: 'text' })
  kind = '';

  @Property({ type: 'text' })
  description = '';

  @Property({ type: 'integer' })
  maxActiveUsers = 20;

  @Property({ type: 'timestamptz' })
  createdAt = new Date();

  @Property({ type: 'timestamptz', onUpdate: () => new Date() })
  updatedAt = new Date();

  @Index({ type: "fulltext" })
  @Property({
    type: new FullTextType('english'),
    nullable: true,
    onUpdate: (tenant: Tenant) => ({ A: [tenant.name, tenant.kind].join(' '), B: tenant.description })
  })
  searchableTitle!: string;
}
const rnum = Math.floor(1_000_000)
const tenant = new Tenant();
tenant.name = `org`;
tenant.kind = `gold`;
tenant.description = `comment`;
await lem.persist(tenant).flush();
[query] begin
[query] insert into "Tenant" ("name", "kind", "description", "maxActiveUsers", "createdAt", "updatedAt") values ('org', 'gold', 'comment', 20, '2024-01-26T21:49:29.328Z', '2024-01-26T21:49:29.328Z') returning "id" [took 2 ms, 1 row affected]
[query] commit
Without onCreate, the tsvector field remains null.
As soon as I add an onCreate hook with the same value as onUpdate, the field gets populated, but it sends a complex update query.
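For reference, here is a minimal sketch of the variant being described, i.e. the searchableTitle property from the Tenant entity above with an onCreate hook added alongside onUpdate (assuming @Index/@Property come from @mikro-orm/core and FullTextType from @mikro-orm/postgresql):
@Index({ type: "fulltext" })
@Property({
  type: new FullTextType('english'),
  nullable: true,
  // populate the tsvector when the row is first inserted...
  onCreate: (tenant: Tenant) => ({ A: [tenant.name, tenant.kind].join(' '), B: tenant.description }),
  // ...and keep it in sync on later updates
  onUpdate: (tenant: Tenant) => ({ A: [tenant.name, tenant.kind].join(' '), B: tenant.description })
})
searchableTitle!: string;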
Is that query correct?
The query is correct, but quite massive, as in the example I showed earlier.
The DB state seems okay though
With your example there was no update query going on; after adding in onCreate, the query looks somewhat like the big one shown earlier.
Yeah sorry, the onCreate was meant to be there but you've added it correctly. It looks like the original issue was resolved by this change?
The DB state becomes okay; it's just that the update query is a bit massive.
Yeah, that has to do with what I described before. When you don't load all these entities in the UoW, it won't generate this big of a query (although the query doesn't do any harm and updates the right fields correctly).
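A minimal sketch of that point, assuming an initialized orm instance, the standard EntityManager API (fork, clear) and the Tenant entity above: when previously seeded rows are flushed and the unit of work is cleared before the new entity is persisted, only the new row is touched by the next flush.
const em = orm.em.fork();

// seed a few tenants in one unit of work
for (let i = 0; i < 8; i++) {
  const t = new Tenant();
  t.name = `org-${i}`;
  t.kind = 'gold';
  t.description = `comment-${i}`;
  em.persist(t);
}
await em.flush();
em.clear(); // drop the tracked entities so they no longer take part in change-set computation

// this flush now only inserts the new row; no big CASE/WHEN update is generated
const tenant = new Tenant();
tenant.name = 'org';
tenant.kind = 'gold';
tenant.description = 'comment';
await em.persist(tenant).flush();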
It looks like the original issue was resolved by this change?
Yes, indeed, we can close this issue.
Thank you so much for the prompt reply!
Have a great day!
|
2025-04-01T04:34:45.684119
| 2020-10-24T20:14:26
|
728867898
|
{
"authors": [
"B4nan",
"bahdcoder"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8676",
"repo": "mikro-orm/mikro-orm",
"url": "https://github.com/mikro-orm/mikro-orm/issues/957"
}
|
gharchive/issue
|
Schema generator does not add new indexes when updating
Describe the bug
When running schemaGenerator.updateSchema(), new unique indexes added to the column do not get created.
To Reproduce
Steps to reproduce the behavior:
Create an entity such as Article with a slug string column.
Create the articles table and slug field in the database.
Run schema generator update schema method. Notice the new index is not created.
Expected behavior
A clear and concise description of what you expected to happen.
Additional context
I have only tested this on mysql.
I believe this is the origin of the problem after spending a moment debugging:
// @mikro-orm/knex/SchemaHelper
hasSameIndex(prop, column) {
  if ([core_1.ReferenceType.SCALAR, core_1.ReferenceType.EMBEDDED].includes(prop.reference)) {
    // TODO: Check indexes on scalar fields instead of returning true.
    return true;
  }
  return prop.referencedColumnNames.some(referencedColumnName => {
    return !!column.fk && referencedColumnName === column.fk.referencedColumnName && prop.referencedTableName === column.fk.referencedTableName;
  });
}
Versions
Dependency    Version
node          12.17.0
typescript    3.9.5
mikro-orm     4.2.0
driver        mysql
Hey @B4nan, could you please share an update on this bug? Would really appreciate it, thanks.
Index diffing happens in findIndexDifference method (hasSameIndex is used only for diffing FK indexes), but that was adding missing indexes only for those defined on entity level (those are stored in meta.indexes and meta.uniques), not on property level (stored on meta.properties[prop].index and unique respectively).
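For illustration, a minimal sketch of the property-level unique index from the report above, assuming the standard @mikro-orm/core decorators; this is the case that was previously missed by the diffing because it lives on the property metadata rather than in meta.uniques:
import { Entity, PrimaryKey, Property, Unique } from '@mikro-orm/core';

@Entity()
export class Article {
  @PrimaryKey()
  id!: number;

  // stored as property-level metadata (meta.properties.slug.unique),
  // not in meta.uniques, so adding it to an existing table was not diffed before the fix
  @Property()
  @Unique()
  slug!: string;
}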
|
2025-04-01T04:34:45.700848
| 2024-02-14T14:09:09
|
2134441572
|
{
"authors": [
"SlexAxton",
"milesj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8677",
"repo": "milesj/packemon",
"url": "https://github.com/milesj/packemon/issues/235"
}
|
gharchive/issue
|
.css.ts imports not inlined in output
I am guessing this is related to https://github.com/milesj/packemon/issues/229.
When I import File.css.ts it no longer throws an error, but also it does not inline the import into the index.js output file. So in the eventual build I'm missing all of the css files and they are not inlined.
So my build output will succeed, but esm/index.js will contain stuff like:
import { link } from './Link.css';
import { card } from './Card.css';
But packemon won't output those Link.css.ts and Card.css.ts files.
I haven't been able to dive too far into packemon, but I assume there's some other place where something is assuming that these are real .css files and not .css.ts files.
Ooh, I can also confirm that if I change my .css.ts file to .vss.ts and import it as File.vss - then it is correctly inlined. So I think that should more or less narrow it down to packemon handling .css in a weird way.
I think the fact that './Link.css' is a TS file under the hood is just bad design on their part. I'm not sure there's an easy way around this from Packemon's side, so I think I'll just add a setting to disable the asset stuff.
Potential middle ground would be control over that constant of asset file extensions.
@SlexAxton I just released 4.0.0-alpha.0, can you give that a shot?
I haven't changed any configuration, but by default I still have the same issue:
Is there a flag or something I need to add?
What's the content of one of these css files, so I can test it thoroughly?
Minimum viable .css.ts file is gonna be something like:
style.css.ts
import {style} from '@vanilla-extract/css';
export const redText = style({
color: 'red',
});
Component.tsx
import {redText} from './style.css';
export const MyComponent = (props) => (
<div className={redText} {...props} />
);
packemon.config.js
Then technically we'd also want to make sure these got built at build time. I have this configured, but it fails for the same reason with or without this configuration at the moment.
const {vanillaExtractPlugin} = require('@vanilla-extract/rollup-plugin');
module.exports = {
rollupInput(config) {
config.plugins.unshift(vanillaExtractPlugin());
},
};
Result
The resulting build output without the plugin added should just be that the .css.ts file is inlined as a module, just like other .ts files.
The resulting build output with the plugin should more or less replace the contents of style.css.ts with:
export const redText = 'redtextclass-982398';
and then the rollup file also ends up generating a css output file at like esm/assets/src/style.css.ts.vanilla.css and it would have something like:
.redtextclass-982398 {color:red}
Thanks, will give this a shot.
Figured this out, will release shortly https://github.com/milesj/packemon/pull/240
Released 4.0.0-alpha.2
Fixed in v4
It did indeed work, I think, in my small test! Thanks so much!
Alex Sexton
|
2025-04-01T04:34:45.702292
| 2015-06-30T15:29:47
|
92124946
|
{
"authors": [
"JurajBurian",
"joroKr21"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8678",
"repo": "milessabin/shapeless",
"url": "https://github.com/milessabin/shapeless/issues/423"
}
|
gharchive/issue
|
enum value should be serializable and should also keep referential integrity
// let's say that "enum" WeekDay with value Mon is defined
// then this test should be ok:
val out = new ByteArrayOutputStream()
new ObjectOutputStream(out).writeObject(Mon);
out.close();
val monday = new ObjectInputStream(new ByteArrayInputStream(out.toByteArray)).readObject()
assert(monday == Mon)
See #1271 for an example
|
2025-04-01T04:34:45.726236
| 2024-06-14T09:02:35
|
2352898685
|
{
"authors": [
"LoveEachDay",
"Ywandung-Lyou",
"liyun95"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8679",
"repo": "milvus-io/milvus-docs",
"url": "https://github.com/milvus-io/milvus-docs/issues/2658"
}
|
gharchive/issue
|
[Suggestion]: How to enable authentication of Milvus in Docker instead of Docker Compose
Is there an existing issue for this?
[X] I have searched the existing issues
Topic
I install Milvus with standalone_embed.sh.
How could I enable the authentication? It would be best if you could add the procedures in Authenticate User Access.
Type
tutorials
Outline
No response
/assign @LoveEachDay
@Ywandung-Lyou It's recommended to deploy milvus with docker-compose if you'd like to change milvus configurations on the fly.
Read more from: https://milvus.io/docs/configure-docker.md?tab=component
|
2025-04-01T04:34:45.845932
| 2024-10-15T07:56:52
|
2587959228
|
{
"authors": [
"JAORMX",
"coveralls"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8680",
"repo": "mindersec/minder",
"url": "https://github.com/mindersec/minder/pull/4752"
}
|
gharchive/pull-request
|
Output test coverage from unit tests action
Summary
This converges the test and the cover actions into one. Having two is
wasteful and the benefit unclear.
Change Type
Mark the type of change your PR introduces:
[ ] Bug fix (resolves an issue without affecting existing features)
[ ] Feature (adds new functionality without breaking changes)
[ ] Breaking change (may impact existing functionalities or require documentation updates)
[ ] Documentation (updates or additions to documentation)
[x] Refactoring or test improvements (no bug fixes or new functionality)
Testing
Outline how the changes were tested, including steps to reproduce and any relevant configurations.
Attach screenshots if helpful.
Review Checklist:
[x] Reviewed my own code for quality and clarity.
[ ] Added comments to complex or tricky code sections.
[ ] Updated any affected documentation.
[ ] Included tests that validate the fix or feature.
[ ] Checked that related changes are merged.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>2
Details
0 of 0 changed or added relevant lines in 0 files are covered.
1 unchanged line in 1 file lost coverage.
Overall coverage increased (+0.6%) to 54.024%
Files with Coverage Reduction    New Missed Lines    %
internal/authz/authz.go          1                   69.85%

Totals
Change from base Build<PHONE_NUMBER>7:  0.6%
Covered Lines:   14800
Relevant Lines:  27395

💛 - Coveralls
|
2025-04-01T04:34:46.004411
| 2021-10-25T00:27:28
|
1034556021
|
{
"authors": [
"Cogit8or",
"lantz"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8684",
"repo": "mininet/mininet",
"url": "https://github.com/mininet/mininet/issues/1093"
}
|
gharchive/issue
|
Proposed addition to webpage tutorial
Proposed addition to the mininet tutorial at http://mininet.org/vm-setup-notes/
To be placed between steps 3 and 4 under the VirtualBox subheading (but not as an additional numbered step, I don't think):
If starting the VM gives an error which mentions "64-bit CPU," it is possible that virtualization needs to be enabled in your computer's BIOS settings. This takes only seconds, but requires a computer restart to access the BIOS. Which key to press during startup to access your BIOS, as well as the name of the setting which controls virtualization, varies by motherboard (e.g. 'SVM Mode'), but should be easy to look up provided you know what brand of motherboard you have. Some BIOS interfaces also have search features to make finding specific settings painless.
This is from my colleague actually; moved to https://github.com/mininet/mininet.github.com/issues/17
|
2025-04-01T04:34:46.126554
| 2015-08-03T13:13:48
|
98740050
|
{
"authors": [
"colinbruce",
"kelliematheson"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8685",
"repo": "ministryofjustice/fr-staffapp",
"url": "https://github.com/ministryofjustice/fr-staffapp/pull/226"
}
|
gharchive/pull-request
|
Update style name
One of the css styles was spelled inconsistently
update the style name
add the style to one row
update all views that referenced the old name
@kelliematheson Can you approve this one?
It was adding the space to page 2 above the refund/probate checkboxes.
approved, please merge
|
2025-04-01T04:34:46.130382
| 2024-03-11T09:41:21
|
2178677596
|
{
"authors": [
"davidkelliott",
"markgov"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8686",
"repo": "ministryofjustice/modernisation-platform",
"url": "https://github.com/ministryofjustice/modernisation-platform/issues/6441"
}
|
gharchive/issue
|
Complete production ready checklist for corporate staff rostering
User Story
As a modernisation platform engineer
I need to be confident that I have all the information I need for supporting new applications as they go live
So that I can effectively provide support.
Details are provided here:
https://user-guide.modernisation-platform.service.justice.gov.uk/user-guide/production-ready-checklist.html
User Type(s)
MP engineers
Value
Ensure code quality and security of the platform
Assumptions / Hypothesis / Questions / Unknowns
Definition of done
[ ] checklist updated
[ ] reviewed by another team member
Reference
How to write good user stories
found 27 HIGH severity level findings in Security Hub
the 27 highs have been explained and accepted
|
2025-04-01T04:34:46.133848
| 2022-06-17T10:50:38
|
1274882238
|
{
"authors": [
"townxelliot",
"williamfalconeruk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8687",
"repo": "ministryofjustice/opg-lpa",
"url": "https://github.com/ministryofjustice/opg-lpa/pull/997"
}
|
gharchive/pull-request
|
LPAL-755 Upgrade service-pdf to PHP 8.1
Purpose
Briefly describe the purpose of the change, and/or link to the JIRA ticket for context
Fixes LPAL-####
Approach
Explain how your code addresses the purpose of the change
Learning
Any tips and tricks, blog posts or tools which helped you. Plus anything notable you've discovered about the LPA service
Checklist
[ ] I have performed a self-review of my own code
[ ] I have updated documentation (Confluence/GitHub wiki/tech debt doc) where relevant
[ ] I have added tests to prove my work
[ ] I have added mandatory tags to terraformed resources, where possible
[ ] If I added a package.json or composer.json, I also made sure this is included in the script in .github/workflows/dependabot-update.yml
[ ] The product team have tested these changes
@townxelliot - this will need manual intervention and a merge before rebuilding. I'm closing it temporarily until you are available.
|
2025-04-01T04:34:46.135106
| 2016-05-17T17:27:20
|
155316201
|
{
"authors": [
"mloughran"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8688",
"repo": "ministryofjustice/prison-visits-public",
"url": "https://github.com/ministryofjustice/prison-visits-public/pull/54"
}
|
gharchive/pull-request
|
Call api to validate visitors
WIP – the API still needs writing – but tests pass.
@alan if you pick this up in the morning, the tests I've removed here might be worth considering adding to PVB2 in some form. The tests there for VisitorStep are a mess, but probably not worth fixing there since we intend to delete the entire class.
Looks good.
|
2025-04-01T04:34:46.185347
| 2021-08-06T13:09:02
|
962722488
|
{
"authors": [
"mintusf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8691",
"repo": "mintusf/land_cover_tracking",
"url": "https://github.com/mintusf/land_cover_tracking/issues/11"
}
|
gharchive/issue
|
Decide labeling
Summary
In the dataset, there are four types of masks (as described here):
IGBP (International Geosphere-Bioshpere Programme): 67% accuracy
LCCS LC (land cover): 74% accuracy
LCCS LU (land use): 81% accuracy
LCCS SH (surface hydrology): 87% accuracy
The goal is to decide how to combine these 4 types in order to create one ground truth.
Goal
Describe the definition by which this issue will be closed.
Todo
[ ] perform masks EDA
[ ] decide labeling generation
Deadline
xx/xx
Parent issue
None
References
None
Notes
None
Based on the masks analysis, attached below, the following ground truth was decided:
since LCCS SH has the highest accuracy and low amount of unlabeled pixels, it is selected as base mask
additionally, urban and croplands masks from LCCS LU mask (9, 25, 35, 36) will be utilized
selected LCCS LU masks will have higher priority over LCCS SH
The explanation below shows that this method is acceptable.
|
2025-04-01T04:34:46.259605
| 2022-05-03T10:26:16
|
1223921119
|
{
"authors": [
"Ngoguey42",
"samoht",
"tomjridge"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8692",
"repo": "mirage/irmin",
"url": "https://github.com/mirage/irmin/issues/1831"
}
|
gharchive/issue
|
layers/GC: "read from gap" / "Kind.of_magic: unexpected magic char '\000'"
Victor reports the following:
+7550103145us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: check_worker_status called (Some case)
+7550103250us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: check_worker_status: worker terminated
+7550103267us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: checking existence of next sparse+suffix
+7550308372us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: copy additional data from current suffix to next
+7550308657us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: switching to next sparse+suffix
+7550382299us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: switched to new sparse+suffix
+7550382426us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: removing old files
+7554134620us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/layers/pre_io.ml: RO instance for /home/vicall/.tezos-node_layered/context\
/store.pack reloading for generation 1235
+7554819822us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/layers/pre_io.ml: RO instance for /home/vicall/.tezos-node_layered/context\
/store.pack reloaded; updating ro_fields
+7554819912us index [INFO] [context] sync
May 3 11:19:24.971 - validator.block: block BKucWhjSBRrvsHCd5qVJ5ntXeXz1CjErvnj5fcCrjQk8FSodAdq successfully validated
May 3 11:19:24.971 - validator.block: Request pushed on 2022-05-03T09:19:09.709-00:00, treated in 28.883us, completed in 15.262s
May 3 11:19:25.083 - validator.chain: Update current head to BKucWhjSBRrvsHCd5qVJ5ntXeXz1CjErvnj5fcCrjQk8FSodAdq (level 2302735, timestamp \
2022-04-22T14:26:29-00:00, fitness 02::0023230f::::ffffffff::00000000), same branch
May 3 11:19:25.083 - validator.chain: Request pushed on 2022-05-03T09:19:24.971-00:00, treated in 642us, completed in 111ms
+7554066625us irmin.layers.io [ERROR] vendors/irmin/src/irmin-pack/layers/sparse_file.ml: attempt to read from gap
vendors/irmin/src/irmin-pack/layers/sparse_file.ml: attempt to read from gap, voff=3146378311, len=42
Error:
Kind.of_magic: unexpected magic char '\000'
This was whilst bootstrapping from mainnet, with a gap_tolerance of 1000. The "read from gap" error means that the node is attempting to read from part of the file that has been GC'ed.
These sorts of errors are quite difficult to track down. The obvious candidates are:
Somehow the "reachable/live data in the file" was not calculated correctly - a bit of the file that was live was missed.
The "reachable data in the file" was calculated correctly, but the node wrongly tried to access a part of the file it shouldn't (e.g. it retained some reference to a part of the file in a cache, and later tried to use this reference).
Some other error, e.g. the processing of the "reachable" data to produce the "extent" data was buggy.
+7554066625us irmin.layers.io [ERROR] vendors/irmin/src/irmin-pack/layers/sparse_file.ml: attempt to read from gap vendors/irmin/src/irmin-pack/layers/sparse_file.ml: attempt to read from gap, voff=3146378311, len=42
and
Error: Kind.of_magic: unexpected magic char '\000' look like 2 distinct errors. Do you know why both show up?
At the moment, the "read from gap" is logged as an error, but the code fills the buffer with '0' characters. So the exception actually gets thrown when the code attempts to interpret the '0' characters as a Pack_value.Kind t. Probably better just to make the "read from gap" case throw an exception instead.
Just altered this in the current branch; "read from gap" now results in a thrown exception.
Something I didn't notice when first reading the error: the log reports the error as:
+7554066625us irmin.layers.io [ERROR] vendors/irmin/src/irmin-pack/layers/sparse_file.ml: attempt to read from gap
Note the time stamp of "+7554066625us". Note that the previous log line was
+7554819912us index [INFO] [context] sync
So these lines are out of order, if we believe the timestamps. If we believe the timestamps, the error occurs between the following two events:
+7550382426us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/unix/pack_store_IO.ml: removing old files
+7554134620us irmin.layers.io [INFO] vendors/irmin/src/irmin-pack/layers/pre_io.ml: RO instance for /home/vicall/.tezos-node_layered/context/store.pack reloading for generation 1235
Since the RO instance always reloads (if it can) before serving a call, the error must originate from the RW instance, immediately after switching to the next sparse+suffix (again, all this relies on the correctness of the timestamps).
I was able to reproduce this error in the following way:
importing a snapshot seems currently not possible because of https://github.com/mirage/irmin/issues/1830
instead, we can download a mainnet rolling tarball from https://mainnet.xtz-shots.io/
then we need to convert the files to layers format (needs documentation - ask me if you need to do this)
then we can run a rolling node
soon after the first GC, we get a similar "read from gap" error
After further investigations:
the "read from gap" error is at some position/offset 98765 (say); this position is indeed GC'ed by layers
this position corresponds to a non-commit object; there is an entry for this object (with the position 98765) in the index
at some point after the GC, index.find is called with the hash of this non-commit object; it finds the hash in the index, with the corresponding position 98765, returns that; the next step is an attempt to read from that offset, resulting in the error
So, it looks like the problem is that the index contains entries for non-commit objects, and these cause problems.
If we patch the index to only return results for commit objects, and to return "None" for everything else, we have a different problem:
When GC is called, during Repo.iter, we get Fatal error: exception Irmin_pack_unix__Pack_store.Corrupted_store("Unexpected object CoW1MP7bzPiEUZp1J7vng94pvFYmTUwdtmXKLPJMGz1M3esUqPkt missing from index")
This object is a non-commit object that was expected to be in the index
After examining the code, we see that Irmin is working with a key of the form Indexed hash; such keys MUST be present in the index for irmin to work; in this case, our patch invalidates this invariant, and we have an exception raised
So, at this point we have:
Working from a snapshot, or imported tarball, seems to give rise to keys of the form Indexed hash for objects that are not commits
After GC runs, these objects can be invalid because GC has removed the file regions they point to
For the imported tarball:
It is sort-of expected that a tarball produced from a "non-minimal" indexing strategy, will not be compatible with the minimal indexing strategy that we need for layers
It is not obvious how to fixup the imported tarball so that it can be used with layers
For the snapshot:
Importing a snapshot in minimal-indexing mode should presumably mean that the index ONLY contains entries for commit objects. Thus, it is unclear why Victor hit this error when importing a snapshot. The suspicion is that snapshot import somehow introduces entries into index for non-commit objects.
To progress this issue:
We need to fixup snapshot import
Then we need to see if we can reproduce this "read from gap" error after GC
Then we need to confirm that the index contains entries for non-commit objects
After this, we somehow need to reconcile the way that snapshot import is working, with the requirement that index only contain entries for commit objects.
Using a mainnet v4 in-mem snapshot import into a clean data-dir, and then bootstrapping, with GC every 100 commits, this error was no longer observed (whereas previously it happened every time, almost immediately after GC switched to the new sparse and suffix).
Let's close this - the snapshot import code should be ok. We can re-open a new issue in case there are more issues with snapshots.
|
2025-04-01T04:34:46.289272
| 2016-09-30T23:07:49
|
180419988
|
{
"authors": [
"Atrion",
"miroof",
"roba91"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8693",
"repo": "miroof/node-virtual-gamepads",
"url": "https://github.com/miroof/node-virtual-gamepads/issues/18"
}
|
gharchive/issue
|
Possible to switch server address?
Currently I run an Apache server on my system, as well as EmulationStation, which I plan to use this virtual gamepad with.
The address the controller wants to connect to is already in use. Is there a way to change the port or the server address?
Hi Atrion,
You can modify the port number in the config.json file before running virtual gamepad application.
Regards,
Jeremy
I guess this issue can be closed...
|
2025-04-01T04:34:46.322389
| 2016-08-02T11:27:29
|
168862494
|
{
"authors": [
"mirekm",
"patrys"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8694",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/issues/536"
}
|
gharchive/issue
|
Meeting notes, 2nd August 2016
Saleor for developers (unsorted)
Simple models and structure
Dockerized
Well-defined workflow
Documentation
Versioning (even if forkable by principle)
Good internal API / Pluggable: Checkout, Product Form, …
Customizable Dashboard (responsive, Material design)
REST API
Performance
Tests workflow
Saleor for easy deployment / out of the box use (unsorted)
EAV but we need a common API with non-EAV models (generator as a possible solution)
Good configuration options available from the Dashboard: payments, …
Easy onboarding
Customisation (themes) & branding (possibly themes marketplace, requires versioning)
Changes required based on the latest commercial project (unsorted):
[x] Simplify Delivery Groups handling (remove from Cart, rethink statuses and workflow in Dashboard)
[x] Carts no longer in session - Issue #468
[x] Cart requires better API (items, variants)
[ ] Dropping CBV
[x] Search API (Haystack)
[ ] Dropping Satchless API
[ ] Switching Payments to Decimals
[ ] Use Payment model for order creation
[ ] No order creation on checkout
[x] Integration with Google Merchant
[ ] Rethink Categories and Taxonomies
[ ] Documentation
[ ] High level overview of architecture
[ ] Design your first Product (Model, Product Form)
Sprints proposal:
Cart related issues
Cart no longer in session
Simplify delivery groups handling
API update for items and variants
Categories and taxonomies
Split into main category and additional taxonomies
Integration with Google Merchant
Drop obsolete dependencies
Dropping Satchless API
Move get_price_per_item to helper (with rounding support)
Dropping Class-based views
Dropping weight from products
Payments
Merge summary step with payments
Create order model on payment success
Create order from payment
Better payments integration (requires Payments sprint first)
Use Payment model for order creation
Dashboard upgrade (TBD)
Rethink how we extend dashboard
Closing this in favor of our roadmap
|
2025-04-01T04:34:46.327770
| 2017-11-06T21:59:22
|
271635749
|
{
"authors": [
"b4oshany",
"codecov-io",
"patrys"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8695",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/pull/1265"
}
|
gharchive/pull-request
|
Changed bonsai:sandbox to bonsai:sandbox-10
This fixes the Couldn't find either the add-on service or the add-on plan of "bonsai:sandbox" error
Codecov Report
Merging #1265 into master will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1265 +/- ##
=======================================
Coverage 69.85% 69.85%
=======================================
Files 117 117
Lines 6083 6083
Branches 775 775
=======================================
Hits 4249 4249
Misses 1677 1677
Partials 157 157
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fb915b8...fe639f3. Read the comment docs.
Interesting, I've changed it based on what Heroku's CLI reports. We need to change this in docs as well.
|
2025-04-01T04:34:46.348078
| 2019-06-19T11:57:33
|
457970370
|
{
"authors": [
"NyanKiyoshi",
"django-queries"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8696",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/pull/4319"
}
|
gharchive/pull-request
|
Fixed internal error when adding a note to an anonymous order
AttributeError: 'NoneType' object has no attribute 'pk'
File "django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "saleor/order/views.py", line 45, in details
note_form.save(user=request.user)
File "saleor/order/forms.py", line 76, in save
order=self.instance, user=user, message=self.instance.customer_note
File "saleor/order/events.py", line 225, in order_note_added_event
if order.user.pk == user.pk:
Pull Request Checklist
[ ] Privileged views and APIs are guarded by proper permission checks.
[ ] All visible strings are translated with proper context.
[ ] All data-formatting is locale-aware (dates, numbers, and so on).
[ ] Database queries are optimized and the number of queries is constant.
[ ] Database migration files are up to date.
[x] The changes are tested.
[ ] The code is documented (docstrings, project documentation).
[ ] GraphQL schema and type definitions are up to date.
[x] Changes are mentioned in the changelog.
Here is the report for d62f4afa0204d66df0a0b5c5c22147e7f55676b3 (NyanKiyoshi/saleor @ fix/events/add-note-anonymous)
Base comparison is c651d214349aed1be84d084c33122164be0a37c0.
No differences were found. (click me)
# api.benchmark checkout
test name left count right count
------------------------------------ ----------- -----------
add billing address to checkout 41 41
add shipping to checkout 7 7
checkout payment charge 16 16
complete checkout 6 6
create checkout 45 45
# api.benchmark homepage
test name left count right count
------------------------------------ ----------- -----------
retrieve main menu 5 5
retrieve product list 4 4
retrieve secondary menu 5 5
retrieve shop 2 2
# api.benchmark product
test name left count right count
------------------------------------ ----------- -----------
product details 16 16
# api.benchmark variant
test name left count right count
------------------------------------ ----------- -----------
retrieve variant list 13 13
|
2025-04-01T04:34:46.352784
| 2019-11-13T09:45:57
|
522080360
|
{
"authors": [
"django-queries",
"raymondfx"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8697",
"repo": "mirumee/saleor",
"url": "https://github.com/mirumee/saleor/pull/4967"
}
|
gharchive/pull-request
|
Saleor
I want to merge this change because...
Screenshots
Pull Request Checklist
[ ] Privileged views and APIs are guarded by proper permission checks.
[ ] All visible strings are translated with proper context.
[ ] All data-formatting is locale-aware (dates, numbers, and so on).
[ ] Database queries are optimized and the number of queries is constant.
[ ] Database migration files are up to date.
[ ] The changes are tested.
[ ] GraphQL schema and type definitions are up to date.
[ ] Changes are mentioned in the changelog.
Here is the report for 5ba446765c915b9e6041a28e5d41ee52a9e91d70 (raymondfx/saleor @ master)
Base comparison is 92f5ba113426bf18595bb069c9cb2f739f5e7e88.
No differences were found. (click me)
# api.benchmark checkout
test name left count right count duplicate count
------------------------------------------- ----------- ----------- ---------------
add billing address to checkout 34 34 20
add shipping to checkout 7 7 0
checkout payment charge 14 14 0
complete checkout 6 6 0
create checkout 50 50 24
# api.benchmark homepage
test name left count right count duplicate count
------------------------------------------- ----------- ----------- ---------------
retrieve main menu 5 5 0
retrieve product list 4 4 0
retrieve secondary menu 5 5 0
retrieve shop 2 2 0
# api.benchmark product
test name left count right count duplicate count
------------------------------------------- ----------- ----------- ---------------
product details 15 15 3
retrieve product attributes 13 13 2
# api.benchmark variant
test name left count right count duplicate count
------------------------------------------- ----------- ----------- ---------------
product variant bulk create 51 51 3
retrieve variant list 15 15 6
# api product sorting attributes
test name left count right count duplicate count
------------------------------------------- ----------- ----------- ---------------
sort product not having attribute data 21 21 0
|
2025-04-01T04:34:46.356127
| 2023-11-22T14:28:41
|
2006470352
|
{
"authors": [
"ahmednasserzaza",
"mirzemehdi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8698",
"repo": "mirzemehdi/KMPNotifier",
"url": "https://github.com/mirzemehdi/KMPNotifier/issues/1"
}
|
gharchive/issue
|
App crashes after sync library dependencies
My app is crashing at run time (in the onCreate method) after adding all the dependencies required in the docs.
When I removed the library dependency, the app worked well.
Do you use the Koin library in your app? If so, which version do you use, @ahmednasserzaza?
@mirzemehdi
Yes,
here are all the versions I use:
koin-version = "3.4.3"
koin-compose-version = "1.0.4"
koinKspVersion = "1.2.2"
@ahmednasserzaza Could you please try these steps if possible and see if this still happens? Sometimes this can occur when different Koin versions are used:
Upgrade koin version to 3.5.0
Invalidate cache and restart
|
2025-04-01T04:34:46.376894
| 2015-01-21T16:38:52
|
55043066
|
{
"authors": [
"jkxyz",
"mishal"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8699",
"repo": "mishal/iless",
"url": "https://github.com/mishal/iless/issues/38"
}
|
gharchive/issue
|
Compiling long url strings
I'm getting an issue which I can only seem to put down to the parser hanging when it tries to compile long url strings. i.e. url("...."). Specifically, I have a number of font-face declarations using WebType fonts with long strings for the font URLs, and ILess refuses to parse these.
Can you please provide a sample less code to test?
I've tested very long urls and without problems.
Please reopen and attach problematic code if it's still a problem.
|
2025-04-01T04:34:46.387508
| 2017-01-17T04:07:24
|
201171759
|
{
"authors": [
"cfbenn",
"toomanybrians"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8702",
"repo": "missionpinball/mpf-mc",
"url": "https://github.com/missionpinball/mpf-mc/issues/231"
}
|
gharchive/issue
|
Using same event as the one that starts a mode will not execute
When using the same event in "start_events" of a mode and in a widget_player, the widget_player is never executed. If the event for the widget_player is different, it will work. This also appears to be true for other event triggers like slide_players, but this has not been tested.
Look at base.yaml for an example that works and another example that does not work.
Mode Start Event Cannot Be Used Elsewhere.zip.txt
This is a known behavior since the config players don't register their event handlers until the mode starts, so if you use the same event that starts a mode as a config player then it will never fire since by the time it registered the handler, the event has already happened.
But I'll keep this open because maybe this is a special case we should look for to create an error? It would be pretty easy to just make sure that no config player event names are in the mode's start events and then raise an error if they are. Thoughts?
|
2025-04-01T04:34:46.388688
| 2018-05-15T07:45:23
|
323102334
|
{
"authors": [
"jabdoa2",
"muffler-aus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8703",
"repo": "missionpinball/mpf",
"url": "https://github.com/missionpinball/mpf/pull/1178"
}
|
gharchive/pull-request
|
Issue1177 - Added support for JSON logging via --json-logging command line option
Initial PR to allow JSON logging as per https://github.com/missionpinball/mpf/issues/1177
awesome! thanks!
|
2025-04-01T04:34:46.400235
| 2024-10-31T13:12:36
|
2626827315
|
{
"authors": [
"harrisonhxy",
"missuo",
"yikZero"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8704",
"repo": "missuo/bob-plugin-deeplx",
"url": "https://github.com/missuo/bob-plugin-deeplx/issues/23"
}
|
gharchive/issue
|
deeplx 503 error
The DeepLX plugin on Bob is not working again. I saw the latest issue on DeepLX reporting a 503 error. Is it DeepLX's problem that is causing Bob to fail?
Thank you for your reply.
Temporary Fix for DeepLX Bob Plugin Translation Issues
Background
DeepL has officially discontinued their old API. While the DeepLX author has implemented a temporary fix in v<IP_ADDRESS>, some language variant features (like handling zh-Hans/zh-Hant) still need to be restored. Source
Steps to Fix
1. Update DeepLX to Latest Version
First, update your DeepLX installation to v<IP_ADDRESS>:
brew update
brew upgrade deeplx
brew services restart owo-network/brew/deeplx
2. Add Temporary Language Variant Handling
Add the following code to your Bob plugin's main.js to temporarily restore language variant support:
if (query.from === "auto") {
sourceLang = lang.langMap.get(query.detectFrom);
} else {
sourceLang = lang.langMap.get(query.from);
}
let targetLang = "";
if (query.to === "auto") {
targetLang = lang.langMap.get(query.detectTo);
} else {
targetLang = lang.langMap.get(query.to);
}
+ // Handle language variants (e.g., zh-Hans, zh-Hant, pt-BR)
+ function normalizeLanguageTarget(originalTargetLang) {
+ // Check if the language code contains a variant identifier "-"
+ const parts = originalTargetLang.split('-');
+ if (parts.length > 1) {
+ // Get base language code and full variant code
+ const baseLang = parts[0].toUpperCase();
+ return {
+ target_lang: baseLang,
+ regional_variant: originalTargetLang
+ };
+ }
+ return {
+ target_lang: originalTargetLang
+ };
+ }
+ // Process target language
+ const normalizedTarget = normalizeLanguageTarget(targetLang);
+ // Construct request body with variant handling
const body = JSON.stringify({
text: query.text,
source_lang: sourceLang,
- target_lang: targetLang,
+ target_lang: normalizedTarget.target_lang,
+ ...(normalizedTarget.regional_variant && {
+ regional_variant: normalizedTarget.regional_variant
+ })
});
Note
This is a temporary solution while we wait for the DeepLX upstream to restore full language variant handling functionality. Once that's implemented, this client-side modification won't be necessary anymore.
Fixed.
https://github.com/OwO-Network/DeepLX/commit/9edb997f069eec845cb9aa385290f9fea8894790
https://github.com/OwO-Network/DeepLX/commit/62a993bb13229ced1521d69aad31bba47e957a08
|
2025-04-01T04:34:46.403955
| 2015-02-16T02:54:46
|
57760050
|
{
"authors": [
"halatmit",
"jisqyv"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8705",
"repo": "mit-cml/appinventor-sources",
"url": "https://github.com/mit-cml/appinventor-sources/issues/267"
}
|
gharchive/issue
|
Update activity starter documentation
This is out of date for App Inventor 2. It should be rewritten and also reformatted to permit user comments. Also add tooltips and consider renaming "plainText" to "rawText".
documentation is done, but still need to do the renaming
Change merged into master
|
2025-04-01T04:34:46.419549
| 2021-06-16T14:48:05
|
922714971
|
{
"authors": [
"Zachinquarantine",
"mitchellkrogza"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8706",
"repo": "mitchellkrogza/Phishing.Database",
"url": "https://github.com/mitchellkrogza/Phishing.Database/issues/211"
}
|
gharchive/issue
|
[BUG] Pull requests automatically close at repository reset
Describe the bug
When you create a pull request on this repo and it goes the entire day without being merged, at repository reset, it will automatically force close your PR.
To Reproduce
Create a pull request
Don't merge it, and wait until repository reset
Expected behavior
It shouldn't have closed.
Additional context
See my PR as an example: #210
Hi @Zachinquarantine yeah I see that, due to time constraints I don't get to PR's before the reset, I'll have to rethink this.
Hi @Zachinquarantine, I have addressed this by creating a new repo for the PRs, which gets pulled in here every hour
https://github.com/mitchellkrogza/Phishing.Database/issues/215
Thanks!
|
2025-04-01T04:34:46.464079
| 2022-02-17T03:52:06
|
1140843724
|
{
"authors": [
"MAbdurrehman12",
"pdpinch"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8707",
"repo": "mitodl/ocw-hugo-themes",
"url": "https://github.com/mitodl/ocw-hugo-themes/issues/449"
}
|
gharchive/issue
|
Code formatting
Some courses include code snippets.
These should be rendered with appropriate HTML tags, and the styling should use a monospaced font.
example 1:
legacy: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-006-introduction-to-algorithms-fall-2011/syllabus/software/
nextgen: https://ocwnext.odl.mit.edu/courses/6-006-introduction-to-algorithms-fall-2011/pages/syllabus/software/#ide
markdown: https://github.mit.edu/mitocwcontent/6.006-fall-2011/blob/main/content/pages/syllabus/software.md
example 2:
legacy:
nextgen: https://ocwnext.odl.mit.edu/courses/6-006-introduction-to-algorithms-fall-2011/pages/readings/binary-search-trees/
Looking at the markdown, I don't think we have enough data to identify the code snippets. This was probably lost in the import, so this may require work in ocw-to-hugo and ocw-hugo-themes.
so this may require work in ocw-to-hugo and ocw-hugo-themes.
Yeah, this will require work in ocw-to-hugo. In order to run a course and generate/download its markdown via ocw-to-hugo, we will need either an AWS access key and secret or the JSON of that course, but I think a read-only key will be much better.
Got the key and secret from DevOps, I'll now look into this
Where are you seeing the inline styling?
Instead of inline styling it would be better to convert them to the correct markdown, which I think is a single backtick for inline code and three backticks for blocks of code.
|
2025-04-01T04:34:46.468432
| 2023-05-23T21:06:56
|
1722788188
|
{
"authors": [
"feoh"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8708",
"repo": "mitodl/ol-infrastructure",
"url": "https://github.com/mitodl/ol-infrastructure/issues/1535"
}
|
gharchive/issue
|
Set up tubular retirement pipeline for MITXPro
Description/Context
Set up a new instance of the Tubular retirement pipeline defined here for the MITXPro application.
This should simply be a matter of defining some new secrets in Vault and defining a new pipeline with Concourse's fly CLI.
Plan/Design
[x] Initialize DB tables, users and DOT application
[x] Take clientid / secret from the first step and populate Vault via SOPS/Git. Wait for pipeline to bring them to production.
[ ] Use fly to actually create the pipeline
[ ] Profit!
Follow this document to define the required secrets and set up a new instance of the pipeline for MITXPro.
Pipeline is green. This is all set!
|
2025-04-01T04:34:46.469778
| 2023-11-07T17:05:03
|
1981829949
|
{
"authors": [
"collinpreston",
"shaidar"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8709",
"repo": "mitodl/ol-infrastructure",
"url": "https://github.com/mitodl/ol-infrastructure/issues/1901"
}
|
gharchive/issue
|
Keycloak deployed to Production environment
Description/Context
In order to support SSO with MIT Open in the Production environment, we need Keycloak to also be deployed to the Production environment.
New Keycloak production instances in place along with the deployed configs. Working on updating Touchstone IDP config (not necessary, but just more consistent) to use the updated SSO hostname.
Touchstone IDP has been updated and I tested the login flow with it which appears to be functional.
|
2025-04-01T04:34:46.544337
| 2015-03-26T00:15:42
|
64401400
|
{
"authors": [
"chekunkov",
"juokaz"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8710",
"repo": "mitsuhiko/flask-sqlalchemy",
"url": "https://github.com/mitsuhiko/flask-sqlalchemy/pull/273"
}
|
gharchive/pull-request
|
Allow overriding binds in SignallingSession
This allows for a flask-sqlalchemy connection to be configured with custom list of binds (or an empty list).
I believe the issue was resolved by https://github.com/mitsuhiko/flask-sqlalchemy/commit/ac135c674fb2d4ae49395e15283be155e826b3ea
You are correct, I closed this issue.
|
2025-04-01T04:34:46.548440
| 2023-06-03T10:26:16
|
1739381260
|
{
"authors": [
"bluss",
"dsp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8711",
"repo": "mitsuhiko/rye",
"url": "https://github.com/mitsuhiko/rye/pull/292"
}
|
gharchive/pull-request
|
Run pip in workspace root (#284)
pip is currently executed in the working directory of rye. This means that pip cannot find the appropriate pyproject and will fail to write the requirements.lock. We now explicitly call pip in the determined workspace_path.
Please note this is an early PR and my first change. I don't fully understand the rye codebase yet and have not extensively tested this change yet. So beware, there might be dragons.
Testing
Currently this has been tested doing the following:
$ rye init root
success: Initialized project in /home/dsp/src/root
Run `rye sync` to get started
$ cd root
$ rye init submodule
success: Initialized project in /home/dsp/src/root/submodule
Run `rye sync` to get started
$ echo "[tool.rye.workspace]\nmembers=[\"submodule\"]" >> pyproject.toml
$ cd submodule
$ rye sync
Before this change
Generating production lockfile: /Users/dsp/src/rye/rye-test/requirements.lock
Traceback (most recent call last):
File<EMAIL_ADDRESS>line 8, in <module>
sys.exit(cli())
^^^^^
File<EMAIL_ADDRESS>line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File<EMAIL_ADDRESS>line 1055, in main
rv = self.invoke(ctx)
...
pip._internal.exceptions.InstallationError: file:///submodule (from -r /var/folders/yl/jkvwmym95_sgrmcx_nmn21000000gn/T/.tmpGOZrJH (line 1)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
Error: could not write production lockfile for workspace
After this change
Generating production lockfile: /home/dsp/src/root/requirements.lock
Generating dev lockfile: /home/dsp/src/root/requirements-dev.lock
Installing dependencies
Looking in indexes: https://pypi.org/simple/
Obtaining file:///. (from -r /var/folders/yl/jkvwmym95_sgrmcx_nmn21000000gn/T/tmp_zayqwfo (line 1))
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Obtaining file:///submodule (from -r /var/folders/yl/jkvwmym95_sgrmcx_nmn21000000gn/T/tmp_zayqwfo (line 2))
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Building wheels for collected packages: root, submodule
Building editable for root (pyproject.toml) ... done
Created wheel for root: filename=root-0.1.0-py3-none-any.whl size=994 sha256=c3d665d5a85c414f3963c4eddc9c141bec42689133c127c3265e5e28f9f5c38d
Stored in directory: /private/var/folders/yl/jkvwmym95_sgrmcx_nmn21000000gn/T/pip-ephem-wheel-cache-1x_8h2fi/wheels/97/54/f5/d849319cdfa096e074df352654ee2e7c919da8951f090690c6
Building editable for submodule (pyproject.toml) ... done
Created wheel for submodule: filename=submodule-0.1.0-py3-none-any.whl size=1056 sha256=81643b6ab37c8be2e102259034e1dca0f46eba764b1980dcb93599994e15dd78
Stored in directory: /private/var/folders/yl/jkvwmym95_sgrmcx_nmn21000000gn/T/pip-ephem-wheel-cache-1x_8h2fi/wheels/24/6f/3c/0d52b0234f153b8aa3f922419ed47c4c9cc62a221afb196f10
Successfully built root submodule
Installing collected packages: submodule, root
Successfully installed root-0.1.0 submodule-0.1.0
Done!
With this change, maybe that's enough for sync and lock to support the --pyproject argument like - #232 - it can then be used to sync "from the outside" of a project directory. The basename still has to be exactly "pyproject.toml" then, so it's not full support.
|
2025-04-01T04:34:46.611895
| 2014-06-25T18:35:49
|
36509234
|
{
"authors": [
"aramboyajyan",
"mjhea0"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8712",
"repo": "mjhea0/flask-boilerplate",
"url": "https://github.com/mjhea0/flask-boilerplate/pull/12"
}
|
gharchive/pull-request
|
Updates to all front end libraries
Everything is described in the commits - let me know if you have any questions.
Thanks!
:+1:
|
2025-04-01T04:34:46.614789
| 2017-10-25T21:18:10
|
268552792
|
{
"authors": [
"iRyusa",
"vielhuber"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8713",
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/issues/856"
}
|
gharchive/issue
|
mj-column padding
Hello!
I like resetting the padding for all elements like this:
<mj-all padding="0" />
This also removes the padding inside <mj-column>.
Officially <mj-column> does not support paddings.
What to do if I have a layout where in the left column I have a big image (without padding)
and in the right column I have several elements with a padding around?
Hi,
I don't really understand your issue, reseting the padding on all element doesn't prevent you to overide the padding on a single element ?
Sorry, the tags got stripped out from the original question. Please reread.
You'll have to manually set a padding on each element of the column; padding on columns is a new feature in MJML 4 to create a gutter around the column, cf. #160
Thank you! And thanks for your awesome work.
|
2025-04-01T04:34:46.615926
| 2021-02-11T18:07:47
|
806623046
|
{
"authors": [
"iRyusa",
"lahdekorpi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8714",
"repo": "mjmlio/mjml",
"url": "https://github.com/mjmlio/mjml/pull/2176"
}
|
gharchive/pull-request
|
Add support for title attribute in mj-button
This is added to the last element (a or p) as the title attribute, similarly to name. Closes #2175
Not sure if a version bump or other stuff related to maintaining is required in this project.
We'll bump everything on release thanks for this !
|
2025-04-01T04:34:46.634684
| 2021-06-01T17:34:16
|
908533637
|
{
"authors": [
"fenimi",
"martinpopel"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8715",
"repo": "mjpost/sacrebleu",
"url": "https://github.com/mjpost/sacrebleu/issues/154"
}
|
gharchive/issue
|
raise EOFError("No valid references for a sentence!") EOFError: No valid references for a sentence!
I'm getting the EOFError: No valid references for a sentence when trying to evaluate an MT model in fairseq using sacrebleu. My evaluation files are correct and bilingual. I wonder what could be wrong.
I'll appreciate any advice.
Thank you
How do you use sacrebleu? Can you provide a minimal example?
My evaluation files are correct and bilingual
SacreBLEU does not need the source-language files. You provide just the system translation file and the reference translation (ideally, you specify the reference with -t and -l if it is one of the supported and automatically downloaded testsets).
I'm using sacrebleu to evaluate after each epoch in fairseq. This is not the final testing phase. Here is an example of my validation set.
Source: "ganduje responds to allegations of selling daula hotel - freedom radio nigeria ."
Target: "ganduje ya maida martani kan zargin saida otel ɗin daula - freedom radio nigeria ."
The source and target are stored in separate files. I use the key word "--eval-bleu" in fairseq. I've used sacrebleu many times and I'm only reusing code I have always used and have never had this experience.
I guess the validation set consists of more than one sentence. The error
https://github.com/mjpost/sacrebleu/blob/5dfcaa3cee00039bcad7a12147b6d6976cb46f42/sacrebleu/metrics/bleu.py#L286
is triggered if there is a sentence with empty reference translation (empty string or None). So double check there are no empty lines in the reference file (including empty lines at the end of the file).
I am not familiar with Fairseq, so I cannot provide more help.
BTW: SacreBLEU should be used on non-tokenized (or de-tokenized) sentences, but your source and target seem to be tokenized (nigeria .). You can still use SacreBLEU, just the final scores won't be comparable to other works. Also, your sentences seem to be lowercased, but SacreBLEU is supposed to be applied on final translations to be provided to the users (i.e. after re-casing, if the raw output is lowercased) - there is an option -lc for case-insensitive BLEU, if needed. Both may be OK for your internal evaluation after each epoch. Just don't forget that the final evaluation on the post-processed translations may give different scores.
Thank you so much, I found an empty line and that solved it. Thanks a lot.
|
2025-04-01T04:34:46.642627
| 2017-12-05T13:15:59
|
279370947
|
{
"authors": [
"mkalam-alami",
"pfefferminzz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8716",
"repo": "mkalam-alami/ludumdare-feedback-friends",
"url": "https://github.com/mkalam-alami/ludumdare-feedback-friends/issues/61"
}
|
gharchive/issue
|
LD40 Unfinished Games
Hey,
I was wondering if you could include Unfinished games as a filter option, next to Compo | Jam | Both?
Best regards,
Chris
To be done:
[ ] Support Unfinished games (they are currently stored as Jam)
[ ] Replace the [Compo|Jam|Both] filters with [Compo|Jam|Unfinished|All]
Three filter checkboxes would be nice for Compo, All and Unfinished.
The unfinished games are currently on top with the Both filter, because nobody is rating them. They are not included in the All filter on the official LD games page.
I can't find my game at all https://ldjam.com/events/ludum-dare/40/do-not-feed. :(
|
2025-04-01T04:34:46.693371
| 2015-04-30T14:08:31
|
72173132
|
{
"authors": [
"jxu",
"keitherskine",
"mkleehammer",
"ramprax",
"scholer",
"snargleplax",
"vl85"
],
"license": "MIT-0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8718",
"repo": "mkleehammer/pyodbc",
"url": "https://github.com/mkleehammer/pyodbc/issues/43"
}
|
gharchive/issue
|
Connection object behavior within context managers
Python context managers for connection objects are typically used like this:
with pyodbc.connect('mydsn') as cnxn:
do_stuff
Right now in pyodbc 3.0.10, that seems to be the equivalent of:
cnxn = pyodbc.connect('mydsn')
do_stuff
if not cnxn.autocommit:
cnxn.commit()
Could we re-visit this functionality because I do have some reservations about that behavior, especially when autocommit is off:
The connection object is not closed when the context is exited (unlike exiting the context manager for a file, for example).
Assuming autocommit is off, in the event of an exception occurring, the transactions on the connections are not explicitly rolled back (which they would be if the connection was closed on exit).
Lastly, and I appreciate I appear to be in a minority on this, but I have a real issue with a commit being issued "under the hood" when autocommit is off (assuming no exceptions raised).
To elaborate on those points:
For files, context managers operate like this
with file('/tmp/myfile', 'w') as f:
f.write('hello\n')
which is equivalent to:
f = file('/tmp/myfile', 'w')
try:
f.write('hello\n')
finally:
f.close()
From my perspective, the point of a context manager is to make sure the context object is cleaned up when the context exits (typically by closing the object). I'm surprised pyodbc leaves the connection open, to be closed only when it is deleted.
Following on from 1), my understanding is that contexts should be essentially self-contained, and they should tidy themselves up during the exit process. In this case, for me that includes rolling back transactions if any exception has occurred. Instead, pyodbc leaves that up to surrounding code. Bear in mind, explicit rolling back would not be needed if the connection was closed, the database would do this anyway.
Looking at other Python odbc managers, it appears I am going against the tide on this, but I still feel strongly that the context manager should never issue an implicit commit when autocommit is off.
http://cx-oracle.readthedocs.org/en/latest/connection.html#Connection.__exit__
https://docs.python.org/3.4/library/sqlite3.html#using-the-connection-as-a-context-manager
http://initd.org/psycopg/docs/connection.html#connection.commit
https://pythonhosted.org/oursql/api.html#cursor-context-manager
https://dev.mysql.com/doc/connector-python/en/index.html
http://sourceforge.net/p/adodbapi/code/ci/default/tree/adodbapi.py
My preference would be that the pyodbc connection context manager would be just like the file context manager and doing nothing more than the equivalent of this:
cnxn = pyodbc.connect('mydsn')
try:
do_stuff
finally:
cnxn.close()
I realize this would be a change in behavior, but right now, the connection context manager has behavior that makes it essentially unusable for me when I want to use a connection with autocommit off.
Finally, pretty much all of what I have said here is equally applicable to pyodbc cursor objects too.
I was actually opposed to adding the context at all because it just confuses things. If I understand correctly, the functionality you are looking for can be accomplished without it like this:
def somefunction():
cnxn = pyodbc.connect('mydsn')
cursor = cnxn.cursor()
cursor.execute("delete something")
cnxn.commit()
If everything is successful, the code reaches the bottom and commits. If an exception occurs, both cursor and cnxn go out of scope and are immediately deleted. Each will close when they are deleted and closing the connection will roll back the transaction.
There is no way this is going to be changed any time soon and an enormous amount of code takes advantage of that. (It is also what keeps Python programs from using so much memory, but it is slower than standard garbage collection.) I was careful to design the objects so cursors refer to connections but not vice versa so there are no cycles.
With autocommit turned on, each statement is committed as it occurs, so there is no reason to use a context manager.
Fun fact: the cursor has a reference to the connection and a commit method specifically so you can use a cursor without needing to manage the connection object also:
def somefunction():
cursor = pyodbc.connect('mydsn').cursor()
cursor.execute("delete something")
cursor.commit()
I certainly agree that context managers can cause confusion because the functionality is deliberately hidden away behind the syntactic sugar. Hence, the onus should be on making the context manager functionality as simple and intuitive as possible. Right now that doesn't appear to be the case, especially to me. From what I can make out, it seems the current functionality is simply a way of getting around setting the autocommit flag to True. In effect, use a context manager and all your SQL statements are going to be committed, regardless of whether you set the autocommit flag or not. Although this is convenient for programmers in many cases, it is not at all obvious.
As I mentioned in my original comment, who would intuitively think that:
with pyodbc.connect('mydsn') as cnxn:
do_stuff
actually means:
cnxn = pyodbc.connect('mydsn')
do_stuff
if not cnxn.autocommit:
cnxn.commit()
Not me. Even when it's spelled out, it seems odd, and very different from the way context managers work with files, etc.
At the very least, it might be worth putting this in the docs.
I think the behavior needs to stay as it is. As you pointed out, it would be consistent with other database libraries. It would also be consistent with early copies of PEP 343. Note the example does exactly what everyone is doing.
I want to reiterate that I feel they are completely unnecessary. I use pyodbc in multiple 24x7 servers in the financial industry and have never used a context manager with it, or with anything else like files. I gave examples of how you could write code without a context manager which is much clearer. (And that is what I meant about people getting confused - not the action of the context manager itself, but the fact that they are completely unnecessary in C Python today.)
For example, this code from your post would never be written that way:
with file('/tmp/myfile', 'w') as f:
f.write('hello\n')
You would just do this:
open('/tmp/myfile', 'w').write('hello\n')
You'll see code like this all the time. (I know your example was shortened and there are usually more lines written, but stick with me.) Objects in Python are deleted automatically and the all "do the right thing" when they are deleted. Files close. Sockets close. Database connections rollback and close.
In a longer function with more lines, it would look like this:
def f():
fd = open('/tmp/myfile', 'w')
fd.write('1\n')
fd.write('2\n')
The file will be closed as soon as the function is exited, either normally at the bottom or if an exception occurs.
The only time it could be useful is in a loop where you catch exceptions. In that case you can eliminate a finally clause:
def f():
while x > y:
try:
data = wait_for_data()
with open('filename', 'w') as fd:
fd.write(data + '\n')
except:
logger.error('Error occurred', exc_info=True)
In this case fd does get closed before the next iteration calls wait_for_data.
In my servers connections are primarily used in two patterns:
In "task loops", a connection is created and used by each task in the queue. When an error occurs, the error is logged and the connection is closed as part of the cleanup. A new one is allocated and the process continues. This type of code is too complex for a with since there is a lot of code. There aren't just a couple of lines to indent.
The second type is a function allocates a connection, uses it, then commits, just like the examples from my previous post. These are very clean and easy. Connections and cursors are closed automatically:
def somefunction():
cursor = pyodbc.connect('mydsn').cursor()
cursor.execute("delete something")
cursor.commit()
I recommend giving this a try in a couple of places and see if it works for you. It is one of the things I greatly prefer about Python over Javascript and Java. Both of those languages can use context managers. (C++ and C# support object types on the stack that run destructors, so a simple "wrapper" object would be just as easy.)
I think I'm finally starting to understand what you mean about not needing context managers. As a database guy dealing with terabytes of data, and single database transactions that involve updating multiple tables with multiple SQL statements, which in turn add gigabytes to the database transaction log, database transactions are kinda important to me. The data integrity of my database depends on it. Yet database transactions are not explicitly part of PEP 249. Sure, they are implicitly there, with pyodbc.connect() and cnxn.commit()/close(), but there is no explicit PEP 249 equivalent of:
BEGIN TRANSACTION
UPDATE TABLE T1 SET ...
UPDATE TABLE T2 SET ...
UPDATE TABLE T3 SET ...
END TRANSACTION
My assumption was that context managers would help manage this kind of behavior. Apparently this isn't the case.
I was also concerned that cursors would maintain locks on tables or rows unless they were explicitly closed, and I didn't fully comprehend that cursors would definitely be closed (and the locks released) when they left the scope. Again, locking of table rows is a big deal for me.
Thank you for taking the time to give comprehensive answers to my questions, Michael. It's definitely appreciated.
Hi,
I know this is an old thread. Just wanted to say that the detailed answer was very helpful for me.
Thanks a lot.
I was worried that the connection may get closed on calling __exit__(). This would create problems while implementing connection pooling.
For implementing my connection pool, I wrote a class PooledConnection as a wrapper around pyodbc connection. The PooledConnection's __enter__() & __exit__() methods call the __enter__() & __exit__() methods of the underlying pyodbc connection.
Now, I do want/need all the things that pyodbc connection's __exit__() method does, but definitely do not want it to close the connection.
In the PooledConnection.__exit__(), after calling pyobc connection's __exit__(), I would just mark the connection as free and "return" it to the pool.
So, basically, I count on __exit__() not closing the connection.
Thanks,
Ram
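A rough sketch of the wrapper pattern described above (the pool interface here is hypothetical and not part of pyodbc): __exit__ delegates to the pyodbc connection's __exit__ and then returns the connection to the pool instead of closing it.
class PooledConnection:
    def __init__(self, cnxn, pool):
        self._cnxn = cnxn  # underlying pyodbc connection
        self._pool = pool  # hypothetical pool object with a release() method

    def __enter__(self):
        self._cnxn.__enter__()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        result = self._cnxn.__exit__(exc_type, exc_value, traceback)
        self._pool.release(self._cnxn)  # mark the connection as free, do not close it
        return result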
Connections and cursors are closed automatically:
CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. See the documentation of the gc module for information on controlling the collection of cyclic garbage. Other implementations act differently and CPython may change. Do not depend on immediate finalization of objects when they become unreachable (so you should always close files explicitly).
See https://docs.python.org/3/reference/datamodel.html#index-2
So RAII is not available as an option.
I would rather see connections/cursors closed by the context manager and commits invoked explicitly, than connections/cursors left open and commits made implicitly. The main expectation of a context manager is resource management, and the current design fails to manage the connection/cursor. I've read that one might see it as a transaction context rather than a connection/cursor context, but that doesn't make sense: one can look at a cursor as a transaction, but not at a connection. It also doesn't make sense to commit the transaction on error.
For select queries it doesn't do anything useful.
If there is an issue with backward compatibility, then a separate method could be used for that, like sconnect and scursor (meaning safe or secure).
I'd also like to voice in on the unexpected behavior of pyodbc's context manager not closing connections.
import pandas as pd, pyodbc
with pyodbc.connect(connection_string) as con:
test_df = pd.read_sql_query(query, con)
con.closed # False. pyodbc connection is not closed when leaving the context.
@mkleehammer I understand why you were opposed to adding a context manager in the first place, and I agree that adding this context manager certainly adds confusion.
You mentioned that in your typical work, you just depend on the connection to go out of scope and then relying invocation of the connection's destructor method to close the connection. I want to add that, if the connection object is created as part of a long-running job, then the connection object will not go out of scope and the connection will remain open. An example of this would be if the connection is created within a Jupyter notebook, which are very popular among data scientists, who also use SQL a lot.
Example: A data scientist writing this in his notebook might assume he is safe to use this in his notebook to retrieve data from his SQL database:
import pandas as pd, pyodbc
with pyodbc.connect(connection_string) as con:
test_df = pd.read_sql_query(query, con)
# (the data scientist continues working with the test_df results and forgets `con`, assuming the context manager will naturally close it, just like context managers typically do).
# (the notebook keeps running and the connection is kept open until the server closes it. 😯)
# (the database administrators get super angry/annoyed with our data scientist because he makes a lot of long-lived connections and never closes them. 😡)
Honestly, the expected behavior for a context manager is that the context manager will take care of closing connections and handles, leaving the system in a "safe" state when the context is closed. Otherwise, what is the point of a context manager?
@mkleehammer would you ever be open to adding another context manager to pyodbc that actually closes the connection when leaving the context? I know it is fairly trivial for each user to implement, but it just adds one more custom function or class that us data science users will have to import (or copy/paste) into our notebooks whenever we want to use an sql connection. And again, many users of pyodbc might not be aware that context-managed connections are not actually being closed when leaving the context.
Example of user-implemented context manager:
from contextlib import contextmanager
@contextmanager
def connect(connection_string):
con = pyodbc.connect(connection_string)
try:
yield con
finally:
con.close()
# Example usage:
with connect(connection_string) as con:
test_df = pd.read_sql_query(cmsobjectmodel_query, con)
con.closed # True. Connection is closed when leaving the context.
For the record, SQL-Alchemy does close context-managed connections when leaving the context:
import sqlalchemy, pandas as pd
engine = sqlalchemy.create_engine(connection_string)
with engine.connect() as con:
test_df = pd.read_sql_query(query, con)
con.closed # True. SQLAlchemy closes context-managed connections when leaving the context.
However, adding SQLAlchemy to a project's dependencies, just to get a "sane" connection context manager seems a bit bloated.
With all that said, I want to just say a quick thank you for your work on pyodbc. 🙏 I've been using it a lot and it works great! (context-manager quirks aside) 😄
Example: A data scientist writing this in his notebook might assume he is safe to use this in his notebook to retrieve data from his SQL database:
import pandas as pd, pyodbc
with pyodbc.connect(connection_string) as con:
test_df = pd.read_sql_query(query, con)
# (the data scientist continues working with the test_df results and forgets `con`, assuming the context manager will naturally close it, just like context managers typically do).
# (the notebook keeps running and the connection is kept open until the server closes it. 😯)
# (the database administrators get super angry/annoyed with our data scientist because he makes a lot of long-lived connection
I ran into almost this exact issue, with my explicit con.close() not being called properly in a notebook cell and the angry DB admins.
I'll add my voice here, after dealing with the same issue this week. I was surprised and misled by this behavior, as were multiple other veteran engineers on my team. The revelation by @vl85 about immediate GC finalization not being reliable appears to unrecoverably dismantle the justification provided up to that point for rejecting this suggestion.
I expect context managers to work like RAII. The hazard of reasonable assumptions creating a production incident due to misleading behavior is a far greater concern than whatever benefit is perceived to accrue from the "convenience" of using it to autocommit without autocommit. Too much sugar rots your teeth. I find design precedent from other Python ODBC managers unpersuasive, and would apply the same objection there.
I am open to both adding a new context manager or even changing the behavior of the old one if possible without breaking everything.
I will say, this is the first time I've seen a very good example of when one could be useful - Jupyter Notebooks. In the past, the only way one could be open forever is if it is a global variable or you have a function you literally never leave. In those cases, I don't think it should surprise someone if it is still connected. In a sense, Jupyter notebooks are nothing but a script where everything is a global variable. Therefore decisions that make sense in a larger program might not make sense.
That said, I would not characterize GC finalization as being unreliable. I've purposely ensured the internals don't create cycles the GC system can see for this reason. Unless you create a chain by adding attributes to a connection that somehow point back to the same connection, it should be safe if you are sticking with CPython.
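To illustrate that distinction with plain Python (not pyodbc internals): reference counting finalizes acyclic objects as soon as they go out of scope, while a cycle has to wait for the cyclic garbage collector.
import gc

class Resource:
    def __del__(self):
        print("finalized")

a, b = Resource(), Resource()
a.other, b.other = b, a  # create a reference cycle
del a, b                 # nothing is printed yet: refcounting alone cannot free a cycle
gc.collect()             # the cyclic collector finalizes both objects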
Now, what options are there?
Break compatibility in a later version, but provide a flag when connecting to restore the old (current) behavior?
Create two new functions safe_connect and safe_cursor or sconnect and scursor? These would have to set a flag on the objects to know how to behave, so we would also have a flag you could set, I guess.
Provide a separate object in the pyodbc namespace that is a context manager? I don't like this as it doesn't match expectations.
Anything else? Any votes?
The basic question seems to be whether the context manager on the Connection class should either manage the connection (i.e. close it on exit) or manage a database transaction (i.e. commit on exit). Some people think the former, some people think the latter. Full disclosure, I'm very much in the former camp, which I think is more Pythonic. The current implementation of the context manager does the latter.
Ref:
https://realpython.com/python-with-statement/#managing-resources-in-python
https://book.pythontips.com/en/latest/context_managers.html#context-managers
https://docs.python.org/3/library/contextlib.html#contextlib.contextmanager
I agree it's a bit of a pain to add a custom context manager in code specifically to manage a pyodbc connection but I just wanted to remind folks that there is a built-in context manager utility contextlib.closing that essentially does the job of closing the connection on exit, for example:
from contextlib import closing
import pyodbc
with closing(pyodbc.connect('mydsn', autocommit=False)) as cnxn:
crsr = cnxn.cursor()
crsr.execute("UPDATE T1 SET ...")
cnxn.commit()
cnxn.closed # True
The basic question seems to be whether the context manager on the Connection class should either manage the connection (i.e. close it on exit) or manage a database transaction (i.e. commit on exit).
Well it is called pyodbc.connect and not say pyodbc.transact
Just a thought, but one possibility is to add a context manager to pyodbc specifically to encapsulate a database transaction. Something equivalent to:
from contextlib import contextmanager
@contextmanager
def transaction(*args, **kwargs) -> Cursor:
if kwargs.get('autocommit', False) is not False:
raise ValueError('Cannot set autocommit in the transaction context manager')
cnxn = connect(*args, **kwargs)
crsr = cnxn.cursor()
try:
yield crsr
cnxn.commit()
finally:
crsr.close()
cnxn.close() # includes a rollback under the hood
The parameters for this context manager would be the same as for the connect() function. It would be used as follows:
with pyodbc.transaction('mydsn') as crsr:
crsr.execute("UPDATE T1 SET ...")
crsr.execute("UPDATE T2 SET ...")
This would at least make it clear the context is a database transaction.
|
2025-04-01T04:34:46.709903
| 2022-11-17T20:02:23
|
1453910663
|
{
"authors": [
"UmbrellaMalware",
"mkrd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8719",
"repo": "mkrd/DictDataBase",
"url": "https://github.com/mkrd/DictDataBase/issues/38"
}
|
gharchive/issue
|
Add XPath-like searching for keys
{
"users": {
"Ben": {
"age": 30,
"job": "Software Engineer"
}
},
"Ben": {"job": "Plumber"}
}
print(DDB.at("users", key="job").read())
>>> "Plumber"
Have you thought about requiring the key to be more explicit? Maybe something like XPath (key='Ben/job'), jq (key='.Ben.job'), or glom (key='Ben.job'), or something else?
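For illustration, an explicit path-style lookup along those lines could be sketched as follows (illustrative only; the project's actual implementation is in the PR linked below):
def resolve(data: dict, path: str, sep: str = "/"):
    node = data
    for part in path.split(sep):
        node = node[part]  # raises KeyError if any segment is missing
    return node

doc = {"users": {"Ben": {"age": 30, "job": "Software Engineer"}}, "Ben": {"job": "Plumber"}}
assert resolve(doc, "users/Ben/job") == "Software Engineer"
assert resolve(doc, "Ben/job") == "Plumber"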
Hi, could you check this PR for this functionality?
https://github.com/mkrd/DictDataBase/pull/42
|
2025-04-01T04:34:46.713601
| 2023-03-17T00:13:10
|
1628493028
|
{
"authors": [
"cutlass90",
"mkshing"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8720",
"repo": "mkshing/e4t-diffusion",
"url": "https://github.com/mkshing/e4t-diffusion/issues/5"
}
|
gharchive/issue
|
Can you specify versions in requirements.txt?
Thank you for publishing this code.
I tried to run it but got an error I could not handle. It may be for the reason I have different libraries versions.
Error traceback is here
Traceback (most recent call last):
File "pretrain_e4t.py", line 20, in <module>
from diffusers import DDPMScheduler, AutoencoderKL
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/__init__.py", line 35, in <module>
from .models import (
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/__init__.py", line 19, in <module>
from .autoencoder_kl import AutoencoderKL
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/autoencoder_kl.py", line 23, in <module>
from .vae import Decoder, DecoderOutput, DiagonalGaussianDistribution, Encoder
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/vae.py", line 22, in <module>
from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/unet_2d_blocks.py", line 20, in <module>
from .attention import AdaGroupNorm, AttentionBlock
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/attention.py", line 22, in <module>
from .cross_attention import CrossAttention
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/diffusers/models/cross_attention.py", line 29, in <module>
import xformers.ops
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/ops/__init__.py", line 8, in <module>
from .fmha import (
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module>
from . import cutlass, flash, small_k, triton
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/ops/fmha/triton.py", line 15, in <module>
if TYPE_CHECKING or _is_triton_available():
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/__init__.py", line 33, in func_wrapper
value = func()
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/__init__.py", line 44, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/triton/__init__.py", line 12, in <module>
from .dropout import FusedDropoutBias, dropout # noqa
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/xformers/triton/dropout.py", line 13, in <module>
import triton
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/triton/__init__.py", line 20, in <module>
from .runtime import (
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import Config, Heuristics, autotune, heuristics
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/triton/runtime/autotuner.py", line 7, in <module>
from ..compiler import OutOfResources
File "/home/jupyter/e4t-diffusion/nenv/lib/python3.7/site-packages/triton/compiler.py", line 1047, in <module>
@functools.lru_cache
File "/opt/conda/lib/python3.7/functools.py", line 490, in lru_cache
raise TypeError('Expected maxsize to be an integer or None')
TypeError: Expected maxsize to be an integer or None
Apparently, you are using Python < 3.8. https://github.com/NVIDIA/DeepLearningExamples/issues/1016
Apparently, you are using Python < 3.8. NVIDIA/DeepLearningExamples#1016 You need Python >= 3.8 instead.
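For reference, the failing frame in the traceback is Triton applying functools.lru_cache as a bare decorator, which only became valid in Python 3.8; on 3.7 the decorated function itself is passed as maxsize. A minimal illustration of the error and a portable spelling:
import functools

# On Python < 3.8 this raises "TypeError: Expected maxsize to be an integer or None":
# @functools.lru_cache
# def square(x): ...

# Portable spelling that also works on 3.7:
@functools.lru_cache(maxsize=None)
def square(x):
    return x * x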
Indeed, switching to python3.8 fixed the problem. Thanks
|
2025-04-01T04:34:46.726834
| 2024-05-30T23:02:02
|
2326666476
|
{
"authors": [
"minghuaw"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8721",
"repo": "ml-explore/mlx-c",
"url": "https://github.com/ml-explore/mlx-c/pull/28"
}
|
gharchive/pull-request
|
Catching c++ exception as error
Description
This is an initial attempt at catching the C++ exception before it crosses the FFI boundary. The Python script is modified to also generate bindings that have "_try_" inserted between the namespace prefix and the function name (in separate files prefixed with "try_"), in addition to the original bindings; the newly added bindings perform a try-catch and return a tagged union to indicate success or failure. A more detailed description of the changes can be found below.
This draft is more about collecting feedback and thus only includes functions that return one of the following three types: mlx_array, mlx_vector_array, and mlx_vector_vector_array.
Related issue
#25
Major changes
1. New result types
Introduce a few result structs (in result.h) that contains a tag and a union for the success case and failure error message. Functions that help user access the tag and values in the union are provided in result.h as well. The mlx_array_result is shown below as an example
typedef enum {
mlx_result_tag_ok,
mlx_result_tag_runtime_error,
mlx_result_tag_invalid_argument,
mlx_result_tag_out_of_range,
} mlx_result_tag;
typedef struct {
mlx_result_tag tag;
union {
mlx_array ok_array;
mlx_string error_message;
};
} mlx_array_result;
mlx_result_tag mlx_array_result_get_tag(mlx_array_result* result);
mlx_array mlx_array_result_get_ok(mlx_array_result result);
mlx_string mlx_array_result_get_error_message(mlx_array_result result);
2. Modified script to generate additional bindings that return results
Modified the binding generator script c.py to generate the bindings that return the above-mentioned result struct. The generator script gains one optional argument --trycatch, which defaults to False when not passed to the script. When enabled, the script will generate bindings with "_try_" inserted into the function name. For example
python generator.py --header <path::to::mlx::linalg.h> --namespace "mlx::core::linalg" --trycatch > ../mlx/c/try_linalg.h
will generate the following header file
/* Copyright © 2023-2024 Apple Inc. */
/* */
/* This file is auto-generated. Do not edit manually. */
/* */
#include <stdio.h>
#ifndef MLX_TRY_LINALG_H
#define MLX_TRY_LINALG_H
#include "mlx/c/result.h"
#include "mlx/c/array.h"
#include "mlx/c/closure.h"
#include "mlx/c/future.h"
#include "mlx/c/ioutils.h"
#include "mlx/c/map.h"
#include "mlx/c/stream.h"
#include "mlx/c/string.h"
#ifdef __cplusplus
extern "C" {
#endif
mlx_array_result mlx_linalg_try_cholesky(mlx_array a, bool upper, mlx_stream s);
mlx_array_result mlx_linalg_try_inv(mlx_array a, mlx_stream s);
mlx_array_result mlx_linalg_try_norm_p(
mlx_array a,
double ord,
const int* axis,
size_t num_axis,
bool keepdims,
mlx_stream s);
mlx_array_result mlx_linalg_try_norm_ord(
mlx_array a,
mlx_string ord,
const int* axis,
size_t num_axis,
bool keepdims,
mlx_stream s);
mlx_array_result mlx_linalg_try_norm(
mlx_array a,
const int* axis,
size_t num_axis,
bool keepdims,
mlx_stream s);
mlx_vector_array_result mlx_linalg_try_qr(mlx_array a, mlx_stream s);
mlx_vector_array_result mlx_linalg_try_svd(mlx_array a, mlx_stream s);
#ifdef __cplusplus
}
#endif
#endif
For conciseness, the generated code with --implementation enabled will not be displayed here. The implementation code will perform try catch with new macros defined in private/utils.h to turn the array or exception into the result type.
Files generated
Below is a list of files generated using the above mentioned method
try_fast.h, try_fast.cpp
try_fft.h, try_fft.cpp
try_linalg.h, try_linalg.cpp
try_ops.h, try_ops.cpp
try_random.h, try_random.cpp
try_transforms.h, try_transforms.cpp
try_transforms_impl.h, try_transforms_impl.cpp
Example
An example examples/example-try.c is added to showcase that exception can be caught and returned as an error. Below shows the output from the new example
hello world!
array([[1, 2, 3],
[4, 5, 6]], dtype=float32)
arange(0, 3, 0.5)
arange is ok
array([0, 0.5, 1, 1.5, 2, 2.5], dtype=float32)
arange(0, 3, 0)
Error: [arange] Maximum size exceeded.
divive by 2!
array([[0.5, 1, 1.5],
[2, 2.5, 3]], dtype=float32)
Closing in favour of #29
|
2025-04-01T04:34:46.734350
| 2024-03-22T03:53:33
|
2201645280
|
{
"authors": [
"angeloskath",
"awni",
"zcbenz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8722",
"repo": "ml-explore/mlx",
"url": "https://github.com/ml-explore/mlx/pull/876"
}
|
gharchive/pull-request
|
Change unary/binary vmap to accept vector as axis
Proposed changes
It is hard to tell what the code does without looking at the source code when you call something like vmap(fun, 1, 2).
Refactored the code of vmap to make use of move semantics.
Reordered the code so the order of declaration matches the definition.
Checklist
Put an x in the boxes that apply.
[x] I have read the CONTRIBUTING document
[x] I have run pre-commit run --all-files to format my code / installed pre-commit prior to committing changes
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have updated the necessary documentation (if needed)
Looks like we are using these in the tests... you could update the tests to use the vector version. I don't mind removing these overloads, they are mostly there to make it easy to use vmap in C++. On the other hand, I don't really mind keeping them either..
PS probably you did not run the C++ tests which is why you did not notice we use them. I recommend running those in addition to the Python tests, especially if you are doing a lot of C++ changes.
Sorry I forgot the C++ tests.
It is actually nice for the APIs to be able to accept unary/binary functions. I have refactored the code to make unary/binary vmap accept vectors instead of ints, so all three versions of vmap have the same APIs.
(It is also possible to write a template version of vmap that accepts a function with arbitrary numbers of arrays, but without C++20 it would be too hard to deduce the types of function parameters from lambda.)
The axes parameters are intended to match the number of arrays, which makes it easy to call these functions from C++; doing otherwise partly defeats the purpose.
Probably we should close this one, as I don't really think it makes sense to change it to how you have it. Removing some overloads might be nice, but again isn't super important since we basically never use those functions except in the tests.
Yeah, I feel I'm spending too much effort on something not important.
@awni By the way, is it possible to reuse the build dir for building both python tests and C++ tests? Running python3 setup.py build_ext --inplace && cmake --build build results in cmake error.
I never tried what you are doing exactly but I build in the same dir usually using:
env CMAKE_BUILD_PARALLEL_LEVEL="" python setup.py develop
And for C++ in the build dir (though cmake --build build -- -j should be fine)
make -j
This is my goto for building both
CMAKE_ARGS='-DMLX_BUILD_TESTS=ON' python setup.py build_ext --inplace -j 8
|
2025-04-01T04:34:46.743986
| 2022-04-08T23:07:07
|
1197921182
|
{
"authors": [
"eduongAZ"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8723",
"repo": "ml4ai/tomcat-baseline-tasks",
"url": "https://github.com/ml4ai/tomcat-baseline-tasks/issues/60"
}
|
gharchive/issue
|
Hide the timer during the affective task
Chinmai reported that the participants relied on the timer to synchronize their taps instead of establishing their own tapping rhythm. We should consider hiding the timer from the participants to ensure that they rely on each other for timing their taps.
No longer relevant
|
2025-04-01T04:34:46.748255
| 2024-03-11T15:21:23
|
2179417941
|
{
"authors": [
"GeorgesLorre",
"mrchtr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8724",
"repo": "ml6team/fondant",
"url": "https://github.com/ml6team/fondant/pull/902"
}
|
gharchive/pull-request
|
Implementation new dataset interface
First steps for the implementation of the new dataset interface:
Removed Pipeline class
Added Workspace singleton to hold pipeline name, base_path, etc. .. (shouldn't be the focus of this PR)
Moved Pipeline.read(..) to Dataset class
Starting to look good @mrchtr
I agree that the workspace concept might still need some further refinement (how we instantiate it and how we link it to an execution), but it is already serving us. We could refine it in a follow-up PR, or it might still be mergeable into the runners/compilers.
The pipeline/Dataset changes are looking good, still some todo's but the mvp is here:
move the .read() method
add validation
I'm ok with merging this in the other branch and focusing on green pipelines. Then we can add the missing functionality.
Sounds like a plan! Going to fix the tests and then we can divide the work from there on.
|
2025-04-01T04:34:46.750500
| 2022-12-21T13:03:19
|
1506266686
|
{
"authors": [
"mikekeke"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8725",
"repo": "mlabs-haskell/embedano",
"url": "https://github.com/mlabs-haskell/embedano/issues/8"
}
|
gharchive/issue
|
HD wallet library with no_std support
Core functionality:
[x] Key and address derivation for the given path. Probably, we will use some known/static entropy for root key creation, as entropy generation may be part of clients' code and depend on concrete hardware (see also #10 )
[x] ~Cardano transaction signing~ (scope shrunk to "sign transaction id")
[x] Arbitrary data signing
[x] Proof of ownership
UPD 03-02-22:
[ ] clean up the current working variant
Working variant is ready, but some cleanup and polishing is required.
|
2025-04-01T04:34:46.776347
| 2023-09-27T16:29:47
|
1915941433
|
{
"authors": [
"marcenacp",
"omshinde"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8726",
"repo": "mlcommons/croissant",
"url": "https://github.com/mlcommons/croissant/issues/231"
}
|
gharchive/issue
|
Converter script can output a valid JSON-LD for ibm-nasa-geospatial/hls_burn_scars.
Link: https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars
@xhagrg @omshinde I could successfully generate the Croissant file with:
python scripts/from_huggingface_to_croissant.py --dataset ibm-nasa-geospatial/hls_burn_scars --output /tmp/json-ld.json
However, when generating this dataset (python scripts/load.py --file /tmp/json-ld.json --record_set=default), I still have an error:
...
File "/usr/local/google/home/pierremarcenac/Documents/mlcommons/croissant/python/mlcroissant/mlcroissant/_src/operation_graph/operations/field.py", line 29, in _cast_value
return deps.PIL_Image.open(io.BytesIO(value))
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7f38038690d0>
Opening the parquet file with Hugging Face gives the same error, so it's likely that the generated parquet files are corrupted.
So, in this case, we cannot rely on HF infrastructure.
There are 804 512x512 scenes. Its primary purpose is for training geospatial machine learning models.\n",
"citation": "@software{HLS_Foundation_2023,\n author = {Phillips, Christopher and Roy, Sujit and Ankur, Kumar and Ramachandran, Rahul},\n doi = {10.57967/hf/0956},\n month = aug,\n title = {{HLS Foundation Burnscars Dataset}},\n url = {https://huggingface.co/ibm-nasa-geospatial/hls_burn_scars},\n year = {2023}\n}\n",
"license": "cc-by-4.0",
"url": "https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars",
"distribution": [
{
"@type": "sc:FileObject",
"name": "tar-gz",
"description": "Source *.tar.gz containing all the data.",
"contentUrl": "https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars/resolve/main/hls_burn_scars.tar.gz",
"encodingFormat": "application/x-tar",
"sha256": "4e6f99a75cb2c500547b20662a15cbd531dc421376f815e91846ea542798e8e6"
},
{
"@type": "sc:FileSet",
"name": "source-images",
"containedIn": "tar-gz",
"encodingFormat": "image/tiff",
"includes": "*/*._merged.tif"
},
{
"@type": "sc:FileSet",
"name": "source-annotations",
"containedIn": "tar-gz",
"encodingFormat": "image/tiff",
"includes": "*/*.mask.tif"
}
],
"recordSet": [
{
"@type": "ml:RecordSet",
"name": "images",
"description": "The images.",
"field": [
{
"@type": "ml:Field",
"name": "id",
"description": "Unique ID.",
"dataType": "sc:Text",
"source": {
"distribution": "source-images",
"extract": {
"fileProperty": "filename"
},
"transform": {
"regex": "^(\\*)\\._merged\\.tif$"
}
}
},
{
"@type": "ml:Field",
"name": "content",
"description": "Each tiff file contains a 512x512 pixel tiff file. Scenes contain six bands, and masks have one band. For satellite scenes, each band has already been converted to reflectance.",
"dataType": "sc:ImageObject",
"source": {
"distribution": "source-images",
"extract": {
"fileProperty": "content"
}
}
},
{
"@type": "ml:Field",
"name": "split",
"description": "The split the data belongs to (either `training` or `validation`). The 804 files have been randomly split into training (2/3) and validation (1/3) directories, each containing the masks, scenes, and index files.",
"dataType": "sc:Text",
"source": {
"distribution": "source-images",
"extract": {
"fileProperty": "fullpath"
},
"transform": {
"regex": "^(training|validation)\\/\\*\\.tiff$"
}
}
}
]
},
{
"@type": "ml:RecordSet",
"name": "annotations",
"description": "The annotations.",
"field": [
{
"@type": "ml:Field",
"name": "id",
"description": "Unique ID.",
"dataType": "sc:Text",
"source": {
"field": "source-annotations",
"extract": {
"fileProperty": "filename"
},
"transform": {
"regex": "^(\\*)\\.mask\\.tif$"
}
}
},
{
"@type": "ml:Field",
"name": "content",
"description": "Masks are a single band with values: 1 = Burn scar 0 = Not burned -1 = Missing data.",
"dataType": "sc:ImageObject",
"source": {
"distribution": "source-annotations",
"extract": {
"fileProperty": "content"
}
}
}
]
},
{
"@type": "ml:RecordSet",
"name": "images_with_annotations",
"description": "The images with the annotations.",
"field": [
{
"@type": "ml:Field",
"name": "split",
"description": "See images/split.",
"dataType": "sc:Text",
"source": {
"field": "images/split"
}
},
{
"@type": "ml:Field",
"name": "id",
"description": "Unique ID.",
"dataType": "sc:Text",
"references": {
"field": "annotations/id"
},
"source": {
"field": "images/id"
}
},
{
"@type": "ml:Field",
"name": "image",
"description": "See images/content.",
"dataType": "sc:ImageObject",
"source": {
"field": "images/content"
}
},
{
"@type": "ml:Field",
"name": "annotation",
"description": "See annotations/content.",
"dataType": "sc:ImageObject",
"source": {
"field": "annotations/content"
}
}
]
}
]
}
However, there is still a problem when loading this file, because there is a brand new case where we have to "chain" multiple RecordSets. I can try to implement this and add the dataset to the integration tests.
Thank you @marcenacp ..Really appreciate it. Will it be possible for us to tag along in this development? Please let us know. Both of us, @xhagrg and I are interested.
However, there is still a problem when loading this file, because there is a brand new case where we have to "chain" multiple RecordSets. I can try to implement this and add the dataset to the integration tests.
Am I correct to interpret "chaining" as "nesting" of files? Also, if a json-ld file is loaded correctly, can we directly use it for training or access the dataset from it? Just trying to understand the workflow.
Am I correct to interpret "chaining" as "nesting" of files?
It's the first example of an ml:RecordSet consuming another ml:RecordSet. This is what I meant with "chain".
It's still a WIP, but PR https://github.com/mlcommons/croissant/pull/244 should allow consuming the JSON-LD for ibm-nasa-geospatial/hls_burn_scars
Thanks for the update @marcenacp ..Appreciate it.. We will report back to you with confirmation.
cc: @xhagrg
@omshinde @xhagrg Do you have any news?
After consideration, I think option #1 (https://github.com/mlcommons/croissant/issues/231#issuecomment-1764773876) is better in the long term, because Hugging Face doesn't auto-convert big datasets to Parquet.
Thanks @marcenacp .. I am getting following message while executing the script -
(base) rajatshinde@shinde23-mbp mlcroissant % python scripts/from_huggingface_to_croissant.py --dataset ibm-nasa-geospatial/hls_burn_scars --output /tmp/json-ld.json
Traceback (most recent call last):
File "/Users/rajatshinde/impact/croissant/python/mlcroissant/mlcroissant/scripts/from_huggingface_to_croissant.py", line 193, in <module>
app.run(main)
File "/opt/homebrew/anaconda3/lib/python3.10/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/opt/homebrew/anaconda3/lib/python3.10/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "/Users/rajatshinde/impact/croissant/python/mlcroissant/mlcroissant/scripts/from_huggingface_to_croissant.py", line 185, in main
jsonld = convert(dataset)
File "/Users/rajatshinde/impact/croissant/python/mlcroissant/mlcroissant/scripts/from_huggingface_to_croissant.py", line 147, in convert
encoding_format=mlc.EncodingFormat.GIT,
AttributeError: module 'mlcroissant' has no attribute 'EncodingFormat'
Python version - 3.10.9
OS - MacOS
Steps -
Cloned the mlcroissant repo
Run installation using python -m pip install ".[dev]"
Executed python scripts/from_huggingface_to_croissant.py --dataset ibm-nasa-geospatial/hls_burn_scars --output /tmp/json-ld.json
Am I missing something here?
You should use the JSON-LD from https://github.com/mlcommons/croissant/issues/231#issuecomment-1764773876.
Copy the JSON to /tmp/hls_burn_scars.json.
Run the generation in Python with:
import mlcroissant as mlc
dataset = mlc.Dataset("/tmp/hls_burn_scars.json", debug=True)
records = dataset.records("images_with_annotations")
print(next(iter(records)))
The last step will return the first record which is a dict with:
{
"image": "... very long bytes with the image ...",
"split": b"training",
"id": b"subsetted_512x512_HLS.S30.T10SDH.2020248.v1.4",
"annotation": <PIL.TiffImagePlugin.TiffImageFile image mode=I size=512x512 at 0x7F0256311B40>
}
(PS: However I will investigate the error you encountered with scripts/from_huggingface_to_croissant.py.)
Am I missing something here?
As far as the Hugging Face script is concerned, you need to be in the Python project (= at the same level as the pyproject.toml).
git clone ...
cd croissant/python/mlcroissant (and not cd croissant/python/mlcroissant/mlcroissant)
python mlcroissant/scripts/from_huggingface_to_croissant.py --dataset ...
PR https://github.com/mlcommons/croissant/pull/283 will simplify this by having a CLI that you can call from the command line.
The last step will return the first record which is a dict with:
{
"image": "... very long bytes with the image ...",
"split": b"training",
"id": b"subsetted_512x512_HLS.S30.T10SDH.2020248.v1.4",
"annotation": <PIL.TiffImagePlugin.TiffImageFile image mode=I size=512x512 at 0x7F0256311B40>
}
If it's useful to you, you can create a PR with the dataset. Maybe you want to add some more metadata.
I'm able to generate this; there was some issue with pygraphviz in my local setup. Thank you @marcenacp
PR https://github.com/mlcommons/croissant/pull/283 will simplify this by having a CLI that you can call from the command line. Please, tell me if it fixes your issue.
Great. Thanks for this too.
So, I confirm that this script is working, although we will have to wait till we get it working for the HF datasets as most of our datasets will be hosted there. We would try to perform some experiments for our other datasets and post the updates here.
till we get it working for the HF datasets as most of our datasets will be hosted there. We would try to perform some experiments for our other datasets and post the updates here.
The script scripts/from_huggingface_to_croissant.py works when Hugging Face successfully auto-generated the Parquet files for the dataset. If it's not the case (as for ibm-nasa-geospatial/hls_burn_scars), the JSON-LD must be manually crafted. That's what I did in https://github.com/mlcommons/croissant/issues/231#issuecomment-1764773876. It may also be the case for the other datasets.
Got it, thank you.
Closing this issue as it's now resolved!
|
2025-04-01T04:34:46.901445
| 2021-02-03T23:39:28
|
800788546
|
{
"authors": [
"mwawrzos"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8727",
"repo": "mlcommons/logging",
"url": "https://github.com/mlcommons/logging/pull/83"
}
|
gharchive/pull-request
|
ComplianceChecker update
This PR includes:
copy of 0.7.0 rules to 1.0.0
making 1.0.0 the default version
added checks for gradient_accumulation_steps (#78)
adding RNN-T rules (it includes a new check for weights_initialization key (#80 ))
adding Unet3D rules
recheck
|
2025-04-01T04:34:46.909954
| 2016-04-19T10:21:29
|
149415457
|
{
"authors": [
"mlenser"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8728",
"repo": "mlenser/roll20-character-sheets",
"url": "https://github.com/mlenser/roll20-character-sheets/issues/110"
}
|
gharchive/issue
|
Secondary Inventory Toggles
https://app.roll20.net/forum/permalink/3106881/
Closed as part of my new issue tracker via bitbucket
|
2025-04-01T04:34:47.083597
| 2021-06-17T13:29:06
|
923928258
|
{
"authors": [
"MichaelLeeHobbs",
"mliebelt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8729",
"repo": "mliebelt/pgn-spec-commented",
"url": "https://github.com/mliebelt/pgn-spec-commented/issues/11"
}
|
gharchive/issue
|
Question: What are your thoughts on updating the standard?
What are your thoughts on updating the standard? Maybe call it pgn spec 2.0? I'd be willing to help as I have a lot of experience with these sorts of standards ie DICOM/HL7.
I have no idea on that, but perhaps there would be enough interest in doing this. By comparing the defined "standard" (with its holes in it), and the extensions that partly work around it, I think a new version would be possible. The goals are essential here:
Try to be liberal in reading, but provide much more guidance than currently available.
Try to include all the extensions that are widely used: different variants (as supported by all major chess websites), additions for clock / evaluation / ...
Try to fill the gaps, like the NAGs that are not defined or are inconsistently defined.
... (more to come)
I have no idea how much work this will be, and I don't know who would define a standard when there is no standardization body available.
Nice idea ...
I don't know who would define a standard when there is no standardization body available. If we did a good job and people adopted it, then we would become the de facto standardization body. Eventually, some big organization would likely ask us to turn it over, but I'm more than fine with that so long as they keep it open. It would be pointless if they closed it, so that's not likely to happen.
Thoughts I've had.
Strict vs Non-Strict. The standard could define Strict as MUST be x, y, z, etc., whereas Non-Strict is more flexible. Strict would still be flexible to a degree, i.e. it could still have custom tags, but core/standard tags must come first.
File tags at top? A set of tags that gives information about what is in the file, for multi-game files.
JSON spec.
|
2025-04-01T04:34:47.089303
| 2023-10-16T19:17:26
|
1945923264
|
{
"authors": [
"fabio-ferraz-sh3",
"spawnia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8730",
"repo": "mll-lab/laravel-graphiql",
"url": "https://github.com/mll-lab/laravel-graphiql/issues/33"
}
|
gharchive/issue
|
Wrong function call
I received this msg: "Method MLL\GraphiQL\DownloadAssetsCommand::explorerPluginPath does not exist."
In DownloadAssetsCommand.php only "pluginExplorerPath" exists.
I think it should be explorerPluginPath instead.
Your view is outdated.
|
2025-04-01T04:34:47.157134
| 2022-11-15T02:55:31
|
1449074553
|
{
"authors": [
"mmahdavian",
"zhang-mu-zhi"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8731",
"repo": "mmahdavian/semantic_visual_teach_repeat",
"url": "https://github.com/mmahdavian/semantic_visual_teach_repeat/issues/2"
}
|
gharchive/issue
|
Could NOT find yolo2(missing:yolo2_DIR)
When I run "catkin_make", the following error occurs
-- Could NOT find yolo2 (missing: yolo2_DIR)
-- Could not find the required component 'yolo2'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/melodic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "yolo2" with any of
the following names:
yolo2Config.cmake
yolo2-config.cmake
Add the installation prefix of "yolo2" to CMAKE_PREFIX_PATH or set
"yolo2_DIR" to a directory containing one of the above files. If "yolo2"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
Robust UAV Visual Teach and Repeat Using Only Sparse Semantic Object Features/semantic_vtr-master/CMakeLists.txt:10 (find_package)
-- Configuring incomplete, errors occurred!
See also "/home/zhangmuzhi/svtar_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/zhangmuzhi/svtar_ws/build/CMakeFiles/CMakeError.log".
Invoking "cmake" failed
How do I handle this error?
Hi. I think your problem is related to another paper and repo. You should ask it here: https://github.com/fshamshirdar/semantic_vtr
|
2025-04-01T04:34:47.162488
| 2023-12-16T03:10:36
|
2044581662
|
{
"authors": [
"mmaitre314",
"quasar098"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8732",
"repo": "mmaitre314/picklescan",
"url": "https://github.com/mmaitre314/picklescan/pull/23"
}
|
gharchive/pull-request
|
Patched memo vulnerability
quick fix for https://github.com/mmaitre314/picklescan/issues/22
I felt bad for solving the challenge on the Imaginary ctf but not disclosing the vulnerability, so I am making this pull request
it patches the case where the memo is not in order.
import pickle, base64
pkl = b''.join([
pickle.UNICODE + b'os\n',
pickle.PUT + b'2\n',
pickle.POP,
pickle.UNICODE + b'system\n',
pickle.PUT + b'3\n',
pickle.POP,
pickle.UNICODE + b'torch\n',
pickle.PUT + b'0\n',
pickle.POP,
pickle.UNICODE + b'LongStorage\n',
pickle.PUT + b'1\n',
pickle.POP,
pickle.GET + b'2\n',
pickle.GET + b'3\n',
pickle.STACK_GLOBAL,
pickle.MARK,
pickle.UNICODE + b'cat flag.txt\n',
pickle.TUPLE,
pickle.REDUCE
]) + b'.'
print(base64.b64encode(pkl).decode())
Perhaps another idea is that pickles could be checked to make sure the memo is in order (the pickler always numbers the memo from 0 counting up, from what I can see, so any other order is suspicious).
the test appears to have failed but I do not think it was my code that did it
First, thanks for the contribution -- I thought I would make progress on that issue but didn't. Second, the test failure looks pretty closely related to the code changed. It seems that a memo position is accessed but does not exist (KeyError: 1 accessing memo[int(ops[n - offset][1])]). Probably a bit more of a fix is needed. I'll try to debug (time permitting).
Should the fix apply to PUT, BINPUT, LONG_BINPUT but not MEMOIZE? It looks like the latter has no input argument and puts the memo at the end: https://github.com/python/cpython/blob/1583c40be938d2caf363c126976bc8757df90b13/Lib/pickletools.py#L1864
On the other hand, the pickletools doc mentions the other three take the memo location as an input parameter, and there the fix looks valid: https://github.com/python/cpython/blob/1583c40be938d2caf363c126976bc8757df90b13/Lib/pickletools.py#L1827
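A rough sketch of order-independent memo tracking along those lines (illustrative only, not picklescan's actual code): PUT/BINPUT/LONG_BINPUT store the value on top of the stack under the index given as the opcode argument, while MEMOIZE stores it under the next free index.
import pickletools

def build_memo(data: bytes) -> dict:
    memo, last_string = {}, None
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("UNICODE", "BINUNICODE", "SHORT_BINUNICODE"):
            last_string = arg
        elif opcode.name in ("PUT", "BINPUT", "LONG_BINPUT"):
            memo[int(arg)] = last_string  # index comes from the opcode argument
        elif opcode.name == "MEMOIZE":
            memo[len(memo)] = last_string  # no argument: appends at the next index
    return memo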
oops, i forgot about MEMOIZE i was just testing with the proof of concept vulnerability that i found on the imaginary ctf website, i will check that out
Cool. The fix looks simpler than I expected. I can add the MEMOIZE case and some tests. Thanks for your help!
alright
|
2025-04-01T04:34:47.171687
| 2016-01-22T19:42:46
|
128229009
|
{
"authors": [
"mmberg",
"zywind"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:8733",
"repo": "mmberg/nadia",
"url": "https://github.com/mmberg/nadia/issues/4"
}
|
gharchive/issue
|
HTTPAction expects XML responses
It seems HTTPAction expects XML responses. A lot of services today return JSON data. I think we should handle that. Also plain text responses. I am not sure what the API should look like though.
One possibility would be to create new subclasses HttpXmlAction, HttpJsonAction, etc.
Another one would be to only have HTTPAction and add a new parameter "responseType" that allows distinguishing between an XPath and a JSON expression.
I will have a look at it...
There is now an HttpJsonAction which can be used to access responses from RESTful web services.
If you want to access the result, just use JS notation like "title" or "object.name" or "array[0].title". Internally, we use https://github.com/jayway/JsonPath, so have a look there in case of questions about the syntax of the query. The "$." is automatically prepended.
If you want to use the old functionality of HTTPAction, you have to use HttpXpathAction now.
2c71a7e188fca432e5752e145e372803a2889de7 (development)
Will be merged with master after more testing.
Added HttpTextAction for processing plain text responses. You can provide a regex as query to filter the response.
See f96c4ee1fe078b541229b959ecd1ffc07f2fadcc
Examples (extracts):
HttpJsonAction:
HttpAction httpaction=new HttpJsonAction("My result is: #result");
httpaction.setUrl("http://freemusicarchive.org/api/get/genres.json");
httpaction.setMethod("get");
httpaction.setParams("api_key=60BLHNQCAOUFPIBZ&limit=2");
httpaction.setQuery("dataset[0].genre_color");
task.setAction(httpaction);
TextAction:
HttpTextAction httpaction=new HttpTextAction("My result is: #result");
httpaction.setUrl("https://www.example.com/");
httpaction.setMethod("get");
httpaction.setQuery("coo.{5}");
task.setAction(httpaction);
|