Dataset columns:
  added     string (date)         2025-04-01 04:05:38 to 2025-04-01 07:14:06
  created   timestamp[us] (date)  2001-10-09 16:19:16 to 2025-01-01 03:51:31
  id        string                lengths 4 to 10
  metadata  dict
  source    string                2 classes
  text      string                lengths 0 to 1.61M
2025-04-01T04:54:42.658137
2021-12-01T14:33:09
1068480222
{ "authors": [ "iamAbhishekkumar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13144", "repo": "AnuvabSen/Virtual_Key_Board_Using_Opencv", "url": "https://github.com/AnuvabSen/Virtual_Key_Board_Using_Opencv/pull/3" }
gharchive/pull-request
Fixed #1
feat: added requirements.txt and .gitignore files
fix: removed .idea file as it is specific to PyCharm configs
Minor bug detected. Will resend after fixing that.
2025-04-01T04:54:42.717874
2015-07-14T01:54:38
94850001
{ "authors": [ "Apostolique", "KickingRG", "laxika", "marcocc" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13145", "repo": "Apostolique/Agar.io-bot", "url": "https://github.com/Apostolique/Agar.io-bot/issues/209" }
gharchive/issue
Impressive
Your bot is so impressive!! It really helped me get to the leader place!!!
That's pretty nice. I guess Terminators aren't so bad after all.
I hope you will add the auto-split feature/option. It will be a lot better.
I had one hit a 6k score. Got no screenshot sadly. :(
Little sneak peek of what I'll be working on today: http://i.imgur.com/x9crfEF.png
ragegaming.net/Screenshots/2015-07-11_12-38-27.mp4 18k eh
I've shown this numerous times. But it's pretty sweet. 5 hours. Did it while I was asleep.
If I've seen this before, I probably wasn't looking at those numbers. :D xD
2025-04-01T04:54:42.718953
2016-05-05T21:25:49
153329897
{ "authors": [ "Drflash55", "Stedman42" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13146", "repo": "Apostolique/Agar.io-bot", "url": "https://github.com/Apostolique/Agar.io-bot/issues/623" }
gharchive/issue
Agario update
I think agar updated today, and this brilliant bot no longer works. I have been using it for a few weeks now and it always amazed me how well it did. Could someone please fix it so it works again? Please?
Refer to #616.
2025-04-01T04:54:42.755350
2016-12-01T17:13:38
192907415
{ "authors": [ "RaiaN", "cdonati", "menivaitsi" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13147", "repo": "AppScale/appscale", "url": "https://github.com/AppScale/appscale/pull/2242" }
gharchive/pull-request
Appscale dashboard cron page
Hi! I implemented a basic system to view/run cron jobs from the AppScale dashboard. It looks like this: What do you think about it? I still have one issue about how to give access to the /etc/cron.d/ and /var/apps/ directories for the appscaledashboard application. I did this by modifying the file AppServer/google/appengine/tools/devappserver2/python/sandbox.py. Peter
Hi! I made changes regarding: the unused method, parsing of 7-element cron entries (changes reverted), and 2 new tests for 4-element cron entries. About the cron parser implementation: I think the default AppScale implementation is definitely better, because for a schedule like "every 30 minutes from 6:45 to 7:50" we would usually expect the cron job to be executed at 6:45, 7:15 and 7:45. Anyway, one possible improvement can be made for 7-element cron entries: instead of creating a separate cron entry for the same minutes but different hours, we can use only one cron entry. For example:
30,35,40,45,50,55 1 * * *
0,5,10,15,20,25,30,35,40,45,50,55 2 * * *
0,5,10,15,20,25,30,35,40,45,50,55 3 * * *
0,5,10,15,20,25,30,35,40,45,50,55 4 * * *
0 5 * * *
can be replaced with
30,35,40,45,50,55 1 * * *
0,5,10,15,20,25,30,35,40,45,50,55 2-4 * * *
0 5 * * *
Hi! Yeah, actually the compressing should be done carefully. I'm thinking about doing it in a different commit, maybe not even in this PR. Is that ok?
Of course! More PRs are always welcome. :) I will start a build for this branch. https://ocd.appscale.com:8080/job/Daily Build/2387/
@RaiaN Thank you for your contribution! :) P.S. Feel free to reach out with more ideas or questions.
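Editor's illustration (not part of the original thread): the hour-range compression proposed above is straightforward to sketch. The following Python snippet is a rough, hypothetical implementation (AppScale's actual cron parser is not shown here) that merges consecutive-hour entries sharing the same minute list into a single entry with an hour range.

```python
def compress_cron_entries(entries):
    """Merge consecutive-hour cron entries that share the same minute list
    and trailing fields into one entry with an hour range (e.g. 2-4)."""
    parsed = []
    for entry in entries:
        minutes, hour, rest = entry.split(" ", 2)
        parsed.append((minutes, int(hour), rest))

    out, i = [], 0
    while i < len(parsed):
        minutes, start, rest = parsed[i]
        end = start
        # Extend the range while the next entry differs only by hour + 1.
        while (i + 1 < len(parsed)
               and parsed[i + 1][0] == minutes
               and parsed[i + 1][2] == rest
               and parsed[i + 1][1] == end + 1):
            end = parsed[i + 1][1]
            i += 1
        hour_field = str(start) if start == end else f"{start}-{end}"
        out.append(f"{minutes} {hour_field} {rest}")
        i += 1
    return out

entries = [
    "30,35,40,45,50,55 1 * * *",
    "0,5,10,15,20,25,30,35,40,45,50,55 2 * * *",
    "0,5,10,15,20,25,30,35,40,45,50,55 3 * * *",
    "0,5,10,15,20,25,30,35,40,45,50,55 4 * * *",
    "0 5 * * *",
]
print("\n".join(compress_cron_entries(entries)))
# 30,35,40,45,50,55 1 * * *
# 0,5,10,15,20,25,30,35,40,45,50,55 2-4 * * *
# 0 5 * * *
```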
2025-04-01T04:54:42.882364
2023-04-04T16:14:46
1654186122
{ "authors": [ "jl-applied", "youngxguo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13148", "repo": "AppliedIntuition/black", "url": "https://github.com/AppliedIntuition/black/pull/1" }
gharchive/pull-request
Pin click==8.0.4
See: https://github.com/psf/black/issues/2964#issuecomment-1080971383. This is also the version we pin click to in applied2. Seeing this error without click pinned:
Traceback (most recent call last):
  File "/home/young/.local/bin/black", line 8, in <module>
    sys.exit(patched_main())
  File "/home/young/.local/lib/python3.8/site-packages/black/__init__.py", line 6609, in patched_main
    patch_click()
  File "/home/young/.local/lib/python3.8/site-packages/black/__init__.py", line 6598, in patch_click
    from click import _unicodefun  # type: ignore
ImportError: cannot import name '_unicodefun' from 'click' (/home/young/.local/lib/python3.8/site-packages/click/__init__.py)
Does this give substantially better UX than, say, pip install click==8.0.4? We're working on removing our dependency on this fork in favor of using vanilla black and would rather avoid touching this unless necessary.
We're deprecating this fork soon anyway, so closing this.
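Editor's note (not from the original PR): the traceback above comes from black's patch_click() importing a private click module that was removed in click 8.1.0, which is why pinning click to 8.0.4 avoids the crash. A minimal, hypothetical Python reproduction of that failure mode:

```python
# Illustrative only: older black releases did roughly `from click import _unicodefun`
# inside patch_click(); click >= 8.1.0 removed that private module.
try:
    from click import _unicodefun  # exists in click <= 8.0.x
    print("click still ships _unicodefun; old black's patch_click() would work")
except ImportError:
    print("click >= 8.1.0: _unicodefun is gone, so old black crashes at startup")
```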
2025-04-01T04:54:43.410894
2021-03-17T17:33:04
834023276
{ "authors": [ "Arkq", "hymie0" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13149", "repo": "Arkq/bluez-alsa", "url": "https://github.com/Arkq/bluez-alsa/issues/426" }
gharchive/issue
Couldn't get BlueALSA PCM list: Rejected send message
Hello. I'm brand new to the world of Bluetooth, and I hope you can point out what I (probably) did wrong. I have my Jabra headset paired to my computer. I set up my /etc/asound.conf:
pcm.btheadset {
    type plug
    slave {
        pcm {
            type bluealsa
            device 30:50:75:43:F4:AE
            profile "a2dp"
        }
    }
    hint {
        show on
        description "BT Headset"
    }
}
Then I run bluealsa ...
% sudo src/bluealsa
src/bluealsa: D: ../../src/bluez.c:497: Creating media endpoint object: /org/bluez/hci0/A2DP/SBC/Source/1
src/bluealsa: D: ../../src/bluez.c:413: Registering media endpoint: /org/bluez/hci0/A2DP/SBC/Source/1
src/bluealsa: D: ../../src/bluez.c:497: Creating media endpoint object: /org/bluez/hci0/A2DP/SBC/Source/2
src/bluealsa: D: ../../src/bluez.c:413: Registering media endpoint: /org/bluez/hci0/A2DP/SBC/Source/2
src/bluealsa: D: ../../src/bluez.c:792: Creating hands-free profile object: /org/bluez/HSP/AudioGateway
src/bluealsa: D: ../../src/bluez.c:726: Registering hands-free profile: /org/bluez/HSP/AudioGateway
src/bluealsa: D: ../../src/bluez.c:792: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
src/bluealsa: D: ../../src/bluez.c:726: Registering hands-free profile: /org/bluez/HFP/AudioGateway
src/bluealsa: D: ../../src/main.c:396: Acquiring D-Bus service name: org.bluealsa
src/bluealsa: D: ../../src/main.c:401: Starting main dispatching loop
But none of the tools are succeeding.
% amixer -D bluealsa
ALSA lib ../../../src/asound/bluealsa-ctl.c:972:(_snd_ctl_bluealsa_open) Couldn't get BlueALSA PCM list: Rejected send message, 1 matched rules; type="method_call", sender=":1.186" (uid=1000 pid=23605 comm="amixer -D bluealsa ") interface="org.bluealsa.Manager1" member="GetPCMs" error name="(unset)" requested_reply="0" destination="org.bluealsa" (uid=0 pid=22707 comm="src/bluealsa ")
amixer: Mixer attach bluealsa error: No such device
Can you give me a push?
OK, I think I found my first mistake. I had to be running bluealsa before I paired the headphones.
Now I run bluealsa and I have this: src/bluealsa: D: ../../src/dbus.c:59: Called: org.bluez.Profile1.NewConnection() on /org/bluez/HFP/AudioGateway src/bluealsa: D: ../../src/ba-rfcomm.c:1285: Created new RFCOMM thread [ba-rfcomm]: HFP Audio Gateway (CVSD) src/bluealsa: D: ../../src/ba-rfcomm.c:901: Starting RFCOMM loop: HFP Audio Gateway (CVSD) src/bluealsa: D: ../../src/bluez.c:616: HFP Audio Gateway (CVSD) configured for device 30:50:75:43:F4:AE src/bluealsa: D: ../../src/ba-transport.c:669: Starting transport: HFP Audio Gateway (CVSD) src/bluealsa: D: ../../src/sco.c:245: IO loop: START: sco_thread: HFP Audio Gateway (CVSD) src/bluealsa: D: ../../src/ba-transport.c:1043: Created new transport thread [ba-sco]: HFP Audio Gateway (CVSD) src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+BRSF, value:923 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:+BRSF, value:2272 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/ba-rfcomm.c:126: RFCOMM: HFP Audio Gateway (CVSD) state transition: 0 -> 2 src/bluealsa: D: ../../src/at.c:161: AT message: TEST: command:+CIND, value:(null) src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:+CIND, value:("service",(0-1)),("call",(0,1)),("callsetup",(0-3)),("callheld",(0-2)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)) src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/ba-rfcomm.c:126: RFCOMM: HFP Audio Gateway (CVSD) state transition: 2 -> 5 src/bluealsa: D: ../../src/at.c:161: AT message: GET: command:+CIND, value:(null) src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,5 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/ba-rfcomm.c:126: RFCOMM: HFP Audio Gateway (CVSD) state transition: 5 -> 7 src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+CMER, value:3, 0, 0, 1 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/ba-rfcomm.c:126: RFCOMM: HFP Audio Gateway (CVSD) state transition: 7 -> 8 src/bluealsa: D: ../../src/ba-rfcomm.c:126: RFCOMM: HFP Audio Gateway (CVSD) state transition: 8 -> 9 src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+VGS, value:07 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+VGM, value:09 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+XAPL, value:0B0E-BABE-0123,14 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:+XAPL=BlueALSA,6 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/at.c:161: AT message: GET: command:+BTRH, value:(null) src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/at.c:161: AT message: SET: command:+BIA, value:0,1,1,1,0,0,0,0 src/bluealsa: D: ../../src/ba-rfcomm.c:107: Sending AT message: RESP: command:(null), value:OK src/bluealsa: D: ../../src/dbus.c:59: Called: org.bluez.MediaEndpoint1.SelectConfiguration() on /org/bluez/hci0/A2DP/SBC/Source/1 src/bluealsa: D: ../../src/bluez.c:922: 
Signal: org.freedesktop.DBus.ObjectManager.InterfacesAdded()
src/bluealsa: D: ../../src/dbus.c:59: Called: org.bluez.MediaEndpoint1.SetConfiguration() on /org/bluez/hci0/A2DP/SBC/Source/1
src/bluealsa: D: ../../src/a2dp.c:723: Selected A2DP SBC bit-pool range: [2, 53]
src/bluealsa: D: ../../src/bluez.c:298: A2DP Source (SBC) configured for device 30:50:75:43:F4:AE
src/bluealsa: D: ../../src/bluez.c:301: Configuration: channels: 2, sampling: 48000
src/bluealsa: D: ../../src/bluez.c:1110: Signal: org.freedesktop.DBus.Properties.PropertiesChanged(): org.bluez.MediaTransport1: Delay
but I'm still having no luck with
% amixer -D bluealsa
ALSA lib ../../../src/asound/bluealsa-ctl.c:972:(_snd_ctl_bluealsa_open) Couldn't get BlueALSA PCM list: Rejected send message, 1 matched rules; type="method_call", sender=":1.202" (uid=1000 pid=25588 comm="amixer -D bluealsa ") interface="org.bluealsa.Manager1" member="GetPCMs" error name="(unset)" requested_reply="0" destination="org.bluealsa" (uid=0 pid=22707 comm="src/bluealsa ")
amixer: Mixer attach bluealsa error: No such device
"Rejected send message, 1 matched rules" is a message from the D-Bus daemon, which has its own ACL system (other than UNIX's DAC). You need a properly configured /etc/dbus-1/system.d/bluealsa.conf file (assuming that on your distro this config file is in that location). By default only root and users in the audio group are allowed to connect to the bluealsa service.
You're right ... things work fine for root, so it must be a permission error. However, my user is in the audio group, so I'm not sure what the next step is.
% groups hymie
root disk lp floppy dialout audio video cdrom netdev scanner vboxusers
=====
I (think I) figured it out. Even though my groups listing includes "audio", I wasn't listed in /etc/group in the "audio" group. Once I fixed that, things are working. Thank you for the help!
2025-04-01T04:54:43.421202
2023-04-03T13:42:06
1652097994
{ "authors": [ "jerryneedell", "lingmaple", "sandeepmistry" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13150", "repo": "ArmDeveloperEcosystem/lorawan-library-for-pico", "url": "https://github.com/ArmDeveloperEcosystem/lorawan-library-for-pico/issues/34" }
gharchive/issue
undefined reference to "make_timeout_time_ms" please
This can be fixed by including "pico/time.h" in src/lorawan.c:
diff --git a/src/lorawan.c b/src/lorawan.c
index 9e4cec3..dc24e60 100644
--- a/src/lorawan.c
+++ b/src/lorawan.c
@@ -28,6 +28,7 @@
 #include <string.h>
 #include "pico/lorawan.h"
+#include "pico/time.h"
 #include "board.h"
 #include "rtc-board.h"
@lingmaple did you get a chance to try @jerryneedell's suggestion above (https://github.com/ArmDeveloperEcosystem/lorawan-library-for-pico/issues/34#issuecomment-1494472699)?
2025-04-01T04:54:43.440764
2019-02-18T11:25:28
411429538
{ "authors": [ "AJunque9", "emiliocortina" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13151", "repo": "Arquisoft/dechat_en2b", "url": "https://github.com/Arquisoft/dechat_en2b/issues/13" }
gharchive/issue
Design UI/UX
Design the user interface of our application
Designed and uploaded the document containing the main page #21
Uploaded finished prototype for the chat view.
2025-04-01T04:54:43.549479
2022-03-11T17:13:03
1166662713
{ "authors": [ "AsjadOooO", "H-nta1", "Hellscap3d", "wolfff21123" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13152", "repo": "AsjadOooO/Zero-attacker", "url": "https://github.com/AsjadOooO/Zero-attacker/issues/20" }
gharchive/issue
AND I KNOW FILES, I CODE PY AS WELL.
Bro, it's not malware lol. PyInstaller uses hook methods to make the Python runtime run the code without installing Python xD
2025-04-01T04:54:43.616443
2018-09-03T07:54:26
356395071
{ "authors": [ "MarcelCutts", "grsabreu", "happylinks" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13153", "repo": "Astrocoders/reform", "url": "https://github.com/Astrocoders/reform/issues/57" }
gharchive/issue
Handle Validation onBlur
Hi, do you have any advice on how to handle validation onBlur with this library? I was thinking of a workaround: handling onBlur myself and then doing something like SetFocussedField("email"), and checking that before showing the validation message. But probably it could be done in a nicer way. I'm willing to make a PR if this is a feature that more people would like :)
The library itself does not yet provide an onBlur handler; we recommend bringing your own handler for now. That is interesting to have indeed, we didn't have this use case before (of keeping the focused field state). Probably something like?
<Form etc> ...({ focusedField: FormParams.fields, setFocusedField: FormParams.fields => unit, handleChange } => { }) </Form>
Yeah, I think focusedField could work. It would let the user decide themselves what they want to do with that info. My use case is only showing the validation message if there is an error and the user is already done with the input (not focused anymore). I'll see if I can set something up soon.
I just came across this exact use case also @happylinks! I also want my user to be able to lose focus of an email field before they're attacked by an error message, because their first keystroke will trigger that error regardless. My thought-solution was decoupling validation from onChange, but this way is probably more elegant.
This is now released in 5.1.1-beta.6. Closing this issue :)
2025-04-01T04:54:43.632915
2020-07-19T14:22:23
660864874
{ "authors": [ "hank-lenzi", "luan-dev", "matthewjcable", "mgramigna", "zeevosec" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13154", "repo": "Asymmetrik/node-fhir-server-core", "url": "https://github.com/Asymmetrik/node-fhir-server-core/issues/239" }
gharchive/issue
Has anyone begun work to implement support for 4.0.1? Especially for the CMS Final rule?
Do you want to request a feature, report a bug, or improve documentation? If you are reporting a bug: What is the current behavior? What is the expected behavior? What are the steps to reproduce? What OS are you using and what version of node.js and @asymmetrik/node-fhir-server-core are you running?
Not that I know of. Do you have documentation on the CMS Final rule I could check out?
High-level details are here: https://www.cms.gov/Regulations-and-Guidance/Guidance/Interoperability/index Links to specs are about 1/3 of the way down the page.
I would like to follow up on this. Is there any planned support for FHIR 4.0.1? No answer means this should eventually be forked, in order to support 4.0.1.
I opened https://github.com/Asymmetrik/node-fhir-server-core/pull/303 for this.
4.0.1 support has been added.
2025-04-01T04:54:43.651913
2021-04-01T14:02:16
848497499
{ "authors": [ "bahorn", "normansophie789" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13155", "repo": "Athons/wiki", "url": "https://github.com/Athons/wiki/pull/241" }
gharchive/pull-request
Adding Leadership Society's Enterprise Hackathon
I'm one of the organisers for Leadership Society's Enterprise Hackathon (based at Newcastle University), and we're hosting our Regional Enterprise Hackathon soon. Let me know if there's anything I need to change!
Everything looks fine, so I'll merge it in now!
2025-04-01T04:54:43.658639
2017-01-24T12:46:15
202808712
{ "authors": [ "AkenRoberts", "TriAnMan", "l3pp4rd", "svgrafov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13156", "repo": "Atlantic18/DoctrineExtensions", "url": "https://github.com/Atlantic18/DoctrineExtensions/pull/1742" }
gharchive/pull-request
Minor update of documentation for Translatable
Update Translatable documentation to warn users of the performance cap. https://github.com/Atlantic18/DoctrineExtensions/issues/1502#issuecomment-171011176 https://github.com/Atlantic18/DoctrineExtensions/issues/860#issuecomment-26154908
Well, there are many users who use it in production for larger projects. The point is that by default Translatable casts the foreign key to a string (which is not optimal if you have an integer). In that case the optimization is to override the mapping and change the field to integer. The documentation could mention instead that it may handle up to million-row tables optimally with certain optimizations, like using personal translations or overriding the mapping of the foreign key to prevent casting in the database.
@l3pp4rd not millions. 50k translation rows is enough to hit the performance cap mentioned in #1512.
Yes, but it mentions exactly that: you need to modify the type of the foreignKey to improve the performance. "Have fun with blazing speed of your DB queries. (In some cases we have accelerated our queries more than 500 times: 130 milliseconds instead of 72 seconds.)" So it is capable of handling a big row count if the foreignKey is of the correct type, in order to prevent casting. The documentation should mention that and point to that issue, in order to update the metadata for this column. If we said that it is capable of handling only small sets of data, that would not be true, because you can apply this optimization.
Hello, and thank you for your contribution. In the interest of spring cleaning and package modernization, I'm going through and closing old issues and pull requests that have had no recent activity, do not have associated tests, or have failing tests. If you are still experiencing this issue or wish to contribute an up-to-date pull request, please create a new issue. Thank you!
2025-04-01T04:54:43.715374
2024-07-23T15:11:21
2425464616
{ "authors": [ "Aton-Kish", "Etanarvazac" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13157", "repo": "Aton-Kish/your-reputation", "url": "https://github.com/Aton-Kish/your-reputation/issues/44" }
gharchive/issue
[Feature] 1.21 Compatibility?
Describe the solution you'd like: I am looking to see if this mod will be updated to 1.21.
From Minecraft 1.21+, I separated the mod project.
WTHIT
curseforge: https://www.curseforge.com/minecraft/mc-mods/your-reputation-wthit
modrinth: https://modrinth.com/mod/your-reputation-wthit
Jade
curseforge: https://www.curseforge.com/minecraft/mc-mods/your-reputation-jade
modrinth: https://modrinth.com/mod/your-reputation-jade
This is due to issues #33 #34.
I do apologize for seeing this only now. Thank you for the update!
2025-04-01T04:54:43.747067
2024-04-29T02:49:38
2268018697
{ "authors": [ "NickCulbertson", "vycoder" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13158", "repo": "AudioKit/AudioKit", "url": "https://github.com/AudioKit/AudioKit/issues/2914" }
gharchive/issue
Oscillator only plays for a short time then stops on a physical device
macOS Version(s) Used to Build: macOS 13 Ventura
Xcode Version(s): Xcode 14
Description
I've tried writing a very simple Oscillator that just plays a certain frequency. It's working on the emulator, but once I deployed it to my iPhone (12 mini, iOS version 17.4.1) it just plays for a short while and then stops (about 1 second). I've tried checking whether the Oscillator really started using isStarted, but it prints out true. Are there any configuration settings that I need to set up beforehand? I'm fairly new to AudioKit (and Swift in general); I was hoping someone could point me in the right direction. Here's my code:
public class HertzPlugin: CAPPlugin {
    let engine = AudioEngine()
    let oscillator = Oscillator()

    @objc func play(_ call: CAPPluginCall) {
        AudioKit.Settings.enableLogging = true
        let frequency = call.getFloat("value", 0)
        oscillator.stop()
        oscillator.frequency = frequency
        oscillator.amplitude = 1.0
        engine.output = oscillator
        do {
            try engine.start()
        } catch let err {
            print("error")
            Log(err)
        }
        print("playing...")
        oscillator.start()
        print("played \(oscillator.isStarted)") // always returns true
        call.resolve(["value": frequency])
    }

    @objc func stop(_ call: CAPPluginCall) {
        if (oscillator.isStarted == true) {
            oscillator.stop()
            engine.stop()
        }
        call.resolve()
    }
}
I'm running this using an M2 MacBook Air, macOS Sonoma 14.4.1.
Crash Logs, Screenshots or Other Attachments (if applicable): No response
Try out the Cookbook Oscillator example and see if it is giving you the same issue. https://github.com/AudioKit/Cookbook My guess is that you need to set the AVAudioSession category in your main app file like this:
init() {
    #if os(iOS)
    do {
        Settings.bufferLength = .medium
        try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(Settings.bufferLength.duration)
        try AVAudioSession.sharedInstance().setCategory(.playback, options: [.mixWithOthers, .allowBluetoothA2DP])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch let err {
        print(err)
    }
    #endif
}
That worked really well! Closing this issue. Thanks for the help! Is the Cookbook the official place to find all the necessary information regarding AudioKit? I'm very new to all of this, and I'm having trouble navigating the documentation. I don't even know how to run the Cookbook; any suggestions on where absolute beginners like me should start?
The Cookbook showcases most of the various pieces of AudioKit and how to implement them. I started learning AudioKit with the Cookbook. You can download the project from GitHub and run it like any other Xcode project (unless you're getting some weird compiler errors). You can also find some tutorials on YouTube that should help in going up the AudioKit learning curve.
2025-04-01T04:54:43.755952
2024-08-05T02:00:26
2447442120
{ "authors": [ "cadzchua", "nonnoxer", "yuurraa" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13159", "repo": "AudricY/dscc-hub", "url": "https://github.com/AudricY/dscc-hub/issues/2" }
gharchive/issue
Remove PDFs The amount of PDFs greatly bloats the size of the repository unnecessarily, especially since these resources can be accessed from NTULearn anyway. ok nigga well played
2025-04-01T04:54:43.762033
2018-05-12T20:00:31
322538837
{ "authors": [ "jessepollak", "joeykrug" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13160", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/1574" }
gharchive/issue
Feedback
Hey all -- wasn't sure where the best place to put this was, so figured I'd just open up an issue! I spent 30 minutes playing around with Augur today and wanted to share some high-level feedback from a first-time user. First thing I want to say is: wow! This has come a long way. The UI is beautiful, the sections make sense, the interface feels snappy and exciting. Amazing work! Some smaller nits:
The distinction between Buy/Sell is confusing. On this trade page, I am very confused about how to bet "No", i.e. I do not think they will win the championship. I executed a "Sell" (though I don't think I owned anything?) and it seems to have executed. Does this mean I bet against it? I also issued a "Buy" and the order is still pending? I'm pretty confused by what's going on here. It would be amazing if in the UI there was some Augur 101 stuff that walked me through the basics of how I invest on either side of a prediction.
I am confused by this market. How do I actually pick a degree that I think it will be? I can place a "Buy" or a "Sell", but I can't figure out what this actually means. This is similar feedback to above, but since it's not a binary prediction, I think I require even more UI/UX help.
When I place an order, there's no status or indicator that my transaction is pending. It would be amazing if the little prompt that shows up bottom right also included a link to something like Etherscan where I could track status. Also, it would be great if pending transactions could show up in the My Positions section with a pending status.
On the Portfolio page, I'm not sure what this chart is. It was there before I had any positions; is it the entire market? Fake data?
Anyways, just wanted to dump out thoughts in case they were useful. Really excited to see this continuing to mature 👏 🎉 💛 cc @joeykrug
Thanks! This is [one of] the right place[s]!
Yep, it means you short sold / bet against it. It is kinda confusing; there'll be a tutorial walkthrough explaining it all in the UI eventually. Essentially it's a trading platform first, then there'll be a simpler UI skin that makes it easier/simpler to use for betting, which should also help with this issue. Did your buy order ever finish pending? Or does it instead say "open order"? [This is like an open unfilled order on GDAX, if that's what it says.]
Ah yeah, so you don't actually pick an exact degree it'll be; what you do is buy/sell or go long/short the market at a given price. So for instance, say the current prediction is 0.27 degrees and you think it'll get up to 20 degrees: just as in a binary market you would buy up to, say, 60% odds or a price of 0.60 and you'd enter 0.60 for your price in the buy box, here you'd enter 20, and the amount you want to buy as well.
I agree, good call x2
Yeah, that's a bug I've reported as well, it's fake data haha 🙏
2025-04-01T04:54:43.765111
2019-04-18T18:59:47
434912810
{ "authors": [ "Chwy5", "andrewdamelio", "matt-bullock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13161", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/1730" }
gharchive/issue
Scroll bar issue - multiple scrollbars
@chwy gets 2 individual scrollbars showing up in trade history on Windows.
Could you supply a screenshot?
I get two types of scroll bar: 1 default, 1 active. Can we discuss a better solution for this?
Closing - this is the default behavior in Chrome. You get that one type of bar when scrolling, BUT if you move and hold your mouse inside the SCROLL BAR path, Chrome will draw the scroll bar margins.
2025-04-01T04:54:43.768311
2019-11-15T17:40:38
523602102
{ "authors": [ "bconfortin", "matt-bullock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13162", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/4797" }
gharchive/issue
Design QA: Chat - Chat screen
Design: https://www.figma.com/file/fLWVwmanAwetVZbujQquEi/Market-Page?node-id=194%3A3832
[x] Centre align text for '205 Online'
[x] Adjust height of chat input box to match design. Currently it looks small, leaving a large gap at the bottom
[x] Update username text style
[x] Check user's messages are using colour text/primary
[x] 'Type your message' text. Make colour text/secondary at 75% opacity
[x] Active input text should be colour text/primary at 100% opacity
[x] Check type style for input - might need updating to larger font size
Oops
2025-04-01T04:54:43.770649
2019-12-13T16:58:38
537658950
{ "authors": [ "cillianhunter0", "matt-bullock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13163", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/5238" }
gharchive/issue
No alerts confirming REP migration
There's currently no confirmation alert when the REP is migrated to V2. Nothing happens...
Ran it by @Chwy5: show the same spinner proposed for notifications when a transaction is being processed, i.e. show a spinning icon when the migration transaction is being processed.
Shows up in the bell dropdown but not as a toast. Is a toast required?
+1, we should enable the toast for this. It's an important transaction; we should make the confirmation state as visible as possible.
Toast and bell are now there.
2025-04-01T04:54:43.776179
2020-05-12T21:27:54
616972186
{ "authors": [ "Chwy5", "matt-bullock" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13164", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/7745" }
gharchive/issue
Eth Reserve Top off messaging - See designs in comment
Add the Dai equivalent of the ETH reserve balance to the current tooltip, like "Your total funds does not include the Eth reserve balance of $7.94".
Inform the user when their ETH reserves get topped off.
[x] bell alert
[x] order form ETH reserve top-off
Edit copy in ETH reserve tooltip in account:
[x] tooltip: Augur runs on a peer-to-peer network which requires transaction fees that are paid in ETH. These fees go entirely to the network and Augur doesn't collect any of these fees. If your account balance exceeds $40, a portion of this equivalent to 0.04 ETH will be held in your ETH reserve to cover transaction fees, resulting in cheaper transaction fees. So long as your available account balance remains over $40 Dai, your ETH reserve will replenish automatically. Your ETH reserve can be easily cashed out at any time using the withdraw button in the transactions section of your account summary.
[x] Total Funds tooltip
Design for order form message
Bell & Toast message
In the 3rd paragraph of the tooltip, "As long as your available account balance remains over $40 Dai, your ETH reserve will automatically be replenish." needs an "ed" on the end of "replenish"; it should be "replenished".
Checked. Closing
2025-04-01T04:54:43.777853
2020-06-14T11:47:02
638336300
{ "authors": [ "Chwy5" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13165", "repo": "AugurProject/augur", "url": "https://github.com/AugurProject/augur/issues/8038" }
gharchive/issue
No Contest Outcome + Rules - Sports templates
Change all of the ones listed below (including mentions in the rules) to No Contest, with the exception of NFL (that would change to Tie/No Contest):
No Winner
Tie/No Winner (For NFL moneyline only)
Unofficial Game/Cancelled
No Winner/Event Cancelled
updated - closing
2025-04-01T04:54:43.781055
2023-10-26T11:11:21
1963279335
{ "authors": [ "Aurion72", "TokyConstellation" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13166", "repo": "Aurion72/nuxt-rating", "url": "https://github.com/Aurion72/nuxt-rating/issues/7" }
gharchive/issue
rating value 0
What if I want a rating value of 0 when I click?
Hello, it is impossible to interact with a click on the component to send 0 as the value. And if you click outside the component, it doesn't have access to this information and can't deduce that the value should be 0. You need to manage this outside the component (set the default value to 0 and/or add a button to reset/apply the value 0 to the component). In most cases, you don't want to have 0 in the component, but if your case requires it, you'll have to manage it in your project ;)
2025-04-01T04:54:43.843688
2022-07-04T21:26:16
1293577657
{ "authors": [ "Auties00", "danillo10" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13167", "repo": "Auties00/WhatsappWeb4j", "url": "https://github.com/Auties00/WhatsappWeb4j/issues/154" }
gharchive/issue
How to use it?
How do I use this library?
The readme is probably a good hint.
2025-04-01T04:54:43.848543
2015-10-31T08:34:04
114398209
{ "authors": [ "moodmosaic", "ploeh" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13168", "repo": "AutoFixture/AutoFixture", "url": "https://github.com/AutoFixture/AutoFixture/issues/458" }
gharchive/issue
Make some InlineAutoDataAttribute constructor overloads protected
InlineAutoDataAttribute has a constructor overload that takes an AutoDataAttribute as an argument. As #457 implicitly points out, there's no way to use this constructor overload when using the [InlineAutoData] attribute to annotate tests. This constructor overload only exists to enable a derived class to supply a custom, derived instance of AutoDataAttribute, thereby customizing the behaviour of the derived attribute. Since this constructor overload only exists to enable inheritance, we should consider changing its accessibility level to protected. This is a breaking change, so can only be done in AutoFixture 4. Note that this ought to be done for both AutoFixture.Xunit and AutoFixture.Xunit2.
I'm going to take a look at this, and (hopefully) open a Pull Request over the next couple of days.
I think we can now close this, since #462 is merged.
Addressed by 5ac109164aa212c5963d620d3a4a166ebbe1096f
2025-04-01T04:54:43.855232
2023-04-07T03:19:54
1658307925
{ "authors": [ "AutoViML", "GDGauravDutta" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13169", "repo": "AutoViML/pandas_dq", "url": "https://github.com/AutoViML/pandas_dq/issues/1" }
gharchive/issue
DQ Issue: unable to identify ID columns that need to be removed in dq_report & Fix_dq
The algorithm needs to remove ID-like features.
Hi @GDGauravDutta 👍 Thanks for trying out pandas_dq. I have made some bug fixes already. Can you please upgrade and then check?
pip install pandas_dq --ignore-installed --no-cache-dir
If you see version 1.7 or higher, then you have the right version. Thanks, AutoViML team
It now removes ID-like features.
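Editor's illustration (not from the original issue, and not pandas_dq's actual implementation): a common heuristic for the requested behaviour is to flag columns that are unique per row or named like identifiers, then drop them before modeling. A rough Python sketch under those assumptions:

```python
import pandas as pd

def find_id_like_columns(df: pd.DataFrame, threshold: float = 0.95) -> list[str]:
    """Flag columns whose values are (nearly) unique per row, or whose
    names look like identifiers, so they can be dropped before modeling."""
    id_like = []
    for col in df.columns:
        unique_ratio = df[col].nunique(dropna=True) / max(len(df), 1)
        name_hint = col.lower() in {"id", "uid", "uuid"} or col.lower().endswith("_id")
        if unique_ratio >= threshold or name_hint:
            id_like.append(col)
    return id_like

df = pd.DataFrame({
    "customer_id": range(1000),                      # unique per row -> ID-like
    "age": [25 + i % 40 for i in range(1000)],       # low cardinality -> keep
    "churned": [i % 2 for i in range(1000)],         # binary target -> keep
})
print(find_id_like_columns(df))  # ['customer_id']
```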
2025-04-01T04:54:44.105600
2019-07-15T18:00:40
468260195
{ "authors": [ "dchymko" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13170", "repo": "Automattic/vip", "url": "https://github.com/Automattic/vip/issues/416" }
gharchive/issue
Handle CTRL-C gracefully in WP CLI
Description
We need to handle when a user cancels a long-running WP CLI command:
- Forward the Ctrl-C character (x03) to the command runner
- Call an endpoint in the API to mark the command as cancelled (and do any other cleanup)
There are a few cases we need to test to make sure that the command cancel logic is run correctly on Ctrl-C/SIGINT:
- Standard commands
- Interactive commands (such as user create --prompt, help)
- Pseudo shell commands (wp shell)
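Editor's illustration (not from the original issue; the VIP CLI itself is a Node.js tool talking to a remote command runner): a minimal, hypothetical Python sketch of the cancellation flow described above, i.e. forward SIGINT to the child process and then run cleanup such as an API call marking the command cancelled.

```python
import signal
import subprocess
import sys

def run_with_cancel(cmd, on_cancel):
    """Run a long-lived child command; on Ctrl-C, forward SIGINT to the
    child and fire a cleanup callback (e.g. mark the command cancelled)."""
    child = subprocess.Popen(cmd)

    def handle_sigint(signum, frame):
        # In a real CLI the child may live in its own process group,
        # so forward the signal explicitly instead of relying on the terminal.
        child.send_signal(signal.SIGINT)
        on_cancel()

    signal.signal(signal.SIGINT, handle_sigint)
    return child.wait()

if __name__ == "__main__":
    code = run_with_cancel(
        ["sleep", "600"],  # stand-in for a long-running WP CLI command (Unix only)
        on_cancel=lambda: print("marking command as cancelled...", file=sys.stderr),
    )
    sys.exit(code)
```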
2025-04-01T04:54:44.318269
2017-12-04T12:36:41
278980994
{ "authors": [ "4brunu", "vldalx" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13171", "repo": "AutosoftDMS/SignalR-Swift", "url": "https://github.com/AutosoftDMS/SignalR-Swift/issues/24" }
gharchive/issue
SignalR throws an error when reconnecting, even if it reconnects successfully
Hi, I have an issue: when I connect to the server with SignalR, lock the screen, and unlock it, HubConnection.error is fired with the following error "NSURLErrorDomain" - code:<PHONE_NUMBER>, but the library is able to reconnect successfully. The library https://github.com/DyKnow/SignalR-ObjC doesn't have the same behaviour. When the connection is lost, it tries to reconnect, and only if the reconnect attempt fails does it throw an error, and that seems like the right behaviour to me. The problem with this is that when an error happens, I show a different screen to the user, but actually this isn't an error, only a temporary loss of connectivity, and I don't have a reliable way to differentiate this error from the others. Is this a bug? Here is the log it prints when unlocking the screen, in case it helps.
[] nw_socket_get_input_frames recvmsg(fd 4, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 9, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 11, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 12, 1024 bytes): [57] Socket is not connected
[] nw_socket_get_input_frames recvmsg(fd 13, 1024 bytes): [57] Socket is not connected
[] nw_endpoint_handler_add_write_request [2.1 <IP_ADDRESS>:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [5.1 <IP_ADDRESS>:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] nw_endpoint_handler_add_write_request [7.1 <IP_ADDRESS>:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [6.1 <IP_ADDRESS>:443 failed socket-flow (satisfied)] cannot accept write requests
[] nw_endpoint_handler_add_write_request [4.1 <IP_ADDRESS>:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
I've tried to reproduce the issue described above, but no such luck. The HubConnection.error callback is not invoked after the screen has been unlocked. I don't know what I'm doing wrong. By the way, there is a workaround - you can handle the "lock screen" / "unlock screen" events manually:
private func subscribeAppLifeCycleNotifications() {
    let center = NotificationCenter.default
    center.addObserver(self, selector: #selector(connect), name: .UIApplicationWillEnterForeground, object: nil)
    center.addObserver(self, selector: #selector(disconnect), name: .UIApplicationDidEnterBackground, object: nil)
}

@objc private func connect() {
    // your code to connect
}

@objc private func disconnect() {
    // your code to disconnect
}
2025-04-01T04:54:44.320412
2023-11-13T19:47:46
1991361817
{ "authors": [ "LauRoxx", "brycehowitson" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13172", "repo": "Availity/element", "url": "https://github.com/Availity/element/issues/138" }
gharchive/issue
CSS variables contain unsupported values
none is being produced as a CSS value in the Design Tokens variables. This value is unsupported on multiple attributes and should be 0 (assuming that none is the default/unchanged value in Figma). Current examples:
--*-letter-spacing: none
--*-paragraph-spacing: none
text-case (CSS text-transform) has a none value. This is valid but should probably be initial instead.
Gotcha. We'll double back with UX to confirm 0 is what is desired for those.
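Editor's illustration (not from the original issue; the real token export pipeline is not shown and the property names here are hypothetical): the requested fix amounts to normalizing Figma's "none" placeholder before emitting CSS variables, mapping it to 0 for spacing-like properties and to initial for text-transform. A rough Python sketch:

```python
SPACING_LIKE = ("letter-spacing", "paragraph-spacing")

def normalize_token(prop: str, value: str) -> str:
    """Map Figma's 'none' placeholder to values CSS actually accepts."""
    if value != "none":
        return value
    if prop.endswith(SPACING_LIKE):
        return "0"
    if prop.endswith("text-transform"):
        return "initial"
    return value

print(normalize_token("--body-letter-spacing", "none"))     # 0
print(normalize_token("--heading-text-transform", "none"))  # initial
```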
2025-04-01T04:54:44.327813
2024-06-19T05:39:55
2361365164
{ "authors": [ "Sewer56", "maxkatz6" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13173", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/16062" }
gharchive/issue
D3D11CreateDevice+Hardware crashes and fallback doesn't work
@Sewer56 looks like this PR breaks Angle/DX on my machine:
[OpenGL]Unable to initialize ANGLE-based rendering with DirectX11 : 'System.ArgumentException: Value does not fall within the expected range.
at Avalonia.Win32.DirectX.DirectXUnmanagedMethods.D3D11CreateDevice(IntPtr adapter, D3D_DRIVER_TYPE DriverType, IntPtr Software, UInt32 Flags, D3D_FEATURE_LEVEL[] pFeatureLevels, UInt32 FeatureLevels, UInt32 SDKVersion, IntPtr& ppDevice, D3D_FEATURE_LEVEL& pFeatureLevel, IntPtr* ppImmediateContext)
at Avalonia.Win32.OpenGl.Angle.AngleWin32EglDisplay.CreateD3D11Device(IDXGIAdapter1 chosenAdapter, D3D_FEATURE_LEVEL[] featureLevels) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\AngleWin32EglDisplay.cs:line 175
at Avalonia.Win32.OpenGl.Angle.AngleWin32EglDisplay.CreateD3D11Display(Win32AngleEglInterface egl) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\AngleWin32EglDisplay.cs:line 106
at Avalonia.Win32.OpenGl.Angle.D3D11AngleWin32PlatformGraphics.TryCreate(Win32AngleEglInterface egl) in E:\Work\Projects\AvaloniaCopy1\src\Windows\Avalonia.Win32\OpenGl\Angle\D3D11AngleWin32PlatformGraphics.cs:line 77'
[OpenGL]Unknown requested PlatformApi 'DirectX11'
There are a couple of problems:
1. D3D11CreateDevice+HARDWARE fails here with an invalid argument. Don't really know why.
2. D3D11CreateDevice+SOFTWARE fails too, but UNKNOWN works fine.
3. The D3D11CreateDevice return type should be "int" (non-void), and PreserveSig = false should be removed, if you want to implement fallback logic.
4. Before each fallback, there should be a warning log message, so developers can find this problem more easily. Something like: Logger.TryGet(LogEventLevel.Warning, LogArea.Win32Platform)?.Log(null, "Unable to create hardware ID3D11Device, error code = {ErrorCode}", $"0x{result:X}")
Originally posted by @maxkatz6 in https://github.com/AvaloniaUI/Avalonia/issues/16035#issuecomment-2177788039
This is technically fixed by https://github.com/AvaloniaUI/Avalonia/pull/16063
And the feedback was implemented in: https://github.com/Sewer56/Avalonia/commit/ec04ffacc56a5484c10ece72e9f1685d606bc9c0
As per https://github.com/AvaloniaUI/Avalonia/pull/16035#issuecomment-2177884713, I'm not sure what the current course of action should be.
2025-04-01T04:54:44.333586
2024-07-30T11:21:48
2437591212
{ "authors": [ "genment", "mterwoord" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13174", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/16537" }
gharchive/issue
FontWeight.ExtraBold is rendered heavier since 11.1 Describe the bug I'm using FontWeight.ExtraBold in an application. After upgrading from 11.0.10 to 11.1.1, the text is rendered way heavier, and is harder to read. To Reproduce See attached solution. Set all avalonia package versions to 11.0.10 to see the old behavior. Upgrade to 11.1.1 or (at this moment) latest preview version to see the changed rendering. AvaloniaApplication1Changed.zip Expected behavior I didn't expect any changes, or I would have expected a breaking change notice in the release notes somewhere. Old behavior: New behavior: Avalonia version 11.1.1 OS Windows Additional context No response I'm using Avalonia version 11.1.3 and encountered a similar issue. I'm not sure if it's the same bug. From the image, it seems that the font weights 600 and 700 are swapped, and the same issue appears between 800 and 900 as well. <Grid RowDefinitions="Auto Auto Auto Auto Auto Auto Auto Auto Auto"> <Grid.Styles> <Style Selector="TextBlock"> <Setter Property="FontFamily" Value="Arial"></Setter> <Setter Property="FontSize" Value="28"></Setter> </Style> </Grid.Styles> <TextBlock Grid.Row="0" Grid.Column="0" FontWeight="100">FontWeight 100</TextBlock> <TextBlock Grid.Row="1" Grid.Column="0" FontWeight="200">FontWeight 200</TextBlock> <TextBlock Grid.Row="2" Grid.Column="0" FontWeight="300">FontWeight 300</TextBlock> <TextBlock Grid.Row="3" Grid.Column="0" FontWeight="400">FontWeight 400</TextBlock> <TextBlock Grid.Row="4" Grid.Column="0" FontWeight="500">FontWeight 500</TextBlock> <TextBlock Grid.Row="5" Grid.Column="0" FontWeight="600">FontWeight 600</TextBlock> <TextBlock Grid.Row="6" Grid.Column="0" FontWeight="700">FontWeight 700</TextBlock> <TextBlock Grid.Row="7" Grid.Column="0" FontWeight="800">FontWeight 800</TextBlock> <TextBlock Grid.Row="8" Grid.Column="0" FontWeight="900">FontWeight 900</TextBlock> </Grid>
2025-04-01T04:54:44.338365
2024-12-16T13:27:09
2742395908
{ "authors": [ "giacarrea", "timunie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13175", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/17785" }
gharchive/issue
Sorting issues on datagrid with user control as data template
Describe the bug
I have a datagrid with templated columns, one of which is a user control. That user control doesn't seem to sort properly with the rest of the row: Sample project demonstrating the issue: Avalonia.Samples-main - gridbug.zip
To Reproduce
From the sample project, fiddle a few times with the tag column header sorting.
Expected behavior
No response
Avalonia version: 11.2.2
OS: Windows
Additional context
No response
The issue seems to stem from the fact that I'm adding the textboxes programmatically in the TagManagerUserControl UC. The column sorts correctly when using ItemsControl. Avalonia.Samples-main - gridbug.zip [^Demo project using itemscontrol]
Why do you need to add controls programmatically? I highly suggest you use DataTemplates. If not, add them at least in an OnDataContextChanged override to ensure the correct DataContext is available.
2025-04-01T04:54:44.341026
2019-10-11T10:55:20
505782526
{ "authors": [ "aguahombre", "grokys" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13176", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/3098" }
gharchive/issue
Carousel gestures
v0.8.999
I'm working on adding a swipe gesture to the Carousel control, but I have discovered that the control overrides OnKeyDown and OnPointerPressed:
/// <inheritdoc/>
protected override void OnKeyDown(KeyEventArgs e)
{
    // Ignore key presses.
}

/// <inheritdoc/>
protected override void OnPointerPressed(PointerPressedEventArgs e)
{
    // Ignore pointer presses.
}
The OnPointerPressed override prevents gestures from working on the Carousel control. Is there a good reason why these methods are overridden like this?
I can't remember 100%, but I think it's to prevent the default keyboard behavior from SelectingItemsControl from happening. Could be that we should either:
- Move the keyboard behavior to a derived class. This would mean that keyboard handling would be implemented in ListBox, ComboBox etc. separately
- Simply set e.Handled in these events in Carousel
2025-04-01T04:54:44.346634
2021-06-25T14:04:01
930206197
{ "authors": [ "BAndysc", "Mikolaytis", "grokys", "iMonZ", "vmelnikov" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13177", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/6144" }
gharchive/issue
Clipping Geometry on Apple M1
Geometry clipping does not work correctly on Apple M1. I think something is wrong with SkiaGpuRenderTarget. Everything is ok if With(new AvaloniaNativePlatformOptions { UseGpu = false }) is added in Program.cs.
We have the same issue. It hurts. @vmelnikov thanks for a solution!
I think an update of the SkiaSharp skia version up to version 90+ (for example) will fix this... The current version is 80. It was published before the M1 Mac release AFAIK. A later skia version may fix this issue. Let's wait for a new release from @mattleibow
@Mikolaytis Sadly the latest version, 2.88.0-preview.127 on .net6.0, native M1, doesn't fix the problem.
@BAndysc wow, thanks a lot for the pretty useful information. Any thoughts on what can fix it? I know there are two options for clipping in skia (antialiased and basic). Maybe let's try to switch this option and check it out?
Any news on that?
Investigating this now. Minimal repro:
<Border Background="Yellow" Width="200" Height="200" ClipToBounds="True">
  <Image Height="48" Margin="0,200,0,0">
    <Image.Source>
      <DrawingImage>
        <GeometryDrawing Brush="Red" Geometry="F1M14.707,4.707L6,13.414 1.293,8.707 2.707,7.293 6,10.586 13.293,3.293z" />
      </DrawingImage>
    </Image.Source>
  </Image>
Output on M1 Mac:
Definitely looks to be a Skia issue. If I add the following code at this line: https://github.com/AvaloniaUI/Avalonia/blob/master/src/Skia/Avalonia.Skia/DrawingContextImpl.cs#L185
Canvas.Clear(SKColors.Gold);
(i.e. immediately before drawing the geometry) I get the following output. You can see that the .Clear() call uses the correct clip but the geometry is just ignoring it:
2025-04-01T04:54:44.355858
2022-02-18T17:07:43
1143426618
{ "authors": [ "fitdev", "kant2002", "timunie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13178", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/issues/7644" }
gharchive/issue
Rare exception in RelayCommand
Describe the bug
Sometimes RelayCommand crashes the application with the following stack trace.
Unhandled Exception: System.InvalidOperationException: Call from invalid thread
at Avalonia.Threading.Dispatcher.VerifyAccess() + 0x4c
at Avalonia.AvaloniaObject.GetValue[T](StyledPropertyBase`1) + 0x2e
at Avalonia.Controls.MenuItem.CanExecuteChanged(Object, EventArgs) + 0x30
at System.Reactive.AnonymousSafeObserver`1.OnNext(T) + 0x28
at System.Reactive.Subjects.FastImmediateObserver`1.EnsureActive(Int32) + 0x334
at System.Reactive.Subjects.ReplaySubject`1.ReplayBase.OnNext(T) + 0xc0
at System.Reactive.Linq.ObservableImpl.CombineLatest`3._.SecondObserver.OnNext(TSecond) + 0x7d
at System.Reactive.Subjects.FastImmediateObserver`1.EnsureActive(Int32) + 0x334
at System.Reactive.Subjects.ReplaySubject`1.ReplayBase.OnNext(T) + 0xc0
at System.Reactive.SafeObserver`1.WrappingSafeObserver.OnNext(TSource) + 0x2b
at System.Reactive.ObserveOnObserverLongRunning`1.Drain() + 0xa6
at AvaloniaCoreRTDemo!<BaseAddress>+0x1693985
at AvaloniaCoreRTDemo!<BaseAddress>+0x1693d69
To Reproduce
I do not have reliable repro steps. This happens occasionally and is non-deterministic. It seems to be easier to repro on the first or maybe second launch, and then it cannot be reproduced.
Clone https://github.com/teobugslayer/AvaloniaCoreRTDemo on Linux
Run the application using dotnet run
Enter text in the text box
Open the Help menu
Press Exit. Crash! Boom! Bang! happens here.
Expected behavior
Reliably not crash the application.
Desktop (please complete the following information):
OS: Linux Gentoo (but seems to be unrelated)
Version 0.10.8
Additional context
Add any other context about the problem here.
Any updates on this? I am getting a very similar thing with Avalonia 11, whenever I call CanExecuteChanged?.Invoke(this, EventArgs.Empty) on my implementation of ICommand. Though in my case it calls into Avalonia.Controls.dll!Avalonia.Controls.Button.CanExecuteChanged(object sender, System.EventArgs e) and throws at Avalonia.Base.dll!Avalonia.Threading.Dispatcher.VerifyAccess.__ThrowVerifyAccess|16_0(). Shouldn't Avalonia take care of automatically delegating all such calls to the UI thread by queuing them up, instead of trying to execute on the calling thread?
No, you need to use Dispatcher.UIThread.Post if you want to invalidate the command from an async task or similar.
"No, you need to use Dispatcher.UIThread.Post" -- is this true of raising the CollectionChanged event too? Or is this only particular to ICommand?
All events that should be consumed by the UI.
If so, we can close this as by design.
"All events that should be consumed by the UI." -- CollectionChanged and also PropertyChanged.
Thanks for a quick response. This is rather unfortunate, and I am not sure it is correct. How would any "Observable" Collection (like the built-in System.Collections.ObjectModel.ObservableCollection, or any custom one) know anything at all about the UI thread or Avalonia? They wouldn't - in fact ObservableCollection simply raises an event whenever it changes, and it is up to the event subscriber, i.e. the Avalonia ListBox for example, to properly process the event on the correct thread. Does it then follow that all collection or observable object manipulations that could raise either CollectionChanged or PropertyChanged events subscribed to by Avalonia would all have to be done on the UI thread? This approach could easily exhaust the UI thread. Wouldn't it be better if Avalonia on some lower level would automatically dispatch processing of such events to the UI thread? Or am I misunderstanding something?
That's why .Add or .Remove should only happen on the UI thread. Same for other UI libs like WPF. If you look into async collections, I can say that DynamicData (shipped with ReactiveUI) can handle this via .ObserveOn: https://www.reactiveui.net/docs/handbook/collections/
2025-04-01T04:54:44.362248
2023-05-17T14:05:21
1713994323
{ "authors": [ "avaloniaui-team", "grokys", "workgroupengineering" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13179", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/pull/11418" }
gharchive/pull-request
Fixes Issue #6263 What does the pull request do? What is the current behavior? What is the updated/expected behavior with this PR? How was the solution implemented (if it's not obvious)? Checklist [ ] Added unit tests (if possible)? [ ] Added XML documentation to any related classes? [ ] Consider submitting a PR to https://github.com/AvaloniaUI/Documentation with user documentation Breaking changes Obsoletions / Deprecations Fixed issues Fixes issue #6263 You can test this PR using the following package version. 11.0.999-cibuild0034924-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID] You can test this PR using the following package version. 11.0.999-cibuild0035053-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID] You can test this PR using the following package version. 11.0.999-cibuild0035206-beta. (feed url: https://pkgs.dev.azure.com/AvaloniaUI/AvaloniaUI/_packaging/avalonia-all/nuget/v3/index.json) [PRBUILDID] Just encountered a similar problem in #11626, which I think this PR should also fix, but I'm kinda not sure about always scheduling the scroll on the dispatcher, though it might not be a problem in practice. Need a little time to think about potential fixes.
2025-04-01T04:54:44.375371
2020-04-26T14:16:49
607023156
{ "authors": [ "Gillibald", "MarchingCube", "Sorien", "danwalmsley" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13180", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/pull/3833" }
gharchive/pull-request
Allow for showing child windows What does the pull request do? Adds a concept and implementation of window ownership. What is the current behavior? WindowBase.Owner is simply ignored. What is the updated/expected behavior with this PR? You can use Window.ShowChild on desktop platforms to show child windows. On Win32 you can open child windows and closing the parent window will close all child windows. On X11 and OSX, while child windows are open you won't be able to close the parent window. This might get addressed in the future. How was the solution implemented (if it's not obvious)? Checklist [ ] Added unit tests (if possible)? [ ] Added XML documentation to any related classes? [ ] Consider submitting a PR to https://github.com/AvaloniaUI/Avaloniaui.net with user documentation Depends: https://github.com/AvaloniaUI/Avalonia/pull/3867 What happens when you try to show as a child with a parent that is not visible? Good question, I've checked WPF and this PR - both just show the child window normally. WPF has one quirk - you must show the parent window at least once before assigning it as Owner. WPF Specification: If an owner window is minimized, all its owned windows are minimized as well. If an owned window is minimized, its owner is not minimized. If an owner window is maximized, both the owner window and its owned windows are restored. An owner window can never cover an owned window. Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window. If you close an owner window, its owned windows are also closed. If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised. WPF Specification Win32: [x] If an owner window is minimized, all its owned windows are minimized as well. [ ] If an owned window is minimized, its owner is not minimized. [ ] If an owner window is maximized, both the owner window and its owned windows are restored. [x] An owner window can never cover an owned window. [ ] Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window. [ ] If you close an owner window, its owned windows are also closed. [ ] If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised. WPF Specification X11: [x] If an owner window is minimized, all its owned windows are minimized as well. [ ] If an owned window is minimized, its owner is not minimized. [ ] If an owner window is maximized, both the owner window and its owned windows are restored. [x] An owner window can never cover an owned window. [ ] Owned windows that were not opened using ShowDialog are not modal. The user can still interact with the owner window. [ ] If you close an owner window, its owned windows are also closed. [ ] If an owned window was opened by its owner window using Show, and the owner window is closed, the owned window's Closing event is not raised. This is actually more complicated to get right cross-plat than anticipated and our shutdown/window close might need more work to support everything. Might revisit in the future. I suggested moving the window bookkeeping into the windowing platform in the past. Maybe that makes things easier. What part of your work didn't work out well? Main issues were with closing parent windows and cancellation of such.
For us, we wanted to make sure that we invoke Closing handlers for dialogs. This causes issues since Window.Close will just dispose the platform implementation, which on X11 and Win32 will just destroy the current window. On X11 it will just leave child windows open. I've tried to come up with a solution for this, but in the end I am not sure what the correct behavior is here across different platforms. Thanks for your summary bringing this back to life! @danwalmsley @kekekeks Reworked this on top of the new API. @MarchingCube wow much simpler now :) will test in the morning. Final Tests: [ ] Windows [ ] OSX [ ] Linux
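For readers unfamiliar with the API being discussed, a small usage sketch (my own illustration, assuming the Window.ShowChild(owner) method proposed in this PR; later Avalonia versions expose the same idea as Show(owner)):

using Avalonia.Controls;

public static class OwnedWindowExample
{
    // 'ownerWindow' is assumed to be an already-shown main window.
    public static void ShowOwnedWindow(Window ownerWindow)
    {
        var child = new Window { Title = "Owned window" };
        // Per the PR: on Win32 closing 'ownerWindow' also closes 'child',
        // while on X11/macOS the owner cannot be closed while 'child' is open.
        child.ShowChild(ownerWindow);
    }
}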
2025-04-01T04:54:44.376627
2020-11-24T10:02:03
749559781
{ "authors": [ "jmacato", "workgroupengineering" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13181", "repo": "AvaloniaUI/Avalonia", "url": "https://github.com/AvaloniaUI/Avalonia/pull/5100" }
gharchive/pull-request
Add keyboard navigation to slider cc @danwalmsley Is there an e.t.a. for this PR? Is there an e.t.a. for this PR?
2025-04-01T04:54:44.448628
2021-07-13T12:24:37
943292866
{ "authors": [ "tanvi355" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13182", "repo": "Ayushparikh-code/Web-dev-mini-projects", "url": "https://github.com/Ayushparikh-code/Web-dev-mini-projects/pull/201" }
gharchive/pull-request
BMI calculator using Flask Description A simple BMI calculator. Checklist [x] I've been assigned an issue related to this PR. [x] I've used beautifiers. [x] I've added my Project's name and description to Index.md [x] I've made a README.md file for my Project. [x] The README.md file of my project contains Project title, Description, Use of project, Set up, Stack used and Output (Screenshots). Related Issues or Pull Requests number Fixes #184 Hey, @Ayushparikh-code please add the level label as well. Thanks.
2025-04-01T04:54:44.458966
2017-09-16T19:06:34
258251859
{ "authors": [ "codecov-io", "thonatos" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13183", "repo": "Azard/egg-oauth2-server", "url": "https://github.com/Azard/egg-oauth2-server/pull/9" }
gharchive/pull-request
feat: add authorize func Checklist [x] npm test passes [x] tests and/or benchmarks are included [x] documentation is changed or added [x] commit message follows commit guidelines Affected core subsystem(s) Description of change enable authorization_code mode enable options for authenticate/token method update document update to 2.0.1 Codecov Report Merging #9 into master will decrease coverage by 16.56%. The diff coverage is 11.76%. @@ Coverage Diff @@ ## master #9 +/- ## =========================================== - Coverage 92.75% 76.19% -16.57% =========================================== Files 3 3 Lines 69 84 +15 =========================================== Hits 64 64 - Misses 5 20 +15 Impacted Files Coverage Δ lib/server.js 75.38% <11.76%> (-22.62%) :arrow_down: Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update 8db48f9...c69d33a. Read the comment docs.
2025-04-01T04:54:44.462784
2024-01-17T18:40:41
2086768270
{ "authors": [ "critesjosh", "rahul-kothari" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13184", "repo": "AztecProtocol/aztec-packages", "url": "https://github.com/AztecProtocol/aztec-packages/pull/4110" }
gharchive/pull-request
chore(docs): Update lsp install instructions Remove the recommendation to install nargo for syntax highlighting. Add a recommendation to install the LSP extensions and point it to aztec-nargo. closes #4098 Merged Palla's PR on removing some tech debt around noir, which updated the doc and has caused a merge conflict. My recommendation would be to merge from master and once again search for nargo. Otherwise LGTM!!!! @critesjosh fixed your merge conflicts - and enabled auto merge!
2025-04-01T04:54:44.515802
2024-03-25T17:24:24
2206295381
{ "authors": [ "AztecBot", "spalladino" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13185", "repo": "AztecProtocol/aztec-packages", "url": "https://github.com/AztecProtocol/aztec-packages/pull/5431" }
gharchive/pull-request
feat: Throw by default when awaiting a tx that reverted A tx that returns from a tx.wait() is then guaranteed to have succeeded. Otherwise, we were having txs that failed silently, since we were not manually checking the receipt status after every action. Benchmark results No metrics with a significant change found. Detailed results All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write. This benchmark source data is available in JSON format on S3 here. Values are compared against data from master at commit 933145e8 and shown if the difference exceeds 1%. L2 block published to L1 Each column represents the number of txs on an L2 block published to L1. Metric 8 txs 32 txs 64 txs l1_rollup_calldata_size_in_bytes 676 676 676 l1_rollup_calldata_gas 6,400 6,364 (-1%) 6,424 l1_rollup_execution_gas 585,733 585,697 585,757 l2_block_processing_time_in_ms 1,286 (-1%) 4,573 (-3%) 8,634 (-6%) note_successful_decrypting_time_in_ms 174 (-6%) 519 (-1%) 976 (+1%) note_trial_decrypting_time_in_ms 77.6 (-14%) 33.4 (-16%) 112 (+18%) l2_block_building_time_in_ms 13,560 (-3%) 50,103 (-3%) 99,629 (-2%) l2_block_rollup_simulation_time_in_ms 7,757 (-3%) 27,227 (-3%) 54,004 (-2%) l2_block_public_tx_process_time_in_ms 5,783 (-2%) 22,828 (-2%) 45,532 (-2%) L2 chain processing Each column represents the number of blocks on the L2 chain where each block has 16 txs. Metric 5 blocks 10 blocks node_history_sync_time_in_ms 14,037 26,844 (+1%) note_history_successful_decrypting_time_in_ms 1,307 (+4%) 2,520 (+4%) note_history_trial_decrypting_time_in_ms 132 (+46%) 207 (+75%) node_database_size_in_bytes 18,616,400 34,869,328 pxe_database_size_in_bytes 29,859 59,414 Circuits stats Stats on running time and I/O sizes collected for every circuit run across all benchmarks. Circuit circuit_simulation_time_in_ms circuit_input_size_in_bytes circuit_output_size_in_bytes private-kernel-init 237 (+1%) 44,377 28,214 private-kernel-ordering 210 (+1%) 52,880 14,296 base-parity 4,559 (-4%) 128 311 root-parity 1,704 (+6%) 1,244 311 base-rollup 17,757 (-1%) 165,760 861 root-rollup 49.8 4,359 725 private-kernel-inner 308 (+1%) 73,794 28,214 public-kernel-app-logic 126 (+1%) 35,251 28,217 public-kernel-tail 170 (+1%) 40,928 28,217 merge-rollup 8.21 (-11%) 2,568 861 Tree insertion stats The duration to insert a fixed batch of leaves into each tree type. 
Metric 1 leaves 16 leaves 64 leaves 128 leaves 512 leaves 1024 leaves 2048 leaves 4096 leaves 32 leaves batch_insert_into_append_only_tree_16_depth_ms 9.99 15.8 (-1%) N/A N/A N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_16_depth_hash_count 16.8 31.6 N/A N/A N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_16_depth_hash_ms 0.582 0.489 (-1%) N/A N/A N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_32_depth_ms N/A N/A 47.3 (+2%) 71.5 (-1%) 230 448 (+1%) 836 (-3%) 1,665 (-3%) N/A batch_insert_into_append_only_tree_32_depth_hash_count N/A N/A 96.0 159 543 1,055 2,079 4,127 N/A batch_insert_into_append_only_tree_32_depth_hash_ms N/A N/A 0.484 (+1%) 0.441 (-1%) 0.420 0.419 (+1%) 0.399 (-3%) 0.400 (-3%) N/A batch_insert_into_indexed_tree_20_depth_ms N/A N/A 53.4 (-3%) 105 (-1%) 329 (-3%) 666 (+1%) 1,263 (-3%) 2,528 (-3%) N/A batch_insert_into_indexed_tree_20_depth_hash_count N/A N/A 104 207 691 1,363 2,707 5,395 N/A batch_insert_into_indexed_tree_20_depth_hash_ms N/A N/A 0.475 (-3%) 0.474 (-1%) 0.448 (-3%) 0.457 (+1%) 0.439 (-3%) 0.440 (-2%) N/A batch_insert_into_indexed_tree_40_depth_ms N/A N/A N/A N/A N/A N/A N/A N/A 61.2 batch_insert_into_indexed_tree_40_depth_hash_count N/A N/A N/A N/A N/A N/A N/A N/A 109 batch_insert_into_indexed_tree_40_depth_hash_ms N/A N/A N/A N/A N/A N/A N/A N/A 0.534 Miscellaneous Transaction sizes based on how many contract classes are registered in the tx. Metric 0 registered classes 1 registered classes tx_size_in_bytes 14,966 494,914 Transaction processing duration by data writes. Metric 0 new note hashes 1 new note hashes tx_pxe_processing_time_ms 2,377 (+1%) 1,407 (+1%) Metric 0 public data writes 1 public data writes tx_sequencer_processing_time_ms 14.4 (-12%) 732 (+1%)
2025-04-01T04:54:44.669834
2024-08-30T13:50:31
2497303271
{ "authors": [ "AztecBot", "TomAFrench" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13186", "repo": "AztecProtocol/aztec-packages", "url": "https://github.com/AztecProtocol/aztec-packages/pull/8295" }
gharchive/pull-request
chore: fix a bunch of generics issues in aztec-nr This PR removes a bunch of unnecessary generics from the aztec-nr codebase as this is becoming a hard error in new versions of nargo. Benchmark results Metrics with a significant change: avm_simulation_time_ms (Token:mint_public): 370 (+632%) avm_simulation_time_ms (Token:transfer_public): 35.0 (+54%) Detailed results All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write. This benchmark source data is available in JSON format on S3 here. Proof generation Each column represents the number of threads used in proof generation. Metric 1 threads 4 threads 16 threads 32 threads 64 threads proof_construction_time_sha256_ms 5,753 1,585 (+1%) 712 774 (+2%) 773 (-1%) proof_construction_time_sha256_30_ms 11,449 3,109 (+1%) 1,380 1,429 (-1%) 1,475 (+1%) proof_construction_time_sha256_100_ms 43,998 11,789 (-2%) 5,461 5,407 (-2%) 5,800 (+2%) proof_construction_time_poseidon_hash_ms 79.0 (+1%) 34.0 34.0 57.0 88.0 proof_construction_time_poseidon_hash_30_ms 1,528 421 202 (-1%) 229 267 (-1%) proof_construction_time_poseidon_hash_100_ms 5,637 1,509 672 735 (-1%) 750 (+1%) L2 block published to L1 Each column represents the number of txs on an L2 block published to L1. Metric 4 txs 8 txs 16 txs l1_rollup_calldata_size_in_bytes 4,356 7,876 14,884 l1_rollup_calldata_gas 50,208 93,008 178,144 l1_rollup_execution_gas 845,542 1,579,238 3,364,430 l2_block_processing_time_in_ms 258 (+2%) 457 (+4%) 849 (+6%) l2_block_building_time_in_ms 11,441 (+1%) 22,373 44,715 (+1%) l2_block_rollup_simulation_time_in_ms 11,441 (+1%) 22,373 44,715 (+1%) l2_block_public_tx_process_time_in_ms 9,767 (+1%) 20,653 42,974 (+1%) L2 chain processing Each column represents the number of blocks on the L2 chain where each block has 8 txs. Metric 3 blocks 5 blocks node_history_sync_time_in_ms 3,025 3,964 (+4%) node_database_size_in_bytes 12,640,336 16,695,376 pxe_database_size_in_bytes 16,254 26,813 Circuits stats Stats on running time and I/O sizes collected for every kernel circuit run across all benchmarks. 
Circuit simulation_time_in_ms witness_generation_time_in_ms input_size_in_bytes output_size_in_bytes proving_time_in_ms private-kernel-init 96.0 (+4%) 394 (-1%) 21,735 44,860 N/A private-kernel-inner 191 (+7%) 700 (-1%) 72,544 45,007 N/A private-kernel-reset-tiny 312 (-1%) 716 65,593 44,846 N/A private-kernel-tail 167 136 50,644 52,257 N/A base-parity 5.59 (-1%) N/A 160 96.0 N/A root-parity 35.8 (+1%) N/A 73,948 96.0 N/A base-rollup 2,974 (+1%) N/A 189,136 664 N/A block-root-rollup 41.5 N/A 58,205 2,448 N/A public-kernel-setup 84.6 N/A 105,085 71,222 N/A public-kernel-app-logic 97.6 N/A 104,911 71,222 N/A public-kernel-tail 861 N/A 390,582 16,414 N/A private-kernel-reset-small 310 N/A 66,341 45,629 N/A private-kernel-tail-to-public 668 632 (+3%) 455,400 1,825 N/A public-kernel-teardown 84.2 N/A 105,349 71,222 N/A merge-rollup 19.9 N/A 38,174 664 N/A undefined N/A N/A N/A N/A 78,438 (-1%) Stats on running time collected for app circuits Function input_size_in_bytes output_size_in_bytes witness_generation_time_in_ms ContractClassRegisterer:register 1,344 11,731 345 ContractInstanceDeployer:deploy 1,408 11,731 18.2 (-1%) MultiCallEntrypoint:entrypoint 1,920 11,731 407 (+1%) FeeJuice:deploy 1,376 11,731 391 (+1%) SchnorrAccount:constructor 1,312 11,731 74.2 (+1%) SchnorrAccount:entrypoint 2,336 11,731 394 Token:privately_mint_private_note 1,280 11,731 106 (+3%) FPC:fee_entrypoint_public 1,344 11,731 28.9 (+4%) Token:transfer 1,312 11,731 227 (-2%) Benchmarking:create_note 1,344 11,731 86.5 (-1%) SchnorrAccount:verify_private_authwit 1,280 11,731 27.7 Token:unshield 1,376 11,731 520 FPC:fee_entrypoint_private 1,376 11,731 690 (-1%) AVM Simulation Time to simulate various public functions in the AVM. Function time_ms bytecode_size_in_bytes FeeJuice:_increase_public_balance 56.7 (-2%) 8,174 FeeJuice:set_portal 11.2 (-6%) 4,055 Token:constructor 81.5 (-2%) 29,082 FPC:constructor 55.0 (-2%) 18,940 FeeJuice:mint_public 46.1 (+12%) 6,522 Token:mint_public :warning: 370 (+632%) 12,704 Token:assert_minter_and_mint 319 (-1%) 8,467 AuthRegistry:set_authorized 38.6 (-21%) 4,194 FPC:prepare_fee 235 (-2%) 6,747 Token:transfer_public :warning: 35.0 (+54%) 39,863 FPC:pay_refund 52.6 (-20%) 9,398 Benchmarking:increment_balance 1,224 7,263 Token:_increase_public_balance 42.7 (+2%) 8,686 FPC:pay_refund_with_shielded_rebate 63.3 (+2%) 9,881 Public DB Access Time to access various public DBs. Function time_ms get-nullifier-index 0.155 (-2%) Tree insertion stats The duration to insert a fixed batch of leaves into each tree type. 
Metric 1 leaves 16 leaves 64 leaves 128 leaves 256 leaves 512 leaves 1024 leaves batch_insert_into_append_only_tree_16_depth_ms 2.19 (+1%) 3.95 (+2%) N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_16_depth_hash_count 16.8 31.7 N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_16_depth_hash_ms 0.114 0.112 (+2%) N/A N/A N/A N/A N/A batch_insert_into_append_only_tree_32_depth_ms N/A N/A 11.3 (+2%) 17.7 (+2%) 31.0 (+1%) 59.9 (+3%) 117 (+4%) batch_insert_into_append_only_tree_32_depth_hash_count N/A N/A 95.9 159 287 543 1,055 batch_insert_into_append_only_tree_32_depth_hash_ms N/A N/A 0.108 (+1%) 0.103 (+2%) 0.101 (+1%) 0.103 (+2%) 0.105 (+4%) batch_insert_into_indexed_tree_20_depth_ms N/A N/A 14.8 (+2%) 25.9 (+2%) 44.3 (+2%) 87.9 (+8%) 164 (+2%) batch_insert_into_indexed_tree_20_depth_hash_count N/A N/A 109 207 355 691 1,363 batch_insert_into_indexed_tree_20_depth_hash_ms N/A N/A 0.113 (+2%) 0.104 (+1%) 0.108 (+2%) 0.109 (+8%) 0.103 batch_insert_into_indexed_tree_40_depth_ms N/A N/A 16.9 (+2%) N/A N/A N/A N/A batch_insert_into_indexed_tree_40_depth_hash_count N/A N/A 132 N/A N/A N/A N/A batch_insert_into_indexed_tree_40_depth_hash_ms N/A N/A 0.108 (+2%) N/A N/A N/A N/A Miscellaneous Transaction sizes based on how many contract classes are registered in the tx. Metric 0 registered classes 1 registered classes tx_size_in_bytes 64,779 668,997 Transaction size based on fee payment method | Metric | | | - | |
2025-04-01T04:54:44.678567
2017-01-24T00:58:45
202696061
{ "authors": [ "VitorX", "yfan183" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13187", "repo": "Azure-Samples/active-directory-dotnet-graphapi-web", "url": "https://github.com/Azure-Samples/active-directory-dotnet-graphapi-web/issues/49" }
gharchive/issue
Missing assembly reference or using directive This occurred for a lot of namespaces such as 'Group' and 'Azure'. I've restored the nuget packages and rebuilt the solutions, but it didn't solve the problem. Please get the latest version, it works well for me.
2025-04-01T04:54:44.688414
2023-10-11T03:43:59
1936737653
{ "authors": [ "damienpontifex", "jaredfholgate" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13188", "repo": "Azure-Samples/azure-devops-terraform-oidc-ci-cd", "url": "https://github.com/Azure-Samples/azure-devops-terraform-oidc-ci-cd/issues/3" }
gharchive/issue
Terraform only supports service principal authorization for azure This issue is for a: (mark with an x) - [x] bug report -> please search issues before submitting - [ ] feature request - [x] documentation issue or request - [ ] regression (a behavior that used to work and stopped in a new release) Minimal steps to reproduce Using the "JJ" task very similar to this commit https://github.com/Azure-Samples/azure-devops-terraform-oidc-ci-cd/commit/2fd009feb533d9eaf08470416cfbd223adbf1433 but we're using Terraform cloud as our backend state store Also set the runAzLogin: true parameter to the "JJ" tasks Set user assigned managed identity with Contributor role to resource group Set user assigned managed identity federation with Azure DevOps project Terraform that's using the AzureRM resource provider Any log messages given by the failure Log message failure on pipeline run ##[error]Terraform only supports service principal authorization for azure Expected/desired behavior Terraform operates with managed identity federated credentials OS and Version? Linux (ubuntu-latest azure devops) Versions Version 1.0.6 of the Charles Zipp extension (which seems to be equal to the JJ extension as linked repo in marketplace is the same) Terraform CLI 1.6.1 Terraform AzureRM 3.75.0 Mention any other details that might be useful Logs from terraform plan stage /opt/hostedtoolcache/terraform/1.6.1/x64/terraform version 2023-10-11T03:22:01.957Z [INFO] Terraform version: 1.6.1 2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/go-tfe v1.34.0 2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/hcl/v2 v2.18.1 2023-10-11T03:22:01.957Z [DEBUG] using github.com/hashicorp/terraform-svchost v0.1.1 2023-10-11T03:22:01.957Z [DEBUG] using github.com/zclconf/go-cty v1.14.1 2023-10-11T03:22:01.957Z [INFO] Go runtime version: go1.21.1 2023-10-11T03:22:01.957Z [INFO] CLI args: []string{"/opt/hostedtoolcache/terraform/1.6.1/x64/terraform", "version"} 2023-10-11T03:22:01.957Z [TRACE] Stdout is not a terminal 2023-10-11T03:22:01.957Z [TRACE] Stderr is not a terminal 2023-10-11T03:22:01.957Z [TRACE] Stdin is not a terminal 2023-10-11T03:22:01.957Z [DEBUG] Attempting to open CLI config file: /home/AzDevOps/.terraformrc 2023-10-11T03:22:01.957Z [DEBUG] File doesn't exist, but doesn't need to. Ignoring. 
2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory terraform.d/plugins 2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /home/AzDevOps/.terraform.d/plugins 2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /home/AzDevOps/.local/share/terraform/plugins 2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /usr/local/share/terraform/plugins 2023-10-11T03:22:01.958Z [DEBUG] ignoring non-existing provider search directory /usr/share/terraform/plugins 2023-10-11T03:22:01.958Z [INFO] CLI command args: []string{"version"} Terraform v1.6.1 on linux_amd64 + provider registry.terraform.io/hashicorp/azurerm v3.75.0 ##[error]Terraform only supports service principal authorization for azure ##[error]Terraform only supports service principal authorization for azure Finishing: terraform plan @jaredfholgate that did the trick...need to get dependabot updating our yaml azure devops tasks in use...but also noticed, we were using @0 because that's what the readme has https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform Maybe the main resolution here is for us to send a PR to the repo to update the readme so the docs guide us to @1 by default now. Thanks for the quick response Closing this for now since it is not related to this repo. If you continue to have issues, please raise a new issue in the task repo here: https://github.com/jason-johnson/azure-pipelines-tasks-terraform/issues Thanks @jaredfholgate that did the trick...need to get dependabot updating our yaml azure devops tasks in use...but also noticed, we were using @0 because that's what the readme has https://marketplace.visualstudio.com/items?itemName=JasonBJohnson.azure-pipelines-tasks-terraform Maybe the main resolution here is for us to send a PR to the repo to update the readme so the docs guide us to @1 by default now. Thanks for the quick response Good point. Yes I think the docs need to be updated over there. I added this issue and will work on it when I get some free time: https://github.com/jason-johnson/azure-pipelines-tasks-terraform/issues/381
2025-04-01T04:54:44.692101
2021-09-09T16:10:46
992398461
{ "authors": [ "glharper", "noahsw" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13189", "repo": "Azure-Samples/cognitive-services-speech-sdk", "url": "https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1252" }
gharchive/issue
Add GovCloud regions to Sample SDK https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/e1ccf3156c94f0b1a668946fc7dd41a64af1230e/samples/js/browser/index.html#L30 Thanks! @noahsw Please see the linked JS PR. Adding those regions to the sample won't, by itself, enable access via JS SDK. The linked PR, which should be in version 1.19 (due by mid-October), will help. I will add these regions to the sample to coincide with the 1.19 release. @noahsw JS Speech SDK version 1.19.0 has been released, with support for usgov regions. Thanks again for writing this issue up! Thx @glharper but I'm not seeing any changes to https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/e1ccf3156c94f0b1a668946fc7dd41a64af1230e/samples/js/browser/index.html#L30. I was hoping to see the GovCloud regions in the dropdown. @noahsw I have a PR now for this file. The structure of the sample has changed, so the file path you linked to will be deleted at some point.
2025-04-01T04:54:44.692980
2022-06-08T16:03:27
1264976427
{ "authors": [ "kendallroden", "safari137" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13190", "repo": "Azure-Samples/container-apps-store-api-microservice", "url": "https://github.com/Azure-Samples/container-apps-store-api-microservice/issues/23" }
gharchive/issue
Missing components directory Was the components directory left out of this repo intentionally? It's required to run locally. fixed
2025-04-01T04:54:44.820148
2023-04-20T17:01:17
1677094890
{ "authors": [ "SrinivasAtmakuri", "bitoku" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13191", "repo": "Azure/ARO-RP", "url": "https://github.com/Azure/ARO-RP/pull/2863" }
gharchive/pull-request
Test cases for GA SupportedVMSizes Which issue this PR addresses: Jira: https://issues.redhat.com/browse/ARO-2883 What this PR does / why we need it: Test plan for issue: Is there any documentation that needs to be updated for this PR? Test cases for https://github.com/Azure/ARO-RP/pull/2798 cc @cadenmarchese I don't think comparing the json bytes is a good idea, because validate.Supported**VmSizes is a map and a map is not ordered. I think we should avoid comparing in a way that depends on the order of the map. https://go.dev/ref/spec#Map_types A map is an unordered group of elements of one type, called the element type, indexed by a set of unique keys of another type, called the key type. If you use *map[api.VMSize]api.VMSizeStruct for wantResponse, you can compare wantResponse and its response directly with validateResponse, which uses deep.Equal and is not dependent on the order. type test struct { name string query string wantStatusCode int wantResponse *map[api.VMSize]api.VMSizeStruct wantError string } I don't think comparing the json bytes is a good idea, because validate.Supported**VmSizes is a map and a map is not ordered. I think we should avoid comparing in a way that depends on the order of the map. https://go.dev/ref/spec#Map_types A map is an unordered group of elements of one type, called the element type, indexed by a set of unique keys of another type, called the key type. If you use *map[api.VMSize]api.VMSizeStruct for wantResponse, you can compare wantResponse and its response directly with validateResponse, which uses deep.Equal and is not dependent on the order. type test struct { name string query string wantStatusCode int wantResponse *map[api.VMSize]api.VMSizeStruct wantError string } I wish to use the map[api.VMSize]api.VMSizeStruct for wantResponse so it's a lot easier to compare and I don't need to json.MarshalIndent, but the API returns a []byte response as you can see here: https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/admin_supportvmsizes_list.go#L34 Yes, the API returns a []byte response in json format, but validateResponse unmarshals the response if wantResponse can't be cast to []byte. https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/shared_test.go#L262 So when you use *map[api.VMSize]api.VMSizeStruct as the wantResponse type, validateResponse unmarshals the response and uses deep.Equal to compare. I looked over some APIs that return []byte and their tests. They just use a pointer to a struct as the wantResponse type and don't marshal it. https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/admin_openshiftversion_list_test.go https://github.com/Azure/ARO-RP/blob/master/pkg/frontend/asyncoperationsstatus_get_test.go Now I see what you meant, made the changes!
2025-04-01T04:54:44.824265
2022-11-30T20:37:48
1470189872
{ "authors": [ "haitch", "vishiy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13192", "repo": "Azure/AgentBaker", "url": "https://github.com/Azure/AgentBaker/pull/2466" }
gharchive/pull-request
Add newer images for azure monitor metrics What type of PR is this? /kind feature What this PR does / why we need it: Which issue(s) this PR fixes: Fixes # Requirements: [ ] uses conventional commit messages [ ] includes documentation [ ] adds unit tests [x] tested upgrade from previous version Special notes for your reviewer: Release note: none we were thinking of slimming down the VHD size to speed up VM creation; to my knowledge this image isn't a key component blocking cluster startup.
2025-04-01T04:54:44.830187
2019-09-15T14:10:33
493741899
{ "authors": [ "v-liatba" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13193", "repo": "Azure/Azure-Security-Center", "url": "https://github.com/Azure/Azure-Security-Center/pull/31" }
gharchive/pull-request
Update README.md New update in diagnostic settings and the README.md file. Hi @tarosler, can you please merge it to master? Thanks
2025-04-01T04:54:44.838371
2021-11-03T10:19:13
1043359454
{ "authors": [ "annanra", "ritika-msft", "v-rucdu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13194", "repo": "Azure/Azure-Sentinel", "url": "https://github.com/Azure/Azure-Sentinel/issues/3370" }
gharchive/issue
Pulse secure parser for Azure Sentinel For standard log format with syslog type as below, the regex used in the pulseconnectsecure(function developed by @shainw and @acnccd) parser is not matching '2021-11-03 01:20:46 - ive - [<IP_ADDRESS>] Jane, Mitchel(Employee)[] - Received OCSP response from '<IP_ADDRESS>' with url 'http://abc.com', user: 'Jane, Mitchel' serial number: 'D8:DB:12:8A:DE:00:00:00:04:B1:8D'' Current regex : (\d{4}-\d{2}-\d{2})\s(\d{2}:\d{2}:\d{2})\s(\S+)\s(\S+)\s(\S+)\s[(\S+)]\s(\S+,)\s(\S+)((.)?)[(.)]\s-\s(.*) Can someone help me with a regex which matches for all the below syslog messages '2021-11-03 01:20:46 - ive - [<IP_ADDRESS>] Jane, Mitchel(Employee)[] - Received OCSP response from '<IP_ADDRESS>' with url 'http://abc.com', user: 'Jane, Mitchel' serial number: 'D8:DB:12:8A:DE:00:00:00:04:B1:8D'' 2020-05-01 05:36:14 - ive - [<IP_ADDRESS>] user100(ABC Realm)[Personal_PC No RDP] - WebRequest ok : Host: sample.abc.com, Request: GET /Citrix/XDSWeb/dample/js/ctxs.webui.min_0204820BD028.js HTTP/1.1 2020-05-01 00:07:21 - ive - [<IP_ADDRESS>] System()[] - User Accounts modified. Removed username ABC\user34 from authentication server ABC Active Directory. Hey.. we are looking into this issue and would get back to you asap... Thanks!!! Hi @annanra , We have updated the regex of the parser to match the syslog messages shared by you. https://github.com/Azure/Azure-Sentinel/pull/3887. Can you please follow below steps and save the function with a different name and update us if the parser is working fine now? Open Log Analytics Workspace Open new Query window Copy and Paste the updated query from the PulseConnectSecure.txt file provided in the https://github.com/Azure/Azure-Sentinel/pull/3887/files. In the query window, on the second line of the query, enter the hostname(s) of your Pulse Connect Secure device(s) and any other unique identifiers for the logstream. // For example: | where Computer in ("server1, server2") and Facility == "local7" Click on Save button and select as Function from drop down by specifying function name and alias as PulseConnectSecure_Test. Run the query to validate data is being received and parsed. Let us know, if the query works fine. Thanks!! Hi @annanra, Closing this issue as there is no response on this for more than a month. Please re-open or create a new one, if the issue persists and needs help. Thanks.
2025-04-01T04:54:44.856771
2016-10-18T16:24:59
183736901
{ "authors": [ "amitkb3", "hangzh-msft" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13195", "repo": "Azure/Azure-TDSP-Utilities", "url": "https://github.com/Azure/Azure-TDSP-Utilities/issues/3" }
gharchive/issue
Warning: Error in winDialog: winDialog() cannot be used non-interactively Hi, I am trying to run IDEAR.rmd and get the following error "Warning: Error in winDialog: winDialog() cannot be used non-interactively". Any idea. Regards, Amit session Info: sessionInfo() R version 3.3.1 (2016-06-21) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 7 x64 (build 7601) Service Pack 1 locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252 [4] LC_NUMERIC=C LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] rmarkdown_1.0 loaded via a namespace (and not attached): [1] Rcpp_0.12.6 digest_0.6.10 mime_0.5 R6_2.1.3 xtable_1.8-2 magrittr_1.5 evaluate_0.9 [8] stringi_1.1.1 miniUI_0.1.1 shinyjs_0.7 tools_3.3.1 stringr_1.0.0 shiny_0.13.2 httpuv_1.3.3 [15] yaml_2.1.13 rsconnect_0.4.3 htmltools_0.3.5 the rmarkdown log is listed below: Loading required package: shiny Listening on http://<IP_ADDRESS>:3994 |.. | 3% inline R code fragments |.... | 6% label: unnamed-chunk-1 (with options) List of 3 $ echo : logi FALSE $ message: logi FALSE $ warning: logi FALSE processing file: IDEAR.rmd Quitting from lines 23-240 (IDEAR.rmd) Warning: Error in winDialog: winDialog() cannot be used non-interactively Stack trace (innermost first): 105: winDialog 104: eval [#29] 103: eval 102: withVisible 101: withCallingHandlers 100: handle 99: evaluate_call 98: evaluate 97: in_dir 96: block_exec 95: call_block 94: process_group.block 93: process_group 92: withCallingHandlers 91: process_file 90: knitr::knit 89: 88: do.call 87: contextFunc 86: .getReactiveEnvironment()$runWith 85: shiny::maskReactiveContext 84: reactive reactive({ out <- rmd_cached_output(file, encoding) output_dest <- out$dest if (out$cached) { if (nchar(out$resource_folder) > 0) { shiny::addResourcePath(basename(out$resource_folder), out$resource_folder) } return(out$shiny_html) } if (!file.exists(dirname(output_dest))) { dir.create(dirname(output_dest), recursive = TRUE, mode = "0700") } resource_folder <- knitr_files_dir(output_dest) perf_timer_reset_all() dependencies <- list() shiny_dependency_resolver <- function(deps) { dependencies <<- deps list() } output_opts <- list(self_contained = FALSE, copy_resources = TRUE, dependency_resolver = shiny_dependency_resolver) message("\f") args <- merge_lists(list(input = reactive_file(), output_file = output_dest, output_dir = dirname(output_dest), output_options = output_opts, intermediates_dir = dirname(output_dest), runtime = "shiny", envir = new.env()), render_args) result_path <- shiny::maskReactiveContext(do.call(render, args)) if (!dir_exists(resource_folder)) dir.create(resource_folder, recursive = TRUE) shiny::addResourcePath(basename(resource_folder), resource_folder) dependencies <- append(dependencies, list(create_performance_dependency(resource_folder))) write_deps <- base::file(file.path(resource_folder, "shiny.dep"), open = "wb") on.exit(close(write_deps), add = TRUE) serialize(dependencies, write_deps, ascii = FALSE) if (!isTRUE(out$cacheable)) { shiny::onReactiveDomainEnded(shiny::getDefaultReactiveDomain(), function() { unlink(result_path) unlink(resource_folder, recursive = TRUE) }) } shinyHTML_with_deps(result_path, dependencies) }) 73: doc 72: shiny::renderUI 71: func 70: output$reactivedoc 3: 2: do.call 1: rmarkdown::run Hi, thanks for the feedback. To run IDEAR, you should open the Run-IDEAR.R in RStudio, and click Source. 
Was this how you ran it and got the error? I ran IDEAR directly in RStudio as I was getting an error when running Run-IDEAR.R in RStudio. I have opened a separate issue https://github.com/Azure/Azure-TDSP-Utilities/issues/4. Thanks for the quick response. Regards, Amit Hi, Amitkb3, For issue #4, we have figured out a possible cause of the error, and provided a way to run IDEAR correctly in RStudio. Basically, you need to (1) Open Run-IDEAR.r in RStudio; (2) Click Source to launch IDEAR. DO NOT execute the code line by line, or select all lines and click Run. For this issue #3, running IDEAR.rmd directly will fail as expected. IDEAR.rmd should be triggered by Run-IDEAR.r in the way described above. Hope it helps. Let us know if you run into any other issues.
2025-04-01T04:54:44.877823
2024-04-29T14:58:36
2269270689
{ "authors": [ "ChrisChapman-gh", "bordera-randy", "pagyP", "prjelesi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13196", "repo": "Azure/Azure-Verified-Modules", "url": "https://github.com/Azure/Azure-Verified-Modules/issues/913" }
gharchive/issue
Check for previous/existing GitHub issues/module proposals [X] I have checked for previous/existing GitHub issues/module proposals. Check this module doesn't already exist in the module indexes [X] I have checked that this module doesn't already exist in the module indexes; or I'm proposing the module to be migrated from CARML/TFVM. Bicep or Terraform? Terraform Module Classification? Pattern Module Module Name avm-ptn-azure-ipam Module Details A terraform pattern to deploy the https://azure.github.io/ipam/#/ container as a web app with supporting resources. Supported deployment models are the default public deployment or a virtual network integrated web app that can use service routes or private endpoints to access resources. Do you want to be the owner of this module? No Module Owner's GitHub Username (handle) No response (Optional) Secondary Module Owner's GitHub Username (handle) ChrisChapman-gh We (BJSS) have most of the terraform written which we can submit as a pull request from a fork. This should get the module most of the way there. We're also happy to contribute to the maintenance. @ChrisChapman-gh thanks for the proposal. Give us time to search for a module owner. @prjelesi I'm happy to own and to work with BJSS on this That would be awesome, thanks Paul. Hi @pagyP Thanks for requesting/proposing to be an AVM module owner! We just want to confirm you agree to the below pages that define what module ownership means: Team Definitions & RACI Shared Specification (Bicep & Terraform) Module Support Any questions or clarifications needed, let us know! If you agree, please just reply to this issue with the exact sentence below (as this helps with our automation 👍): "I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER" Thanks, The AVM Core Team #RR "I CONFIRM I WISH TO OWN THIS AVM MODULE AND UNDERSTAND THE REQUIREMENTS AND DEFINITION OF A MODULE OWNER" Hi @prjelesi - What needs to happen next? Can we get a repo spun up so I can fork it and start some PRs? Cheers Chris Hi @pagyP Thanks for confirming that you wish to own this AVM module and understand the related requirements and responsibilities! Before starting development, please ensure ALL the following requirements are met. Please use the following values explicitly as provided in the module index page: For your module: ModuleName - for naming your module TelemetryIdPrefix - for your module's telemetry For your module's repository: Repo name and folder path are defined in RepoURL Create GitHub teams for module owners and contributors and grant them permissions as outlined here. Grant permissions for the AVM core team and PG teams on your GitHub repo as described here. Check if this module exists in the other IaC language. If so, collaborate with the other owner for consistency. 👍 You can now start the development of this module! ✅ Happy coding! 🎉 Please respond to this comment and request a review from the AVM core team once your module is ready to be published! Please include a link pointing to your PR, once available. 🙏 Any further questions or clarifications needed, let us know! Thanks, The AVM Core Team Hi @prjelesi - What needs to happen next? Can we get a repo spun up so I can fork it and start some PRs? Cheers Chris Hi @ChrisChapman-gh , @pagyP will initiate the repo creation process and when it is created you will be able to fork and work. Let me know if you need any help to move forward. @ChrisChapman-gh I've initiated the repository creation, just pending approval now.
@ChrisChapman-gh repo should now be available https://github.com/Azure/terraform-azurerm-avm-ptn-azure-ipam (thanks for your patience ), please fork and submit a PR as per https://azure.github.io/Azure-Verified-Modules/contributing/terraform/review/ @ChrisChapman-gh are you still planning/able to contribute to this? Let me know either way please, thanks. Hi Paul Yes absolutely – had some leave and then straight onto a new project which has taken a lot of time to get up and running. I should have some time soon to fork and contribute. Cheers Chris Is there any update on this module? I am prepping to deploy IPAM and would like to include this in my pipeline
2025-04-01T04:54:44.879599
2018-12-02T13:53:27
386561807
{ "authors": [ "XiaoningLiu", "michaelkruglos" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13197", "repo": "Azure/Azurite", "url": "https://github.com/Azure/Azurite/issues/125" }
gharchive/issue
Blobs with backward slashes in their names are stored with incorrect names Blobs which have a name like "Path\To\My\Blob" are created as "Blob" instead. @michaelkruglos Please try with Azurite V3, which has solid support for blob creation! Will close this issue.
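A minimal repro sketch (my own, not from the issue; it targets a local Azurite endpoint via the development storage connection string and uses the current Azure.Storage.Blobs .NET client, whereas the reporter was likely on an older SDK):

using System;
using Azure.Storage.Blobs;

// Connect to a local Azurite blob endpoint using the development storage account.
var container = new BlobContainerClient("UseDevelopmentStorage=true", "test-container");
container.CreateIfNotExists();

// The blob name deliberately contains backslashes; the expectation is that it is
// stored under the full name "Path\To\My\Blob", not truncated to "Blob".
var blob = container.GetBlobClient(@"Path\To\My\Blob");
blob.Upload(BinaryData.FromString("hello"), overwrite: true);

foreach (var item in container.GetBlobs())
    Console.WriteLine(item.Name); // expected output: Path\To\My\Blob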
2025-04-01T04:54:44.885053
2019-12-12T11:35:41
536923903
{ "authors": [ "XiaoningLiu", "blueww", "ljian3377" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13198", "repo": "Azure/Azurite", "url": "https://github.com/Azure/Azurite/issues/292" }
gharchive/issue
[blob-storage]BlobClient.getProperties not returning archiveStatus Which service(blob, file, queue, table) does this issue concern? blob Which version of the Azurite was used? 3.3.0-preview Where do you get Azurite? (npm, DockerHub, NuGet, Visual Studio Code Extension) npm What's the Node.js version? v12.13.0 What problem was encountered? Unit test failure: BlobClient.beginCopyFromURL with rehydrate priority it.only("beginCopyFromURL with rehydrate priority", async () => { recorder.skip("browser"); const newBlobURL = containerClient.getBlobClient(recorder.getUniqueName("copiedblobrehydrate")); const initialTier = BlockBlobTier.Archive; const result = await (await newBlobURL.beginCopyFromURL(blobClient.url, { tier: initialTier, rehydratePriority: "Standard" })).pollUntilDone(); assert.ok(result.copyId); delay(1 * 1000); const properties1 = await blobClient.getProperties(); const properties2 = await newBlobURL.getProperties(); assert.deepStrictEqual(properties1.contentMD5, properties2.contentMD5); assert.deepStrictEqual(properties2.copyId, result.copyId); assert.deepStrictEqual(properties2.copySource, blobClient.url); assert.equal(properties2.accessTier, initialTier); await newBlobURL.setAccessTier(BlockBlobTier.Hot); const properties3 = await newBlobURL.getProperties(); assert.equal(properties3.archiveStatus!.toLowerCase(), "rehydrate-pending-to-hot"); }); assert.equal(properties3.archiveStatus!.toLowerCase(), "rehydrate-pending-to-hot") throws an error because properties3.archiveStatus is undefined. /** * For blob storage LRS accounts, valid values are * rehydrate-pending-to-hot/rehydrate-pending-to-cool. If the blob is being rehydrated and is not * complete then this header is returned indicating that rehydrate is pending and also tells the * destination tier. */ archiveStatus?: string; Do we support LRS accounts? Steps to reproduce the issue? Re-run the test-case. Have you found a mitigation/solution? N/A debug.log debug.log request_id b1b8441a-d563-496d-ba8a-a40650d74307 Should add "archiveStatus" to the get properties options. Per the rest doc, valid values are rehydrate-pending-to-hot/rehydrate-pending-to-cool. Since Azurite doesn't have this pending status, we won't fix it. @XiaoningLiu Would you please help to close it?
2025-04-01T04:54:44.893481
2020-11-30T10:04:43
917493447
{ "authors": [ "marcschier", "martin-weber" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13199", "repo": "Azure/Industrial-IoT", "url": "https://github.com/Azure/Industrial-IoT/issues/1212" }
gharchive/issue
Suggestion: Group Monitored Items in one structure in standalone publisher with possibility of trigger For our use cases we need a set of data points sampled and published in the same message. Additional, it would be great if all the nodes are publishing when a certain endpoint changes, for example a boolean switching from false to true. Grouping We would like to be able to group nodes in the publishednodes.json config file of the standalone publisher. The configuration could then look like the following with just a new section OpcNodeGroups: [ { "EndpointUrl": "opc.tcp://host.docker.internal:49322/", "UseSecurity": false, "OpcNodes": [ { "Id": "ns=2;s=Machine1.Status", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 5000, "DisplayName": "Machine1 Status" }, { "Id": "ns=2;s=Machine1.Speed", "OpcSamplingInterval": 500, "OpcPublishingInterval": 2000, "DisplayName": "Machine1 Speed" } ], "OpcNodeGroups": [ { "GroupId": "Machine1", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 2000, "nodes": [ { "Id": "ns=2;s=Machine1.Pressure", "DisplayName": "Pressure" }, { "Id": "ns=2;s=Machine1.Temperature", "DisplayName": "Temperature" }, { "Id": "ns=2;s=Machine1.TorqueA", "DisplayName": "TorqueA" }, { "Id": "ns=2;s=Machine1.TorqueB", "DisplayName": "TorqueB" }, { "Id": "ns=2;s=Machine1.OrderNumber", "DisplayName": "OrderNumber" }, { "Id": "ns=2;s=Machine1.PartNumber", "DisplayName": "PartNumber" }, { "Id": "ns=2;s=Machine1.PartFinished", "DisplayName": "PartFinished" } ] } ] } ] The resulting message might then look like this the following sample. The messages asscociated the Group would then have a GroupId. [ { "NodeId": "nsu=MYServer;s=Machine1.Status", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "Machine1 Status", "Value": { "Value": 389, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "NodeId": "nsu=MYServer;s=Machine1.Speed", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "Machine1 Speed", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.Pressure", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "Pressure", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.Speed", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "Speed", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.TorqueA", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "TorqueA ", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.TorqueB", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "TorqueB", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.OrderNumber", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "OrderNumber", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.PartNumber", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "PartNumber", "Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } }, { "GroupId": "Machine1", "NodeId": "nsu=MYServer;s=Machine1.PartFinished", "ApplicationUri": "urn:MYSERVER:UA%20Server", "DisplayName": "PartFinished", 
"Value": { "Value": -298, "SourceTimestamp": "2020-11-20T15:39:52.2521132Z" } } ] Trigger for Group It would be great if all the nodes are publishing when a certain endpoint changes, for example a boolean switching from false to true. Therefore a Trigger in the OpcNodeGroup would be required, which defines the node for the trigger. [ { "EndpointUrl": "opc.tcp://host.docker.internal:49322/", "UseSecurity": false, "OpcNodes": [ { "Id": "ns=2;s=Machine1.Status", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 5000, "DisplayName": "Machine1 Status" }, { "Id": "ns=2;s=Machine1.Speed", "OpcSamplingInterval": 500, "OpcPublishingInterval": 2000, "DisplayName": "Machine1 Speed" } ], "OpcNodeGroups": [ { "GroupId": "Machine1", "OpcSamplingInterval": 1000, "OpcPublishingInterval": 2000, "Trigger": { "Id": "ns=2;s=Machine1.PartFinished", "Trigger": { "Type": "Once", "Value": true } }, "nodes": [ { "Id": "ns=2;s=Machine1.Pressure", "DisplayName": "Pressure" }, { "Id": "ns=2;s=Machine1.Temperature", "DisplayName": "Temperature" }, { "Id": "ns=2;s=Machine1.TorqueA", "DisplayName": "TorqueA" }, { "Id": "ns=2;s=Machine1.TorqueB", "DisplayName": "TorqueB" }, { "Id": "ns=2;s=Machine1.OrderNumber", "DisplayName": "OrderNumber" }, { "Id": "ns=2;s=Machine1.PartNumber", "DisplayName": "PartNumber" }, { "Id": "ns=2;s=Machine1.PartFinished", "DisplayName": "PartFinished" } ] } ] } ] This is just a proposal /feature request. I'm glad to see other options and possibilities. Thanks, Martin OPC UA PubSub supports datasets, which are equivalent to the subscription (per endpoint, per tag inside endpoint, batches of 1000). They are part of the same network message (size permitting). Regarding triggering, this has been requested as "polled" mode, where a set of nodes are sent on a trigger. This has nothing to do with the way OPC UA subscriptions work, but rather would leverage something like the OPC Twin and a scheduler. A poll mode issue feature request has been added as #1934.
2025-04-01T04:54:44.927002
2022-09-21T06:26:56
1380389596
{ "authors": [ "horihiro", "k-in", "koudaiii" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13200", "repo": "Azure/aca-review-apps", "url": "https://github.com/Azure/aca-review-apps/issues/58" }
gharchive/issue
In the case of parallel execution, one succeeds, but all others fail. While provisioning a container app is in progress, other provision requests will fail. We need guidance or a solution for this issue. Could you please comment on how to do it, as well as any logs or screen shots? If the Container App is in the process of provisioning, the following error will occur when the GitHub action is executed. Looks like retry logic is needed; further investigation is needed to decide which layer the logic should be implemented in. I think the workaround would be to implement retry. In addition, I think that it is necessary to decide what it should be by discussing it separately. @azure/core-rest-pipeline already has retry logic as described in the following documentation. https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/core/core-rest-pipeline/documentation/pipeline.md But the status code 409 in this case is not included in retry-target-response-codes as far as I can tell from reading the following code. https://github.com/Azure/azure-sdk-for-js/blob/82996230773ab8295f06b17a8b6f449f9d9f2a8c/sdk/core/core-rest-pipeline/src/retryStrategies/exponentialRetryStrategy.ts#L81-L94
2025-04-01T04:54:44.929768
2017-01-25T00:11:21
202978156
{ "authors": [ "anhowe", "colemickens", "khenidak" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13201", "repo": "Azure/acs-engine", "url": "https://github.com/Azure/acs-engine/issues/208" }
gharchive/issue
kubectl should be extracted from the hyperkube image Say you're working with or testing managed disks. It's vital that kubectl is at the exact same revision as the running apiserver. If we extracted hyperkube (and thus kubectl) from the container, we could place it on the host. We'd also get to eliminate the extra kubectlVersion parameter/variable we have now. Should be possible to do this with something roughly like: docker run <hyperkube_spec> -v /usr/local/bin:/host cp /hyperkube /host/hyperkube && hyperkube --symlinks Thinking out loud: We need to streamline the entire process. This will happen not only while testing beta features (similar to managed disks) but also when people upgrade clusters to versions that will break their existing kubectl. Maybe we can create a script toolkit as a docker image that has a bunch of scripts, one of which can extract kubectl from the hyperkube spec. Also consider the reverse: upgrade something, oops, now I want my old kubectl back. Closing this P2 as it is nice to have, but it can currently be solved in alternate ways. We can consider re-opening if we get requests for this.
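A rough, untested sketch of what that extraction could look like on a node; the image tag and paths are assumptions for illustration, and it relies on hyperkube dispatching on the name it is invoked as:
HYPERKUBE_IMAGE="gcr.io/google_containers/hyperkube-amd64:v1.7.2"   # example tag, match your apiserver
# copy the hyperkube binary out of the container image onto the host
docker run --rm -v /usr/local/bin:/host "${HYPERKUBE_IMAGE}" cp /hyperkube /host/hyperkube
# a kubectl symlink gives a client at exactly the server's revision
ln -sf /usr/local/bin/hyperkube /usr/local/bin/kubectl
kubectl version --client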
2025-04-01T04:54:44.932557
2018-10-24T21:52:13
373691948
{ "authors": [ "jackfrancis", "mboersma" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13202", "repo": "Azure/acs-engine", "url": "https://github.com/Azure/acs-engine/pull/4117" }
gharchive/pull-request
Fix potential nil pointer dereference when VM tags are empty What this PR does / why we need it: In setting up a regular Jenkins upgrade test with VM tags removed, I found a logging-related panic that this fixes. If applicable: [ ] documentation [ ] unit tests [ ] tested backward compatibility (ie. deploy with previous version, upgrade with this branch) Release note: NONE /lgtm
2025-04-01T04:54:44.945758
2023-11-02T16:58:06
1974671558
{ "authors": [ "OliverMKing", "coveralls" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13203", "repo": "Azure/aks-app-routing-operator", "url": "https://github.com/Azure/aks-app-routing-operator/pull/123" }
gharchive/pull-request
add crd webhooks Description Adds crd webhooks. Type of change Please delete options that are not relevant. [ ] Bug fix (non-breaking change which fixes an issue) [x] New feature (non-breaking change which adds functionality) [x] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] This change requires a documentation update How Has This Been Tested? Tested locally and unit tested. Will be e2e tested in a future PR. Checklist: [x] My code follows the style guidelines of this project [x] I have performed a self-review of my code [x] I have commented my code, particularly in hard-to-understand areas [x] I have made corresponding changes to the documentation [x] My changes generate no new warnings [x] I have added tests that prove my fix is effective or that my feature works [x] New and existing unit tests pass locally with my changes [x] Any dependent changes have been merged and published in downstream modules Pull Request Test Coverage Report for Build<PHONE_NUMBER> 224 of 498 (44.98%) changed or added relevant lines in 5 files are covered. 5 unchanged lines in 1 file lost coverage. Overall coverage decreased (-5.7%) to 78.448% Changes Missing Coverage Covered Lines Changed/Added Lines % pkg/webhook/webhook.go 85 127 66.93% pkg/webhook/cert.go 24 86 27.91% pkg/webhook/nginxingress.go 96 180 53.33% pkg/controller/controller.go 0 86 0.0% Files with Coverage Reduction New Missed Lines % pkg/controller/controller.go 5 20.34% Totals Change from base Build<PHONE_NUMBER>: -5.7% Covered Lines: 2355 Relevant Lines: 3002 💛 - Coveralls /ok-to-test sha=399b4f /ok-to-test sha=399b54f
2025-04-01T04:54:44.965410
2018-01-16T18:39:50
289013895
{ "authors": [ "DzonnyDZ", "alincosmin7", "davidebbo", "glaidler", "kentongray" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13204", "repo": "Azure/app-service-announcements-discussions", "url": "https://github.com/Azure/app-service-announcements-discussions/issues/26" }
gharchive/issue
[Discussion] ARRAffinity cookie is changing to become 'HttpOnly' Discussion thread for https://github.com/Azure/app-service-announcements/issues/12 Hi, re. the SecureOnly flag; without this, anyone using Testing in Production will cause a penetration test to fail, irrespective of the lack of sensitive data in the cookie. Are there any plans to introduce this? Hi, is there any way that we can append the Secure flag to the ARRAffinity cookie? I saw this post on the MSDN blog ( https://blogs.msdn.microsoft.com/webtopics/2018/05/14/securing-the-arraffinity-cookie/ ). I tried that solution for an ASP.NET Core web app but it is not working; the ARRAffinity cookie still lacks the Secure flag. I'd also like to chime in with the same. I agree that it is not a real "risk", but marking it as Secure is good practice since it is using HTTPS and should cause no issues. Just came across this, actually across #12. I needed this for a scenario where I was handling OAuth from a 3rd party which absolutely didn't understand how OAuth should be implemented, so the 3rd party was making calls to our servers as part of authentication. That way, of course, any affinity was lost and I needed to pass affinity around in the OAuth state. I had problems figuring out where to get WEBSITE_INSTANCE_ID from. Hint for anybody who doesn't know where to read it from: read WEBSITE_INSTANCE_ID from IConfiguration (which is available via DI in .NET Core), like config["WEBSITE_INSTANCE_ID"].
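For anyone following that hint, a minimal ASP.NET Core sketch might look like the following; the class name and fallback value are made up, and only config["WEBSITE_INSTANCE_ID"] comes from the comment above.
using Microsoft.Extensions.Configuration;

public class InstanceInfo
{
    private readonly string _instanceId;

    public InstanceInfo(IConfiguration config)
    {
        // WEBSITE_INSTANCE_ID is exposed to the app as an environment variable,
        // which IConfiguration surfaces by default in ASP.NET Core.
        _instanceId = config["WEBSITE_INSTANCE_ID"] ?? "local";
    }

    public string InstanceId => _instanceId;
}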
2025-04-01T04:54:44.969279
2020-10-19T07:46:49
724366080
{ "authors": [ "dipakyadav", "vishal8k" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13205", "repo": "Azure/application-gateway-kubernetes-ingress", "url": "https://github.com/Azure/application-gateway-kubernetes-ingress/issues/1024" }
gharchive/issue
setting custom health probe via portal gets reset automatically after some time Describe the bug I have deployed Elasticsearch in AKS using the link https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html, and exposed it (public IP / domain) using the Application Gateway ingress. Through the ingress, a health probe is automatically created for Elasticsearch, but since Elasticsearch is protected with basic auth by default it returns a 401 status code, which causes backend health to fail. To resolve this, I tried manually editing the health probe from the portal and set it to 200-399,401, which resolves the issue for some time (it gets reset to 200-399 automatically after a while). @dipakyadav By default AGIC assumes full control of the appgw. Hence any manual changes you make in the application gateway will be reverted by AGIC whenever any of these actions happen (new pod launch, new ingress rule addition, removal of ingress rules, etc.). You have two options here. One is defining a ProhibitedTarget (you have to launch AGIC in shared mode for this). Another is editing the health check of the Elasticsearch deployment (editing the existing liveness/readiness probe). Reference link for Azure Ingress Prohibited target: https://github.com/Azure/application-gateway-kubernetes-ingress/tree/master/crds This issue shall be closed. Thanks @vishal8k , will check
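As an illustration of the second option (adjusting the workload's own probe, which AGIC copies into the Application Gateway health probe), a hypothetical readiness probe tweak could look like this; the path, port and scheme are placeholders and would need to point at an endpoint Elasticsearch answers without a 401:
readinessProbe:
  httpGet:
    path: /_cluster/health   # assumed endpoint; must not return 401 for the probe to pass
    port: 9200
    scheme: HTTPS
  initialDelaySeconds: 30
  periodSeconds: 10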
2025-04-01T04:54:44.971248
2023-05-29T08:18:16
1730314799
{ "authors": [ "edburns", "galiacheng" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13206", "repo": "Azure/arm-ttk", "url": "https://github.com/Azure/arm-ttk/pull/743" }
gharchive/pull-request
Fix bug with "PasswordBoxes-Must-Have-Min-Length" test Resolved #742 . Added test case. Hello @StartAutomating Brian Moore has directed me to you for your help. Here is a description of what we need, in addition to this being merged. Get the PR merged and a new release performed. Get Partner Center to uptake the release with the fix.
2025-04-01T04:54:44.985835
2023-05-24T17:04:53
1724485010
{ "authors": [ "jamesfan1", "joaobarraca", "m-nash" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13207", "repo": "Azure/autorest.csharp", "url": "https://github.com/Azure/autorest.csharp/issues/3433" }
gharchive/issue
autorest is adding u8 at the end of the file Before asking the question: [x] have you checked the faq, the documentation in the docs folder and couldn't find the information there [x] have you checked existing issues for a similar question? We are encountering an issue where we are using .NET 6, AutoRest code generation utility [cli version: 3.3.2; node: v16.14.2], config as below: input-file: openapi.json project-folder: . output-folder: $(project-folder)/Generated clear-output-folder: true csharp: true public-clients: true skip-csproj: true generation1-convenience-client: true It is adding u8 at the end of many generated string literals, e.g.: writer.WritePropertyName("contents"u8); writer.WritePropertyName("protectedFiles"u8); Does anyone know why that's the case? It is causing errors in our build, as we currently do not support C# version 11 because we are still on .NET 6. I have tried many different versions of autorest but it still appends the u8 at the end of the strings. This was recently added since it provides a performance improvement in serialization. We can add a flag to turn this off which you can set in your autorest.md, but we will probably keep the default for this to be on. @AlexanderSher do you mind picking this up? Is there a specific flag that can be used to force the use of .NET 6, or C# v10 instead of v11?
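One workaround on the consuming side, independent of any generator flag: u8 string literals only require the C# 11 language version, not a newer runtime, so raising LangVersion in the project file should let the generated code build on net6.0. A hypothetical csproj fragment:
<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <!-- u8 suffixed string literals need the C# 11 language version -->
  <LangVersion>11.0</LangVersion>
</PropertyGroup>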
2025-04-01T04:54:44.987522
2024-03-05T23:22:44
2170317466
{ "authors": [ "jorgerangel-msft", "m-nash" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13208", "repo": "Azure/autorest.csharp", "url": "https://github.com/Azure/autorest.csharp/issues/4333" }
gharchive/issue
[Microsoft Generator CSharp] Migrate All Unbranded Test Projects As a follow up to https://github.com/Azure/autorest.csharp/issues/4200, all of the remaining unbranded test projects should be migrated to the new generator and should serve as the validation when making changes to the generator code. I want to consider the trade offs with only having cadl-ranch tests with very few exceptions.
2025-04-01T04:54:44.992556
2023-03-02T06:51:34
1606209290
{ "authors": [ "ArcturusZhang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13209", "repo": "Azure/autorest.csharp", "url": "https://github.com/Azure/autorest.csharp/pull/3182" }
gharchive/pull-request
skip the validation for external projects Fixes https://github.com/Azure/autorest.csharp/issues/3179 Description Add your description here! Checklist To ensure a quick review and merge, please ensure: [ ] The PR has a understandable title and description explaining the why and what. [ ] The PR is opened in draft if not ready for review yet. If opened in draft, please allocate sufficient time (24 hours) after moving out of draft for review [ ] The branch is recent enough to not have merge conflicts upon creation. Ready to Land? [ ] Build is completely green Submissions with test failures require tracking issue and approval of a CODEOWNER [ ] At least one +1 review by a CODEOWNER [ ] All -1 reviews are confirmed resolved by the reviewer Override/Marking reviews stale must be discussed with CODEOWNERS first @m-nash I made a few changes, could you take a look again? @m-nash latest regen is here: https://github.com/Azure/azure-sdk-for-net/pull/35727
2025-04-01T04:54:45.013003
2018-07-02T02:28:59
337349119
{ "authors": [ "JonKohler", "deathly809", "fearthecowboy", "mallochine" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13210", "repo": "Azure/autorest.incubator", "url": "https://github.com/Azure/autorest.incubator/issues/20" }
gharchive/issue
Release Notes for pre-beta BEFORE YOU BEGIN: IMPORTANT NOTES There are a lot of what I consider to be "minor" issues in here. My priority at this point has been to get enough generated code in place to get the complete end-to-end scenarios functioning, and I can worry about the 'details' once we start thoroughly testing. Modifing the generation is actually really really trivial for me, so I consider the vast majority of the rest of it 'pretty minor' I have code that's not exposed where auth and other things are handled for generating support for Azure ARM resources, and that's not going help you. If this doesn't work for for you Don't panic; this is pre-beta, and it works great for certain scenarios, but they may not be yours If you can't get it to work with your stuff Wait till I get back from vacation. I can't really help before I get back. If you try to use petstore.json for testing You will be smacked with a newspaper, and I shall not give you a treat. If you don't think the cmdlets are very good I should remind you that Jeffrey Snover reviewed this and he thinks a bit different. You will have to wait till I get back for me to document why this works the way it does. Trust me when I say, I went to great lengths to design things a certain way, and everything has purpose and reason. Caveats and known issues: PLEASE IGNORE the errors like: (they are actually harmless) Error occurred in handler for 'ReadFile' in session 'session_5': Error: Could not read 'obj/test.txt'. at QuickDataSource.ReadStrict (C:\Users\garretts\.autorest\@microsoft.azure_autorest-core@2.0.4280\node_modules\@microsoft.azure\autorest-core\dist\lib\data-store\data-store.js:26:19) at <anonymous> Error occurred in handler for 'ReadFile' in session 'session_7': Error: Could not read 'ContainerRegistryManagement.private.csproj'. at QuickDataSource.ReadStrict (C:\Users\garretts\.autorest\@microsoft.azure_autorest-core@2.0.4280\node_modules\@microsoft.azure\autorest-core\dist\lib\data-store\data-store.js:26:19) at <anonymous> Bugs there are a lot of little bugs. I haven't published my backlog yet. WHAT IS MISSING FROM THIS BUILD auth support documentation and explinations. field/class/property/method descriptions. Some are there, but I gotta go thru and do good pass everywhere. Handlers for Retry/etc -- Persisting/editing names/parameters/etc for generated commands use of proxy settings/etc no support for streams, duration, or arrays at the root/base level of the request/response Getting Started Requires: node 8.11.3 (stay away from node 10 for now https://nodejs.org/dist/v8.11.3/node-v8.11.3-x64.msi autorest 2.0.4280+ npm install -g autorest dotnet 2.0.0 sdk - either install the one from the .net page or use npm install -g dotnet-sdk-2.0.0 powershell core 6 https://github.com/PowerShell/PowerShell/releases/download/v6.0.2/PowerShell-6.0.2-win-x64.zip install the autorest.incubator plugin : autorest --reset autorest "--use=@microsoft.azure/autorest.incubator@preview" Usage autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell --output-folder=output-folder --input-file=path-or-url-to-swagger-file sample run: # run these from powershell core. # Windows Powershell is not ok. 
# The cmdlets can work in both, but be patient autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell --input-file=https://github.com/Azure/azure-rest-api-specs/blob/master/specification/redis/resource-manager/Microsoft.Cache/stable/2018-03-01/redis.json --output-folder=./generated # will generate the cmdlets into the output-folder cd ./generated # now we do one last step (build proxies and import the module) ./generate-proxies.ps1 -test # Now you're in a new pwsh instance with the cmdlets #example: get-operation -verbose -debug One quick comment - we're working through this - small typo on npm install, should be - npm i -g dotnet-sdk-2.0.0 not npm i -g dotnet-sdk.2.0.0 $ sudo npm install -g dotnet-sdk.2.0.0 npm WARN notice Due to a recent security incident, all user tokens have been invalidated. Please see https://status.npmjs.org/incidents/dn7c1fgrr7ng for more details. To generate a new token, visit https://www.npmjs.com/settings/~/tokens or run "npm login". npm ERR! code E404 npm ERR! 404 Not Found: dotnet-sdk.2.0.0@latest don't think it works on centos7 Trying the powershell autorest generation code here: https://github.com/Azure/autorest/tree/master/Samples/1a-code-generation-minimal $ autorest --use="@microsoft.azure/autorest.incubator@preview" --powershell AutoRest code generation utility [version: 2.0.4280; node: v10.5.0] (C) 2018 Microsoft Corporation. https://aka.ms/autorest There is a new version of AutoRest available (2.0.4282). > You can install the newer version with with npm install -g autorest@latest Loading AutoRest core '/Users/alex.guo/.autorest/@microsoft.azure_autorest-core<EMAIL_ADDRESS>(2.0.4280) Loading AutoRest extension<EMAIL_ADDRESS>(preview->1.0.86) Loading AutoRest extension<EMAIL_ADDRESS>(~2.2.51->2.2.67) Loading AutoRest extension<EMAIL_ADDRESS>(2.3.50->2.3.50) WARNING (UndefinedTypeWithSchema): The schema 'User' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object' - swagger-document:1:0 WARNING (UndefinedTypeWithSchema): The schema 'Category' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object' - swagger-document:1:0 WARNING (UndefinedTypeWithSchema): The schema 'Pet' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object' - swagger-document:1:0 WARNING (UndefinedTypeWithSchema): The schema 'Tag' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object' - swagger-document:1:0 WARNING (UndefinedTypeWithSchema): The schema 'Order' with an undefined type and decalared properties is a bit ambigious. This has been auto-corrected to 'type:object' - swagger-document:1:0 WARNING (TypeFileNotValid): The schema type 'file' is not a OAI standard type. This has been auto-corrected to 'type:string' and 'format:binary' - swagger-document:1:0 Error: Format 'Binary' not implemented. at SchemaDefinitionResolver.resolveTypeDeclaration (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/schema/schema-resolver.ts:80:19) at nameStuffRight (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/namer.ts:146:49) Error: Inputs missing. 
at process (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/csharp/lowlevel-generator/main.ts:20:13) Error: Inputs missing. at Object.processCodeModel (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/common/process-code-model.ts:18:13) Error: Inputs missing. at processRequest (/Users/alex.guo/.autorest/@microsoft.azure_autorest.incubator@1.0.86/node_modules/@microsoft.azure/autorest.incubator/src/powershell/powershell-generator.ts:17:13) And yes, we realize that test with petstore is a slap on the nose, but given we were having some challenges, we decided to start with the basics as a sanity check @mallochine : "no support for streams, duration, or arrays at the root/base level of the request/response" I'm not that familiar with the petstore example (linked in 1a-code-generation-minimal). What is a stream exactly? Is it where 'readme.md' is read into stdin? Should I actually have done --input-file="a.json" or something similar? If yes to that last question, then I actually still run into the same code generation problem. So, please elaborate on what is a 'stream'? No; the swagger file has format: binary somewhere in it. That ends up being a stream. Not yet supported. Ahha! We've got those all over the place in our code, which means a nice "e brake" for us here :( Are you back from PTO @fearthecowboy ? Or more apt question - is there another drop that might support binary in the works? Does it make sense for us to dig into the code and submit a PR, or just leave you too it at this point? No rush, we want to help you out here as much as we can, as i strongly believe that AutoRest is going to be our true north for both traditional client binaries (for other languages) and PowerShell (from the sounds of it) Spacing between comments and previous code would be great, /// <summary>The URI for the proxy server to use</summary> [System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "The URI for the proxy server to use")] public System.Uri Proxy {get;set;} /// <summary>Credentials for a proxy server to use for the remote call</summary> [System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "Credentials for a proxy server to use for the remote call")] [System.Management.Automation.ValidateNotNull] public System.Management.Automation.PSCredential ProxyCredential {get;set;} to /// <summary>The URI for the proxy server to use</summary> [System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "The URI for the proxy server to use")] public System.Uri Proxy {get;set;} /// <summary>Credentials for a proxy server to use for the remote call</summary> [System.Management.Automation.Parameter(Mandatory = false, DontShow= true, HelpMessage = "Credentials for a proxy server to use for the remote call")] [System.Management.Automation.ValidateNotNull] public System.Management.Automation.PSCredential ProxyCredential {get;set;} Yes. Agreed. that's been driving me crazy. For the ItemGroup > PackageReferences - what are your thoughts about adding Rosyln analyzers in there? When we were handcrafting our cmdlets, we found that using the in-built dotnet analyzers did help give us a wee bit of sanity right on build. Before we did that, we ended up chasing half working code in some areas, that various "stuff" caught. 
I'm happy to send in a PR if you'd like with my ideas @fearthecowboy I'm talking about this part here: https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/powershell-generator.ts#L79 Yeah; we should have it conditional based on a configuration setting. ie, add one of these: https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/project.ts#L304 this.enableRoslynAnalyzer = await service.GetValue('enable-roslyn-analyzer') || false; and then emit it when project.enableRoslynAnalyzer is set. (trying to get things more configuration driven) ok, I'll give it a crack if I can break away a few cycles over the next few days. That seems like a sane approach. The only tricky-ish bit is that we're statically creating the csproj by basically echo'ing a static config into a file with .writefile at the moment, which itself is fine. Do we need to change that to be more like the bit right below it in code (the psd1 bit), where it has some conditional logic? I could change it over to be this in the PR, but wanted to get your thoughts first https://github.com/Azure/autorest.incubator/blob/a86ff8df11778f385e41243d90e7169ebe6e3347/src/powershell/powershell-generator.ts#L108 Yeah, it'd probably a good idea to move to the text-emitter style; it makes it easy to exclude stuff.
2025-04-01T04:54:45.016197
2023-10-19T23:55:24
1953229115
{ "authors": [ "kristapratico", "msyyc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13211", "repo": "Azure/autorest.python", "url": "https://github.com/Azure/autorest.python/issues/2200" }
gharchive/issue
[typespec-python] when response body is a string, we shouldn't call json on it typespec here defines our response body as a string: https://github.com/kristapratico/azure-rest-api-specs/blob/azopenai-python/specification/cognitiveservices/OpenAI.Inference/routes.tsp#L131 Generated code tries to call json() on the response here and it fails because it's not JSON: https://github.com/kristapratico/azure-rest-api-specs/blob/azopenai-python/specification/cognitiveservices/OpenAI.Inference/tsp-output/%40azure-tools/typespec-python/azure/openai/operations/_operations.py#L1335 Not entirely sure, but I think the code should be something like: deserialized = _deserialize( str, response.text() ) Will discuss it in the scrum meeting
2025-04-01T04:54:45.021781
2021-02-25T15:49:26
816560803
{ "authors": [ "DennisR73", "alex-frankel", "felipebbc", "pkhabazi" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13212", "repo": "Azure/azure-blueprints", "url": "https://github.com/Azure/azure-blueprints/issues/52" }
gharchive/issue
BluePrint configure resourceGroup tags from parameter I am trying to configure tags for ResourceGroups from the blueprints and I am experiencing some issue. In the example below I am want to parse the Tags as an object parameter: { "properties": { "type": "Microsoft.Blueprint/blueprints", "description": "Management Blueprint", "targetScope": "subscription", "parameters": { "tags": { "type": "object", "metadata": { "displayName": "Enter the Tags that need to be configured" }, "defaultValue": { "tags1": "value1", "tags2": "value2" } } }, "resourceGroups": { "ResourceGroup1": { "name": "resourceGroup01", "location": "westeurope", "metadata": { "displayName": "resourceGroup01" }, "dependsOn": [], "tags": "[parameters('tags')]" } } } } However, the above template returns the error message below: Import-AzBlueprintWithArtifact: Can't deserialize the JSON file '/management/Blueprint.json'. 'Error converting value "[parameters('tags')]" to type 'System.Collections.Generic.IDictionary`2[System.String,System.String]'. Path 'tags', line 26, position 38.' When I configure the Tags on ResourceGroup and use a String parameter as input it works fine: { "properties": { "type": "Microsoft.Blueprint/blueprints" , "description": "Management Blueprint", "targetScope": "subscription", "parameters": { "tagsvalue": { "type": "string", "defaultValue": "value1" } }, "resourceGroups": { "ResourceGroup1": { "name": "resourceGroup01", "location": "westeurope", "metadata": { "displayName": "resourceGroup01" }, "dependsOn": [], "tags": { "Tags1": "[parameters('tagsvalue')]" } } } } } Is this a bug or is it not possible to forward an object to the Tags part? +1 I don't think we support object parameters. I think the tag values need to be passed individually. Have you tried that? Hi @pkhabazi The reason I believe you are getting the error message 'Error converting value "[parameters('tags')]" to type 'System.Collections.Generic.IDictionary`2[System.String,System.String]' is because the "[parameters('tags')]" expression within the resourceGroups section does not actually get substituted with the "tags" object you defined in the parameters section. The ARM interpreter is passing a literal string "[parameters('tags')]" to the function that attempts conversion to a System.Collections.Generic.IDictionary<String,String> object and that is why it fails. So in your first example, you are actually trying to pass a string to the tags element while it is expecting an object. In your second example, you are passing an object. Keep in mind that in the second example, the "[parameters('tagsvalue')]" string is also not getting replaced with "value1" when you run the Import-AzBlueprintWithArtifact command. It just sets the value of the "Tags1" key to the "[parameters('tagsvalue')]" string. VS Code's syntax highlighting for ARM templates is misleading in this case. The color coding for the "[parameters('tags')]" expression in the resourceGroups section should be represented in the standard red denoting a string.
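For reference, the individual-string-parameter approach suggested above might look roughly like this trimmed fragment; the parameter and tag names are invented and only the relevant parts of the blueprint are shown:
{
  "parameters": {
    "costCenterTag": { "type": "string", "defaultValue": "value1" },
    "ownerTag": { "type": "string", "defaultValue": "value2" }
  },
  "resourceGroups": {
    "ResourceGroup1": {
      "name": "resourceGroup01",
      "location": "westeurope",
      "tags": {
        "costCenter": "[parameters('costCenterTag')]",
        "owner": "[parameters('ownerTag')]"
      }
    }
  }
}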
2025-04-01T04:54:45.029656
2020-07-03T22:36:43
650784426
{ "authors": [ "admin-simeon", "b0bu", "crmitchelmore", "saamorim", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13213", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/issues/1971" }
gharchive/issue
Azure DevOps CLI Create Service Connection with Installation Token Extension name: Azure DevOps Description How do I create a GitHub service connection using an InstallationToken (the installed GitHub app) instead of an OAuth flow/PAT? I can see the following service connection in one of my existing orgs, by running: az devops service-endpoint list [ { "authorization": { "scheme": "InstallationToken" }, "createdBy": { ... }, "data": { "AvatarUrl": "https://avatars3.githubusercontent.com/u/63518284?v=4", "pipelinesSourceProvider": "github" }, ... "type": "GitHub", "url": "https://github.com" } ] But how do I create such a connection using the DevOps CLI? It only seems to support OAuth or entering a PAT when creating a GitHub connection. devops Bump. Not supported? This would be really useful. At the moment using terraform for devops isn't really possible without this As a workaround, I've tried using the az devops service-endpoint create --org [ORG] -p [PROJECT] --service-endpoint-configuration CustomServiceEndpoint.json with the file below, without much luck. When I access the project build information got the error An error occurred while fetching the pipeline. TF400864: The Item specified by the key '26501285-261b-409c-b7ce-154f1da82d74' does not exist. where the key 26501285... is the actual project id { "description": "", "administratorsGroup": null, "authorization": { "parameters": { "accessToken": null }, "scheme": "InstallationToken" }, "createdBy": null, "data": { "AvatarUrl": "https://avatars2.githubusercontent.com/u/2916417?v=4", "pipelinesSourceProvider": "github" }, "name": "SomeName", "type": "GitHub", "url": "https://github.com", "readersGroup": null, "groupScopeId": null, "serviceEndpointProjectReferences": null, "operationStatus": null, "isReady": true, "isShared": false, "owner": "Library" } Don't have the specs for this json and since I cannot create it by hand I cannot follow the https://docs.microsoft.com/en-gb/azure/devops/cli/service-endpoint?view=azure-devops#create-service-endpoint-using-a-configuration-file to get the json. If any one figures this json schema, drop here a message. This process is terrbile and not really fit for production but you can sort of manage it via terraform. I haven't tried creating one via the cli. You can create an oauth service connection in terraform: resource "azuredevops_serviceendpoint_github" "organisation" { project_id = azuredevops_project.platform.id service_endpoint_name = "some_org" description = "" auth_oauth { oauth_configuration_id = "000000000-0000-0000-0000-000000000000" } } Which you then have to go an manually authorize as the azuredevops_resource_authorization resource doesn't work. You can then go to your pipeline and "convert" the oauth service endpoint to an "app connection", like this: This creates a new service connection named "my connection (1)" or whatever you called it. You can them import this new object to terraform with terraform import azuredevops_serviceendpoint_github.s projectid/service-endpoint-id and use it for any new pipelines for that project. I haven't come acorss a scenario where you'd need multiple github connections per project so it seems to be a one time operation. resource "azuredevops_serviceendpoint_github" "s" { project_id = azuredevops_project.platform.id service_endpoint_name = "my connection (1)" description = "" }
2025-04-01T04:54:45.034874
2019-03-20T12:25:23
423223589
{ "authors": [ "KevinBrooke", "bebattis", "jiasli", "kylecweeks" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13214", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/issues/585" }
gharchive/issue
This page is incorrect The examples are different from the definitions. Also, --group-name is now --name. Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 01f8fe72-48f6-2e30-0741-d83bafc5d438 Version Independent ID: 32ed6908-2c71-3c17-683b-d07eb08d4590 Content: az account management-group Content Source: latest/docs-ref-autogen/ext/managementgroups/account/management-group.yml Service: azure GitHub Login: @rloutlaw Microsoft Alias: routlaw Yeah, I see this too. Various commands are mixing up the az account management-group commands with the az CLI extension for managementgroups. Please remove this page, or reference the correct CLI: https://docs.microsoft.com/en-us/cli/azure/account/management-group?view=azure-cli-latest -- this caused much confusion. Thanks all for the findings. I will remove this extension from this repo.
2025-04-01T04:54:45.048403
2019-04-04T13:15:41
429272929
{ "authors": [ "jayaakshayan", "tamirkamara" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13215", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/issues/622" }
gharchive/issue
AZ image copy between different EA subscription ( same location/Different location) not working. If the issue is to do with Azure CLI 2.0 in-particular, create an issue here at Azure/azure-cli Extension name (the extension in question) az image copy Description of issue (in as much detail as possible) Throws error while copying the image from one EA subscription to another subscription. Tried with same location also different location. command failed: ['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '****', '--tags', 'created_by=image-copy-extension'] output: ERROR: Operation failed with status: 'Forbidden'. Details: 403 Client Error: Forbidden for url: https://management.azure.com/subscriptions//resourcegroups/image-copy-rg?api-version=2018-05-01 The command failed with an unexpected error. Here is the traceback: Command '['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '**********', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 1. Traceback (most recent call last): File "/opt/az/lib/python3.6/site-packages/knack/cli.py", line 206, in invoke cmd_result = self.invocation.execute(args) File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 351, in execute raise ex File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 409, in _run_jobs_serially results.append(self._run_job(expanded_arg, cmd_copy)) File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 402, in _run_job six.reraise(sys.exc_info()) File "/opt/az/lib/python3.6/site-packages/six.py", line 693, in reraise raise value result = cmd_copy(params) File "/opt/az/lib/python3.6/site-packages/azure/cli/core/commands/init.py", line 171, in call return self.handler(args, kwargs) File "/opt/az/lib/python3.6/site-packages/azure/cli/core/init.py", line 451, in default_command_handler return op(command_args) File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/custom.py", line 104, in imagecopy target_subscription) File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/custom.py", line 193, in create_resource_group run_cli_command(cli_cmd) File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 35, in run_cli_command raise ex File "/home/jaya/.azure/cliextensions/image-copy-extension/azext_imagecopy/cli_utils.py", line 21, in run_cli_command cmd_output = check_output(cmd, stderr=STDOUT, universal_newlines=True) File "/opt/az/lib/python3.6/subprocess.py", line 336, in check_output kwargs).stdout File "/opt/az/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '', '--tags', 'created_by=image-copy-extension']' returned non-zero exit status 1. @jayaakshayan the command failed because the temporary resource group creation failed - see the first few lines of your message. Is the user running the image copy command has permissions to create new resource groups? 
If not, a quick workaround is to pre-create the group manually, just like this command tries to do: command failed: ['/opt/az/bin/python3', '-m', 'azure.cli', 'group', 'create', '--name', 'image-copy-rg', '--location', 'southindia', '--output', 'json', '--subscription', '*************', '--tags', 'created_by=image-copy-extension'] Thanks Tamir, it was a policy-level restriction that blocks creating resource groups without tags; I have removed the policy for the specific resource group, which sorted out the issue.
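For anyone hitting the same policy, the pre-create workaround could look roughly like this; the names and the extra tag are examples, and the exact flag names should be double-checked against az image copy --help for your extension version:
# pre-create the temporary group with whatever tags your policy demands
az group create \
  --name image-copy-rg \
  --location southindia \
  --tags created_by=image-copy-extension costcenter=1234 \
  --subscription "<target-subscription-id>"

# then run the copy against the same location/subscription
az image copy \
  --source-resource-group <source-rg> \
  --source-object-name <image-name> \
  --target-location southindia \
  --target-resource-group image-copy-rg \
  --target-subscription "<target-subscription-id>"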
2025-04-01T04:54:45.053063
2021-05-19T03:20:32
894949262
{ "authors": [ "00Kai0", "kairu-ms", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13216", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/pull/3394" }
gharchive/pull-request
[Front Door] Add update func for backend and fix backend_host_header Fix issues: https://github.com/Azure/azure-cli/issues/17270 https://github.com/Azure/azure-cli/issues/17269 This checklist is used to make sure that common guidelines for a pull request are followed. For new extensions: [ ] My extension description/summary conforms to the Extension Summary Guidelines. About Extension Publish There is a pipeline to automatically build, upload and publish extension wheels. Once your PR is merged into master branch, a new PR will be created to update src/index.json automatically. The precondition is to put your code inside this repo and upgrade the version in the PR but do not modify src/index.json. Front Door Please update extension version and history before release.
2025-04-01T04:54:45.057405
2023-03-20T04:11:18
1631352179
{ "authors": [ "xfz11", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13217", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/pull/6038" }
gharchive/pull-request
serviceconnector-passwordless update dependency This checklist is used to make sure that common guidelines for a pull request are followed. For new extensions: [ ] My extension description/summary conforms to the Extension Summary Guidelines. About Extension Publish There is a pipeline to automatically build, upload and publish extension wheels. Once your pull request is merged into main branch, a new pull request will be created to update src/index.json automatically. You only need to update the version information in file setup.py and historical information in file HISTORY.rst in your PR but do not modify src/index.json. serviceconnector Hi @jsntcy, could you help merge the PR. It's an urgent fix.
2025-04-01T04:54:45.062238
2023-12-14T06:58:37
2041039711
{ "authors": [ "moarychan", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13218", "repo": "Azure/azure-cli-extensions", "url": "https://github.com/Azure/azure-cli-extensions/pull/7083" }
gharchive/pull-request
Add new params to support auto binding This checklist is used to make sure that common guidelines for a pull request are followed. Related command Add arguments --bind-service-registry in spring app create. Add arguments --bind-application-configuration-service in spring app create. General Guidelines [ ] Have you run azdev style <YOUR_EXT> locally? (pip install azdev required) [x] Have you run python scripts/ci/test_index.py -q locally? (pip install wheel==0.30.0 required) For new extensions: [ ] My extension description/summary conforms to the Extension Summary Guidelines. About Extension Publish There is a pipeline to automatically build, upload and publish extension wheels. Once your pull request is merged into main branch, a new pull request will be created to update src/index.json automatically. You only need to update the version information in file setup.py and historical information in file HISTORY.rst in your PR but do not modify src/index.json. Thank you for your contribution! We will review the pull request and get back to you soon. Please fix CI issues.
2025-04-01T04:54:45.086252
2019-09-06T20:44:13
490515129
{ "authors": [ "naragati", "qwordy", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13219", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/issues/10464" }
gharchive/issue
Failure in running az vmss update on diskprofile This is autogenerated. Please review and update as needed. Describe the bug Command Name az vmss update Errors: pop from empty list Traceback (most recent call last): Temp\pip-install-qxmmnr17\knack\knack\cli.py, ln 206, in invoke azure\cli\core\commands\__init__.py, ln 603, in execute azure\cli\core\commands\__init__.py, ln 661, in _run_jobs_serially azure\cli\core\commands\__init__.py, ln 654, in _run_job Local\Temp\pip-install-qxmmnr17\six\six.py, ln 693, in reraise azure\cli\core\commands\__init__.py, ln 631, in _run_job azure\cli\core\commands\__init__.py, ln 305, in __call__ azure\cli\core\commands\arm.py, ln 525, in handler azure\cli\core\commands\arm.py, ln 806, in set_properties azure\cli\core\commands\arm.py, ln 968, in _get_name_path IndexError: pop from empty list To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. Put any pre-requisite steps here... az vmss update -g {} -n {} --set {} {} {} Expected Behavior Environment Summary Windows-10-10.0.18362-SP0 Python 3.6.6 Shell: cmd.exe azure-cli 2.0.72 Additional Context This is autogenerated. Please review and update as needed. Describe the bug Command Name az vmss update Errors: pop from empty list Traceback (most recent call last): Temp\pip-install-qxmmnr17\knack\knack\cli.py, ln 206, in invoke azure\cli\core\commands\__init__.py, ln 603, in execute azure\cli\core\commands\__init__.py, ln 661, in _run_jobs_serially azure\cli\core\commands\__init__.py, ln 654, in _run_job Local\Temp\pip-install-qxmmnr17\six\six.py, ln 693, in reraise azure\cli\core\commands\__init__.py, ln 631, in _run_job azure\cli\core\commands\__init__.py, ln 305, in __call__ azure\cli\core\commands\arm.py, ln 525, in handler azure\cli\core\commands\arm.py, ln 806, in set_properties azure\cli\core\commands\arm.py, ln 968, in _get_name_path IndexError: pop from empty list To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. Put any pre-requisite steps here... 
az vmss update --name {} -g {} --set {} {} {} Expected Behavior Environment Summary Windows-10-10.0.18362-SP0 Python 3.6.6 Shell: cmd.exe azure-cli 2.0.72 Additional Context The same works from Powershell PS C:\Users\naragati> $vmss = Get-AzVmss -ResourceGroupName "inmobilab" -VMScaleSetName "navininmboi" PS C:\Users\naragati> $vmss.VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB = 512 PS C:\Users\naragati> $vmss | Update-AzVmss ResourceGroupName : inmobilab Sku : Name : Standard_DS1_v2 Tier : Standard Capacity : 2 UpgradePolicy : Mode : Manual VirtualMachineProfile : OsProfile : ComputerNamePrefix : navininmb AdminUsername : navin LinuxConfiguration : DisablePasswordAuthentication : False ProvisionVMAgent : True StorageProfile : ImageReference : Publisher : Canonical Offer : UbuntuServer Sku : 16.04-LTS Version : latest OsDisk : Caching : ReadWrite CreateOption : FromImage DiskSizeGB : 512 ManagedDisk : StorageAccountType : Premium_LRS NetworkProfile : NetworkInterfaceConfigurations[0] : Name : navininmboiNic Primary : True EnableAcceleratedNetworking : False NetworkSecurityGroup : Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p roviders/Microsoft.Network/networkSecurityGroups/navininmboinsg DnsSettings : IpConfigurations[0] : Name : navininmboiIpConfig Subnet : Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p roviders/Microsoft.Network/virtualNetworks/inmobi/subnets/default PublicIPAddressConfiguration : Name : pub1 IdleTimeoutInMinutes : 15 PrivateIPAddressVersion : IPv4 EnableIPForwarding : False Priority : Regular ProvisioningState : Succeeded Overprovision : True DoNotRunExtensionsOnOverprovisionedVMs : False UniqueId : 78e029f1-e242-42ac-9c62-a4fa2fe1b388 SinglePlacementGroup : True PlatformFaultDomainCount : 5 Id : /subscriptions/8d2d94f8-2e1c-428f-9e66-d36fd0b60f42/resourceGroups/inmobilab/p roviders/Microsoft.Compute/virtualMachineScaleSets/navininmboi Name : navininmboi Type : Microsoft.Compute/virtualMachineScaleSets Location : centralus Tags : {} @qwordy please take a look and response. Thanks for you feedback. Could you please run az vmss update --name {} -g {} --set {} {} {} --debug and paste the output here Hi, could you provide more detailed info? You can paste the full command here except sensitive information like resource group or name, so that I can diagnose it. Hi, I’ve updated the outputs in the github page on same day. Regards, Navin From: Feiyue Yu<EMAIL_ADDRESS>Sent: Sunday, September 8, 2019 7:05 PM To: Azure/azure-cli<EMAIL_ADDRESS>Cc: Navin Kumar Ragati<EMAIL_ADDRESS>Author<EMAIL_ADDRESS>Subject: Re: [Azure/azure-cli] Failure in running az vmss update on diskprofile (#10464) Hi, could you provide more detailed info? You can paste the full command here except sensitive information like resource group or name, so that I can diagnose it. — You are receiving this because you authored the thread. 
Reply to this email directly, view it on GitHub, or mute the thread. This is the same command that I ran the other day for my internal lab: C:\Users\naragati>az vmss update -g inmobilab -n navininmboi --set VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB = 512 Do you have a space between DiskSizeGB and 512? It's the problem. I see the same error if I insert a space between key and value. Space is used to separate multiple key-value pairs in --set. However, I think the error message is confusing. I plan to make it more user-friendly. Thanks for the feedback; yeah, if the error message can be made user-friendly it would help with self-service. Add to Sprint 75. Make error message more understandable. The user's problem has been solved. However, we'll improve the user experience by providing more accurate error information in a future release. Close the issue. I have made the error message more clear. PR merged.
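For readers landing here, the working form of the command from this thread keeps no spaces around '=' (spaces separate multiple key=value pairs in --set):
az vmss update \
  -g inmobilab \
  -n navininmboi \
  --set VirtualMachineProfile.StorageProfile.OsDisk.DiskSizeGB=512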
2025-04-01T04:54:45.093538
2020-09-21T01:10:30
705226223
{ "authors": [ "alexeldeib", "qwordy", "yungezz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13220", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/issues/15217" }
gharchive/issue
az vmss update can't set osDisk caching to None This is autogenerated. Please review and update as needed. Describe the bug These commands work: $ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=ReadOnly # outputs ... "storageProfile": { "osDisk": { "caching": "ReadOnly" ... $ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=ReadWrite # outputs ... "storageProfile": { "osDisk": { "caching": "ReadWrite" ... This one does not. It returns successfully but will not change the value of the field to "None". $ az vmss update --set virtualMachineProfile.storageProfile.osDisk.caching=None # outputs ... "storageProfile": { "osDisk": { "caching": "ReadWrite" ... If I try to PATCH the VMSS directly and then update all instances via the CLI, the operation succeeds. So it's not an API level issue, the bug is in CLI or Python SDK. ace@ace-vm:~$ cat patch.json { "properties": { "virtualMachineProfile": { "storageProfile": { "osDisk": { "caching": "None" } } } } } ace@ace-vm:~$ ace@ace-vm:~$ az rest --method patch --uri "${VMSS_RESOURCE_ID}?api-version=2020-06-01" --body "${PATCH}" ... "storageProfile": { "osDisk": { "caching": "None" ... Command Name az vmss update Errors: To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. Put any pre-requisite steps here... Create a VMSS with ReadWrite/ReadOnly OS disk caching, then try this: az vmss update -g {} -n {} --set virtualMachineProfile.storageProfile.osDisk.caching=None Expected Behavior It should change the caching mode to None, as demonstrated via the patch. Environment Summary Linux-5.4.0-37-generic-x86_64-with-debian-bullseye-sid Python 3.6.10 Installer: DEB azure-cli 2.11.1 Extensions: kusto 0.1.0 (dev) /home/ace/code/azure-cli-extensions/src/kusto aks-preview 0.4.62 (dev) /home/ace/code/azure-cli-extensions/src/aks-preview Additional Context hi @qwordy could you pls help to look at is this a partial patch issue? thanks. You can't set it to None through PUT. None means don't change. It keeps the old value. This is designed by service. @qwordy can you please explain the sample output for my PATCH? 'None' does NOT mean "don't change". It is a valid value from the service perspective? I disagree that this works as intended. Please see my workaround for this bug: https://github.com/alexeldeib/azbench/blob/3d59017d1a0c3dcb5a6a9a06088ff825da5145e3/scripts/cluster.sh#L73-L88 It is a known issue in Azure CLI. The default implementation of update command in CLI is using PUT. We are migrating it to PATCH. It is a long term plan.
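A condensed form of the PATCH workaround shown above, assuming VMSS_RESOURCE_ID is already set; after patching the model, existing instances still need to be rolled to pick it up (resource group and name placeholders are illustrative):
az rest --method patch \
  --uri "${VMSS_RESOURCE_ID}?api-version=2020-06-01" \
  --body '{"properties":{"virtualMachineProfile":{"storageProfile":{"osDisk":{"caching":"None"}}}}}'

# roll the change to existing instances
az vmss update-instances -g <resource-group> -n <vmss-name> --instance-ids '*'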
2025-04-01T04:54:45.117868
2024-02-01T02:27:09
2111357182
{ "authors": [ "vtjc2002", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13221", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/issues/28286" }
gharchive/issue
az eventgrid system-topic event-subscription create - unable to handle/validate webhook endpoint with multiple query parameters correctly. Describe the bug When using az eventgrid system-topic event-subscription create to create an event subscription, validation of the webhook endpoint does not send the query parameters beyond the first one. Repro: Create a system topic with the az eventgrid system-topic create command to create a Microsoft.Storage.StorageAccounts topic set $endpoint = "https://${functionAppUrl}/runtime/webhooks/blobs?functionName=Host.Functions.MyFunctionEventTrigger&code=$functionKey" run az eventgrid system-topic event-subscription create --name scanned-images-blob-created --system-topic-name scanned-items-blobs-topic --endpoint-type "WebHook" --resource-group $resourceGroup --endpoint $endpoint The code is the system key to the Azure function. The command errors out with "'code' is not recognized as an internal or external command, operable program or batch file." Related command az eventgrid system-topic event-subscription create Errors 'code' is not recognized as an internal or external command, operable program or batch file. If I switch code and functionName around it will say: 'functionName ' is not recognized as an internal or external command, operable program or batch file. Issue script & Debug output 'code' is not recognized as an internal or external command, operable program or batch file. Expected behavior It should validate the webhook endpoint using the entirety of the URL plus query parameters. This works when using portal.azure.com Environment Summary azure-cli 2.56.0 core 2.56.0 telemetry 1.1.0 Extensions: aks-preview 0.5.149 fleet 0.2.7 Dependencies: msal 1.24.0b2 azure-mgmt-resource 23.1.0b2 Python location 'C:\Program Files\Microsoft SDKs\Azure\CLI2\python.exe' Extensions directory 'C:\Users\johnsontseng.azure\cliextensions' Python (Windows) 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] Additional context No response Thank you for opening this issue, we will look into it.
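Worth noting: "'code' is not recognized as an internal or external command" is the classic symptom of cmd.exe splitting the argument at the unquoted '&' when PowerShell hands it to az.cmd, rather than Event Grid dropping the parameter. A hedged workaround is to make sure the whole endpoint reaches az as a single quoted argument, for example:
$endpoint = "https://${functionAppUrl}/runtime/webhooks/blobs?functionName=Host.Functions.MyFunctionEventTrigger&code=$functionKey"

az eventgrid system-topic event-subscription create `
  --name scanned-images-blob-created `
  --system-topic-name scanned-items-blobs-topic `
  --resource-group $resourceGroup `
  --endpoint-type WebHook `
  --endpoint "`"$endpoint`""   # embedded quotes so the '&' survives the hand-off to az.cmd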
2025-04-01T04:54:45.123930
2020-11-18T08:46:43
745458553
{ "authors": [ "houk-ms", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13222", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/pull/15963" }
gharchive/pull-request
{Error Improvement} Error category and error message refining Description This PR provides the following error improvements. Suppress UnknownError in error message. Catgorize the uncommon 4xx HTTP response errors into the UnclassifiedUserFaults instead of UnknownError Fix the untidy error message issue in HttpOperationError #15872 Testing Guide For the untidy error message in HttpOperationError, just type az account management-group show --name non-existing for testing. Previously, code: AuthorizationFailed - , The client<EMAIL_ADDRESS>with object id '6d97229a-391f-473a-893f-f0608b592d7b' does not have authorization to perform action 'Microsoft.Management/managementGroups/read' over scope '/providers/Microsoft.Management/managementGroups/non-existing' or the scope is invalid. If access was recently granted, please refresh your credentials. Now, AuthorizationFailed: The client<EMAIL_ADDRESS>with object id '6d97229a-391f-473a-893f-f0608b592d7b' does not have authorization to perform action 'Microsoft.Management/managementGroups/read' over scope '/providers/Microsoft.Management/managementGroups/non-existing' or the scope is invalid. If access was recently granted, please refresh your credentials. History Notes [Component Name 1] BREAKING CHANGE: az command a: Make some customer-facing breaking change. [Component Name 2] az command b: Add some customer-facing feature. This checklist is used to make sure that common guidelines for a pull request are followed. [x] The PR title and description has followed the guideline in Submitting Pull Requests. [x] I adhere to the Command Guidelines. [x] I adhere to the Error Handling Guidelines. @jiasli @evelyn-ys for awareness
2025-04-01T04:54:45.128839
2022-10-14T07:11:39
1408880268
{ "authors": [ "kevinzz6", "wangzelin007", "yonzhan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13223", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/pull/24204" }
gharchive/pull-request
{Synapse} Update artifact version to 0.14.0 Related command Update azure-synapse-artifacts to 0.14.0 version Description Fix icm339842304 https://portal.microsofticm.com/imp/v3/incidents/details/339842304/home Testing Guide History Notes This checklist is used to make sure that common guidelines for a pull request are followed. [x] The PR title and description has followed the guideline in Submitting Pull Requests. [x] I adhere to the Command Guidelines. [x] I adhere to the Error Handling Guidelines. Synapse /azp run /azp run
2025-04-01T04:54:45.133098
2017-11-21T00:29:46
275545256
{ "authors": [ "azuresdkci", "tjprescott" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13224", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/pull/4945" }
gharchive/pull-request
Fixes for Resource modules This checklist is used to make sure that common guidelines for a pull request are followed. General Guidelines [ ] The PR has modified HISTORY.rst describing any customer-facing, functional changes. Note that this does not include changes only to help content. (see Modifying change log). Command Guidelines [ ] Each command and parameter has a meaningful description. [ ] Each new command has a test. (see Authoring Command Modules) View a preview at https://prompt.ws/r/Azure/azure-cli/4945 This is an experimental preview for @microsoft.com users. (It may take a minute or two for your instance to be ready) Email feedback to 'azfeedback' with subject 'Prompt Feedback'.
2025-04-01T04:54:45.138456
2021-07-10T01:04:14
941139942
{ "authors": [ "tamilmani1989" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13225", "repo": "Azure/azure-container-networking", "url": "https://github.com/Azure/azure-container-networking/pull/929" }
gharchive/pull-request
removed lock for version command Reason for Change: Issue Fixed: Requirements: [ ] uses conventional commit messages [ ] includes documentation [ ] adds unit tests Notes: /azp run
2025-04-01T04:54:45.144242
2015-03-10T18:51:22
60549863
{ "authors": [ "ShawnJackson", "azurecla" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13226", "repo": "Azure/azure-content", "url": "https://github.com/Azure/azure-content/pull/3033" }
gharchive/pull-request
Edit hdinsight-hbase-provision-vnet.md Edit complete. On line 23, in the phrase "virtual network integration," if "virtual network" refers to the Azure service rather than the generic term, it should be capitalized as "Virtual Network." Per naming guidelines, I changed instances of "PowerShell" by itself to "Azure PowerShell." Please make sure that all mentions are technically accurate and shouldn't be "Windows PowerShell" instead. Some of the UI text that appears in all caps in screenshots is in title case in text. It would be better if capitalization of UI elements mentioned in text consistently matched what the UI shows. The link "Use Azure Blob storage with Hadoop in HDInsight" goes to a page titled "Query big data from Hadoop-compatible Blob storage for analysis in HDInsight." Please confirm that this is the right link. The link "Analyze Twitter sentiment with HBase in HDInsight" didn't work when I tried it. Please make sure that the "Hadoop Command Line" screenshot doesn't reveal any potentially sensitive information. In the text "For more information on name resolution in Azure virtual networks..." (line 314), please confirm that "virtual networks" (plural generic term) is accurate and shouldn't be "Virtual Network" (Azure service name). The last cmdlet mentioned, Get-AzureHDInsightCluster (line 353), should be formatted like the cmdlet mentioned earlier--for the sake of consistency. Hi @ShawnJackson, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! It looks like you're working at Microsoft (v-shawja). If you're full-time, we DON'T require a contribution license agreement. If you are a vendor, or work for Microsoft Open Technologies, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com. TTYL, AZPRBOT;
2025-04-01T04:54:45.147055
2015-09-08T19:16:51
105451324
{ "authors": [ "azurecla", "jeannt" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13227", "repo": "Azure/azure-content", "url": "https://github.com/Azure/azure-content/pull/4448" }
gharchive/pull-request
Update machine-learning-azure-ml-netsharp-reference-guide.md Explained auto option and default activation functions. Used bold for keywords and emphasis, italics for names. Added acknowledgements. Hi @jeannt, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! It looks like you're working at Microsoft (jeannt). If you're full-time, we DON'T require a contribution license agreement. If you are a vendor, or work for Microsoft Open Technologies, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com. TTYL, AZPRBOT;
2025-04-01T04:54:45.169659
2022-06-13T07:09:47
1269003015
{ "authors": [ "FabianMeiswinkel", "ealsur", "giladl99-eng" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13228", "repo": "Azure/azure-cosmos-dotnet-v3", "url": "https://github.com/Azure/azure-cosmos-dotnet-v3/issues/3263" }
gharchive/issue
System.ObjectDisposedException: Cannot access a closed Stream
Following our migration to cosmos SDK v3 and usage of feed range API, we started to see this exception occasionally in our traces:
Error while copying content to a stream. ---> System.ObjectDisposedException: Cannot access a closed Stream.
at System.IO.Stream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.DelegatingStream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.StreamToStreamCopy.CopyAsync(Stream source, Stream destination, Int32 bufferSize, Boolean disposeSource, CancellationToken cancellationToken)
It is coming from cosmos client when trying to pull feed ranges, see full stack trace attached:
at Microsoft.Azure.Cosmos.GatewayStoreClient.d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.GatewayStoreModel.d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__11.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.Routing.PartitionKeyRangeCache.d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.ContainerCore.d__35.MoveNext()
Looks like you forgot to attach the full callstack (the one in the bug description doesn't show what API on container is called and from where). Can you please attach the full callstack?
docdb sdkv3 stack trace.txt
System.Net.Http.HttpRequestException: Error while copying content to a stream. ---> System.ObjectDisposedException: Cannot access a closed Stream.
at System.IO.Stream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.DelegatingStream.CopyToAsync(Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
at System.Net.Http.StreamToStreamCopy.CopyAsync(Stream source, Stream destination, Int32 bufferSize, Boolean disposeSource, CancellationToken cancellationToken)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpContent.<CopyToAsyncCore>d__44.MoveNext()
--- End of inner exception stack trace ---
at System.Net.Http.HttpContent.<CopyToAsyncCore>d__44.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Cosmos.GatewayStoreClient.<BufferContentIfAvailableAsync>d__13.MoveNext()
Looks like the issue is related to System.Net.Http. In this flow, the application received the HTTP response and our SDK code is asking the HttpResponseMessage to copy the Stream content to a MemoryStream: https://github.com/Azure/azure-cosmos-dotnet-v3/blob/8154849cdf441b3c9669550b64344be7e4c84ad3/Microsoft.Azure.Cosmos/src/GatewayStoreClient.cs#L228-L239
The ownership of the source Stream is in the System.Net space; why it was disposed is something we don't quite know, but it's not controlled by the SDK code. Which .NET Framework / NET Standard implementation version are you running on?
.NET 4.6.2
@giladl99-eng What is the lifetime of your CosmosClient? Are there any events that would call its Dispose? The only relationship I can see is that when the CosmosClient is disposed, the HttpClient is disposed. I wonder if this occurs while a response Content is being accessed, and if this might be the reason.
Hi Matias, we use the Cosmos client as a singleton; dispose happens when our service shuts down.
We saw those exceptions frequently, with no correlation to process termination.
We were able to root cause and find the source of this problem: it was affecting only Bounded Staleness/Strong accounts and it was related to the handling of quorum responses.
Incorrect linking. This is not the case that was found and fixed. This issue's stack trace and error are not related to the issue that was found and fixed. The issue we found and fixed was related to Bounded Staleness/Strong barrier requests on the TCP stack; this stack trace is for an HTTP operation obtaining metadata information, and the disposing is tied to the HttpClient Stream management.
2025-04-01T04:54:45.175466
2021-07-29T11:37:57
955749815
{ "authors": [ "gauravsaralMs", "roshan-sy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13229", "repo": "Azure/azure-devops-cli-extension", "url": "https://github.com/Azure/azure-devops-cli-extension/issues/1165" }
gharchive/issue
Investigate/ fix failures in live tests https://dev.azure.com/ms/azure-devops-cli-extension/_build/results?buildId=205118&view=logs&j=15d4e81f-d43d-597d-0244-b3aa26f78abd [x] #1173 [x] #1174 [x] #1175 [x] #1176 [x] #1177 [x] #1179 [x] #1180 [x] #1181 [x] #1182 Adding new / remaining issues [ ] #1192 [ ] #1193 [ ] #1194 [ ] #1195
2025-04-01T04:54:45.178327
2016-07-20T01:39:18
166476044
{ "authors": [ "azurecla", "jamesbannan" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13230", "repo": "Azure/azure-devtestlab", "url": "https://github.com/Azure/azure-devtestlab/pull/113" }
gharchive/pull-request
DevTest Labs artifact - Windows AD domain join Added DevTest Labs artifact to join a Windows VM to a specified Active Directory Domain. Requires that the Windows operating system can resolve the domain name and the domain controller via DNS. Hi @jamesbannan, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! In order for us to evaluate and accept your PR, we ask that you sign a contribution license agreement. It's all electronic and will take just minutes. I promise there's no faxing. https://cla.azure.com. TTYL, AZPRBOT; @jamesbannan, Thanks for signing the contribution license agreement so quickly! Actual humans will now validate the agreement and then evaluate the PR. Thanks, AZPRBOT;
2025-04-01T04:54:45.186131
2018-12-24T18:06:25
393916916
{ "authors": [ "VSC-Service-Account", "azuresdkci" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13231", "repo": "Azure/azure-docs-cli-python", "url": "https://github.com/Azure/azure-docs-cli-python/pull/1217" }
gharchive/pull-request
Update CLI extensions available doc Update CLI extensions available doc. Triggered by Azure/azure-cli-extensions - TRAVIS_BUILD_ID=471919847 https://github.com/Azure/azure-cli-extensions/commit/3bcb0df1477e27762f6756fd1cdd3da0ef2a93bf OPS Build status updates of commit dc05bf5: :clock10: Preparing: average preparing time is 57 sec(s) OPS Build status updates of commit dc05bf5: :clock10: Incremental building: average incremental building time is 12 min(s) 18 sec(s) OPS Build status updates of commit dc05bf5: :white_check_mark: Validation status: passed File Status Preview URL Details docs-ref-conceptual/azure-cli-extensions-list.md :white_check_mark:Succeeded View (azure-cli-latest) View (azure-cli-2017-03-09-profile) View (azure-cli-2018-03-01-hybrid) For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
2025-04-01T04:54:45.198833
2021-02-11T21:11:35
806745205
{ "authors": [ "azure-sdk", "openpublishbuild" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13232", "repo": "Azure/azure-docs-sdk-dotnet", "url": "https://github.com/Azure/azure-docs-sdk-dotnet/pull/1846" }
gharchive/pull-request
Docs.MS Release Updates for Azure.Analytics.Synapse.Spark Update docs metadata and targeting for release of Azure.Analytics.Synapse.Spark Docs Build status updates of commit 627868e: :warning: Validation status: warnings File Status Preview URL Details api/overview/azure/analytics.synapse.spark-readme-pre.md :warning:Warning View Details api/overview/azure/analytics.synapse.spark-readme-pre.md Line 2, Column 1: [Warning-ms-prod-and-service] Only one of the following attributes can exist: 'ms.prod', 'ms.service'. Use ms.prod for on-premise products, or ms.service for cloud services. Line 9, Column 16: [Warning-ms-prod-technology-invalid] Invalid value for 'ms.technology': 'azure' is not valid with 'ms.prod' value 'azure'. Line 11, Column 13: [Warning-ms-service-subservice-invalid] Invalid value for 'ms.service': 'synapseanalytics'. Line 2, Column 1: [Suggestion-description-missing] Missing required attribute: 'description'. Line 138, Column 1: [Suggestion-table-syntax-invalid] Table syntax is invalid. Ensure your table includes a header and is surrounded by empty lines. NOTE: This Suggestion will become a Warning on 1/29/21. For more details, please refer to the build report. If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build. Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report. Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them. For any questions, please:Try searching in the Docs contributor and Admin GuideSee the frequently asked questionsPost your question in the Docs support channel
2025-04-01T04:54:45.211268
2020-09-26T01:40:37
709395508
{ "authors": [ "azure-sdk", "opbld32", "opbld34" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13233", "repo": "Azure/azure-docs-sdk-java", "url": "https://github.com/Azure/azure-docs-sdk-java/pull/1277" }
gharchive/pull-request
Docs.MS Release Updates for azure-core-amqp Update docs metadata and targeting for release of azure-core-amqp Docs Build status updates of commit 6f29f44: :clock10: Full build: the average full build time is 94 min(s) 47 sec(s), this is based on the last 802 build(s) for this repository. Docs Build status updates of commit 6f29f44: :warning: Validation status: warnings File Status Preview URL Details legacy/docs-ref-autogen/overview/EventHubs/Client.yml :warning:Warning View (azure-java-legacy) Details docs-ref-services/core-amqp-readme-pre.md :white_check_mark:Succeeded View (azure-java-stable) View (azure-java-preview) View (azure-java-legacy) package.json :white_check_mark:Succeeded legacy/docs-ref-autogen/overview/EventHubs/Client.yml [Warning-DuplicateUids] Uid(azure.java.sdk.landingpage.services.eventhub.Client) has already been defined in docs-ref-services/messaging-eventhubs-readme.md. For more details, please refer to the build report. If you see build warnings/errors with permission issues, it might be due to single sign-on (SSO) enabled on Microsoft's GitHub organizations. Please follow instructions here to re-authorize your GitHub account to Docs Build. Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report. Note: Your PR may contain errors or warnings unrelated to the files you changed. This happens when external dependencies like GitHub alias, Microsoft alias, cross repo links are updated. Please use these instructions to resolve them. For any questions, please:Try searching in the Docs contributor and Admin GuideSee the frequently asked questionsPost your question in the Docs support channel
2025-04-01T04:54:45.213039
2019-06-24T22:00:49
460115609
{ "authors": [ "devigned", "elsesiy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13234", "repo": "Azure/azure-event-hubs-go", "url": "https://github.com/Azure/azure-event-hubs-go/pull/116" }
gharchive/pull-request
Fix import links in readme.md The current readme.md references another project for the go get instructions. Good catch. Obviously, that was a bit of copypasta. Sure thing, happens to all of us!
2025-04-01T04:54:45.241303
2022-01-11T21:09:36
1099618272
{ "authors": [ "JohnGe0rge", "gitmadness", "justinyoo" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13235", "repo": "Azure/azure-functions-openapi-extension", "url": "https://github.com/Azure/azure-functions-openapi-extension/issues/347" }
gharchive/issue
OpenAPI functions RenderOAuth2Redirect, RenderOpenApiDocument, RenderSwaggerDocument, and RenderSwaggerUI missing when published to folder Published my function app to a folder so I can zip it and put it on a share and I noticed that the functions created by Microsoft.Azure.WebJobs.Extensions.OpenApi are not there. I see the functions when debugging. Did I miss a step somewhere? Using V3 Functions SDK and VS2019. Hi there, I am also rather confused by this. When running via debug, the additional RenderOAuth2Redirect, RenderOpenApiDocument, RenderSwaggerDocument, and RenderSwaggerUI endpoints are present. When publishing the Azure Function (via func tasks in VSCode) it does deploy successfully however as @gitmadness has noted our shiny OpenAPI pages got lost somewhere. Assuming that it might be some hidden magic, I have tried visiting; {functionUri}/swagger.json?code={code} {functionUri}/swagger/swagger.json?code={code} {functionUri}/swagger/ui?code={code} ..all to no avail. Justin Yoo over at MSFT has created some extensions to allow publishing of these as functions but I am not sure whether these work with the current landscape we're using (dotnet6, inproc, v4 runtime - posting this primarily for @gitmadness's benefit as this may help them). Blog post describing the implementation: https://devkimchi.com/2019/02/02/introducing-swagger-ui-on-azure-functions/ Repository containing the libs: https://github.com/aliencube/AzureFunctions.Extensions I have run into some implementation issues with the above, like GetSpecVersion() and GetExtension() methods, these are used in the example without any prefixes; suggesting they should be present in the example somewhere, but aren't, nor can I find them in the assembly anywhere. Some more search-fu later and I landed on this github issue. Ok so I made some headway on this, started a fresh test project using the AlienCube nuget mentioned previously and got it working quite happily. Dotnet 6 IoC, v4 runtime Builds ok Debugs locally ok with all the right endpoints Deploys to Azure FunctionApp ok Has all the right endpoints published, and they all work To be brutally honest with myself I just RTFM with a fresh test project and it worked 🤷 I think my test yesterday must have had some conflicting config from all the headscratching trying to get it working. Given the nature of the lineage of these nugets, I tried porting the same working config to use the Microsoft.Azure.Functions.Worker.Extensions.OpenApi nuget, all works flawlessly... apart from no OpenApi Functions published to the Function App on deployment. No offense to anybody, but I suspect the current release here is borked. With the above workaround it's no biggie (for me) now though - and porting the same config to this repo's way of doing it is actually quite trivial, technical debt can be paid off at some point with a single paycheque 😁 @gitmadness feel free to give me a yell if you can't get the AlienCube way of doing it working, happy to help. @gitmadness @JohnGe0rge Thanks for the issue! I assume that you're using "in-proc" worker of Azure Functions app, by using the Microsoft.Azure.WebJobs.Extensions.OpenApi package. Because it's the characteristics of the "in-proc" worker, those endpoints are showing up on your local machine, but they're hidden (encapsulated) when deployed to Azure. Just FYI – The Aliencube one is no longer maintained, and this official extension has more features than that.
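A quick way to confirm whether the Render* endpoints actually made it into a given deployment is to list the functions present on the deployed app. This is a hedged sketch (the app name is a placeholder and it assumes Azure Functions Core Tools is installed); it is not a fix, only a way to observe the symptom described above.

```powershell
# List the functions that exist on the deployed Function App; with the in-proc
# OpenAPI extension, RenderSwaggerUI / RenderSwaggerDocument / RenderOpenApiDocument /
# RenderOAuth2Redirect normally appear here alongside your own triggers.
func azure functionapp list-functions <function-app-name>
```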
2025-04-01T04:54:45.307579
2018-05-28T09:21:28
326958256
{ "authors": [ "RaoulHolzer", "msftclas" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13236", "repo": "Azure/azure-mobile-apps-net-server", "url": "https://github.com/Azure/azure-mobile-apps-net-server/pull/245" }
gharchive/pull-request
Update jquery and jquery-ui sources jquery-3.3.1 and jquery-ui 1.12.1 Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.:x: RaoulHolzer sign nowYou have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:54:45.398230
2020-08-03T07:10:31
671852243
{ "authors": [ "ArcturusZhang", "wmengmsft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13237", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/issues/10323" }
gharchive/issue
Cannot enable CMK on a pre-existing cosmosdb account I created a cosmosdb account by using the REST API body: { "kind": "GlobalDocumentDB", "location": "westus2", "properties": { "consistencyPolicy": { "defaultConsistencyLevel": "BoundedStaleness", "maxStalenessPrefix": 200, "maxIntervalInSeconds": 10 }, "locations": [ { "locationName": "westus2", "failoverPriority": 0 } ], "databaseAccountOfferType": "Standard", "ipRules": [], "isVirtualNetworkFilterEnabled": false, "enableAutomaticFailover": true, "capabilities": [], "virtualNetworkRules": [], "enableMultipleWriteLocations": false, "enableFreeTier": false }, "tags": {} } then I am trying to enable CMK on this cosmosdb using the PATCH RESP API with the body { "properties": { "keyVaultKeyUri": "https://<vault-name>.vault.azure.net/keys/<key-name>", }, } I got an error: Code="BadRequest" Message="Updating KeyVaultKeyUri is not supported\r\nActivityId: 5f367e7d-92ee-42a7-966b-fb12e53ab69a, Microsoft.Azure.Documents.Common/2.11.0" hi @ArcturusZhang, enabling CMK on existing accounts is not supported. This is something we are looking into to be supported in the future. Please consider supporting this feature request at https://feedback.azure.com/forums/263030-azure-cosmos-db.
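Since CMK cannot be turned on after the fact, the key URI has to be supplied when the account is created. A hedged CLI sketch follows: the account, resource group, vault and key names are placeholders, and --key-uri is used here on the assumption that it is the az cosmosdb create parameter corresponding to keyVaultKeyUri.

```powershell
# Create the account with customer-managed keys from the start; per the reply
# above, enabling CMK on an already-existing account is not supported.
az cosmosdb create `
    --name <account-name> `
    --resource-group <resource-group> `
    --key-uri "https://<vault-name>.vault.azure.net/keys/<key-name>"
```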
2025-04-01T04:54:45.415781
2017-06-20T17:30:31
237287734
{ "authors": [ "azuresdkci", "dsgouda", "henry416", "salameer" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13238", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/1331" }
gharchive/pull-request
VirtualNetworkGateway.json QOL fixes
PR exclusively for general QOL fixes for VirtualNetworkGateway.json
This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process.
PR information
[ ] The title of the PR is clear and informative.
[ ] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, see this page.
[ ] Except for special cases involving multiple contributors, the PR is started from a fork of the main repository, not a branch.
[ ] If applicable, the PR references the bug/issue that it fixes.
[ ] Swagger files are correctly named (e.g. the api-version in the path should match the api-version in the spec).
Quality of Swagger
[ ] I have read the contribution guidelines.
[ ] My spec meets the review criteria:
[ ] The spec conforms to the Swagger 2.0 specification.
[ ] The spec follows the guidelines described in the Swagger checklist document.
[ ] Validation tools were run on swagger spec(s) and have all been fixed in this PR.
Hi There, I am the AutoRest Linter Azure bot. I am here to help. My task is to analyze the situation from the AutoRest linter perspective. Please review the below analysis result: File: arm-network/2017-06-01/swagger/virtualNetworkGateway.json Before the PR: Warning(s): 22 Error(s): 11 After the PR: Warning(s): 22 Error(s): 11 Thanks for your co-operation.
@dsgouda Normalized provisioning state across entire JSON
@azuresdkci Test this please
Hi There, I am the AutoRest Linter Azure bot. I am here to help. My task is to analyze the situation from the AutoRest linter perspective. Please review the below analysis result: File: arm-network/2017-06-01/swagger/virtualNetworkGateway.json Before the PR: Warning(s): 22 Error(s): 11 After the PR: Warning(s): 22 Error(s): 11 Know more about AutoRest Linter Guidelines. Send feedback and make AutoRest Linter Azure Bot smarter day by day! Thanks for your co-operation.
@dsgouda are we good to merge here?
We are good, but I need to check why the travis build is failing, will investigate now.
CI is reporting a failure to generate the sdk for ruby; I get a feeling it may have to do with using old command line args for AutoRest, will post my findings soon.
@henry416 looks like you are updating the provisioningState enum only here and not in the other json files which are a part of the corresponding composite.
When generating the SDK, AutoRest expects to have exactly one unique x-ms-enum extension or the exact same definition repeated (like you have in virtualNetworkGateway.json), either update all definitions for ProvisioningState or undo this particular change. FYI The definition for ProvisioningState in networkWatcher.json has an additional enum value. ping @henry416 @salameer I am aware of this, however I am busy with other issues at the moment, I will take time when it is finished to deal with this. Sounds Good @henry416 But please note that we'll have to close this pr by end of the Next week July 7th if this is not updated. and please feel free to open a new PR after that if your unable to make these changes by that time. Thanks, Samer Closing due to no response.
2025-04-01T04:54:45.433481
2021-06-24T15:06:39
929327471
{ "authors": [ "ArcturusZhang", "JackTn", "RamyasreeChakka", "ansahdev" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13239", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/14976" }
gharchive/pull-request
Adding swagger json and examples for Compute Diagnostic Resource Provider - preview MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Changelog Please ensure to add changelog with this PR by answering the following questions. What's the purpose of the update? [x] new service onboarding [x] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [ ] Other, please clarify When you are targeting to deploy new service/feature to public regions? Please provide date, or month to public if date is not available yet. When you expect to publish swagger? Please provide date, or month to public if date is not available yet. If it's an update to existing version, please select SDKs of specific language and CLIs that require refresh after swagger is published. [ ] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ ] SDK of Java [ ] SDK of Js [ ] SDK of Go [ ] PowerShell [ ] CLI [ ] Terraform [ ] No, no need to refresh for updates in this PR Contribution checklist: [x] I commit to follow the Breaking Change Policy of "no breaking changes" [x] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist [x] Ensure to check this box if one of the following scenarios meet updates in the PR, so that label “WaitForARMFeedback” will be added automatically to involve ARM API Review. Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs, all “removals” and “adding a new property” no more require ARM API review. Adding new API(s) Adding a new API version [ ] Ensure to copy the existing version into new directory structure for first commit (including refactoring) and then push new changes including version updates in separate commits. This is required to review the changes efficiently. Adding a new service [x] Please ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [x] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If there are following updates in the PR, ensure to request an approval from Breaking Change Review Board as defined in the Breaking Change Policy. [ ] Removing API(s) in stable version [ ] Removing properties in stable version [ ] Removing API version(s) in stable version [ ] Updating API in stable or public preview version with Breaking Change Validation errors [ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews) Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Addition details on the process and office hours are on the Breaking change Wiki. Please follow the link to find more details on PR review process. 
As discussed offline, please implement the APIs below:
POST /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics/diskInspection/run
GET /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics/diskInspection
{ id: xxx, name: diskInspection, properties: { supportedResourceTypes: [ "VMs" ] } }
GET /subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/diagnostics
{ value: [ { id: xxx, name: diskInspection, properties: { supportedResourceTypes: [ "VMs" ] } } ] }
/azp run
Closing this as I created a new PR with the changes.
Hi, @ansahdev. This PR has been closed for a long time and its related branch still exists. Please tell me if you still need this branch or I will delete it in 14 days.
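For orientation only, a hedged sketch of how the proposed run action could be invoked once implemented; the subscription, location, and api-version values are placeholders invented here, not values taken from the PR.

```powershell
# Hypothetical invocation of the proposed disk-inspection run action via the
# generic ARM caller. '<preview-api-version>' is a placeholder, not from the PR.
az rest --method post `
    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Compute/locations/<location>/diagnostics/diskInspection/run?api-version=<preview-api-version>"
```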
2025-04-01T04:54:45.452669
2021-08-02T15:36:48
958267207
{ "authors": [ "deymadhumanti", "zhenglaizhang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13240", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/15475" }
gharchive/pull-request
Added new API version 2021-07-01 with the existing swagger json files for ResourceGuard feature copied the existing swagger json files (2021-06-01 for recoveryservicesbackup and 2021-01-01 for dataprotection) to create the base swagger json files with API version 2021-07-01. Will add ResourceGuard related changes (which are already checked-in in preview folder with API version 2021-02-01-preview) on top of this base swagger json files. MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Changelog Add a changelog entry for this PR by answering the following questions: What's the purpose of the update? [ ] new service onboarding [ ] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [ ] Other, please clarify When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month. When do you expect to publish the swagger? Please provide date or, the the date is not yet available, the month. If updating an existing version, please select the specific langauge SDKs and CLIs that must be refreshed after the swagger is published. [ ] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ ] SDK of Java [ ] SDK of Js [ ] SDK of Go [ ] PowerShell [ ] CLI [ ] Terraform [ ] No refresh required for updates in this PR Contribution checklist: [ ] I commit to follow the Breaking Change Policy of "no breaking changes" [ ] I have reviewed the documentation for the workflow. [ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist Applicability: :warning: If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review. Change to data plane APIs Adding new properties All removals Otherwise your PR may be subject to ARM review requirements. Complete the following: [ ] Check this box if any of the following apply to the PR so that label “WaitForARMFeedback” will be added automatically to begin ARM API Review. Failure to comply may result in delays to the manifest. Adding a new service Adding new API(s) Adding a new API version -[ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit (including refactoring) and then push new changes, including version updates, in separate commits. [ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy. 
[ ] Removing API(s) in a stable version
[ ] Removing properties in a stable version
[ ] Removing API version(s) in a stable version
[ ] Updating API in a stable or public preview version with Breaking Change Validation errors
[ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews)
Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking change Wiki.
Please follow the link to find more details on the PR review process.
Hi @deymadhumanti, now the default branch is main; I have updated the PR to be based on the main branch.
Hi, closing this PR since I am unable to push the changes from my local repo to this PR. Raised a new PR: https://github.com/Azure/azure-rest-api-specs/pull/15514 with these changes. So please review the new PR: https://github.com/Azure/azure-rest-api-specs/pull/15514
Hi, closing this PR since I am unable to push the changes from my local repo to this PR. Raised a new PR: https://github.com/Azure/azure-rest-api-specs/pull/15515 with these changes. So please review the new PR: https://github.com/Azure/azure-rest-api-specs/pull/15515
2025-04-01T04:54:45.471753
2017-08-23T17:34:40
252360272
{ "authors": [ "AutorestCI", "anuchandy", "hrkulkarMsft", "msftclas", "ravbhatnagar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13241", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/1580" }
gharchive/pull-request
[Traffic Manager] Adding new preview api-version for Traffic Manager. This adds two new features to Traffic Manager: RealUserMetricsKey, and HeatMap.
This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process.
PR information
[X] The title of the PR is clear and informative.
[X] There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, see this page.
[X] Except for special cases involving multiple contributors, the PR is started from a fork of the main repository, not a branch.
[X] If applicable, the PR references the bug/issue that it fixes.
[X] Swagger files are correctly named (e.g. the api-version in the path should match the api-version in the spec).
Quality of Swagger
[X] I have read the contribution guidelines.
[X] My spec meets the review criteria:
[X] The spec conforms to the Swagger 2.0 specification.
[X] The spec follows the guidelines described in the Swagger checklist document.
[ ] Validation tools were run on swagger spec(s) and have all been fixed in this PR.
@hrkulkarMsft, Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA. Thanks, Microsoft Pull Request Bot
@ravbhatnagar this is a new API version, please review and sign-off.
@hrkulkarMsft please update the readme.md file to have a tag for this new api-version.
Should the readme.md still try to build the non-preview api? Or should I add?
openapi-type: arm
tag: package-2017-09-preview
Tag: package-2017-09-preview
These settings apply only when --tag=package-2017-09-preview is specified on the command line.
input-file:
- Microsoft.Network/2017-09-01-preview/trafficmanager.json
@hrkulkarMsft, the readme file seems correct; it seems CI is failing because the swagger refers to HeatMap-GET.json but in the file system the file name is HeatMap-Get.json, please use the same casing in both places. If you already have autorest installed I would recommend running it locally against your swagger to catch linter errors, and fix them. Assuming you are in the root folder of a local clone of rest-api-spec, you can run this:
autorest --validation --azure-validator --message-format=json --input-file=./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
Thanks, my validator wasn't throwing that warning because git hadn't renamed the case remotely to match my local branch. For the Operations API -- our parent RP (Microsoft.Network) implements this; should we still reference it in our Swagger?
For the Operations API there is no need to reference the parent network swagger.
@ravbhatnagar please note that the linter shows an RPCViolation saying the Operations API is missing, but the same operation is implemented in the parent RP https://github.com/Azure/azure-rest-api-specs/tree/current/specification/network/resource-manager.
To repro semantic and model validation locally, install oav:
npm install -g oav
Then run:
oav validate-spec ./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
oav validate-example ./specification/trafficmanager/resource-manager/Microsoft.Network/2017-09-01-preview/trafficmanager.json
Thanks Anu, I was able to successfully run these validations this time.
@hrkulkarMsft regarding the TrackedResourceListByImmediateParent RPCViolation - you mentioned that in the case of this particular resource -- it could be very large; if I understand correctly the number of HeatMap child resources associated with an instance of a traffic manager profile resource can be huge. If that is the case, this could be a paged collection, right? Which can be expressed using the x-ms-pageable extension, https://github.com/Azure/azure-rest-api-specs/blob/master/documentation/creating-swagger.md#Paging-x-ms-pageable.
@anuchandy Sorry, I should have been more specific. The payload can return close to 4MB in our preview version -- this will be right under the ARM limit, to keep it from needing pagination. If we decide to have multiple heatMap resources at some point (i.e. a history of a Profile's heatMap), I would imagine that a List operation could get large.
@hrkulkarMsft thanks for clarifying, got it: so there will be only one HeatMap child instance associated with a traffic manager profile instance, i.e. this is not a collection. I would let Gaurav comment on this. @ravbhatnagar we have an RPCViolation warning in this case, TrackedResourceListByImmediateParent, please share your thoughts.
@ravbhatnagar: Submitted a new iteration in the review with the changes we discussed.
• {heatMapsType}: can now only be "default" in value.
• Removed endpoint-identifying properties.
• Added query parameters.
• Removed App Rum to get HeatMap through for now, and will open separate review.
I'll take note of the general API feedback to fix, but for this iteration would it be possible to only fix the HeatMap implementation, and then update the existing API comments later?
@hrkulkarMsft - sounds good.
Ruby codegen is failing https://travis-ci.org/Azure/azure-rest-api-specs/jobs/268142263, checking with the codegen owners.
Hey, is there anything I need to do to unblock codegen?
hi @hrkulkarMsft sorry for the delay. No action required from your side now, we are tracking the code-gen issue here https://github.com/Azure/azure-sdk-for-ruby/issues/944. Merging this PR.
@ravbhatnagar as I mentioned above, though there is an RPCViolation OperationsAPIImplementation reported by the linter, the API is implemented in the parent Network swagger.
Cool, thanks Anu! Changed the PR to better reflect these changes since the other feature was moved to a separate PR.
No modification for AutorestCI/azure-sdk-for-node
No modification for AutorestCI/azure-sdk-for-python
2025-04-01T04:54:45.488536
2022-05-24T19:05:24
1247009997
{ "authors": [ "gregoks" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13242", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/19207" }
gharchive/pull-request
Search API's in marketplace catalog MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Changelog Add a changelog entry for this PR by answering the following questions: What's the purpose of the update? [x] new service onboarding [ ] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [ ] Other, please clarify When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month - July 2022 When do you expect to publish the swagger? Please provide date or, the the date is not yet available, the month - already published If updating an existing version, please select the specific language SDKs and CLIs that must be refreshed after the swagger is published. [ ] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ ] SDK of Java [ ] SDK of Js [ ] SDK of Go [ ] PowerShell [ ] CLI [ ] Terraform [ ] No refresh required for updates in this PR Contribution checklist: [x] I commit to follow the Breaking Change Policy of "no breaking changes" [x] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist Applicability: :warning: If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review. Change to data plane APIs Adding new properties All removals Otherwise your PR may be subject to ARM review requirements. Complete the following: [x] Check this box if any of the following appy to the PR so that the label "ARMReview" and "WaitForARMFeedback" will be added by bot to kick off ARM API Review. Missing to check this box in the following scenario may result in delays to the ARM manifest review and deployment. Adding a new service Adding new API(s) Adding a new API version -[ ] To review changes efficiently, ensure you are using OpenAPIHub to initialize the PR for adding a new version. More details, refer to the wiki. [x] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy. [ ] Removing API(s) in a stable version [ ] Removing properties in a stable version [ ] Removing API version(s) in a stable version [ ] Updating API in a stable or public preview version with Breaking Change Validation errors [ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews) Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Addition details on the process and office hours are on the Breaking change Wiki. Please follow the link to find more details on PR review process. Opened 2 separate pull requests - one for dataplan and one for resource manager
2025-04-01T04:54:45.503792
2022-06-14T23:32:39
1271469447
{ "authors": [ "xirzec" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13243", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/19451" }
gharchive/pull-request
Remove storage arm constraint Adding a small swagger transform to work around some invalid client platform validation that is blocking Storage Explorer customers. Changelog Add a changelog entry for this PR by answering the following questions: What's the purpose of the update? [ ] new service onboarding [ ] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [x] Other, please clarify When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month. When do you expect to publish the swagger? Please provide date or, the the date is not yet available, the month. If updating an existing version, please select the specific language SDKs and CLIs that must be refreshed after the swagger is published. [ ] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ ] SDK of Java [x] SDK of Js [ ] SDK of Go [ ] PowerShell [ ] CLI [ ] Terraform [ ] No refresh required for updates in this PR Contribution checklist: [ ] I commit to follow the Breaking Change Policy of "no breaking changes" [ ] I have reviewed the documentation for the workflow. [ ] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist Applicability: :warning: If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review. Change to data plane APIs Adding new properties All removals Otherwise your PR may be subject to ARM review requirements. Complete the following: [ ] Check this box if any of the following appy to the PR so that the label "ARMReview" and "WaitForARMFeedback" will be added by bot to kick off ARM API Review. Missing to check this box in the following scenario may result in delays to the ARM manifest review and deployment. Adding a new service Adding new API(s) Adding a new API version -[ ] To review changes efficiently, ensure you are using OpenAPIHub to initialize the PR for adding a new version. More details, refer to the wiki. [ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy. [ ] Removing API(s) in a stable version [ ] Removing properties in a stable version [ ] Removing API version(s) in a stable version [ ] Updating API in a stable or public preview version with Breaking Change Validation errors [ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews) Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Addition details on the process and office hours are on the Breaking change Wiki. Please follow the link to find more details on PR review process. Ugh trying to figure out why so many commits got pulled in Ah my branch was still on master was the problem