1870078050 | ENH restriction on overload length is too tight
For a single layer of other domains, we can overload up to the full length of the other domain just fine.
cc @michaelbuehlmann
I don't think this will work if the decomposition along an axis is 2 (a particle will need to be sent to the neighbor twice, once left and once right – which I don't think is possible with the current implementation). Can we change the assert to use the 0.5 factor if there are only 2 ranks along that dimension, and the full cell size if the rank count is >= 3?
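The rule proposed above can be sketched as follows (a hypothetical helper for illustration only, not actual MPIPartition code; the function name and signature are made up):

```python
# Hypothetical sketch of the proposed overload-length limit. With only 2
# ranks along an axis, the left and right neighbor are the same rank, and a
# particle cannot be sent to that rank twice, so the safe limit stays at
# half the cell size. With 3 or more ranks, the neighbors are distinct and
# the full cell size is safe.

def max_overload_length(cell_size: float, ranks_along_axis: int) -> float:
    """Upper bound on the overload length along one decomposition axis."""
    if ranks_along_axis < 2:
        raise ValueError("overloading requires at least 2 ranks along the axis")
    if ranks_along_axis == 2:
        return 0.5 * cell_size  # same neighbor on both sides
    return cell_size            # distinct neighbors on each side

# An assert in the style discussed above would then be:
# assert overload_length <= max_overload_length(cell_size, ranks_along_axis)
```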
@beckermr, if my changes look good to you, feel free to merge
LGTM and thank you!
| gharchive/pull-request | 2023-08-28T16:27:15 | 2025-04-01T04:32:16.534526 | {
"authors": [
"beckermr",
"michaelbuehlmann"
],
"repo": "ArgonneCPAC/MPIPartition",
"url": "https://github.com/ArgonneCPAC/MPIPartition/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
194734016 | Backup reminder & Linking wallet to mobile app
Currently there doesn't appear to be a function to back up your passphrase; I assume that if it's lost you will never be able to access the wallet again.
I think it would help if this were reinforced at the start of wallet creation, with periodic reminders to back up afterwards. I guess since the wallet doesn't sync the whole chain, it doesn't need backing up in the traditional sense?
Is there future development planned to link your desktop Ark client to your mobile client? Will you be able to do that via a QR code?
Yes, you can already use a QR code to transfer your wallet to the mobile wallet. Try it out!
I have not yet included passphrases in the desktop client, because I did not have time to build the secure framework to encrypt passphrases in the app, but I will.
Unfortunately I can't, as the iOS wallet hasn't been made available, I'm guessing due to Apple's app vetting process. Do you have an ETA on the iOS wallet?
Thank you
No mobile wallet ATM. Let's delay for now
| gharchive/issue | 2016-12-10T01:17:22 | 2025-04-01T04:32:16.564734 | {
"authors": [
"Doweig",
"fix",
"jamiec79"
],
"repo": "ArkEcosystem/ark-desktop",
"url": "https://github.com/ArkEcosystem/ark-desktop/issues/22",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
260058793 | Fixes #11. Upgraded wrapper script and added version to build config
The old wrapper script packaged with the repo was unable to handle the model block, as shown in issue #11. I explicitly specified the version of the new wrapper script in the build.gradle file, so another user can regenerate the wrappers using gradle wrapper.
Mm, true. I think I messed up the wrapper. Thanks!
@Guppster can you provide me with your Ark address?
@boldninja Sent on slack
| gharchive/pull-request | 2017-09-24T05:15:15 | 2025-04-01T04:32:16.566370 | {
"authors": [
"Guppster",
"boldninja",
"fix"
],
"repo": "ArkEcosystem/ark-java",
"url": "https://github.com/ArkEcosystem/ark-java/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
517962206 | HFP plays silence, bluez-alsa to bluez-alsa
I've only successfully played HFP audio through a TaoTronics Bluetooth to 3.5mm adapter. With other devices (a sound bar, multiple vehicles, etc.), the audio "plays" with no errors, but no sound is heard on the device. I'm confident that the SCO routing issues are no longer a factor for this chipset, because playback is audible on the TaoTronics even with "bluealsa -p hfp-ag".
Because I wasn't sure whether I was just unlucky with my playback devices, I decided to connect bluez-alsa (a2dp-source, hfp-ag) on my embedded device to bluez-alsa (a2dp-sink, hfp-hf) on a Raspberry Pi 3. First, I got A2DP audio working and it played on the remote device. However, SCO audio does not audibly play (I'm using bluealsa-aplay). I forced a connection via the HFP profile only, with the same results.
Note that I am not using ofono nor phonesim, and bluealsa is compiled without --enable-ofono.
Due to lack of error messages, I'm having trouble investigating this issue. I'm happy to provide more information or do additional troubleshooting or even code changes.
Are you using the latest master? Because it might work ONLY with latest commits. I haven't tested such scenario, though.
No, I missed those! Thanks! I'll give it a try.
On master I can still play to the TaoTronics, but nothing else. One change in behavior: SCO from bluealsa to bluealsa still produces no sound, but now aplay also no longer terminates. Here's what I see on both ends (I added some blank lines to separate setup/connect/"playback"):
HFP-AG side:
$ sudo bluealsa -p hfp-ag
bluealsa: ba-adapter.c:151: Freeing adapter: hci0
bluealsa: bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
bluealsa: bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/AudioGateway
bluealsa: main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: main.c:358: Starting main dispatching loop
bluealsa: bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: bluez.c:997: RFCOMM: HFP Audio Gateway configured for device B8:27:EB:87:AD:65
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: rfcomm.c:593: Starting loop: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: ba-transport.c:982: Created new thread [ba-sco]: HFP Audio Gateway (CVSD)
bluealsa: sco.c:194: Starting SCO loop: HFP Audio Gateway (CVSD)
bluealsa: at.c:161: AT message: SET: command:+BRSF, value:628
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+BRSF, value:2528
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 0 -> 2
bluealsa: at.c:161: AT message: TEST: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 2 -> 5
bluealsa: at.c:161: AT message: GET: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 5 -> 7
bluealsa: at.c:161: AT message: SET: command:+CMER, value:3,0,0,1,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: rfcomm.c:124: HFP state transition: 9 -> 13
bluealsa: bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ba-transport.c:875: Checking Broadcom internal SCO routing
bluealsa: ba-transport.c:881: Current SCO interface setup: 1 2 0 1 1
bluealsa: ba-transport.c:915: New SCO link: B8:27:EB:87:AD:65: 15 (MTU: R:64 W:64)
bluealsa: ba-transport.c:635: State transition: 2 -> 2
bluealsa: ba-transport.c:863: Reusing SCO: 15
bluealsa: ba-transport.c:931: Closing SCO: 15
aplay (doesn't exit until Ctrl+C pressed):
$ aplay -v -D bluealsa:DEV=B8:27:EB:87:AD:65,PROFILE=sco test.wav
bluealsa-pcm.c:625: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Setting constraints
Playing WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
bluealsa-pcm.c:344: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing HW
bluealsa-pcm.c:371: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: FIFO buffer size: 4096
bluealsa-pcm.c:377: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Selected HW buffer: 4 periods x 2000 bytes == 8000 bytes
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:414: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Prepared
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
Plug PCM: Rate conversion PCM (8000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 22050
exact rate : 22050 (22050/1)
msbits : 16
buffer_size : 11025
period_size : 2756
period_time : 125000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 2756
period_event : 0
start_threshold : 11025
stop_threshold : 11025
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Slave: BlueALSA PCM: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco
BlueALSA BlueZ device: /org/bluez/hci0/dev_B8_27_EB_87_AD_65
BlueALSA Bluetooth codec: 0x1
bluealsa-pcm.c:264: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Starting
bluealsa-pcm.c:137: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Starting IO loop: 8
HFP-HF side:
$ sudo bluealsa -p hfp-hf
bluealsa: ../../src/ba-adapter.c:151: Freeing adapter: hci0
bluealsa: ../../src/bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/HandsFree
bluealsa: ../../src/bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/HandsFree
bluealsa: ../../src/main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: ../../src/main.c:358: Starting main dispatching loop
bluealsa: ../../src/bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: ../../src/bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: ../../src/sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: ../../src/sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: ../../src/bluez.c:997: RFCOMM: HFP Hands-Free configured for device 70:2C:1F:32:15:85
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Hands-Free
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/sco.c:194: Starting SCO loop: HFP Hands-Free (CVSD)
bluealsa: ../../src/ba-transport.c:982: Created new thread [ba-sco]: HFP Hands-Free (CVSD)
bluealsa: ../../src/rfcomm.c:593: Starting loop: RFCOMM: HFP Hands-Free
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+BRSF, value:628
bluealsa: ../../src/bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ../../src/at.c:161: AT message: RESP: command:+BRSF, value:2528
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 0 -> 1
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 1 -> 2
bluealsa: ../../src/rfcomm.c:106: Sending AT message: TEST: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 2 -> 4
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 4 -> 5
bluealsa: ../../src/rfcomm.c:106: Sending AT message: GET: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 5 -> 6
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 6 -> 7
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+CMER, value:3,0,0,1,0
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 9 -> 13
bluealsa: ../../src/sco.c:95: New incoming SCO link: 70:2C:1F:32:15:85: 8
bluealsa: ../../src/ba-transport.c:931: Closing SCO: 8
bluealsa-aplay:
$ bluealsa-aplay --profile-sco -v 70:2C:1F:32:15:85
Selected configuration:
BlueALSA service: org.bluealsa
PCM device: default
PCM buffer time: 500000 us
PCM period time: 100000 us
Bluetooth device(s): 70:2C:1F:32:15:85
Profile: SCO
bluealsa-aplay: ../../utils/aplay.c:801: Starting main loop
bluealsa-aplay: ../../utils/aplay.c:525: Creating PCM worker 70:2C:1F:32:15:85
bluealsa-aplay: ../../utils/aplay.c:356: Starting PCM loop
This is what it looks like when I play to the TaoTronics Wireless Receiver TT-BR08 (where I can hear the sound):
$ sudo bluealsa -p hfp-ag
bluealsa: ba-adapter.c:151: Freeing adapter: hci0
bluealsa: bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
bluealsa: bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/AudioGateway
bluealsa: main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: main.c:358: Starting main dispatching loop
bluealsa: bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: bluez.c:997: RFCOMM: HFP Audio Gateway configured for device E9:08:EF:2D:40:33
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: rfcomm.c:593: Starting loop: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: ba-transport.c:982: Created new thread [ba-sco]: HFP Audio Gateway (CVSD)
bluealsa: sco.c:194: Starting SCO loop: HFP Audio Gateway (CVSD)
bluealsa: at.c:161: AT message: SET: command:+BRSF, value:191
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+BRSF, value:2528
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 0 -> 2
bluealsa: at.c:161: AT message: TEST: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 2 -> 5
bluealsa: at.c:161: AT message: GET: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 5 -> 7
bluealsa: at.c:161: AT message: SET: command:+CMER, value:3, 0, 0, 1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: rfcomm.c:124: HFP state transition: 9 -> 13
bluealsa: at.c:161: AT message: SET: command:+VGS, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: GET: command:+BTRH, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+XAPL, value:0000-0000-0100,7
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:+XAPL=BlueALSA,0
bluealsa: at.c:161: AT message: SET: command:+IPHONEACCEV, value:2,1,8,2,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: CMD: command:+CLCC, value:(null)
bluealsa: Unsupported AT message: CMD: command:+CLCC, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+BIA, value:1,1,0,0,0,0,1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+CSRSF, value:0,0,0,1,0,0,0
bluealsa: Unsupported AT message: SET: command:+CSRSF, value:0,0,0,1,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ba-transport.c:875: Checking Broadcom internal SCO routing
bluealsa: ba-transport.c:881: Current SCO interface setup: 1 2 0 1 1
bluealsa: ba-transport.c:635: State transition: 2 -> 2
bluealsa: ba-transport.c:915: New SCO link: E9:08:EF:2D:40:33: 15 (MTU: R:64 W:64)
bluealsa: ba-transport.c:863: Reusing SCO: 15
bluealsa: ba-transport.c:863: Reusing SCO: 15
bluealsa: ba-transport.c:718: PCM drained
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: ba-transport.c:955: Closing PCM: 8
bluealsa: ba-transport.c:931: Closing SCO: 15
$ sudo bluetoothctl
Agent registered
[bluetooth]# connect E9:08:EF:2D:40:33
Attempting to connect to E9:08:EF:2D:40:33
[CHG] Device E9:08:EF:2D:40:33 Connected: yes
Connection successful
[CHG] Device E9:08:EF:2D:40:33 ServicesResolved: yes
[TaoTronics TT-BR08]# quit
$ aplay -v -D bluealsa:DEV=E9:08:EF:2D:40:33,PROFILE=sco test.wav
bluealsa-pcm.c:625: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Setting constraints
Playing WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
bluealsa-pcm.c:344: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing HW
bluealsa-pcm.c:371: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: FIFO buffer size: 4096
bluealsa-pcm.c:377: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Selected HW buffer: 4 periods x 2000 bytes == 8000 bytes
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
bluealsa-pcm.c:414: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Prepared
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
Plug PCM: Rate conversion PCM (8000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 22050
exact rate : 22050 (22050/1)
msbits : 16
buffer_size : 11025
period_size : 2756
period_time : 125000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 2756
period_event : 0
start_threshold : 11025
stop_threshold : 11025
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Slave: BlueALSA PCM: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco
BlueALSA BlueZ device: /org/bluez/hci0/dev_E9_08_EF_2D_40_33
BlueALSA Bluetooth codec: 0x1
bluealsa-pcm.c:264: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Starting
bluealsa-pcm.c:137: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Starting IO loop: 8
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Initializing SW
bluealsa-pcm.c:169: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: IO thread paused: 4
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Stopping
bluealsa-pcm.c:105: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: IO thread cleanup
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Stopping
bluealsa-pcm.c:384: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Freeing HW
bluealsa-pcm.c:334: /org/bluealsa/hci0/dev_E9_08_EF_2D_40_33/sco: Closing
Another log, this time from a soundbar where I don't hear anything. I'm able to play to this soundbar from my Android phone: I manually uncheck "Media audio" in the Bluetooth settings for the device, which results in it saying "Connected (no media)". If I play a sound on the phone, it plays over the phone speaker. Then I run the app BTmono and turn it on; if I then play a high-quality sound on the phone, it plays on the soundbar (in the poor quality you'd expect from HFP). When I use the same process in my car, the phone shows up as a phone, which indicates to me that it is in HFP mode rather than HSP mode. Additionally, I cannot seem to connect to the soundbar in HSP mode, nor is HSP listed in its capabilities. All of that leads me to believe that the Android test is valid.
Anyhow, here are the logs from the bluealsa playback:
$ sudo bluealsa -p hfp-ag
bluealsa: ba-adapter.c:151: Freeing adapter: hci0
bluealsa: bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
bluealsa: bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/AudioGateway
bluealsa: main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: main.c:358: Starting main dispatching loop
bluealsa: bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: bluez.c:997: RFCOMM: HFP Audio Gateway configured for device 6B:F4:54:06:AC:D1
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: rfcomm.c:593: Starting loop: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: ba-transport.c:982: Created new thread [ba-sco]: HFP Audio Gateway (CVSD)
bluealsa: sco.c:194: Starting SCO loop: HFP Audio Gateway (CVSD)
bluealsa: at.c:161: AT message: SET: command:+BRSF, value:031
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+BRSF, value:2528
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 0 -> 2
bluealsa: at.c:161: AT message: TEST: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 2 -> 5
bluealsa: at.c:161: AT message: GET: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 5 -> 7
bluealsa: at.c:161: AT message: SET: command:+CMER, value:3,0,0,1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: rfcomm.c:124: HFP state transition: 9 -> 13
bluealsa: at.c:161: AT message: TEST: command:+CHLD, value:(null)
bluealsa: Unsupported AT message: TEST: command:+CHLD, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CMEE, value:1
bluealsa: Unsupported AT message: SET: command:+CMEE, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CLIP, value:1
bluealsa: Unsupported AT message: SET: command:+CLIP, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CCWA, value:1
bluealsa: Unsupported AT message: SET: command:+CCWA, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+NREC, value:0
bluealsa: Unsupported AT message: SET: command:+NREC, value:0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+VGS, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+VGM, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+XAPL, value:ABCD-1234-0100,10
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:+XAPL=BlueALSA,0
bluealsa: bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ba-transport.c:875: Checking Broadcom internal SCO routing
bluealsa: ba-transport.c:881: Current SCO interface setup: 1 2 0 1 1
bluealsa: ba-transport.c:915: New SCO link: 6B:F4:54:06:AC:D1: 15 (MTU: R:64 W:64)
bluealsa: ba-transport.c:635: State transition: 2 -> 2
bluealsa: ba-transport.c:863: Reusing SCO: 15
bluealsa: ba-transport.c:863: Reusing SCO: 15
bluealsa: ba-transport.c:718: PCM drained
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: sco.c:464: PCM poll error status: 0x10
bluealsa: ba-transport.c:955: Closing PCM: 8
bluealsa: ba-transport.c:955: Closing PCM: 8
bluealsa: ba-transport.c:931: Closing SCO: 15
This one is trusted, etc., and seems to auto-connect from the device, so I don't need to use bluetoothctl to connect.
$ aplay -v -D bluealsa:DEV=6B:F4:54:06:AC:D1,PROFILE=sco test.wav
bluealsa-pcm.c:625: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Setting constraints
Playing WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
bluealsa-pcm.c:344: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing HW
bluealsa-pcm.c:371: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: FIFO buffer size: 4096
bluealsa-pcm.c:377: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Selected HW buffer: 4 periods x 2000 bytes == 8000 bytes
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:414: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Prepared
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
Plug PCM: Rate conversion PCM (8000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 22050
exact rate : 22050 (22050/1)
msbits : 16
buffer_size : 11025
period_size : 2756
period_time : 125000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 2756
period_event : 0
start_threshold : 11025
stop_threshold : 11025
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Slave: BlueALSA PCM: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco
BlueALSA BlueZ device: /org/bluez/hci0/dev_6B_F4_54_06_AC_D1
BlueALSA Bluetooth codec: 0x1
bluealsa-pcm.c:264: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Starting
bluealsa-pcm.c:137: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Starting IO loop: 8
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:169: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: IO thread paused: 4
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Stopping
bluealsa-pcm.c:105: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: IO thread cleanup
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Stopping
bluealsa-pcm.c:384: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Freeing HW
bluealsa-pcm.c:334: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Closing
Android phone with BTmono connected to bluealsa on my Raspberry Pi 3 also results in me being able to hear sound from the Pi 3.5mm jack:
$ sudo bluealsa -p hfp-hf
bluealsa: ../../src/ba-adapter.c:151: Freeing adapter: hci0
bluealsa: ../../src/bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/HandsFree
bluealsa: ../../src/bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/HandsFree
bluealsa: ../../src/main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: ../../src/main.c:358: Starting main dispatching loop
bluealsa: ../../src/bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: ../../src/bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: ../../src/sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: ../../src/sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: ../../src/bluez.c:997: RFCOMM: HFP Hands-Free configured for device 2C:FD:A1:21:0D:B4
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Hands-Free
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/ba-transport.c:982: Created new thread [ba-sco]: HFP Hands-Free (CVSD)
bluealsa: ../../src/sco.c:194: Starting SCO loop: HFP Hands-Free (CVSD)
bluealsa: ../../src/rfcomm.c:593: Starting loop: RFCOMM: HFP Hands-Free
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+BRSF, value:628
bluealsa: ../../src/bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ../../src/at.c:161: AT message: RESP: command:+BRSF, value: 2919
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 0 -> 1
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 1 -> 2
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+BAC, value:1
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 2 -> 3
bluealsa: ../../src/rfcomm.c:106: Sending AT message: TEST: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value: ("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0,1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 3 -> 4
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 4 -> 5
bluealsa: ../../src/rfcomm.c:106: Sending AT message: GET: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value: 0,0,1,5,0,4,0
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 5 -> 6
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 6 -> 7
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+CMER, value:3,0,0,1,0
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIEV, value: 2,2
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIEV, value: 2,3
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIEV, value: 1,1
bluealsa: ../../src/sco.c:95: New incoming SCO link: 2C:FD:A1:21:0D:B4: 8
bluealsa: ../../src/ba-transport.c:863: Reusing SCO: 8
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIEV, value: 2,0
$ bluealsa-aplay --profile-sco -v 2C:FD:A1:21:0D:B4
Selected configuration:
BlueALSA service: org.bluealsa
PCM device: default
PCM buffer time: 500000 us
PCM period time: 100000 us
Bluetooth device(s): 2C:FD:A1:21:0D:B4
Profile: SCO
bluealsa-aplay: ../../utils/aplay.c:801: Starting main loop
bluealsa-aplay: ../../utils/aplay.c:525: Creating PCM worker 2C:FD:A1:21:0D:B4
bluealsa-aplay: ../../utils/aplay.c:356: Starting PCM loop
There are some feature differences between the bluealsa (+BRSF value 2528) and my Android phone (+BRSF value 2919) AGs. Here is a decoding:
bluealsa Android
/* AG feature flags */ 2528 2919
#define HFP_AG_FEAT_3WC (1 << 0) False True
#define HFP_AG_FEAT_ECNR (1 << 1) False True
#define HFP_AG_FEAT_VOICE (1 << 2) False True
#define HFP_AG_FEAT_RING (1 << 3) False False
#define HFP_AG_FEAT_VTAG (1 << 4) False False
#define HFP_AG_FEAT_REJECT (1 << 5) True True
#define HFP_AG_FEAT_ECS (1 << 6) True True
#define HFP_AG_FEAT_ECC (1 << 7) True False
#define HFP_AG_FEAT_EERC (1 << 8) True True
#define HFP_AG_FEAT_CODEC (1 << 9) False True
#define HFP_AG_FEAT_HFIND (1 << 10) False False
#define HFP_AG_FEAT_ESCO (1 << 11) True True
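For reference, a +BRSF value can be decoded mechanically; a minimal self-contained sketch (the bit order is taken from the HFP_AG_FEAT_* defines listed above):

```c
#include <stdio.h>

/* Feature names follow the HFP_AG_FEAT_* defines quoted above
 * (bit 0 = 3WC .. bit 11 = ESCO). */
static const char *ag_feature_names[] = {
    "3WC", "ECNR", "VOICE", "RING", "VTAG", "REJECT",
    "ECS", "ECC", "EERC", "CODEC", "HFIND", "ESCO",
};

/* Returns 1 if the given feature bit is set in a +BRSF value. */
int brsf_has_feature(unsigned value, unsigned bit) {
    return (value & (1u << bit)) != 0;
}

/* Prints a True/False table like the one above. */
void brsf_decode(unsigned value) {
    for (unsigned bit = 0; bit < 12; bit++)
        printf("%-6s %s\n", ag_feature_names[bit],
               brsf_has_feature(value, bit) ? "True" : "False");
}
```

Running `brsf_decode(2528)` and `brsf_decode(2919)` reproduces the two columns of the table.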
I believe that difference explains why I don't see these lines on the Android phone:
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+BAC, value:1
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 2 -> 3
On the bluealsa AG, HFP_AG_FEAT_CODEC is not set, because I did not enable mSBC. Therefore, rfcomm.c:311 automatically knows to use CVSD, and it doesn't send +BAC, because of rfcomm.c:635. I don't see a problem there; it was just a difference in output I noticed between the two.
Another difference is that bluealsa performs this state transition, but with the Android phone AG it doesn't:
bluealsa: ../../src/rfcomm.c:124: HFP state transition: 9 -> 13
State 9 is HFP_SLC_CONNECTED, and state 13 is HFP_CONNECTED. I see the reason at rfcomm.c:675, where if HFP_AG_FEAT_CODEC is set then there is a break, whereas that doesn't happen otherwise and it falls through to line 681 where it sets the state and then does a bluealsa_dbus_transport_update. I experimented with commenting out rfcomm.c:675 if (t->rfcomm.hfp_features & HFP_AG_FEAT_CODEC), but it didn't seem to make any difference.
The HFP connection establishment with a phone or another bluealsa was not tested extensively. Currently I'm refactoring this part of the code. Maybe in a week or two I will publish some enhancements. Right now I've changed the codec selection logic and other minor parts, and it seems that the connection works much better. I have to test bluealsa-bluealsa interaction, though. Also, I think I will add test coverage for such a scenario. So, stay tuned :)
@killerkalamari - I see that in the hfp-ag output above, you have modified the bluealsa source to give a SCO mtu of 64; but the hfp-hf debug output unfortunately does not show the mtu. Have you also modified the mtu in the file sco.c as well as ba-transport.c, or are you using the default of 48? You say the hfp-hf case is playing sound correctly, so I guess whatever value for mtu you used must be the correct one. For my own information (I'm still investigating the whole sco mtu mess on linux) could you confirm the mtu value used in your hfp-hf test case? Thanks.
@borine You're right, I did make a single change to MTU (sorry, I am not sure which file right now, but I can look it up Monday), but only to bluealsa HFP-AG; the Raspberry Pi HFP-HF is stock.
@killerkalamari I've pushed some "work in progress" to master. Right now it should be possible to connect a phone (Audio Gateway) to bluealsa. I've tested it with a few devices (Android 6.0 and iPhone 13) and it seems to work. If you've got access to other phones it would be great if you could check the connection with them. Playing between bluez-alsa and bluez-alsa might still not work. The problem is with the connection acquisition logic in the SCO thread. I will have to rework it as well, because in the current form it causes more trouble than receiving data from an AG (e.g. it's hard to implement a proper PCM drain).
@borine Lately, I've been working around the mSBC setup, and I've found out that there is a socket option sco_options which returns the MTU. Unfortunately, this call also returns a wrong value. I haven't checked a lot of the kernel code around the SCO MTU, but it definitely looks like a kernel bug...
@Arkq I'm currently thinking that the mtu value obtained from the kernel is unreliable only for USB HCI adapters; UART HCI ones seem to provide a correct value. This is based on my own very limited testing and the few forum posts for bluez and pulseaudio that give sufficient info. If I'm right then the function hci_sco_get_mtu() could make use of the struct hci_dev_info for the adapter to use code such as:
switch (a->hci.type & 0x0f) {
case HCI_USB:
if (voice.setting == BT_VOICE_TRANSPARENT)
options.mtu = 24;
else
options.mtu = 48;
break;
// [ any other special cases handled here ]
}
where a is a pointer to the adapter info.
This is all just conjecture at present - I have no idea if TRANSPARENT streams require special treatment for UART, for example.
However, if it turns out to be a practical approach, it would mean more embedded boards would have SCO work out-of-the-box with bluealsa rather than needing code patches.
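As a pure-logic sketch of that policy (the HCI_BUS_* stand-ins and the transparent-voice value below are placeholders for a self-contained example, not the real kernel constants):

```c
/* Hypothetical helper (not in bluez-alsa) mirroring the policy sketched
 * above: choose the SCO MTU from the HCI bus type and voice setting
 * instead of trusting the kernel value for USB adapters. */
#define HCI_BUS_UART 1
#define HCI_BUS_USB  2
#define VOICE_TRANSPARENT 0x0003 /* stand-in for BT_VOICE_TRANSPARENT */

unsigned pick_sco_mtu(int bus_type, int voice_setting, unsigned kernel_mtu) {
    if (bus_type == HCI_BUS_USB)
        return voice_setting == VOICE_TRANSPARENT ? 24 : 48;
    /* UART adapters appear to report a usable value (e.g. 64); keep it. */
    return kernel_mtu;
}
```

Any other special cases (TI, Qualcomm, ...) would slot in as extra branches before the UART fallback.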
@killerkalamari regarding your HFP-AG profile host requiring mtu == 64: is that an on-board UART adapter? And the HFP-HF profile host with mtu == 48 (i.e. the default): is that an external USB adapter?
If both are on-board, it would be really helpful if you are able to test the HFP-HF host again, but with bluealsa modified to use an MTU of 64. The place to edit the code has unfortunately just moved again because of the re-factoring, so it might be best to wait a few weeks until @Arkq has the current set of SCO changes complete.
I'm also thinking it might be advantageous to add a command-line option allowing a savvy user to select the correct sco mtu policy for their adapter(s). Pulseaudio does this (actually with a module parameter). It would need more thought on the exact options to offer.
Once the current round of code re-factoring is complete I would be happy to submit a PR with some code with which to invite discussion and maybe even persuade some volunteers to test with a variety of adapters.
I think that most of the refactoring is done already. Some enhancement might be required, though, because the connection process is not reliable... But I think that the changes will be rather cosmetic.
@borine I've got also an UART based BT and the MTU reported by it is also 64 bytes. I haven't tested running bluealsa with such MTU, though. With 48/24 it seems to work fine. However, I will make test with original MTU and see how it will behave :)
@Arkq UART adapters tend to have an mtu size that fills their buffer (e.g. 64:1 is common for Broadcom/Cypress while 120:1 is common for Texas Instruments). USB adapters tend to have larger buffers, but often smaller mtus (e.g. 50:8 for Qualcomm/Atheros, 64:8 for Intel).
The value of 64:8 is often generated by the kernel btusb driver as a quirk for controllers that it claims report wrong values. But for me, using a kernel mtu for any USB device crashes the controller whether "quirked" or not.
This is the patch I was using, took it out for now:
--- bluez-alsa-2cd6e4686f7808276480b430fb37df55dfdcc02b.orig/src/ba-transport.c 2019-10-17 13:32:38.942308319 -0700
+++ bluez-alsa-2cd6e4686f7808276480b430fb37df55dfdcc02b/src/ba-transport.c 2019-10-17 14:05:31.618254533 -0700
@@ -673,7 +673,7 @@ int ba_transport_drain_pcm(struct ba_tra
* arbitrary time before releasing transport. In order to make it right,
* there is a requirement for an asynchronous release mechanism, which
* is not implemented - it requires a little bit of refactoring. */
- usleep(200000);
+ usleep(1000000);
debug("PCM drained");
return 0;
@@ -839,8 +839,8 @@ static int transport_acquire_bt_sco(stru
/* XXX: It seems, that the MTU values returned by the HCI interface
* are incorrect (or our interpretation of them is incorrect). */
- t->mtu_read = 48;
- t->mtu_write = 48;
+ t->mtu_read = 64;
+ t->mtu_write = 64;
if (t->type.codec == HFP_CODEC_MSBC)
t->mtu_read = t->mtu_write = 24;
I've re-tested the various combos.
Newly working:
bluealsa to bluealsa
Continues to work:
bluealsa to TaoTronics
Android to bluealsa
Still playing silence:
bluealsa to soundbar
By my original bug report, this is resolved, thanks! I was hoping the resolution for bluealsa would also fix the soundbar, but no such luck, I guess. I know you're still in the middle of improving things, so I'll keep watching the commits. In case it's useful, here's the output I get when trying to play to the soundbar:
$ sudo bluealsa -p hfp-ag
bluealsa: ba-adapter.c:151: Freeing adapter: hci0
bluealsa: bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
bluealsa: bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/AudioGateway
bluealsa: main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: main.c:358: Starting main dispatching loop
bluealsa: bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: sco.c:154: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: bluez.c:997: RFCOMM: HFP Audio Gateway configured for device 6B:F4:54:06:AC:D1
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: rfcomm.c:593: Starting loop: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:982: Created new thread [ba-rfcomm]: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: ba-transport.c:982: Created new thread [ba-sco]: HFP Audio Gateway (CVSD)
bluealsa: sco.c:194: Starting SCO loop: HFP Audio Gateway (CVSD)
bluealsa: sco.c:69: Starting SCO dispatcher loop: hci0
bluealsa: at.c:161: AT message: SET: command:+BRSF, value:031
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+BRSF, value:2528
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 0 -> 2
bluealsa: at.c:161: AT message: TEST: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 2 -> 5
bluealsa: at.c:161: AT message: GET: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 5 -> 7
bluealsa: at.c:161: AT message: SET: command:+CMER, value:3,0,0,1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:124: HFP state transition: 7 -> 8
bluealsa: rfcomm.c:124: HFP state transition: 8 -> 9
bluealsa: rfcomm.c:124: HFP state transition: 9 -> 13
bluealsa: at.c:161: AT message: TEST: command:+CHLD, value:(null)
bluealsa: Unsupported AT message: TEST: command:+CHLD, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CMEE, value:1
bluealsa: Unsupported AT message: SET: command:+CMEE, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CLIP, value:1
bluealsa: Unsupported AT message: SET: command:+CLIP, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+CCWA, value:1
bluealsa: Unsupported AT message: SET: command:+CCWA, value:1
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+NREC, value:0
bluealsa: Unsupported AT message: SET: command:+NREC, value:0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:ERROR
bluealsa: at.c:161: AT message: SET: command:+VGS, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+VGM, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+XAPL, value:ABCD-1234-0100,10
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:+XAPL=BlueALSA,0
bluealsa: bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ba-transport.c:875: Checking Broadcom internal SCO routing
bluealsa: ba-transport.c:881: Current SCO interface setup: 1 2 0 1 1
bluealsa: ba-transport.c:915: New SCO link: 6B:F4:54:06:AC:D1: 14 (MTU: R:64 W:64)
bluealsa: ba-transport.c:635: State transition: 2 -> 2
bluealsa: ba-transport.c:863: Reusing SCO: 14
bluealsa: ba-transport.c:863: Reusing SCO: 14
bluealsa: ba-transport.c:718: PCM drained
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: sco.c:464: PCM poll error status: 0x10
bluealsa: ba-transport.c:955: Closing PCM: 8
bluealsa: ba-transport.c:931: Closing SCO: 14
$ aplay -v -D bluealsa:DEV=6B:F4:54:06:AC:D1,PROFILE=sco test.wav
bluealsa-pcm.c:625: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Setting constraints
Playing WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
bluealsa-pcm.c:344: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing HW
bluealsa-pcm.c:371: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: FIFO buffer size: 4096
bluealsa-pcm.c:377: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Selected HW buffer: 4 periods x 2000 bytes == 8000 bytes
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:414: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Prepared
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
Plug PCM: Rate conversion PCM (8000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 22050
exact rate : 22050 (22050/1)
msbits : 16
buffer_size : 11025
period_size : 2756
period_time : 125000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 2756
period_event : 0
start_threshold : 11025
stop_threshold : 11025
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Slave: BlueALSA PCM: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco
BlueALSA BlueZ device: /org/bluez/hci0/dev_6B_F4_54_06_AC_D1
BlueALSA Bluetooth codec: 0x1
bluealsa-pcm.c:264: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Starting
bluealsa-pcm.c:137: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Starting IO loop: 8
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Initializing SW
bluealsa-pcm.c:169: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: IO thread paused: 4
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Stopping
bluealsa-pcm.c:105: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: IO thread cleanup
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Stopping
bluealsa-pcm.c:384: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Freeing HW
bluealsa-pcm.c:334: /org/bluealsa/hci0/dev_6B_F4_54_06_AC_D1/sco: Closing
Audio gateway (obsolete Samsung Artik):
$ sudo hciconfig hci0 version
hci0: Type: Primary Bus: UART
BD Address: 70:2C:1F:32:15:85 ACL MTU: 1021:8 SCO MTU: 64:1
HCI Version: 4.0 (0x6) Revision: 0xe0
LMP Version: 4.0 (0x6) Subversion: 0x6119
Manufacturer: Broadcom Corporation (15)
Hands-free (Pi 3):
sudo hciconfig hci0 version
hci0: Type: Primary Bus: UART
BD Address: B8:27:EB:87:AD:65 ACL MTU: 1021:8 SCO MTU: 64:1
HCI Version: 4.1 (0x7) Revision: 0x168
LMP Version: 4.1 (0x7) Subversion: 0x2209
Manufacturer: Broadcom Corporation (15)
I'll go ahead and change it to 64 and see what happens.
Here is bluealsa-to-bluealsa output with the following patch applied to both AG and HF devices. It plays okay; the audio of longer files is momentarily garbled periodically, but I'm guessing that's just a feature of HFP:
--- bluez-alsa-ce960be902ea9dd9d7dd211afd0ec232433b3eda.orig/src/ba-transport.c 2019-11-11 13:18:58.868404453 -0700
+++ bluez-alsa-ce960be902ea9dd9d7dd211afd0ec232433b3eda/src/ba-transport.c 2019-11-11 13:34:46.102692599 -0700
@@ -713,7 +713,7 @@ int ba_transport_drain_pcm(struct ba_tra
* arbitrary time before releasing transport. In order to make it right,
* there is a requirement for an asynchronous release mechanism, which
* is not implemented - it requires a little bit of refactoring. */
- usleep(200000);
+ usleep(1000000);
debug("PCM drained");
return 0;
@@ -877,7 +877,9 @@ static int transport_acquire_bt_sco(stru
debug("New SCO link: %s: %d", batostr_(&t->d->addr), t->bt_fd);
+ debug("MTU before: R:%zu W:%zu", t->mtu_read, t->mtu_write);
t->mtu_read = t->mtu_write = hci_sco_get_mtu(t->bt_fd);
+ debug("MTU after: R:%zu W:%zu", t->mtu_read, t->mtu_write);
return t->bt_fd;
--- bluez-alsa-ce960be902ea9dd9d7dd211afd0ec232433b3eda.orig/src/hci.c 2019-11-11 13:18:51.684417398 -0700
+++ bluez-alsa-ce960be902ea9dd9d7dd211afd0ec232433b3eda/src/hci.c 2019-11-11 13:43:24.745751499 -0700
@@ -128,7 +128,7 @@ unsigned int hci_sco_get_mtu(int sco_fd)
/* XXX: It seems, that the MTU value returned by kernel
* is incorrect (or our interpretation of it). */
- options.mtu = 48;
+ options.mtu = 64;
if (voice.setting == BT_VOICE_TRANSPARENT)
options.mtu = 24;
$ sudo bluealsa -p hfp-ag
bluealsa: ba-adapter.c:151: Freeing adapter: hci0
bluealsa: bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/AudioGateway
bluealsa: bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/AudioGateway
bluealsa: main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: main.c:358: Starting main dispatching loop
bluealsa: bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: sco.c:167: Checking Broadcom internal SCO routing
bluealsa: sco.c:173: Current SCO interface setup: 0 2 0 1 1
bluealsa: sco.c:175: Setting SCO routing via transport interface
bluealsa: sco.c:201: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: sco.c:77: Starting SCO dispatcher loop: hci0
bluealsa: bluez.c:997: RFCOMM: HFP Audio Gateway configured for device B8:27:EB:87:AD:65
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: rfcomm.c:698: Starting loop: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:949: Created new thread [ba-rfcomm]: RFCOMM: HFP Audio Gateway
bluealsa: ba-transport.c:635: State transition: 0 -> 2
bluealsa: ba-transport.c:949: Created new thread [ba-sco]: HFP Audio Gateway (CVSD)
bluealsa: sco.c:242: Starting SCO loop: HFP Audio Gateway (CVSD)
bluealsa: at.c:161: AT message: SET: command:+BRSF, value:628
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+BRSF, value:2272
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:125: RFCOMM: HFP Audio Gateway state transition: 0 -> 2
bluealsa: at.c:161: AT message: TEST: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:125: RFCOMM: HFP Audio Gateway state transition: 2 -> 5
bluealsa: at.c:161: AT message: GET: command:+CIND, value:(null)
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:125: RFCOMM: HFP Audio Gateway state transition: 5 -> 7
bluealsa: at.c:161: AT message: SET: command:+CMER, value:3,0,0,1,0
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: rfcomm.c:125: RFCOMM: HFP Audio Gateway state transition: 7 -> 8
bluealsa: rfcomm.c:125: RFCOMM: HFP Audio Gateway state transition: 8 -> 9
bluealsa: rfcomm.c:859: RFCOMM poll timeout
bluealsa: at.c:161: AT message: SET: command:+VGM, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: at.c:161: AT message: SET: command:+VGS, value:15
bluealsa: rfcomm.c:106: Sending AT message: RESP: command:(null), value:OK
bluealsa: bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ba-transport.c:635: State transition: 2 -> 2
bluealsa: ba-transport.c:878: New SCO link: B8:27:EB:87:AD:65: 18
bluealsa: ba-transport.c:880: MTU before: R:0 W:0
bluealsa: hci.c:126: SCO link socket MTU: 18: 64
bluealsa: ba-transport.c:882: MTU after: R:64 W:64
bluealsa: ba-transport.c:718: PCM drained
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: a2dp.c:122: PCM read buffer flushed: 0
bluealsa: sco.c:512: PCM poll error status: 0x10
bluealsa: ba-transport.c:922: Closing PCM: 8
bluealsa: ba-transport.c:922: Closing PCM: 8
bluealsa: sco.c:335: Releasing SCO due to PCM inactivity
bluealsa: ba-transport.c:898: Closing SCO: 18
bluealsa: sco.c:335: Releasing SCO due to PCM inactivity
$ aplay -v -D bluealsa:DEV=B8:27:EB:87:AD:65,PROFILE=sco test.wav
bluealsa-pcm.c:625: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Setting constraints
Playing WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 22050 Hz, Mono
bluealsa-pcm.c:344: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing HW
bluealsa-pcm.c:371: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: FIFO buffer size: 4096
bluealsa-pcm.c:377: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Selected HW buffer: 4 periods x 2000 bytes == 8000 bytes
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:414: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Prepared
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
Plug PCM: Rate conversion PCM (8000, sformat=S16_LE)
Converter: libspeex (builtin)
Protocol version: 10002
Its setup is:
stream : PLAYBACK
access : RW_INTERLEAVED
format : S16_LE
subformat : STD
channels : 1
rate : 22050
exact rate : 22050 (22050/1)
msbits : 16
buffer_size : 11025
period_size : 2756
period_time : 125000
tstamp_mode : NONE
tstamp_type : GETTIMEOFDAY
period_step : 1
avail_min : 2756
period_event : 0
start_threshold : 11025
stop_threshold : 11025
silence_threshold: 0
silence_size : 0
boundary : 6206523236469964800
Slave: BlueALSA PCM: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco
BlueALSA BlueZ device: /org/bluez/hci0/dev_B8_27_EB_87_AD_65
BlueALSA Bluetooth codec: 0x1
bluealsa-pcm.c:264: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Starting
bluealsa-pcm.c:137: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Starting IO loop: 8
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:392: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Initializing SW
bluealsa-pcm.c:169: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: IO thread paused: 4
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Stopping
bluealsa-pcm.c:105: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: IO thread cleanup
bluealsa-pcm.c:305: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Stopping
bluealsa-pcm.c:384: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Freeing HW
bluealsa-pcm.c:334: /org/bluealsa/hci0/dev_B8_27_EB_87_AD_65/sco: Closing
$ sudo bluealsa -p hfp-hf
bluealsa: ../../src/ba-adapter.c:151: Freeing adapter: hci0
bluealsa: ../../src/bluez.c:1105: Creating hands-free profile object: /org/bluez/HFP/HandsFree
bluealsa: ../../src/bluez.c:1135: Registering hands-free profile: /org/bluez/HFP/HandsFree
bluealsa: ../../src/main.c:353: Acquiring D-Bus service name: org.bluealsa
bluealsa: ../../src/main.c:358: Starting main dispatching loop
bluealsa: ../../src/bluealsa-dbus.c:139: Called: org.bluealsa.Manager1.GetPCMs()
bluealsa: ../../src/bluez.c:1073: Called: org.bluez.Profile1.NewConnection()
bluealsa: ../../src/sco.c:167: Checking Broadcom internal SCO routing
bluealsa: ../../src/sco.c:173: Current SCO interface setup: 1 2 0 1 1
bluealsa: ../../src/sco.c:77: Starting SCO dispatcher loop: hci0
bluealsa: ../../src/sco.c:201: Created SCO dispatcher [ba-sco-dispatch]: hci0
bluealsa: ../../src/bluez.c:997: RFCOMM: HFP Hands-Free configured for device 70:2C:1F:32:15:85
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/ba-transport.c:949: Created new thread [ba-rfcomm]: RFCOMM: HFP Hands-Free
bluealsa: ../../src/ba-transport.c:635: State transition: 0 -> 2
bluealsa: ../../src/ba-transport.c:949: Created new thread [ba-sco]: HFP Hands-Free (CVSD)
bluealsa: ../../src/bluealsa-dbus.c:333: Called: org.bluealsa.PCM1.Open()
bluealsa: ../../src/rfcomm.c:698: Starting loop: RFCOMM: HFP Hands-Free
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+BRSF, value:628
bluealsa: ../../src/sco.c:242: Starting SCO loop: HFP Hands-Free (CVSD)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+BRSF, value:2272
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 0 -> 1
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 1 -> 2
bluealsa: ../../src/rfcomm.c:106: Sending AT message: TEST: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value:("call",(0,1)),("callsetup",(0-3)),("service",(0-1)),("signal",(0-5)),("roam",(0-1)),("battchg",(0-5)),("callheld",(0-2))
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 2 -> 4
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 4 -> 5
bluealsa: ../../src/rfcomm.c:106: Sending AT message: GET: command:+CIND, value:(null)
bluealsa: ../../src/at.c:161: AT message: RESP: command:+CIND, value:0,0,0,0,0,0,0
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 5 -> 6
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 6 -> 7
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+CMER, value:3,0,0,1,0
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 7 -> 8
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 8 -> 9
bluealsa: ../../src/rfcomm.c:859: RFCOMM poll timeout
bluealsa: ../../src/rfcomm.c:627: Updating microphone gain: 15
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+VGM, value:15
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 9 -> 9
bluealsa: ../../src/rfcomm.c:658: Updating speaker gain: 15
bluealsa: ../../src/rfcomm.c:106: Sending AT message: SET: command:+VGS, value:15
bluealsa: ../../src/at.c:161: AT message: RESP: command:, value:OK
bluealsa: ../../src/rfcomm.c:125: RFCOMM: HFP Hands-Free state transition: 9 -> 9
bluealsa: ../../src/sco.c:103: New incoming SCO link: 70:2C:1F:32:15:85: 8
bluealsa: ../../src/hci.c:126: SCO link socket MTU: 8: 64
bluealsa: ../../src/sco.c:412: SCO poll error status: 0x18
bluealsa: ../../src/ba-transport.c:898: Closing SCO: 8
$ bluealsa-aplay --profile-sco 70:2C:1F:32:15:85
bluealsa-aplay: ../../utils/aplay.c:801: Starting main loop
bluealsa-aplay: ../../utils/aplay.c:525: Creating PCM worker 70:2C:1F:32:15:85
bluealsa-aplay: ../../utils/aplay.c:356: Starting PCM loop
bluealsa-aplay: ../../utils/aplay.c:373: Device marked as inactive: 70:2C:1F:32:15:85
| gharchive/issue | 2019-11-05T18:52:47 | 2025-04-01T04:32:16.663547 | {
"authors": [
"Arkq",
"borine",
"killerkalamari"
],
"repo": "Arkq/bluez-alsa",
"url": "https://github.com/Arkq/bluez-alsa/issues/264",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1572517864 | Training with your own audio with MAX9814 and Pico Wireless
I followed your instructions in the Colab. How can I train with my own audio from the MAX9814?
And how can I use the USB feature with the Pico Wireless?
Hi @ferrygun,
You would need to replace the pdm_microphone_* API calls in https://github.com/ArmDeveloperEcosystem/microphone-library-for-pico/blob/main/examples/usb_microphone/main.c with the equivalent analog_microphone_* calls from the microphone-library-for-pico's analog_microphone.h
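A rough sketch of that swap for examples/usb_microphone/main.c follows. The function and struct names are taken from the library's analog_microphone.h and README; the GPIO pin, bias voltage, and buffer sizing are assumptions for a MAX9814 wired to an ADC-capable pin, so verify them against your checkout and wiring:

```c
// Sketch only: names come from the library's analog_microphone.h;
// the pin and bias voltage are assumptions for a MAX9814 on GPIO 26.
#include "pico/analog_microphone.h"

const struct analog_microphone_config config = {
    .gpio = 26,              // ADC-capable pin wired to the MAX9814 output
    .bias_voltage = 1.25,    // MAX9814 output sits around 1.25 V DC
    .sample_rate = SAMPLE_RATE,
    .sample_buffer_size = SAMPLE_BUFFER_SIZE,
};

// then replace the pdm_microphone_* calls one for one:
//   pdm_microphone_init(&config)                 -> analog_microphone_init(&config)
//   pdm_microphone_set_samples_ready_handler(cb) -> analog_microphone_set_samples_ready_handler(cb)
//   pdm_microphone_start()                       -> analog_microphone_start()
//   pdm_microphone_read(buffer, size)            -> analog_microphone_read(buffer, size)
```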
| gharchive/issue | 2023-02-06T12:53:47 | 2025-04-01T04:32:16.668977 | {
"authors": [
"ferrygun",
"sandeepmistry"
],
"repo": "ArmDeveloperEcosystem/ml-audio-classifier-example-for-pico",
"url": "https://github.com/ArmDeveloperEcosystem/ml-audio-classifier-example-for-pico/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Asymmetric serialization
Part of #271
As a developer, I want to be able to specify read-only fields to control how serialization works (for read and for write).
Acceptance criteria
[x] Add a readOnly field to the gpf.typedef.serializableProperty
[x] readOnly = false means the property can be written
[x] readOnly = true means the property can't be written
[x] readOnly = undefined will check if the field owns a setter using Object.getOwnPropertyDescriptor()
(if available) to decide if the field is read-only, default is false.
[x] the readOnly property must be resolved when invoking the converter function
[x] a readOnly property may not be modified when using gpf.serial.from: a special value can be returned (undefined) in the converter function. If the converter function is not specified, read-only properties are not modified.
[x] a readOnly property may be ignored when using gpf.serial.to: a special value can be returned (undefined) in the converter function. If the converter function is not specified, read-only properties are ignored.
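A minimal sketch of the undefined-case resolution (not the actual gpf-js implementation): an explicit true/false wins, and undefined falls back to inspecting the property descriptor along the prototype chain.

```javascript
// Sketch of the readOnly resolution rule described in the criteria above.
function isReadOnly (instance, propertyName, readOnly) {
    if (readOnly !== undefined) {
        return readOnly;
    }
    var object = instance;
    while (object) {
        var descriptor = Object.getOwnPropertyDescriptor(object, propertyName);
        if (descriptor) {
            if (descriptor.get || descriptor.set) {
                return descriptor.set === undefined; // accessor without a setter
            }
            return descriptor.writable === false; // non-writable data property
        }
        object = Object.getPrototypeOf(object);
    }
    return false; // default per the acceptance criteria
}
```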
Lots of rewrites in the US
| gharchive/issue | 2019-01-07T13:22:40 | 2025-04-01T04:32:16.680474 | {
"authors": [
"ArnaudBuchholz"
],
"repo": "ArnaudBuchholz/gpf-js",
"url": "https://github.com/ArnaudBuchholz/gpf-js/issues/299",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1352626487 | 🛑 ArnaudLigny.fr is down
In dc18e7e, ArnaudLigny.fr (https://arnaudligny.fr) was down:
HTTP code: 503
Response time: 5599 ms
Resolved: ArnaudLigny.fr is back up in c16b4b1.
| gharchive/issue | 2022-08-26T18:53:56 | 2025-04-01T04:32:16.683554 | {
"authors": [
"ArnaudLigny"
],
"repo": "ArnaudLigny/upptime",
"url": "https://github.com/ArnaudLigny/upptime/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
106050185 | Fixed a paths issue on Windows.
Should fix #5.
In my case no generated files were copied to public at all.
The problem was, as always, the difference between Windows and *nix path separators.
This change looks good. Something is wrong with the build as path.parse is failing. Once that's resolved I'll merge this change and cut a new release. Thanks for your work. I know a fair number of people have been wanting windows support.
Seems like I should target Node 0.10 API. I will update it soon.
| gharchive/pull-request | 2015-09-11T16:50:35 | 2025-04-01T04:32:16.687091 | {
"authors": [
"BrandonArp",
"DGolubets"
],
"repo": "ArpNetworking/sbt-typescript",
"url": "https://github.com/ArpNetworking/sbt-typescript/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
Modify the data shown for an agent on web login
When an agent logs in via the web, extra data is being shown: address, nationality, surname.
It must be modified so that only the following appears:
name (first and last name in the case of a person)
location (GPS coordinates; blank if the agent has none)
email
id
kind
kindCode
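A hypothetical example of the resulting payload (all values invented for illustration):

```json
{
  "name": "Maria Garcia Fernandez",
  "location": "43.361914, -5.849389",
  "email": "maria@example.com",
  "id": "12345678Z",
  "kind": "Person",
  "kindCode": 1
}
```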
Completed, with tests passing.
| gharchive/issue | 2018-02-18T09:16:41 | 2025-04-01T04:32:16.689211 | {
"authors": [
"PelayoDiaz",
"ameliafb"
],
"repo": "Arquisoft/Agents_e3a",
"url": "https://github.com/Arquisoft/Agents_e3a/issues/11",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
} |
ADR. Proposal and decision regarding the client-side architecture
Client-side architecture of the application.
Make the decision and complete the documentation for the ADR https://github.com/Arquisoft/lomap_es5a/wiki/ADR.-Estructura-Cliente-%5BFront-End%5D
Folder structure and SOLID provider defined.
Client structure defined
| gharchive/issue | 2023-02-17T14:07:08 | 2025-04-01T04:32:16.691012 | {
"authors": [
"RichardPix12",
"franciscocoya"
],
"repo": "Arquisoft/lomap_es5a",
"url": "https://github.com/Arquisoft/lomap_es5a/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1680031158 | Improvement: utilize environment variable for paths
Thanks a lot for your work @ArtemkaKun 🚀! I will certainly use a lot of it in some upcoming V projects 🙂
The PR utilizes an environment variable for the runner work path. If someone wants to use the action it's just one place that needs to be changed then. It also makes it a little less prone to typos in the path and more concise.
There is one thing I'm noticing when I try to use the action.
You won't run into this because your V installation is always restored from cache as it looks.
When using the setup-v action it's always failing when a version is specified:
Run vlang/setup-v@v1.3
Resolved as weekly.2023.08
Downloading vlang weekly.2023.08...
Creating directory: /home/runner/work/v-test/v-test/vlang/vlang_linux_x64
Downloading the archive
Downloaded archive 'https://codeload.github.com/vlang/v/legacy.tar.gz/refs/tags/weekly.2023.08'
Writing archive to disk
Extracting the archive
/usr/bin/tar xz --warning=no-unknown-keyword --overwrite -C /home/runner/work/v-test/v-test/vlang/vlang_linux_x64/4d2943d3-eaf3-4912-81e8-4aab43bda48b -f /home/runner/work/v-test/v-test/vlang/vlang_linux_x64/4d2943d3-eaf3-4912-81e8-4aab43bda48b.tar.gz
Resolved version vlang-v-096065f
Running make...
vlib/rand/rand.v:447:39: warning: variable `it` in array initialization will soon be replaced with `index`
Error: Command failed: make
This also happens when I specify a newer version, e.g. weekly.2023.15.
Installation works when removing the version.
This looks like an upstream issue with setup-v to me.
Looks like running the CI on a PR that doesn't have cache access runs right into the issue mentioned above.
Hi, thanks for the PR. Cache on GitHub Actions is removed automatically after some time of no usage. So we should fix v-setup error anyway (or don't use the cache, which is probably not the best solution right now)
The one workaround we could use is to not specify the version, build the latest commit every time, cache it, and then use the version output from v-setup as the key for the cache label.
Yep sounds good. The installation takes about 20 seconds so it's totally reasonable
I will merge this PR since it seems to be legit and will introduce that v-setup workaround in a few hours
Thanks. The PR was tested on a private repo before submitting it. Also, I just used the setup from the latest commit and the full workflow ran through 👍
| gharchive/pull-request | 2023-04-23T13:37:55 | 2025-04-01T04:32:16.705016 | {
"authors": [
"ArtemkaKun",
"tobealive"
],
"repo": "ArtemkaKun/v-project-basement",
"url": "https://github.com/ArtemkaKun/v-project-basement/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
279698115 | Set new Twitter characters limit
Since November 2017, Twitter has allowed tweets of up to 280 characters.
This plugin could be updated so that it no longer cuts tweets down to the previous 140-character limit.
It should work with v1.3.1.
Indeed ;-) Thanks
| gharchive/issue | 2017-12-06T10:19:31 | 2025-04-01T04:32:16.709781 | {
"authors": [
"ArthurHoaro",
"alexisju"
],
"repo": "ArthurHoaro/shaarli2twitter",
"url": "https://github.com/ArthurHoaro/shaarli2twitter/issues/9",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
503905203 | Fix size of cropping window
Can I keep a fixed size for the cropping window? The width and height should be 300 x 300. The user can reduce the cropping window but cannot make it larger, because my server side can only store images up to 300 x 300 in width and height.
Thanks
@Ani24may Hey have you found any solution?
no, not yet
@Ani24may hello, I have found something: you can set both of these to the same value to crop at a fixed size
.setMinCropResultSize(1000,500)
.setMaxCropResultSize(1000,500)
Thanks mate. Will definitely try this.
thanks @abbasalid
Below code works for me.
CropImage.activity().setGuidelines(CropImageView.Guidelines.ON)
.setMinCropResultSize(500, 300)
.setMaxCropResultSize(500, 300)
.start(this);
| gharchive/issue | 2019-10-08T08:35:07 | 2025-04-01T04:32:16.712423 | {
"authors": [
"Ani24may",
"abbasalid",
"asif-ullah"
],
"repo": "ArthurHub/Android-Image-Cropper",
"url": "https://github.com/ArthurHub/Android-Image-Cropper/issues/719",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
Has anyone hit an error when running the lag_indexes method in the make_features file?
KeyError: "Passing list-likes to .loc or [] with any missing labels is no longer supported. The following labels were missing: DatetimeIndex(['2015-04-01', '2015-04-02', '2015-04-03', '2015-04-04',\n '2015-04-05',\n ...\n '2015-06-26', '2015-06-27', '2015-06-28', '2015-06-29',\n '2015-06-30'],\n dtype='datetime64[ns]', length=92, freq=None). See https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike"
Change .loc[] to reindex()
Change .loc[] to reindex()
Don't post careless answers
Done, it works: just change iloc[dates] to reindex(dates)
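For anyone hitting the same KeyError, here is a minimal Python sketch (illustrative data, not the repository's actual code) of why swapping .loc[] for reindex() fixes it: since pandas 1.0, .loc raises on any missing label, while reindex fills missing labels with NaN.

```python
import pandas as pd

# Hypothetical miniature of the failing call: some requested dates are
# absent from the series index.
s = pd.Series([1.0, 2.0], index=pd.to_datetime(["2015-03-30", "2015-03-31"]))
wanted = pd.to_datetime(["2015-03-31", "2015-04-01"])  # 2015-04-01 is missing

# On pandas >= 1.0, .loc with any missing label raises KeyError.
try:
    s.loc[wanted]
    loc_raised = False
except KeyError:
    loc_raised = True

# reindex() accepts missing labels and fills them with NaN instead.
fixed = s.reindex(wanted)
print(loc_raised, int(fixed.isna().sum()))  # True 1
```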
| gharchive/issue | 2021-03-12T10:08:38 | 2025-04-01T04:32:16.720892 | {
"authors": [
"1980695671",
"EnriqueGautoSand",
"Tovi-A",
"timtian12"
],
"repo": "Arturus/kaggle-web-traffic",
"url": "https://github.com/Arturus/kaggle-web-traffic/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1401176101 | break out :)
A 2d game made with html, css, javascript
@aditi5050 can you please explain what you are going to do?
You have added code that is not at all related to our site.
I would like a justification for this code.
okay, then should I close this PR, and instead of this shall I change the description written in the POPULAR PACKAGES section to describe what Kedarnath & all are famous for.
@aditi5050 please create an issue describing what you want to do
Show your support by starring the repo 🤩
| gharchive/pull-request | 2022-10-07T13:28:28 | 2025-04-01T04:32:16.723436 | {
"authors": [
"Arun9739",
"aditi5050"
],
"repo": "Arun9739/Paryatana",
"url": "https://github.com/Arun9739/Paryatana/pull/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1425008078 | Created Snake Game
This is a snake game using the Python turtle library.
Already present, can't merge.
| gharchive/pull-request | 2022-10-27T04:22:42 | 2025-04-01T04:32:16.724236 | {
"authors": [
"Arunim313",
"yogesh78026"
],
"repo": "Arunim313/Hackto.py",
"url": "https://github.com/Arunim313/Hackto.py/pull/176",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1833923832 | Feature Request: Parameter to modify the UHD optical flow process resolution
I observe that with UHD mode I get far fewer distortions, but sometimes a fixed UHD mode alone still doesn't seem to be enough.
Could we get a uhd_scale parameter to modify the process resolution for the optical flow model?
Like the scale parameter described here:
https://github.com/megvii-research/ECCV2022-RIFE
This looks like more feature request for the rife ncnn library.
| gharchive/issue | 2023-08-02T21:25:30 | 2025-04-01T04:32:16.733812 | {
"authors": [
"Asd-g",
"Samhayne"
],
"repo": "Asd-g/AviSynthPlus-RIFE",
"url": "https://github.com/Asd-g/AviSynthPlus-RIFE/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1600154189 | feat: adapt to 1.9.6
Linked issues
Fix #18
Dependencies
Changes
Additional comments
This is just the stuff related to the breaking changes of 1.9.6
We should update the supported versions accordingly
| gharchive/pull-request | 2023-02-26T18:31:29 | 2025-04-01T04:32:16.743267 | {
"authors": [
"Nuttymoon",
"leopaul36"
],
"repo": "AshAvalanche/ansible-avalanche-collection",
"url": "https://github.com/AshAvalanche/ansible-avalanche-collection/pull/19",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
981687603 | HeightCalculator: Ensure all parents are inspected for height calculation
Visit all parents in the order they appear in source control, instead of just the first parent. This allows updating feature branches to get new versions instead of the version from which they originally branched.
[x] Make the algorithm iterative instead of recursive
[x] Unit/integration tests added
[x] Linked issue if exists
[x] Updated documentation
Note: code coverage regression is incorrect, the 2 lines are covered.
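Verlite itself is C#, but the idea of the change can be sketched (here in Python, with hypothetical names) as an iterative breadth-first walk that follows every parent of a merge commit instead of only the first:

```python
from collections import deque

def walk_all_parents(start, parents_of):
    """Visit every ancestor commit iteratively (breadth-first), following
    all parents of each commit, not just the first, and skipping commits
    that were already seen."""
    seen = {start}
    order = []
    queue = deque([start])
    while queue:
        commit = queue.popleft()
        order.append(commit)
        for parent in parents_of.get(commit, []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return order

# A tiny merge history: "merge" has two parents that share an ancestor.
history = {"merge": ["feat", "main"], "feat": ["root"], "main": ["root"]}
print(walk_all_parents("merge", history))  # ['merge', 'feat', 'main', 'root']
```

The iterative queue avoids the recursion depth issues mentioned in the checklist, and the `seen` set keeps shared ancestors from being visited twice.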
| gharchive/pull-request | 2021-08-28T01:13:34 | 2025-04-01T04:32:16.753796 | {
"authors": [
"AshleighAdams"
],
"repo": "AshleighAdams/Verlite",
"url": "https://github.com/AshleighAdams/Verlite/pull/45",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
Arduino - transmission via the Wi-Fi module
ESP8266 module
The ESP8266 sends the deviceID and ntu using a script. Tested - it works. I'm now working on #31.
| gharchive/issue | 2019-10-07T22:53:55 | 2025-04-01T04:32:16.776778 | {
"authors": [
"wojtaskamil"
],
"repo": "AsteRons/IoT_project",
"url": "https://github.com/AsteRons/IoT_project/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1012282749 | Reverse An Array
The program below is used to reverse an array.
For example, an array of {1,2,3,4,5}
will become {5,4,3,2,1} upon reversing.
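The PR's actual program isn't shown here, so as an illustration only, the in-place reversal described above can be sketched as:

```python
def reverse_array(arr):
    """Reverse a list in place by swapping symmetric pairs from both ends."""
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]
        left += 1
        right -= 1
    return arr

print(reverse_array([1, 2, 3, 4, 5]))  # [5, 4, 3, 2, 1]
```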
@Astrodevil #14
| gharchive/pull-request | 2021-09-30T14:55:10 | 2025-04-01T04:32:16.793977 | {
"authors": [
"KshitizSharma11"
],
"repo": "Astrodevil/Programming-Basics",
"url": "https://github.com/Astrodevil/Programming-Basics/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
314210452 | ONC FHIR Challenge Vulnerability: Token creation and validation
Two separate issues here but can be fixed with a similar patch. The oauth server used in this repo as an example was not a fully implemented oauth server. Replacing our mock oauth server with a fully secured oauth server should remediate both the following issues, but they need to be verified independently.
First issue is that with a client id and client secret, tokens can be self generated without going to an oauth server, and then used to access resources.
Second issue is that a client secret is not required to request an auth token when the client is "trusted".
mock services has been removed
| gharchive/issue | 2018-04-13T18:39:32 | 2025-04-01T04:32:16.801580 | {
"authors": [
"Robert-W",
"jonterrylee"
],
"repo": "Asymmetrik/node-fhir-server-core",
"url": "https://github.com/Asymmetrik/node-fhir-server-core/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1072652083 | Question: any recommendation on how to visualize the output?
I like the library. But is there a recommended way to visualize the output? I'm looking for something that looks like jest-diff output or "github PR changes", e.g just for the terminal?
Good question! Currently, there is no recommended way to visualize changes. However, it should not be difficult to make your own method of doing this, and if you make an open-source library for this, I would love to feature it on the Microdiff README. Also, #17 should help once it is merged.
I like the library. But is there a recommended way to visualize the output? I'm looking for something that looks like jest-diff output or "github PR changes", e.g just for the terminal?
Hello, would you like to visualize the output like this one?
(Green - was created, Red - was deleted, Yellow - was changed)
Hello, would you like to visualize the output like this one?
Yes, exactly.
Like github visualizes differences. Or Jest.
Hello, would you like to visualize the output like this one?
Yes, exactly.
Like github visualizes differences. Or Jest.
Okay, I asked because GitHub cannot show that something has been changed. It shows that the content of the line was deleted and then added, but microdiff has a CHANGE type. So would it be okay to have a third format for changed values?
Aha, then I misunderstood.
No, I don't see the need for a CHANGE type; instead, I expect to see "what exactly" changed in that line. E.g. if a string is changed, it would show the removed characters in red and the added characters in green.
So, consider these objects like old and new
const oldObject = {
a: 'value',
b: 'old value'
};
const newObject = {
b: 'another value',
c: 'new value'
};
Would you like to get output like this?
Closing this because a solution to this should be developed separately.
| gharchive/issue | 2021-12-06T21:50:10 | 2025-04-01T04:32:16.806983 | {
"authors": [
"AsyncBanana",
"denis-bel",
"kirillgroshkov"
],
"repo": "AsyncBanana/microdiff",
"url": "https://github.com/AsyncBanana/microdiff/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
122768983 | PHP 7 compatibility
I'm upgrading my project to PHP7, and I'm checking the compatibility of my dependencies.
I've run PHP7CC against my vendor folder, and it found just this line in this project:
File: [...]/vendor/gedmo/doctrine-extensions/lib/Gedmo/Tree/Entity/Repository/ClosureTreeRepository.php
> Line 320: Possible array element creation during by-reference assignment
$refs[$node[$idField]] =& $tmp[$key];
That refers to this line:
https://github.com/Atlantic18/DoctrineExtensions/blob/e94b30342810d028559dade3e4bac13520e93ba1/lib/Gedmo/Tree/Entity/Repository/ClosureTreeRepository.php#L320
Apart from this, is this project usable under PHP 7? Can we add an upper boundary to the composer.json constraint for this?
I just fixed our Travis setup so that the tests actually run on PHP 7 (was broken because it was trying to install the ODM and ext-mongo does not exist on PHP 7). Tests are passing on PHP 7 (for the ORM of course, not for the ODM).
Great, thanks!
Regarding the case reported above, I would need more explanation about when such element creation happens, to understand whether it is a false positive or not.
It's related to this BC change in PHP7: http://php.net/manual/en/migration70.incompatible.php#migration70.incompatible.variable-handling.array-order
It seems to affect only the order of the array. I don't know right now if it matters in this context.
Well, it is not relevant here, as we are dealing with different arrays.
I'm closing this issue now
| gharchive/issue | 2015-12-17T16:22:31 | 2025-04-01T04:32:16.842057 | {
"authors": [
"Jean85",
"stof"
],
"repo": "Atlantic18/DoctrineExtensions",
"url": "https://github.com/Atlantic18/DoctrineExtensions/issues/1491",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1726147717 | Login or sign up issue
The login page does not work with any of the social media options, nor with the sign-up option. I tried using Google, Facebook, and GitHub but none of them worked. I cleared my cache and cookies but the issue persists. Please fix this as soon as possible.
Could you assign this task to me. I can fix it.
Hello @ManavMalhotra, you pointed out a good issue.
I need your help regarding this project.
Actually, I am not able to log in or sign up in any way. I tried for a whole week, reading through the lines of code.
Your help will be of great value for me.
Thanks !
| gharchive/issue | 2023-05-25T16:36:02 | 2025-04-01T04:32:16.873334 | {
"authors": [
"ManavMalhotra",
"singhkaran202"
],
"repo": "AtriSukul1508/Njack_Hack_It_Out-Tur1ng-",
"url": "https://github.com/AtriSukul1508/Njack_Hack_It_Out-Tur1ng-/issues/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
352223623 | Phoebemirman/ch12671/add checks from walkthrough to portfolio
https://app.clubhouse.io/augur/story/12671/add-checks-from-walkthrough-to-portfolio-my-markets-test
Coverage remained the same at 66.623% when pulling 5275f42dd03413904b049f7be6e526b7f655af15 on phoebemirman/ch12671/add-checks-from-walkthrough-to-portfolio into ddc46cca07d555cd60750b6397ae8bfa458b65b2 on master.
Coverage remained the same at 66.623% when pulling 8de82565ba7c0248beea397fe0879d6bfd978e1a on phoebemirman/ch12671/add-checks-from-walkthrough-to-portfolio into ddc46cca07d555cd60750b6397ae8bfa458b65b2 on master.
Coverage remained the same at 66.623% when pulling f42703a03a131af39d52b53c215b1afc393acce7 on phoebemirman/ch12671/add-checks-from-walkthrough-to-portfolio into ddc46cca07d555cd60750b6397ae8bfa458b65b2 on master.
| gharchive/pull-request | 2018-08-20T17:16:30 | 2025-04-01T04:32:16.910845 | {
"authors": [
"coveralls",
"phoebemirman"
],
"repo": "AugurProject/augur-ui",
"url": "https://github.com/AugurProject/augur-ui/pull/2206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
262366251 | Create Range annotation
A field-level annotation that limits the random values generated for the annotated field.
For instance:
public class Address {
@Range(min=0,max=122)
private int streetNumber;
}
Only numbers between 0 and 122 (inclusive(?)) will be generated for streetNumber.
Can I give it a try?
Hey mate,
I was originally going to do it, but if you're up for it, go ahead! I will say that I think this will be somewhat complex, and some of the Generators will need to be refactored a little to support this functionality.
hi @AussieGuy0
It does seem complex. I will work on this, but please continue with your original plan. I will create a pull request if I am able to complete it.
| gharchive/issue | 2017-10-03T09:47:12 | 2025-04-01T04:32:16.938098 | {
"authors": [
"AussieGuy0",
"codejayant"
],
"repo": "AussieGuy0/SDgen",
"url": "https://github.com/AussieGuy0/SDgen/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
262368565 | Add Javadocs to Definition classes
Documentation needs to be added to the Definition classes. They are pretty simple and are described in the Readme.
I will do this
I would love to do this.
Awesome! If you have any questions, let me know.
Hi :) This is my first contribution... I haven't installed or learnt Maven yet so I was hoping to use the jar you mention in the README but the link is a 404.
Hey Jo,
Sorry about that! Click here to get the jar. I've updated the Readme as well.
Also, you may need to get the uniVocity parser library from here as well
No worries lol I got the jar now but the library is 'not found'. All good though I don't really need it to do the javadocs although it will probably help me to understand the code better... and use your program lol.
OK so I have done the javadocs to the best of my knowledge. Not sure how to push them without them going straight into the repo. Should I make a branch first or something? Being my first commit to someone else's project, I'm not sure. Or will it automatically go to a pull request if I push? (git push origin master)
OK pull request is there now from fork.
| gharchive/issue | 2017-10-03T09:55:29 | 2025-04-01T04:32:16.942291 | {
"authors": [
"AussieGuy0",
"Jo-Hunter",
"shivanshsoni"
],
"repo": "AussieGuy0/SDgen",
"url": "https://github.com/AussieGuy0/SDgen/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1939896946 | LAT-129 Contrib-Payload-Logger
[x] The JSONata does not work correctly in the identifier field.
Apparently only the path is read out, instead of everything being converted with JSONata.
[x] Enhancement: append-to-file function.
[x] Complete the scoreboard:
The Node.js version is missing
[x] Rename the category: categories are always written in lowercase:
So rename Autexis to autexis
[x] Code cleanup (awaits in loops, unused vars, etc...)
The Changelog.md entry is still missing.
Closes Issue #5 in commit 595302b
| gharchive/pull-request | 2023-10-12T12:33:04 | 2025-04-01T04:32:16.948634 | {
"authors": [
"dgr-autexis",
"jhe-autexis"
],
"repo": "Autexis-IT/node-red-contrib-payload-logger",
"url": "https://github.com/Autexis-IT/node-red-contrib-payload-logger/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1993686005 | Create a modal to answer questions from recommandation
Create a modal template to answer questions
Handle the request to retrieve the question files
Due to the wrong structure, the structure will also be fixed in this issue (e.g., create an id correspondence for the front end)
| gharchive/issue | 2023-11-14T22:31:00 | 2025-04-01T04:32:16.993081 | {
"authors": [
"Mijux"
],
"repo": "Autom-A/AutomA-WebUI",
"url": "https://github.com/Autom-A/AutomA-WebUI/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1336383227 | [Draft] Adds publish job to release to NPM when a new GitHub release is detected
Description
This is a work in progress that aims to create automated releases to NPM when a new GitHub release is created. It needs https://github.com/Automattic/vip-actions/pull/4 to be merged first. Not tested yet.
Steps to Test
TODO
Check out PR.
Run npm run build
Run ./dist/bin/vip-cookies.js nom
Verify cookies are delicious.
Closing. We'll pursue another route.
| gharchive/pull-request | 2022-08-11T19:15:57 | 2025-04-01T04:32:17.206461 | {
"authors": [
"brunobasto"
],
"repo": "Automattic/vip",
"url": "https://github.com/Automattic/vip/pull/1076",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2413075673 | Improve style of metric control
Description
Having two metrics takes up my whole page. A single metric takes 100% of my screen. This is unusable by itself.
This should look like this by default:
Acceptance Criteria
[ ] Ensure new code is unit tested, and check code coverage is at least 90%.
[ ] Propagate any change on the demos and run all of them to ensure there is no breaking change.
[ ] Ensure any change is well documented.
Code of Conduct
[X] I have checked the existing issues.
[ ] I am willing to work on this issue (optional)
Is this still an issue @FlorianJacta ?
What has changed? I believe it is still an issue
You used metrics in your demo on Friday and they looked fine
I used the layout and height properties to adjust it. There is also an issue where the metric component doesn't work if the height is too small.
| gharchive/issue | 2024-07-17T09:08:10 | 2025-04-01T04:32:17.322031 | {
"authors": [
"FlorianJacta",
"FredLL-Avaiga"
],
"repo": "Avaiga/taipy",
"url": "https://github.com/Avaiga/taipy/issues/1535",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1740612017 | UserWarning: on_action: 'on_button_action' is not a valid function
Trying to follow the tutorial in docs - https://docs.taipy.io/en/latest/getting_started/getting-started-gui/step_03/ReadMe/
Not sure where this warning is coming from and notify/'Reset' is not working, to reproduce, here's the code (main.py)
from textwrap import dedent
from taipy.gui import Gui, notify
def main() -> int:
text = "Original text"
page = dedent("""\
# Getting started with *Taipy*
My text: <|{text}|>
<|{text}|input|>
<|Run local|button|on_action=on_button_action|>
""")
Gui(page=page).run(use_reloader=True)
return 0
def on_button_action(state):
notify(state, "info", f"The text is {state.text}")
state.text = "Button Pressed"
def on_change(state, var_name, var_value):
if var_name == "text" and var_value == "Reset":
state.text = ""
return
if __name__ == "__main__":
raise SystemExit(main())
$ cat requirements.txt
taipy
torch
transformers
# jupyter notebook
jupyterlab
$ python --version
Python 3.11.2
$ pip freeze | grep taipy
taipy==2.2.0
taipy-config==2.2.0
taipy-core==2.2.3
taipy-gui==2.2.1
taipy-rest==2.2.1
$ python main.py --port 5001
[2023-06-04 22:22:16,655][Taipy][INFO] application is running in 'debug' mode
[2023-06-04 22:22:16,655][Taipy][INFO] 'allow_unsafe_werkzeug' has been set to True
[2023-06-04 22:22:16,655][Taipy][INFO] 'async_mode' parameter has been overridden to 'threading'. Using Flask built-in development server with debug mode
[2023-06-04 22:22:16,836][Taipy][INFO] * Server starting on http://127.0.0.1:5001
* Serving Flask app 'Taipy'
* Debug mode: on
[2023-06-04 22:22:18,630][Taipy][INFO] * Server reloaded on http://127.0.0.1:5001
WARNING:root:
--- 1 warning(s) were found for page '/' in variable 'page' ---
- Warning 1: button.on_action: on_button_action is not a function
----------------------------------------------------------------
UserWarning: on_action: 'on_button_action' is not a valid function
The Warning means that Taipy didn't find the function: on_button_action. It comes from the fact that the Gui is not being run directly in the Python script.
The code below works:
from textwrap import dedent
from taipy.gui import Gui, notify
text = "Original text"
page = dedent("""\
# Getting started with *Taipy*
My text: <|{text}|>
<|{text}|input|>
<|Run local|button|on_action=on_button_action|>
""")
def on_button_action(state):
notify(state, "info", f"The text is {state.text}")
state.text = "Button Pressed"
def on_change(state, var_name, var_value):
if var_name == "text" and var_value == "Reset":
state.text = ""
return
if __name__ == "__main__":
Gui(page=page).run(use_reloader=True)
Is there a reason why you would want to run it inside a function?
The new setup works. Putting the functions inside the main() function like this works too:
from textwrap import dedent
from taipy.gui import Gui, notify
def main() -> int:
def on_button_action(state):
notify(state, "info", f"The text is {state.text}")
state.text = "Button Pressed"
def on_change(state, var_name, var_value):
if var_name == "text" and var_value == "Reset":
state.text = ""
return
text = "Original text"
page = dedent("""\
# Getting started with *Taipy*
My text: <|{text}|>
<|{text}|input|>
<|Run local|button|on_action=on_button_action|>
""")
Gui(page=page).run(use_reloader=True)
return 0
if __name__ == "__main__":
raise SystemExit(main())
I wonder why? After a little debugging, it looks like Taipy looks for functions in the locals() scope, not in the globals() scope.
In the working code, these functions are present in locals() scope
(Pdb) pp locals()
{'on_button_action': <function main.<locals>.on_button_action at 0x10cd9a520>,
'on_change': <function main.<locals>.on_change at 0x10ce36660>}
In the previous version, they were not in locals() but in globals()
(Pdb) pp globals()
{'Gui': <class 'taipy.gui.gui.Gui'>,
'__annotations__': {},
'__builtins__': <module 'builtins' (built-in)>,
'__cached__': None,
'__doc__': None,
'__file__': '/Users/saurabhchopra/dev/learnings/learn_taipy/main1.py',
'__loader__': <_frozen_importlib_external.SourceFileLoader object at 0x10d0ad450>,
'__name__': '__main__',
'__package__': None,
'__spec__': None,
'dedent': <function dedent at 0x10d1a63e0>,
'main': <function main at 0x10d1a65c0>,
'notify': <function notify at 0x137231bc0>,
'on_button_action': <function on_button_action at 0x10d0d4b80>,
'on_change': <function on_change at 0x10d10a520>}
The problem with this setup is that it doesn't scale to a big project: when these functions are imported from different .py files, they will end up in the globals() scope and Taipy won't discover them.
Might I propose this as a new Feature Request?
This is done on purpose. In Taipy we propose a Page Scope, where you can efficiently import functions or variables from other modules to decompose your code. We normally decompose the code as below:
pages/
|--- page1.py
|--- page2.py
main.py
You can find in pagex.py all the functions, the Markdown, and variables needed for a particular page. In main.py, you provide the global logic of the application as well as the creation of the multipage application.
Take a look at this repository, for example. In main.py, we import the global functions that Taipy needs across pages. All the other variables, pages, and functions specific to a page can be found in the page folder. This way, you can independently use the same variable name on different pages.
What do you think about this setup?
Have you had the time to check this solution? What do you think?
The documentation has totally changed. We also changed to Taipy 3.0. Were you able to test the code? Can we close the issue?
Here is the new version of this code.
| gharchive/issue | 2023-06-04T21:25:03 | 2025-04-01T04:32:17.333669 | {
"authors": [
"FlorianJacta",
"saurbhc"
],
"repo": "Avaiga/taipy",
"url": "https://github.com/Avaiga/taipy/issues/164",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
448581326 | Calendar icon overlaps dropdown - React
When running my local server at http://localhost:3000/, I get taken to the My Healthcare App, where I see a form. I noticed that when I selected the health care provider, the calendar icon for the input field below the health care provider appears above the dropdown of the **health care** field.
closing since this should be opened against https://github.com/Availity/availity-react
This was fixed in an update a few weeks back to availity-react. Forgot to close this issue after the fact.
| gharchive/issue | 2019-05-26T16:01:35 | 2025-04-01T04:32:17.336537 | {
"authors": [
"GoPro16",
"devpato",
"robmcguinness"
],
"repo": "Availity/availity-workflow",
"url": "https://github.com/Availity/availity-workflow/issues/222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1803032866 | Restored functionalities and improvements
Made Hyperlink available to both messageboxes.
Added Input capability to both messageboxes.
Created design viewmodels for proper view design.
So cool PR, great job.
Please resolve the conflict and I'll approve the PR.
Join our official Avalonia community: https://t.me/Avalonia
Conflicts resolved :)
Thanks @gusmanb
works awesome! I've added multiline support for the input to your changes as well: #152
| gharchive/pull-request | 2023-07-13T13:28:34 | 2025-04-01T04:32:17.341218 | {
"authors": [
"CreateLab",
"Fruchtzwerg94",
"gusmanb"
],
"repo": "AvaloniaCommunity/MessageBox.Avalonia",
"url": "https://github.com/AvaloniaCommunity/MessageBox.Avalonia/pull/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1560625285 | AvaloniaList/List compiler support
Is your feature request related to a problem? Please describe.
Right now, when a developer sets a string value on an AvaloniaList property, the compiler will emit an "AvaloniaListConverter",
which internally uses TypeUtilities.TryConvert, which is marked as unsafe for reflection, as it can fall back to reflection in some code paths.
Instead, the compiler should attempt to parse the list at compile time and, if successful, emit a pre-constructed collection.
Describe the solution you'd like
To support this flexibly, we should add an attribute that works as a hint for the compiler on how to separate items in the collection.
I.e., Control.Classes is separated by spaces, while AvaloniaListConverter expects comma-separated items.
We can expect something like:
[AvaloniaListAttribute(Separators = new [] { " " })]
public Classes Classes { get { } set { } }
[AvaloniaListAttribute(Separators = new [] { "," })]
public Avalonia.Collections.AvaloniaList<double>? Dashes { get { } set { } }
Additional context
It improves following APIs:
// Shape
public Avalonia.Collections.AvaloniaList<double>? StrokeDashArray { get { throw null; } set { } }
// TickBar/Slider
public Avalonia.Collections.AvaloniaList<double> Ticks { get { throw null; } set { } }
// Border
public Avalonia.Collections.AvaloniaList<double>? BorderDashArray { get { throw null; } set { } }
// DashStyle
public Avalonia.Collections.AvaloniaList<double>? Dashes { get { throw null; } set { } }
// PolylineGeometry
public Avalonia.Points Points { get { throw null; } set { } }
Right now, we already do the same for Control.Classes property as a special case. It can be generalized.
Currently Avalonia.Themes.Fluent has exactly one usage of AvaloniaListConverter which can be avoided completely.
Additional context #2
WinUI/UWP has implemented a very similar idea but hardcoded it to only commas (and semicolons?). There is quite a lot of discussion about this, while I believe we should use an attribute as a hint, keeping the default behavior just like AvaloniaListConverter, i.e. commas.
https://github.com/microsoft/microsoft-ui-xaml/issues/673
https://github.com/microsoft/microsoft-ui-xaml-specs/blob/master/active/gridsyntax/GridSyntaxSpec2.md#other-use-cases
Also, there was an interesting idea to introduce a markup extension - https://github.com/kekekeks/XamlX/pull/28. Which, in my opinion, should be a separate feature, because we already have support for string-to-array parsing at runtime using a converter.
| gharchive/issue | 2023-01-28T01:29:04 | 2025-04-01T04:32:17.346875 | {
"authors": [
"maxkatz6"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/10109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1613172929 | No device input data
Describe the bug
StartLinuxFbDev does not receive any input events.
To Reproduce
Steps to reproduce the behavior:
open view.
Expected behavior
Input events should fire.
Screenshots
Desktop (please complete the following information):
OS: Linux Ubuntu 20.04
Version 11.0.0-preview5
Additional context
chip: rk3568
on linux:
on my pc:
Works when run as the root user
| gharchive/issue | 2023-03-07T10:45:50 | 2025-04-01T04:32:17.351722 | {
"authors": [
"Coloryr"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/10578",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2290711263 | MenuFlyout ItemsSource item binding command always disabled
Describe the bug
After I upgraded to 11.1.0-beta2, all my menu items are disabled.
To Reproduce
<Button ToolTip.Tip="摄像头">
<Image Source="avares:/Assets/Images/btn_camera.png" />
<Button.Flyout>
<MenuFlyout ItemsSource="{Binding VideoDevItems}" Opening="MenuVideo_Opening" />
</Button.Flyout>
<Button.Styles>
<Style Selector="MenuItem" x:DataType="models:VideoDeviceViewModel">
<Setter Property="Header" Value="{Binding Text}"/>
<Setter Property="Command" Value="{Binding #PART_Frame.SelectVideoDevice}" />
<Setter Property="CommandParameter" Value="{Binding}" />
<Setter Property="Icon">
<Template>
<Image Source="avares:/Assets/Images/checked.png" IsVisible="{Binding IsSelected}" />
</Template>
</Setter>
</Style>
</Button.Styles>
</Button>
The menu items are always disabled.
If I remove
<Setter Property="Command" Value="{Binding #PART_Frame.SelectVideoDevice}" />
MenuItem will be enabled.
This code works fine on 11.0.10.
Expected behavior
No response
Avalonia version
11.1.0-beta2
OS
Windows
Additional context
No response
Hi @29654761 - I've tried to reproduce your problem but unfortunately I don't have enough information to be able to do so. For example, the #PART_Frame control that is the source of your command isn't included in your example code.
Would you be able to create a minimal repro in the form of a GitHub repository or a .zip file that I can run?
I created a new project and copied the code over, and it works OK.
The raw project was created from an older version, so I tried to figure out the difference.
I found that removing the line below from the csproj file makes it work fine:
<AvaloniaUseCompiledBindingsByDefault>true</AvaloniaUseCompiledBindingsByDefault>
@29654761 that means you are most likely having a DataContext issue. You may need to cast or specify x:DataType
See: https://docs.avaloniaui.net/docs/basics/data/data-binding/compiled-bindings
[!NOTE]
If you upload a minimal sample and we verify it is an issue, I'll reopen this issue.
AvaloniaApplication1.zip
@timunie This minimal sample can reproduce the issue.
Sorry, I uploaded it again.
The repro is great. It points out that it works if:
I use ReflectionBinding
or I use an ICommand instead of a void method
or I consume the void method from outside the Popup.
No clue why, but maybe @grokys has an idea where to look. Not a priority issue, as we have enough ways to make the same thing work.
| gharchive/issue | 2024-05-11T06:25:08 | 2025-04-01T04:32:17.361359 | {
"authors": [
"29654761",
"grokys",
"rabbitism",
"timunie"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/15689",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2492578134 | Win+H dialog is not working correctly
Describe the bug
When I focused on the TextBox and press Win+H dialog says:
To Reproduce
Create TextBox without brush set focus on it and press Win+H.
Expected behavior
Avalonia must support Windows voice input dialog.
Avalonia version
11.1.0
OS
Windows
Additional context
No response
FocusManager.ClearFocus();
SearchBoxUI.Focus(NavigationMethod.Tab);
InputSimulator input = new InputSimulator();
input.Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.TAB);
input.Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.TAB);
input.Keyboard.KeyPress(WindowsInput.Native.VirtualKeyCode.TAB);
await Task.Delay(5000);
App.VoiceService.LaunchVoiceInput();
I had to use an ugly hack, because it works when focusing via TAB selection 😅
| gharchive/issue | 2024-08-28T16:50:34 | 2025-04-01T04:32:17.364368 | {
"authors": [
"DmitryBorodiy"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/16847",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
338534435 | ScreenPage throws NullReferenceException
The ScreenPage in the ControlCatalog currently does not work. It creates a new FormattedText, but without a typeface.
Both Skia and Direct2D will fail to create a FormattedTextImpl, because they assume that the typeface is non-null.
Maybe make passing a non-null typeface a requirement when creating a FormattedText? Or just defaulting to a typeface when it is null?
I'm not sure why this page draws everything in its Render method, it's a bit strange.
Also Screen isn't a control so I'm not sure it's even relevant to have it in ControlCatalog. Should we remove it?
I think there are two separate issues:
The Screen page is a little strange (but also cool IMO).
FormattedText without a Typeface throws a NullReferenceException
FormattedText without a Typeface throws a NullReferenceException
Yes, but if we made the page a more standard control then we wouldn't have this problem...
Can we fall back to typeface = new Typeface(FontFamily.Default)? But yeah... the Screen page is pretty weird.
Yeah we should do that at least. I would prefer to have it removed or at least rewritten to be less weird, but that would prevent the exception for the moment.
| gharchive/issue | 2018-07-05T11:25:46 | 2025-04-01T04:32:17.367980 | {
"authors": [
"CommonGuy",
"grokys",
"nc4rrillo"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/1730",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
590000084 | Add Support for Shorthand Hex Color String in SolidColorBrush etc
A shorthand color string should be supported. E.g. #000 (which translates to FF000000). WPF supports this, so I am surprised this got left out.
I think we can start from here
Background="#ff123456" works without issues
I suspect that @Metric wants the short color strings supported
i.e.
#f00 -> #ff0000
#f0f -> #ff00ff
#f321 -> #ff332211
#3f25 -> #33ff2255
You can add this to Color.Parse(), simply duplicating every char, excluding the #
if (s.Length >= 4 && s.Length <= 5)
{
    // "#f00" -> "##ff0000" -> Substring(1) -> "#ff0000"
    s = string.Concat(s.SelectMany(@char => new[] { @char, @char })).Substring(1);
}
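For illustration, here is a minimal, self-contained sketch of the shorthand expansion described above (the ShorthandHex class and Expand method are hypothetical names used for illustration; this is not Avalonia's actual Color.Parse implementation):

```csharp
using System;
using System.Linq;

static class ShorthandHex
{
    // Expands a 3- or 4-digit shorthand hex color ("#RGB" or "#ARGB")
    // to its full form by duplicating every hex digit.
    public static string Expand(string s)
    {
        if (s.Length >= 4 && s.Length <= 5 && s.StartsWith("#"))
        {
            // Duplicating every char (including '#') turns "#f00" into
            // "##ff0000"; Substring(1) then drops the extra leading '#'.
            s = string.Concat(s.SelectMany(c => new[] { c, c })).Substring(1);
        }
        return s;
    }
}
```

With this, "#f00" expands to "#ff0000" and "#3f25" to "#33ff2255", matching the mappings listed above.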
| gharchive/issue | 2020-03-30T05:17:54 | 2025-04-01T04:32:17.371151 | {
"authors": [
"Gillibald",
"Metric",
"rstm-sf",
"wdcossey"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/3706",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2442108101 | Remove internal access modifier from IWindowImpl.GetWindowsZOrder
What is the current behavior?
External platform implementations requiring IWindowImpl cannot be created by third-parties (affects 11.1.0 and 11.1.1 releases).
Fixed issues
Fixes #16553
Is there any other place where there are two lists/spans in a public API returned together where indexes are expected to match?
I'm pretty sure this is an API design violation. A single list should be returned with a container containing the window/zindex. Otherwise all consumers have to check and verify indexing.
This was internal for a reason I think.
On second thought, I don't think this should even be here. This interface is for a single window. We can have a GetZIndex that returns it for the window instance ONLY. And higher-level functionality should be somewhere else, like a window manager.
@robloo that's a private API.
Public API method has different definition: https://github.com/AvaloniaUI/Avalonia/blob/4c2d9fcda7bb703600e4146cefb790c5cbbb3bb6/src/Avalonia.Controls/Window.cs#L849
@maxkatz6 @MrJul This is "public" in the sense that 3rd parties are implementing it already. Although I realize Impl are considered unstable. I'm still really disappointed in this design. It's a mistake to have two arrays indexed together like this. It would have been better to use a Tuple within the Span at least.
Moreover, the architecture is very questionable. First, I think we got lazy adding this to the IWindowImpl interface. We are starting to have a number of issues in this area IMO. TopLevel, App, Lifetime and now Window are convoluted and the architecture and functionality was not designed properly between these. I ran across this already with the shutdown logic. Now here with Window being aware of and containing higher-level functionality. It's certainly not properly following a control hierarchy in the case of Window...
@BAndysc
This whole thing feels hacky to me. Your first ideas in the original PR seem much more in-line with what I am thinking here. I'm really surprised the core team asked you to change it. I disagree with the reasoning.
I would have added this functionality to the lifetime and kept sorting internal.
In fact, I would have the existing Windows property always return the windows sorted by ZOrder. Then the actual sorting logic like this doesn't need to be exposed to the application which is very strange especially as a static method in the Window.
The actual functionality to get OS window Z order should have been in IWindowingPlatform.
Proper API design should have been the number 1 consideration in the original PR and it seems it was not.
So long story short I still think all of this is a design mistake even though I agree the functionality is very important to have.
@robloo I suggest opening a new issue for redesign discussion. This PR is only meant to remedy a breaking change for third-party platform implementers that can't be worked around. This makes Avalonia better today with minimal effort. There aren't contract guarantees, so the API could be moved or reworked. I have no strong opinions about this design as I don't use the functionality.
@robloo
I'm answering here a last time, but let's open an issue or discussion if you want to push this further.
It's a mistake to have two arrays indexed together like this. It would have been better to use a Tuple within the Span at least.
While it's not the best API ever, I personally think it's fine enough for a low-level API, especially an internal one. For example Array.Sort(keys, items) (https://learn.microsoft.com/en-us/dotnet/api/system.array.sort?view=net-8.0#system-array-sort-2(-0()-1()) which is very similar has existed forever.
Moreover, the architecture is very questionable. First, I think we got lazy adding this to the IWindowImpl interface.
The actual functionality to get OS window Z order should have been in IWindowingPlatform.
Here I completely agree, it should definitely have been part of IWindowingPlatform or equivalent, it's quite odd on IWindowImpl.
TopLevel, App, Lifetime and now Window are convoluted and the architecture and functionality was not designed properly between these.
I also agree. Lifetime is a bit separate (it's really a higher, optional level API compared to the *Impl one), but ITopLevelImpl and IWindowImpl are becoming a bit messy and are due for a cleanup. (For example, I've wished several times that Window wasn't a TopLevel.) But we can discuss that separately.
This is "public" in the sense that 3rd parties are implementing it already
I disagree with this point: these are marked unstable for a reason. Believe me, I've mentioned before v11 that I was disappointed in so many public API being made "semi-internal". But that's also the strength of the system: we're able to change them. People have been warned, and if anyone implemented them, they should be prepared for potential breaking changes. And I'm including myself in those people. It's not a perfect system, it's a difficult balance, but I much prefer this to having everything internal (I'm looking at you, WPF).
That being said, I really hope that in the future their shape would be stable enough to make them officially public again.
Regarding the ZOrder API itself:
Your first ideas in the original PR seem much more in-line with what I am thinking here
The original PR, while being better at first glance, was posing problems in both performance (which isn't good but can sometimes be accepted), and more importantly in correctness. Z-order stability wasn't guaranteed, which completely defeated the purpose of a Z-order.
In fact, I would have the existing Windows property always return the windows sorted by ZOrder.
So paying the overhead of iterating through most windows in the operating system to simply list the app's windows? For an order most people won't care about? Which could change between two property calls? I'm sorry, but that's a recipe for disaster: a simple for loop without capturing the property could potentially return the same window twice.
I believe that every feature should be "pay for play" as much as possible.
Proper API design should have been the number 1 consideration in the original PR and it seems it was not.
Correctness is probably the number 1 consideration, but I agree that the API design could have been better. Honestly, I wish we could have a proper API review process, one that would work for both the core team and contributors. I know this has been discussed briefly in the past, but I think it's becoming more and more important. Even though I don't agree with everything you said, you raise an important point here.
| gharchive/pull-request | 2024-08-01T10:36:58 | 2025-04-01T04:32:17.385032 | {
"authors": [
"MrJul",
"maxkatz6",
"robloo",
"stevemonaco"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/16562",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
517157251 | Use a newer macosx SDK
What does the pull request do?
What is the current behavior?
What is the updated/expected behavior with this PR?
How was the solution implemented (if it's not obvious)?
Checklist
[ ] Added unit tests (if possible)?
[ ] Added XML documentation to any related classes?
[ ] Consider submitting a PR to https://github.com/AvaloniaUI/Avaloniaui.net with user documentation
Breaking changes
Fixed issues
We should probably change XCode version instead. Something like
/bin/bash -c sudo xcode-select --switch /Applications/Xcode_10.3.app/Contents/Developer
See
https://github.com/Microsoft/azure-pipelines-image-generation/blob/master/images/macos/macos-10.14-Readme.md#installed-sdks
https://github.com/Microsoft/azure-pipelines-image-generation/issues/886#issuecomment-487101917
We can use a specific version of Xcode. It should work either way.
| gharchive/pull-request | 2019-11-04T13:36:40 | 2025-04-01T04:32:17.389744 | {
"authors": [
"Gillibald",
"kekekeks"
],
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/3211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2566052719 | added crypto-dashboard
Closes: #785
Title: Crypto Dashboard
Name: Aditi
Identify yourself: GSSoC Ext and Hacktoberfest 2024
Screenshots 📷
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Code style update (formatting, local variables)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
Checklist: ☑️
[x] My code follows the Contributing Guidelines & Code of Conduct of this project.
[x] This PR does not contain plagiarized content.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly wherever it was hard to understand.
[x] My changes generate no new warnings.
https://github.com/JayeshLab/vue-crypto-dashboard
@Avdhesh-Varshney please can you remove the spam tag? Because of it I got disqualified from Hacktoberfest.
Please, I will not repeat this again. Can you remove the spam tag?
Don't repeat it next time
OK thank you sir
| gharchive/pull-request | 2024-10-04T10:38:16 | 2025-04-01T04:32:17.405955 | {
"authors": [
"Aditijainnn",
"Avdhesh-Varshney"
],
"repo": "Avdhesh-Varshney/WebMasterLog",
"url": "https://github.com/Avdhesh-Varshney/WebMasterLog/pull/790",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
585787513 | onRegionChangeComplete is missing latitudeDelta, longitudeDelta
onRegionChangeComplete is called with an object of shape {lat, lng} when it should also have latDelta, lngDelta. Need to find a workaround
Forked react-native-web-maps to do this
| gharchive/issue | 2020-03-22T18:58:13 | 2025-04-01T04:32:17.406995 | {
"authors": [
"kafkas"
],
"repo": "AvenCloud/coronamap",
"url": "https://github.com/AvenCloud/coronamap/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1274919696 | 🛑 Microsoft Minecraft API is down
In a7bf779, Microsoft Minecraft API (https://api.minecraftservices.com/authentication/login_with_xbox) was down:
HTTP code: 502
Response time: 253 ms
Resolved: Microsoft Minecraft API is back up in b2746af.
| gharchive/issue | 2022-06-17T11:31:27 | 2025-04-01T04:32:17.409256 | {
"authors": [
"GeekCornerGH"
],
"repo": "AventiumSoftworks/helios-status-page",
"url": "https://github.com/AventiumSoftworks/helios-status-page/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1228659791 | FPS drops a lot when playing in third person view with aimbot enabled
Please fix fps dividing by 4 in third person view
probably fixed
| gharchive/issue | 2022-05-07T15:52:10 | 2025-04-01T04:32:17.409852 | {
"authors": [
"Averiias",
"alex421220"
],
"repo": "Averiias/Universal-SilentAim",
"url": "https://github.com/Averiias/Universal-SilentAim/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1635768922 | fix(ci): skip task with secrets from for PullRequest
hi @jafin , this is not perfect but it should allow to run unit test from fork pullrequest.
We may set up a working branch in order to autodeploy demo and alpha-beta after a merge.
@guillaume-chervet thanks a good improvement!
| gharchive/pull-request | 2023-03-22T13:29:32 | 2025-04-01T04:32:17.413308 | {
"authors": [
"guillaume-chervet",
"jafin"
],
"repo": "AxaFrance/react-oidc",
"url": "https://github.com/AxaFrance/react-oidc/pull/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
621889603 | [FEAT] Add possibility to customize Authenticating component for oidc-redux as in oidc-context
Issue and Steps to Reproduce
Currently react-oidc-redux only allow to customize 4 components:
notAuthenticated: PropTypes.elementType,
notAuthorized: PropTypes.elementType,
callbackComponentOverride: PropTypes.elementType,
sessionLostComponent: PropTypes.elementType,
but not the Authenticating component, as in oidc-context
Versions
Latest
Screenshots
Expected
Add authenticating props to Oidc or OidcSecure Component
Actual
Additional Details
Installed packages:
I know it's not recommended to use redux for new projects, but for old ones with redux I think it's a good idea to have the same feature.
The same happens to me, I need Authenticating component support to change the look and feel of that component in a Redux Project.
It's relatively simple to add a prop to the OidcSecure component. If the owner is OK with it, I think I can make a PR after my holidays if no one is available in the meantime :P
yes, go on :)
PR created
Thank you for merging my PR, however there's an issue regarding core package dependencies after your release:
npm ERR! code ETARGET npm ERR! notarget No matching version found for @axa-fr/react-oidc-core@^3.1.3. npm ERR! notarget In most cases you or one of your dependencies are requesting npm ERR! notarget a package version that doesn't exist. npm ERR! notarget npm ERR! notarget It was specified as a dependency of '@axa-fr/react-oidc-redux'
Would you mind having a look at this issue? My whole pipeline is blocked.
Thanks !
Thank you @vivo87 for your awesome pull request! It is working now.
| gharchive/issue | 2020-05-20T16:13:59 | 2025-04-01T04:32:17.418409 | {
"authors": [
"drwaky",
"guillaume-chervet",
"guillaumechervetaxa",
"vivo87"
],
"repo": "AxaGuilDEv/react-oidc",
"url": "https://github.com/AxaGuilDEv/react-oidc/issues/452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
230143235 | ERROR: monitor is not valid
Hello.
First of all, congratulations for your project and your work you have done.
Since yesterday, when I type the command "python3.5 update_status.py config.ini" I get this error: "ERROR: monitor is not valid", and it is impossible to go further, as it stops.
My UptimeRobot API key is correct, and my Cachet API key is correct as well.
For now, I'm stuck on this error. Where is the problem?
Make sure you have the same url you are monitoring in the config file.
| gharchive/issue | 2017-05-20T09:52:52 | 2025-04-01T04:32:17.421502 | {
"authors": [
"FleuryK",
"camilonova"
],
"repo": "Axiacore/cachet-uptime-robot",
"url": "https://github.com/Axiacore/cachet-uptime-robot/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2184837067 | APIM-cli import breaks feAPIs with basic auth w/out password
APIM-CLI version
1.14.2
API-Management version
7.7.3
Bug description
For an feAPI that has an outbound backend server with BASIC auth, where only the username is defined and the password is empty, "apim api get" retrieves:
" [...]
"authenticationProfiles" : [ {
"name" : "_default",
"isDefault" : true,
"parameters" : {
"username" : "user",
"id" : 0
},
[...]"
Doing an import with this, breaks the feAPI and user gets 500. Exception in logs reads:
"
[...]
nested fault: no password specified
java.lang.NullPointerException: no password specified
at com.vordel.dwe.http.ClientTransaction.basicAuthString(Native Method)
at com.vordel.circuit.net.BasicAuthNConnectionHandler$BasicState.getAuthToken(BasicAuthNConnectionHandler.java:152)
at com.vordel.circuit.net.BasicAuthNConnectionHandler$BasicState.expectContinue(BasicAuthNConnectionHandler.java:191)
[...]"
When setting this basic (user with no password) via API Manager UI, the PUT request body reads as:
" [...]
"authenticationProfiles": [
{
"name": "_default",
"isDefault": true,
"parameters": {
"username": "username",
"id": 0,
"password": ""
},
"type": "http_basic"
}
[...]"
Steps to reproduce
No response
Relevant log output
No response
@rathnapandi this is fixed in 1.14.4 right?
Hi @emmanuel-galindo, can you test it with the latest release?
| gharchive/issue | 2024-03-13T20:32:04 | 2025-04-01T04:32:17.432343 | {
"authors": [
"emmanuel-galindo",
"rathnapandi"
],
"repo": "Axway-API-Management-Plus/apim-cli",
"url": "https://github.com/Axway-API-Management-Plus/apim-cli/issues/470",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1999155705 | Security vulnerability
What is a proper way to report security vulnerability?
Hello @korzo, it'll be better if you can contact me with the details on my email: ayubtouba@gmail.com
And thank you for the interest to improve the library 🙏
@AyubTouba I sent you an email.
| gharchive/issue | 2023-11-17T13:38:03 | 2025-04-01T04:32:17.441352 | {
"authors": [
"AyubTouba",
"korzo"
],
"repo": "AyubTouba/nestjs-dbvalidator",
"url": "https://github.com/AyubTouba/nestjs-dbvalidator/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
953228617 | Maximal Rectangle (LeetCode) (Hard)
Details
Given a rows x cols binary matrix filled with 0's and 1's, find the largest rectangle containing only 1's and return its area.
This problem is from LeetCode. It can be solved using stack. I will be using next smallest and previous smallest element problems.
Link to the problem: https://leetcode.com/problems/maximal-rectangle/
Programming language
[ ] C
[x] C++
[ ] Java
[ ] Python
I would like to work on this issue using C++ language.
Please assign me this issue under LGMSoC'21.
Please assign this issue to me under LGM-SoC'21
@ravikr126 @sameeksharl @Amit366 @Ayush7614
Please assign this issue to me under LGM-SoC'21 in java
@ravikr126 @sameeksharl @Amit366 @Ayush7614
Please assign this issue to me under LGM-SoC'21 in python
@ravikr126 @sameeksharl @Amit366 @Ayush7614
Please assign this issue to me under LGM-SoC'21 in python
@ravikr126 @sameeksharl @Amit366 @Ayush7614
You can also work on it @Pratik57, just create a PR for this question. No need to open a new issue.
Please assign this issue to me under LGM-SoC'21 in python
@ravikr126 @sameeksharl @Amit366 @Ayush7614
You can also work on it @Pratik57, just create a PR for this question. No need to open a new issue.
No, I am not opening any new issue. If you can assign me to work on this issue in Python, then I can send the pull request.
@Pratik57 go ahead
@sameeksharl Sorry, but this question comes under the category of hard problems; it should be at least level 3 or 2.
| gharchive/issue | 2021-07-26T19:44:34 | 2025-04-01T04:32:17.457053 | {
"authors": [
"Karnika06",
"Pratik57",
"Rounak-chauhan",
"ravikr126",
"sameeksharl"
],
"repo": "Ayush7614/Daily-Coding-DS-ALGO-Practice",
"url": "https://github.com/Ayush7614/Daily-Coding-DS-ALGO-Practice/issues/2258",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1277562742 | Add Version Parameter
Add a parameter to manage the version, because right now all the components are using the same version number.
This is handled using the appVersion inside the Chart.yaml.
Fix #49
| gharchive/issue | 2022-06-21T01:02:38 | 2025-04-01T04:32:17.508099 | {
"authors": [
"alfespa17",
"jcanizalez"
],
"repo": "AzBuilder/terrakube-helm-chart",
"url": "https://github.com/AzBuilder/terrakube-helm-chart/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2373169196 | Aura fix
Missed a 0 at level 3. It was giving a 90% reduction but should be 9%.
I think this can be closed as techno did a hotfix release
| gharchive/pull-request | 2024-06-25T16:55:06 | 2025-04-01T04:32:17.514972 | {
"authors": [
"vampirika"
],
"repo": "AzerothWarsLR/WarcraftLegacies",
"url": "https://github.com/AzerothWarsLR/WarcraftLegacies/pull/3096",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2197907632 | feat: add gas costs to TxEffect
Please read contributing guidelines and remove this line.
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#5337 👈
#5309
#5301
#5226
master
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @just-mitch and the rest of your teammates on Graphite
Benchmark results
Metrics with a significant change:
note_trial_decrypting_time_in_ms (32): 112 (+234%)
Detailed results
All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.
This benchmark source data is available in JSON format on S3 here.
Values are compared against data from master at commit 4064e180 and shown if the difference exceeds 1%.
L2 block published to L1
Each column represents the number of txs on an L2 block published to L1.
| Metric | 8 txs | 32 txs | 64 txs |
| --- | --- | --- | --- |
| l1_rollup_calldata_size_in_bytes | 676 | 676 | 676 |
| l1_rollup_calldata_gas | 6,424 | 6,424 | 6,412 |
| l1_rollup_execution_gas | 585,757 | 585,757 | 585,745 |
| l2_block_processing_time_in_ms | 1,336 (+1%) | 4,714 (+1%) | 9,033 |
| note_successful_decrypting_time_in_ms | 196 (+20%) | 497 (-3%) | 968 (-1%) |
| note_trial_decrypting_time_in_ms | 102 (+50%) | :warning: 112 (+234%) | 96.9 (-13%) |
| l2_block_building_time_in_ms | 12,128 (+3%) | 44,599 (+3%) | 89,684 (+5%) |
| l2_block_rollup_simulation_time_in_ms | 7,369 (+3%) | 25,854 (+3%) | 50,964 (+4%) |
| l2_block_public_tx_process_time_in_ms | 4,738 (+4%) | 18,697 (+4%) | 38,628 (+6%) |
L2 chain processing
Each column represents the number of blocks on the L2 chain where each block has 16 txs.
| Metric | 5 blocks | 10 blocks |
| --- | --- | --- |
| node_history_sync_time_in_ms | 14,648 (-2%) | 26,317 (-7%) |
| note_history_successful_decrypting_time_in_ms | 1,321 (+3%) | 2,362 (-4%) |
| note_history_trial_decrypting_time_in_ms | 72.9 (-45%) | 115 (-22%) |
| node_database_size_in_bytes | 18,645,072 | 34,844,752 (-1%) |
| pxe_database_size_in_bytes | 29,859 | 59,414 |
Circuits stats
Stats on running time and I/O sizes collected for every circuit run across all benchmarks.
| Circuit | circuit_simulation_time_in_ms | circuit_input_size_in_bytes | circuit_output_size_in_bytes |
| --- | --- | --- | --- |
| private-kernel-init | 182 (-2%) | 44,379 | 28,222 |
| private-kernel-ordering | 171 | 52,888 | 14,304 |
| base-parity | 3,499 (+2%) | 128 | 311 |
| root-parity | 1,209 (-4%) | 1,244 | 311 |
| base-rollup | 14,955 (+2%) | 165,768 | 861 |
| root-rollup | 49.2 (-1%) | 4,359 | 725 |
| private-kernel-inner | 226 (-3%) | 73,803 | 28,222 |
| public-kernel-app-logic | 97.5 (+3%) | 35,260 | 28,225 |
| public-kernel-tail | 141 (+4%) | 40,936 | 28,225 |
| merge-rollup | 7.47 (-18%) | 2,568 | 861 |
| public-kernel-teardown | 94.6 (+4%) | 35,260 | 28,225 |
| public-kernel-setup | 93.8 (+5%) | 35,260 | 28,225 |
Tree insertion stats
The duration to insert a fixed batch of leaves into each tree type.
| Metric | 1 leaves | 16 leaves | 64 leaves | 128 leaves | 512 leaves | 1024 leaves | 2048 leaves | 4096 leaves | 32 leaves |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| batch_insert_into_append_only_tree_16_depth_ms | 10.0 (+1%) | 16.0 (+1%) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.8 | 31.6 | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.584 (+1%) | 0.493 (+1%) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | 47.6 (+4%) | 71.7 (+2%) | 231 (+3%) | 448 | 854 (+2%) | 1,726 (+4%) | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | 96.0 | 159 | 543 | 1,055 | 2,079 | 4,127 | N/A |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | 0.488 (+4%) | 0.442 (+2%) | 0.421 (+3%) | 0.417 | 0.405 (+1%) | 0.413 (+4%) | N/A |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | 54.2 (-1%) | 106 (+1%) | 335 (+1%) | 665 | 1,289 (+2%) | 2,567 (+2%) | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | 105 | 207 | 691 | 1,363 | 2,707 | 5,395 | N/A |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | 0.478 (-1%) | 0.480 (+1%) | 0.456 (+2%) | 0.458 (+1%) | 0.446 (+2%) | 0.448 (+2%) | N/A |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 61.4 (+1%) |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 109 |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 0.537 (+1%) |
Miscellaneous
Transaction sizes based on how many contract classes are registered in the tx.
| Metric | 0 registered classes | 1 registered classes |
| - | - | - |
| tx_size_in_bytes | 14,976 | 495,328 |
Transaction size based on fee payment method
| Metric | none fee payment method | native fee payment method | fpc_public fee payment method | fpc_private fee payment method |
| - | - | - | - | - |
| tx_with_fee_size_in_bytes | 781 (+1%) | 913 (+1%) | 1,169 (+1%) | 1,385 (+1%) |
Transaction processing duration by data writes.
| Metric | 0 new note hashes | 1 new note hashes | 2 new note hashes |
| - | - | - | - |
| tx_pxe_processing_time_ms | 1,785 (+2%) | 1,198 (-1%) | 4,453 (+2%) |
| Metric | 0 public data writes | 1 public data writes | 2 public data writes | 3 public data writes | 4 public data writes | 5 public data writes | 8 public data writes |
| - | - | - | - | - | - | - | - |
| tx_sequencer_processing_time_ms | 9.99 (+1%) | 586 (+2%) | 448 (+3%) | 1,187 (+4%) | 495 (+4%) | 2,048 (+3%) | 572 (+4%) |
| gharchive/pull-request | 2024-03-20T15:58:43 | 2025-04-01T04:32:17.576163 | {
"authors": [
"AztecBot",
"just-mitch"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/5337",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2440713864 | chore: Validate outputs in kernel circuits
Kernel circuits have a validation step that checks their output, which is constructed in an unconstrained function earlier. This check should not be needed when running in unconstrained mode, so it was skipped.
However, this means that errors in the unconstrained construction of the output could go unnoticed in tests that run without real proofs enabled, since those use the -simulated versions of kernels, which are unconstrained. To catch these errors, we now force kernel circuits to always validate this output.
Benchmark results
Metrics with a significant change:
proof_construction_time_sha256_ms (4): 1,553 (-48%)
proof_construction_time_sha256_ms (16): 700 (-17%)
proof_construction_time_sha256_100_ms (32): 5,538 (-19%)
proof_construction_time_sha256_30_ms (1): 11,767 (-27%)
protocol_circuit_simulation_time_in_ms (private-kernel-tail): 687 (+60%)
protocol_circuit_proving_time_in_ms (base-parity): 2,187 (+50%)
Detailed results
All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.
This benchmark source data is available in JSON format on S3 here.
Proof generation
Each column represents the number of threads used in proof generation.
| Metric | 1 threads | 4 threads | 16 threads | 32 threads | 64 threads |
| - | - | - | - | - | - |
| proof_construction_time_sha256_ms | 5,720 | :warning: 1,553 (-48%) | :warning: 700 (-17%) | 754 (+2%) | 761 (-1%) |
| proof_construction_time_sha256_30_ms | :warning: 11,767 (-27%) | 3,155 | 1,416 | 1,453 (+1%) | 1,468 |
| proof_construction_time_sha256_100_ms | 43,937 (-3%) | 12,409 (+3%) | 5,649 (-1%) | :warning: 5,538 (-19%) | 5,490 (-1%) |
| proof_construction_time_poseidon_hash_ms | 78.0 | 34.0 | 34.0 | 57.0 (-2%) | 87.0 |
| proof_construction_time_poseidon_hash_30_ms | 1,518 | 417 | 202 (+1%) | 223 (-3%) | 263 (-1%) |
| proof_construction_time_poseidon_hash_100_ms | 5,608 (-5%) | 1,514 | 685 (-1%) | 721 | 750 (+1%) |
L2 block published to L1
Each column represents the number of txs on an L2 block published to L1.
| Metric | 4 txs | 8 txs | 16 txs |
| - | - | - | - |
| l1_rollup_calldata_size_in_bytes | 708 | 708 | 708 |
| l1_rollup_calldata_gas | 6,600 | 6,588 | 6,576 |
| l1_rollup_execution_gas | 611,442 | 611,581 | 611,720 |
| l2_block_processing_time_in_ms | 763 (-2%) | 1,422 (-1%) | 2,767 |
| l2_block_building_time_in_ms | 10,674 (-1%) | 20,733 (-1%) | 41,336 (-1%) |
| l2_block_rollup_simulation_time_in_ms | 10,674 (-1%) | 20,733 (-1%) | 41,336 (-1%) |
| l2_block_public_tx_process_time_in_ms | 8,927 (-1%) | 18,779 (-1%) | 39,287 (-1%) |
L2 chain processing
Each column represents the number of blocks on the L2 chain where each block has 8 txs.
| Metric | 3 blocks | 5 blocks |
| - | - | - |
| node_history_sync_time_in_ms | 7,116 | 9,980 |
| node_database_size_in_bytes | 12,742,736 | 16,887,888 |
| pxe_database_size_in_bytes | 16,254 | 26,813 |
Circuits stats
Stats on running time and I/O sizes collected for every kernel circuit run across all benchmarks.
| Circuit | simulation_time_in_ms | witness_generation_time_in_ms | input_size_in_bytes | output_size_in_bytes | proving_time_in_ms | proof_size_in_bytes | num_public_inputs | size_in_gates |
| - | - | - | - | - | - | - | - | - |
| private-kernel-init | 111 (+14%) | 411 (-1%) | 21,798 | 53,090 | N/A | N/A | N/A | N/A |
| private-kernel-inner | 188 (+7%) | 718 (-1%) | 80,891 | 53,266 | N/A | N/A | N/A | N/A |
| private-kernel-tail | :warning: 687 (+60%) | 872 (-2%) | 61,159 | 57,447 | N/A | N/A | N/A | N/A |
| base-parity | 6.43 (-1%) | 683 (+3%) | 160 | 96.0 | :warning: 2,187 (+50%) | 13,188 | 19.0 | 131,072 |
| root-parity | 111 (-2%) | 130 (-1%) | 69,084 | 96.0 | 31,935 (+1%) | 13,188 | 19.0 | 4,194,304 |
| base-rollup | 3,344 (-1%) | 4,930 | 182,192 | 632 | 44,821 | 13,988 | 44.0 | 4,194,304 |
| root-rollup | 156 (-1%) | 129 (+3%) | 54,461 | 684 | 29,503 (+2%) | 13,956 | 43.0 | 4,194,304 |
| public-kernel-setup | 96.6 (-3%) | 2,749 (+1%) | 112,985 | 80,246 | 18,418 | 141,444 | 4,027 | 2,097,152 |
| public-kernel-app-logic | 113 (-1%) | 4,033 (-1%) | 112,985 | 80,246 | 11,592 (+1%) | 141,444 | 4,027 | 1,048,576 |
| public-kernel-tail | 629 (-1%) | 24,605 (-10%) | 410,726 | 10,814 | 68,727 (+1%) | 26,372 | 431 | 8,388,608 |
| private-kernel-reset-tiny | 239 (+4%) | 895 (-1%) | 76,585 | 52,961 | N/A | N/A | N/A | N/A |
| private-kernel-tail-to-public | 5,248 (-1%) | 1,559 (-1%) | 888,732 | 1,697 | N/A | N/A | N/A | N/A |
| public-kernel-teardown | 90.6 (-1%) | 4,084 (+3%) | 112,985 | 80,246 | 19,761 | 141,444 | 4,027 | 2,097,152 |
| merge-rollup | 60.7 (-1%) | N/A | 35,678 | 632 | N/A | N/A | N/A | N/A |
| undefined | N/A | N/A | N/A | N/A | 158,826 | N/A | N/A | N/A |
Stats on running time collected for app circuits
| Function | input_size_in_bytes | output_size_in_bytes | witness_generation_time_in_ms | proof_size_in_bytes | proving_time_in_ms |
| - | - | - | - | - | - |
| ContractClassRegisterer:register | 1,312 | 11,699 | 343 (-1%) | N/A | N/A |
| ContractInstanceDeployer:deploy | 1,376 | 11,699 | 27.3 (-2%) | N/A | N/A |
| MultiCallEntrypoint:entrypoint | 1,888 | 11,699 | 601 (-2%) | N/A | N/A |
| GasToken:deploy | 1,344 | 11,699 | 430 (-4%) | N/A | N/A |
| SchnorrAccount:constructor | 1,280 | 11,699 | 427 (-1%) | N/A | N/A |
| SchnorrAccount:entrypoint | 2,272 | 11,699 | 817 | N/A | N/A |
| Token:privately_mint_private_note | 1,248 | 11,699 | 504 | N/A | N/A |
| FPC:fee_entrypoint_public | 1,312 | 11,699 | 86.1 (-2%) | N/A | N/A |
| Token:transfer | 1,280 | 11,699 | 1,238 (-3%) | N/A | N/A |
| AuthRegistry:set_authorized (avm) | 18,427 | N/A | N/A | 102,048 | 2,142 (+5%) |
| FPC:prepare_fee (avm) | 22,894 | N/A | N/A | 102,112 | 2,382 (+8%) |
| Token:transfer_public (avm) | 46,817 | N/A | N/A | 102,112 | 6,853 (+5%) |
| AuthRegistry:consume (avm) | 26,670 | N/A | N/A | 102,080 | 2,719 (+4%) |
| FPC:pay_refund (avm) | 26,163 | N/A | N/A | 102,080 | 3,698 (+3%) |
| Benchmarking:create_note | 1,312 | 11,699 | 421 | N/A | N/A |
| SchnorrAccount:verify_private_authwit | 1,248 | 11,699 | 43.7 (-8%) | N/A | N/A |
| Token:unshield | 1,344 | 11,699 | 1,052 (-3%) | N/A | N/A |
| FPC:fee_entrypoint_private | 1,344 | 11,699 | 1,351 (-4%) | N/A | N/A |
AVM Simulation
Time to simulate various public functions in the AVM.
| Function | time_ms | bytecode_size_in_bytes |
| - | - | - |
| GasToken:_increase_public_balance | 95.2 (-1%) | 8,139 |
| GasToken:set_portal | 11.5 (-10%) | 2,362 |
| Token:constructor | 119 (-3%) | 16,400 |
| FPC:constructor | 91.8 | 9,304 |
| GasToken:mint_public | 79.6 (-3%) | 6,150 |
| Token:mint_public | 102 (-8%) | 11,720 |
| Token:assert_minter_and_mint | 62.3 (+4%) | 8,028 |
| AuthRegistry:set_authorized | 44.4 (+2%) | 4,537 |
| FPC:prepare_fee | 238 (+4%) | 8,812 |
| Token:transfer_public | 41.9 (+7%) | 32,641 |
| FPC:pay_refund | 73.3 (+10%) | 12,114 |
| Benchmarking:increment_balance | 1,044 | 7,450 |
| Token:_increase_public_balance | 11.8 (+1%) | 8,960 |
| FPC:pay_refund_with_shielded_rebate | 142 (-1%) | 12,663 |
Public DB Access
Time to access various public DBs.
| Function | time_ms |
| - | - |
| get-nullifier-index | 0.155 (+2%) |
Tree insertion stats
The duration to insert a fixed batch of leaves into each tree type.
| Metric | 1 leaves | 16 leaves | 64 leaves | 128 leaves | 256 leaves | 512 leaves | 1024 leaves |
| - | - | - | - | - | - | - | - |
| batch_insert_into_append_only_tree_16_depth_ms | 10.3 (-2%) | 16.6 (-1%) | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.8 | 31.7 | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.594 (-2%) | 0.512 (-1%) | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | 47.8 (-1%) | 77.9 (+2%) | 135 (+2%) | 245 (-2%) | 472 (-1%) |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | 95.9 | 159 | 287 | 543 | 1,055 |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | 0.488 (-1%) | 0.478 (+1%) | 0.463 (+1%) | 0.444 (-3%) | 0.439 (-2%) |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | 59.0 (-1%) | 112 (-1%) | 183 (-2%) | 352 (-1%) | 722 (+2%) |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | 109 | 207 | 355 | 691 | 1,363 |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | 0.499 (-1%) | 0.500 (-1%) | 0.485 (-1%) | 0.478 | 0.495 (+2%) |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | 72.7 (-1%) | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | 133 | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | 0.518 (-1%) | N/A | N/A | N/A | N/A |
Miscellaneous
Transaction sizes based on how many contract classes are registered in the tx.
| Metric | 0 registered classes | 1 registered classes |
| - | - | - |
| tx_size_in_bytes | 72,017 | 666,910 |
Transaction size based on fee payment method
| Metric | |
| - | |
| gharchive/pull-request | 2024-07-31T18:55:03 | 2025-04-01T04:32:17.665717 | {
"authors": [
"AztecBot",
"spalladino"
],
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/7706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
629347018 | GitVersion: Mainline doesn't yet support pre-release tags.
When running the workflow in a branch, the pipeline fails because there is a previous release with a pre-release semver and GitVersion's versioning mode is set to mainline, resulting in the error above.
The current working solution is to delete those tags via the command line with
git push --delete origin tagname
We've tested this from scratch and we could not repro the problem. We're going to close.
| gharchive/issue | 2020-06-02T16:34:40 | 2025-04-01T04:32:17.700270 | {
"authors": [
"Joll59",
"cmayomsft"
],
"repo": "Azure-Samples/Speech-Service-DevOps-Template",
"url": "https://github.com/Azure-Samples/Speech-Service-DevOps-Template/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2458753232 | Add python flask sample
Purpose
Adds a Python Flask sample to support the new Python tutorial
Does this introduce a breaking change?
[ ] Yes
[ x] No
Pull Request Type
What kind of change does this Pull Request introduce?
[ ] Bugfix
[ ] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[x ] Other... Please describe: Adds a Python sample to support new Python tutorial
How to Test
To test the code follow the instructions in this tutorial (preview): https://review.learn.microsoft.com/en-us/azure/app-service/tutorial-connect-msi-key-vault-python?branch=pr-en-us-283653
What to Check
Verify that the language detector works
Other Information
@pamelafox Need to pick this one or #5 . Please see email titled "Can you check out these two Azure Sample PRs and give feedback?". Thanks!
| gharchive/pull-request | 2024-08-09T22:41:32 | 2025-04-01T04:32:17.719464 | {
"authors": [
"JimacoMS4"
],
"repo": "Azure-Samples/app-service-language-detector",
"url": "https://github.com/Azure-Samples/app-service-language-detector/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1660223383 | Is it possible to use only the gpt3.5 turbo model?
I noticed that this project uses both models, Davinci-003 and GPT-3.5 Turbo. What are their respective roles? As GPT-3.5 Turbo is a general-purpose model, can we use only it to do the work? Because the price of Davinci-003 is 10 times higher than that of GPT-3.5 Turbo. :smile:
Davinci-003 gives better answers in Q&A
@hophanms Is it possible to use the davinci model instead of gpt-35-turbo? We've applied for gpt-35/gpt-4 but are still on the waitlist. If we can utilise davinci instead, it would be great.
| gharchive/issue | 2023-04-10T04:47:18 | 2025-04-01T04:32:17.721126 | {
"authors": [
"bixinhaner",
"hophanms",
"nubgamerz"
],
"repo": "Azure-Samples/azure-search-openai-demo",
"url": "https://github.com/Azure-Samples/azure-search-openai-demo/issues/102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2398378291 | Work with existing data resources and ensure testability of API configuration
Please provide us with the following information:
This issue is for a: (mark with an x)
- [ ] bug report -> please search issues before submitting
- [ x ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
Any log messages given by the failure
Expected/desired behavior
There are so many moving parts; it would be good to provide a test script to verify backend setup:
model/deployment name
data source credentials
API version (!)
Also, the README suggest creation of new data resources based on local file ingestion instead of connecting to existing data resources in Azure.
OS and Version?
Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
azd version?
run azd version and copy paste here.
Versions
Mention any other details that might be useful
Thanks! We'll be in touch soon.
Could you clarify more what the test script would do? This repo currently includes various kinds of tests: unit/integration tests with Pytest and E2E tests with Playwright. It also includes a GitHub action that can deploy the repo.
Are you looking for smoke tests on the deployed app?
| gharchive/issue | 2024-07-09T14:27:40 | 2025-04-01T04:32:17.725751 | {
"authors": [
"pamelafox",
"synergiator"
],
"repo": "Azure-Samples/azure-search-openai-demo",
"url": "https://github.com/Azure-Samples/azure-search-openai-demo/issues/1802",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1658984655 | Microsoft.CognitiveServices.Speech.core.dll not found, NuGet errors
Describe the bug
Running new code using the text to speech for the first time. In calling the SpeechConfig.FromSubscription function I get the exception:
System.DllNotFoundException
HResult=0x80131524
Message=Unable to load DLL 'Microsoft.CognitiveServices.Speech.core.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
Source=Microsoft.CognitiveServices.Speech.csharp
StackTrace:
at Microsoft.CognitiveServices.Speech.Internal.SpeechConfig.speech_config_from_subscription(IntPtr& config, String subscriptionKey, String region)
at Microsoft.CognitiveServices.Speech.SpeechConfig.FromSubscription(String subscriptionKey, String region)
at JTLanguageModelsWindows.Media.TextToSpeechPlatform.d__38.MoveNext() in C:\JTLanguage\JTLanguageModelsWindows\Media\TextToSpeechPlatform.cs:line 832
However, I failed to notice that there were some NuGet errors when I installed the Microsoft.CognitiveServices.Speech package via NuGet. See the errors below.
To Reproduce
Create new .Net console project, .Net 4.7.2 (I called it JTExp).
Open Manage NuGet Packages for Solution and add Microsoft.CognitiveServices.Speech.
See the NuGet errors:
Successfully installed 'Microsoft.CognitiveServices.Speech 1.27.0' to JTExp
Failed to update binding redirects for JTExp\JTExp.csproj : Exception has been thrown by the target of an invocation.
Expected behavior
A clear and concise description of what you expected to happen.
Version of the Cognitive Services Speech SDK
Which version of the SDK are you using.
Platform, Operating System, and Programming Language
OS: Windows 10
Hardware - x64
Programming language: C#
Browser Edge
Additional context
I'm running an ASP.NET web app using .NET Framework 4.7.2.
I installed the Microsoft.CognitiveServices.Speech 1.27.0 via Manage NuGet Packages for Solution in a freshly updated Visual Studio 2022 (17.5.3), however, I failed to notice there were some error messages:
Successfully installed 'Microsoft.CognitiveServices.Speech 1.27.0' to JTLanguageModelsWindows
Failed to update binding redirects for JTLanguageModelsWindows\JTLanguageModelsWindows.csproj : Exception has been thrown by the target of an invocation.
Executing nuget actions took 2.05 sec
Package 'Microsoft.CognitiveServices.Speech.1.27.0' already exists in folder 'C:\JTLanguage\packages'
Added package 'Microsoft.CognitiveServices.Speech.1.27.0' to 'packages.config'
Successfully installed 'Microsoft.CognitiveServices.Speech 1.27.0' to JTLanguageWeb
Failed to update binding redirects for JTLanguageWeb\JTLanguageWeb.csproj : Exception has been thrown by the target of an invocation.
Executing nuget actions took 1.37 sec
Package 'Microsoft.CognitiveServices.Speech.1.27.0' already exists in folder 'C:\JTLanguage\packages'
Added package 'Microsoft.CognitiveServices.Speech.1.27.0' to 'packages.config'
Successfully installed 'Microsoft.CognitiveServices.Speech 1.27.0' to JTLanguageWorker
Failed to update binding redirects for JTLanguageTools\JTLanguageWorker\JTLanguageWorker.csproj : Exception has been thrown by the target of an invocation.
These are the messages from my repo case:
Attempting to gather dependency information for package 'Microsoft.CognitiveServices.Speech.1.27.0' with respect to project 'JTExp', targeting '.NETFramework,Version=v4.7.2'
Gathering dependency information took 347 ms
Attempting to resolve dependencies for package 'Microsoft.CognitiveServices.Speech.1.27.0' with DependencyBehavior 'Lowest'
Resolving dependency information took 0 ms
Resolving actions to install package 'Microsoft.CognitiveServices.Speech.1.27.0'
Resolved actions to install package 'Microsoft.CognitiveServices.Speech.1.27.0'
Package 'Microsoft.CognitiveServices.Speech.1.27.0' already exists in folder 'C:\JTLanguage\packages'
Added package 'Microsoft.CognitiveServices.Speech.1.27.0' to 'packages.config'
Successfully installed 'Microsoft.CognitiveServices.Speech 1.27.0' to JTExp
Failed to update binding redirects for JTExp\JTExp.csproj : Exception has been thrown by the target of an invocation.
Executing nuget actions took 897 ms
Time Elapsed: 00:00:01.2527044
========== Finished ==========
I was able to get this to reproduce for an ASP.Net framework based web app.
Talking to a colleague, there had been another report of this issue; that customer worked around it by switching to a dotnet core based app.
Looking at the deployed webapp in Azure, the native dll's that the SDK depends on aren't being deployed into the bin folder, not sure why they aren't.
Ok, I managed to get the Speech SDK to load in a .Net Framework based web app.
The steps...
ANY CPU didn't work, as it builds for the current machine, and not the target machine.
I selected a x86 build (my Azure web app was 32 bit windows) and built it locally.
Then in the webapp project's property page under Publish there is an option to deploy all files in the project folder and that deployed the core dll to the site.
hi @jtsoftware, were you able to resolve your issue with @rhurey's suggestion?
Sorry, I've been on vacation out of the country the last two weeks.
No, I haven't been able to resolve this. I tried creating a web app with ASP.Net 4.7.2 with MVC, and added x86 and x64 configurations, but I get a build error:
1>C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\Bin\amd64\Microsoft.Common.CurrentVersion.targets(832,5): error : The BaseOutputPath/OutputPath property is not set for project 'WebApplication1.csproj'. Please check to make sure that you have specified a valid combination of Configuration and Platform for this project. Configuration='Debug' Platform='x86'. This error may also appear if some other project is trying to follow a project-to-project reference to this project, this project has been unloaded or is not included in the solution, and the referencing project does not build using the same or an equivalent Configuration or Platform.
Apparently, adding that configuration didn't add the necessary stuff to the project file. How did you do it?
I get the same kind of error if I try it in my main solution.
The problem is that I have a 10-year old legacy solution with 22 projects totaling over ~1 million lines of code, so I'm kind of uncomfortable messing around with the core configuration. Converting to dotnet core would probably kill the project with the effort needed to convert to something newer. A bit after I originally created the project, I tried switching to the new core libraries, but it was missing some key functionality I used, so I stayed with the .Net framework.
Rather than me trying to work around the problem, can you fix the problems that are keeping me from being able to use it? or give me a simpler solution such as just copying some libraries? Otherwise, I may have to stick with using Google Text-to-Speech, which works fine, though I really would like to use the Microsoft voices. This is for a language learning platform, so have good speech is important.
I tried the nuGet install again, and it worked, the only difference being that I ran it in a Visual Studio not as Administrator. (You can only debug web apps as an Administrator). I don't see the .core library in bin\Debug, but I am able to call functions without getting the Unable to load DLL message. I haven't got the code fully working, but I can generate a .wav file, so it looks like the problem is not fatal. Should I close this, or does it need more research?
Actually, I forgot that I added the core DLL path to the PATH variable, which is why it works for me. i.e.:
C:\JTLanguage\packages\Microsoft.CognitiveServices.Speech.1.27.0\runtimes\win-x64\native;
So a fix for nuGet is still probably needed.
I have a second machine and installed the nuGet package and set the path like above, but it still can't find the DLL. If I copy the DLLs manually to the web app bin directory, I get a System.ApplicationError exception with error code 5. Any ideas?
Is the WebApp running x64?
The default for the WebApp was x86 when I was looking at this.
The key takeaway from when I'd gotten a successful run was getting the correct native binaries in the runtime directory, and then having them published to the cloud.
I'd set my local build to x86 to achieve the first part, and then there was a project setting that published all the files in the target directory to the cloud, and not just ones that showed up as a managed reference.
I got the text to speech working. I didn't have the license files in place. Does this account for error 5? Or could there be something more random going on that's unrelated? Because the runtime libraries are not in the bin directory of the other machine that worked first.
I'm still using Any CPU, as I couldn't get either x64 or x86 to build.
This seems like a related issue and there is a workaround for .NET Framework:
https://stackoverflow.com/questions/75616169/unable-to-load-dll-microsoft-cognitiveservices-speech-core-dll-could-not-be-f/75623332#75623332
We would need to update the NuGet package to automatically deploy all binaries.
Closing the issue as workarounds have been provided; we will plan the support for the official NuGet package.
| gharchive/issue | 2023-04-07T16:32:41 | 2025-04-01T04:32:17.748848 | {
"authors": [
"glecaros",
"jhakulin",
"jtsoftware",
"rhurey"
],
"repo": "Azure-Samples/cognitive-services-speech-sdk",
"url": "https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/1907",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
939047548 | Improves SKU request
What this PR does / why we need it:
Improves SKU request: ask Azure API for SKU related only to the RP location, not all SKUs.
Test plan for issue:
Tested locally in eastus. I guess we will have to test in prod for rest of the regions.
Is there any documentation that needs to be updated for this PR?
No, it is just optimisation.
/azp run e2e
| gharchive/pull-request | 2021-07-07T16:08:54 | 2025-04-01T04:32:17.769311 | {
"authors": [
"m1kola"
],
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/pull/1588",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2378823860 | fix: allow partially-succeeded builds in UVM artifact download task
What type of PR is this?
What this PR does / why we need it:
uvm signing builds are in a "partiallySucceeded" state due to warnings caused by non-blocking supply-chain violations coming from RP, this change still allows the VHD build to go through even if the build we download UVM artifacts from is in a "partiallySucceeded" state
Which issue(s) this PR fixes:
Fixes #
Requirements:
[ ] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[ ] tested upgrade from previous version
Special notes for your reviewer:
Release note:
none
Pull Request Test Coverage Report for Build 9701672643
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 70.486%
Totals
Change from base Build 9693263400:
0.0%
Covered Lines:
2627
Relevant Lines:
3727
💛 - Coveralls
| gharchive/pull-request | 2024-06-27T18:22:09 | 2025-04-01T04:32:17.776944 | {
"authors": [
"cameronmeissner",
"coveralls"
],
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/4559",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2424390846 | docs: update windows release notes for 2024-07B
What type of PR is this?
/kind documentation
What this PR does / why we need it:
Add windows image release notes for new AKS Windows images with 2024-07B
Which issue(s) this PR fixes:
Fixes #
Requirements:
[x] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[ ] tested upgrade from previous version
Special notes for your reviewer:
Release note:
none
Pull Request Test Coverage Report for Build 10054104023
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 71.021%
Totals
Change from base Build 10052545766:
0.0%
Covered Lines:
2566
Relevant Lines:
3613
💛 - Coveralls
| gharchive/pull-request | 2024-07-23T06:46:35 | 2025-04-01T04:32:17.784669 | {
"authors": [
"AbelHu",
"coveralls"
],
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/4681",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
216785739 | Error on applying Azure Stack policy module to an Azure Subscription's Resource Group
I am trying to apply the Azure Stack policy module as per this link ( https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-policy-module ) to my subscription, but get the following error. The tools download is from the public GitHub repo for Azure-Stack tools. Am I doing something wrong or is this a bug?
PS C:\AzureStack-Tools-master> New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$s.Subscription.SubscriptionId/resourceGroups/$resourceGroupName
New-AzureRmPolicyAssignment : The pipeline has been stopped.
At line:1 char:1
New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $polic ...
+ CategoryInfo : CloseError: (:) [New-AzureRmPolicyAssignment], PipelineStoppedException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzurePolicyAssignmentCmdlet
New-AzureRmPolicyAssignment : BadRequest : The subscription identifier 'Microsoft.Azure.Commands.Profile.Models.PSAzureContext.Subscription.SubscriptionId' exceeded the
maximum length of '64' characters.
At line:1 char:1
New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $polic ...
+ CategoryInfo : CloseError: (:) [New-AzureRmPolicyAssignment], ErrorResponseMessageException
+ FullyQualifiedErrorId : BadRequest,Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzurePolicyAssignmentCmdlet
Thanks,
Girish.
Hey Girish,
It looks like you specified the scope of the policy as:
-Scope /subscriptions/$s.Subscription.SubscriptionId/resourceGroups/$resourceGroupName
Instead, retrieve the subscription ID ahead of the scope request.
The following example on the ReadMe may be helpful:
Import-Module .\AzureStack.Policy.psm1
Login-AzureRmAccount
$s = Select-AzureRmSubscription -SubscriptionName "<sub name>"
$subId = $s.Subscription.SubscriptionId
$policy = New-AzureRmPolicyDefinition -Name AzureStack -Policy (Get-AzureStackRmPolicy)
New-AzureRmPolicyAssignment -Name AzureStack -PolicyDefinition $policy -Scope /subscriptions/$subId
With the above, I was unable to reproduce any error. Closing at this time.
| gharchive/issue | 2017-03-24T13:30:07 | 2025-04-01T04:32:17.789523 | {
"authors": [
"GirishPillai123",
"MatthewMcGlynn"
],
"repo": "Azure/AzureStack-Tools",
"url": "https://github.com/Azure/AzureStack-Tools/issues/126",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2228127786 | Make TransferSubscriptionsOnReconnect configurable (via publishednodes.json)
Is your feature request related to a problem? Please describe.
We recently changed to version 2.9 of the OPC Publisher and since then we have had a problem with one of our connections to an OPC UA server. After some investigation we found that the OPC UA server does not support the TransferSubscriptionsRequest and crashes when OPC Publisher sends this request. In OPC Publisher 2.9 you introduced the transfer-subscriptions-on-reconnect feature.
Describe the solution you'd like
Be able to configure TransferSubscriptionsOnReconnect for each endpoint in publishednodes.json, e.g.:
"UseTransferSubscriptionsOnReconnect": true,
Describe alternatives you've considered
For now we need to deploy OPC Publisher version 2.8 for this one OPC UA server.
Additional context
New option "dtr|disabletransferonreconnect" or "DisableSubscriptionTransfer": true in published nodes configuration.
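A sketch of what an endpoint entry in the published nodes configuration could look like with the new option — the endpoint URL and node ID are placeholders, and apart from `"DisableSubscriptionTransfer"` (named above) the field names assume the standard published-nodes schema:

```json
[
  {
    "EndpointUrl": "opc.tcp://my-server:4840",
    "UseSecurity": false,
    "DisableSubscriptionTransfer": true,
    "OpcNodes": [
      { "Id": "ns=2;s=SomeNode" }
    ]
  }
]
```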
Thank you very much for the quick implementation. We already rolled it out and it seems to have solved our problem!
Also in 2.9.8, just released to MCR.
| gharchive/issue | 2024-04-05T14:06:43 | 2025-04-01T04:32:17.799444 | {
"authors": [
"marcschier",
"steffenbeermann"
],
"repo": "Azure/Industrial-IoT",
"url": "https://github.com/Azure/Industrial-IoT/issues/2205",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1438921592 | [Data Mapper] Custom Tree component
-Implemented a custom Tree component to allow for greater and more efficient customization
yes!!
| gharchive/pull-request | 2022-11-07T20:17:54 | 2025-04-01T04:32:17.800719 | {
"authors": [
"NicL9923",
"rickixie"
],
"repo": "Azure/LogicAppsUX",
"url": "https://github.com/Azure/LogicAppsUX/pull/1267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1830704944 | Azure Redis MSI not working for Redis instances in non Microsoft Tenant
Redis MSI is not working for non Microsoft Tenant.While following the documentation we tried to implement it. Its working for Azure Cache in Microsoft tenant but not working in PME ( custom internal tenant). On digging in the code provided in Microsoft.Azure.StackExchangeRedis, we found out that the audience is hard coded here
I have used the sample provided by the Azure SDK. While running the sample with our setup values, we are getting the following exception:
Options for connecting to an Azure Cache for Redis resource:
1. Authenticate using an access key
2. Authenticate using a system-assigned managed identity
3. Authenticate using a user-assigned managed identity
4. Authenticate using service principal
5. Exit
Enter a number: 3
Redis cache host name: pst-int-westus.geo.redis.cache.windows.net
Managed identity Client ID or resource ID: b5aac26a-4dbf-44c2-8118-62490d80faed
Managed identity Principal (object) ID ('Username' from the 'Data Access Configuration' blade on the Azure Cache for Redis resource): b5aac26a-4dbf-44c2-8118-62490d80faed
Connecting with a user-assigned managed identity...
Failed to connect: System.Net.Http.HttpRequestException: A socket operation was attempted to an unreachable network. (169.254.169.254:80)
---> System.Net.Sockets.SocketException (10051): A socket operation was attempted to an unreachable network.
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token)
at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at Microsoft.Azure.StackExchangeRedis.AzureCacheOptionsProviderWithToken.AcquireTokenAsync(Boolean forceRefresh, Boolean throwOnFailure) in C:\__w\1\s\src\AzureCacheOptionsProviderWithToken.cs:line 200
at StackExchange.Redis.AzureCacheForRedis.ConfigureForAzureAsync(ConfigurationOptions configurationOptions, AzureCacheOptions azureCacheOptions) in C:\__w\1\s\src\AzureCacheForRedis.cs:line 93
at StackExchange.Redis.AzureCacheForRedis.ConfigureForAzureWithUserAssignedManagedIdentityAsync(ConfigurationOptions configurationOptions, String clientId, String principalId) in C:\__w\1\s\src\AzureCacheForRedis.cs:line 49
at MyWinTrust.Program.Main(String[] args) in C:\Users\tabiswas\source\repos\RPE.Partner.Service.BizOps\PrOSE\MyWinTrust\Program.cs:line 61
Connection log from StackExchange.Redis:
C:\Users\tabiswas\source\repos\RPE.Partner.Service.BizOps\PrOSE\MyWinTrust\bin\Release\net6.0\MyWinTrust.exe (process 16160) exited with code 0.
To automatically close the console when debugging stops, enable Tools->Options->Debugging->Automatically close the console when debugging stops.
Press any key to close this window . . .
It's not a network issue. If I run the same sample console application with the connection-string option, I'm able to connect from my local machine. For option 3 (user-assigned MSI), it's failing.
Logs -
Options for connecting to an Azure Cache for Redis resource:
1. Authenticate using an access key
2. Authenticate using a system-assigned managed identity
3. Authenticate using a user-assigned managed identity
4. Authenticate using service principal
5. Exit
Enter a number: 1
Redis cache connection string: pst-int-westus.geo.redis.cache.windows.net:6380,password=<redacted_password>=,ssl=True,abortConnect=False
Connecting with an access key...
Connected successfully!
Connection log from StackExchange.Redis:
09:48:12.4642: Connecting (sync) on .NET 6.0.18 (StackExchange.Redis: v2.6.111.64013)
09:48:12.6260: pst-int-westus.geo.redis.cache.windows.net:6380,password=*****,ssl=True,abortConnect=False
09:48:12.6409: pst-int-westus.geo.redis.cache.windows.net:6380/Interactive: Connecting...
09:48:12.6777: pst-int-westus.geo.redis.cache.windows.net:6380: BeginConnectAsync
09:48:12.7361: 1 unique nodes specified (with tiebreaker)
09:48:12.7378: pst-int-westus.geo.redis.cache.windows.net:6380: OnConnectedAsync init (State=Connecting)
09:48:12.7389: Allowing 1 endpoint(s) 00:00:05 to respond...
09:48:12.7448: Awaiting 1 available task completion(s) for 5000ms, IOCP: (Busy=0,Free=1000,Min=8,Max=1000), WORKER: (Busy=0,Free=32767,Min=8,Max=32767), POOL: (Threads=0,QueuedItems=0,CompletedItems=0)
09:48:13.2023: Configuring TLS
09:48:13.7418: TLS connection established successfully using protocol: Tls12
09:48:13.7439: pst-int-westus.geo.redis.cache.windows.net:6380/Interactive: Connected
09:48:13.7576: pst-int-westus.geo.redis.cache.windows.net:6380: Server handshake
09:48:13.7577: pst-int-westus.geo.redis.cache.windows.net:6380: Authenticating (password)
09:48:13.8348: pst-int-westus.geo.redis.cache.windows.net:6380: Setting client name: MININT-KODICEF(SE.Redis-v2.6.111.64013)
09:48:13.8363: pst-int-westus.geo.redis.cache.windows.net:6380: Setting client lib/ver
09:48:13.8453: pst-int-westus.geo.redis.cache.windows.net:6380: Auto-configuring...
09:48:13.8469: pst-int-westus.geo.redis.cache.windows.net:6380: Requesting tie-break (Key="__Booksleeve_TieBreak")...
09:48:13.8484: pst-int-westus.geo.redis.cache.windows.net:6380/Interactive: Writing: GET __Booksleeve_TieBreak
09:48:13.8490: pst-int-westus.geo.redis.cache.windows.net:6380: Sending critical tracer (handshake): ECHO
09:48:13.8490: pst-int-westus.geo.redis.cache.windows.net:6380/Interactive: Writing: ECHO
09:48:13.8491: pst-int-westus.geo.redis.cache.windows.net:6380: Flushing outbound buffer
09:48:13.8491: pst-int-westus.geo.redis.cache.windows.net:6380: OnEstablishingAsync complete
09:48:13.8491: pst-int-westus.geo.redis.cache.windows.net:6380: Starting read
09:48:14.0836: pst-int-westus.geo.redis.cache.windows.net:6380: Auto-configured (INFO) role: primary
09:48:14.0842: pst-int-westus.geo.redis.cache.windows.net:6380: Auto-configured (INFO) version: 6.0.14
09:48:14.0842: pst-int-westus.geo.redis.cache.windows.net:6380: Auto-configured (INFO) server-type: standalone
09:48:14.0851: Response from pst-int-westus.geo.redis.cache.windows.net:6380/Interactive / GET __Booksleeve_TieBreak: (null)
09:48:14.0856: Response from pst-int-westus.geo.redis.cache.windows.net:6380/Interactive / ECHO: BulkString: 16 bytes
09:48:14.0868: pst-int-westus.geo.redis.cache.windows.net:6380: OnConnectedAsync completed (From command: ECHO)
09:48:14.0879: All 1 available tasks completed cleanly, IOCP: (Busy=0,Free=1000,Min=8,Max=1000), WORKER: (Busy=1,Free=32766,Min=8,Max=32767), POOL: (Threads=7,QueuedItems=0,CompletedItems=18)
09:48:14.0885: Endpoint summary:
09:48:14.0885: pst-int-westus.geo.redis.cache.windows.net:6380: Endpoint is (Interactive: ConnectedEstablished, Subscription: ConnectedEstablished)
09:48:14.0885: Task summary:
09:48:14.0885: pst-int-westus.geo.redis.cache.windows.net:6380: Returned with success as Standalone primary (Source: From command: ECHO)
09:48:14.0921: Election summary:
09:48:14.0921: Election: pst-int-westus.geo.redis.cache.windows.net:6380 had no tiebreaker set
09:48:14.0921: Election: Single primary detected: pst-int-westus.geo.redis.cache.windows.net:6380
09:48:14.0921: pst-int-westus.geo.redis.cache.windows.net:6380: Clearing as RedundantPrimary
09:48:14.0928: Endpoint Summary:
09:48:14.0942: pst-int-westus.geo.redis.cache.windows.net:6380: Standalone v6.0.14, primary; keep-alive: 00:01:00; int: ConnectedEstablished; sub: ConnectedEstablished, 1 active
09:48:14.0956: pst-int-westus.geo.redis.cache.windows.net:6380: int ops=13, qu=0, qs=1, qc=0, wr=0, socks=1; sub ops=6, qu=0, qs=0, qc=0, wr=0, subs=1, socks=1
09:48:14.1017: pst-int-westus.geo.redis.cache.windows.net:6380: Circular op-count snapshot; int: 0+13=13 (1.30 ops/s; spans 10s); sub: 0+6=6 (0.60 ops/s; spans 10s)
09:48:14.1019: Sync timeouts: 0; async timeouts: 0; fire and forget: 0; last heartbeat: -1s ago
09:48:14.1021: Starting heartbeat...
09:48:14.3402: Total connect time: 1,881 ms
Closing this to clean up. Please reactivate if necessary
| gharchive/issue | 2023-08-01T08:09:22 | 2025-04-01T04:32:17.805308 | {
"authors": [
"Tommykewl",
"philon-msft"
],
"repo": "Azure/Microsoft.Azure.StackExchangeRedis",
"url": "https://github.com/Azure/Microsoft.Azure.StackExchangeRedis/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
224941979 | Remove expected file checks
Deleted expected template and parameter files
Removed generation and comparison of template and parameter files against expected ones.
This change is
@acs-bot test this please
@anhowe could you review this ?
@acs-bot test this please
@acs-bot test this please
| gharchive/pull-request | 2017-04-27T23:39:31 | 2025-04-01T04:32:17.814008 | {
"authors": [
"dmitsh",
"shrutir25"
],
"repo": "Azure/acs-engine",
"url": "https://github.com/Azure/acs-engine/pull/547",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
234620565 | Fix api test
What this PR does / why we need it:
Fix empty password in api test.
This change is
@acs-bot test this please
| gharchive/pull-request | 2017-06-08T18:44:51 | 2025-04-01T04:32:17.816185 | {
"authors": [
"JiangtianLi",
"dmitsh"
],
"repo": "Azure/acs-engine",
"url": "https://github.com/Azure/acs-engine/pull/743",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
623008822 | Microsoft.Authorization/policyAssignments
Describe the noise
Resource type (i.e. Microsoft.Storage/storageAccounts)
Microsoft.Authorization/policyAssignments
apiVersion (i.e. 2019-04-01)
2019-01-01
Client (PowerShell, Azure CLI, or API)
PowerShell
Relevant ARM Template code (we only need the resource object for the above resourceType and apiVersion, but if it's easier you can include the entire template
Expected response (i.e. "I expected no noise since the template has not been modified since the resources were deployed)
No changes to the SKU property.
Current (noisy) response (either include a screenshot of the what-if output, or copy/paste the text)
Scope: /subscriptions/<subscription ID>
~ Microsoft.Authorization/policyAssignments/add-vms-to-backup-1 [2019-01-01]
- sku:
name: "A0"
tier: "Free"
Additional context
Add any other context about the problem here.
This should be resolved, let us know if you still see this noise
| gharchive/issue | 2020-05-22T07:19:38 | 2025-04-01T04:32:17.827584 | {
"authors": [
"alex-frankel",
"shenglol",
"slavizh"
],
"repo": "Azure/arm-template-whatif",
"url": "https://github.com/Azure/arm-template-whatif/issues/68",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
853634399 | allow for deployment().location on nested deployment resources
For a subscription scoped module, bicep will codegen a location of [deployment().location] - for the two location tests, we need to account for this and allow it.
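The actual arm-ttk location tests are written in PowerShell; a rough Python sketch of the allowance being requested (the expression list is an assumption for illustration):

```python
# Default-location expressions a location test could accept.
# "[deployment().location]" is what bicep emits for subscription-scoped modules.
ALLOWED_LOCATION_EXPRESSIONS = {
    "[resourcegroup().location]",
    "[deployment().location]",
    "global",
}

def location_is_acceptable(value: str) -> bool:
    v = value.strip().lower()
    return v in ALLOWED_LOCATION_EXPRESSIONS or v.startswith("[parameters(")
```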
fixed in #348
| gharchive/issue | 2021-04-08T16:14:28 | 2025-04-01T04:32:17.828646 | {
"authors": [
"bmoore-msft"
],
"repo": "Azure/arm-ttk",
"url": "https://github.com/Azure/arm-ttk/issues/346",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
610189759 | Support additional types in WriteObjectValue
The Tables REST client attempts to serialize a few additional types that were not previously handled.
/azp run
The hook for reporting success seems to be broken. Just look at the little dot beside the commit number. When that is green, feel free to merge.
| gharchive/pull-request | 2020-04-30T16:03:31 | 2025-04-01T04:32:17.829807 | {
"authors": [
"MiYanni",
"christothes"
],
"repo": "Azure/autorest.csharp",
"url": "https://github.com/Azure/autorest.csharp/pull/719",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1675296264 | Invalid Handling of Flattened Operation Parameters with Required Properties
While making changes to com.azure:azure-security-keyvault-administration I updated it to have required properties be part of the constructor. After regeneration I found
public class UpdateSettingRequest {
public UpdateSettingRequest(String value) ...
}
Was being called with the pattern
UpdateSettingRequest parameters = new UpdateSettingRequest();
parameters.setValue(value);
In the generated client method. Calling this code path to generate that: https://github.com/Azure/autorest.java/blob/main/javagen/src/main/java/com/azure/autorest/template/ClientMethodTemplate.java#L270
I believe the root cause of this is that the operation definition for the client method flattens UpdateSettingRequest:
https://github.com/Azure/azure-rest-api-specs/blob/551275acb80e1f8b39036b79dfc35a8f63b601a7/specification/keyvault/data-plane/Microsoft.KeyVault/stable/7.4/settings.json#L46
This seems to be parameter body flattening or parameter grouping?
| gharchive/issue | 2023-04-19T17:04:55 | 2025-04-01T04:32:17.832441 | {
"authors": [
"alzimmermsft",
"weidongxu-microsoft"
],
"repo": "Azure/autorest.java",
"url": "https://github.com/Azure/autorest.java/issues/2090",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
267268422 | Support building custom Dockerfiles on the node
Currently we force users who want to use a custom container to build an image and upload it to Docker Hub.
Users should have the option to just pass their Dockerfile and have the container built on the node at runtime.
This would also be great for development/testing.
Strictly speaking this is not universally true. Windows-based containers do not require a custom image.
Licensing issues block the flow you describe but we are working on it.
has there been a specific customer ask for this? (just curious)
no specific ask, just a nice-to-have I was thinking about.
| gharchive/issue | 2017-10-20T18:45:00 | 2025-04-01T04:32:17.834269 | {
"authors": [
"darylmsft",
"jafreck",
"jiata"
],
"repo": "Azure/aztk",
"url": "https://github.com/Azure/aztk/issues/183",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
893419413 | az postgres up command crashes with an SSL error
This is autogenerated. Please review and update as needed.
Describe the bug
I was working through Azure's Ruby Postgres Tutorial and was on the Create Postgres database in Azure step.
Command Name
az postgres up Extension Name: db-up. Version: 0.2.2.
Errors:
The command failed with an unexpected error. Here is the traceback:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
FATAL: SSL connection is required. Please specify SSL options and retry.
Traceback (most recent call last):
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/knack/cli.py", line 231, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 657, in execute
raise ex
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 691, in _run_job
result = cmd_copy(params)
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/azure/cli/core/commands/__init__.py", line 328, in __call__
return self.handler(*args, **kwargs)
File "/usr/local/Cellar/azure-cli/2.23.0/libexec/lib/python3.8/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/Users/muel42/.azure/cliextensions/db-up/azext_db_up/custom.py", line 134, in postgres_up
_run_postgresql_commands(host, user, administrator_login_password, database_name)
File "/Users/muel42/.azure/cliextensions/db-up/azext_db_up/custom.py", line 380, in _run_postgresql_commands
connection = psycopg2.connect(user=user, host=host, password=password, database=database)
File "/Users/muel42/.azure/cliextensions/db-up/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
FATAL: SSL connection is required. Please specify SSL options and retry.
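For reference, the connect call in the traceback passes no SSL options; a sketch of parameters that would satisfy the server's requirement (connection details hypothetical; `sslmode="require"` is the standard libpq/psycopg2 option):

```python
# Hypothetical connection parameters; the server in this issue enforces SSL.
conn_kwargs = {
    "user": "admin",
    "host": "postgres-server.postgres.database.azure.com",
    "password": "<redacted>",
    "database": "database_name",
    "sslmode": "require",  # the option missing from the extension's connect() call
}

# psycopg2.connect(**conn_kwargs) would then negotiate TLS instead of failing
# with "FATAL: SSL connection is required".
```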
To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
az postgres up --resource-group my-resource-group --server-name postgres-server --database-name database_name --admin-user admin --admin-password password --ssl-enforcement Enabled
Expected Behavior
A postgres database to be created in the Azure cloud.
Environment Summary
macOS-11.3.1-x86_64-i386-64bit
Python 3.8.10
Installer: HOMEBREW
azure-cli 2.23.0
Extensions:
db-up 0.2.2
db up
This is working now.
| gharchive/issue | 2021-05-17T14:56:27 | 2025-04-01T04:32:17.840272 | {
"authors": [
"RakeshMohanMSFT",
"kurt-mueller-osumc",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/issues/3390",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1645185749 | Test auto reply for issue
If the issue is to do with Azure CLI 2.0 in particular, create an issue here at Azure/azure-cli
Related command
Extension name (the extension in question)
Description of issue (in as much detail as possible)
Thank you for opening this issue, we will look into it.
| gharchive/issue | 2023-03-29T07:22:05 | 2025-04-01T04:32:17.842304 | {
"authors": [
"wangzelin007",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/issues/6124",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1171978706 | Remove "packaging" module since 2.16.0 doesn't support it
This checklist is used to make sure that common guidelines for a pull request are followed.
General Guidelines
[ ] Have you run azdev style <YOUR_EXT> locally? (pip install azdev required)
[ ] Have you run python scripts/ci/test_index.py -q locally?
For new extensions:
[ ] My extension description/summary conforms to the Extension Summary Guidelines.
About Extension Publish
There is a pipeline to automatically build, upload and publish extension wheels.
Once your PR is merged into master branch, a new PR will be created to update src/index.json automatically.
The precondition is to put your code inside this repo and upgrade the version in the PR but do not modify src/index.json.
Remove "packaging"
| gharchive/pull-request | 2022-03-17T06:20:50 | 2025-04-01T04:32:17.845508 | {
"authors": [
"sirireddy12",
"yonzhan"
],
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/pull/4545",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
921478787 | VMSS-Uniform Capacity Reservation Properties
Resource Provider
Compute Resource Provider (CRP) - > Virtual Machine Scale Sets (VMSS)
Description of Feature or Work Requested
Two new resources are being introduced, separate from these changes: Capacity Reservation Group (CRG) and Capacity Reservation (CR). Associations are made between these resources and VMSS resources.
Specifically, in the VMScaleSet -> Properties -> VirtualMachineProfile, the new property "CapacityReservationProfile" has been added as "capacityReservation". This is a class that contains the "capacityReservationGroup" as an ApiEntityRef. This is then used in operations to retrieve the CRG object and assign it in the data model. This makes it possible to see whether a VMSS is associated with a CRG, and if so, which one.
Similarly, in the VMScaleSetVM -> Properties (inherited from the single "VM"), the CapacityReservationProfile object is present, called "capacityReservation", storing the ApiEntityRef of the CRG. This is done because the CapacityReservationGroup resource contains a list of all the associated VMs, whether they are standalone VMs or instances from a VMSS association.
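A rough Python sketch of where the new property sits in the serialized resource, as described above (the IDs and names are made up):

```python
# Shape of the association described above; the resource ID is illustrative.
vmss = {
    "properties": {
        "virtualMachineProfile": {
            "capacityReservation": {
                "capacityReservationGroup": {
                    "id": (
                        "/subscriptions/000/resourceGroups/rg/providers"
                        "/Microsoft.Compute/capacityReservationGroups/myCrg"
                    )
                }
            }
        }
    }
}

profile = vmss["properties"]["virtualMachineProfile"]
crg_id = profile["capacityReservation"]["capacityReservationGroup"]["id"]
```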
Minimum API Version Required
2021-04-01
Swagger Link
https://github.com/Azure/azure-rest-api-specs/pull/15070
Target Date
July 1, 2021
Compute
Hi @micahjo, as the swagger is still not merged, it is impossible to support this by 7/1 because the Azure CLI code-complete date is 6/30 and we need the swagger and Python SDK to be ready at least one week before. Could we support it in the next release (8/3)?
And may I know whether the feature is for GA or public preview?
Yes, that is okay. The feature is for public preview, and there are also pending changes for Capacity Reservation and for single-VM support of Capacity Reservation.
Hi @micahjo, checked swagger and it seems there is no support for 2021-07-01 https://github.com/Azure/azure-rest-api-specs/tree/master/specification/compute/resource-manager/Microsoft.Compute/stable
Sorry, I have updated. This should be for 2021-04-01 now.
Sorry, I have updated. This should be for 2021-04-01 now.
2021-04-01 is still in preparation. We will add the support when the 2021-04-01 features are merged to master, as Azure CLI requires the Python SDK to be public first.
@micahjo Could you provide the design of the specific commands and parameters for the CLI interface?
Hi @zhoxing-ms , I've added the commands we need in the description. I was also wondering about the list calls (list vmss, list instances); crg should show on the vmss and vm instances for these (GET calls contain the crg ids in the response); is any change necessary to support that?
@hari-bodicherla could you verify this is the format we want for cli specifying capacity reservation?
az vmss create [--capacity-reservation-group]
az vmss update [--capacity-reservation-group]
@micahjo Are you asking what changes need to be added to Swagger if the list operation supports capacity reservation? If so, may I ask whether the parameters passed in for these list operations need to be changed?
That was what I wanted to ask: the CR should show as expected in the GET for VMSS and VMSS VM. If that means it will show up in the list operation by default, then no change should be necessary.
@micahjo, @zhoxing-ms Can one of you please explain what the issue is here that is blocking this PR? We have a CLI request that is supposed to cover all CR, VM and VMSS changes. Can we close this and track all the changes for CR in the CLI issue below:
https://github.com/Azure/azure-cli/issues/18943
I will close this issue. I don't think there's anything blocking this PR. Just making clarifications.
| gharchive/issue | 2021-06-15T14:46:07 | 2025-04-01T04:32:17.854615 | {
"authors": [
"Juliehzl",
"hari-bodicherla",
"micahjo",
"yonzhan",
"zhoxing-ms"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/18495",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1212488437 | afd endpoint create with origin-response-timeout-seconds fails
Describe the bug
Command Name
az afd endpoint create
Errors:
unrecognized arguments: --origin-response-timeout-seconds 60
To Reproduce:
Steps to reproduce the behavior.
az afd endpoint create -g group --endpoint-name endname --profile-name profile --origin-response-timeout-seconds 60 --enabled-state Enabled
az afd endpoint create -g {} --endpoint-name {} --profile-name {} --origin-response-timeout-seconds {} --enabled-state {}
Environment Summary
Linux-5.4.0-107-generic-x86_64-with-glibc2.29, Ubuntu 20.04.4 LTS
Python 3.8.10
Installer:
azure-cli 2.35.0
Extensions:
front-door 1.0.16
Dependencies:
msal 1.17.0
azure-mgmt-resource 20.0.0
@arunvc We are looking into it and get back to you for any additional information.
route to CXP team
Hi @SaurabhSharma-MSFT and @arunvc, the origin-response-timeout-seconds option was moved to the profile level after GA: https://docs.microsoft.com/en-us/cli/azure/afd/profile?view=azure-cli-latest#az-afd-profile-create-optional-parameters
| gharchive/issue | 2022-04-22T15:16:12 | 2025-04-01T04:32:17.858966 | {
"authors": [
"SaurabhSharma-MSFT",
"arunvc",
"t-bzhan",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/22159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1415023915 | AZ Interactive Error - No module named 'prompt_toolkit.interface' within Microsoft Learn Sandbox Environment
This is autogenerated. Please review and update as needed.
Describe the bug
Command Name
az interactive
Errors:
The command failed with an unexpected error. Here is the traceback:
No module named 'prompt_toolkit.interface'
Traceback (most recent call last):
File "/usr/lib64/az/lib/python3.9/site-packages/knack/cli.py", line 233, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 663, in execute
raise ex
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 726, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 697, in _run_job
result = cmd_copy(params)
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/core/commands/__init__.py", line 333, in __call__
return self.handler(*args, **kwargs)
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/core/commands/command_operation.py", line 121, in handler
return op(**command_args)
File "/usr/lib64/az/lib/python3.9/site-packages/azure/cli/command_modules/interactive/custom.py", line 41, in start_shell
azext_interactive.start_shell(cmd, style=style)
File "/home/v_w_n/.azure/cliextensions/interactive/azext_interactive/__init__.py", line 10, in start_shell
from .azclishell.app import AzInteractiveShell
File "/home/v_w_n/.azure/cliextensions/interactive/azext_interactive/azclishell/app.py", line 33, in <module>
from prompt_toolkit.interface import Application, CommandLineInterface
ModuleNotFoundError: No module named 'prompt_toolkit.interface'
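The missing module is a prompt_toolkit 1.x API (removed in 2.0), so the pinned extension version and the installed library disagree; a version-tolerant import along these lines illustrates the mismatch (a sketch, not the extension's actual fix):

```python
try:
    # This module exists only in prompt_toolkit 1.x; 2.0+ removed it.
    from prompt_toolkit.interface import CommandLineInterface
    HAS_LEGACY_INTERFACE = True
except ImportError:
    CommandLineInterface = None
    HAS_LEGACY_INTERFACE = False
```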
To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
Put any pre-requisite steps here...
az interactive
Expected Behavior
Environment Summary
Linux-5.4.0-1091-azure-x86_64-with-glibc2.35 (Cloud Shell), CBL-Mariner/Linux
Python 3.9.13
Installer: RPM
azure-cli 2.41.0
Extensions:
interactive 0.4.5
ai-examples 0.2.5
ml 2.10.0
ssh 1.1.2
Dependencies:
msal 1.20.0b1
azure-mgmt-resource 21.1.0b1
Additional Context
Duplicate of https://github.com/Azure/azure-cli/issues/24213
| gharchive/issue | 2022-10-19T13:54:51 | 2025-04-01T04:32:17.863172 | {
"authors": [
"DaVyndycator",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/24282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2438494262 | {monitor} HTML tags aren't allowed in published reference content
Describe the bug
Problem to solve:
HTML tags in source code are causing build warnings in MicrosoftDocs/azure-docs-cli. More importantly, customer-facing reference docs are not instantiating properly -- parameter values formatted with `<>` are missing. Here's an example:
Requested solution:
These values should be escaped with a backtick. Here's an example from az webpubsub.
Related GitHub issues:
This is the same issue as these GitHub issues.
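The requested escaping can be sketched with a small regex over the help text (the pattern is an assumption for illustration, not the docs pipeline's actual code):

```python
import re

def escape_placeholders(text: str) -> str:
    """Wrap <placeholder> tokens in backticks so a docs build stops
    treating them as HTML tags."""
    return re.sub(r"<([A-Za-z][\w.-]*)>", r"`<\1>`", text)

sample = "To remove: --remove property.list <indexToRemove> OR --remove propertyToRemove"
escaped = escape_placeholders(sample)
```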
List of HTML tag instances that need update:
| File name | Build error |
| --- | --- |
| latest/docs-ref-autogen/monitor/account.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/account.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/action-group.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/action-group.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/action-group.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/action-group.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/action-group.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/action-group.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/scope.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/scope.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/scope.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/activity-log/alert/scope.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/alert-processing-rule.yml | HTML tag 'operator' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/alert-processing-rule.yml | HTML tag 'space-delimited' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/component/billing.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/component/billing.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/component/favorite.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/component/favorite.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/workbook.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/app-insights/workbook.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/autoscale.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/autoscale.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/endpoint.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/endpoint.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/association.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/association.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/log-analytics.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/log-analytics.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/performance-counter.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/performance-counter.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/syslog.yml | HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
| latest/docs-ref-autogen/monitor/data-collection/rule/syslog.yml | HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder. |
latest/docs-ref-autogen/monitor/data-collection/rule/windows-event-log.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/data-collection/rule/windows-event-log.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings.yml
HTML tag 'normalized' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings.yml
HTML tag 'normalized' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings/subscription.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/diagnostic-settings/subscription.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/cluster.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/cluster.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/query-pack.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/query-pack.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/query-pack/query.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/query-pack/query.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace/data-export.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace/data-export.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace/linked-service.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-analytics/workspace/linked-service.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-profiles.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/log-profiles.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/metrics/alert.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/metrics/alert.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope/private-endpoint-connection.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope/private-endpoint-connection.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope/private-endpoint-connection.yml
HTML tag 'indexToRemove' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
latest/docs-ref-autogen/monitor/private-link-scope/private-endpoint-connection.yml
HTML tag 'value' isn't allowed. Replace it with approved Markdown or escape the brackets if the content is a placeholder.
Related command
az monitor
Errors
n/a
Issue script & Debug output
n/a
Expected behavior
n/a
Environment Summary
This GitHub issue was created at the time of Azure CLI 2.62.0
Additional context
No response
Thank you for opening this issue, we will look into it.
| gharchive/issue | 2024-07-30T19:06:08 | 2025-04-01T04:32:17.897836 | {
"authors": [
"dbradish-microsoft",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/29526",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
177861062 | Proposal: az account refresh
Sometimes I have new subscriptions associated with my account that did not exist when I ran azure login, so I have to do azure login again and go through the interactive login flow to get the new subscriptions to show up in my local CLI.
Since the CLI already has my working credentials, perhaps a new az account refresh can re-download my subscriptions list?
@Yugang-msft: Thoughts?
I am curious how common is this scenario?
Instead of az account refresh I'd like az account list to just pull a live list of the active subscriptions. I'd like to see real-time data.
I will expose that in “az account list”. About opt-in vs opt-out, we can leave it as an open question and decide it later on. Either way, we need new logics to get the live list, and flipping the default is easy.
JFYI, caching the subscriptions and access tokens, so as to avoid asking for user credentials and the active subscription on every command, is a design coming from CLI 1.0. For credential sharing, CLI 2.0 inherited it.
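The cache-then-refresh behavior described above can be sketched abstractly. This is purely illustrative (the class and the fetch callback are invented for the example, not CLI code): subscriptions are served from a local cache by default, and a refresh flag forces a live re-query.

```python
class SubscriptionCache:
    """Illustrative cache-vs-refresh pattern (not the CLI's actual code)."""

    def __init__(self, fetch_live):
        self._fetch_live = fetch_live  # e.g. a call to the subscriptions API
        self._cached = None

    def list(self, refresh=False):
        # Serve the cached list unless a refresh is requested
        # or the cache has never been populated.
        if refresh or self._cached is None:
            self._cached = self._fetch_live()
        return self._cached


calls = []

def fetch():
    # Stub backend: a second subscription appears after the first call,
    # simulating a subscription added after the initial login.
    calls.append(1)
    return ["sub-a", "sub-b"] if len(calls) > 1 else ["sub-a"]


cache = SubscriptionCache(fetch)
print(cache.list())              # ['sub-a']           (first call populates cache)
print(cache.list())              # ['sub-a']           (served from cache)
print(cache.list(refresh=True))  # ['sub-a', 'sub-b']  (live re-query)
```

The opt-in refresh keeps the common path fast while still letting users pull fresh data on demand.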
There is now a PR for this issue.
| gharchive/issue | 2016-09-19T18:30:58 | 2025-04-01T04:32:17.901649 | {
"authors": [
"ahmetb",
"listonb",
"mayurid",
"yugangw-msft"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/919",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
606560680 | [Extensions] Enable CLI extensions to include packages in the 'azure' namespace
Description
Extensions currently cannot include modules in the azure namespace, which prevents us from building extensions to support preview SDKs that are not yet part of the CLI.
This change extends the submodule import path for both azure and azure.mgmt modules when adding an extension to sys.path. This allows extensions to include submodules that do not exist in the CLI's base install (if they do exist there, the base install's copies are preferred).
Testing Guide
Create an extension with a dependency on, say, azure-storage-file-share and include import azure.storage.fileshare.
Add the extension and try to invoke its command. It will fail without this fix, and succeed with it.
This checklist is used to make sure that common guidelines for a pull request are followed.
[ ] The PR title and description has followed the guideline in Submitting Pull Requests.
[ ] I adhere to the Command Guidelines.
add to S170
Not sure why CI isn't running, but I'm going to stop trying. Feel free to do whatever you need to wake it up.
@fengzhou-msft could you help?
@yonzhan @fengzhou-msft
Any updates on this? My extension is blocked from using the latest SDKs until this issue is resolved
@zooba can you take a look at the CI failure?
This may be relevant to https://github.com/Azure/azure-cli/pull/12778#issuecomment-605759789. Any reason this has to be done in the azure namespace? Why can't the SDK be vendored in the extension?
Another concern is that allowing extensions to override SDKs from venv may cause a security breach - a malicious extension can inject tampered SDKs to intercept the calls thus overriding the behavior, also impacting other command modules that rely on the SDKs.
So as an ultimate solution, the extension should vendor the SDK by putting it under vendored_sdks to avoid the conflict. Then they can use something like from .vendored_sdks.timeseriesinsights.models import LongTermEnvironmentUpdateParameters.
Actually this function is only used in a specific module and doesn't influence all modules.
But one more thing to note is that it seems it only helps import azure.mgmt. What about other azure modules such as azure.storage?
It may be worth doing for azure.storage as well, assuming that it's also a faux namespace package that doesn't install an azure/storage/__init__.py file anywhere and it's installed by azure-cli. But apparently that one worked for the user I was working with, even though he was using one of the new azure.storage libraries.
Both azure and azure.mgmt lock down their packages on install by adding the __init__.py file. This prevents applications from hijacking the core libraries via the PYTHONPATH environment variable (because regular namespace packages will merge all matching directory names). It also prevents users from breaking themselves with an azure/ folder in their own sources, as they'll get a much more obvious error. It's only while the SDK team is developing it that you want it to act like a namespace package, as that will save you from having to do an install just to test parts of it.
But once it's locked down, it becomes impossible to use a different part of the namespace separately. And so extension authors have to do the vendoring.
Now, the problem with that is, it's not very discoverable. So while it's a viable solution, you will still have expert developers getting angry at you for making it so hard. When they eventually find the little tiny note hidden away in dev docs, they're not going to be much happier. So your choices are either make the workaround really obvious (put it in every sample, prominent documentation, templates, etc.) or just do it once in your extension loading code and never let anyone worry about it again.
By the way, the "injecting libraries" argument is a red herring - an extension is an injected library. There's no need for more trickery once you're already running arbitrary code. Though advertising how to work around this issue may lead more people to think about actively trying to override parts of the CLI via an extension, whereas simply letting things work won't encourage that and it leaves you with the ability to control the priority (as I've done in this PR, by making the extension's azure submodules lower priority than the ones installed with the CLI).
So it's up to you guys, really. I personally really dislike giving advice on how to work around things that could just be fixed, but I'll certainly do it. And since this one is not very discoverable, if the issue comes up much more then it may be worth publishing a blog post about it - that looks pretty bad, but not as bad as people posting terrible workarounds.
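The priority scheme described above can be sketched in a few lines. This is a hypothetical illustration (the package name azns and the directory layout are invented; the real change wires this into the CLI's extension loader): appending the extension's copy of a locked-down package to that package's __path__ makes extension-only submodules importable, while base-install submodules keep winning on conflicts.

```python
import os
import sys
import tempfile

# Build a fake "base install": a locked-down (non-namespace) package
# with one submodule, plus an "extension" directory that ships an
# extra submodule under the same package name.
base = tempfile.mkdtemp()
ext = tempfile.mkdtemp()

os.makedirs(os.path.join(base, "azns"))
open(os.path.join(base, "azns", "__init__.py"), "w").close()  # lockdown file
with open(os.path.join(base, "azns", "core.py"), "w") as f:
    f.write("WHO = 'base'\n")

os.makedirs(os.path.join(ext, "azns"))
with open(os.path.join(ext, "azns", "extra.py"), "w") as f:
    f.write("WHO = 'extension'\n")  # submodule only the extension ships

sys.path.insert(0, base)
import azns

# Append (not prepend) the extension directory, so copies shipped with
# the base install stay higher priority than the extension's.
azns.__path__.append(os.path.join(ext, "azns"))

from azns import core, extra
print(core.WHO, extra.WHO)  # base extension
```

Because the extension path is appended last, a malicious or accidental duplicate of a base submodule never shadows the CLI's own copy.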
@jiasli
Any updates on this? My CLI extensions are still unable to use the latest Azure SDK for Python without terrible workarounds. I do not want to vendor the library and take on the burden of maintenance & updates. Azure SDKs should work seamlessly with the Azure CLI
More and more teams pilot their functionality (AKS, web apps, etc.) or completely launch feature sets (Azure ML, AzDO, etc.) via CLI extensions. None of those services will be able to use the latest SDKs until this change or something similar is implemented. IMO it would be smart to fix this ahead of time so they never notice a problem, rather than blocking multiple services on a fix that's already been written
@noelbundick I have verified that make azure-cli follow PEP420 would achieve the same goal as this PR. @arrownj has started the work on #13293. The code change is simple, we just need to do more testing.
@fengzhou-msft Moving entirely to PEP 420 will open up the risk of (intentional or otherwise) package hijacking, so be aware of the security implications, especially since the CLI handles high-value credentials for many users.
Also, you may need to do a manual step to make sure your packaged azure\__init__.py and azure\mgmt\__init__.py are deleted, since those are the problem files. The ones deleted by the PR you linked don't have any impact on people using the SDKs - they'll only affect people trying to hijack CLI modules ;)
Exactly, we see the risk and are evaluating it now. Adding @arrownj, who works on it now.
@zooba we will merge this PR first while evaluating the impact of PEP 420. Can you fix the CI and update the doc? We would still recommend using a vendored SDK if possible.
I'll take a look.
Note that "vendored" SDK is exactly what this enables. I think what you mean is "custom generated SDK", which is only possible for management SDKs (and even then is a really big ask).
| gharchive/pull-request | 2020-04-24T20:20:26 | 2025-04-01T04:32:17.916726 | {
"authors": [
"Juliehzl",
"fengzhou-msft",
"jiasli",
"noelbundick",
"yonzhan",
"yungezz",
"zooba"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/13163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
759634555 | [Security] - Added secure score CLIs
Description
Added Security Center's Secure Score related CLIs
Testing Guide
History Notes
This checklist is used to make sure that common guidelines for a pull request are followed.
[X] The PR title and description has followed the guideline in Submitting Pull Requests.
[X] I adhere to the Command Guidelines.
[X] I adhere to the Error Handling Guidelines.
Security
Could you resolve the CI errors first?
Hi @haroldrandom ,
Can you review?
I couldn't find the failed tests in Security module.
/azp run
Could you resolve the CI errors first?
All CI errors fixed.
Waiting for final approval
| gharchive/pull-request | 2020-12-08T17:25:18 | 2025-04-01T04:32:17.922149 | {
"authors": [
"giromm2ms",
"haroldrandom",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/16198",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
929968208 | {Sql} Fix tests in network and monitor module after sql bump version
Description
Fix CI
Testing Guide
History Notes
[Component Name 1] BREAKING CHANGE: az command a: Make some customer-facing breaking change.
[Component Name 2] az command b: Add some customer-facing feature.
This checklist is used to make sure that common guidelines for a pull request are followed.
[ ] The PR title and description has followed the guideline in Submitting Pull Requests.
[ ] I adhere to the Command Guidelines.
[ ] I adhere to the Error Handling Guidelines.
Sql
| gharchive/pull-request | 2021-06-25T08:42:34 | 2025-04-01T04:32:17.926042 | {
"authors": [
"evelyn-ys",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/18643",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1258882377 | {Network} Azure Bastion integrating azure cli with RG gateway
Related command
az network bastion rdp will now use this new approach to try to connect to a target machine using the bastion as a gateway.
To use the legacy method, we added a flag --disable-gateway which will then use the legacy way to connect to a bastion.
Description
This is a new feature that we are wanting to use to connect using native client.
Testing Guide
az network bastion rdp --name bastion --rg rg --target-resource-id vm-id
History Notes
[Component Name 1] BREAKING CHANGE: az command a: Make some customer-facing breaking change
[Component Name 2] az command b: Add some customer-facing feature
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
Network
| gharchive/pull-request | 2022-06-02T22:38:02 | 2025-04-01T04:32:17.931070 | {
"authors": [
"aavalang",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/22717",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1286337593 | [ARM] az bicep install: Address issue installing bicep on non-musl default systems with musl
Related command
az bicep install
Description
Installing the bicep command on systems using glibc that also have MUSL
installed generate errors at runtime (See https://github.com/Azure/bicep/issues/5040)
This expands the MUSL detection logic to exclude systems with glibc,
which is the primary non-musl libc.
Testing Guide
Run az bicep install on systems with glibc and musl, note that the standard linux bicep gets installed.
Run az bicep install on systems with glibc and without musl, note that the standard linux bicep gets installed.
Run az bicep install on systems with musl and without glibc, note that the musl linux bicep gets installed.
History Notes
[ARM] az bicep install: Address issue installing bicep on non-musl default systems with musl
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
bicep
/azp run
@shenglol Could you please help review the bicep related PR?
Looks good!
| gharchive/pull-request | 2022-06-27T20:41:29 | 2025-04-01T04:32:17.936997 | {
"authors": [
"bmc-msft",
"shenglol",
"yonzhan",
"zhoxing-ms"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/23040",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1745197876 | [Compute] az vm update: Add parameter --security-type to support VM Gen2 to Trusted Launch conversion
Related command
az vm update
Description
Add parameter --security-type for az vm update command to support VM Gen2 to Trusted Launch conversion
Close: #26387
Testing Guide
History Notes
[Component Name 1] BREAKING CHANGE: az command a: Make some customer-facing breaking change
[Component Name 2] az command b: Add some customer-facing feature
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[ ] I adhere to the Error Handling Guidelines.
Compute
| gharchive/pull-request | 2023-06-07T07:04:39 | 2025-04-01T04:32:17.942141 | {
"authors": [
"yanzhudd",
"yonzhan"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/26626",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2434711134 | [Backup] backup restore restore-disks: Add support for enabling Disk access settings for managed VM restores
Related command
az backup restore restore-disks
Description
Some managed VMs have disks with private endpoints (PE) enabled (they have a Disk Access resource associated). We can now use Enhanced Policy to back them up. During restore, we have the option of allowing public access, specifying a new disk access, or specifying PE with the same disk access. We have added changes that allow for this.
Testing Guide
History Notes
[Backup] Add support for enabling Disk access settings for managed VM restores
[Component Name 2] az command b: Add some customer-facing feature
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
Backup
Requesting a review for this release cycle (subject to merge validation).
Please fix CI issues
@zubairabid Please add some test cases for these new parameters
Please note that we are launching the release for this sprint this week. Please resolve all comments by tomorrow, otherwise the release of this PR will have to be postponed to the next sprint (on 10-08)
@zubairabid Any update? Please note that we are launching the release for this sprint this week. Please resolve all comments and CI issues by tomorrow, otherwise the release of this PR will have to be postponed to the next sprint (on 10-08)
@zubairabid Any update? Please note that we are launching the release for this sprint this week. Please resolve all comments and CI issues by tomorrow, otherwise the release of this PR will have to be postponed to the next sprint (on 11-05)
Hi @zhoxing-ms , we're waiting on the service team to confirm when we can go ahead with this release, will fix the issues after that. Might go into the next sprint.
@zhoxing-ms I've fixed the PR content issue flagged by the pipeline, please rerun the pipeline once. Service team has given their approval, if all else is good then we can merge this into main for next sprint's release.
Please add some test cases for these changes
Added a new test, and that + all local tests are working - please rerun the pipeline @zhoxing-ms
/azp run
/azp run
| gharchive/pull-request | 2024-07-29T07:50:46 | 2025-04-01T04:32:17.950889 | {
"authors": [
"yonzhan",
"zhoxing-ms",
"zubairabid"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/29508",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
230493788 | [Network] Fix Express-Route RouteFilter issue
Partially addresses #3386.
Also, fixes some additional express-route issues reported via email.
This checklist is used to make sure that common guidelines for a pull request are followed.
General Guidelines
[X] The PR has modified HISTORY.rst with an appropriate description of the change (see Modifying change log).
Command Guidelines
[N/A] Each command and parameter has a meaningful description.
[N/A] Each new command has a test.
(see Authoring Command Modules)
Codecov Report
Merging #3419 into master will decrease coverage by <.01%.
The diff coverage is 0%.
@@ Coverage Diff @@
## master #3419 +/- ##
=========================================
- Coverage 70.81% 70.8% -0.01%
=========================================
Files 394 394
Lines 25546 25546
Branches 3891 3891
=========================================
- Hits 18090 18088 -2
- Misses 6312 6314 +2
Partials 1144 1144
Impacted Files (Coverage Δ):
...etwork/azure/cli/command_modules/network/custom.py: 62.16% <0%> (-0.15%) ↓
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9c00880...9965d60.
| gharchive/pull-request | 2017-05-22T19:25:11 | 2025-04-01T04:32:17.959703 | {
"authors": [
"codecov-io",
"tjprescott"
],
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/3419",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1870650876 | fix: don't delete an existing conflist (#2115)
Reason for Change:
Backporting https://github.com/Azure/azure-container-networking/pull/2115 into release/v1.4.44 branch
Requirements:
[x] uses conventional commit messages
[ ] includes documentation
[x] adds unit tests
[x] relevant PR labels added
Notes:
Merged because CI mostly passed except for AKS-Engine tests, which are expected to fail at this point https://msazure.visualstudio.com/One/_build/results?buildId=78817067&view=results
| gharchive/pull-request | 2023-08-28T23:38:09 | 2025-04-01T04:32:17.963546 | {
"authors": [
"thatmattlong"
],
"repo": "Azure/azure-container-networking",
"url": "https://github.com/Azure/azure-container-networking/pull/2189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2352199236 | ci: Remove depreciated values from Cilium nightly config
Reason for Change:
Updating cilium-nightly-config.yaml to be single point of truth of latest relevant cilium config values.
https://github.com/cilium/cilium/blob/main/Documentation/operations/upgrade.rst#removed-options
Deprecated Options
The unused flag sidecar-istio-proxy-image has been removed.
The enable-remote-node-identity flag has been deprecated and will be removed in Cilium 1.16. This flag is needed for various features to work correctly and has been enabled by default since Cilium 1.7. There is no benefit in disabling it anymore.
Issue Fixed:
Requirements:
[x] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[x] relevant PR labels added
Notes:
Ensure that these are removed from appropriate cilium-config(s) when applicable.
v1.16 remove sidecar-istio-proxy-image
v1.15 remove enable-remote-node-identity
/azp Azure Container Networking PR
| gharchive/pull-request | 2024-06-13T23:12:30 | 2025-04-01T04:32:17.969757 | {
"authors": [
"jpayne3506"
],
"repo": "Azure/azure-container-networking",
"url": "https://github.com/Azure/azure-container-networking/pull/2787",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
147622067 | modified the max # of replicas to 12 from 6
I read the 'Capacity planning in Azure Search' page and found a wrong number in the following sentence:
Replicas are copies of the search engine. A single Search service can have a maximum of 6 replicas.
doc ref: https://azure.microsoft.com/en-us/documentation/articles/search-capacity-planning/
I believe the actual maximum number of replicas is **12** at this moment.
Please review this.
Hi @yokawasa, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution!
You've already signed the contribution license agreement. Thanks!
We will now validate the agreement and then real humans will evaluate your PR.
TTYL, AZPRBOT;
@yokawasa
Thank you for your contribution. I'll verify it with the author, and get right back to you.
--Carolyn
@HeidiSteen
Please check this contribution, and let me know whether I should accept the PR.
Thanks!
@HeidiSteen
Ping! Please let me know what to do with this PR.
#sign-off this change looks good!
Really sorry for the delay. I was on vacation and didn't catch it when going through email.
@HeidiSteen
Thanks, Heidi!
@yokawasa
Good suggestion! Thank you for your contribution.
--Carolyn
| gharchive/pull-request | 2016-04-12T03:30:44 | 2025-04-01T04:32:17.974917 | {
"authors": [
"HeidiSteen",
"azurecla",
"cjgronlund",
"yokawasa"
],
"repo": "Azure/azure-content",
"url": "https://github.com/Azure/azure-content/pull/6280",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
150215312 | Update data-factory-azure-datalake-connector.md
I am from Microsoft Azure Data Lake PM team. We need to add text around AvroFormat property.
Hi @sachincsheth, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution!
This seems like a small (but important) contribution, so no contribution license agreement is required at this point. Real humans will now evaluate your PR.
TTYL, AZPRBOT;
@spelluru
FYI
@sachincsheth
Thank you for the contribution!
#sign-off
| gharchive/pull-request | 2016-04-21T23:06:07 | 2025-04-01T04:32:17.977646 | {
"authors": [
"azurecla",
"cjgronlund",
"sachincsheth",
"spelluru"
],
"repo": "Azure/azure-content",
"url": "https://github.com/Azure/azure-content/pull/6364",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
403423693 | CosmosQueryRequestOptions excludes old FeedOptions
Is your feature request related to a problem? Please describe.
Not sure if this is a regression bug or feature request but the PopulateQueryMetrics feed option is not exposed as a member of the CosmosQueryRequestOptions class.
In fact there are a number of feed options which have been left out between the current client and this one. Have these options been left off for a reason or are they configurable else where?
Comparison between FeedOptions & CosmosQueryRequestOptions
| Property | Status |
| --- | --- |
| ConsistencyLevel | Supported |
| EnableCrossPartitionQuery | Internally Set (no public access) |
| EnableLowPrecisionOrderBy | Supported |
| EnableScanInQuery | Supported |
| JsonSerializerSettings | Not Supported |
| MaxBufferedItemCount | Supported |
| MaxDegreeOfParallelism | Supported (called MaxConcurrency and no longer request option) |
| MaxItemCount | Supported (no longer request option) |
| PartitionKey | Supported |
| PartitionKeyRangeId | Not Supported |
| PopulateQueryMetrics | Not Supported |
| RequestContinuation | Not Supported |
| ResponseContinuationTokenLimitInKb | Supported |
| SessionToken | Supported |
Describe the solution you'd like
Are there any objections to adding PopulateQueryMetrics as an option to CosmosQueryRequestOptions? I've tested and confirmed that CosmosQueryResponse<T> includes the metrics if set. This setting is really useful for performance tuning etc.
We are still working on the v3 diagnostics design. I will create a GitHub issue once we have a design. We want query and point operations to have the same story. In v2 they are completely different, which doesn't provide the best user experience.
These properties have been updated.
| Property | Status |
| --- | --- |
| EnableCrossPartitionQuery | Internally Set (no public access). Automatically set when using the overload with max concurrency. |
| JsonSerializerSettings | Supported via CosmosJsonSerializer |
| PartitionKeyRangeId | No plans to support. |
| PopulateQueryMetrics | Planned |
| RequestContinuation | Renamed to ContinuationToken |
Is this something that will be resolved before GA release?
In the meantime I guess a workaround would be using a handler to add the "x-ms-documentdb-populatequerymetrics" header to the request and read the "x-ms-documentdb-query-metrics" from the Response.
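That workaround could look roughly like the following — a hypothetical sketch, not built-in SDK support: it assumes the v3 `RequestHandler` pipeline and the fluent `CosmosClientBuilder`, and the `QueryMetricsHandler` name is illustrative.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;

// Custom handler that asks the service for query metrics on every request
// and surfaces the metrics header from the response.
public class QueryMetricsHandler : RequestHandler
{
    public override async Task<ResponseMessage> SendAsync(
        RequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.Add("x-ms-documentdb-populatequerymetrics", "true");
        ResponseMessage response = await base.SendAsync(request, cancellationToken);
        string metrics = response.Headers["x-ms-documentdb-query-metrics"]; // may be null
        // Log or aggregate `metrics` here as needed.
        return response;
    }
}

// Registration:
// CosmosClient client = new CosmosClientBuilder(connectionString)
//     .AddCustomHandlers(new QueryMetricsHandler())
//     .Build();
```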
Hi, is there an ETA on the PopulateQueryMetrics property?
| gharchive/issue | 2019-01-26T09:21:15 | 2025-04-01T04:32:17.987325 | {
"authors": [
"RobertDougan",
"davetheunissen",
"j82w",
"martinsmith123456"
],
"repo": "Azure/azure-cosmos-dotnet-v3",
"url": "https://github.com/Azure/azure-cosmos-dotnet-v3/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2090998689 | Populate Headers.Session on FeedResponse from GetChangeFeedIterator (pull model)
Is your feature request related to a problem? Please describe.
I have a change feed consumer that reacts to certain changes by putting a message on a service bus queue to be handled by another process downstream. I don't include the actual document in the service bus queue because the downstream process may not need it, or may need other documents in the same partition to complete its task. That process is designed to work independently, knowing just the partition ID it's working with.
The problem is that we're using session consistency and sometimes the change consumer becomes aware of a new document, but the downstream process doesn't get that document when it reads all the documents out of the partition.
Describe the solution you'd like
FeedResponse.Headers.Session exists, but is always null when using GetChangeFeedIterator. I'd like it to not be null.
Describe alternatives you've considered
I've considered putting the document in my service bus message, but that comes with some weirdness, as described above. I've also considered retrying downstream if the latest document observed by the change feed consumer isn't in my query results if and when the downstream process attempts to do its work. The latter is what I'm doing now, but having access to the Session token header would really provide a better solution.
Additional context
Passing around and using the the session token when making requests is described as a suggested pattern in the CosmosDb documentation, but isn't possible in this scenario because the SDK isn't returning the Session token:
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency?tabs=portal%2Cdotnetv2%2Capi-async#utilize-session-tokens
Also worth noting, the Session header is available when using the change feed processor, instead of the pull model.
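For context, the documented pattern with point operations looks roughly like this (a hedged sketch — `MyDoc`, the container variable, and the queue plumbing are placeholders); the ask here is to make the same `Session` header available from the change feed pull model:

```csharp
// Producer side: capture the session token after observing the document.
ItemResponse<MyDoc> response = await container.ReadItemAsync<MyDoc>(
    id, new PartitionKey(partitionId));
string sessionToken = response.Headers.Session;
// ... include sessionToken alongside partitionId in the service bus message ...

// Downstream consumer: replay the token so the query is at least as fresh
// as the session that produced it.
var options = new QueryRequestOptions
{
    PartitionKey = new PartitionKey(partitionId),
    SessionToken = sessionToken,
};
FeedIterator<MyDoc> iterator = container.GetItemQueryIterator<MyDoc>(
    "SELECT * FROM c", requestOptions: options);
```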
Duplicate of https://github.com/Azure/azure-cosmos-dotnet-v3/issues/3115
| gharchive/issue | 2024-01-19T17:40:21 | 2025-04-01T04:32:17.992437 | {
"authors": [
"TravisTroyer",
"ealsur"
],
"repo": "Azure/azure-cosmos-dotnet-v3",
"url": "https://github.com/Azure/azure-cosmos-dotnet-v3/issues/4267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
348911758 | How to partition the query to different physical partitions.
Hi,
I'm trying to execute a very large query that needs to return millions of records, so I want to partition the query and use multiple machines to process the results.
My logical partition key is a per-document UUID, so it won't help me assign different portions of the data to each worker node. Can I get the physical partition ID and execute my query within only a particular physical partition?
Here's what I have tried:
```java
FeedOptions feedOptions = new FeedOptions();
feedOptions.setEnableCrossPartitionQuery(false);
feedOptions.setPartitionKeyRangeIdInternal("0");
client.queryDocuments(collectionPath, "SELECT * FROM e where e.docType = 'address'", feedOptions)
    .flatMapIterable(FeedResponse::getResults);
```
But changing the partitionKeyRangeId doesn't seem to change the results at all.
Please advise.
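On recent 2.x versions, one way to fan a query out per physical partition is to enumerate the partition key ranges first — an illustrative sketch, assuming an `AsyncDocumentClient` named `client` and that `readPartitionKeyRanges` is available in the SDK version in use:

```java
FeedOptions rangeOptions = new FeedOptions();
client.readPartitionKeyRanges(collectionPath, rangeOptions)
    .flatMapIterable(FeedResponse::getResults)
    .flatMap(range -> {
        // One single-partition query per physical partition key range;
        // each range could instead be handed to a different worker.
        FeedOptions feedOptions = new FeedOptions();
        feedOptions.setEnableCrossPartitionQuery(false);
        feedOptions.setPartitionKeyRangeIdInternal(range.getId());
        return client.queryDocuments(collectionPath,
                "SELECT * FROM e WHERE e.docType = 'address'", feedOptions);
    })
    .flatMapIterable(FeedResponse::getResults);
```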
@zhouchong90 which version of the SDK are you using?
I'm currently using 1.0.0. Is there a new feature in the latest version that changes this?
@zhouchong90 you are on a very old version. We are doing active development and bugfixes and this issue has been fixed a while back. As of now the latest version is 2.0.1.
| gharchive/issue | 2018-08-08T22:15:30 | 2025-04-01T04:32:17.995987 | {
"authors": [
"moderakh",
"zhouchong90"
],
"repo": "Azure/azure-cosmosdb-java",
"url": "https://github.com/Azure/azure-cosmosdb-java/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
851393862 | Get-AzureADAuditSignInLogs with filter "userPrincipalName eq" not working
Using the cmdlet Get-AzureADAuditSignInLogs with the filter statement -Filter "userPrincipalName eq 'admin@contoso.com'" does not produce any results.
I had to change the filter to -Filter "startswith(userPrincipalName,'admin@contoso.com')" to get results.
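Side by side, the observed behavior (a repro sketch — the UPN is a placeholder, and it assumes a connected AzureADPreview session):

```powershell
# Returns nothing, even though matching sign-ins exist:
Get-AzureADAuditSignInLogs -Filter "userPrincipalName eq 'admin@contoso.com'"

# Returns the expected sign-in events:
Get-AzureADAuditSignInLogs -Filter "startswith(userPrincipalName,'admin@contoso.com')"
```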
Hello @reikor0x thanks for highlighting this and apologies for the delayed response.
I have created a PR to remove the example that is not working as I follow up on what changed.
I will proceed to close this issue. Feel free to reopen it with any follow up feedback.
| gharchive/issue | 2021-04-06T12:52:56 | 2025-04-01T04:32:18.001903 | {
"authors": [
"msewaweru",
"reikor0x"
],
"repo": "Azure/azure-docs-powershell-azuread",
"url": "https://github.com/Azure/azure-docs-powershell-azuread/issues/593",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.