| added (string, dates 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:10:09.477612
| 2021-08-03T07:56:45
|
958838482
|
{
"authors": [
"Amit366",
"OmGole",
"Shoray2002"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13372",
"repo": "Amit366/FunwithPhysics",
"url": "https://github.com/Amit366/FunwithPhysics/issues/54"
}
|
gharchive/issue
|
Kinematics
Is your feature request related to a problem? Please describe.
Add Kinematics calculator
Describe the solution you'd like
Follow the momentum calculator and add the contents of kinematics
@Shoray2002 I hope you have checked out the other calculators, so design it in a similar manner.
Of course
Hi, can you assign me this issue? I would create the kinematics calculator just like the momentum calculator.
@OmGole this is already assigned to Shoray. If you want, you can open new issues for other calculators and work on them.
@Shoray2002 are you working on it?
For further doubts, join the Discord channel linked in the README
@Shoray2002 are you working on this?
Yes, it will be done by Wednesday
|
2025-04-01T04:10:09.491010
| 2020-07-23T00:28:43
|
664127728
|
{
"authors": [
"A7madXatab",
"AmruthPillai",
"SophieMdl",
"darshittrevadia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13373",
"repo": "AmruthPillai/Reactive-Resume",
"url": "https://github.com/AmruthPillai/Reactive-Resume/issues/256"
}
|
gharchive/issue
|
Unable to change font size in web app.
I cannot seem to find an option to change the font size.
How do I change the font size on the web app?
@AmruthPillai I could handle this one :)
@SophieMdl Please do give it a shot :) I would very much appreciate that. I'm finding very little time to work on this as my work tasks are very taxing. If you could work on this and raise a PR, it would be just great!
@SophieMdl are you still working on this, or should I do it?
Thanks.
Hi, sorry, but I don't have time currently. Thanks for handling this :)
Closing this in favor of Reactive Resume v3 release.
If you continue to face this issue, please raise a new one and I'll have a look at it as soon as I can.
In the meantime, if you are able to, please consider donating to keep supporting the project: https://www.buymeacoffee.com/AmruthPillai :)
Thank you!
|
2025-04-01T04:10:09.498482
| 2022-08-01T16:00:40
|
1324639718
|
{
"authors": [
"Death-Pact"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13374",
"repo": "AmruthPillai/Reactive-Resume",
"url": "https://github.com/AmruthPillai/Reactive-Resume/issues/958"
}
|
gharchive/issue
|
[BUG] Health of Server Docker Container unhealthy at initialization.
Describe the bug
When starting the Reactive Resume stack on Docker, I am met with this error in the server log:
[Nest] 42 - 08/01/2022, 11:39:16 AM ERROR [HealthCheckService] Health Check has failed! {"docs":{"status":"down","message":"Request failed with status code 404","statusCode":404,"statusText":"Not Found"}}
This is the only error I am getting, and the container is marked unhealthy. I cannot get it to work. I can get to the homepage of the app and attempt to log in, but I cannot successfully log in or create an account.
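The JSON appended to that ERROR line already names the failing indicator; a quick sketch of pulling that out (the payload below is copied from the log line above):

```python
import json

# Payload copied from the HealthCheckService ERROR line above.
payload = ('{"docs":{"status":"down","message":"Request failed with status '
           'code 404","statusCode":404,"statusText":"Not Found"}}')

# Collect every health indicator whose status is "down".
failing = [name for name, info in json.loads(payload).items()
           if info.get("status") == "down"]
print(failing)  # -> ['docs']
```

Here only the docs indicator is down, which matches the 404 from the docs URL rather than a database or app failure.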
Product Flavor
[ ] Managed (https://rxresu.me)
[x] Self Hosted
To Reproduce
Delete the Docker volumes
Update the Docker images to 'server-latest' & 'client-latest'
Start the stack
Check the logs to see the error on the server container
Expected behavior
Server container should start and initialize healthy.
Screenshots
Desktop (please complete the following information):
OS: Docker
Browser: Brave
Additional context
Here is my docker-compose file (with info redacted):
version: '3'
services:
  reactive_resume_postgres:
    image: postgres:14.2-alpine
    container_name: reactive_resume_postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U postgres" ]
      start_period: 15s
      interval: 30s
      timeout: 30s
      retries: 3
    restart: unless-stopped
  reactive_resume_traefik:
    image: traefik:rocamadour
    container_name: reactive_resume_traefik
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:XXXX
    ports:
      - XXXX:XXXX
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  reactive_resume_server:
    image: amruthpillai/reactive-resume:server-latest
    container_name: reactive_resume_server
    environment:
      - TZ=UTC
      - PUBLIC_URL=https://resume.xyz.com
      - PUBLIC_SERVER_URL=https://resume.xyz.com/api
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - SECRET_KEY=somelongstring
      - POSTGRES_HOST=reactive_resume_postgres
      - POSTGRES_PORT=5432
      - JWT_SECRET=somelongstring2
      - JWT_EXPIRY_TIME=604800
      - STORAGE_S3_ENABLED=false
    volumes:
      - uploads:/app/server/dist/assets/uploads
    depends_on:
      - reactive_resume_traefik
      - reactive_resume_postgres
    labels:
      - traefik.enable=true
      - traefik.http.routers.server.entrypoints=web
      - traefik.http.routers.server.rule=Host(`resume.xyz.com`) && PathPrefix(`/api/`)
      - traefik.http.routers.server.middlewares=server-stripprefix
      - traefik.http.middlewares.server-stripprefix.stripprefix.prefixes=/api
      - traefik.http.middlewares.server-stripprefix.stripprefix.forceslash=true
    restart: unless-stopped
  reactive_resume_client:
    image: amruthpillai/reactive-resume:client-latest
    container_name: reactive_resume_client
    environment:
      - TZ=UTC
      - PUBLIC_URL=https://resume.xyz.com
      - PUBLIC_SERVER_URL=https://resume.xyz.com/api
      - PUBLIC_FLAG_DISABLE_SIGNUPS=false
    depends_on:
      - reactive_resume_traefik
      - reactive_resume_server
    labels:
      - traefik.enable=true
      - traefik.http.routers.client.rule=Host(`resume.xyz.com`)
      - traefik.http.routers.client.entrypoints=web
    restart: unless-stopped
volumes:
  pgdata:
  uploads:
I was able to resolve the issue by removing the health check line for https://docs.rxresu.me in the ./server/src/health/health.controller.ts file.
See the code revision below.
import { Controller, Get } from '@nestjs/common';
import { HealthCheck, HealthCheckService, HttpHealthIndicator, TypeOrmHealthIndicator } from '@nestjs/terminus';

@Controller('health')
export class HealthController {
  constructor(
    private health: HealthCheckService,
    private db: TypeOrmHealthIndicator,
    private http: HttpHealthIndicator
  ) {}

  @Get()
  @HealthCheck()
  check() {
    return this.health.check([
      () => this.db.pingCheck('database'),
      () => this.http.pingCheck('app', 'https://rxresu.me'),
    ]);
  }
}
|
2025-04-01T04:10:09.514999
| 2022-05-11T09:42:17
|
1232333771
|
{
"authors": [
"AnWeber",
"dlucazeau"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13375",
"repo": "AnWeber/httpbook",
"url": "https://github.com/AnWeber/httpbook/issues/53"
}
|
gharchive/issue
|
The cells are modified after execution
For some time now, the results of the queries have been inserted into the cell, like this:
POST ------- HTTP/1.1
Content-Type: application/json
Authorization: Bearer {{accessToken}}
connection: close
date: Tue, 10 May 2022 11:51:54 GMT
content-type: application/json; charset=utf-8
server: Kestrel
content-length: 5832
I don't know exactly when this change is made: is it when saving the file, closing it, or closing VSCode?
Version: 1.67.1 (user setup)
Commit: da15b6fd3ef856477bf6f4fb29ba1b7af717770d
Date: 2022-05-06T12:37:03.389Z
Electron: 17.4.1
Chromium: 98.0.4758.141
Node.js: 16.13.0
V8: <IP_ADDRESS>-electron.0
OS: Windows_NT x64 10.0.19044
httpBook v 3.1.0
I was able to reproduce it just now. It seems that currently the parser does not distinguish correctly between request header and response header. I'll take a look at it.
The response header symbol was incorrectly marked as request header. This caused httpbook to assume that the area belonged to the code and not only to the OutputCell. The bug will be fixed with the next update of vscode-httpyac.
@dlucazeau v5.4.2 of vscode-httpyac is published. thx
|
2025-04-01T04:10:09.519106
| 2021-02-28T12:11:20
|
818192167
|
{
"authors": [
"AnWeber",
"IntranetFactory"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13376",
"repo": "AnWeber/httpyac",
"url": "https://github.com/AnWeber/httpyac/issues/2"
}
|
gharchive/issue
|
OAuth2 Implicit flow
It seems that the implicit flow isn't implemented. Do you have any plans to add that as well? That just became a blocker for our pm migration.
Currently, the implicit flow is not implemented. It would be helpful if you could give me a description of how this is used (response_code id_token vs. token). I do not use this flow in my current environment, so I need to build up a valid test.
The implicit flow doesn't use the secret and provides only an access_token (but no refresh token). I'll prepare some information on what should probably be added/changed to support it.
I think it's enough to change the response type to token, e.g. https://login.microsoftonline.com/common/oauth2/v2.0/authorize?response_type=token&state=dev&client_id=0c1d9732-466e-458f-85d7-260e448831a8&scope=openid%20profile%20email%20User.Read&redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Fcallback. After sign-in, a redirect to http://localhost:3000/callback#access_token=XXX&token_type=Bearer&expires_in=3599&scope=email+openid+profile+User.Read&state=dev&session_state=beacc8af-a36a-46f7-bf0e-3d255d2ea83d happens (where XXX is the opaque token value). The parameters are provided after the # so they are not sent to the server. I think that adding an AJAX request to the /callback page that sends the hash parameters to the extension server should be enough to support the implicit flow.
The token can then be sent in the Bearer header to request, e.g., GET https://graph.microsoft.com/v1.0/me
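Since the implicit-flow parameters come back in the URL fragment, the client has to parse them itself; a minimal sketch using the redirect URL from the comment above (token value elided as XXX):

```python
from urllib.parse import parse_qs, urlsplit

# Redirect produced by the implicit flow (values taken from the example above).
redirect = ("http://localhost:3000/callback#access_token=XXX&token_type=Bearer"
            "&expires_in=3599&scope=email+openid+profile+User.Read&state=dev")

# The fragment (after '#') never reaches the server, so parse it client-side.
params = parse_qs(urlsplit(redirect).fragment)
print(params["access_token"][0], params["token_type"][0])  # -> XXX Bearer
```

In a real flow, the page at /callback would run equivalent parsing in JavaScript and hand the access_token back to the extension.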
I've set up an AAD test app:
local_authorizationEndpoint=https://login.microsoftonline.com/common/oauth2/v2.0/authorize
local_tokenEndpoint=https://login.microsoftonline.com/common/oauth2/v2.0/token
local_clientId=0c1d9732-466e-458f-85d7-260e448831a8
local_scope=openid profile email User.Read
From my understanding, it's safe to share this information, as it only allows a user to request a token for his own account. I don't need to share the secret, and there is no risk of losing the related refresh token.
I released a new version with implicit flow. Can you please test it?
Great work; from my pov, OAuth2 support is now perfect.
|
2025-04-01T04:10:09.533544
| 2024-05-10T21:54:31
|
2290457532
|
{
"authors": [
"anaconda-pkg-build",
"danpetry"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13377",
"repo": "AnacondaRecipes/cuda-tools-feedstock",
"url": "https://github.com/AnacondaRecipes/cuda-tools-feedstock/pull/1"
}
|
gharchive/pull-request
|
PKG-4715 v12.4 initial
Destination channel: defaults
Links
PKG-4715
Explanation of changes:
Part of the CUDA 12.4 build, following conda-forge's pattern, which was developed in conjunction with Nvidia. I don't see a need to stage these and then release them all at once; they won't cause a problem released one at a time.
Please check the main branch too; this has just been forked.
Changes made to adjust feedstocks for differences between conda-forge and defaults. Please see commit messages for description of changes.
Linter check found the following problems:
ERROR conda.cli.main_run:execute(124): `conda run conda-lint /tmp/abs_bdkg0a3np0/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: license_file_overspecified: Using license_file and license_url is overspecified.
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_dev_url: The recipe is missing a dev_url
clone/recipe/meta.yaml:33: missing_license_family: The recipe is missing the about/license_family key.
===== Final Report: =====
2 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(125): `conda run conda-lint /tmp/abs_db5g01j8lx/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: license_file_overspecified: Using license_file and license_url is overspecified.
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_dev_url: The recipe is missing a dev_url
clone/recipe/meta.yaml:33: missing_license_family: The recipe is missing the about/license_family key.
===== Final Report: =====
2 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(125): `conda run conda-lint /tmp/abs_49646afz0v/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: license_file_overspecified: Using license_file and license_url is overspecified.
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_license_family: The recipe is missing the about/license_family key.
clone/recipe/meta.yaml:33: missing_dev_url: The recipe is missing a dev_url
===== Final Report: =====
2 Errors and 1 Warning were found
Linter check found the following problems:
ERROR conda.cli.main_run:execute(125): `conda run conda-lint /tmp/abs_02adryr55h/clone` failed. (See above for error)
The following problems have been found:
===== WARNINGS =====
clone/recipe/meta.yaml:33: license_file_overspecified: Using license_file and license_url is overspecified.
===== ERRORS =====
clone/recipe/meta.yaml:33: missing_license_family: The recipe is missing the about/license_family key.
clone/recipe/meta.yaml:33: missing_dev_url: The recipe is missing a dev_url
===== Final Report: =====
2 Errors and 1 Warning were found
|
2025-04-01T04:10:09.544961
| 2018-11-29T13:59:44
|
385749420
|
{
"authors": [
"jjhelmus",
"sodre"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13378",
"repo": "AnacondaRecipes/tqdm-feedstock",
"url": "https://github.com/AnacondaRecipes/tqdm-feedstock/issues/1"
}
|
gharchive/issue
|
Update to latest versions from Conda-Forge
Would it be possible to update the tqdm packages to the more recent ones available in Conda-Forge?
Thank you!
Patrick
tqdm 4.28.1 is available in defaults which is also the latest version available in conda-forge and on pypi. Is there a particular version you are looking for?
@jjhelmus, I apologize for opening this. I was told offline that https://github.com/tqdm/tqdm/pull/641 had been merged into master and was available on PyPI/conda-forge but not on Anaconda.
I forgot to verify that this was the case before I opened the issue.
|
2025-04-01T04:10:09.549497
| 2023-03-27T20:16:42
|
1642753538
|
{
"authors": [
"Jacob-Scheiffler",
"Jake-Carter",
"leoreyesth"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13379",
"repo": "Analog-Devices-MSDK/msdk",
"url": "https://github.com/Analog-Devices-MSDK/msdk/issues/513"
}
|
gharchive/issue
|
MAX32672 Evaluation Kit - LP Example - Current Consumption
Hi,
I'm trying to achieve the low power consumption of the MAX32672 on the EV kit.
I uploaded the LP example code and followed the steps from the EV kit datasheet to measure the current.
The lowest current measured in deep sleep mode was 470 uA @ 3.3 V on VDD, but the datasheet states 4.4 uA @ 3.3 V on VDD.
Is there some modification to the LP example code to lower the power consumption even more?
@leoreyesth Which rail are you measuring current on?
@leoreyesth Sorry for the delay, but I got it figured out this morning. The steps I followed were:
Remove jumpers JP2 through JP11
Disconnect LCD
Set all GPIO pins low except for SWD pins (P0.0/P0.1), push button (P0.18), and P0.10.
Set GPIO pin P0.10 high. (It is connected to a pullup resistor that can't be disabled.)
This is the function I used to set the GPIO pins to the appropriate state:
#include "gpio.h"

void configure_gpios(void)
{
    mxc_gpio_cfg_t out_clr;
    mxc_gpio_cfg_t out_set;

    // Set all GPIO pins low except for SWD, push button, and P0.10
    out_clr.port = MXC_GPIO0;
    out_clr.mask = 0xFFFBFBFC;
    out_clr.func = MXC_GPIO_FUNC_OUT;
    out_clr.pad = MXC_GPIO_PAD_NONE;
    out_clr.vssel = MXC_GPIO_VSSEL_VDDIOH;
    MXC_GPIO_Config(&out_clr);
    MXC_GPIO_OutClr(out_clr.port, out_clr.mask);

    // Set GPIO P0.10 high
    out_set.port = MXC_GPIO0;
    out_set.mask = MXC_GPIO_PIN_10;
    out_set.func = MXC_GPIO_FUNC_OUT;
    out_set.pad = MXC_GPIO_PAD_NONE;
    out_set.vssel = MXC_GPIO_VSSEL_VDDIOH;
    MXC_GPIO_Config(&out_set);
    MXC_GPIO_OutSet(out_set.port, out_set.mask);
}
With these configurations I was able to get a deep sleep current of 3.1 uA and a backup current of 0.62 uA. Also, you may have had a typo, but the VDD jumper is actually JP13. Hope this helps!
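As a side note, the 0xFFFBFBFC value in out_clr.mask above is "all 32 pins except P0.0, P0.1, P0.10, and P0.18", which can be verified with a few lines of Python:

```python
# Pins excluded from the clear mask: SWD (P0.0/P0.1), P0.10, push button (P0.18).
excluded = (0, 1, 10, 18)

mask = 0xFFFFFFFF  # start with all 32 pins selected
for pin in excluded:
    mask &= ~(1 << pin) & 0xFFFFFFFF  # clear this pin's bit

print(hex(mask))  # -> 0xfffbfbfc
```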
Thanks @Jacob-Scheiffler. I will try the example.
I don't know if it's a real problem, but the function MXC_GPIO_Config sets the padctrl1 register, which is reserved and read-only, and also sets the vssel register, which is reserved.
Closing issue as #519 is merged. GPIO drivers will be modified in a separate PR
|
2025-04-01T04:10:09.553005
| 2015-02-02T22:49:06
|
56305827
|
{
"authors": [
"hpinkos",
"mramato",
"pjcozzi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13380",
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/2461"
}
|
gharchive/issue
|
Picking precision issue
Run the simple-style example
Zoom out so that all of the pins are lumped together. Try to pick the top of the billboard; it fails to select.
If you continue to zoom in a little at a time, eventually you get close enough that the selection works.
You can always select the base of the image, it's only when selecting the top that there is a problem.
@bagnell perhaps this is a culling issue with the billboard's bounding sphere that becomes evident because of the pick frustum.
@mramato was this fixed when we fixed the billboard bounding sphere? It seems to work okay for me now.
Yes, and what a glorious day it was!
|
2025-04-01T04:10:09.562070
| 2019-05-24T21:02:45
|
448352353
|
{
"authors": [
"OmarShehata",
"gcatto",
"lilleyse"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13381",
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/issues/7868"
}
|
gharchive/issue
|
Cannot select entities in 2D when zoomed out
Steps to reproduce:
Navigate to the Billboard Sandcastle app (link below)
Switch to 2D mode
Zoom all the way out
Try to click to select the Cesium logo, but be unable to do so
Zoom in some
Try to click to select the Cesium logo again; this time it will work
Edit: if one uses Cesium.MapMode2D.ROTATE instead of Cesium.MapMode2D.INFINITE_SCROLL, the problem does not seem to happen
Sandcastle example: https://cesiumjs.org/Cesium/Build/Apps/Sandcastle/#c=1Vp7T9tIEP8qFv/gSOnGefEoUB2PcEQCgkJK1SsV2tiTeHUbO1pvoKHiu9+s7cTvxLTA0UhBznpmdua3M7Ozs9xTod0zeAChHWgOPGjH4LHZhNz4Y/qm6f88dh1JmQNis7J369w6o5ljSuY6GrWsI8b50KXC0ivaz1tHw881dSyTepIDscDkVIAeJwxkKMJgZgKOZJKBR5BKD2Woz9T1mD/Nx4Vax1RIfKJOk4yEOzmBsQDw9A/bbdLe3d7ermotgxjNnZ1mpRrJGS5m1j7GpKsPm9AxoPhNQmr+s1cLZro7d8funXsPgtM5mTrjzYjxKXh8UnY8JeDwQC6tvBLuFIQyayUw+Sz/J0Tar2NU1Wo1zYIRnXH5UZs5+IheYyXlebb7gOKkmEGcPkk0ZT+A90YjhAdpY465NK+hG1XtQ9uoJCdVo0YlKQzmsFJUU0dEkC38UylWy3YFe1TBwHuCjVkM9rPUG3LcuRx0+sWi7tVKm1lBN4lxctQbDHoXSRsD0SlUkUWtUkMZESeuEyNJabrcFTF/UT/Jefeik+T7ctYddJKcwpU06WwXVNrkqnvXu+n07057n/tJGUZ6bsrZGB3i8Afzcj32n06/VwzZA7OkjXx1wyjlaTawsa0WvdFeR14c06ZNnTE8N6wLuWKRjanXD+s5pt5XDPOq1jTURzn2Swd8PoCRfdEsB6GtZDm0F5Atf5PAhQ+0JjEy7wKnPUg6re+i5IFJ+5BPbYpR3GjnJGX2GC1E17kACWJNTs7jePuFe4X8nMoZaOfCvDAjr1zQwNRH150MXD1AIAs3AnCBUcamPMLQW1cbZDmScHO05LPgiHcJK/civhE1GUc1M7yLF2TMRpvvcLMNDc5fjhdVdccgbUynzTapt56pZAzdt1G03qhiLietrZdWNJUxVCI6mp8wT1LHhNW5Ikn7zgu3hNtXcyqIyJJkrXQJVJxScY00VOh10ga1FKrWUM/bqm5qV8pgO6JWWWiTpH9qSZzklII6Hp+Z4JjzV8fa9YvekminiRfxGyr1YINDKOf6t5hB4bs+eO5MmEBGIE27qwDRS0ATB7ukqIT/xjD4HhO1MF8PeCrpxcNK8NBbHHrVZuZpJnc9/CHdZaaIVr+a4WaOKYB6kDikINyBCcrMmO+gTGljtRvUoqkjQGyHOgtq1YPAt7xv9e8kYFm6fUwDVQ2ZVIIVTeRV8fQ5pQJHNbo0bwxyYV1q6tUx9NKxVCKm0rG1xKGaT/f8o1i+nGedw7IinpJDy23vjwPaeFdAF8go1RZQmfJDMq4q6+VdP2f7M6CpsnGYkrfCtkH+HL+S8nPk1yvlne8pvyyn4l8QzyjKU/TRDqxSqGVhwsE1pTyWhLQh5kVLJUNpYx4aYroP/YyFQ5JTj7zjffzBZhKubTpFhXL2b5/qejbswzhQLbaIR+7MsZgz7oOJSzzmoLd2USW1kjvqm3aQwkbMWxTTLeVczd3faQq8KFRbdaxx1kIVj3g15Hf7SHsZK0blDbDb9jNMq/5+sEPP31H2t9S3PHYKOIPsvjF6Td/ztt8Neg1Er26g6zVUmd0oFaX9zslbQLVLgpP5e4GqbqhQ8/+Uz2hfO+fnvS8rziuqDJYgBGXOlXDvmQViL7ZzWcyjQw4nMJX2ADy5cttKE0d7VmqGqHWXnTq2iFmu0DZ1BJDwxRXcGgQ0emr9PdzxgYy5O1TqhfocjpHSkyELilNNt5c/2dYbDVLfbeMitbbwod5+nZZBcELpwwgEBDXNojJMviDH54cXV3eD3t3f/d7ny5OUmPSixWqky9lkiIhc9a67g+5N5657edq97A6+ro2+sEuZQnTpczHXQZAHrsuHVFyAM9O//Vz4yw9VY26qQmeJ2AIA1/GAY1io3tLiuJk4aCYvRmNngtjENmLEFU6Z29HQrqeqllLmGuv
eaPmmy1sFrKs03yNRkdI6Ft1WrtO2+MqyUO9j/zokV/XS6q64iFmn8crbmGKw2WNcZQzXid8qLw9w/s3DWnyLrh8KNVU+Oglb6LFq/DnemtezL+G2+Y37Ykj9K57hfNGfsMJILw9pujW7Fsxsf7ZQu1Nq/ZZy6d7mOt2yDc5C1cIz728ol20GrlOvqCNY7IL+gfFXHTBzOi3jfjlH1EIVT4Jtxj+H+hsyvvGkpjqbGlYPkyn47bqxUHVPadWz5ck6vfNqlIXS34OCJcYnQC38wXJ2LZo+3NxMOsFjOBnx+Zk7Ad0oqkMFTPA8fsi5HlVFbKTp4Z4d3onrqZKnkrC2sCzKr6LyC6/l/fteRnC5omlEuQcx1PBxo7qx78k5h0/B8F9sMnWF1GaC61jMSJhMORZsXm04M/8FSUzPd5b92oJp32L3GrMObjdS//N0u6G8w/PwzWjGudoVbjc+7deQPsHGXaqq5V7Q4VYkdv3TeTBICNmv4c8slwxKj5jE/wA
Browser: Chrome
Operating System: Win10
Thanks for reporting this and for the Sandcastle @gcatto. This is definitely a bug. I'm not sure when we'll have time to look into a fix here, but here are my debugging notes if anyone wants to give it a shot.
The picking of entities happens in the Viewer here:
https://github.com/AnalyticalGraphicsInc/cesium/blob/master/Source/Widgets/Viewer/Viewer.js#L112
This is supposed to return the billboard when you click on it, but returns undefined when you're zoomed all the way out.
The pick function is in Scene.js here:
https://github.com/AnalyticalGraphicsInc/cesium/blob/master/Source/Scene/Scene.js#L3533
This is what's returning undefined. The way this picking works is that Cesium renders the scene to a framebuffer, with a unique color for each object, and then reads back those pixels to figure out what object you clicked on.
The reading of the pixels happens in PickFramebuffer.js here:
https://github.com/AnalyticalGraphicsInc/cesium/blob/master/Source/Scene/PickFramebuffer.js#L119
It is specifically this line:
var object = context.getObjectByPickColor(colorScratch);
In a successful pick, colorScratch has a non-zero value. When you zoom all the way out and click on the billboard, you get a color that is all zeros. So it's almost as if it's not correctly rendering to the pick framebuffer when you're zoomed all the way out?
@lilleyse do you have any other advice or thoughts to add?
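The color-based picking described above (a unique color per object, an all-zero readback meaning "nothing rendered here") can be illustrated in isolation; this is a sketch of the technique, not Cesium's actual code:

```python
def id_to_color(obj_id):
    """Encode a non-zero object id as an (r, g, b, a) byte tuple."""
    return ((obj_id >> 24) & 0xFF, (obj_id >> 16) & 0xFF,
            (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def color_to_id(color):
    """Decode a pixel read back from the pick framebuffer.
    An all-zero color means no object was rendered at that pixel,
    which is the 'pick returns undefined' case described above."""
    r, g, b, a = color
    return ((r << 24) | (g << 16) | (b << 8) | a) or None

print(color_to_id(id_to_color(42)))  # -> 42
print(color_to_id((0, 0, 0, 0)))     # -> None
```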
Opened a fix here https://github.com/AnalyticalGraphicsInc/cesium/issues/7868
|
2025-04-01T04:10:09.567538
| 2016-10-06T21:49:56
|
181534737
|
{
"authors": [
"hpinkos",
"jhwohlgemuth",
"mramato",
"pjcozzi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13382",
"repo": "AnalyticalGraphicsInc/cesium",
"url": "https://github.com/AnalyticalGraphicsInc/cesium/pull/4411"
}
|
gharchive/pull-request
|
electron-prebuilt --> electron
From the electron-prebuilt repo installation instructions:
...use either name, but electron is recommended, as the electron-prebuilt name is deprecated, and will only be published until the end of 2016.
electron-prebuilt is not deprecated yet, but better to be ahead of the curve, right?
Thanks @jhwohlgemuth. Even though this is a dirt simple change, we still need a CLA from you to review and merge this, as described at https://github.com/AnalyticalGraphicsInc/cesium/blob/master/CONTRIBUTING.md#opening-a-pull-request
If for some reason you can't sign the CLA, I'd be happy to vouch and remake this change in another branch.
Thanks again.
My bad 😰
I totally read the contributing guide, but I was so focused on not adding my name to the CONTRIBUTORS.md file (I thought to myself, "Greenkeeper.io is not on it" 😄)... I forgot the whole "legal" part of signing a CLA...
In any case, I emailed a signed CLA, but you are free to do whatever is easiest for you.
Sorry again for causing more harm than good.
No worries @jhwohlgemuth. Thanks! I can confirm we've received your CLA
I'll let @mramato merge this
We have a CLA from @jhwohlgemuth. Thanks again for the contribution!
Thanks @jhwohlgemuth!
|
2025-04-01T04:10:09.572570
| 2023-10-03T16:54:54
|
1924555803
|
{
"authors": [
"Anandsg",
"ShahIsCoding"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13383",
"repo": "Anandsg/Hungry-hero",
"url": "https://github.com/Anandsg/Hungry-hero/pull/42"
}
|
gharchive/pull-request
|
adding a popup message
hi @Anandsg,
i have added the popup functionality for adding and removing favorite restaurants.
@ShahIsCoding The popup message and code changes are fine, but could we tweak the popup's appearance using Tailwind CSS? Right now, it's tucked away on the left side of the screen, and it's easy to miss. It would be better if we could center it or move it to the middle or top of the page for better visibility.
Good idea @Anandsg. I have shifted the popup from the bottom left to the top right. I think in the center it would be an unnecessary distraction.
@ShahIsCoding Could you please add yourself to the README.md file as a contributor and create a new PR?
|
2025-04-01T04:10:09.592506
| 2018-10-13T21:27:29
|
369843047
|
{
"authors": [
"AndreaM16",
"karmakaze"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13384",
"repo": "AndreaM16/aws-sdk-go-bindings",
"url": "https://github.com/AndreaM16/aws-sdk-go-bindings/pull/11"
}
|
gharchive/pull-request
|
Stop using new(T); Start using &T{} #10
I just ran:
for x in `find . -name \*.go`; do sed -i '' -e 's/new(\([^)]*\))/\&\1{}/' $x; done
There was one that wasn't changed:
pkg/aws/dynamodb/service.go:
- out := new(GetItemOutput)
+ out := &GetItemOutput{}
This results in the error message: invalid pointer type *GetItemOutput for composite literal.
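The sed one-liner above boils down to a single regex substitution; the same rewrite in Python, for illustration:

```python
import re

# new(T) -> &T{}, the same rewrite the sed command performs.
pattern = re.compile(r'new\(([^)]*)\)')

print(pattern.sub(r'&\1{}', "out := new(GetItemOutput)"))
# -> out := &GetItemOutput{}
```

Note the substitution itself handles the GetItemOutput line fine; the failure there is the Go compiler rejecting a composite literal of a pointer type, not the regex missing the case.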
@AndreaM16 Updated with struct member initializers as noted.
Thanks for your time @karmakaze . Great job!
|
2025-04-01T04:10:09.624776
| 2018-03-11T09:55:33
|
304145108
|
{
"authors": [
"AndrewGaspar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13385",
"repo": "AndrewGaspar/cmake-cargo",
"url": "https://github.com/AndrewGaspar/cmake-cargo/issues/14"
}
|
gharchive/issue
|
Validate the required CMake Version
I've been using 3.0, but I don't know if this is accurate to the features we're using.
Requires 3.12
|
2025-04-01T04:10:09.648003
| 2024-10-17T09:19:33
|
2594131348
|
{
"authors": [
"Alegzandra",
"AndyTheFactory"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13387",
"repo": "AndyTheFactory/romanian-nlp-datasets",
"url": "https://github.com/AndyTheFactory/romanian-nlp-datasets/pull/2"
}
|
gharchive/pull-request
|
Update README.md
Added SART and COVIDSentiRO datasets.
Nice work, congrats
Thanks, to you too!
|
2025-04-01T04:10:09.649024
| 2017-01-18T06:34:51
|
201491427
|
{
"authors": [
"l34Um1",
"warpdragon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13388",
"repo": "AngelArenaAllstars/suggestions",
"url": "https://github.com/AngelArenaAllstars/suggestions/issues/26"
}
|
gharchive/issue
|
[UI] Auto-Health bar segment adjustment
It would be nice to have a script that automatically sets the health bar segment length for heroes like Axe depending on the current level of Culling Blade, for Oracle depending on his level of Purifying Flames, etc.
No. Do the math yourself. Closed.
|
2025-04-01T04:10:09.717713
| 2024-07-26T18:17:17
|
2432759936
|
{
"authors": [
"Damini2004",
"varshith257"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13389",
"repo": "Anishkagupta04/RAPIDOC-HEALTHCARE-WEBSITE-",
"url": "https://github.com/Anishkagupta04/RAPIDOC-HEALTHCARE-WEBSITE-/pull/1032"
}
|
gharchive/pull-request
|
Animated emoji Addition
Description
Animated emoji Addition
Related Issues
[Cite any related issue(s) this pull request addresses. If none, simply state “None”]
Closes #995
Type of PR
[ ] Bug fix
[x] Feature enhancement
[ ] Documentation update
[ ] Other (specify): _______________
Screenshots / videos (if applicable)
[Attach any relevant screenshots or videos demonstrating the changes]
Checklist
[x] I have gone through the contributing guide
[x] I have updated my branch and synced it with project main branch before making this PR
[x] I have performed a self-review of my code
[x] I have tested the changes thoroughly before submitting this pull request.
[x] I have provided relevant issue numbers, screenshots, and videos after making the changes.
[x] I have commented my code, particularly in hard-to-understand areas.
Additional context:
[Include any additional information or context that might be helpful for reviewers.]
@varshith257 review it.
@varshith257 kindly review and merge it.
@Damini2004 Solve conflicts and tag me for quick review
@varshith257 Have the conflicts been resolved?
@varshith257 review it.
|
2025-04-01T04:10:09.724504
| 2024-10-22T13:01:24
|
2605394345
|
{
"authors": [
"Ankith080",
"NavyaSree1603"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13390",
"repo": "Ankith080/Project5",
"url": "https://github.com/Ankith080/Project5/issues/34"
}
|
gharchive/issue
|
Updating loginRegister.jsx and loginRegister.css
Add password field and enhance login for added security.
This issue has been addressed! I’ve added the password field and enhanced the login functionality for better security in loginRegister.jsx and loginRegister.css. Everything has been tested and is now live. Thank you for your input, @Ankith080! If you have any further suggestions or feedback, feel free to reach out. Closing this issue now!
|
2025-04-01T04:10:09.808919
| 2024-10-12T18:10:49
|
2583306974
|
{
"authors": [
"AnthonyMichaelTDM"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13391",
"repo": "AnthonyMichaelTDM/mecomp",
"url": "https://github.com/AnthonyMichaelTDM/mecomp/issues/160"
}
|
gharchive/issue
|
bug(daemon/windows): can't parse music paths from Mecomp.toml if drive included in library path
If a path in the library_paths field of Mecomp.toml includes the drive (e.g. C:\Foo\Bar\Baz), the daemon will fail to parse the config file and crash.
Two things need to be done for this to be considered fixed:
properly handle paths that include the drive
give users more feedback in the event that the daemon crashes before logging is set up
Saw this happening on a friend's install, but strangely, I can't reproduce it...
|
2025-04-01T04:10:09.818820
| 2018-12-30T11:22:40
|
394864447
|
{
"authors": [
"AntonyCorbett",
"tenisak"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13392",
"repo": "AntonyCorbett/OnlyM",
"url": "https://github.com/AntonyCorbett/OnlyM/issues/168"
}
|
gharchive/issue
|
The DELETE command does not delete the image from the list
Describe the bug
The DELETE command (in the Command panel) does not delete the image from the playlist. Nothing happens. I've found that the image has a read-only attribute.
Expected behavior
Delete the file (even if it has a read-only attribute). <-- Preferred solution
Or show some information why deletion has not occurred.
Desktop (please complete the following information):
Windows 10 1803
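For what it's worth, the preferred fix usually amounts to clearing the read-only attribute before deleting. A minimal sketch in Python (illustrative only — OnlyM itself is a C# app, so this is the idea, not the actual implementation):

```python
import os
import stat

def force_delete(path):
    # Clear the read-only attribute first so deletion succeeds even
    # for files marked read-only, then remove the file.
    os.chmod(path, stat.S_IWRITE)
    os.remove(path)
```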
@tenisak A message is now displayed in the notification bar if a file cannot be deleted (for whatever reason)
|
2025-04-01T04:10:09.852939
| 2023-11-04T11:11:57
|
1977285752
|
{
"authors": [
"Nik-V9",
"TheProjectsGuy",
"babyporgrammer"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13393",
"repo": "AnyLoc/AnyLoc",
"url": "https://github.com/AnyLoc/AnyLoc/issues/18"
}
|
gharchive/issue
|
Recall reproduce
Hi, thanks for your amazing work; the performance is absolutely impressive. Just one question: I am trying to reproduce the recall-rate results, but I can't find the code for it. Is there any plan to release this part of the code? Your reply would be much appreciated.
Hey @babyporgrammer,
Thanks for taking an interest in our work. I might have misunderstood your question about the recall rate, but I'll try my best to answer.
I am trying to reproduce the result of Recall rate, but find there is no code for it?
Though our demos do not have the code for reproducing recalls (because their purpose is only to do global feature extraction), we have the code for estimating recalls in our scripts like in dino_v2_vlad (which is our main AnyLoc-VLAD-DINOv2 method)
https://github.com/AnyLoc/AnyLoc/blob/2ae462dbce05a2741fe8d075a1313488804e29a3/scripts/dino_v2_vlad.py#L371-L376
The function get_top_k_recall can be found in utilities.py of the main repository (not the demo folder).
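For reference, the core of a recall@k computation (independent of the repo's exact implementation — the function and argument names here are illustrative, not AnyLoc's API) can be sketched as:

```python
import numpy as np

def top_k_recall(distances, ground_truth, ks=(1, 5, 10)):
    """Recall@k: fraction of queries whose k nearest database items
    include at least one true match.
    distances    -- (num_queries, num_db) distance matrix
    ground_truth -- ground_truth[i] is a set of correct db indices for query i
    """
    order = np.argsort(distances, axis=1)  # ascending: closest first
    recalls = {}
    for k in ks:
        hits = sum(
            1 for i, gt in enumerate(ground_truth)
            if gt & set(order[i, :k].tolist())
        )
        recalls[k] = hits / len(ground_truth)
    return recalls
```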
Let me know if I was able to answer your query.
@TheProjectsGuy Maybe you should add a Benchmarking sub-section to the main README?
@TheProjectsGuy The scripts folder needs to be updated. It takes a lot of work to try out the best AnyLoc method.
Could you make a benchmarking folder with bash scripts for the best-performing variant of AnyLoc-DINOv2 (Different ViT architectures) and AnyLoc-DINO?
Also, people can select the ViT architecture and dataset through args.
|
2025-04-01T04:10:09.879187
| 2022-03-05T20:46:47
|
1160442724
|
{
"authors": [
"unparalleled-js"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13394",
"repo": "ApeWorX/ape",
"url": "https://github.com/ApeWorX/ape/pull/545"
}
|
gharchive/pull-request
|
fix: handle exiting pytest when provider never connects
What I did
fixes: #525
Before output:
collected 0 items / 1 error
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERROR: (ProviderNotConnectedError) Not connected to a network provider
New output:
==================================================================================== no tests ran in 0.01s =====================================================================================
How I did it
Don't try to disconnect if connect never happens
Remove logger error stmt from ape_test._cli.py as it is just noise and does not add anything (pytest output is what matters)
This works exactly the same using pytest without ape installed.
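The pattern behind the fix (a simplified sketch, not ape's actual class structure — class and method names here are invented for illustration) is just a guard on teardown:

```python
class ProviderSession:
    """Minimal sketch of a connect/disconnect lifecycle."""

    def __init__(self):
        self._connected = False

    def connect(self):
        # ... establish the network provider connection ...
        self._connected = True

    def disconnect(self):
        # Skip teardown when connect() never ran, so exiting pytest
        # early does not surface a ProviderNotConnectedError.
        if not self._connected:
            return
        # ... tear down the connection ...
        self._connected = False
```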
How to verify it
go to a directory without tests and run ape test
See the expected outputs above for both main and this branch!
Checklist
[ ] Passes all linting checks (pre-commit and CI jobs)
[ ] New test cases have been added and are passing
[ ] Documentation has been updated
[ ] PR title follows Conventional Commit standard (will be automatically included in the changelog)
I think it may be possible to run some of our "functional" tests using ape test since that works the same as pytest now.
We can consider that during the test refactor.
|
2025-04-01T04:10:09.907051
| 2020-07-10T09:58:14
|
654665757
|
{
"authors": [
"Addi2020",
"AlexandrZabolotny",
"kevin-y-wang"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13395",
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/11741"
}
|
gharchive/issue
|
MRAC vs MPC control?
Apollo 5.5
Q1. Apollo 5.5 introduces MRAC (model reference adaptive control) together with LQR control for steering control (lateral control). What is the advantage of using MRAC with LQR instead of MPC?
Q2. If use MRAC then we need to tune both LQR
( matrix_q: 0.05
matrix_q: 0.0
matrix_q: 1.0
matrix_q: 0.0)
and
MRAC gain values
(adaption_state_gain: 1.0
adaption_desired_gain: 1.0
adaption_nonlinear_gain:1.0
adaption_matrix_p: 1.0) ?
I used both LQR and MPC control on a real car. LQR control tuning did not give stable steering control, whereas after tuning, MPC control gave much better steering control on the Apollo 3.5 platform.
To clarify: MRAC control is actually an additional controller stacked on top of the lateral control. MRAC can be cascaded with both LQR and MPC; i.e., we may use LQR + MRAC or MPC + MRAC as the lateral controller. The purpose of the MRAC design is to address the by-wire steering actuation system dynamics and time latency, which are neglected in both the LQR and MPC design processes.
In particular, the MRAC uses the steering command from either LQR or MPC as input and generates a compensated steering command, which may improve the response of the steering control.
Hopefully this helps to explain the relations among LQR, MPC and MRAC.
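As a rough illustration only (a toy scalar model with made-up names, not Apollo's actual implementation), the cascade can be pictured as an adaptive compensator sitting between the LQR/MPC output and the steering actuator:

```python
import numpy as np

def mrac_compensate(u_ctrl, x_actual, x_ref, theta, gamma, dt):
    """One toy scalar MRAC step: adapt gains toward the reference model,
    then emit a compensated steering command.
    u_ctrl   -- command from the outer LQR or MPC loop
    x_actual -- measured steering actuator state
    x_ref    -- reference-model state (ideal actuator response)
    theta    -- adaptive gains, loosely analogous to the
                adaption_state_gain / adaption_desired_gain parameters
    gamma    -- adaptation rates
    """
    e = x_actual - x_ref                                            # model-tracking error
    theta = theta - gamma * e * np.array([x_actual, u_ctrl]) * dt   # gradient-style update
    u_comp = theta[0] * x_actual + theta[1] * u_ctrl                # compensated command
    return u_comp, theta
```

This is only meant to convey why both the outer-loop weights (matrix_q) and the MRAC adaptation gains need tuning: they act on different loops.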
@kevin-y-wang Thank you for your explanation.
As in Apollo 5.5 MRAC is added to LQR, we can similarly add MRAC to MPC ourselves to get better steering control performance.
Hello colleagues!
I am trying to tune a control module on a real car.
I will try to run both MPC and LQR+PID
I get the following results:
1. In the case of LQR+PID
Good speed control (longitudinal control), but poor lateral control. When cornering, it does not follow the desired trajectory and therefore dynamically turns the steering wheel.
2. In the case of MPC
It follows the trajectory well, smoothly turns the steering wheel, and provides a comfortable ride. But speed control is poor: there is a very large longitudinal error, and the car does not stop at the required point.
I want to use MPC. So I have questions.
Is it possible to set up longitudinal control for MPC? If so, how?
Is it possible to use MPC for lateral control, but PID for longitudinal?
|
2025-04-01T04:10:09.989463
| 2019-04-12T14:56:50
|
432608379
|
{
"authors": [
"FlorentRevest",
"freeHackOfJeff",
"natashadsouza"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13396",
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/issues/7840"
}
|
gharchive/issue
|
Calibration tools compatibilty with 3rd party cameras
In our project, we want to use the cameras integrated to our LIDAR, the HESAI Pandora. (listed as supported by Apollo)
However, when running the camera-camera or camera-lidar calibration tools, we observed that the tools would crash.
First, it would assert that the camera dimensions are 1920x1080; however, the Pandora cameras have a resolution of 1280x720. Couldn't the tools be extended to support this resolution?
The tool would also crash when trying to read the images because the Pandora driver publishes images in RGB format and the tools expect YUV images. Again, could the calibration tools be extended to support RGB images?
As a workaround, I made a tool that scales images up and converts them from RGB to YUV: https://github.com/FlorentRevest/apollo/blob/v3.0/modules/tools/pandora_camera_scaler/pandora_camera_scaler.py After converting a bag full of feature-points to a format accepted by the tools, I am able to run the camera-camera calibration without crashes. However, the tool never stops running and never outputs any calibration files. Am I still missing something?
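The workaround's two steps — upscaling to the expected 1920x1080 and converting RGB to YUV — can be sketched with numpy using BT.601 coefficients (this mirrors the idea of the linked script, not its exact code):

```python
import numpy as np

def rgb_to_yuv_scaled(img_rgb, out_h=1080, out_w=1920):
    """Nearest-neighbour upscale an RGB frame, then convert to YUV (BT.601)."""
    h, w, _ = img_rgb.shape
    rows = np.arange(out_h) * h // out_h          # source row per output row
    cols = np.arange(out_w) * w // out_w          # source col per output col
    scaled = img_rgb[rows][:, cols].astype(np.float32)
    bt601 = np.array([[ 0.299,  0.587,  0.114],   # Y
                      [-0.147, -0.289,  0.436],   # U
                      [ 0.615, -0.515, -0.100]],  # V
                     dtype=np.float32)
    return scaled @ bt601.T
```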
@FlorentRevest Many thanks for trying out and help testing calibration tools! The current tool is still in development and improvement stage, and we'd like to get more input in order to cover more scenarios. Also, we'll take your questions and issues into consideration for further development.
Good to hear @freeHackOfJeff, thank you !
Just for the record, if anyone stumbles upon this issue while trying to calibrate a HESAI Pandora, I would recommend extracting both the extrinsic and intrinsic calibration from what's recorded in the device itself.
https://github.com/HesaiTechnology/pandora_projection_ros/blob/master/src/pandora_projection.cc#L546
This does not solve the radar-camera calibration problem and doesn't help with other camera types but it can help someone in the same situation as me.
@FlorentRevest thank you for sharing your fix with the community. We really appreciate your contribution! Closing this issue currently.
|
2025-04-01T04:10:09.990518
| 2019-01-30T09:57:05
|
404689113
|
{
"authors": [
"freeHackOfJeff"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13397",
"repo": "ApolloAuto/apollo",
"url": "https://github.com/ApolloAuto/apollo/pull/6676"
}
|
gharchive/pull-request
|
Perception: fix ObjectInRoiTest that the size should be 0 after clear…
… a vector
Looks like Xiangquan has also changed the relevant file; I'll close this PR once the other fix you mentioned is ready, thanks!
|
2025-04-01T04:10:10.016810
| 2020-09-24T23:16:48
|
708530313
|
{
"authors": [
"Apoorve73",
"docs4opensource",
"viveksahu26"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13398",
"repo": "Apoorve73/racefor100.github.io",
"url": "https://github.com/Apoorve73/racefor100.github.io/issues/31"
}
|
gharchive/issue
|
Improve Readme
[x] Add the Game Rules and workflow(as seen by the player).
[x] Add a GIF for the working of Game,
The app is purely static, so can easily be cloned and used. :)
I'm organizing a documentation event for Hacktoberfest, and this looks like a good issue for some of our first-timers. Do you have the Game Rules someone could use to create the docs with? If so, someone from our event could take this on. https://organize.mlh.io/participants/events/4451-hacktoberfest-documentation-meetup-kick-off-event
This will be really great @docs4opensource ! The game rules have been mentioned on the website itself. Find them here at https://apoorve73.github.io/racefor100.github.io/ . So feel free to check it. The same needs to be added in an elegant manner to the repo.
Ask any queries, if you have any doubts. You can pass this on to someone from the event! :)
All the best :+1:
Hey @Apoorve73, is this work open for first-time users or new contributors? I would like to work on it.
This is open for all! Do give a read to the rules mentioned in the README regarding "How to make your PR?". You can start working on this. If you pass the bot's checks, I will review it at my end.
|
2025-04-01T04:10:10.039421
| 2017-10-05T14:12:20
|
263140702
|
{
"authors": [
"kie-ci",
"manstis"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13399",
"repo": "AppFormer/uberfire",
"url": "https://github.com/AppFormer/uberfire/pull/859"
}
|
gharchive/pull-request
|
GUVNOR-3524: Do not expose unsupported features by UI (e.g. scorecards, decision trees)
See https://issues.jboss.org/browse/GUVNOR-3524
This PR adds PermissionTreeProvider for "registered editors" (i.e. a white-list of editors we want to control access to).
Part of an emsemble:
https://github.com/AppFormer/uberfire/pull/859
https://github.com/kiegroup/kie-wb-common/pull/1211
https://github.com/kiegroup/drools-wb/pull/641
https://github.com/kiegroup/kie-wb-distributions/pull/631
@dgutierr Mind taking a look please (you probably know most about this area).
@jomarko Changes completed.
Build finished. 3164 tests run, 5 skipped, 0 failed.
|
2025-04-01T04:10:10.057903
| 2018-03-09T12:39:27
|
303828225
|
{
"authors": [
"gagabu",
"gickis"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13400",
"repo": "AppMetrics/AppMetrics",
"url": "https://github.com/AppMetrics/AppMetrics/pull/238"
}
|
gharchive/pull-request
|
Fix issue #235
The issue or feature being addressed
Issue #235
Details on the issue fix or feature implementation
Fixed an issue where tags from options were not included in the metric name when the metric was used without explicitly provided tags.
Confirm the following
[x] I have ensured that I have merged the latest changes from the dev branch
[x] I have successfully run a local build
[x] I have included unit tests for the issue/feature
[x] I have included the github issue number in my commits
As figured out, this commit causes a significant performance regression (CPU and memory).
In my tests, execution time grows from ~70 ns to ~1 ms.
Memory allocation rises from 200 B to 984 B per call, which is not suitable for heavily loaded applications.
You can also notice it here
|
2025-04-01T04:10:10.096392
| 2024-12-05T13:05:37
|
2720369067
|
{
"authors": [
"adrtsc",
"jluethi"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13401",
"repo": "Apricot-Therapeutics/APx_fractal_task_collection",
"url": "https://github.com/Apricot-Therapeutics/APx_fractal_task_collection/pull/8"
}
|
gharchive/pull-request
|
Add task metadata & github CI
Hey @adrtsc
For when you're back from your holidays: We're adding metadata to the tasks to be able to build better task interfaces. I've created this PR to show how this metadata is structured. The core fields for each task are:
Category: Conversion, Segmentation, Registration, Measurement, Image Processing
Modality: HCS, lightsheet, EM
Tags: Arbitrary strings that are searchable and provide info about the task
Category & Modality work by convention, but you can add new entries when useful.
Additionally, there is now an "authors" field for the whole package.
In order for the future public Fractal page to list your tasks (see https://github.com/fractal-analytics-platform/fractal-analytics-platform.github.io/issues/6), the package needs to either be on PyPI or have github releases with whl files. As part of this PR, I'm adding a Github CI that would run tests and create releases with whl files upon tags. Let me know if that works for you.
Additionally, the CI includes an easy option to activate publishing to PyPI (which would be great for your package!). Feel free to turn it on by removing the if false option on the PyPI publishing in build_and_test.yml and following the setup steps here: https://github.com/fractal-analytics-platform/fractal-tasks-template/blob/main/DEVELOPERS_GUIDE.md
=> Pypi instructions: https://docs.pypi.org/trusted-publishers
What remains to be done:
Check whether my suggestions in the image list make sense
Add additional metadata, e.g. more tags
It would be great to mark the tasks as HCS specific which assume that there is a plate. I suspect many of the compound tasks will qualify for that. Any task that works for HCS, but also works for an arbitrary list of zarr_urls doesn't need that modality tag.
Decide whether the Github CI & whl releases is something you want
Decide whether you want to publish the package on PyPI
Hi Joel,
|
2025-04-01T04:10:10.099345
| 2023-04-25T14:35:13
|
1683298988
|
{
"authors": [
"AquaAuma",
"jepa"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13402",
"repo": "AquaAuma/FishGlob_data",
"url": "https://github.com/AquaAuma/FishGlob_data/issues/19"
}
|
gharchive/issue
|
Nor-BTS survey years
@LaurenePecuchet advises to remove years prior to 2004 because data are not good enough for biodiversity studies?
Or maybe just mention in the usage notes of the paper? Using that data for other purposes could still be good
I suggest we put a note in the summary report. But we might want to discuss as a group what our overall decission is and apply it for all surveys
@LaurenePecuchet added another issue about this elsewhere, so I'm closing this one as they relate to the same issue
|
2025-04-01T04:10:10.245633
| 2018-12-10T19:21:00
|
389448544
|
{
"authors": [
"eapodaca",
"jeremy-moffitt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13403",
"repo": "ArdanaCLM/ardana-installer-ui",
"url": "https://github.com/ArdanaCLM/ardana-installer-ui/pull/255"
}
|
gharchive/pull-request
|
SCRD-5901 Fix some issues when fetching the monasca server status
handle missing server in internal model, not sure how that happened to me
Handle when state cannot be translated -> 'Unknown'
Handle exception is thrown -> 'Unknown'
latest version looks ok to me at a glance, I'm rebuilding dogwood in hopes of testing it just in case
@GarySmith see updates.
|
2025-04-01T04:10:10.350446
| 2023-10-11T08:08:44
|
1937118074
|
{
"authors": [
"matejv",
"wols"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13404",
"repo": "ArnesSI/netbox-inventory",
"url": "https://github.com/ArnesSI/netbox-inventory/issues/135"
}
|
gharchive/issue
|
Warranty division by zero
Error
<class 'ZeroDivisionError'>
division by zero
Python version: 3.11.5
NetBox version: 3.6.3
Plugins:
netbox_attachments: 3.0.1
netbox_inventory: 1.5.1
Changelog:
Difference
{
"purchase": null,
"warranty_end": null,
"warranty_start": null
}
{
"purchase": 11,
"warranty_end": "2011-02-16",
"warranty_start": "2011-02-16"
}
Probably a date difference of at least one day should be required?
Manual fixed:
update netbox_inventory_asset set warranty_end = '2013-02-16' where purchase = 11;
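The guard that is probably needed (a sketch only — netbox-inventory's real progress calculation may differ) is simply to special-case a zero-length warranty span:

```python
from datetime import date

def warranty_progress(start, end, today=None):
    """Fraction of the warranty period elapsed, guarding the
    start == end case that triggers ZeroDivisionError."""
    today = today or date.today()
    total_days = (end - start).days
    if total_days <= 0:
        return 1.0  # zero/negative span: treat warranty as fully elapsed
    elapsed = (today - start).days
    return min(max(elapsed / total_days, 0.0), 1.0)
```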
Could you let me know on which URL did this error occur?
Sorry,
I updated an asset: /plugins/inventory/assets/ID/edit/
and added the purchase and warranty date.
|
2025-04-01T04:10:10.363277
| 2024-06-01T20:03:58
|
2329316607
|
{
"authors": [
"chokosabe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13405",
"repo": "ArroyoSystems/arroyo",
"url": "https://github.com/ArroyoSystems/arroyo/issues/641"
}
|
gharchive/issue
|
Register Worker call fails on Kubernetes
Have an instance running on kubernetes for ~ 10 days. Suddenly getting errors.
panicked at /app/crates/arroyo-worker/src/lib.rs:297:14: called Result::unwrap() on an Err value: Status { code: FailedPrecondition, message: "Cannot handle message for job_vR6Gen2XNs: State machine is inactive", metadata: MetadataMap { headers: {"content-type": "application/grpc", "date": "Sat, 01 Jun 2024 19:10:43 GMT", "content-length": "0"} }, source: None } panic.file="/app/crates/arroyo-worker/src/lib.rs" panic.line=297 panic.column=14
This seems to be a reference to this:
https://github.com/ArroyoSystems/arroyo/blob/faa29a546bdb1bbc300ac2f9731c0dcc02b77bbe/crates/arroyo-worker/src/lib.rs#L297
This issue was still there after an updated deploy, so it could well be the environment that's the issue, or something retained in the namespace.
Forgot to add that on the Frontend, I get this error:
"failed to tear down existing cluster"
This is after the check (which passes fine). The error is generated on preview and/or when running the pipeline.
This got resolved by clearing out the artifacts and checkpoints on AWS and also by deleting all the replicasets for the existing workers. I don't know which of these fixed things. It'd be great if the error message pointed out exactly what the app was trying to do when it errored, i.e. which pods or replicasets it was trying to delete when the error was triggered.
|
2025-04-01T04:10:10.397332
| 2021-03-25T06:56:55
|
840624210
|
{
"authors": [
"Artanicus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13406",
"repo": "Artanicus/python-cozify",
"url": "https://github.com/Artanicus/python-cozify/issues/16"
}
|
gharchive/issue
|
Some calls are confusing when unauthenticated
If there is no valid cloud state saved:
%> util/devicelist.py
WARNING:absl:Cloud attribute remotetoken not found in state.
Traceback (most recent call last):
File "util/devicelist.py", line 21, in <module>
main()
File "util/devicelist.py", line 11, in main
devs = hub.devices()
File "/home/artanicus/python-cozify.git/cozify/hub.py", line 40, in devices
_fill_kwargs(kwargs)
File "/home/artanicus/python-cozify.git/cozify/hub.py", line 542, in _fill_kwargs
kwargs['cloud_token'] = cloud.token()
File "/home/artanicus/python-cozify.git/cozify/cloud.py", line 346, in token
return _getAttr('remotetoken')
File "/home/artanicus/python-cozify.git/cozify/cloud.py", line 304, in _getAttr
raise AttributeError
AttributeError
Instead, it should detect that auth hasn't been done successfully and raise an error about that, not an internal-consistency AttributeError.
This is a bit better today but still not great:
%> util/devicelist.py
CRITICAL:absl:Default hub not known, you should run cozify.authenticate()
Traceback (most recent call last):
File "util/devicelist.py", line 21, in <module>
main()
File "util/devicelist.py", line 11, in main
devs = hub.devices()
File "/home/artanicus/.local/lib/python3.8/site-packages/cozify/hub.py", line 37, in devices
_fill_kwargs(kwargs)
File "/home/artanicus/.local/lib/python3.8/site-packages/cozify/hub.py", line 546, in _fill_kwargs
kwargs['hub_id'] = _get_id(**kwargs)
File "/home/artanicus/.local/lib/python3.8/site-packages/cozify/hub.py", line 535, in _get_id
return default()
File "/home/artanicus/.local/lib/python3.8/site-packages/cozify/hub.py", line 456, in default
raise AttributeError
AttributeError
|
2025-04-01T04:10:10.405462
| 2023-11-27T12:05:20
|
2012128242
|
{
"authors": [
"Jon-b-m",
"hummelstrand"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13407",
"repo": "Artificial-Pancreas/iAPS",
"url": "https://github.com/Artificial-Pancreas/iAPS/issues/387"
}
|
gharchive/issue
|
Give 2.2.8 the "latest" tag
GitHub shows 2.2.7 as the "Latest" release.
OK. Thanks. Will
Check
Fixed
|
2025-04-01T04:10:10.411933
| 2018-09-21T12:49:02
|
362600322
|
{
"authors": [
"bjornreppen",
"coveralls"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13408",
"repo": "Artsdatabanken/ratatouille",
"url": "https://github.com/Artsdatabanken/ratatouille/pull/689"
}
|
gharchive/pull-request
|
Søk ux
A picture tells a thousand words
Before this PR
Coverage increased (+0.1%) to 47.662% when pulling 7d600975737107f7776c0499dd5efaeae4babb6c on bjornreppen:søk-ux into<PHONE_NUMBER>1affd38e40d75600eb8b180a00d6e2 on Artsdatabanken:master.
|
2025-04-01T04:10:10.416887
| 2023-12-12T00:59:00
|
2036785601
|
{
"authors": [
"ArturKalach",
"ckknight"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13409",
"repo": "ArturKalach/react-native-a11y",
"url": "https://github.com/ArturKalach/react-native-a11y/issues/45"
}
|
gharchive/issue
|
FocusWrapperProps should not have required props
In https://github.com/ArturKalach/react-native-a11y/blob/d9d55bde8c11465620de612e8683f9f6c6b8f30c/src/components/KeyboardFocusView/RCA11yFocusWrapper/RCA11yFocusWrapper.types.ts#L22-L23, both onKeyUpPress and onKeyDownPress are both required properties.
They should be optional (and the components work as expected when they are optional).
Documentation within README.md already specifies that these are optional.
Hello @ckknight,
Thank you for creating the issue and highlighting this. It helps me move forward :)
I will update this in a few days. Unfortunately, I don't have the possibility nor the resources to polish this lib; the projects I am working on don't use it, and I don't have enough test cases.
If you need any additional functionality or have ideas for what could be added to the lib, feel free to share them. I have ideas, but I'm not sure what would be better to implement first, or whether it is really needed.
Hello @ckknight,
I fixed this issue and added some additional changes.
I created a release candidate: https://www.npmjs.com/package/react-native-a11y/v/0.4.3-rc and want to check it with different platforms and architectures. It can take some time, but I hope it will be finished this month.
Thank you for the help.
Artur Kalach
|
2025-04-01T04:10:10.483362
| 2016-07-05T17:37:45
|
163905437
|
{
"authors": [
"Ashok-Varma",
"amanzan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13410",
"repo": "Ashok-Varma/BottomNavigation",
"url": "https://github.com/Ashok-Varma/BottomNavigation/issues/52"
}
|
gharchive/issue
|
Cross-Fade animation switching views?
Would it be possible to add a cross-fade animation when switching views, like it is stated in Material guidelines?
This library works like tabs: you need to change the views as the user clicks on tabs. You can use fragments and fragment transactions to do this.
|
2025-04-01T04:10:10.488583
| 2022-06-17T07:52:34
|
1274697655
|
{
"authors": [
"Cygnific",
"ronh991",
"y-khodja"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13411",
"repo": "AsoboStudio/glTF-Blender-IO-MSFS",
"url": "https://github.com/AsoboStudio/glTF-Blender-IO-MSFS/issues/159"
}
|
gharchive/issue
|
Typo in msfs_light.py
Current Behavior
msfs_light.py:
blender_node.msfs_light_has_symmetry = extension.get("has_symmetry")
Asobo named it "has_simmetry" and uses that spelling.
Expected Behavior
Have "has_simmetry" exported,,
Steps To Reproduce
Export a light and compare to an Asobo model.
Environment
- OS:
- Blender:
- glTF-Blender-IO-MSFS:
Anything else?
No response
Line 42 should be changed. @pepperoni505, should I do a PR, as Eric of Asobo says this repo can still be updated?
This has been fixed for SU12 !
Thank you for your report.
|
2025-04-01T04:10:10.528694
| 2023-05-23T14:52:54
|
1722247732
|
{
"authors": [
"codespool"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13413",
"repo": "AstarNetwork/swanky-cli",
"url": "https://github.com/AstarNetwork/swanky-cli/pull/163"
}
|
gharchive/pull-request
|
Feature/add base image
Add swanky-base image Dockerfile and a readme with instructions.
Add devcontainer.json using the swanky-base image.
No changes to the code, just added the configs, so I'll just merge directly.
|
2025-04-01T04:10:10.539091
| 2023-09-02T09:19:29
|
1878514203
|
{
"authors": [
"AlexanderKrutov",
"Atque",
"safaritn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13414",
"repo": "Astrarium/Astrarium",
"url": "https://github.com/Astrarium/Astrarium/issues/110"
}
|
gharchive/issue
|
Problem displaying the menu
Hi,
I'm very pleased with your excellent Astrarium software. Good luck with its further development.
After installing and running it, the menu displays "{Menu.Map}" instead of the name itself. How can I correct this? Thank you in advance.
Hello,
thank you for the positive feedback about the app!
Seems that Astrarium was not installed correctly, try to reinstall it. Also could you be so kind to attach a screenshot with the problem.
This is a capture of the problem:
This is the problem:
I also have this problem. A solution is to change language to Russian and then back to English. However, this needs to be done every time Astrarium is restarted.
Finally I've found the root cause. This will be fixed in the upcoming release. Thanks everyone for reporting the problem.
|
2025-04-01T04:10:10.540353
| 2022-09-08T21:17:30
|
1366998426
|
{
"authors": [
"jandykwan",
"marvinamari",
"theocean154"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13415",
"repo": "AsyncAlgoTrading/aat",
"url": "https://github.com/AsyncAlgoTrading/aat/pull/185"
}
|
gharchive/pull-request
|
Migrate project to rust
Apologies for the blank PR content, I promise I will fill this in when its ready!
It's great work. Right here waiting...
Hey @theocean154, are you still working on this? I was wondering whether it is worth contributing to the C++ code or waiting for this to merge, since it seems it will replace all of the C++ code.
|
2025-04-01T04:10:10.542013
| 2017-10-07T14:08:32
|
263643915
|
{
"authors": [
"MiloszKrajewski",
"raskolnikoov"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13416",
"repo": "AsynkronIT/protoactor-dotnet",
"url": "https://github.com/AsynkronIT/protoactor-dotnet/issues/337"
}
|
gharchive/issue
|
MySql (and actually "any sql") provider
Hi,
I wrote an AnySql persistence provider which allows using any SQL database as storage, with no external dependencies at all. On top of this I've implemented a MySql provider, and I'm testing it this weekend.
My question is: do you want me to pull-request it or create a separate repository?
Thanks,
Milosz
@MiloszKrajewski thanks for your contribution. We already have a provider implementation for MSSQL in our repo that we maintain. Feel free to make a PR for your MySQL implementation, and place your "any sql" solution in your own repo for future consideration for now.
|
2025-04-01T04:10:10.546278
| 2017-10-13T10:39:26
|
265245248
|
{
"authors": [
"rogeralsing"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13417",
"repo": "AsynkronIT/protoactor-dotnet",
"url": "https://github.com/AsynkronIT/protoactor-dotnet/pull/342"
}
|
gharchive/pull-request
|
Cluster stability test
This PR adds a test harness for the cluster support.
It spawns a client node and then 10 worker nodes and starts communicating with grains on all those nodes
From my tests, these changes made the cluster a lot more stable, I have not yet encountered any issues after this.
I pushed a new commit. At 40 nodes, even though the Consul UI reports all nodes as healthy,
Cluster.GetAsync returns Unavailable.
The cluster now looks stable at 100 nodes.
I've seen a few failed RPC calls, represented as "$" in the output.
There is, however, a considerable amount of lag; not sure if this is due to the 100 processes weighing down the machine, or if there is some proto.actor-related issue causing this.
With fewer nodes, everything seems to run fine
Some observations:
Using a unique cluster name per run seems to make the cluster more stable.
My guess is that we have an issue where there are members from previous runs registered on the same address/port/clustername which are seen as separate registrations by consul.
This might confuse the cluster topology.
Waiting for the cluster to stabilize before starting to communicate with grains makes the cluster behave perfectly.
My guess here is that when there is a surge of new nodes joining, this messes up the partition actor as new members join taking of the same hash slices.
Two sides to this, it's unlikely that this happens in reality, members might be added or removed a few at a time, but adding 50 or more members in one go is unlikely unless running at huge scale.
Even if the above sentences are true, we should still try to mitigate this and fix this as good as possible.
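The hash-slice guess above can be illustrated with a toy consistent-hash ring. This is not proto.actor's actual partition algorithm, and the node/grain names are invented; it only shows that a surge of simultaneous joins reassigns most keys, while a single join moves only a small fraction.

```python
import hashlib

def ring(nodes):
    # Each node hashes to one point on the ring; sorted by position.
    return sorted((int(hashlib.md5(n.encode()).hexdigest(), 16), n) for n in nodes)

def owner(key, r):
    # A key is owned by the first node clockwise from its hash.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    for pos, node in r:
        if h <= pos:
            return node
    return r[0][1]  # wrap around

nodes_before = [f"node{i}" for i in range(10)]
nodes_after = nodes_before + [f"node{i}" for i in range(10, 60)]  # 50 nodes join at once
keys = [f"grain{i}" for i in range(1000)]

r_before, r_after = ring(nodes_before), ring(nodes_after)
moved = sum(owner(k, r_before) != owner(k, r_after) for k in keys)

# For contrast: a single joining node moves at most a subset of those keys.
r_one = ring(nodes_before + ["node10"])
moved_one = sum(owner(k, r_before) != owner(k, r_one) for k in keys)
```

Every key reassigned by the single join is also reassigned by the mass join, so `moved_one` never exceeds `moved`; a surge of joins therefore reshuffles ownership on a much larger scale.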
|
2025-04-01T04:10:10.552614
| 2023-11-13T02:29:13
|
1989769715
|
{
"authors": [
"D3vil0p3r",
"sherloCod3"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13418",
"repo": "Athena-OS/athena-iso",
"url": "https://github.com/Athena-OS/athena-iso/issues/87"
}
|
gharchive/issue
|
[QUESTION]: Hyprland configs error installation
Hello, first of all, I would like to thank you for the work on the beautiful distro. I would like to ask for help with a problem during the installation of the Hyprland version.
When I tried to install for the first time, I received an error with the package "python-material-color-utilities". The error was "Maximum file size exceeded error: failed retrieving file..." while downloading and installing "athena-hyprland-config-439.c10884e8-1". This "python-material" package was initially trying to download from "chaotic-aur" using the default settings. Even after clearing the pacman cache and packages, reinstalling the chaotic keys, updating the mirrorlist, and trying again, the error persisted. This happened during the installer process and also after the partial installation that resulted from the error. I opened a TTY and downloaded the problematic package from the AUR, and the installation finished, but your version's configurations were not installed, only the programs.
I am currently using the basic example version, without polybar or waybar, just the raw version of Hyprland. Regarding the error with the mentioned package, I tried several solutions found on Google, but without success. Even after installing the necessary packages, the configurations did not install properly. If you could help me, please. Thank you in advance!
Hyprland is still in development.
Ok, got it, thank you!
|
2025-04-01T04:10:10.555305
| 2019-07-19T13:54:09
|
470338197
|
{
"authors": [
"AtiLion",
"sakuraimikoto33"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13419",
"repo": "AtiLion/VRCExtended",
"url": "https://github.com/AtiLion/VRCExtended/issues/2"
}
|
gharchive/issue
|
[Request]Exclude set Inside DynamicBoneCollider
Please exclude colliders which are set to inside
If you use an avatar gimmick with an inside collider, you will pull another person's Dynamic Bones
sorry I used google translation
I am not entirely sure what you mean by "excluding the collider which is set to inside".
Do you wish for colliders that are not in your hands to be ignored? If so, please refer to issue #1; if not, please explain in a bit more detail.
I'm talking about the Dynamic Bone Collider Bound setting
I want to exclude this one because it pulls Dynamic Bone of others when targeted
sorry I used google translation
Feature has been added to the latest version.
Thank you for the recommendation!
|
2025-04-01T04:10:10.654276
| 2020-11-02T23:20:01
|
734903732
|
{
"authors": [
"scala-steward"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13420",
"repo": "Atry/Curried.scala",
"url": "https://github.com/Atry/Curried.scala/pull/78"
}
|
gharchive/pull-request
|
Update sbt-sonatype to 3.9.5
Updates org.xerial.sbt:sbt-sonatype from 3.9.2 to 3.9.5.
GitHub Release Notes - Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.xerial.sbt", artifactId = "sbt-sonatype" } ]
labels: sbt-plugin-update, semver-patch
Superseded by #80.
|
2025-04-01T04:10:10.697277
| 2021-11-28T01:48:21
|
1065162927
|
{
"authors": [
"Auburn",
"fragrant-teapot"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13421",
"repo": "Auburn/FastNoiseLite",
"url": "https://github.com/Auburn/FastNoiseLite/pull/82"
}
|
gharchive/pull-request
|
(minor) Java example typo and redundant if
Removed if checks that always defaulted to true; renamed the mPingPongStength parameter to mPingPongStrength. This is a minor performance improvement and a minor stylistic update.
Thanks, will probably need to fix this in the other languages too
|
2025-04-01T04:10:10.731838
| 2021-03-16T16:20:06
|
832980145
|
{
"authors": [
"bconfortin",
"matt-bullock"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13422",
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/10869"
}
|
gharchive/issue
|
Make augur.net deploy automatically
Check the possibility and what needs to be done so that augur.net is deployed automatically.
@bconfortin not for design review. please close if done
|
2025-04-01T04:10:10.733681
| 2020-06-24T20:20:25
|
644921431
|
{
"authors": [
"matt-bullock",
"nuevoalex"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13423",
"repo": "AugurProject/augur",
"url": "https://github.com/AugurProject/augur/issues/8162"
}
|
gharchive/issue
|
Markets available for dispute are not appearing in the dispute UI
Markets which are available for disputing do not appear in the "Markets In Dispute" page anymore.
One oddity is that if I have dispute stake in a market and check the box to "only" show markets in my portfolio the non warp-sync one does appear:
showing up now. closing
|
2025-04-01T04:10:10.742737
| 2024-08-22T02:59:31
|
2479661088
|
{
"authors": [
"AuriRex",
"Zwx372"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13424",
"repo": "AuriRex/GTFO_TheArchive",
"url": "https://github.com/AuriRex/GTFO_TheArchive/issues/74"
}
|
gharchive/issue
|
Looks like TheArchive no longer works in GTFO after the last update
I have done everything in the installation instructions and played the modded game via R2MM, but nothing is showing up in the menu
Yeah, I can see what might have happened here haha
I haven't set the latest version as a proper release, so the instructions link to an older version ...
Make sure to grab the latest pre-release (v0.7.1) from the Releases page instead and then it should work.
Sorry! haha
I got the latest v0.7.1 to work, but it has a bug with showing weapon stats; only the melee stat shows. The other functions work for me.
|
2025-04-01T04:10:10.792707
| 2018-09-25T19:36:08
|
363734517
|
{
"authors": [
"andreasschacht",
"miguelcobain",
"ryedeer"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13425",
"repo": "AutoCloud/ember-cli-deploy-simply-ssh",
"url": "https://github.com/AutoCloud/ember-cli-deploy-simply-ssh/pull/9"
}
|
gharchive/pull-request
|
fix upload without revision
The command fails if the releasesDir doesn't exist. Fixed by rearranging the condition.
Can this be merged, please?
The updated version with this PR merged is on npm now.
Thank you!
|
2025-04-01T04:10:10.928261
| 2017-11-08T17:20:34
|
272280744
|
{
"authors": [
"sirreal"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13426",
"repo": "Automattic/photon.js",
"url": "https://github.com/Automattic/photon.js/pull/7"
}
|
gharchive/pull-request
|
Prep 2.0.1 for release
This PR includes
Fix #6
Update deps
build dist/photon.js
Version bump
Next
Squash-n-merge
Release 2.0.1
I'd just like a sanity check as I've never worked on this project before but am wrangling the release 🙏
I tested browser build dist/photon.js with https://raw.githubusercontent.com/Automattic/photon.js/167e29626ebd03c774e25127cce36d58e54a7829/dist/photon.js
Looks 👌
|
2025-04-01T04:10:10.957009
| 2014-11-22T03:12:35
|
49770577
|
{
"authors": [
"goalsoft",
"rauchg",
"swmoon203"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13427",
"repo": "Automattic/socket.io-redis",
"url": "https://github.com/Automattic/socket.io-redis/issues/43"
}
|
gharchive/issue
|
Problem using UNIX domain socket
It fails on decoding a message:
var args = msgpack.decode(msg);
events.js:72
throw er; // Unhandled 'error' event
^
SyntaxError: Unexpected token �
at Object.parse (native)
I removed the socket option; it made no sense
How would it not make sense? A Unix socket is faster
Because in any case there would be two sockets, considering we need to open connections to Redis.
|
2025-04-01T04:10:11.172447
| 2023-06-06T01:23:13
|
1742895190
|
{
"authors": [
"dotNET-anykey",
"rabbitism",
"timunie",
"wmchuang"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13428",
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/11664"
}
|
gharchive/issue
|
Setting Transitions in Button style causes memory leak
Describe the bug
Setting Transitions in Button style causes memory leak
To Reproduce
Steps to reproduce the behavior:
https://github.com/AvaloniaUI/Avalonia.Samples/tree/main/src/Avalonia.Samples/Routing/BasicViewLocatorSample
Use this project. upgrade to 11.0.0-rc1.1
Add code.
run it
keep clicking on next back button
Desktop (please complete the following information):
OS: Windows
Version 11.0.0-preview6 11.0.0-rc1.1
https://github.com/AvaloniaUI/Avalonia/pull/10173
Has it been fixed ?
@wmchuang after the PR you have linked is already merged, please try your sample again but this time using latest master. See https://github.com/AvaloniaUI/Avalonia/wiki/Using-nightly-build-feed for instructions.
PS: I cannot reproduce it on my machine at least.
@timunie that PR was merged 4 months ago; it's already in the RC.
Tried this on 11.0.3 and the problem is still present.
Commented transitions - no memory leak. Uncommented transitions - memory leak.
@dotNET-anykey please open a new issue and attach a minimal sample. It may be a similar issue but a bit different.
Idk, it's just identical :D
|
2025-04-01T04:10:11.180382
| 2024-04-10T21:08:50
|
2236431123
|
{
"authors": [
"laolarou726",
"maxkatz6",
"timunie"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13429",
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/issues/15313"
}
|
gharchive/issue
|
Proposal: Add an AVA_DESIGN_TIME preprocessor symbol to help write designer-specific code
Is your feature request related to a problem? Please describe.
Currently I'm using MSDI as my Dependency Injection container. However, because I inject all the required components through the VMs' and UserControls' constructors, the Avalonia previewer stops working:
Describe the solution you'd like
Introducing a preprocessor symbol would easily solve this issue:
#if AVA_DESIGN_TIME
public RandomUserControl() // introduce an empty ctor for the previewer
{
InitializeComponent();
...
DataContext = new A();
}
#else
public RandomUserControl(A a, B b, C c)
{
InitializeComponent();
...
DataContext = a;
}
#endif
The designer doesn't have control over the compilation process. It uses precompiled binaries.
So this feature is a bit problematic to implement.
Any possible ways to provide an empty ctor for the previewer at design time? Currently all of my preview windows are dead because the constructors take parameters.
@laolarou726 we technically can reuse Design.PreviewWith property.
Something like:
public static class DesignData
{
public static MyWindow Window => new MyWindow(DesignParam1, DesignParam2);
}
<Window x:Class="MyWindow"
Design.PreviewWith="{x:Static DesignData.MyWindow}">
<Button />
</Window>
Where previewer will take Design.PreviewWith instance instead of creating object itself.
The problem, though, is that with the current designer/PreviewWith code it will likely go into recursion.
I think parameterless constructor is allowed to be marked as internal
We have if (Design.IsDesignMode) which can be used to have a code path covering that.
Maybe the above can be combined with #if DEBUG, so the code is not present in release builds.
Internal ctor should be supported even now.
Thanks! I'll try internal ctor with #if DEBUG check first
@maxkatz6 @timunie Looks like the previewer is still not working with an internal ctor:
System.Xaml.XamlException: “Unable to find public constructor for type XXXViewModel()
Yeah, but that one is for your ViewModel; check the error message carefully!
@laolarou726 can you confirm the suggested solution works for you?
Yes. It does work on some of my classes. But not all of them, I'm still trying to figure out why.
|
2025-04-01T04:10:11.185056
| 2018-06-29T13:01:52
|
336990311
|
{
"authors": [
"grokys",
"jkoritzinsky",
"jmacato",
"nc4rrillo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13430",
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/1715"
}
|
gharchive/pull-request
|
WIP: Unify render animation timers
Unifies the (unnecessarily) separated Timers for Animations and the Main loop.
Fixes #1706
We have some problems regarding the new animations algorithm and this PR. I determined that this PR is causing some flickering issues on looping animations when using the new algo...
I like this PR a lot and would love to get it merged. What's left to do here?
ImmediateRenderer is going to need to be made aware of this too, otherwise the Animations will never be Pulse'd
This also heavily conflicts with #1793. I think this PR is going to have to be restarted once that is merged.
Also we have another problem with this PR. After I wrote it, I realised that exposing the frame count in the timer wasn't a good idea because it makes timers FPS-dependent. My idea was to make the timers produce TimeSpans. However I then went on vacation and @jkoritzinsky made some changes which makes the animation system heavily dependent on timers producing frame counts :/
My idea was that we'd hack the current animation system to get it working with these new timers and then #1793 would revamp the animation system. However, everything has now got mixed together: #1793 is based on the old broken timers, the animation system in this PR has been refactored in a way that makes it difficult to work with TimeSpan, and I'm not sure how to continue...
I've committed a new WIP PR which changes the timers to be the way I was imagining, but now everything is hopelessly broken and animations don't run.
I'm thinking it might be best to burn this PR to the ground and start again...
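The FPS-dependence concern above can be sketched with a toy comparison (this is not Avalonia's API): advancing an animation by frame count ties its progress to the timer rate, while advancing by elapsed time does not.

```python
def progress_by_frames(frames_elapsed: int, duration_frames: int) -> float:
    # Progress depends on how many ticks the timer produced.
    return min(frames_elapsed / duration_frames, 1.0)

def progress_by_time(elapsed_seconds: float, duration_seconds: float) -> float:
    # Progress depends only on wall-clock time.
    return min(elapsed_seconds / duration_seconds, 1.0)

# The same 0.5s of wall time for a 1-second (60-frame) animation:
p60 = progress_by_frames(30, 60)  # timer ran at 60 FPS: 30 ticks
p30 = progress_by_frames(15, 60)  # timer ran at 30 FPS: 15 ticks, wrong progress
pt = progress_by_time(0.5, 1.0)   # independent of the timer rate
```

A frame-count timer running at half the rate reports half the progress for the same elapsed time, which is exactly why timers producing TimeSpans are preferable.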
This last issue is actually caused by #1879. So this PR is ready for review!
cc: @danwalmsley @grokys @jmacato
Fixes #1706.
@jkoritzinsky This is looking good! :smile:
|
2025-04-01T04:10:11.186170
| 2021-02-10T11:56:01
|
805447635
|
{
"authors": [
"YohDeadfall"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13431",
"repo": "AvaloniaUI/Avalonia",
"url": "https://github.com/AvaloniaUI/Avalonia/pull/5462"
}
|
gharchive/pull-request
|
HasFlagCustom made safe and supports all enums
Continuation of #4873 which branch I accidentally dropped.
@grokys, rebased on the top of master. It's ready for review.
|
2025-04-01T04:10:11.203529
| 2018-10-04T15:05:02
|
366835585
|
{
"authors": [
"focalplane",
"jflynn129"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13432",
"repo": "Avnet/wnc14a2a-driver",
"url": "https://github.com/Avnet/wnc14a2a-driver/pull/12"
}
|
gharchive/pull-request
|
Update WncController.cpp
removing GPLv3
@focalplane I updated with a new routine and left the original code in for you to compare, but it is your code so I don't want to be the final say on merging it back in. Please look it over and if you agree, can you remove the #if 0 code and do the merge?
Hi, the newer implementation is likely slower than the original that is GPL3; the prior uses both / and %. If you think the newer one is fast enough for your purposes, it is functionally equivalent as far as I can tell for the answer/output.
Regards,
-Fred
@focalplane Thanks for the feedback, when will you be able to "look it over and if you agree, can you remove the #if 0 code and do the merge?"
I am not going to be able to make time to do any testing, so I don't see why you want me to remove the preprocessor directive. I'm fine with the change, so go ahead!
Thanks!
Sorry, but it's not my code, it's yours. I made the suggestion; that's all I'm going to do.
You have changed the code in the past and have already made this change, but I understand.
Have a good weekend!
|
2025-04-01T04:10:11.216571
| 2023-03-15T08:18:06
|
1624988488
|
{
"authors": [
"ampratt",
"jafin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13433",
"repo": "AxaGuilDEv/react-oidc",
"url": "https://github.com/AxaGuilDEv/react-oidc/issues/999"
}
|
gharchive/issue
|
/.well-known/openid-configuration being automatically duplicated in configuration authority field
I am coming from using https://www.npmjs.com/package/@axa-fr/react-oidc-context in my CRA React app. I am migrating to Vite and changing the oidc library to this one.
My configuration has always used this format for the authority:
authority: "https://<my-keycloack-idp/realms/test/.well-known/openid-configuration"
Now after migrating, calls to my idp fail because the request is sent with the ending duplicated:
https://<my-keycloack-idp/realms/test/.well-known/openid-configuration/.well-known/openid-configuration
Any clues or knowledge of the source of the issue?
I can see here that /.well-known/openid-configuration is appended to the OpenIdIssuerUrl
https://github.com/AxaGuilDEv/react-oidc/blob/5e0684f2ab86007abc5e76e0190a2b92982497ab/packages/react/src/oidc/vanilla/requests.ts#L8-L11
A comment would be good. Even better would be to make it configurable.
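A suffix-safe join, along the lines of the configurability suggested above, only appends the discovery path when it is not already present. The helper below is an illustrative sketch, not react-oidc's actual API:

```python
WELL_KNOWN = "/.well-known/openid-configuration"

def well_known_url(authority: str) -> str:
    # Append the discovery suffix only when the authority
    # doesn't already end with it, avoiding the duplication.
    base = authority.rstrip("/")
    return base if base.endswith(WELL_KNOWN) else base + WELL_KNOWN
```

With this guard, both a bare authority and one that already includes the discovery document resolve to the same URL.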
|
2025-04-01T04:10:11.227455
| 2018-07-03T09:55:56
|
337826812
|
{
"authors": [
"abuijze",
"dzonekl",
"smcvb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13434",
"repo": "AxonFramework/AxonFramework",
"url": "https://github.com/AxonFramework/AxonFramework/issues/643"
}
|
gharchive/issue
|
Add API to query a Tracking Token whether it handled a given Event
In some scenarios it would be beneficial to be able to query a TrackingToken whether it has handled given EventMessage. If you for example require a given set of Event Processors to be ensured of having handled that event, this would be the most ideal action.
Currently TrackingToken#covers(TrackingToken) does provide something along these lines, but it does much more than just 'asking if you've handled an event'.
Whether this would be a utility implementation or a direct addition to the TrackingToken is up for debate.
This would be nice indeed. For our needs, we have built an endpoint to query all our TEPs for their status; knowing whether they have caught up also tells you that all events have been handled. One idea would be to allow configuring the AxonServer UI to read this endpoint, so alongside reporting the tracking event processors, the AxonServer dashboard could actually also get status details, giving a complete view of event processing by calling this endpoint. We could even agree on a standard port/path, so no config is required. (The host name of the TEP is already known to AxonServer.)
This is what the endpoint currently offers in Swagger format:
swagger: '2.0'
info:
title: 'Axon Admin API'
description: 'API specification for managing Axon'
version: '0.0.1'
schemes:
- http
basePath: /
produces:
- 'application/json;charset=UTF-8'
paths:
/axon/processors/:
get:
summary: 'Get the tracking event processors'
description: 'Retrieves the event processors registered in Axon.'
operationId: 'getTrackingEventProcessors'
responses:
200:
description: 'The tracking event processors are returned'
schema:
$ref: '#/definitions/Processors'
400:
description: 'The request is not valid'
503:
description: 'Service currently unavailable'
/axon/processors/replay/{processingGroup}:
parameters:
- name: 'processingGroup'
in: path
description: 'The name of the tracking event processor'
required: true
type: string
post:
operationId: 'replayTrackingEventProcessor'
summary: 'replay events on a tracking event processor'
description: 'Replay events on a processor. It will stop the processor, reset the tokens and start the processor again.'
responses:
200:
description: 'Replay initiated.'
definitions:
Processors:
type: array
description: 'A list of processors'
items:
$ref: '#/definitions/Processor'
Processor:
type: object
properties:
name:
type: string
description: 'The name of the processor'
segments:
$ref: '#/definitions/ProcessorSegmentsStatus'
state:
$ref: '#/definitions/ProcessorState'
threads:
$ref: '#/definitions/ProcessorThreads'
ProcessorSegmentsStatus:
type: array
description: 'The processor status'
items:
$ref: '#/definitions/ProcessorSegmentStatus'
ProcessorSegmentStatus:
type: object
description: 'The processor segment status'
properties:
segmentId:
type: integer
description: 'The segment identifier'
caughtUp:
type: boolean
description: 'If the processor is caught up with the event store.'
replaying:
type: boolean
description: 'If the processor is currently replaying'
trackingToken:
type: string
description: 'The tracking token'
ProcessorState:
type: string
description: The Tracking processors current state.
enum: ['RUNNING', 'PAUSED_ERROR', 'UNDEFINED']
ProcessorThreads:
type: object
description: 'The thread configuration for this processor'
properties:
activeThreads:
type: integer
description: 'Current active threads'
availableThreads:
type: integer
description: 'Available threads'
This feature has been superseded by #1063, which exposes the position of a token as a Long. By comparing the position of the Event's token with the position of the processor, one gets an indication whether an event has been handled.
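The position comparison can be sketched language-neutrally; the helper below is illustrative, not Axon's actual API:

```python
def has_handled(processor_position, event_position) -> bool:
    # A processor has passed an event once its token position
    # has reached (or moved beyond) the event's position.
    if processor_position is None:  # no token yet: nothing handled
        return False
    return processor_position >= event_position
```

In Axon terms, `processor_position` would come from the processor's tracking token and `event_position` from the token attached to the event.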
|
2025-04-01T04:10:11.234760
| 2023-02-17T22:59:54
|
1590055929
|
{
"authors": [
"JSomervilleFromAz",
"mvenzke-axway"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13435",
"repo": "Axway-API-Management-Plus/apim-cli",
"url": "https://github.com/Axway-API-Management-Plus/apim-cli/issues/349"
}
|
gharchive/issue
|
Retry-Delay does not appear in help text
APIM-CLI version
1.13.3
API-Management version
7.7
Bug description
A retry-delay option was added in https://github.com/Axway-API-Management-Plus/apim-cli/issues/213 but it does not appear to be mentioned in the help text.
Steps to reproduce
Run apim api import help
Relevant log output
>apim api import help
2023-02-16 16:28:09,378 [APIManagerCLI] INFO : API-Manager CLI: 1.13.3
2023-02-16 16:28:09,380 [APIManagerCLI] INFO : Module: API - I M P O R T (1.13.3)
usage: API-Import [-ignoreCache] [-clearCache <ALL>] [-apimCLIHome </home/chris/apim-cli>] [-force] [-p <changeme>] [-u <apiadmin>] [-port
<8181>] [-returncodes] [-h <api-host>] [-s <preprod>] [-a <swagger_file.json>] -c <api_config.json> [-ignoreQuotas] [-updateOnly]
[-useFEAPIDefinition] [-clientOrgsMode <ignore|replace|add>] [-clientAppsMode <ignore|replace|add>] [-quotaMode <ignore|replace|add>]
[-detailsExportFile <APIDetails.properties>] [-forceUpdate] [-enabledCaches <applicationsQuotaCache,*API*>] [-stageConfig
<my-staged-config.json>] [-apimanagerUrl <https://manager.domain.com>]
-ignoreCache The cache for REST-API calls against the API-Manager isn't used at all.
-clearCache <ALL> Clear the cache previously created, which will force the CLI to get fresh data from the
API-Manager.
Examples: 'ALL', '*application*', 'applicationsQuotaCache,*api*'
-apimCLIHome </home/chris/apim-cli> The absolute path to the CLI home directory containing for instance your conf folder.
You may also set the environment variable: 'AXWAY_APIM_CLI_HOME'
-force Optional flag used by different modules to enforce actions. For instance import breaking
change or delete API(s).
-p,--password <changeme> Password used to authenticate
-u,--username <apiadmin> Username used to authenticate. Please note, that this user must have Admin-Role
-port <8181> Optional parameter to declare the API-Manager port. Defaults to 8075.
-returncodes Print the possible return codes and description.
-h,--host <api-host> The API-Manager hostname the API should be imported
-s,--stage <preprod> The API-Management stage (prod, preprod, qa, etc.)
Is used to lookup the stage configuration file.
-a,--apidefinition <swagger_file.json> (Optional) The API Specification either as OpenAPI (JSON/YAML) or a WSDL for SOAP-Services:
- in local filesystem using a relative or absolute path. Example: swagger_file.json
Please note: Local filesystem is not supported for WSDLs. Please use direct URL or a
URL-Reference-File.
- a URL providing the Swagger-File or WSDL-File. Examples:
[username/password@]https://any.host.com/my/path/to/swagger.json
[username/password@]http://www.dneonline.com/calculator.asmx?wsdl
- a reference file called anyname-i-want.url which contains a line with the URL
(same format as above for OpenAPI or WSDL). If not specified, the API Specification
configuration is read directly from the API-Config file.
-c,--config <api_config.json> This is the JSON-Formatted API-Config containing information how to expose the API. You may
get that config file using apim api get with output set to JSON.
-ignoreQuotas Use this flag to ignore configured API quotas.
-updateOnly If set, an existing actual API will be updated. If no actual API is found, the CLI stops.
-useFEAPIDefinition If this flag is set, the Actual-API contains the API-Definition (e.g. Swagger) from the
FE-API instead of the original imported API.
-clientOrgsMode <ignore|replace|add> Controls how configured Client-Organizations are treated. Defaults to add!
-clientAppsMode <ignore|replace|add> Controls how configured Client-Applications are treated. Defaults to add!
-quotaMode <ignore|replace|add> Controls how quotas are managed in API-Manager. Defaults to add!
-detailsExportFile <APIDetails.properties> Configure a filename, to get a Key=Value file containing information about the created API.
-forceUpdate If set, the API is Re-Created even if the Desired- and Actual-State are equal.
-enabledCaches <applicationsQuotaCache,*API*> By default, no cache is used for import actions. However, here you can enable caches if
necessary to improve performance. Has no effect, when -gnoreCache is set. More information
on the impact: https://bit.ly/3FjXRXE
-stageConfig <my-staged-config.json> Manually provide the name of the stage configuration file to use instead of derived from
the given stage.
-apimanagerUrl <https://manager.domain.com> Instead of using host and port, you can set the entire API-Manager URL. Host and Port will
be ignored if set.
ERROR: Missing required option: c
----------------------------------------------------------------------------------------
How to import APIs
Import an API including the API-Specification using environment properties file: env.api-env.properties:
scripts\apim.bat api import -c samples/basic/minimal-config-api-specification.json -s api-env
scripts\apim.bat api import -c samples/basic/minimal-config-api-specification-filtered.json -s api-env
scripts\apim.bat api import -c samples/basic/odata-v2-northwind-api.json -s api-env
scripts\apim.bat api import -c samples/complex/complete-config.json -a ../petstore.json -h localhost -u apiadmin -p changeme
For more information and advanced examples please visit:
https://github.com/Axway-API-Management-Plus/apim-cli/tree/develop/modules/api-import/assembly/samples
https://github.com/Axway-API-Management-Plus/apim-cli/wiki
2023-02-16 16:28:09,397 [APIImportApp] ERROR: Missing required option: c
com.axway.apim.lib.errorHandling.AppException: Missing required option: c
Also, it looks like it is parsing required parameters when it should be displaying the help text:
"ERROR: Missing required option: c"
|
2025-04-01T04:10:11.239401
| 2024-11-05T13:01:55
|
2635392463
|
{
"authors": [
"olitho74"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13436",
"repo": "Axway-API-Management-Plus/apim-cli",
"url": "https://github.com/Axway-API-Management-Plus/apim-cli/issues/518"
}
|
gharchive/issue
|
Deployment of API from Azure Devops agent
APIM-CLI version
1.14.1
API-Management version
<IP_ADDRESS>2
Bug description
Hi,
I sometimes get an error on our Test environment when deploying an API.
If I restart the pipeline, it then works fine.
The error message is :
2024-11-05 08:48:19,938 [APIManagerAPIMethodAdapter] WARN : No operations found for API with id: 0a3625de-cc5f-4e64-b3a1-c1c99f882766
2024-11-05 08:48:19,938 [APIManagerAPIMethodAdapter] WARN : No operations found for API with id: 0a3625de-cc5f-4e64-b3a1-c1c99f882766
2024-11-05 08:48:20,482 [APIManagerAPIAdapter] ERROR: Failed to download original API-Specification. You may use the toggle -useFEAPIDefinition to download the Frontend-API specification instead.
It looks like the swagger file has no operations. Just before the deployment with the CLI there is a curl test on the /healthcheck endpoint of the backend API, and it works fine. I wonder what could cause this error.
The backend API is a REST API deployed in Openshift.
The message says to use the toggle -useFEAPIDefinition. Is it recommended? I've never used this before.
Steps to reproduce
deploy API from CLI in Azure Devops
Relevant log output
2024-11-05 08:48:19,938 [APIManagerAPIMethodAdapter] WARN : No operations found for API with id: 0a3625de-cc5f-4e64-b3a1-c1c99f882766
2024-11-05 08:48:19,938 [APIManagerAPIMethodAdapter] WARN : No operations found for API with id: 0a3625de-cc5f-4e64-b3a1-c1c99f882766
2024-11-05 08:48:20,403 [APIManagerAPIAdapter] INFO : Loading application quotas for 3 subscribed applications.
2024-11-05 08:48:20,482 [APIManagerAPIAdapter] ERROR: Failed to download original API-Specification. You may use the toggle -useFEAPIDefinition to download the Frontend-API specification instead.
Hi,
Ok, debug is activated. Next time I get the error, I come back here with the informations.
thanks,
Olivier
|
2025-04-01T04:10:11.257894
| 2021-06-20T09:12:45
|
925555744
|
{
"authors": [
"agamgupta2015"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13437",
"repo": "Ayush7614/Daily-Coding-DS-ALGO-Practice",
"url": "https://github.com/Ayush7614/Daily-Coding-DS-ALGO-Practice/issues/1267"
}
|
gharchive/issue
|
Boggle (Find all possible words in a board of characters) (Hard)(Using Depth first Search)
I want to work on this issue
Programming language
[ ] C
[x] C++
[ ] Java
[ ] Python
@Amit366 @ravikr126 Please assign this issue
|
2025-04-01T04:10:11.283027
| 2024-04-11T04:10:20
|
2236873458
|
{
"authors": [
"MARVELMafia",
"alfespa17"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13438",
"repo": "AzBuilder/terrakube",
"url": "https://github.com/AzBuilder/terrakube/issues/803"
}
|
gharchive/issue
|
Statefile not getting created in S3 Compatible Backend after Apply
Feedback
After being able to create IaC, I am only able to see the state of the deployment in the Terrakube UI, but I don't see anything in my S3-compatible backend storage.
The logs on the executor are as follows:
2024-04-11 03:41:17.769 ERROR 1 --- [nio-8080-exec-3] o.t.a.p.s.aws.AwsStorageTypeServiceImpl : S3 Not found: The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey; Request ID: 17C51CBE85693434; S3 Extended Request ID: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855; Proxy: null) 2024-04-11 03:41:18.368 INFO 1 --- [nio-8080-exec-1] o.t.a.p.s.c.TerraformOutputController : Reading output from storage
I have tried with both remote and local execution.
Also, what is meant by local execution? Does it execute on the Kubernetes worker node?
The local or remote execution mode is used with the CLI-driven workflow to set where Terraform will be executed: local means Terraform runs locally and just sends the updated state to Terrakube, while remote means the request is sent to Terrakube and the executor component runs it remotely.
https://docs.terrakube.io/user-guide/workspaces/cli-driven-workflow
For the other comment what do you mean when you mention you don't see anything in the backend storage? You should be able to see the logs, state, state history inside the storage.
Right, I see. I do not need a CLI-driven workflow, and Terraform can execute remotely, but the statefile should get created in the S3 bucket I specified in my .tf script, and I should also be able to view it in Terrakube, right?
As mentioned, I can see the statefile in the Terrakube UI, but when I look into the bucket via the S3 Browser I do not see any statefile getting created.
OK, let me explain this: Terrakube internally takes care of the state file using the Terrakube storage backend that you selected when you deployed Terrakube (Azure, GCP, S3 or MinIO); you don't have to write the S3 backend like the above example.
Instead of writing the backend like you have above you could use a backend like the following to set the workspace that will save the information for your infrastructure:
terraform {
  backend "remote" {
    hostname     = "8080-azbuilder-terrakube-po7evw1u15x.ws-us86.gitpod.io"
    organization = "simple"
    workspaces {
      name = "workspace1"
    }
  }
}
terraform {
  cloud {
    organization = "simple"
    hostname     = "8080-azbuilder-terrakube-sscrnu9jbie.ws-us99.gitpod.io"
    workspaces {
      name = "samplecloud"
    }
  }
}
terraform {
  cloud {
    organization = "simple"
    hostname     = "8080-azbuilder-terrakube-0musnfsxh7g.ws-us102.gitpod.io"
    workspaces {
      tags = ["development", "networking"]
    }
  }
}
To use the above examples you just need to run
terraform login TERRAKUBE-API
Hi, I have set my backend as follows and I can see the statefiles getting created when I look into the bucket.
storage:
  defaultStorage: true
  minio:
    accessKey: "X"
    secretKey: "Y"
    bucketName: "iaac-infrastructure"
    endpoint: "Z"
  default:
    endpoint: "Z"
Next I want to create the workspace name in the bucket instead of the default folders (34,35,37 etc) where the statefile gets saved.
According to your information above - what hostname is it talking about?
terraform {
  backend "minio" {
    hostname     = <what hostname?>
    organization = "simple"
    workspaces {
      name = "Test-VM"
    }
  }
}
Ideally what I want to see is a folder made for the workspace TEST-VM which contains the statefile for the applied terraform code.
You can use something like the following to manage the terraform backend for your code:
terraform {
  cloud {
    organization = "simple"
    hostname     = "TERRAKUBE API HOSTNAME"
    workspaces {
      name = "TESTVM"
    }
  }
}
I see you are using the default storage, so it will use MINIO to save the statefile and all the related information like logs, state history, etc.
Terrakube will handle internally all the file in some folders inside the MINIO storage including the state file in a path like
state/terrakube org id/terrakube workspace id/....
Sorry, but why would it be
terraform {
  cloud {
    organization = "simple"
    hostname     = "terrakube-api-service"
    workspaces {
      name = "TESTVM"
    }
  }
}
and not
terraform {
  backend "minio" {
    hostname     = "terrakube-api-service"
    organization = "simple"
    workspaces {
      name = "Test-VM"
    }
  }
}
Because Terrakube implements the cloud backend, as explained here:
https://docs.terrakube.io/user-guide/workspaces/cli-driven-workflow
Right, I see. So I used:
cloud {
  organization = "<same name as the organization defined in the Terrakube UI>"
  hostname     = "terrakube-api-service" (the name of the Kubernetes service that points to the terrakube-api pod)
  workspaces {
    name = "<same workspace name as defined in the Terrakube UI>"
  }
}
However, in the bucket I still see the statefile being created with the default Terrakube format.
You can't change the logic that terrakube uses to manage the state internally
|
2025-04-01T04:10:11.287248
| 2018-07-31T19:16:13
|
346322571
|
{
"authors": [
"Azanor",
"PassiveBrain",
"osa519"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13439",
"repo": "Azanor/thaumcraft-beta",
"url": "https://github.com/Azanor/thaumcraft-beta/issues/1223"
}
|
gharchive/issue
|
[Suggestion] Stabilizers Suggestion
Suggestions:
Give stabilizers ranks similar to the Thaumium Essentia Smeltery.
Or,
have them work like the Levitator, with a toggle where increasing RF per tick increases stabilization per tick.
Or,
make the stabilizers connectable, tip -> butt, so the RF travels through, slightly increasing stabilization per tick for each connected stabilizer.
I have 8 stabilizers (working on 12) around my matrix, and I must say it is quite weird that they have to be in each other's line of sight.
I've heard that you have to power up stabilisers with vis generators on each side. Haven't tried though.
This issue was moved to Azanor/thaumcraft-suggestions#369
|
2025-04-01T04:10:11.297706
| 2018-08-12T17:02:05
|
349827700
|
{
"authors": [
"F9Alejandro",
"aaronhowser1",
"ben-mkiv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13440",
"repo": "Azanor/thaumcraft-beta",
"url": "https://github.com/Azanor/thaumcraft-beta/issues/1294"
}
|
gharchive/issue
|
possible dimension ID conflict with Welcome to the Jungle Mod in the future
Hi,
I noticed that there's already a config option for the Outer Lands in the beta configs, which is preset to -42.
There's actually a mod, "Welcome to the Jungle", which already uses the same dimension ID,
so to avoid conflicts I would suggest choosing another dimension ID.
Welcome to the Jungle mod => https://minecraft.curseforge.com/projects/welcome-to-the-jungle
That mod has less than 10k downloads, while most Thaumcraft 6 versions have over 100k. Wouldn't this be better for Welcome to the Jungle to fix? A modpack maker could also just change the config themselves.
Thaumcraft doesn't use the dimension yet, while there are servers which already use dimension ID -42 with the Welcome to the Jungle mod. Changing their dimension ID would break all those setups,
so it's less likely to cause problems if Thaumcraft just chooses another ID.
How long has this mod been out? Also, just how many servers do you know that use this mod over TC?
This seems more of an issue with server owners and misconfigured packs than an actual mod issue.
Can the "Welcome to the Jungle" mod have its dim ID changed? Also, most mods use positive ID numbers rather than negative ones.
If this is such a big issue then change the configs yourself rather than hounding the mod authors about something that the end user already has access to change. If @Azanor really wanted that ID only for TC then there wouldn't be a config option to change it.
what a nice attitude.... :-1:
Thank you for your opinions, but although everyone is entitled to one, that doesn't mean you should push it onto others. It isn't difficult to regenerate that dim and change the TC config for that dim ID. Azanor knows that people want to be able to change it because of this issue where mods have the same IDs for dims instead of getting one assigned upon world gen (which would be a cool feature for Forge).
Going off on people because they respond to you with the reasons why things are the way they are is not helping you, and it puts more on the people who support the mod and the people who work on it. Please think about the different perspectives and decisions made before going off on people because they don't want to make changes that would take away from more pressing development (like bug fixes and other issues).
It really didn't, since Azanor isn't the one responding; I am not working for him or with him. I really like this mod and wish for it to continue. Getting vexed over something that is meant for clarification, not only for you but for others, will not help you any more than the next person who gets upset because the mod developer (Azanor) decided on something and has given all players access to change it with little to no difficulty.
Basically you are getting upset because you don't want to put effort into making a simple change that will fix your issue. What you are trying to convey is being muddled by your unwillingness to see it from others' points of view. Out of all the issues posted on this issue tracker, only a few have had major issues or conflicts with other mods. Your case isn't justified, as Azanor has already provided the means to rectify this issue.
If this is about issues with a modpack, then point the modpack author to this issue page, as it tells them all they need to do to fix the conflict of dimension IDs. Please note that the mod you are claiming to have massive issues with is still fairly young and isn't as mainstream as some of the older, more publicized mods such as this one and many others.
Everything being said, Azanor was correct in closing this, as there is no point in adding a change when it can already be changed by the end user; it is the end user that chooses not to take advantage of it. There is no reason to continue the hostility when everything has been explained to you twice and you're too ignorant to actually think it over or even attempt what was given as a solution.
Any further responses @ben-mkiv I will ignore unless it is a serious question and not rude/indignant towards others or myself.
It's not about me or you; it's about OTHER PEOPLE who aren't that familiar with reading Minecraft logs and who will require additional help to fix this in the future. You obviously didn't get it.
Then if they have an issue with it in the future, if somehow they use the two mods together in their own custom modpack, they can always open a new issue and we will point them here, as it has the answer to their question. Most people who make modpacks have at least some know-how with configs and logs.
It will be rare for people to get a modpack that doesn't already have the configs modified so it works correctly. The only time it happens is when someone who is new to making packs, or who is making one for themselves, runs into it, and then they will have questions and can always ask others for help (that is kind of the point of a community: to help each other).
What you are trying to prove is becoming more pointless than ever, as you keep saying "others" yet you are the only one complaining and trying to get a rise out of people. You also keep bringing up that changing the default dimension ID will fix conflicts; however, it won't, as there will still be conflicts where other mods take over vanilla functions and inject into them. For example, EnderIO was recently having an issue with TC that was causing crashes; it turned out a fix existed, but the mod wasn't updated in that modpack to the version containing the fix.
Trying to prove yourself to others yet getting very defensive when there is no reason to makes it seem that it is only yourself that you really care about, and that you don't want to change a simple config and regenerate that specific dimension.
Piece of advice: people don't like being told what to do and how they should do it. Take it from me, I am a hobbyist programmer; I like to try and figure things out on my own and reach out when I need help. Forcing someone to do something that you are already given the option of doing is not helpful in the least and is extremely counterproductive; if it is some sort of enhancement, then it would be added to the list of things to look into. What you are trying to do is a demand and not a request; you are not asking, you are stating.
Think before you respond to this. I know I make mistakes and I try to fix them to the best of my ability. I will not force others to make changes, only offer ideas and suggestions.
|
2025-04-01T04:10:11.303744
| 2022-11-11T18:53:54
|
1445882649
|
{
"authors": [
"SebastianTanoni",
"YakaryBovine"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13441",
"repo": "AzerothWarsLR/WarcraftLegacies",
"url": "https://github.com/AzerothWarsLR/WarcraftLegacies/issues/253"
}
|
gharchive/issue
|
Velen is alive, but the quest is not completing
Closed by https://github.com/AzerothWarsLR/WarcraftLegacies/commit/fb25ebf394c6e0d639189fdcf8d915f04bc174f5.
|
2025-04-01T04:10:11.305353
| 2017-01-18T22:17:58
|
201708438
|
{
"authors": [
"EndoZira",
"JonnyBoy2000",
"Maxie01"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13442",
"repo": "AznStevy/Maybe-Useful-Cogs",
"url": "https://github.com/AznStevy/Maybe-Useful-Cogs/pull/10"
}
|
gharchive/pull-request
|
Btw it works, was a good change not going to lie.
Maxxie helped me fix the AttributeError and the leveling message error, so his latest commit works :)
|
2025-04-01T04:10:11.307993
| 2023-12-07T16:44:54
|
2031138173
|
{
"authors": [
"ludamad",
"swapilar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13443",
"repo": "AztecProtocol/aztec-packages",
"url": "https://github.com/AztecProtocol/aztec-packages/pull/3619"
}
|
gharchive/pull-request
|
Swapilar patch 4
Please provide a paragraph or two giving a summary of the change, including relevant motivation and context.
These improvements are part of the Cryptoversidad grant proposal.
Appreciate the work but not sure this fits our documentation. Doesn't seem like any maintainer has picked this up - so, sorry but I'll be closing this as part of PR cleanup
|
2025-04-01T04:10:11.327777
| 2020-04-01T12:34:02
|
591880922
|
{
"authors": [
"raynaldgirard",
"rbrundritt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13444",
"repo": "Azure-Samples/AzureMapsCodeSamples",
"url": "https://github.com/Azure-Samples/AzureMapsCodeSamples/issues/31"
}
|
gharchive/issue
|
Maps with Blazor
Hello, could you make examples using Blazor (C# code)? Thank you
That requires creating a .NET wrapper for the whole SDK which is out of scope for this sample project. There is a user voice feature request here: https://feedback.azure.com/forums/909172-azure-maps/suggestions/39695650-blazor
Once a library is created, then a separate samples project would be created for that library.
|
2025-04-01T04:10:11.374071
| 2023-10-04T14:53:14
|
1926400251
|
{
"authors": [
"don007git",
"pamelafox"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13445",
"repo": "Azure-Samples/azure-search-openai-demo",
"url": "https://github.com/Azure-Samples/azure-search-openai-demo/issues/733"
}
|
gharchive/issue
|
Container app-backend-### couldn't be started
Please provide us with the following information:
This issue is for a: (mark with an x)
- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce
azd up is successful with a clean, working endpoint.
Clicking the endpoint returns the ":( Application Error" page.
The diagnose page shows "container app-backend-### couldn't be started" with the error captured below.
Any log messages given by the failure
Container app-backend-dn33npvkxysqs_0_2ac5daa7 couldn't be started: Logs = 2023-10-04T14:22:00.089783572Z _____
2023-10-04T14:22:00.089840373Z / _ \ __________ _________ ____
2023-10-04T14:22:00.089847473Z / /\ \__ / | _ __ _/ __ \
2023-10-04T14:22:00.089852073Z / | / /| | /| | /\ /
2023-10-04T14:22:00.089856573Z _|__ /_____ _/ || __ >
2023-10-04T14:22:00.089861173Z / / /
2023-10-04T14:22:00.089865574Z A P P S E R V I C E O N L I N U X
2023-10-04T14:22:00.089869774Z
2023-10-04T14:22:00.089873674Z Documentation: http://aka.ms/webapp-linux
2023-10-04T14:22:00.089877774Z Python 3.10.12
2023-10-04T14:22:00.089881874Z Note: Any data outside '/home' is not persisted
2023-10-04T14:22:02.068123283Z Starting OpenBSD Secure Shell server: sshd.
2023-10-04T14:22:02.336896992Z Site's appCommandLine: python3 -m gunicorn main:app
2023-10-04T14:22:02.907228403Z Starting periodic command scheduler: cron.
2023-10-04T14:22:02.907259704Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'python3 -m gunicorn main:app'
2023-10-04T14:22:02.975888809Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2023-10-04T14:22:02.984749616Z Build Operation ID: c6a4c673cbbc7c12
2023-10-04T14:22:02.986537758Z Oryx Version: 0.2.20230707.1, Commit: 0bd28e69919b5e8beba451e8677e3345f0be8361, ReleaseTagName: 20230707.1
2023-10-04T14:22:02.987974592Z Output is compressed. Extracting it...
2023-10-04T14:22:02.995145959Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8dbc45cb581a465'...
2023-10-04T14:22:35.626161917Z App path is set to '/tmp/8dbc45cb581a465'
2023-10-04T14:22:35.674904866Z Writing output script to '/opt/startup/startup.sh'
2023-10-04T14:22:36.323596372Z Using packages from virtual environment antenv located at /tmp/8dbc45cb581a465/antenv.
2023-10-04T14:22:36.326567181Z Updated PYTHONPATH to '/opt/startup/app_logs:/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages'
2023-10-04T14:22:40.336020140Z [2023-10-04 14:22:40 +0000] [74] [INFO] Starting gunicorn 20.1.0
2023-10-04T14:22:40.340178053Z [2023-10-04 14:22:40 +0000] [74] [INFO] Listening at: http://<IP_ADDRESS>:8000 (74)
2023-10-04T14:22:40.341586258Z [2023-10-04 14:22:40 +0000] [74] [INFO] Using worker: uvicorn.workers.UvicornWorker
2023-10-04T14:22:40.360850921Z [2023-10-04 14:22:40 +0000] [75] [INFO] Booting worker with pid: 75
2023-10-04T14:22:40.394033329Z [2023-10-04 14:22:40 +0000] [76] [INFO] Booting worker with pid: 76
2023-10-04T14:22:40.552607047Z [2023-10-04 14:22:40 +0000] [77] [INFO] Booting worker with pid: 77
2023-10-04T14:23:07.797261116Z [2023-10-04 14:23:07 +0000] [77] [ERROR] Exception in worker process
2023-10-04T14:23:07.797698768Z Traceback (most recent call last):
2023-10-04T14:23:07.797721976Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
2023-10-04T14:23:07.797728079Z worker.init_process()
2023-10-04T14:23:07.797732880Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/uvicorn/workers.py", line 66, in init_process
2023-10-04T14:23:07.797737882Z super(UvicornWorker, self).init_process()
2023-10-04T14:23:07.797749986Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 134, in init_process
2023-10-04T14:23:07.797755388Z self.load_wsgi()
2023-10-04T14:23:07.797759990Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
2023-10-04T14:23:07.797764891Z self.wsgi = self.app.wsgi()
2023-10-04T14:23:07.797769393Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi
2023-10-04T14:23:07.797774095Z self.callable = self.load()
2023-10-04T14:23:07.797778596Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
2023-10-04T14:23:07.797783198Z return self.load_wsgiapp()
2023-10-04T14:23:07.797787699Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
2023-10-04T14:23:07.797792701Z return util.import_app(self.app_uri)
2023-10-04T14:23:07.797797203Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/util.py", line 359, in import_app
2023-10-04T14:23:07.797801904Z mod = importlib.import_module(module)
2023-10-04T14:23:07.797806406Z File "/opt/python/3.10.12/lib/python3.10/importlib/init.py", line 126, in import_module
2023-10-04T14:23:07.797811108Z return _bootstrap._gcd_import(name[level:], package, level)
2023-10-04T14:23:07.797815609Z File "", line 1050, in _gcd_import
2023-10-04T14:23:07.797820711Z File "", line 1027, in _find_and_load
2023-10-04T14:23:07.797825313Z File "", line 1006, in _find_and_load_unlocked
2023-10-04T14:23:07.797830114Z File "", line 688, in _load_unlocked
2023-10-04T14:23:07.797834716Z File "", line 883, in exec_module
2023-10-04T14:23:07.797839417Z File "", line 241, in _call_with_frames_removed
2023-10-04T14:23:07.797844219Z File "/tmp/8dbc45cb581a465/main.py", line 1, in
2023-10-04T14:23:07.797848921Z from app import create_app
2023-10-04T14:23:07.797853322Z File "/tmp/8dbc45cb581a465/app.py", line 18, in
2023-10-04T14:23:07.797858124Z from quart import (
2023-10-04T14:23:07.797865326Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/init.py", line 5, in
2023-10-04T14:23:07.797875830Z from .app import Quart as Quart
2023-10-04T14:23:07.797880532Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/app.py", line 46, in
2023-10-04T14:23:07.797885433Z from werkzeug.urls import url_quote
2023-10-04T14:23:07.797890035Z ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/werkzeug/urls.py)
2023-10-04T14:23:07.855597470Z [2023-10-04 14:23:07 +0000] [75] [ERROR] Exception in worker process
2023-10-04T14:23:07.855633583Z Traceback (most recent call last):
2023-10-04T14:23:07.855639785Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
2023-10-04T14:23:07.855644787Z worker.init_process()
2023-10-04T14:23:07.855649489Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/uvicorn/workers.py", line 66, in init_process
2023-10-04T14:23:07.855654290Z super(UvicornWorker, self).init_process()
2023-10-04T14:23:07.855658892Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 134, in init_process
2023-10-04T14:23:07.855663594Z self.load_wsgi()
2023-10-04T14:23:07.855668095Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
2023-10-04T14:23:07.855672697Z self.wsgi = self.app.wsgi()
2023-10-04T14:23:07.855677198Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi
2023-10-04T14:23:07.855681800Z self.callable = self.load()
2023-10-04T14:23:07.855686201Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
2023-10-04T14:23:07.855691403Z return self.load_wsgiapp()
2023-10-04T14:23:07.855695805Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
2023-10-04T14:23:07.855700506Z return util.import_app(self.app_uri)
2023-10-04T14:23:07.855704908Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/util.py", line 359, in import_app
2023-10-04T14:23:07.855709510Z mod = importlib.import_module(module)
2023-10-04T14:23:07.855714011Z File "/opt/python/3.10.12/lib/python3.10/importlib/init.py", line 126, in import_module
2023-10-04T14:23:07.855718613Z return _bootstrap._gcd_import(name[level:], package, level)
2023-10-04T14:23:07.855723114Z File "", line 1050, in _gcd_import
2023-10-04T14:23:07.855728116Z File "", line 1027, in _find_and_load
2023-10-04T14:23:07.855733418Z File "", line 1006, in _find_and_load_unlocked
2023-10-04T14:23:07.855738120Z File "", line 688, in _load_unlocked
2023-10-04T14:23:07.855754125Z File "", line 883, in exec_module
2023-10-04T14:23:07.855759027Z File "", line 241, in _call_with_frames_removed
2023-10-04T14:23:07.855763728Z File "/tmp/8dbc45cb581a465/main.py", line 1, in
2023-10-04T14:23:07.855768430Z from app import create_app
2023-10-04T14:23:07.855772832Z File "/tmp/8dbc45cb581a465/app.py", line 18, in
2023-10-04T14:23:07.855777533Z from quart import (
2023-10-04T14:23:07.855782835Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/init.py", line 5, in
2023-10-04T14:23:07.855787737Z from .app import Quart as Quart
2023-10-04T14:23:07.855792138Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/app.py", line 46, in
2023-10-04T14:23:07.855796840Z from werkzeug.urls import url_quote
2023-10-04T14:23:07.855801342Z ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/werkzeug/urls.py)
2023-10-04T14:23:07.855805943Z [2023-10-04 14:23:07 +0000] [76] [ERROR] Exception in worker process
2023-10-04T14:23:07.855810545Z Traceback (most recent call last):
2023-10-04T14:23:07.855814946Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
2023-10-04T14:23:07.855819548Z worker.init_process()
2023-10-04T14:23:07.855823949Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/uvicorn/workers.py", line 66, in init_process
2023-10-04T14:23:07.855828551Z super(UvicornWorker, self).init_process()
2023-10-04T14:23:07.855833053Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 134, in init_process
2023-10-04T14:23:07.855837554Z self.load_wsgi()
2023-10-04T14:23:07.855841956Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
2023-10-04T14:23:07.855846457Z self.wsgi = self.app.wsgi()
2023-10-04T14:23:07.855850859Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/base.py", line 67, in wsgi
2023-10-04T14:23:07.855855360Z self.callable = self.load()
2023-10-04T14:23:07.855859762Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
2023-10-04T14:23:07.855864264Z return self.load_wsgiapp()
2023-10-04T14:23:07.855868665Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
2023-10-04T14:23:07.855873267Z return util.import_app(self.app_uri)
2023-10-04T14:23:07.855877668Z File "/opt/python/3.10.12/lib/python3.10/site-packages/gunicorn/util.py", line 359, in import_app
2023-10-04T14:23:07.855882270Z mod = importlib.import_module(module)
2023-10-04T14:23:07.855886671Z File "/opt/python/3.10.12/lib/python3.10/importlib/__init__.py", line 126, in import_module
2023-10-04T14:23:07.855895174Z return _bootstrap._gcd_import(name[level:], package, level)
2023-10-04T14:23:07.855899876Z File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
2023-10-04T14:23:07.855904478Z File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
2023-10-04T14:23:07.855909179Z File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
2023-10-04T14:23:07.855913781Z File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
2023-10-04T14:23:07.855918783Z File "<frozen importlib._bootstrap_external>", line 883, in exec_module
2023-10-04T14:23:07.855923584Z File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
2023-10-04T14:23:07.855928186Z File "/tmp/8dbc45cb581a465/main.py", line 1, in <module>
2023-10-04T14:23:07.855932787Z from app import create_app
2023-10-04T14:23:07.855937289Z File "/tmp/8dbc45cb581a465/app.py", line 18, in <module>
2023-10-04T14:23:07.855941791Z from quart import (
2023-10-04T14:23:07.855946092Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/__init__.py", line 5, in <module>
2023-10-04T14:23:07.855950794Z from .app import Quart as Quart
2023-10-04T14:23:07.855955195Z File "/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/quart/app.py", line 46, in <module>
2023-10-04T14:23:07.855959797Z from werkzeug.urls import url_quote
2023-10-04T14:23:07.855964198Z ImportError: cannot import name 'url_quote' from 'werkzeug.urls' (/tmp/8dbc45cb581a465/antenv/lib/python3.10/site-packages/werkzeug/urls.py)
2023-10-04T14:23:07.855968900Z [2023-10-04 14:23:07 +0000] [77] [INFO] Worker exiting (pid: 77)
2023-10-04T14:23:07.855973402Z [2023-10-04 14:23:07 +0000] [75] [INFO] Worker exiting (pid: 75)
2023-10-04T14:23:07.855977803Z [2023-10-04 14:23:07 +0000] [76] [INFO] Worker exiting (pid: 76)
2023-10-04T14:23:10.190105833Z [2023-10-04 14:23:10 +0000] [74] [WARNING] Worker with pid 76 was terminated due to signal 15
2023-10-04T14:23:10.190139833Z [2023-10-04 14:23:10 +0000] [74] [WARNING] Worker with pid 75 was terminated due to signal 15
2023-10-04T14:23:10.236745130Z [2023-10-04 14:23:10 +0000] [74] [INFO] Shutting down: Master
2023-10-04T14:23:10.236815230Z [2023-10-04 14:23:10 +0000] [74] [INFO] Reason: Worker failed to boot.
Expected/desired behavior
OS and Version?
Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
the run is performed on github codespace.
Linux codespaces-836ca0 6.2.0-1012-azure #12~22.04.1-Ubuntu SMP Thu Sep 7 14:07:14 UTC 2023 x86_64 GNU/Linux
azd version?
run azd version and copy paste here.
azd version 1.3.1 (commit b5030da0f29ffe98664c40450187fd8a4cb0d157)
Versions
node -v
v16.20.2
npm -v
8.19.4
Mention any other details that might be useful
I use a free trial account with the OpenAI service approved, so all the initial steps complete cleanly — until I click the endpoint.
Thanks! We'll be in touch soon.
Please see https://github.com/Azure-Samples/azure-search-openai-demo/issues/694
Thanks Pamelafox, it is good.
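For readers hitting the same trace: the ImportError comes from Werkzeug 3.0 removing url_quote, which the installed Quart release still imports. One workaround (version caps here are illustrative assumptions — pin to whatever the linked issue recommends) is to cap Werkzeug or upgrade Quart in requirements.txt:

```text
# requirements.txt — illustrative version caps
werkzeug<3.0    # keep url_quote available until Quart is upgraded
# or instead:
# quart>=0.19   # newer Quart no longer imports url_quote
```

Redeploying after updating the pin should let the gunicorn workers boot.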
|
2025-04-01T04:10:11.383207
| 2023-07-21T07:57:21
|
1815345759
|
{
"authors": [
"jongio",
"zedy-wj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13446",
"repo": "Azure-Samples/openai-plugin-fastapi",
"url": "https://github.com/Azure-Samples/openai-plugin-fastapi/issues/8"
}
|
gharchive/issue
|
Updating/work-with this template for GHA and Azd Devcontainer feature.
We are updating this template to support new features in GitHub Actions and the Azd Devcontainer.
For the GitHub Actions feature: we will remove two lines of code for the container image and add GHA code after the checkout step in the .github/workflow/azure-dev.yml file. (Shown below)
For the Azd Devcontainer feature: we will remove the dockerfile in the .devcontainer folder and add the following code in the devcontainer.json file,
"ghcr.io/azure/azure-dev/azd:latest": {}
and image also needs to be added instead of build.
@rajeshkamal5050 , @digitarald , @jongio for notification.
Please use GA version of devcontainer, not test-azd
@jongio - Updated to GA version.
|
2025-04-01T04:10:11.391879
| 2018-09-21T22:23:01
|
362792279
|
{
"authors": [
"MikkelHegn",
"gkhanna79",
"wangcai0124"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13447",
"repo": "Azure-Samples/service-fabric-mesh",
"url": "https://github.com/Azure-Samples/service-fabric-mesh/pull/23"
}
|
gharchive/pull-request
|
Add SF Volume Disk Counter Sample and template
Purpose
Add SF Volume Disk Counter Sample and template
Copy Counter to counterSFVolumeDisk
Update service.yaml
Update schemaVersion to 1.0.0-preview2
Rename app,network,.. to be added suffix "SFVD"
Add \templates\counterSFVolumeDisk\sfvd_mesh_rp.windows.json
Does this introduce a breaking change?
[ ] Yes
[x] No
Pull Request Type
What kind of change does this Pull Request introduce?
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[ ] Other... Please describe:
How to Test
Get the code
az mesh deployment create --resource-group {rg} --template-url {…/templates/counterSFVolumeDisk/sfvd_mesh_rp.windows.json} --parameters "{'location': {'value': 'eastus'}}"
git clone [repo-address]
cd [repo-name]
git checkout [branch-name]
npm install
Test the code
What to Check
Verify that the following are valid
...
Other Information
@VipulM-MSFT Can you please help merge this?
This needs to go to the 2018-09-01-preview branch, not master. API version and schema are tied to upcoming refresh.
|
2025-04-01T04:10:11.440100
| 2022-09-22T23:54:13
|
1383135160
|
{
"authors": [
"hlipsig",
"zacharyljones"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13448",
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/pull/2418"
}
|
gharchive/pull-request
|
Update Local MDM Testing Scripts and Docs
Which issue this PR addresses:
User Story 15535660: Update Scripts/Docs for Local MDM Testing
The existing MDM test scripts do not work for simply secure enabled subscriptions.
What this PR does / why we need it:
This PR modifies the local MDM testing scripts to work in simply secure enabled subscriptions, and updates relevant documentation accordingly. It also promotes the sharing of a remote MDM enabled VM, rather than users having to create their own personal VM.
Documentation changes also correspond to #2392.
Test plan for issue:
I used these scripts to create an MDM enabled VM in a simply secure enabled subscription and then use the VM to emit metrics to Public INT Geneva.
Is there any documentation that needs to be updated for this PR?
This PR contains all necessary documentation updates.
@dem4gus - This PR contains the script/documentation updates mentioned in #2392
Very out of date. Re-open if needed.
|
2025-04-01T04:10:11.442453
| 2024-01-05T11:02:52
|
2067147141
|
{
"authors": [
"bennerv",
"jaitaiwan",
"mociarain"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13449",
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/pull/3344"
}
|
gharchive/pull-request
|
Explicitly reference 1.18 only
This PR just updates the docs to make clear which version of Go we support.
@mociarain those docs are vendored in as part of go modules. We shouldn't be updating them as we don't own the repositories which host them, and it will fail CI as a result of updating them. Any README.md s at the root level or in docs/ should be applicable.
Thanks heaps for the first go @mociarain ; As per Ben's comment I'm going to close this out.
|
2025-04-01T04:10:11.451812
| 2024-07-19T21:25:54
|
2419913396
|
{
"authors": [
"SudoBrendan",
"bitoku",
"tsatam"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13450",
"repo": "Azure/ARO-RP",
"url": "https://github.com/Azure/ARO-RP/pull/3709"
}
|
gharchive/pull-request
|
WI/MI CLI Phase 1 - Base Update Functionality
Which issue this PR addresses:
Fixes ARO-6451
What this PR does / why we need it:
Implements the ability to set additional platform workload identities on a managed identity-enabled cluster via az aro update.
Test plan for issue:
[X] Unit tests for validating the --assign-platform-workload-identity argument have been updated to account for update scenarios
[X] az aro update against a MIWI cluster results in an API call being made to the RP with the necessary fields populated
Example invocation:
az aro update \
--subscription "${SUBSCRIPTION_ID}" \
--resource-group "${CLUSTER_RESOURCE_GROUP}" \
--name azcli-testcluster \
--assign-platform-workload-identity bar qwert \
--assign-platform-workload-identity baz zxcvb
Against cluster w/ document:
Open
{
"id": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.RedHatOpenShift/openShiftClusters/azcli-testcluster",
"location": "eastus",
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/baz": {}
}
},
"properties": {
"clusterProfile": {
"pullSecret": "",
"domain": "fv0w9cvw",
"version": "4.14.16",
"resourceGroupId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/aro-fv0w9cvw",
"fipsValidatedModules": "Disabled"
},
"platformWorkloadIdentityProfile": {
"platformWorkloadIdentities": [
{
"operatorName": "foo",
"resourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/asdf"
},
{
"operatorName": "bar",
"resourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ghjkl"
}
]
},
"networkProfile": {
"podCidr": "<IP_ADDRESS>/14",
"serviceCidr": "<IP_ADDRESS>/16",
"outboundType": "",
"preconfiguredNSG": "Disabled"
},
"masterProfile": {
"vmSize": "Standard_D8s_v3",
"subnetId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.Network/virtualNetworks/aro-vnet/subnets/master-subnet",
"encryptionAtHost": "Disabled"
},
"workerProfiles": [
{
"name": "worker",
"vmSize": "Standard_D2s_v3",
"diskSizeGB": 128,
"subnetId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.Network/virtualNetworks/aro-vnet/subnets/worker-subnet",
"count": 3,
"encryptionAtHost": "Disabled"
}
],
"apiserverProfile": {
"visibility": "Public"
},
"ingressProfiles": [
{
"name": "default",
"visibility": "Public"
}
]
}
}
Produces request body:
Open
{
"properties": {
"platformWorkloadIdentityProfile": {
"platformWorkloadIdentities": [
{
"operatorName": "foo",
"resourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/asdf"
},
{
"operatorName": "bar",
"resourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/qwert"
},
{
"operatorName": "baz",
"resourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${CLUSTER_RESOURCE_GROUP}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/zxcvb"
}
]
}
}
}
[!NOTE]
The current implementation of this command allows users to replace existing platform workload identities.
We have outlined in our CLI design doc that this behavior is not supported at this time but may be in the future. We can choose to either explicitly prevent updating existing identities within the CLI right now, or to make that API-side validation only, that can be relaxed in the future without pushing a corresponding CLI change.
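The merge the CLI performs — existing identities carried over, each --assign-platform-workload-identity pair replacing an entry with a matching operatorName or appended as new — can be sketched as follows (the helper name is illustrative; the actual CLI implementation may differ):

```python
def merge_platform_workload_identities(existing, updates):
    """Merge operatorName -> resourceId updates into the existing identity list,
    replacing entries whose operatorName matches and appending new ones."""
    merged = {i["operatorName"]: i["resourceId"] for i in existing}
    merged.update(updates)  # updates win on operatorName collisions
    return [{"operatorName": k, "resourceId": v} for k, v in merged.items()]
```

Applied to the example above (existing foo/asdf and bar/ghjkl; updates bar→qwert and baz→zxcvb), this yields the three-identity request body shown, with bar's resourceId replaced.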
Is there any documentation that needs to be updated for this PR?
Not yet
How do you know this will function as expected in production?
Preview extension-only change. We will perform comprehensive testing before releasing this extension to users.
Sorry for the silly question, but why don't we support changing MI when updating?
Sorry for the silly question, but why don't we support changing MI when updating?
The RP backend is not planning to support that ability right away, (to my understanding) it's more complex than just changing coordinates. Rather than building in something to the CLI that would prevent that, just to remove it later once the RP does support it, I am proposing we just allow the CLI to attempt to replace an identity and have the API reject the request with a user-facing error, the resulting user experience will be largely the same (potentially a few milliseconds slower).
I understand that changing MI is not in the scope as well as changing WI.
I agree with the idea to handle change request in the RP side.
/azp run ci
/azp run ci
|
2025-04-01T04:10:11.460376
| 2023-05-02T23:39:53
|
1693221958
|
{
"authors": [
"alexeldeib",
"maxwolffe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13451",
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/3144"
}
|
gharchive/pull-request
|
test: Change test to check for kubelet.service file existence as well.
What type of PR is this?
/kind test
What this PR does / why we need it:
This PR adds a little more context to the test added in https://github.com/Azure/AgentBaker/pull/3104 for issue #2815
The issue #2815 describes the full set of currently required data for custom images which don't rely on the CSE script to execute.
Those are:
/etc/default/kubelet
/var/lib/kubelet/bootstrap-kubeconfig
/etc/kubernetes/certs/ca.crt
The Databricks use case also has a silly dependency on kubelet.service which I have removed and will roll out in the next month. I've added it as a required file for now, but happy not to if it's possible to otherwise prevent it from being removed in the near term?
/opt/azure/containers/kubelet.sh also shouldn't be required, but currently is because it's referenced by the kubelet.service.
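The existence check the test amounts to can be sketched as a plain Python helper — the paths come from the issue; the service-file location and the helper itself are assumptions about how such a test might be written, not the repo's actual test code:

```python
import os

# Files issue #2815 lists as required on custom images that skip the CSE script,
# plus the two temporary dependencies mentioned above.
REQUIRED_FILES = [
    "/etc/default/kubelet",
    "/var/lib/kubelet/bootstrap-kubeconfig",
    "/etc/kubernetes/certs/ca.crt",
    "/etc/systemd/system/kubelet.service",  # temporary Databricks dependency (path assumed)
    "/opt/azure/containers/kubelet.sh",     # referenced by kubelet.service
]

def missing_required_files(root="/"):
    """Return the required paths that do not exist under `root`."""
    return [p for p in REQUIRED_FILES
            if not os.path.exists(os.path.join(root, p.lstrip("/")))]
```

Pointing `root` at an image filesystem (or "/" on a running node) returns whichever required files are absent.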
Which issue(s) this PR fixes:
None
Requirements:
[x] uses conventional commit messages
[x] includes documentation
[x] adds unit tests
[x] tested upgrade from previous version
Special notes for your reviewer:
First PR in this repo so apologies if I'm missing something / happy to make changes.
I've confirmed that tests failed first, then passed with this change.
Under what conditions is it NOT enabled?
Found this - https://github.com/Azure/AgentBaker/blob/a7df3d609eac4ad608ccd3e84ba193565c236243/pkg/agent/utils.go#L355 - but perhaps someone has context already?
Thanks!
Release note:
none
@alexeldeib - updated the commit messages to pass linting - could you re-approve tests at your convenience?
Also - e2e tests failed on the last run due to az login failed - any other follow up I need to make there?
unfortunately we have some annoying rbac issues running the e2e/generated tests from forks today :(
the lint is fine after rebase, I'm about to break it again but will squash-merge and ignore the other bit for now.
Under what conditions is it NOT enabled?
tl;dr - always today. the token itself is generated by RP, so it's possible the field in the struct is empty. but that's never the case today anymore afaik.
|
2025-04-01T04:10:11.468266
| 2023-11-27T23:06:58
|
2013281892
|
{
"authors": [
"AlisonB319",
"coveralls"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13452",
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/3855"
}
|
gharchive/pull-request
|
feat: Re-Add CODEOWNERS
A member of Node Sig will be required to merge a PR
What type of PR is this?
What this PR does / why we need it: Re-adds CODEOWNERS so that a member of Node Sig must approve a PR
Which issue(s) this PR fixes:
Fixes #
Requirements:
[ ] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[ ] tested upgrade from previous version
Special notes for your reviewer:
Release note:
none
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 79.036%
Totals
Change from base Build<PHONE_NUMBER>:
0.0%
Covered Lines:
2247
Relevant Lines:
2843
💛 - Coveralls
|
2025-04-01T04:10:11.484695
| 2024-07-24T20:52:00
|
2428450086
|
{
"authors": [
"aadhar-agarwal",
"coveralls",
"r2k1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13453",
"repo": "Azure/AgentBaker",
"url": "https://github.com/Azure/AgentBaker/pull/4698"
}
|
gharchive/pull-request
|
test: re run failed pam tests up to 5 times
What type of PR is this?
/kind test
What this PR does / why we need it:
This PR reruns failed tests in test_pam.py up to 5 times
It utilizes https://github.com/pytest-dev/pytest-rerunfailures
Added pytest-rerunfailures to the requirements.txt file
This PR is needed to reduce flakiness in test_pam.py
We need to scrape the console output from running the bash 'login' command
An alternative means of testing that does not use console scraping has not been found
Which issue(s) this PR fixes:
This PR helps reduce flakiness in the Run VHD Tests step
An example failure - Link
Teams thread in AKS Node Sig - Link
How was this change tested?:
[Test All VHDs] AKS Linux VHD Build - Msft Tenant runs:
https://msazure.visualstudio.com/CloudNativeCompute/_build/results?buildId=98932261&view=results
https://msazure.visualstudio.com/CloudNativeCompute/_build/results?buildId=98938578&view=results
https://msazure.visualstudio.com/CloudNativeCompute/_build/results?buildId=98945865&view=results
Created a Mariner AKS cluster, cloned AgentBaker and ran pytest -v -s --reruns 5 test_pam.py in a loop 100 times. There was no failure
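The rerun semantics pytest-rerunfailures applies — retry a failed test up to N extra times before reporting the failure — can be approximated by a plain Python helper (the function names and the flaky stand-in below are illustrative, not part of the plugin or of test_pam.py):

```python
import time

def run_with_reruns(test_fn, reruns=5, delay=0.0):
    """Invoke test_fn, retrying up to `reruns` extra times on assertion
    failure, mirroring `pytest --reruns 5` behaviour."""
    attempts = 0
    while True:
        try:
            return test_fn()
        except AssertionError:
            attempts += 1
            if attempts > reruns:
                raise  # reruns exhausted: surface the real failure
            time.sleep(delay)

# Stand-in for a console-scraping check that fails its first two attempts.
calls = {"n": 0}
def flaky_login_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("unexpected console characters")
    return "ok"
```

With the plugin itself, the equivalent is simply invoking pytest with `--reruns 5` (optionally `--reruns-delay`), as done in the test runs above.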
Requirements:
[x] uses conventional commit messages
[ ] includes documentation
[ ] adds unit tests
[ ] tested upgrade from previous version
Special notes for your reviewer:
Release note:
none
Pull Request Test Coverage Report for Build<PHONE_NUMBER>1
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 71.021%
Totals
Change from base Build<PHONE_NUMBER>6:
0.0%
Covered Lines:
2566
Relevant Lines:
3613
💛 - Coveralls
Is there a way to address the test flakiness directly in the test?
I'd like to test fail fast on error. But with this change a failure will take 5x longer to surface.
Is there a way to address the test flakiness directly in the test? It just hides the flakiness under the rug.
I'd like to test fail fast on error. But with this change a failure will take 5x longer to surface.
Yeah, I agree with you. This change is not addressing the actual flakiness in the test. The reason the test is flaky is because we need to interact with the console when logging in as a user or changing the password of a user and the console may be unpredictable. We have not found an alternative approach to using the console. We would need to also understand exactly what extra characters are coming from the console in order to make test_pam.py more robust.
I created a work item on our side to further look into this - https://dev.azure.com/mariner-org/mariner/_workitems/edit/8236
The retry should help make sure the vhd build does not fail out and block a vhd release due to this test.
Rebased with master
Re running the pipeline - https://msazure.visualstudio.com/CloudNativeCompute/_build/results?buildId=99061090&view=results
|
2025-04-01T04:10:11.488711
| 2023-09-12T19:36:15
|
1893137272
|
{
"authors": [
"amerjusupovic",
"zhenlan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13454",
"repo": "Azure/AppConfiguration-DotnetProvider",
"url": "https://github.com/Azure/AppConfiguration-DotnetProvider/issues/458"
}
|
gharchive/issue
|
Use time-based retry for startup
Problem
The provider uses count-based retry today. If an App Configuration store experiences transient errors (eg., momentary throttling), the provider may give up too quickly with a couple of retries (may only take a few milliseconds or seconds). This can cause applications to fail to start and blackout.
Proposal
The proposal is to use a time-based retry logic for the startup. We may allow the retry time to be customized, but the provider should have a default value that is reasonably long enough, for example, 1 minute. This way, the provider won't give up too quickly. Note:
This is for startup only. We don't need to do this for refresh. The refresh happens periodically. We don't want a refresh to last too long.
This should include calls to the Key Vault too (for Key Vault reference)
cc @jimmyca15 @drago-draganov @avanigupta
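A minimal sketch of the proposed time-based startup retry — keep retrying until success or until a deadline elapses, rather than after a fixed retry count. The deadline and backoff values are illustrative assumptions, not the provider's actual defaults:

```python
import time

def load_with_deadline(fetch, deadline_seconds=60.0, initial_backoff=0.5):
    """Retry `fetch` until it succeeds or `deadline_seconds` have elapsed,
    instead of giving up after a fixed number of attempts."""
    start = time.monotonic()
    backoff = initial_backoff
    while True:
        try:
            return fetch()
        except Exception:
            elapsed = time.monotonic() - start
            if elapsed >= deadline_seconds:
                raise  # deadline exhausted: let startup fail
            time.sleep(min(backoff, deadline_seconds - elapsed))
            backoff = min(backoff * 2, 10.0)  # capped exponential backoff

# Stand-in for a store experiencing momentary throttling.
attempts = {"n": 0}
def transient_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("momentary throttling")
    return {"Settings": "loaded"}
```

With a deadline of, say, one minute, a few seconds of throttling at startup no longer blacks out the application.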
This should include calls to the Key Vault too (for Key Vault reference)
What should the behavior be in the case where we are accessing a Key Vault for the first time during refresh, not on provider startup? Would we expect to keep track of the first time each key vault is accessed?
Just to make sure I understand the question, why does it not matter if a Key Vault is accessed for the first time during refresh?
I might have misunderstood the issue description, but I read it as applying startup retry logic to all Key Vault reads, not just during startup. So yes, I was wondering if we should treat accessing a Key Vault for the first time (even during refresh) like accessing an App Config store for the first time, but now I think that doesn't sound necessary.
Right, no change to refresh for either App Config or Key Vault.
|
2025-04-01T04:10:11.493016
| 2020-06-23T18:40:42
|
644061356
|
{
"authors": [
"abhilasharora",
"tgindrup-inroll"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13455",
"repo": "Azure/AppConfiguration",
"url": "https://github.com/Azure/AppConfiguration/issues/355"
}
|
gharchive/issue
|
Azure Function with MSI reading KV backed App Configuration
We're developing a v3 Azure Function that uses MSI to authenticate against the App Configuration store using the End Point. Following the sample found here:
https://docs.microsoft.com/en-us/azure/azure-app-configuration/howto-integrate-azure-managed-service-identity?tabs=core3x
We implemented the following:
var configurationBuilder = new ConfigurationBuilder();
_configuration = configurationBuilder.AddAzureAppConfiguration(options =>
{
    string connection = Environment.GetEnvironmentVariable("configurationEndpoint");
    var credentials = new ManagedIdentityCredential();
    options.Connect(new Uri(connection), credentials)
        .ConfigureKeyVault(kv =>
        {
            kv.SetCredential(credentials);
        });
}).Build();
As expected this throws an exception when running locally. But it also fails when running in the Azure portal, on a Function with MSI turned on, where the Function has App Configuration Data Reader permissions on the App Configuration instance, both the App Configuration instance and the Azure Function have access policies to retrieve secrets, and Key Vault Secret User is set under IAM for Key Vault for both.
When executing in the portal we get simply:
Value cannot be null. (Parameter 'connectionString')
I'm sure there's something I'm missing.
TIA
@tgindrup-inroll The code snippet that you shared isn't expected to result in that exception. Could you share the stack trace so we could debug it further? Also, is that the same exception that you get when running locally?
This is what I get when running locally:
Azure.Identity.CredentialUnavailableException: 'ManagedIdentityCredential authentication unavailable, no managed identity endpoint found.'
@abhilasharora thank you for pointing me to the stack trace! The above code worked - my app using the settings didn't. Again thank you!!
|
2025-04-01T04:10:11.494578
| 2018-11-28T01:34:24
|
385061630
|
{
"authors": [
"krishnaPerivilli"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13456",
"repo": "Azure/Azure-MachineLearning-ClientLibrary-Python",
"url": "https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python/issues/45"
}
|
gharchive/issue
|
Kernel Error while loading the Jupyter note book
When I opened Principles of Machine Learning module 5, i.e. Bias–Variance Trade-off, most of the content — images, graphs, and code — is not showing properly, and the top right corner shows a kernel error. I opened it through Firefox on Windows.
Please help me with this!
Thanks,
Krishna
Sorry — I cleared cookies and now it's working, but the images are still not showing.
|
2025-04-01T04:10:11.501174
| 2023-08-10T23:50:25
|
1846027921
|
{
"authors": [
"aka0",
"v-sudkharat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13457",
"repo": "Azure/Azure-Sentinel",
"url": "https://github.com/Azure/Azure-Sentinel/issues/8767"
}
|
gharchive/issue
|
Sentinel Github DataConnector Required Permissions
Describe the bug
The documentation isn't clear on what Github Enterprise Cloud permissions are needed for the PAT. Can you please clarify on the exact permissions needed for the connector to work?
To Reproduce
I granted the PAT with admin:org as per Github (ref #1) documentation and executed the query found in the PowerShell code
https://github.com/Azure/Azure-Sentinel/blob/bc1b032cdfb1d349ee9dea1d1b33a43ad0f1c23b/DataConnectors/GithubFunction/AzureFunctionGitHub/GitHubTimerTrigger/run.ps1#L245
Updated the orgName with my orgName, executed the query but received
{
"errors": [
{
"type": "INSUFFICIENT_SCOPES",
"locations": [
{
"line": 1,
"column": 340
}
],
"message": "Your token has not been granted the required scopes to execute this query. The 'email' field requires one of the following scopes: ['user:email', 'read:user'], but your token has only been granted the: ['admin:org'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens."
}
]
}
Added additional scopes user:email and read:user as per the error message but the request was still denied
{
"data": {
"organization": null
},
"errors": [
{
"type": "FORBIDDEN",
"path": [
"organization",
"auditLog"
],
"locations": [
{
"line": 1,
"column": 51
}
],
"message": "myserviceaccount does not have permission to retrieve auditLog information."
}
]
}
Expected behavior
The above query should return Github audit events in order for the connector to function
Reference
https://github.blog/2019-06-21-the-github-enterprise-audit-log-api-for-graphql-beginners/
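For reference, the shape of the GraphQL request involved can be sketched in Python — the org name and field selection are illustrative, and the request is only built here, not sent:

```python
import json

ORG = "my-org"  # illustrative organization slug

# Minimal auditLog selection, of the same shape as the connector's query.
query = """
query {
  organization(login: "%s") {
    auditLog(first: 100) {
      edges { node { ... on AuditEntry { action createdAt actorLogin } } }
    }
  }
}
""" % ORG

payload = json.dumps({"query": query})
# To send it (not done here), POST `payload` to https://api.github.com/graphql
# with an `Authorization: bearer <PAT>` header. Per this thread, the PAT must
# carry the admin:org scope AND belong to an organization owner, otherwise
# the API returns the FORBIDDEN auditLog error shown above.
```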
Hi @aka0, thanks for flagging this, we will soon get back to you on this.
Hi @aka0, Gentle Reminder: We are waiting for your response on this issue. If you still need to keep this issue active, please respond on it in the next 2 days. If we don't receive response by 23-08-2023 date, we will be close this issue. Thanks!
Hi @v-sudkharat ,
To clarify I am testing against enterprise cloud.
I get the following error message if I only assign admin:org scope to the token and execute the query found on line 245 in the connector code:
{
"errors": [
{
"type": "INSUFFICIENT_SCOPES",
"locations": [
{
"line": 1,
"column": 306
}
],
"message": "Your token has not been granted the required scopes to execute this query. The 'email' field requires one of the following scopes: ['user:email', 'read:user'], but your token has only been granted the: ['admin:org'] scopes. Please modify your token's scopes at: https://github.com/settings/tokens."
}
]
}
If I then add user:email and read:user to the scope then I get the error message found in my first post.
Hi @aka0, as asked before can you please check if you are the owner for the organization? and let us know. Thanks!
Update token to owner. Issue resolved.
|
2025-04-01T04:10:11.504102
| 2020-10-22T16:59:50
|
727564591
|
{
"authors": [
"bganapa",
"leyasa",
"rakku-ms",
"vrajasekaran1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13458",
"repo": "Azure/AzureStack-Tools",
"url": "https://github.com/Azure/AzureStack-Tools/pull/585"
}
|
gharchive/pull-request
|
Update RegisterWithAzure.psm1
Recommend checking for the existence of the RG prior to creating it.
@rakku-ms Could you please review?
@rakku-ms Do we need this change?
@rakku-ms Could you please review this PR again?
Not needed — Get-AzureRmResourceGroup throws an error (which will stop the execution) if the resource group doesn't exist, and New-AzureRmResourceGroup creates a new resource group if one doesn't exist.
Get-AzureRmResourceGroup : 11:05:33 PM - Provided resource group does not exist.
At line:1 char:1
+ Get-AzureRmResourceGroup -Name "test1"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-AzureRmResourceGroup], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.GetAzureResourceGroupCmdlet
closing this old PR
|
2025-04-01T04:10:11.509693
| 2022-12-21T22:15:09
|
1506977537
|
{
"authors": [
"DrDonoso",
"ToksHouston"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13459",
"repo": "Azure/CCOInsights",
"url": "https://github.com/Azure/CCOInsights/issues/299"
}
|
gharchive/issue
|
Unable to authenticate to management API with Tenant Reader
Unable to authenticate to management API with Tenant Reader
Hello, we are not storing the credentials for the CCO Connector, so every time we refresh the dashboard, credentials must be added.
Regarding the second login, you should use organizational account instead of anonymous login.
About the refresh, you should modify privacy settings as referred in the wiki.
|
2025-04-01T04:10:11.518440
| 2024-08-14T10:56:02
|
2465518396
|
{
"authors": [
"vgouth"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13460",
"repo": "Azure/LogicAppsUX",
"url": "https://github.com/Azure/LogicAppsUX/issues/5385"
}
|
gharchive/issue
|
Prevent parent node from being dragged below dependent node
Describe the Bug with repro steps
1. Create a Stateful workflow and open the Designer.
2. Add an HTTP request trigger, add some parallel Compose actions and a Dropbox action, add a Scope, and add a Compose action inside the Scope.
3. Add a Dropbox token into the Compose action inside the Scope to create an action dependent on Compose, which adds an implicit For each loop in the Scope.
4. Add some more Compose actions after step 2.
5. Try to drag the Dropbox action and drop it in between the Compose actions.
Expected: Drag and drop should not be possible in this case with the Dropbox action.
Actual: It is happening; we can drop the Dropbox action after its dependent card.
What type of Logic App Is this happening in?
Standard (Portal)
Are you using new designer or old designer
New Designer
Did you refer to the TSG before filing this issue? https://aka.ms/lauxtsg
Yes
Workflow JSON
{
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {
"Compose": {
"type": "Compose",
"inputs": "abcd",
"runAfter": {}
},
"Compose_1": {
"type": "Compose",
"inputs": "mno",
"runAfter": {}
},
"Compose_2": {
"type": "Compose",
"inputs": "xud",
"runAfter": {}
},
"Compose_3": {
"type": "Compose",
"inputs": "fsgw",
"runAfter": {
"Scope": [
"SUCCEEDED"
]
}
},
"Response": {
"type": "Response",
"kind": "Http",
"inputs": {
"statusCode": 200
},
"runAfter": {
"Compose_4": [
"SUCCEEDED"
]
}
},
"List_files_in_folder": {
"type": "ApiConnection",
"inputs": {
"host": {
"connection": {
"referenceName": "dropbox"
}
},
"method": "get",
"path": "/datasets/default/folders/@{encodeURIComponent(encodeURIComponent('6e202211-2856-4d17-9ded-5beb8b8626b0'))}"
},
"runAfter": {
"Compose_3": [
"SUCCEEDED"
]
},
"metadata": {
"6e202211-2856-4d17-9ded-5beb8b8626b0": "/"
}
},
"Compose_4": {
"type": "Compose",
"inputs": "mnop",
"runAfter": {
"List_files_in_folder": [
"SUCCEEDED"
]
}
},
"Scope": {
"type": "Scope",
"actions": {
"For_each_1": {
"type": "foreach",
"foreach": "@body('List_files_in_folder')",
"actions": {
"Compose_6": {
"type": "Compose",
"inputs": "@items('For_each_1')?['DisplayName']"
}
}
}
},
"runAfter": {
"Compose": [
"SUCCEEDED"
],
"Compose_2": [
"SUCCEEDED"
],
"Compose_1": [
"SUCCEEDED"
]
}
}
},
"contentVersion": "<IP_ADDRESS>",
"outputs": {},
"triggers": {
"When_a_HTTP_request_is_received": {
"type": "Request",
"kind": "Http"
}
}
},
"connectionReferences": {
"dropbox": {
"api": {
"id": "/subscriptions/53ce23c6-ff79-490b-a7fb-9590fe7a414a/providers/Microsoft.Web/locations/centralus/managedApis/dropbox"
},
"connection": {
"id": "/subscriptions/53ce23c6-ff79-490b-a7fb-9590fe7a414a/resourceGroups/LAStnd-v-govubilish/providers/Microsoft.Web/connections/dropbox-2"
},
"connectionName": "dropbox-2",
"authentication": {
"type": "ManagedServiceIdentity"
}
}
},
"parameters": {
"responseString": {
"type": "string",
"value": "helloone tested @ 14/08"
},
"functionAuth": {
"type": "object",
"value": {
"name": "Code",
"type": "QueryString",
"value": "@appsetting('azureFunctionOperation_functionAppKey')"
}
},
"office365RuntimeUrl": {
"type": "string",
"value": "https://781b9753ac25f841.03.common.logic-centralus.azure-apihub.net/apim/office365/a8b20a4ad48c4202822040a21e72d0c5"
},
"serviceConnectionstring": {
"type": "string",
"value": "@appsetting('serviceBus_connectionString')"
}
}
}
Screenshots or Videos
Browser
Edge
Additional context
Version: 2.40813.1.2
Issue does not repro with the latest portal version. Closing this bug as it is not reproducible with the version below.
https://github.com/user-attachments/assets/2b841345-8304-4145-b75f-e41f81b3d327
Version: 2.40826.1.3
|
2025-04-01T04:10:11.523583
| 2023-11-04T09:23:56
|
1977251219
|
{
"authors": [
"genggui001",
"wkcn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13461",
"repo": "Azure/MS-AMP-Examples",
"url": "https://github.com/Azure/MS-AMP-Examples/issues/18"
}
|
gharchive/issue
|
Does ds_config.json in pretrain_13b_megatron_ds.sh need an msamp configuration?
{
"train_micro_batch_size_per_gpu": $BS,
"train_batch_size": $GLOBAL_BATCH_SIZE,
"gradient_clipping": $CLIP_GRAD,
"zero_optimization": {
"stage": $ZERO_STAGE
},
"bf16": {
"enabled": true
},
"steps_per_print": $LOG_INTERVAL
}
Do I need to add msamp configuration?
For example:
"msamp": {
"enabled": true,
"opt_level": "O3"
}
Hi @genggui001 , thanks for your attention to our work!
It is not necessary to add the MS-AMP configuration.
The reason is that the optimizer is LBAdamW when passing the argument --msamp (https://github.com/Azure/MS-AMP-Examples/blob/main/gpt3/Megatron-DeepSpeed.patch#L279), and DeepSpeed will utilize FP8DeepSpeedZeroOptimizer (https://github.com/Azure/MS-AMP/blob/main/msamp/deepspeed/runtime/engine.py#L213).
thanks
|
2025-04-01T04:10:11.534665
| 2023-02-08T16:09:19
|
1576405463
|
{
"authors": [
"MrCirca",
"heoelri",
"johnathon-b"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13462",
"repo": "Azure/Mission-Critical-Online",
"url": "https://github.com/Azure/Mission-Critical-Online/issues/831"
}
|
gharchive/issue
|
Import Sample Data & Test Stamp Endpoints
Hello,
I am trying to run the e2e environment in one region (northeurope only). I ran the pipeline from the main branch and have 2 failures.
Cannot import sample data
Stamp endpoints fail.
`
/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -Command . '/home/vsts/work/_temp/b8eda1b2-2129-4361-ad0e-be3ea344e25f.ps1'
*** SMOKE TESTS ***
*** Testing 1 targets
*** Testing stamp availability using e58e2efad5-cluster.northeurope.cloudapp.azure.com
*** Call - Stamp Health (stamp)
[GET] https://e58e2efad5-cluster.northeurope.cloudapp.azure.com/healthservice/health/stamp
{"lastExecution":"2023-02-08T14:18:58.0569895+00:00","checks":[{"component":"AzMonitorHealthScore","status":"Unhealthy","duration":"00:00:00.1136058"},{"component":"BlobStorage","status":"Unhealthy","duration":"00:00:00.1276586"},{"component":"Database","status":"Unhealthy","duration":"00:00:05.0364669"},{"component":"MessageProducer","status":"Unhealthy","duration":"00:00:00.0207653"}]}
WARNING: *** Request to https://e58e2efad5-cluster.northeurope.cloudapp.azure.com/healthservice/health/stamp failed. Retrying... 1/20
[GET] https://e58e2efad5-cluster.northeurope.cloudapp.azure.com/healthservice/health/stamp
`
Do I need some other manual configuration, or do I need to edit the pipeline?
What branch are you running? Most recent main is failing.
What rights does your SPN have over the subscription?
I ran the main branch. I think the Azure service principal has admin rights.
Which branch is stable?
I'm deploying #809 which is the last one to pass the repo tests.
I'm deploying in INT right now as I was having other issues deploying into INT. I'll let you know my results after my testing.
I ran the e2e w/ that branch and initially had the same issue. I ran it one more time (deleting every resource) and it eventually got past the smoke tests.
Currently sitting at an issue where load balance testing is failing, and I couldn't test Chaos in the region I am in.
I'm unable to deploy INT because of the same error I am seeing in the e2e pipeline around the AzureCLI@2 being pinned to 2.216.0.
--auth-type key isn't working with this version so I had to manually update the CLI for the tasks in e2e that used az storage account. Not sure if this is related but wanted to provide an update on my findings so far.
Any ideas?
I just ran e2e on #809 without chaos and load testing, and it completed fine.
Once it destroys, I am going to try the main branch.
INT is still broken due to an issue I am having with the Az.Storage which I reported here:
https://github.com/Azure/azure-cli/issues/25396
I am seeing some errors in the Upload UI app to Blob task but they don't trigger a pipeline failure and the app is still deployed.
Hi @johnathon-b, Hi @MrCirca, thanks for bringing this up. We're not seeing this issue in our nightly INT runs. Can you share more details of what's happening? The failed import of sample data is expected when the stamp endpoints are not working as expected. There are various potential reasons for that. One could be that cert-manager wasn't able to request certificates for the endpoint. We've documented some of the potential reasons in our troubleshooting guide: https://github.com/Azure/Mission-Critical-Online/blob/main/docs/reference-implementation/Troubleshooting.md
|
2025-04-01T04:10:11.539738
| 2023-10-20T23:11:25
|
1955114602
|
{
"authors": [
"simonkurtz-MSFT"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13463",
"repo": "Azure/aca-dotnet-workshop",
"url": "https://github.com/Azure/aca-dotnet-workshop/pull/115"
}
|
gharchive/pull-request
|
Set-Variables Changes
Set variables.ps1 script to print applied variables
|
2025-04-01T04:10:11.550373
| 2017-04-12T10:47:56
|
221223257
|
{
"authors": [
"ashb",
"colemickens",
"seanknox"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13464",
"repo": "Azure/acs-engine",
"url": "https://github.com/Azure/acs-engine/issues/488"
}
|
gharchive/issue
|
When deploying Kubernetes with custom/external VNet there is no routetable or NSG associated.
Further to the work in #473, I'm attempting to deploy a Kubernetes cluster within a custom IP space (in this case <IP_ADDRESS>/21), and I ran into a niggle.
I created the VNet (<IP_ADDRESS>/21) and a subnet for the VMs (<IP_ADDRESS>/24) using an Azure deployment following examples/vnet/vnetarmtemplate/azuredeploy.kubernetes.json. The acs-engine deployment works, and the master and nodes came up in the expected subnets and IP space.
However, the route table that the acs-engine deployment created was not associated with the VNet, which led to this sort of error appearing in the controller-manager logs.
I0412 09:58:10.508905 1 azure_routes.go:65] create: creating route. clusterName="k8s-ash-k8s" instance="k8s-master-27074931-0" cidr="<IP_ADDRESS>/24"
E0412 09:58:10.604045 1 routecontroller.go:160] Could not create route 7a10b960-1f63-11e7-a202-0022480165cd <IP_ADDRESS>/24 for node k8s-agentpool1-27074931-1 after 95.516103ms: network.SubnetsClient#CreateOrUpdate: Invalid input: autorest/validation: validation failed: parameter=subnetParameters.SubnetPropertiesFormat.IPConfigurations constraint=ReadOnly
value=[]network.IPConfiguration{network.IPConfiguration{ID:(*string)(0xc421b71100), IPConfigurationPropertiesFormat:(*network.IPConfigurationPropertiesFormat)(nil), Name:(*string)(nil), Etag:(*string)(nil)}, network.IPConfiguration{ID:(*string)(0xc421b71110), IPConfigurationPropertiesFormat:(*network.IPConfigurationPropertiesFormat)(nil), Name:(*string)(nil), Et
ag:(*string)(nil)}, network.IPConfiguration{ID:(*string)(0xc421b71120), IPConfigurationPropertiesFormat:(*network.IPConfigurationPropertiesFormat)(nil), Name:(*string)(nil), Etag:(*string)(nil)}, network.IPConfiguration{ID:(*string)(0xc421b71130), IPConfigurationPropertiesFormat:(*network.IPConfigurationPropertiesFormat)(nil), Name:(*string)(nil), Etag:(*string)
(nil)}} details: readonly parameter; must send as nil or empty in request
I get one of these such errors for each master or agent VM in the cluster. As soon as I manually go and associate the route table with the subnet it starts working.
I'm not sure if the bug is that I created a VNet without a route table (as per the example), or whether the acs-engine-generated deployment template should have associated the route table with the subnet for me.
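For reference, the manual association described above can be sketched with the Azure CLI; the resource group, VNet, subnet, and route table names below are illustrative placeholders, not the actual values from this deployment:

```shell
# Associate the acs-engine-generated route table with the existing subnet
# (all names below are hypothetical placeholders).
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name k8s-subnet \
  --route-table k8s-master-routetable
```

Once the subnet references the route table, the cloudprovider's route entries take effect for pod traffic.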
This seems to have been by design in #411 (via commit 857abc2a117efb6b5170b5b96b017b98d32865ec) -- I suspect this is crossed wires/unintended effect from my previous PR and not the desired effect when networkPolicy is not "azure" -- otherwise the cluster that spins up is non-functional.
Is it possible with ARM templates to associate a routeTable with an existing subnet?
Oh, and I also noticed that the NSG is created but not also assigned to the subnet.
The route table point is documented as Step 7 in docs/kubernetes.md but the NSG isn't. Is that a new bug/issue?
The NSG should be associated whenever a Service is deployed I think, can you confirm?
We can't do it in the template because the template can't reach over into the other resource group to configure the NSG on the subnet.
Or maybe we apply the NSG to the NICs in that case instead. I'll confirm shortly unless you do first.
I'll check if deploying a service updates the NSG or not.
In this case it's all created in one resource group. Just not all created in the same deployment.
(Regardless, the ARM template doesn't re-express the vnet/subnet (how could it), so it doesn't have the opportunity to poke the NSG onto the subnet. I think in this case we go out of the way to add it to the NICs specifically...)
Ah the NICs all have the NSG associated with them, so I think that is fine.
Are we accepting that the VNet association has to be done manually for now? If so, to make that easier, it would be nice if that vnet name/id/info was given as an output.
For now yes. We can evaluate getting the cloudprovider to do it in the future, but I removed any of that because it was preventing people from using a vnet in a different resource group. If we end up with a "v2" of the cloudprovider that is more multi-resource-group aware, it could potentially wire things up.
But then it gets more complicated, because we want to get to a point where the SP used by the cloudprovider is locked down to a single RG, so there will be scenarios where it can not auto-update the route table on the other vnet... so it might be better to just have this be an explicit step for deploying into another vnet.
There's other aspects too--- many operators that have an existing vnet would probably not be happy for us to be modifying the routeTable on their subnet, whether it's changing it or just adding entries to it. It's complicated.
The Azure VNET CNI integration makes all of these problems evaporate, as the IPs are all real vnet IPs, and there is no need for route tables whatsoever. Hopefully that stabilizes over time and becomes the default and makes these sorts of things much easier.
Closing due to inactivity/more info needed. Feel free to re-open if needed.
|
2025-04-01T04:10:11.579930
| 2021-08-11T11:32:08
|
966524462
|
{
"authors": [
"DeagleGross",
"RupengLiu",
"springcomp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13465",
"repo": "Azure/azure-api-management-devops-resource-kit",
"url": "https://github.com/Azure/azure-api-management-devops-resource-kit/issues/562"
}
|
gharchive/issue
|
[Creator] Deploying multiple versions fail
When deploying multiple APIs attached to the same apiVersionSet, deployment may fail randomly due to any of the following reasons:
Since no dependencies exist between different versions, deployments happen in parallel.
Failure may happen because: "An API with specified name already exists".
Failure may happen because: "An API with the same path already exists".
To reproduce this case, please consider the following simple config file:
https://gist.github.com/springcomp/cee9eb07720fd3633fbce42a93f49da5
I think both issues can be solved by:
Introducing an explicit dependency of each subsequent version on the main "original" API or on the previous version. This will make sure that the versions are deployed sequentially, independently of one another.
Versions are hosted in two sequential templates, "initial" and "subsequent". It happens that the "initial" template for a version fails because another version of the API has the same path. I have not found a bug in the generated templates. However, one can circumvent this by specifying a dummy path in the "initial" template. Oddly enough, once the "initial" template succeeds, the "subsequent" template will overwrite the path with the correct value.
Triaged, will review and fix it
This issue is closed, and it doesn't have a good explanation of the commits provided. Nothing to do here. Moving to "to be released".
|
2025-04-01T04:10:11.583808
| 2022-10-25T20:57:37
|
1423079564
|
{
"authors": [
"vthiebaut10",
"yonzhan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13466",
"repo": "Azure/azure-cli-extensions",
"url": "https://github.com/Azure/azure-cli-extensions/pull/5485"
}
|
gharchive/pull-request
|
[ssh] Upgrade Version for Release
Fixed small bugs found during manual tests
Upgraded version from 1.1.2 to 1.1.3
This checklist is used to make sure that common guidelines for a pull request are followed.
For new extensions:
[x] My extension description/summary conforms to the Extension Summary Guidelines.
About Extension Publish
There is a pipeline to automatically build, upload and publish extension wheels.
Once your pull request is merged into main branch, a new pull request will be created to update src/index.json automatically.
The precondition is to put your code inside this repository and upgrade the version in the pull request but do not modify src/index.json.
ssh
|
2025-04-01T04:10:11.587696
| 2020-06-20T17:45:55
|
642417536
|
{
"authors": [
"adeptusnull"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13467",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/14041"
}
|
gharchive/issue
|
fresh install of az cli on ubuntu. az vm list gives error
This is autogenerated. Please review and update as needed.
Describe the bug
Command Name
az vm list
Errors:
No module named 'antlr4'
Traceback (most recent call last):
lib/python3/dist-packages/knack/cli.py, ln 206, in invoke
cmd_result = self.invocation.execute(args)
azure/cli/core/commands/__init__.py, ln 528, in execute
self.commands_loader.load_arguments(command)
dist-packages/azure/cli/core/__init__.py, ln 300, in load_arguments
loader.load_arguments(command) # this adds entries to the argument registries
azure/cli/command_modules/vm/__init__.py, ln 31, in load_arguments
from azure.cli.command_modules.vm._params import load_arguments
azure/cli/command_modules/vm/_params.py, ln 31, in <module>
from azure.cli.command_modules.monitor.actions import get_period_type
azure/cli/command_modules/monitor/actions.py, ln 7, in <module>
import antlr4
ModuleNotFoundError: No module named 'antlr4'
To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
Put any pre-requisite steps here...
az vm list
Expected Behavior
Environment Summary
Linux-5.4.0-37-generic-x86_64-with-glibc2.29
Python 3.8.2
Shell: bash
azure-cli 2.0.81
Extensions:
azure-devops 0.17.0
Additional Context
Disregard, looks like the ubuntu package manager installed a really old version of az cli on Ubuntu 20.04 LTS
Used this solution to resolve it.
this solution fixed it https://github.com/Azure/azure-cli/issues/14011
|
2025-04-01T04:10:11.591645
| 2021-03-05T13:59:20
|
823119555
|
{
"authors": [
"fengzhou-msft",
"j-hudgins",
"yungezz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13468",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/17218"
}
|
gharchive/issue
|
How can I update my Azure CLI for my pipeline?
Resource Provider
Azure Cloud
Description of Feature or Work Requested
There was a bug introduced in Azure CLI v2.19.1, but I run off of the Azure Cloud, where according to the documentation az upgrade isn't supported. How can I get access to Azure CLI v2.20.0?
Minimum API Version Required
Swagger Link
Target Date
ASAP- my pipelines have been broken all week as a result.
hi @fengzhou-msft, could you pls help on this?
@j-hudgins what agent do you use, Ubuntu or windows?
Close due to inactivity. Please feel free to open new issues if the latest Azure CLI still does not work for you.
|
2025-04-01T04:10:11.609150
| 2023-05-30T09:01:59
|
1731833067
|
{
"authors": [
"MoChilia",
"fabianfoos",
"shishirdash24"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13469",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/26561"
}
|
gharchive/issue
|
ERROR: The subscription of 'XXXX' doesn't exist in cloud 'AzureCloud'.
Describe the bug
We have an Azure build pipeline in DevOps which uses "az extension add -n ml" to install azure ML V2.
This process was working properly until 2 weeks ago. Suddenly it has started giving the below error:
ERROR: The subscription of 'XXXX' doesn't exist in cloud 'AzureCloud'.
Note: There have been no changes to the subscription id. I am able to see it on our resource group. Quick help with a resolution would be appreciated.
Related command
az extension add -n ml
Errors
ERROR: The subscription of 'XXXX' doesn't exist in cloud 'AzureCloud'.
Issue script & Debug output
logs.txt
Expected behavior
Installation of CLI V2 has to be successfull.
Environment Summary
azure-cli 2.48.1 *
core 2.48.1 *
telemetry 1.0.8
Extensions:
azure-devops 0.26.0
Dependencies:
msal 1.20.0
azure-mgmt-resource 22.0.0
Additional context
N/A
Hi,
Did the same as suggested in a test deployment process, but this time too it failed with the same error.
Hi @shishirdash24! I noticed that you used a service principal to login and the login message showed that you were using a tenant level account.
2023-05-30T08:54:46.9080191Z "name": "N/A(tenant level account)",
That's why it failed in az account set --subscription and reported the error: The subscription of 'XXXX' doesn't exist in cloud 'AzureCloud'.
To solve this issue, please make sure you have assigned the service principal with proper role and scope by using az role assignment.
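As an illustration of that last step, a role assignment at subscription scope might look like the following; the principal id, role, and subscription id here are placeholders, not values from this issue:

```shell
# Grant the service principal a role at subscription scope
# (all ids below are hypothetical placeholders).
az role assignment create \
  --assignee "<service-principal-app-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>"
```

With a subscription-scoped role in place, the principal is no longer a tenant-level-only account and az account set can find the subscription.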
Thank you. Let me check internally if some changes has been made to the account. It was working fine till 8th March.
@shishirdash24, this issue is not related to the Azure CLI version, but to the loss of RBAC permission. Besides, az upgrade currently doesn't work in the Azure DevOps agent, but it will not fail with ERROR: The subscription of 'XXXX' doesn't exist in cloud 'AzureCloud'. This error is caused by az account set.
Is there any news on this? I have the exact same issue. Which roles and scopes do I need to configure for the service principal?
Thank you!
Hi @fabianfoos,
Please follow Assign Azure roles using Azure CLI to select the appropriate role, identify the needed scope, and assign role for the service principal. For more information about role and scope, please refer to Understand Azure role definitions and Understand scope for Azure RBAC.
If you have any further question, you may open a new issue for better tracking.
|
2025-04-01T04:10:11.618051
| 2023-06-27T05:38:28
|
1776128266
|
{
"authors": [
"bebound",
"schaudhari6254888",
"yonzhan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13470",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/issues/26765"
}
|
gharchive/issue
|
[servicebus, eventhubs] Potential bug because of modifying the list during iteration
Describe the bug
In https://github.com/azure/azure-cli/blob/ae4a96ad73e6e6b4cd27b29ade5cbe65e62dadf0/src/azure-cli/azure/cli/command_modules/eventhubs/operations/network_rule_set.py#L64-L67
for i in ip_rule:
for j in ip_rule_list:
if i['ip-address'] == j["ip_mask"]:
ip_rule_list.remove(j)
When j is removed, the iteration over ip_rule_list skips the next element, so the list is not fully iterated. For example:
a = [1,2,3,4]
for i in a:
print(i)
a.remove(i)
>>> 1
>>> 3
There are four parts contains similar logic.
Related PR: #26685, found by modified-iterating-list rule.
c:\users\kk\developer\azure-cli\src\azure-cli\azure\cli\command_modules\eventhubs\operations\network_rule_set.py:67:16: W4701: Iterated list 'ip_rule_list' is being modified inside for loop body, consider iterating through a copy of it instead. (modified-iterating-list)
c:\users\kk\developer\azure-cli\src\azure-cli\azure\cli\command_modules\eventhubs\operations\network_rule_set.py:126:16: W4701: Iterated list 'virtual_network_rule_list' is being modified inside for loop body, consider iterating through a copy of it instead. (modified-iterating-list)
c:\users\kk\developer\azure-cli\src\azure-cli\azure\cli\command_modules\servicebus\operations\network_rule_set.py:64:16: W4701: Iterated list 'ip_rule_list' is being modified inside for loop body, consider iterating through a copy of it instead. (modified-iterating-list)
c:\users\kk\developer\azure-cli\src\azure-cli\azure\cli\command_modules\servicebus\operations\network_rule_set.py:124:16: W4701: Iterated list 'virtual_network_rule_list' is being modified inside for loop body, consider iterating through a copy of it instead. (modified-iterating-list)
Thank you for opening this issue, we will look into it.
The code is introduced by https://github.com/Azure/azure-cli/pull/25792
@schaudhari6254888 Could you please take a look?
The code is introduced by #25792 @schaudhari6254888 Could you please take a look?
Hey @bebound, since the IP address in an IP rule is unique, if it matches, that IP address is removed in the ongoing iteration. Even if the next index is skipped, it doesn't cause harm because the unique IP has already been removed. The next outer iteration starts again from the 0th index of ip_rule_list.
Thanks for your confirmation.
Would you mind if I use a copy when iterating, so I don't need to disable the modified-iterating-list rule?
for j in ip_rule_list[:]:
Thanks for your confirmation. Would you mind if I use a copy when iteration, so I don't need to disable modified-iterating-list lint rule? for j in ip_rule_list[:]:
You can use
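A minimal, runnable sketch of the fix discussed in this thread — iterating over a shallow copy so that removals from the underlying list no longer skip elements (the function name and sample data are illustrative, not the actual CLI code):

```python
def remove_matching(ip_rule, ip_rule_list):
    """Remove every entry of ip_rule_list whose ip_mask matches an
    ip-address in ip_rule. Iterating over a shallow copy
    (ip_rule_list[:]) keeps removal from skipping the element that
    follows each removed one."""
    for i in ip_rule:
        for j in ip_rule_list[:]:  # copy: safe to mutate the original
            if i["ip-address"] == j["ip_mask"]:
                ip_rule_list.remove(j)
    return ip_rule_list


rules = [{"ip_mask": "10.0.0.1"}, {"ip_mask": "10.0.0.2"}, {"ip_mask": "10.0.0.3"}]
to_remove = [{"ip-address": "10.0.0.1"}, {"ip-address": "10.0.0.2"}]
print(remove_matching(to_remove, rules))  # [{'ip_mask': '10.0.0.3'}]
```

The same `[:]` pattern applies to all four flagged loops in the eventhubs and servicebus network_rule_set modules.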
|
2025-04-01T04:10:11.621008
| 2020-10-19T03:18:39
|
724226015
|
{
"authors": [
"houk-ms"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13471",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/15570"
}
|
gharchive/pull-request
|
{Core} Skip positional arguments for failure recommendations
Description
Fix issue #15465.
This PR fixes the positional arguments parsing issue in failure recommendations. The original logic ignored the positional arguments, which would make the CLI crash when running a command with positional arguments if that command failed.
Testing Guide
Previously, the CLI would crash when running az acr build:
...
azure/cli/core/command_recommender.py, ln 162, in recommend_a_command
normalized_parameters = self._normalize_parameters(parameters)
azure/cli/core/command_recommender.py, ln 244, in _normalize_parameters
standard_form = sorted_options[0]
IndexError: list index out of range
Now, it prints the right error messages
RequiredArgumentMissingError: the following arguments are required: --registry/-r, <SOURCE_LOCATION>
Try this: 'az acr build -r MyRegistry .'
Still stuck? Run 'az acr build --help' to view all commands or go to 'https://docs.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest' to learn more
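The idea behind the fix can be sketched as follows — a simplified, hypothetical version of the parameter-normalization step, not the actual command_recommender code:

```python
def normalize_parameters(parameters):
    """Keep only option-style arguments ('-r', '--registry'); skip
    positional tokens such as '.', which previously left no
    '-'-prefixed form to sort and caused the IndexError."""
    normalized = []
    for param in parameters:
        if param.startswith("-"):
            normalized.append(param)
        # positional arguments (no leading '-') are skipped, not crashed on
    return normalized


print(normalize_parameters(["-r", "."]))  # ['-r']
```

With positionals skipped, the recommender can still build a suggestion like `az acr build -r MyRegistry .` without indexing into an empty options list.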
@christopher-o-toole The original logic in failure recommendation seems to not consider the positional arguments. Do you have any plans to use them in Aladdin Service?
|
2025-04-01T04:10:11.625190
| 2021-07-06T19:27:27
|
938194436
|
{
"authors": [
"jiasli",
"major",
"yonzhan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13472",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/18749"
}
|
gharchive/pull-request
|
[Packaging] Add licenses to all Python packages
Description
I'm packaging Azure CLI and its dependencies in Fedora, but one of the requirements is that each package has a license file included. This change adds license files to each Python package and includes it in the manifest for each as well.
Testing Guide
No testing changes needed.
This checklist is used to make sure that common guidelines for a pull request are followed.
[x] The PR title and description has followed the guideline in Submitting Pull Requests.
[x] I adhere to the Command Guidelines.
[x] I adhere to the Error Handling Guidelines.
Packaging
Thank you @major for helping us make Azure CLI better! 🚀
|
2025-04-01T04:10:11.628221
| 2018-03-07T22:46:41
|
303288839
|
{
"authors": [
"mboersma",
"promptws"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:13473",
"repo": "Azure/azure-cli",
"url": "https://github.com/Azure/azure-cli/pull/5751"
}
|
gharchive/pull-request
|
Fix yaml format
PyYAML's default is to choose YAML flow style for non-nested collections, but this is less human-readable and isn't the expected default for Kubernetes config files (although the kubectl tool is fine with both flavors). Let's use YAML block style consistently with az aks get-credentials.
Closes #5638
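The difference described above can be demonstrated with PyYAML directly (assuming PyYAML is installed; the dict here is just an illustrative kubeconfig-like snippet):

```python
import yaml

data = {"clusters": [{"name": "aks-demo"}]}

# Flow style: compact but less readable for kubeconfig-like files.
print(yaml.dump(data, default_flow_style=True), end="")
# {clusters: [{name: aks-demo}]}

# Block style, matching what `az aks get-credentials` writes:
print(yaml.dump(data, default_flow_style=False), end="")
# clusters:
# - name: aks-demo
```

Passing `default_flow_style=False` to `yaml.dump` is what makes the output consistently use block style.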
General Guidelines
[X] The PR has modified HISTORY.rst describing any customer-facing, functional changes. Note that this does not include changes only to help content. (see Modifying change log).
View a preview at https://prompt.ws/r/Azure/azure-cli/5751
This is an experimental preview for @microsoft users.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.