Dataset schema:
id: string (4 to 10 chars)
text: string (4 chars to 2.14M chars)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: timestamp (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
1327470267
Fixed function typos that prevented seeing job errors. Issue #, if available: Description of changes: When a job would fail, instead of getting the EMR failure message, the code would dump:
AttributeError: 'EmrServerlessStartJobOperator' object has no attribute 'job_id'
After fixing that, we get the below:
AttributeError: 'EMRServerless' object has no attribute 'describe_job_run'
The two files were updated and now we can see the actual job error. Example:
airflow.exceptions.AirflowException: EMR Serverless job failed. Final state is FAILED. job_run_id is xxxxxxxx. Error: Job execution failed, please check complete logs in configured logging destination. ExitCode: 1. Last few exceptions: IndentationError: unexpected indent...
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Thanks @sariabod for the contribution! Will get this merged and get a new release out. As an FYI - there is a PR open in the Airflow repository for EMR Serverless support ( https://github.com/apache/airflow/pull/25324 ) that will hopefully be included in the next official provider release. 🙌
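The fix swaps the nonexistent describe_job_run call for the real boto3 emr-serverless method, get_job_run. A rough Python sketch of how a failure message like the one above can be assembled from that response; function names and exact message wording here are illustrative, not the operator's actual code:

```python
TERMINAL_FAILURE_STATES = {"FAILED", "CANCELLED"}

def summarize_job_run(job_run):
    """Build a readable summary from the jobRun dict in a GetJobRun response."""
    state = job_run.get("state", "UNKNOWN")
    job_run_id = job_run.get("jobRunId", "?")
    if state in TERMINAL_FAILURE_STATES:
        details = job_run.get("stateDetails", "")
        return (f"EMR Serverless job failed. Final state is {state}. "
                f"job_run_id is {job_run_id}. Error: {details}")
    return f"EMR Serverless job {job_run_id} is {state}."

def check_job(application_id, job_run_id):
    """Fetch job state with boto3; get_job_run is the real client method."""
    import boto3  # deferred import so summarize_job_run stays dependency-free
    client = boto3.client("emr-serverless")
    response = client.get_job_run(applicationId=application_id, jobRunId=job_run_id)
    return summarize_job_run(response["jobRun"])
```

The summarizer is pure, so it can be unit-tested without AWS credentials; only check_job touches the service.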
gharchive/pull-request
2022-08-03T16:09:16
2025-04-01T04:33:34.577610
{ "authors": [ "dacort", "sariabod" ], "repo": "aws-samples/emr-serverless-samples", "url": "https://github.com/aws-samples/emr-serverless-samples/pull/17", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2009114085
Getting an error related to S3 access for Lambda. While trying to create a project using the custom template, I get the error:
Resource handler returned message: "Your access has been denied by S3, please make sure your request credentials have permission to GetObject for my-bucket/lambda-github-workflow-trigger.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: Lambda, Status Code: 403, Request ID: ##)" (RequestToken: ###, HandlerErrorCode: AccessDenied)
I am unable to figure out which role I should add the S3 GetObject policy to. Hi @moumitaTora, the execution role by default is AmazonSageMakerServiceCatalogProductsLaunchRole. solved
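Granting that permission can be scripted. Below is a hedged Python sketch that attaches an inline s3:GetObject policy to the launch role via boto3's put_role_policy (a real IAM API); the policy name is made up, and the bucket/key are taken from the error message, not from the repo:

```python
import json

def s3_get_object_policy(bucket, key):
    """IAM policy document allowing GetObject on a single object."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/{key}",
        }],
    }

def grant_get_object(role_name, bucket, key):
    import boto3  # deferred so the policy builder has no AWS dependency
    boto3.client("iam").put_role_policy(
        RoleName=role_name,  # e.g. AmazonSageMakerServiceCatalogProductsLaunchRole
        PolicyName="AllowWorkflowTriggerZip",  # illustrative name
        PolicyDocument=json.dumps(s3_get_object_policy(bucket, key)),
    )
```

The policy builder is pure and testable; grant_get_object needs credentials with iam:PutRolePolicy.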
gharchive/issue
2023-11-24T05:16:59
2025-04-01T04:33:34.579746
{ "authors": [ "moumitaTora", "pooyavahidi" ], "repo": "aws-samples/mlops-sagemaker-github-actions", "url": "https://github.com/aws-samples/mlops-sagemaker-github-actions/issues/9", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2073667077
Cannot find asset issue during cdk bootstrap
Hello team! Encountering the below error during the cdk bootstrap step. Not sure if this is a new issue. Help appreciated. Thanks
cd react-ssr-lambda
cd ./cdk
npm install
npm run build
cdk bootstrap - failed here
Error: Cannot find asset at /home/gss/myprojects/react-ssr-lambda/simple-ssr/build
at new AssetStaging (/home/gss/myprojects/react-ssr-lambda/cdk/node_modules/aws-cdk-lib/core/lib/asset-staging.js:1:1402)
at new Asset (/home/gss/myprojects/react-ssr-lambda/cdk/node_modules/aws-cdk-lib/aws-s3-assets/lib/asset.js:1:736)
at Object.bind (/home/gss/myprojects/react-ssr-lambda/cdk/node_modules/aws-cdk-lib/aws-s3-deployment/lib/source.js:1:1185)
at /home/gss/myprojects/react-ssr-lambda/cdk/node_modules/aws-cdk-lib/aws-s3-deployment/lib/bucket-deployment.js:1:3013
at Array.map ()
at new BucketDeployment (/home/gss/myprojects/react-ssr-lambda/cdk/node_modules/aws-cdk-lib/aws-s3-deployment/lib/bucket-deployment.js:1:2994)
at new SsrStack (/home/gss/myprojects/react-ssr-lambda/cdk/lib/srr-stack.ts:44:5)
at Object. (/home/gss/myprojects/react-ssr-lambda/cdk/bin/cdk.ts:13:1)
at Module._compile (node:internal/modules/cjs/loader:1376:14)
at Module.m._compile (/home/gss/myprojects/react-ssr-lambda/cdk/node_modules/ts-node/src/index.ts:1618:23)
Same issue. There are no proper npm commands to build the React app. Also ran into this issue. You can get around it by manually adding the different services yourself, but it's definitely not the intended experience for a demo. Hitting the same problem. @mitch-c-miller no idea how to do that though (creating the different services myself; not even sure what's required). @breskeby the different TS files in cdk/lib are the blueprints to create the necessary resources. For example, this snippet BucketDeployment creates a new S3 Bucket, with the ID "Client-side React app". It uploads build artifacts from ../simple-ssr/build/ in local storage to mySiteBucket, which is defined here.
If you're unfamiliar with AWS terminology, I strongly recommend checking out some tutorials on YouTube to get a rundown on the basics. AWS has a bunch of longstanding product names that are clever and cute when you know what they mean, but they're not useful otherwise (e.g. Route 53 is DNS, since DNS uses port 53). AWS is dense and complicated, but the core concepts will get you very far! This should also be a more useful guide. It's actively maintained by AWS and includes more useful features like Cognito.
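The stack trace points at a missing ../simple-ssr/build directory, i.e. the React app was never built before CDK staged its assets. A minimal pre-flight check, sketched in Python under the assumption that the missing path is the one in the trace:

```python
from pathlib import Path

# Path assumed from the stack trace in this issue; adjust for your checkout.
BUILD_DIR = Path("simple-ssr/build")

def missing_asset_hint(build_dir=BUILD_DIR):
    """Return a remediation hint when the staged asset dir is absent, else None.

    CDK's AssetStaging raises "Cannot find asset at .../simple-ssr/build"
    in exactly the situation where this check would fail.
    """
    if build_dir.is_dir() and any(build_dir.iterdir()):
        return None
    return ("Build the React app first (npm install && npm run build inside "
            f"{build_dir.parent}), then re-run cdk bootstrap from cdk/.")
```

Running a check like this before cdk bootstrap/deploy turns the opaque AssetStaging error into an actionable message.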
gharchive/issue
2024-01-10T06:12:43
2025-04-01T04:33:34.586633
{ "authors": [ "breskeby", "glinisdev", "gsivamani", "mitch-c-miller" ], "repo": "aws-samples/react-ssr-lambda", "url": "https://github.com/aws-samples/react-ssr-lambda/issues/62", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2231905561
Region check for Personalize content generator Description of changes: Since the Personalize content generator feature is only available in a subset of the regions where Personalize is available, this adds a check to skip the automated content generator logic when it is enabled at deployment. Description of testing performed to validate your changes (required if pull request includes CloudFormation or source code changes): By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Looks good. Shall we amend the supported regions in the README with "fully supported" and some other where we can deploy but no GED or no Bedrock? Shall we amend the supported regions in the README with "fully supported" and some other where we can deploy but no GED or no Bedrock? Sure. Made some updates to the supported regions table.
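A region gate like the one described reduces to an allow-list check. The region names below are placeholders, not the demo's actual supported list, which lives in the retail-demo-store templates:

```python
# Illustrative allow-list; the two regions here are assumptions, not the
# project's real supported set for the Personalize content generator.
CONTENT_GENERATOR_REGIONS = {"us-east-1", "us-west-2"}

def content_generator_enabled(region, requested):
    """Skip the automated content generator outside supported regions."""
    return requested and region in CONTENT_GENERATOR_REGIONS
```

The deploy-time flag stays user-controlled; the region check simply vetoes it where the feature is unavailable.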
gharchive/pull-request
2024-04-08T19:05:17
2025-04-01T04:33:34.589208
{ "authors": [ "BastLeblanc", "james-jory" ], "repo": "aws-samples/retail-demo-store", "url": "https://github.com/aws-samples/retail-demo-store/pull/568", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1773558011
AmazonChimeSDKMachineLearning XCFramework not downloading
Package loading has stopped working for me. The URL seems fine for each.
[~]$ curl --head https://amazon-chime-sdk-ios.s3.amazonaws.com/sdk-without-bitcode/0.23.1/spm/AmazonChimeSDK-0.23.1.zip
HTTP/1.1 200 OK
x-amz-id-2: STcLZYvwPedq/eQMxDLWqQtSQzhFPDckMnj6JBols68mTqH6wZXxfNA19HjouvE1qwyQcbFs2uA=
x-amz-request-id: 4ZC0RTYB3HJA8BDD
Date: Sun, 25 Jun 2023 23:15:54 GMT
Last-Modified: Thu, 18 May 2023 23:49:34 GMT
ETag: "9f404e5dd0fb78201e183fd8fc3c4897"
x-amz-server-side-encryption: AES256
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Server: AmazonS3
Content-Length: 2373000
[~]$ curl --head https://amazon-chime-sdk-ios.s3.amazonaws.com/media-without-bitcode/0.18.1/spm/AmazonChimeSDKMedia-0.18.1.zip
HTTP/1.1 200 OK
x-amz-id-2: UUotjWFf9Wqnlms7U4SFxOBHUx0LG2KihVSxAj3FuObr1jI+YE+8yesGXRvfZ5tQYwYzyWU9mhQ=
x-amz-request-id: 0TE2SSF24S21QSWA
Date: Sun, 25 Jun 2023 23:17:01 GMT
Last-Modified: Thu, 18 May 2023 23:48:16 GMT
ETag: "3b229713c3600f81e4c7bf8b732e2046"
x-amz-server-side-encryption: AES256
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Server: AmazonS3
Content-Length: 17515581
[~]$ curl --head https://amazon-chime-sdk-ios.s3.amazonaws.com/machine-learning-without-bitcode/0.2.0/spm/AmazonChimeSDKMachineLearning-0.2.0.zip
HTTP/1.1 200 OK
x-amz-id-2: d763S8IQX7l8FfyfPsT5wn/LCyj2xz5dUgGp0xakDERKclzCJRytPkDsNcV1/tY7RJznYbHIitw=
x-amz-request-id: CBKCMYVXQ0H577WD
Date: Sun, 25 Jun 2023 23:17:45 GMT
Last-Modified: Fri, 23 Jun 2023 22:18:46 GMT
ETag: "09442f479f80b58dd0ee8a483372ffb4"
x-amz-server-side-encryption: AES256
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Server: AmazonS3
Content-Length: 106170
The content length for AmazonChimeSDKMachineLearning seems small. And when I download directly I get this. This is reflected in Xcode where package loading breaks. CCing you @georgezy-amzn to get your attention :) Today we started to get an issue with our CI when fetching AMSChime SDK...
It seems suspiciously related to this. Thank you @oliverfoggin & @notapplicableio. We are working to resolve the error with the release. We will update you here when resolved. Hi all, sorry for the inconvenience. We have recovered the binary and verified it. Could you also verify? LGTM. Can you add a section to the readme for using SPM? It's a bit obscure/hidden at the moment. The more people that use SPM, the sooner we can be alerted to problems. :) Thanks for fixing.
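The diagnosis above boils down to a HEAD request whose Content-Length is implausibly small (~106 KB for the corrupted MachineLearning zip versus ~2.3 MB for the healthy SDK zip). A small Python sketch of that check; the 1 MB threshold is an arbitrary assumption:

```python
from urllib.request import Request, urlopen

def head_content_length(url):
    """Fetch Content-Length via a HEAD request (mirrors `curl --head`)."""
    with urlopen(Request(url, method="HEAD")) as resp:
        return int(resp.headers.get("Content-Length", 0))

def looks_truncated(content_length, min_bytes=1_000_000):
    """Flag artifacts far smaller than any plausible XCFramework zip."""
    return content_length < min_bytes
```

A CI job could run this against each artifact URL and fail fast instead of breaking at Xcode package resolution.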
gharchive/issue
2023-06-25T23:22:10
2025-04-01T04:33:34.598365
{ "authors": [ "georgezy-amzn", "notapplicableio", "oliverfoggin", "yochum" ], "repo": "aws/amazon-chime-sdk-ios-spm", "url": "https://github.com/aws/amazon-chime-sdk-ios-spm/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1560274329
Generalize AWS service API calls
Description of the issue: Create a general interface for each of the services (e.g. ec2, ecs, ...) and share the same resources (e.g. config, context). Moreover, return an error for all API calls and let the test case decide the outcome of the API call.
Description of changes:
Create an AWS Config structure that shares the same resources (e.g. ctx, config) for other API calls.
Add a DynamoDB API call to the AWS Config structure.
Instead of validating in the service call itself, all APIs will return errors (e.g. the CWL API).
Delete some functions, since the existing ones cover enough use cases (e.g. SendItem - checking if the package exists and adding the item; however, UpdateItem already does that).
License: By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Tests: By running the changes on my fork, here is the result of it: https://github.com/khanhntd/amazon-cloudwatch-agent/actions/runs/4036724893
So what's the final vote on whether we push this or not? I see two against and one on-the-fence. I'm on-the-fence as well. It does clean up the code, but it's going to be hard to maintain abstractions over time (especially if someone introduces new clients or usages, they would need to know that some layer of abstraction already exists, or at the least the reviewers need to know to point it out).
So what's the final vote on whether we push this or not? I see two against and one on-the-fence. I'm on-the-fence as well.
The original change was using the interface. That's why we have two against using the interface. However, I have changed back to the original way we are doing it.
It does clean up the code, but it's going to be hard to maintain abstractions over time (especially if someone introduces new clients or usages, they would need to know that some layer of abstraction already exists, or at the least the reviewers need to know to point it out).
That's why the source-code editor always has a hint to reduce the abstraction as much as possible:
func IsMetricSampleCountWithinBound(metricName string, namespace string, dimensions []types.Dimension, startTime time.Time, endTime time.Time, lowerBoundInclusive int, upperBoundInclusive int, periodInSeconds int32) bool
IsMetricSampleCountWithinBound checks if a certain metric's sample count is within the predefined bound interval.
However, yes, there is no guarantee of that. That's why I introduced the interface to make it simpler, but it does not bring much value over what we currently have.
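The repo is Go, but the described pattern, one shared config object plus helpers that return (result, error) so the test case decides the outcome, can be sketched in Python like this. All names are illustrative, not the repo's actual API:

```python
from dataclasses import dataclass

@dataclass
class AWSTestConfig:
    """Shared resources that every service helper reuses (cf. ctx/config in Go)."""
    region: str
    logs_client: object  # injected so tests can stub the real boto3 client

def is_log_group_present(cfg, name):
    """Return (found, error); no validation happens inside the helper."""
    try:
        resp = cfg.logs_client.describe_log_groups(logGroupNamePrefix=name)
    except Exception as err:  # the test case decides how to treat failures
        return False, err
    found = any(g["logGroupName"] == name for g in resp.get("logGroups", []))
    return found, None

class StubLogs:
    """Demo stand-in; production code would inject a real CloudWatch Logs client."""
    def __init__(self, groups, fail=False):
        self.groups, self.fail = groups, fail
    def describe_log_groups(self, logGroupNamePrefix):
        if self.fail:
            raise RuntimeError("simulated API failure")
        return {"logGroups": [{"logGroupName": g} for g in self.groups]}
```

Because errors are returned rather than asserted inside the helper, one test can treat a failed API call as fatal while another treats it as the expected outcome.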
gharchive/pull-request
2023-01-27T19:12:02
2025-04-01T04:33:34.604798
{ "authors": [ "khanhntd", "sky333999" ], "repo": "aws/amazon-cloudwatch-agent-test", "url": "https://github.com/aws/amazon-cloudwatch-agent-test/pull/103", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1050361102
Add architecture diagram to documentation
Adding an architecture diagram to the CONTRIBUTING.md
Codecov Report
Merging #72 (c18f6a8) into main (e7a9357) will not change coverage. The diff coverage is n/a.
@@ Coverage Diff @@
##             main      #72   +/- ##
=======================================
Coverage   61.44%   61.44%
=======================================
Files          13       13
Lines        1149     1149
=======================================
Hits          706      706
Misses        403      403
Partials       40       40
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e7a9357...c18f6a8. Read the comment docs.
gharchive/pull-request
2021-11-10T22:12:40
2025-04-01T04:33:34.778517
{ "authors": [ "codecov-commenter", "vanekjar" ], "repo": "aws/aws-cloud-map-mcs-controller-for-k8s", "url": "https://github.com/aws/aws-cloud-map-mcs-controller-for-k8s/pull/72", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
741121987
Remove URLDecodedKey from S3Object
Issue #, if available: Fixes https://github.com/aws/aws-lambda-go/issues/82
Description of changes: Remove the URLDecodedKey property from the S3Object struct, as it is empty and was never populated.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Codecov Report
Merging #335 (e6ca184) into master (f24acb2) will not change coverage. The diff coverage is n/a.
@@ Coverage Diff @@
##           master     #335   +/- ##
=======================================
Coverage   72.46%   72.46%
=======================================
Files          18       18
Lines         730      730
=======================================
Hits          529      529
Misses        136      136
Partials       65       65
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f24acb2...e6ca184. Read the comment docs.
Instead of removing it, maybe we can fix it with a custom deserializer.
Happy to do that, but shouldn't it be a method computed on demand then, or should it be computed every time even if not needed? For backwards compatibility the latter seems better, but it feels kind of wasteful. Preferences?
We could benchmark it, but my guess is that the performance impact of keeping it would be very small. I would personally prefer for the package to keep to the Go backwards compatibility promise as much as is possible / reasonable.
Updated to always populate the property instead of removing it.
I think that we should also change the json tag, to help signal to others that urlDecodedKey is not a part of the event model:
type S3Object struct {
    ...
    URLDecodedKey string `json:"-"` // populated by custom deserializer
    ...
}
I was also thinking about that, but discarded the idea as that also seems like a breaking change. If someone serialises the current object to json, this would modify the shape of the json.
We can't guarantee full compatibility for consumers of the serialized version of the object; arguably it breaks either way.
if the tag stays, the consumer sees {"urlDecodedKey": ""} -> {"urlDecodedKey": "some value"}
if the field is ignored, the consumer sees {"urlDecodedKey": ""} -> {}
CC @carlzogh what do you think? Have we run into this situation in the java project recently? My preference right now is to take breaking changes in the type if we would otherwise be documenting something false. Were we code-generating these, and the schema updated, I think we'd default into that behavior.
I was also thinking about that, but discarded the idea as that also seems like a breaking change. If someone serialises the current object to json, this would modify the shape of the json.
Unfortunate but probably true. I wonder if we should consider keeping a list of "stuff we should consider fixing in a MV 2"? Maybe create an issue with some kind of "mv2" tag?
@harrisonhjones I've been using the requires-v2 tag for this.
The change here brings this library to parity with the Java events library (ref. urlDecodedKey) and I think this is the right approach. From a consumer's perspective, I don't see this as a very risky change, as a property that was previously empty now holds a value. We're not changing the JSON contract - were we to remove the field, I'd see that as a breaking change and would probably think twice about it. I can't think of consumer use-cases that would break as a result of us merging this in, aside from the fact that we're not differentiating between service event models and fields that aws-lambda-go adds in to these events (eg. urlDecodedKey). This being a pre-existing issue that is not made worse by this PR, I wouldn't hold up merging this in, but I still think it's worth trying to figure out how we can better draw this line - potentially in v2.
Agree on not holding up this change, since it doesn't make the existing problem worse :) I'll move my thoughts on https://github.com/aws/aws-lambda-go/pull/335#issuecomment-736015096 to a separate issue to track.
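For reference, the custom deserializer's job reduces to URL-decoding the raw Key. A Python sketch of the equivalent decoding (the Go library is what actually ships this; stdlib unquote_plus matches how S3 event keys encode spaces as '+'):

```python
from urllib.parse import unquote_plus

def url_decoded_key(key):
    """S3 event notification keys arrive URL-encoded; decode for display/use."""
    return unquote_plus(key)
```

This is the per-record transform that the "computed every time" option above would run at deserialization.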
gharchive/pull-request
2020-11-11T22:45:51
2025-04-01T04:33:34.796217
{ "authors": [ "bmoffatt", "carlzogh", "codecov-io", "harrisonhjones", "johanneswuerbach" ], "repo": "aws/aws-lambda-go", "url": "https://github.com/aws/aws-lambda-go/pull/335", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
734173115
docs: add FormField.ControlId Issue: FormFieldProps.ControlId is required, but not in the documentation code. I get the following error when I copy the code: Description of changes: Add Form Field By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. :tada: This PR is included in version 1.0.12 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2020-11-02T04:39:36
2025-04-01T04:33:34.807030
{ "authors": [ "cogwirrel", "howyi" ], "repo": "aws/aws-northstar", "url": "https://github.com/aws/aws-northstar/pull/32", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1712176395
Access Violation aws-cpp-sdk-core.dll
Describe the bug: Attempting to run a simple exe using the aws--sdk. Got a project in Visual Studio with .libs referenced in the linker and the DLL present in the exe folder. The build is for x86 machines. An exception is being thrown when attempting to initialize the API linked to aws-cpp-sdk-core.dll.
Expected Behavior: Successful build given minimal code and linked dependencies.
Current Behavior: Unhandled exception at 0x607FEF92 (aws-cpp-sdk-core.dll) in s3_store.exe: 0xC0000005: Access violation reading location 0xFFFFFFFF
Reproduction Steps:
#include
#include <aws/core/Aws.h>
#include <aws/s3/S3Client.h>
int main(int argc, char* argv[]) {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
    }
    Aws::ShutdownAPI(options);
    return 0;
}
Possible Solution: No response
Additional Information/Context: No response
AWS CPP SDK version used: 1.11.65
Compiler and Version used: MSVC
Operating System and version: Windows 10 64bit
How did you install the c++ sdk? Did you possibly install it somewhere where you don't have permissions? Please include the steps you used to build this sdk.
How did you install the c++ sdk? Did you possibly install it somewhere where you don't have permissions? Please include the steps you used to build this sdk.
Thank you for your response. I used the vcpkg package manager and installed the files in the default Windows x86 location on the PC's C:\ Drive. I then copied the files to the default packages directory of my VS project and linked from there.
Can you try compiling this sdk from the source rather than using vcpkg? You can do this with the following:
cmake .. -DBUILD_ONLY="s3crt" -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="<path-to-install-sdk>" -DENABLE_TESTING=OFF
cmake --build . --config=Debug
cmake --install . --config=Debug
If you are still getting this error can you post your call stack?
gharchive/issue
2023-05-16T14:43:39
2025-04-01T04:33:34.823161
{ "authors": [ "jmklix", "kronus-lx" ], "repo": "aws/aws-sdk-cpp", "url": "https://github.com/aws/aws-sdk-cpp/issues/2493", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
148806048
Support for conditional writes in DynamoDBContext
If that support is there, I can't find it. It's really the only thing keeping me from using it across the board. As it stands, I have multiple cases where I use AmazonDynamoDBClient.PutItem to create items, and the DynamoDBContext for everything else.
Out of the box the DynamoDBContext provides support for conditional writes only through the DynamoDBVersion attribute. Here's the developer guide with more information, though the short of it is that you designate a specific field on your POCO and this is used to make sure that you are the only one updating that specific item. If you need to specify other conditions, you'll need to use either the client (as you're doing now), or the Table object. Below is an example of putting an item with a condition, while still mostly working with POCOs (in this case, the class is Product).
var table = Context.GetTargetTable<Product>();
var document = Context.ToDocument(product);
Expression expression = new Expression
{
    ExpressionStatement = "attribute_not_exists(referencecounter) or referencecounter = :cond1",
    ExpressionAttributeValues = new Dictionary<string, DynamoDBEntry>
    {
        {":cond1", 0}
    }
};
PutItemOperationConfig config = new PutItemOperationConfig
{
    ConditionalExpression = expression
};
table.PutItem(document, config);
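The same conditional put can be expressed through any SDK. As a cross-check, here is a hedged Python sketch that builds the equivalent boto3 put_item arguments; ConditionExpression and ExpressionAttributeValues are real boto3 parameters, while the helper name and table/item values are illustrative:

```python
def conditional_put_kwargs(table_name, item, counter_attr="referencecounter"):
    """Arguments for boto3 dynamodb.put_item mirroring the C# condition above.

    The put succeeds only if the item is new or its counter is still 0.
    """
    return {
        "TableName": table_name,
        "Item": item,
        "ConditionExpression": (
            f"attribute_not_exists({counter_attr}) OR {counter_attr} = :cond1"
        ),
        "ExpressionAttributeValues": {":cond1": {"N": "0"}},
    }

# Usage (requires AWS credentials; shown for context only):
# import boto3
# boto3.client("dynamodb").put_item(**conditional_put_kwargs(
#     "Products", {"id": {"S": "p-1"}, "referencecounter": {"N": "0"}}))
```

The kwargs builder is pure, so the condition text can be unit-tested without DynamoDB.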
gharchive/issue
2016-04-16T01:59:18
2025-04-01T04:33:34.922357
{ "authors": [ "PavelSafronov", "bslatner" ], "repo": "aws/aws-sdk-net", "url": "https://github.com/aws/aws-sdk-net/issues/338", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
470052419
AWS Toolkit is super-slow When I first open VS2017 (v15.9.6) it takes a huge amount of time. As you can see from the picture, the average time for loading the AWS extension is 50 seconds. When I am lucky, it can be as fast as 20 seconds. As a result, I had to totally uninstall the extension. Expected Behavior: When an extension is going to load, it should be lightning fast and defer any activity, because cloud development is not the center of life. I am also facing this issue. It takes almost 40 seconds to load. I have Visual Studio 2019 with Windows 10 + 8 GB RAM. Any update? @M-Imtiaz what version of the toolkit are you using exactly? @justinmk3 I installed "AWS Toolkit for Visual Studio 2017 and 2019" When did you install it? The current version is 1.17.1.0, see https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.AWSToolkitforVisualStudio2017 @justinmk3 Sorry for my last comment. I didn't mention the version. Actually I am using the latest version of the AWS toolkit. But still, I am facing slow performance. Here is the snapshot from the Visual Studio alert. @M-Imtiaz we've made some performance improvements recently to the toolkit startup time but have more progress to go. If you don't mind sharing/attaching your toolkit log, I'd be interested to see if it offers up some clues on the performance you're seeing. You can find it at %localappdata%\AWSToolkit\logs\visualstudio\log.txt I have attached the log file. I hope it will help. log.txt Thank you for the log @M-Imtiaz. It looks like for the most part, the toolkit is activating in about 2-3 seconds based on the logs. There were two times the startup took longer, which I've created separate issues (see above) to be investigated separately. Out of curiosity, are you loading directly into your solution, or are you opening Visual Studio first and then opening a solution? How many projects does the solution have? @awschristou I am loading directly into the solution.
There are almost 34 projects within the solution. I checked without loading the solution, and this time Visual Studio loads fast, but when I open the solution, it gives me the same alert again. Thanks for helping locate some potential problem areas by sharing your use cases @M-Imtiaz - I have created another issue around solution loading.
gharchive/issue
2019-02-12T09:15:48
2025-04-01T04:33:34.957584
{ "authors": [ "M-Imtiaz", "awschristou", "justinmk3", "raffaeler" ], "repo": "aws/aws-toolkit-visual-studio", "url": "https://github.com/aws/aws-toolkit-visual-studio/issues/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
257688676
SQL Usage The SQL wrapper is really great, especially when used with a normal web app. When you are doing a job that is not based on a http request, the usage becomes a little more complex. The readme is slightly incorrect, it should have a context object passed into the query. func main() { db := xray.SQL("postgres", "postgres://user:password@host:port/db") row, _ := db.QueryRow(ctx, "SELECT 1") // Use as normal } This calls capture, which is a new subsegment, but that assumes you have an existing segment. It will panic. I found this when doing testing, that often you wont be using the xray handler wrapper, so the context will not be created. Could we change the approach to check for the existence of a segment and create one if there isn't one present? @DaveBlooman , thank you for pointing out the incorrect readme session and we gonna fix it by next release. For SQL methods, we don't support create one if no segment present for now. Customer needs to create a segment or using Handler function to create one and pass the context to SQL method. If we support "create one if not present" case, it gonna be some problems i,e, what we gonna name the segment?. What's more, it may mess up when propagate to other methods. Maybe in the future we can add one feature in ContextMissingStrategy to support "create one if not present", however, customer needs to provide a segment for using SQL methods so far. func main() { db := xray.SQL("postgres", "postgres://user:password@host:port/db") row, _ := db.QueryRow("SELECT 1") // Use as normal } The readme session is still incorrect
gharchive/issue
2017-09-14T11:40:25
2025-04-01T04:33:34.989077
{ "authors": [ "DaveBlooman", "ansyser", "luluzhao" ], "repo": "aws/aws-xray-sdk-go", "url": "https://github.com/aws/aws-xray-sdk-go/issues/19", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1384263633
Support multiple targets + redhat for local raw builds Description of changes: Support multiple targets for raw builds. Add support for redhat image building for baremetal. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /test imagebuilder-presubmit /approve /lgtm /approve
gharchive/pull-request
2022-09-23T20:21:58
2025-04-01T04:33:35.057876
{ "authors": [ "abhay-krishna", "gwesterfieldjr", "vignesh-goutham" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1335", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1462295362
Support configuring bottlerocket admin container image option Issue #, if available: #4050 Description of changes: Patch from https://github.com/abhay-krishna/cluster-api/pull/3 By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /lgtm /approve
gharchive/pull-request
2022-11-23T19:26:02
2025-04-01T04:33:35.060085
{ "authors": [ "abhay-krishna", "jiayiwang7" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/1607", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1239432081
Update EKSD_LATEST_RELEASES Issue #, if available: Description of changes: By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /approve /lgtm /cherry-pick release-0.9
gharchive/pull-request
2022-05-18T05:02:44
2025-04-01T04:33:35.061697
{ "authors": [ "jaxesn", "kschumy" ], "repo": "aws/eks-anywhere-build-tooling", "url": "https://github.com/aws/eks-anywhere-build-tooling/pull/821", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2531571463
Update curated packages prod to latest version Description of changes: This PR updates curated packages prod bundles to latest version. By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /lgtm
gharchive/pull-request
2024-09-17T16:19:09
2025-04-01T04:33:35.062876
{ "authors": [ "jhaanvi5", "vivek-koppuru" ], "repo": "aws/eks-anywhere-packages", "url": "https://github.com/aws/eks-anywhere-packages/pull/1162", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2565028107
sets linuxkit repo for hook arm presubmit Issue #, if available: Description of changes: By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /approve /lgtm
gharchive/pull-request
2024-10-03T21:25:07
2025-04-01T04:33:35.064343
{ "authors": [ "g-gaston", "jaxesn" ], "repo": "aws/eks-anywhere-prow-jobs", "url": "https://github.com/aws/eks-anywhere-prow-jobs/pull/427", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1470095062
[WIP] allowing setting of insecure flag thru CI/e2e tests Issue #, if available: Description of changes: Testing (if applicable): By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /ok-to-test /labels area/providers/nutanix
gharchive/pull-request
2022-11-30T19:04:54
2025-04-01T04:33:35.066032
{ "authors": [ "abhinavmpandey08", "deepakm-ntnx" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/4240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1824708613
Fixing emissary tests Issue #, if available: Description of changes: Removing/changing tests to use images not from Dockerhub. /lgtm
gharchive/pull-request
2023-07-27T16:07:21
2025-04-01T04:33:35.067127
{ "authors": [ "cxbrowne1207", "jonahjon" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/6307", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2164257418
Revert "[main] Revert "Disable ETCD Learner Mode (#7719)"" Reverts aws/eks-anywhere#7728 /retest
gharchive/pull-request
2024-03-01T22:03:00
2025-04-01T04:33:35.067913
{ "authors": [ "abhinavmpandey08" ], "repo": "aws/eks-anywhere", "url": "https://github.com/aws/eks-anywhere/pull/7767", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1217020067
Bump buildkit version to v0.10.1 in builder-base Keeping buildkit version aligned with Prowjobs - refer to aws/eks-distro-prow-jobs#309 By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. /lgtm /hold /approve Just rebased /unhold Just rebased /unhold /override builder-base-tooling-presubmit /override builder-base-tooling-presubmit-2022
gharchive/pull-request
2022-04-27T08:46:55
2025-04-01T04:33:35.070583
{ "authors": [ "abhay-krishna" ], "repo": "aws/eks-distro-build-tooling", "url": "https://github.com/aws/eks-distro-build-tooling/pull/381", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2218945001
Java: Add XADD command (Stream commands) Issue #, if available: Description of changes: Add xadd command
Example:

// without options
String streamId = client.xadd("key", Map.of("name", "Sara", "surname", "OConnor")).get();
System.out.println("Stream: " + streamId);

// with Options
// Option to use the existing stream, or return null if the stream doesn't already exist at "key"
StreamAddOptions options = StreamAddOptions.builder().id("sid").makeStream(Boolean.FALSE).build();
String streamId = client.xadd("key", Map.of("name", "Sara", "surname", "OConnor"), options).get();
if (streamId != null) {
    assert streamId.equals("sid");
}

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Please, update IT for type: https://github.com/aws/glide-for-redis/blob/961a1bb91bda110f7ea4865c37809aebfdd5b970/java/integTest/src/test/java/glide/SharedCommandTests.java#L1192-L1193 @acarbonetto please resolve conflicts
gharchive/pull-request
2024-04-01T19:46:54
2025-04-01T04:33:35.073801
{ "authors": [ "Yury-Fridlyand", "acarbonetto" ], "repo": "aws/glide-for-redis", "url": "https://github.com/aws/glide-for-redis/pull/1209", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1935848604
docs: add Karpenter NoSchedule taint design Fixes #N/A Description As described in the design, there are some issues with Karpenter's use of the node.kubernetes.io/unschedulable taint. This proposes changing the taint to a karpenter specific taint. How was this change tested? N/A By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Pull Request Test Coverage Report for Build 6472572428
0 of 0 changed or added relevant lines in 0 files are covered.
6 unchanged lines in 3 files lost coverage.
Overall coverage decreased (-0.06%) to 81.814%
Files with Coverage Reduction (New Missed Lines, %):
pkg/controllers/provisioning/scheduling/topology.go 2 86.49%
pkg/controllers/provisioning/scheduling/topologygroup.go 2 96.75%
pkg/test/cachesyncingclient.go 2 80.21%
Totals Change from base Build 6466099147: -0.06% Covered Lines: 8930 Relevant Lines: 10915
💛 - Coveralls
Pull Request Test Coverage Report for Build 6500774081
0 of 0 changed or added relevant lines in 0 files are covered.
4 unchanged lines in 2 files lost coverage.
Overall coverage decreased (-0.04%) to 82.159%
Files with Coverage Reduction (New Missed Lines, %):
pkg/controllers/provisioning/scheduling/topology.go 2 86.49%
pkg/controllers/provisioning/scheduling/topologygroup.go 2 96.75%
Totals Change from base Build 6500425875: -0.04% Covered Lines: 9035 Relevant Lines: 10997
💛 - Coveralls
gharchive/pull-request
2023-10-10T17:10:20
2025-04-01T04:33:35.094609
{ "authors": [ "coveralls", "njtran" ], "repo": "aws/karpenter-core", "url": "https://github.com/aws/karpenter-core/pull/585", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1607389001
docs: multiple improvements and fixes to migrate from CAS tutorial Fixes #3517 Description This PR contains fixes and improvements to the Migrating from Cluster Autoscaler documentation page. How was this change tested? This was reproduced during real world migration from CAS on a production environment. Does this change impact docs? [X] Yes, PR includes docs updates [ ] Yes, issue opened: # [ ] No Release Note NONE By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I've reverted the changes, and made modifications just to the content at the preview directory. Please let me know your thoughts. Best regards! I've fixed my commits to be properly signed. =)
gharchive/pull-request
2023-03-02T19:43:54
2025-04-01T04:33:35.099087
{ "authors": [ "davivcgarcia" ], "repo": "aws/karpenter", "url": "https://github.com/aws/karpenter/pull/3518", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
345582964
internet gateway internet gateway internet gateway NAT Instance
gharchive/pull-request
2018-07-30T01:34:25
2025-04-01T04:33:35.143871
{ "authors": [ "tudulius" ], "repo": "awskrug/awskrug-enterprise-workshop-2018", "url": "https://github.com/awskrug/awskrug-enterprise-workshop-2018/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
935774943
error with the amazon-k8s-cni container after the update
What happened: perhaps this is a coincidence, but after updating from ami version amazon-eks-node-1.19-v20210322 to version amazon-eks-node-1.19-v20210628 (ami-0c2ca9cd067f101bc) in region eu-west-3, after some time, an error with container amazon-k8s-cni:v1.7.5-eksbuild.1 appeared on one of the 9 nodes:

Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m4s default-scheduler Successfully assigned kube-system/aws-node-n9dmj to ip-10-10-10-10.eu-west-3.compute.internal
Normal Pulled 3m3s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.7.5-eksbuild.1" in 127.941902ms
Normal Pulling 3m3s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.7.5-eksbuild.1"
Normal Pulled 3m3s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni-init:v1.7.5-eksbuild.1" in 146.91813ms
Normal Created 3m3s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Created container aws-vpc-cni-init
Normal Started 3m3s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Started container aws-vpc-cni-init
Normal Started 3m2s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Started container aws-node
Warning Unhealthy 2m56s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:20:59.792Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 2m46s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:09.775Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 2m36s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:19.771Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 2m26s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:29.762Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 2m16s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:39.770Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 2m6s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:49.762Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 116s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:21:59.769Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 112s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Liveness probe failed: {"level":"info","ts":"2021-07-02T09:22:03.775Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Warning Unhealthy 106s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Readiness probe failed: {"level":"info","ts":"2021-07-02T09:22:09.766Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Normal Killing 92s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Container aws-node failed liveness probe, will be restarted
Warning Unhealthy 86s (x4 over 102s) kubelet, ip-10-10-10-10.eu-west-3.compute.internal (combined from similar events): Readiness probe failed: {"level":"info","ts":"2021-07-02T09:22:29.784Z","caller":"/usr/local/go/src/runtime/proc.go:203","msg":"timeout: failed to connect service \":50051\" within 1s"}
Normal Pulling 82s (x2 over 3m3s) kubelet, ip-10-10-10-10.eu-west-3.compute.internal Pulling image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.7.5-eksbuild.1"
Normal Created 81s (x2 over 3m3s) kubelet, ip-10-10-10-10.eu-west-3.compute.internal Created container aws-node
Normal Pulled 81s kubelet, ip-10-10-10-10.eu-west-3.compute.internal Successfully pulled image "602401143452.dkr.ecr.eu-west-3.amazonaws.com/amazon-k8s-cni:v1.7.5-eksbuild.1" in 504.321983ms

logs:
{"level":"info","ts":"2021-07-02T09:14:11.287Z","caller":"entrypoint.sh","msg":"Install CNI binary.."}
{"level":"info","ts":"2021-07-02T09:14:11.305Z","caller":"entrypoint.sh","msg":"Starting IPAM daemon in the background ... "}
{"level":"info","ts":"2021-07-02T09:14:11.307Z","caller":"entrypoint.sh","msg":"Checking for IPAM connectivity ... "}

restarting and recreating the container didn't help; the problem was solved only by deleting the node.
How to reproduce it (as minimally and precisely as possible): update the ami to the version amazon-eks-node-1.19-v20210628 (ami-0c2ca9cd067f101bc)
Anything else we need to know?: previously, there were no such errors, so I decided to create a ticket
Environment:
AWS Region: eu-west-3
Instance Type(s): t3a.2xlarge
EKS Platform version (use aws eks describe-cluster --name <name> --query cluster.platformVersion): eks.4
Kubernetes version (use aws eks describe-cluster --name <name> --query cluster.version): 1.19
AMI Version: amazon-eks-node-1.19-v20210628 (ami-0c2ca9cd067f101bc)
Kernel (e.g. uname -a): Linux ip-10-10-10-10.eu-west-3.compute.internal 5.4.117-58.216.amzn2.x86_64 #1 SMP Tue May 11 20:50:07 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Release information (run cat /etc/eks/release on a node):
BASE_AMI_ID="ami-0136f7c838fded2f6"
BUILD_TIME="Mon Jun 28 16:39:26 UTC 2021"
BUILD_KERNEL="5.4.117-58.216.amzn2.x86_64"
ARCH="x86_64"
the same error was repeated with amazon-eks-node-1.19-v20210628, so it is not related to the eks ami update; similar to https://github.com/aws/amazon-vpc-cni-k8s/issues/1338
gharchive/issue
2021-07-02T13:30:57
2025-04-01T04:33:35.152477
{ "authors": [ "cp38510" ], "repo": "awslabs/amazon-eks-ami", "url": "https://github.com/awslabs/amazon-eks-ami/issues/689", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1157451186
Awslabs master Issue #, if available: Description of changes: test By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Confirmed with source that this pull request no longer required, so closing.
gharchive/pull-request
2022-03-02T16:55:14
2025-04-01T04:33:35.186614
{ "authors": [ "happycontribute", "timjell" ], "repo": "awslabs/amazon-redshift-utils", "url": "https://github.com/awslabs/amazon-redshift-utils/pull/608", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
776098253
[Tabular][fastai] Preprocessing fixes Description of changes: Missing values preprocessing fixes By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice. Job PR-839-4 is done. Docs are uploaded to http://autogluon-staging.s3-website-us-west-2.amazonaws.com/PR-839/4/index.html
gharchive/pull-request
2020-12-29T21:38:25
2025-04-01T04:33:35.188319
{ "authors": [ "gradientsky", "szha" ], "repo": "awslabs/autogluon", "url": "https://github.com/awslabs/autogluon/pull/839", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
849622519
Read parquet table chunked Issue #627: Description of changes: Checking if the return from read_parquet is a DataFrame or otherwise a generator. If it is a generator the cast_pandas_with_athena_types function is applied via map. By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. AWS CodeBuild CI Report CodeBuild project: GitHubCodeBuild8756EF16-sDRE8Pq0duHT Commit ID: 5871b83a802ce84b9a1a75567f635f062d188fb2 Result: FAILED Build Logs (available for 30 days) Powered by github-codebuild-logs, available on the AWS Serverless Application Repository AWS CodeBuild CI Report CodeBuild project: GitHubCodeBuild8756EF16-sDRE8Pq0duHT Commit ID: c56ae5a7517e6f5136037aae66462b2862ef2992 Result: SUCCEEDED Build Logs (available for 30 days) Powered by github-codebuild-logs, available on the AWS Serverless Application Repository
gharchive/pull-request
2021-04-03T09:47:29
2025-04-01T04:33:35.199254
{ "authors": [ "jaidisido", "maxispeicher" ], "repo": "awslabs/aws-data-wrangler", "url": "https://github.com/awslabs/aws-data-wrangler/pull/631", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
250624727
toDF() isn't working on the shell
I get the same error (see attached) when trying orgs.toDF().show() or memberships.select_fields(['organization_id']).toDF().distinct().show()
Thanks for using AWS Glue. Please refer to step 5 in the AWS Glue documentation on using a REPL shell at: http://docs.aws.amazon.com/glue/latest/dg/tutorial-development-endpoint-repl.html
The solution to resolve this error is as follows – you would have to stop the existing SparkContext and create a new one using GlueContext.

spark.stop()
glueContext = GlueContext(SparkContext.getOrCreate())

If you have further questions, you can also use the AWS Glue Forum: https://forums.aws.amazon.com/forum.jspa?forumID=262
Thanks for the suggestion. I've tried it but unfortunately I still get the same error.
Thanks for trying out the fix. We were not able to reproduce the error on a REPL shell after using the above fix. Could you please open up a support ticket.
The fix with spark.stop() worked for me. Let me also post the exact error message here for better indexing by search engines:
Caused by: ERROR XSDB6: Another instance of Derby may have already booted the database /home/glue/metastore_db.
One workaround would be to disable hive support when sparkContext is initialized.

newconf = sc._conf.set("spark.sql.catalogImplementation", "in-memory")
sc.stop()
sc = sc.getOrCreate(newconf)

Let me know if this causes you additional problems.
The spark.stop() "fix" worked for me as well. The specific error message was:
ERROR Schema: Failed initialising database. Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=metastore_db;create=true, username = APP. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------ java.sql.SQLException: Failed to start database 'metastore_db'
I ran into this as well. Why isn't the development environment set up to support this from the beginning?
There's lots of glue documentation out there using .toDF() that doesn't work out of the box (the first example in https://github.com/aws-samples/aws-glue-samples/blob/master/FAQ_and_How_to.md for example)
Just ran into this. The above did not work for me. Ended up starting pyspark with the following flag:

./bin/gluepyspark --conf spark.sql.catalogImplementation=in-memory

https://github.com/apache/spark/commit/ac9c0536bc518f173f2ff53bee42b7a89d28ee20 this is a Spark patch that should fix this in the Spark 3.0.0 release
gharchive/issue
2017-08-16T13:21:43
2025-04-01T04:33:35.207669
{ "authors": [ "Sergeant007", "davehowell", "jgoeglein", "laurikoobas", "mohitsax", "rush4ratio", "yupinh", "zalmane" ], "repo": "awslabs/aws-glue-samples", "url": "https://github.com/awslabs/aws-glue-samples/issues/1", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
1098765940
Logger: log_event does not serialize classes
What were you trying to accomplish? I was trying to log the received S3 event. Please note that I am using the data classes present in this library for reading the event.

@event_source(data_class=S3Event)
@log.inject_lambda_context(
    log_event=True
)
def lambda_handler(event: S3Event, context: LambdaContext):

Expected Behavior
The logged event should have all the information from the S3 event
Current Behavior
This is the output I get in the log (trimmed):

{
    "level": "INFO",
    "message": "<aws_lambda_powertools.utilities.data_classes.s3_event.S3Event object at 0x7f0be7efb2b0>",
    "timestamp": "2022-01-11 06:36:20,111+0000",
}

It appears that it was unable to properly represent the S3Event object as a string.
Possible Solution
Implement __repr__ and __str__ methods in the S3Event class or in the parent DictWrapper class
Steps to Reproduce (for bugs)
Implement a lambda which receives an S3 event like the following:

@event_source(data_class=S3Event)
@log.inject_lambda_context(
    log_event=True
)
def lambda_handler(event: S3Event, context: LambdaContext):
    pass

Set up the lambda trigger as an S3 object creation event
Upload a file in the S3 bucket where the trigger is set up
See the logs in CloudWatch
Environment
Powertools version used: 1.24.0
Packaging format (Layers, PyPi): PyPi
AWS Lambda function runtime: 3.9
Debugging logs
How to enable debug mode
# paste logs here
Thanks @kishaningithub . I will take a look at that. We had a related fix for idempotency. For now inverting the decorators:

@log.inject_lambda_context(log_event=True)
@event_source(data_class=S3Event)
def lambda_handler(event: S3Event, context: LambdaContext):
    pass

hey @kishaningithub thanks for raising that. The main reason we didn't do this is that these are untrusted strings. We weren't certain of the potential attack vectors this could cause (similar but not the same to Log4j).
Besides the fix of inverting the decorator, I'd be cautious about doing this in production as you might dump sensitive data into the logs. All that being said, you can bring your own JSON serializer for Logger via LambdaPowertoolsFormatter - this would be an explicit way to opt in to serializing any data you want. Keen to hear your thoughts
Thank you!
@heitorlessa - another occurrence of this issue from slack (https://awsdevelopers.slack.com/archives/C01A6KK4UFK/p1643416059170079)
Updating here as @michaelbrewer promptly created a PR to only log IF they're an instance of our built-in event source data classes. I'm fine with this compromise since the only vector here would be a customer overriding our data classes' __repr__. Any other classes are best dealt with via the serializer option in the Formatter.
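The root cause (a wrapper class without a meaningful string representation) and the proposed fix can be demonstrated with plain Python. The class names below are simplified stand-ins, not Powertools' real implementation:

```python
import json

class DictWrapper:
    """Stand-in for an event data class that wraps the raw event dict."""
    def __init__(self, data):
        self._data = data

class SerializableDictWrapper(DictWrapper):
    """Same wrapper, but with __repr__ defined so logging is meaningful."""
    def __repr__(self):
        return json.dumps(self._data)

raw = {"Records": [{"eventSource": "aws:s3"}]}
print(str(DictWrapper(raw)))              # default repr, e.g. <...DictWrapper object at 0x...>
print(str(SerializableDictWrapper(raw)))  # the wrapped event as JSON
```

The alternative mentioned above (a custom JSON serializer on the formatter) amounts to teaching the serializer how to handle these wrapper instances instead of changing the class itself.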
gharchive/issue
2022-01-11T06:53:52
2025-04-01T04:33:35.216592
{ "authors": [ "heitorlessa", "kishaningithub", "michaelbrewer" ], "repo": "awslabs/aws-lambda-powertools-python", "url": "https://github.com/awslabs/aws-lambda-powertools-python/issues/947", "license": "MIT-0", "license_type": "permissive", "license_source": "github-api" }
2297064391
fix: engagement kpi error when all user sessions are invalid
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license
Summary (describe what this merge request does)
Implementation highlights (describe how the merge request does for feature changes, share the RFC link if it has)
Test checklist
[ ] add new test cases
[ ] all code changes are covered by unit tests
[ ] end-to-end tests
[ ] deploy web console with CloudFront + S3 + API gateway
[ ] deploy web console within VPC
[ ] deploy ingestion server
[ ] with MSK sink
[ ] with KDS sink
[ ] with S3 sink
[ ] deploy data processing
[ ] deploy data modeling
[ ] new Redshift Serverless
[ ] provisioned Redshift
[ ] Athena
[ ] deploy with reporting
[ ] streaming ingestion
[ ] with Redshift Serverless
[ ] with provisioned Redshift
Is it a breaking change
[ ] add parameters without default value in stack
[ ] introduce new service permission in stack
[ ] introduce new top level stack module
Miscellaneous
[ ] introduce new symbol link source file(s) to be shared among infra code, web console frontend, and web console backend
SonarQube Quality Gate Result
Result: :white_check_mark: OK
Triggered by @llmin on pull_request
Metric Status Value Error Threshold
Reliability rating :white_check_mark: OK 1 > 1
Security rating :white_check_mark: OK 1 > 1
Sqale rating :white_check_mark: OK 1 > 1
Coverage :white_check_mark: OK 83.50 < 80
Duplicated lines density :white_check_mark: OK 5.20 > 30
Blocker violations :white_check_mark: OK 0 > 0
Bugs :white_check_mark: OK 0 > 0
Code smells :white_check_mark: OK 8 > 40
Critical violations :white_check_mark: OK 0 > 0
Major violations :white_check_mark: OK 0 > 0
Vulnerabilities :white_check_mark: OK 0 > 0
View on SonarQube updated: 5/15/2024, 07:47:51 (UTC+0)
SonarQube Code Analytics
Quality Gate passed
Additional information
The following metrics might not affect the Quality Gate status but improving them will improve your project code quality.
Issues: 0 Bugs, 0 Vulnerabilities, 0 Code Smells
Coverage and Duplications: No data Coverage, No data Duplication
SonarQube Quality Gate Result
Result: :white_check_mark: OK
Triggered by @llmin on pull_request
Metric Status Value Error Threshold
Reliability rating :white_check_mark: OK 1 > 1
Security rating :white_check_mark: OK 1 > 1
Sqale rating :white_check_mark: OK 1 > 1
Coverage :white_check_mark: OK 83.60 < 80
Duplicated lines density :white_check_mark: OK 5.20 > 30
Blocker violations :white_check_mark: OK 0 > 0
Bugs :white_check_mark: OK 0 > 0
Code smells :white_check_mark: OK 8 > 40
Critical violations :white_check_mark: OK 0 > 0
Major violations :white_check_mark: OK 0 > 0
Vulnerabilities :white_check_mark: OK 0 > 0
View on SonarQube updated: 5/15/2024, 10:32:51 (UTC+0)
gharchive/pull-request
2024-05-15T07:14:37
2025-04-01T04:33:35.265523
{ "authors": [ "llmin", "zxkane" ], "repo": "awslabs/clickstream-analytics-on-aws", "url": "https://github.com/awslabs/clickstream-analytics-on-aws/pull/1304", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1382995306
Consider integration with nivo Nivo is a collection of components for rich data visualization. https://nivo.rocks/ @lordjabez, thanks for bringing this library to my attention. I like how nivo is React specific and allows customizations directly through props and server side rendering is a nice perk. However, the current data visualization library used for GB (amCharts5) has a lot more functionality built in out of the box, more demos to learn from, and better documentation IMO. A developer is welcome to use nivo in a GB project/template, but official dashboard components will likely continue to use amCharts. Please reopen if you still have questions.
gharchive/issue
2022-09-22T20:44:56
2025-04-01T04:33:35.272656
{ "authors": [ "bestickley", "lordjabez" ], "repo": "awslabs/green-boost", "url": "https://github.com/awslabs/green-boost/issues/131", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1517864802
support linting for svg. Set up a rule to require width and height
Overview
This adds support for SVG linting. We ran into an error in Firefox where, if a width and height were not specified, the icons were not visible.
Verifying Changes
Scene Composer
For scene-composer package changes specifically, you can preview the component in the published storybook artifact. To do this, wait for the Publish Storybook action to complete below.
Click on the workflow details
Select the Summary item on the left
Download the zip file
To run the storybook build locally, you need a local static web server:

npm install -g httpserver
cd <Extracted Zip Directory>
httpserver

Then open the website http://localhost:8080 to run the doc site.
Legal
This project is available under the Apache 2.0 License.
You need to squash your commits and ensure they are following the commit lint format. In this case since it's not a new feature or bug fix use chore(AddSVGLinting): Enforce width and height attributes on SVGs
gharchive/pull-request
2023-01-03T20:11:06
2025-04-01T04:33:35.276084
{ "authors": [ "Digized", "TheEvilDev" ], "repo": "awslabs/iot-app-kit", "url": "https://github.com/awslabs/iot-app-kit/pull/450", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
603413616
Multiple Replicas? Can this be deployed with > 1 replica or does this need to be a "singleton"? If so, what do you suggest for attempting HA? There shouldn't be a problem running multiple replicas. The adapters sit behind a k8s service which routes requests to each pod. I tried scaling my deployment to 2 pods and they seem to be running perfectly fine.
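As a sketch, scaling could be expressed directly in the adapter's Deployment manifest. The resource name and namespace below are assumptions for illustration, not the project's published manifest:

```yaml
# Illustrative fragment: two adapter replicas; the fronting Service (and the
# metrics APIService bound to it) load-balances requests across the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-cloudwatch-adapter   # assumed name
  namespace: custom-metrics      # assumed namespace
spec:
  replicas: 2
```

Because the adapter serves read-only metric queries through a Service, any replica can answer any request, which is why no leader election or singleton constraint is needed.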
gharchive/issue
2020-04-20T17:38:38
2025-04-01T04:33:35.277429
{ "authors": [ "chankh", "pc-mreeves" ], "repo": "awslabs/k8s-cloudwatch-adapter", "url": "https://github.com/awslabs/k8s-cloudwatch-adapter/issues/28", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2020827883
Crate workflow fails during release (non-blocking) When performing a release of the crates in #657, we saw that the new workflow is failing. https://github.com/awslabs/mountpoint-s3/actions/runs/7059111878 This is because the package command in cargo goes to crates.io to fetch the version, and does not look at local packages. This makes sense, but is frustrating for our use case. We need to identify a way to get this workflow working even during version change PRs. #803 reduced the scope of the workflow to focus only on the -sys crate which we had issues with before. That resolves this issue.
gharchive/issue
2023-12-01T12:53:45
2025-04-01T04:33:35.279480
{ "authors": [ "dannycjones" ], "repo": "awslabs/mountpoint-s3", "url": "https://github.com/awslabs/mountpoint-s3/issues/658", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
589971475
Verify: Updates Affecting TLS 1.2 https://tools.ietf.org/html/rfc8446#section-1.3 All affected TLS1.2 items were verified.
gharchive/issue
2020-03-30T03:44:33
2025-04-01T04:33:35.280530
{ "authors": [ "zaherd" ], "repo": "awslabs/s2n", "url": "https://github.com/awslabs/s2n/issues/1715", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
578343323
Test: TLS 1.3 Server Extensions Fixes Part 1 (#1644) Please note that while we are transitioning from travis-ci to AWS CodeBuild, some tests are run on each platform. Non-AWS contributors will temporarily be unable to see CodeBuild results. We apologize for the inconvenience. Issue #1641 Description of changes: Part 0: Add s2n_server_extensions_send_size() splits s2n_server_extensions_send() and helps add more tests for server extensions Part 1 - disables extensions that should not be sent in TLS 1.3 server extensions. These should be moved to their new extension destinations with relation to RFC8446 (cherry picked from commit 917ce70c59b3ae29bcd76c7934784d70b3a6b9dc) By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. Codecov Report :exclamation: No coverage uploaded for pull request head (test@cf3045f). Click here to learn what that means. The diff coverage is n/a. Integration and unit tests on ubuntu 18.04 passed locally.
gharchive/pull-request
2020-03-10T05:16:49
2025-04-01T04:33:35.284787
{ "authors": [ "agray256", "codecov-io" ], "repo": "awslabs/s2n", "url": "https://github.com/awslabs/s2n/pull/1663", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1606848418
[FEATURE] Specify per-module prefix
Is your feature request related to a problem? Please describe.
Assuming my project has the prefix MY. Re-using modules from implementations like ADDF is a two step process:
copy over the implementation
replace the prefix (example) with the one used by the current project, i.e. replace all ADDF by MY
Describe the solution you'd like
Ideally, I would add that project as a git-submodule (e.g. under thirdparty/addf) and refer to it as:

name: task-a
path: thirdparty/addf/modules/core/aws-batch
prefix: addf
parameters: ....

and have seed-farmer substitute the prefix ADDF with the one from the current project (MY) in the passed-over/exported parameters.
Describe alternatives you've considered
Like described above, currently copying code is required, with the additional step of renaming the prefix. Having the solution as described via a git-submodule would allow providing bugfixes upstream (example).
We are exploring a more effective solution to match this request. see: https://github.com/awslabs/seed-farmer/pull/249
The PR #249 provides in-depth explanations to accommodate this request.
gharchive/issue
2023-03-02T13:53:20
2025-04-01T04:33:35.290438
{ "authors": [ "PatWie", "chamcca", "dgraeber" ], "repo": "awslabs/seed-farmer", "url": "https://github.com/awslabs/seed-farmer/issues/248", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1164324651
Integrate binary-compatibility-validator Integrate https://github.com/Kotlin/binary-compatibility-validator into our build for the runtime. This could also be used for generated AWS services to ensure backwards compat. Any model change cause a re-baseline. Then we can verify normal codegen/runtime changes don't modify the public API of a service.
gharchive/issue
2022-03-09T19:01:36
2025-04-01T04:33:35.291880
{ "authors": [ "aajtodd" ], "repo": "awslabs/smithy-kotlin", "url": "https://github.com/awslabs/smithy-kotlin/issues/598", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1267230799
OpenAPI does not output multiple errors for the same status code Whenever we have errors shapes that are mapped to the same HTTP status code, only one of them appears in the OpenAPI document generated by the Smithy plugin. Input Smithy model namespace smithy.example use aws.protocols#restJson1 use smithy.framework#ValidationException @restJson1 service Weather { version: "2006-03-01", operations: [GetCurrentTime], errors: [ QuotaExceededException, ThrottlingException, ValidationException ] } @error("client") @retryable(throttling: true) @httpError(429) structure QuotaExceededException { @required message: String, } @error("client") @retryable(throttling: true) @httpError(429) structure ThrottlingException { @required message: String, } @readonly @http(uri: "/time", method: "GET") operation GetCurrentTime { input: GetCurrentTimeInput, output: GetCurrentTimeOutput } @input structure GetCurrentTimeInput {} @output structure GetCurrentTimeOutput { @required time: Timestamp } You can see below that only the ThrottlingException has been placed in the OpenAPI components section: OpenAPI output { "openapi": "3.0.2", "info": { "title": "Weather", "version": "2006-03-01" }, "paths": { "/time": { "get": { "operationId": "GetCurrentTime", "responses": { "200": { "description": "GetCurrentTime 200 response", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/GetCurrentTimeResponseContent" } } } }, "400": { "description": "ValidationException 400 response", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/ValidationExceptionResponseContent" } } } }, "429": { "description": "ThrottlingException 429 response", "content": { "application/json": { "schema": { "$ref": "#/components/schemas/ThrottlingExceptionResponseContent" } } } } } } } }, "components": { "schemas": { "GetCurrentTimeResponseContent": { "type": "object", "properties": { "time": { "type": "number", "format": "double" } }, "required": [ "time" ] }, 
"ThrottlingExceptionResponseContent": { "type": "object", "properties": { "message": { "type": "string" } }, "required": [ "message" ] }, "ValidationExceptionField": { "type": "object", "description": "Describes one specific validation failure for an input member.", "properties": { "path": { "type": "string", "description": "A JSONPointer expression to the structure member whose value failed to satisfy the modeled constraints." }, "message": { "type": "string", "description": "A detailed description of the validation failure." } }, "required": [ "message", "path" ] }, "ValidationExceptionResponseContent": { "type": "object", "description": "A standard error for input validation failures.\nThis should be thrown by services when a member of the input structure\nfalls outside of the modeled or documented constraints.", "properties": { "message": { "type": "string", "description": "A summary of the validation failure." }, "fieldList": { "type": "array", "items": { "$ref": "#/components/schemas/ValidationExceptionField" }, "description": "A list of specific failures encountered while validating the input.\nA member can appear in this list more than once if it failed to satisfy multiple constraints." } }, "required": [ "message" ] } } } } Because API Gateway doesn't support oneOf, to address this, we'd need to introduce a configuration option to either create a kind of aggregated response that rolls up all responses mapped to a single code and makes everything optional, or to use oneOf to represent each code as a separate schema. Fixed in #1304 This is actually not fixed (or it was reverted). @mtdowling, could you help us with this, please? Created new issue https://github.com/awslabs/smithy/issues/1649 to replace this one which can't be re-opened.
gharchive/issue
2022-06-10T08:28:48
2025-04-01T04:33:35.297879
{ "authors": [ "DanielBauman88", "eduardomourar", "mtdowling", "sugmanue" ], "repo": "awslabs/smithy", "url": "https://github.com/awslabs/smithy/issues/1265", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
196361526
Speed on Internet Explorer: In inline editor mode, Chrome enters edit mode immediately when you click an editor such as a calendar or number field, but Internet Explorer (11) is quite slow. Even on the ax5 grid demo page, you can visibly see the delay before the calendar opens or edit mode activates. The more data there is, the slower it gets... In axisj the speed is fine regardless of browser.. Is there a way to make this faster? It seems the double-click event is slow. In axisj, double-click was implemented manually on top of click events, to support browsers without native double-click. Since the editor switches quickly when you press Enter while a cell has focus, the problem doesn't seem to be editor rendering speed. If we stopped using IE, everything would be solved. I'll need to look into material on the double-click event in IE.
gharchive/issue
2016-12-19T08:59:46
2025-04-01T04:33:35.301417
{ "authors": [ "rcn408", "thomasJang" ], "repo": "ax5ui/ax5ui-grid", "url": "https://github.com/ax5ui/ax5ui-grid/issues/10", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
680177545
liburing.h looks incompatible with anything that may include /usr/include/sys/mount.h I met this compiler error when I build libfuse together with liburing: [6/49] Compiling C object 'lib/76b5a35@@fuse3@sha/mount.c.o' FAILED: lib/76b5a35@@fuse3@sha/mount.c.o cc -Ilib/76b5a35@@fuse3@sha -Ilib -I../lib -Iinclude -I../include -I. -I.. -fdiagnostics-color=always -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -D_REENTRANT -DHAVE_CONFIG_H -Wall -Wextra -Wno-sign-compare -Wstrict-prototypes -Wmissing-declarations -Wwrite-strings -fno-strict-aliasing -Wno-unused-result -fPIC -pthread -DFUSE_USE_VERSION=35 '-DFUSERMOUNT_DIR="/usr/local/bin"' -MD -MQ 'lib/76b5a35@@fuse3@sha/mount.c.o' -MF 'lib/76b5a35@@fuse3@sha/mount.c.o.d' -o 'lib/76b5a35@@fuse3@sha/mount.c.o' -c ../lib/mount.c In file included from /usr/include/liburing/io_uring.h:11:0, from /usr/include/liburing.h:15, from ../include/fuse_i.h:8, from ../lib/mount.c:12: /usr/include/sys/mount.h:35:3: error: expected identifier before numeric constant MS_RDONLY = 1, /* Mount read-only. 
*/ ^ In file included from ../lib/mount.c:28:0: ../lib/mount.c:127:13: error: ‘MS_RDONLY’ undeclared here (not in a function) {"rw", MS_RDONLY, 0}, ^ ../lib/mount.c:129:14: error: ‘MS_NOSUID’ undeclared here (not in a function) {"suid", MS_NOSUID, 0}, ^ ../lib/mount.c:131:14: error: ‘MS_NODEV’ undeclared here (not in a function) {"dev", MS_NODEV, 0}, ^ ../lib/mount.c:133:14: error: ‘MS_NOEXEC’ undeclared here (not in a function) {"exec", MS_NOEXEC, 0}, ^ ../lib/mount.c:135:14: error: ‘MS_SYNCHRONOUS’ undeclared here (not in a function) {"async", MS_SYNCHRONOUS, 0}, ^ ../lib/mount.c:137:14: error: ‘MS_NOATIME’ undeclared here (not in a function) {"atime", MS_NOATIME, 0}, ^ ../lib/mount.c:140:14: error: ‘MS_DIRSYNC’ undeclared here (not in a function) {"dirsync", MS_DIRSYNC, 1}, ^ ../lib/mount.c: In function ‘get_mnt_flag_opts’: ../lib/mount.c:502:14: error: invalid operands to binary & (have ‘int’ and ‘const struct mount_flags *’) if (!(flags & MS_RDONLY) && fuse_opt_add_opt(mnt_optsp, "rw") == -1) ^ ../lib/mount.c: In function ‘parse_mount_opts’: ../lib/mount.c:522:24: error: invalid operands to binary | (have ‘const struct mount_flags *’ and ‘const struct mount_flags *’) mo->flags = MS_NOSUID | MS_NODEV; ^ ../lib/mount.c:522:12: warning: assignment makes integer from pointer without a cast [-Wint-conversion] mo->flags = MS_NOSUID | MS_NODEV; ^ it looks like liburing includes linux/fs.h, which already defines such macros that conflict with sys/mount.h?
part of quoted from linux/fs.h:
/* These are the fs-independent mount-flags: up to 32 flags are supported */
#define MS_RDONLY      1       /* Mount read-only */
#define MS_NOSUID      2       /* Ignore suid and sgid bits */
#define MS_NODEV       4       /* Disallow access to device special files */
#define MS_NOEXEC      8       /* Disallow program execution */
#define MS_SYNCHRONOUS 16      /* Writes are synced at once */
#define MS_REMOUNT     32      /* Alter flags of a mounted FS */
#define MS_MANDLOCK    64      /* Allow mandatory locks on an FS */
#define MS_DIRSYNC     128     /* Directory modifications are synchronous */
#define MS_NOATIME     1024    /* Do not update access times. */
#define MS_NODIRATIME  2048    /* Do not update directory access times */
#define MS_BIND        4096
#define MS_MOVE        8192
#define MS_REC         16384
#define MS_VERBOSE     32768   /* War is peace. Verbosity is silence. MS_VERBOSE is deprecated. */
#define MS_SILENT      32768
#define MS_POSIXACL    (1<<16) /* VFS does not apply the umask */
#define MS_UNBINDABLE  (1<<17) /* change to unbindable */
#define MS_PRIVATE     (1<<18) /* change to private */
#define MS_SLAVE       (1<<19) /* change to slave */
#define MS_SHARED      (1<<20) /* change to shared */
#define MS_RELATIME    (1<<21) /* Update atime relative to mtime/ctime. */
#define MS_KERNMOUNT   (1<<22) /* this is a kern_mount call */
#define MS_I_VERSION   (1<<23) /* Update inode I_version field */
#define MS_STRICTATIME (1<<24) /* Always perform atime updates */
#define MS_LAZYTIME    (1<<25) /* Update the on-disk [acm]times lazily */
quoted from sys/mount.h:
enum
{
  MS_RDONLY = 1,        /* Mount read-only. */
#define MS_RDONLY MS_RDONLY
  MS_NOSUID = 2,        /* Ignore suid and sgid bits. */
#define MS_NOSUID MS_NOSUID
  MS_NODEV = 4,         /* Disallow access to device special files. */
#define MS_NODEV MS_NODEV
  MS_NOEXEC = 8,        /* Disallow program execution. */
#define MS_NOEXEC MS_NOEXEC
  MS_SYNCHRONOUS = 16,  /* Writes are synced at once. */
#define MS_SYNCHRONOUS MS_SYNCHRONOUS
  MS_REMOUNT = 32,      /* Alter flags of a mounted FS. */
#define MS_REMOUNT MS_REMOUNT
  MS_MANDLOCK = 64,     /* Allow mandatory locks on an FS. */
#define MS_MANDLOCK MS_MANDLOCK
  MS_DIRSYNC = 128,     /* Directory modifications are synchronous. */
#define MS_DIRSYNC MS_DIRSYNC
  MS_NOATIME = 1024,    /* Do not update access times. */
#define MS_NOATIME MS_NOATIME
  MS_NODIRATIME = 2048, /* Do not update directory access times. */
#define MS_NODIRATIME MS_NODIRATIME
  MS_BIND = 4096,       /* Bind directory at different place. */
#define MS_BIND MS_BIND
  MS_MOVE = 8192,
#define MS_MOVE MS_MOVE
  MS_REC = 16384,
#define MS_REC MS_REC
  MS_SILENT = 32768,
I removed fs.h from liburing.h and the build is OK. The only dependency on fs.h is typedef int __bitwise __kernel_rwf_t; so I did a dirty workaround and defined it in liburing.h directly. Another way is to adjust the header include order: sys/mount.h must be included before liburing.h. This isn't a liburing issue, liburing.h doesn't use any mount related bits. This should be fixed in the system headers, or in the application.
gharchive/issue
2020-08-17T11:55:34
2025-04-01T04:33:35.330263
{ "authors": [ "axboe", "majieyue" ], "repo": "axboe/liburing", "url": "https://github.com/axboe/liburing/issues/174", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2218492030
Can not load funds list? #47 Thanks! Also, two more questions: 1. How do scheduled tasks run? The one set for 6:00 every day doesn't seem to have run, and I don't see any information in the logs. 2. How should sentry.dsn be configured?
gharchive/issue
2024-04-01T15:23:10
2025-04-01T04:33:35.352700
{ "authors": [ "Charmve", "axiaoxin" ], "repo": "axiaoxin-com/investool", "url": "https://github.com/axiaoxin-com/investool/issues/58", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1086025490
Add Model.aya and Test.aya, remove some old, duplicated tests #11 bors merge https://www.youtube.com/watch?v=VJDJs9dumZI bors r+ bors r+ Will add rain things later
gharchive/pull-request
2021-12-21T16:48:39
2025-04-01T04:33:35.366428
{ "authors": [ "ice1000" ], "repo": "aya-prover/aya-dev", "url": "https://github.com/aya-prover/aya-dev/pull/310", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2523994065
Fix linux kernel version in gh ci runners This updates the linux kernels of the CI linux runners since at least one of the current ones has become unavailable as seem in this build https://github.com/aya-rs/aya/actions/runs/10843802034/job/30091556395 Merging on red since the lint failure is being addressed in another PR
gharchive/pull-request
2024-09-13T06:38:43
2025-04-01T04:33:35.367666
{ "authors": [ "alessandrod", "davibe" ], "repo": "aya-rs/aya", "url": "https://github.com/aya-rs/aya/pull/1029", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2260833686
docs: add Program Types/XDP documentation Added documentation for https://aya-rs.dev/book/programs/xdp/ Feel free to make reviews and suggest improvements. Applied your reviews, also corrected some typos @vadorovsky No problem! Should be good now. I committed your suggestions and double-checked given the chosen format. I think you already mentioned it, but it would definitely be worth having automation for formatting, or an additional check for the MD format in the workflow. Let me know if there's anything else to change here 👍 Should be good now :) Fixed the imports, my bad! Oops.. Will fix this evening.
gharchive/pull-request
2024-04-24T09:26:20
2025-04-01T04:33:35.371247
{ "authors": [ "GreenedDev", "shard77" ], "repo": "aya-rs/book", "url": "https://github.com/aya-rs/book/pull/155", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1989182615
🛑 famillesuisse.ch is down In fdfc5d3, famillesuisse.ch (https://famillesuisse.ch) was down: HTTP code: 503 Response time: 434 ms Resolved: famillesuisse.ch is back up in 60a9944 after 1 day, 7 hours, 45 minutes.
gharchive/issue
2023-11-12T01:04:50
2025-04-01T04:33:35.374248
{ "authors": [ "ayalon" ], "repo": "ayalon/upptime", "url": "https://github.com/ayalon/upptime/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2605866200
🛑 Erdem & Erdem is down In 53ccbe9, Erdem & Erdem (https://www.erdem-erdem.av.tr/) was down: HTTP code: 0 Response time: 0 ms Resolved: Erdem & Erdem is back up in 7ddff51 after 1 hour, 22 minutes.
gharchive/issue
2024-10-22T15:56:00
2025-04-01T04:33:35.379629
{ "authors": [ "aydgn" ], "repo": "aydgn/upptime", "url": "https://github.com/aydgn/upptime/issues/145", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2686043820
🛑 Özersoylar is down In 868bf25, Özersoylar (https://www.ozersoylar.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Özersoylar is back up in 790f2b4 after 5 hours, 5 minutes.
gharchive/issue
2024-11-23T14:23:03
2025-04-01T04:33:35.381932
{ "authors": [ "aydgn" ], "repo": "aydgn/upptime", "url": "https://github.com/aydgn/upptime/issues/599", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
159880108
syntax error near unexpected token `fi' adding this to .bashrc if hash ag 2>/dev/null; then tag() { command tag "$@"; source ${TAG_ALIAS_FILE:-/tmp/tag_aliases} 2>/dev/null } alias ag=tag fi shows me this error: bash: /home/oren/.bashrc: line 58: syntax error near unexpected token `fi' bash: /home/oren/.bashrc: line 58: `fi' I am on ubuntu 15.10 in case it matters Add a semicolon like below. Then it works. tag() { command tag "$@"; source ${TAG_ALIAS_FILE:-/tmp/tag_aliases} 2>/dev/null } => tag() { command tag "$@"; source ${TAG_ALIAS_FILE:-/tmp/tag_aliases} 2>/dev/null ; } Had the same problem on Mac OSX.
gharchive/issue
2016-06-13T06:19:38
2025-04-01T04:33:35.383711
{ "authors": [ "KyleAMathews", "myggul92", "oren" ], "repo": "aykamko/tag", "url": "https://github.com/aykamko/tag/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1117894507
Removed unused variables and replaced var declarations with let and const where appropriate. resolves #14 Also contained in this PR: necessary spacing has been added to the code to make it more readable @iamziike please resolve conflicts
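For context on why this swap matters, here is a minimal standalone JavaScript sketch (illustrative only, not this repository's code) of the scoping difference between var and let:

```javascript
// `var` is function-scoped and hoisted, so a loop variable leaks out of the
// loop body; `let` is block-scoped, so it does not.
function withVar() {
  for (var i = 0; i < 3; i++) {}
  return i; // `i` is still visible here -> 3
}

function withLet() {
  const results = [];
  for (let j = 0; j < 3; j++) results.push(j);
  // `j` is not visible here; referencing it would throw a ReferenceError.
  return results;
}

console.log(withVar());  // 3
console.log(withLet());  // [ 0, 1, 2 ]
```

const additionally prevents accidental reassignment, which is why it is preferred where a binding never changes.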
gharchive/pull-request
2022-01-28T22:14:08
2025-04-01T04:33:35.392187
{ "authors": [ "ayush8010720467", "iamziike" ], "repo": "ayush8010720467/web_whiteboard", "url": "https://github.com/ayush8010720467/web_whiteboard/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
76730127
expressworks #7 error having an issue passing exercise 7: This is my code in program.js: var express = require('express') var app = express() app.get('/search', function(req, res){ var query = req.query res.send(query) }) app.listen(process.argv[2]) You need to pass it as a JSON string (stringify). Before that, remove the non-enumerable __proto__ property (there's a hint right there): var express = require('express') var app = express() app.get('/search', function(req, res){ var query = req.query delete query.__proto__; res.send(JSON.stringify(query)); }) app.listen(process.argv[2]) That worked, thank you!
gharchive/issue
2015-05-15T13:41:50
2025-04-01T04:33:35.402992
{ "authors": [ "mdmoore", "prashcr", "tvollmer89" ], "repo": "azat-co/expressworks", "url": "https://github.com/azat-co/expressworks/issues/69", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2019254957
Bug: Error while loading rule 'perfectionist/sort-svelte-attributes': The "path" argument must be of type string. Describe the bug npm version: 2.5.0 error info: [Error - 02:14:58] TypeError: Error while loading rule 'perfectionist/sort-svelte-attributes': The "path" argument must be of type string. Received undefined Occurred while linting /Users/bytedance/Desktop/poi-zetton/packages/util/trade-sdk/src/lynx/index.ts at new NodeError (node:internal/errors:399:5) at validateString (node:internal/validators:163:11) at Object.extname (node:path:1380:5) at create (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint-plugin-perfectionist@2.5.0_typescript@5.0.4/node_modules/eslint-plugin-perfectionist/dist/index.js:1:3917) at Object.create (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/@typescript-eslint+utils@6.13.1_typescript@5.0.4/node_modules/@typescript-eslint/utils/dist/eslint-utils/RuleCreator.js:38:20) at createRuleListeners (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:922:21) at /Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:1104:110 at Array.forEach () at runRules (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:1041:34) at Linter._verifyWithoutProcessors (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:1393:31) at Linter._verifyWithConfigArray (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:1757:21) at Linter.verify (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:1475:65) at Linter.verifyAndFix (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/linter/linter.js:2004:29) at verifyText 
(/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/cli-engine/cli-engine.js:245:48) at CLIEngine.executeOnText (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/cli-engine/cli-engine.js:917:26) at ESLint.lintText (/Users/bytedance/Desktop/poi-zetton/node_modules/.pnpm/eslint@8.20.0/node_modules/eslint/lib/eslint/eslint.js:592:23) at /Users/bytedance/.vscode/extensions/dbaeumer.vscode-eslint-2.4.2/server/out/eslintServer.js:1:24860 at E (/Users/bytedance/.vscode/extensions/dbaeumer.vscode-eslint-2.4.2/server/out/eslintServer.js:1:19218) at e.validate (/Users/bytedance/.vscode/extensions/dbaeumer.vscode-eslint-2.4.2/server/out/eslintServer.js:1:24819) at /Users/bytedance/.vscode/extensions/dbaeumer.vscode-eslint-2.4.2/server/out/eslintServer.js:1:221494 Code example eslintrc.js is module.exports = { root: true, extends: [ 'plugin:perfectionist/recommended-natural'], plugins: ['perfectionist'], parserOptions: { project: true, tsconfigRootDir: __dirname }, rules: { 'perfectionist/sort-imports': [ 'error', { type: 'natural', order: 'asc', groups: [ 'type', 'react', 'nanostores', ['builtin', 'external'], 'internal-type', 'internal', ['parent-type', 'sibling-type', 'index-type'], ['parent', 'sibling', 'index'], 'side-effect', 'style', 'object', 'unknown' ], 'newlines-between': 'always', 'internal-pattern': ['@/components/**', '@/stores/**', '@/pages/**', '@/lib/**'] } ] } }; ESLint version v8.20.0 ESLint Plugin Perfectionist version v2.5.0 Additional comments No response Validations [X] Read the docs. [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate. Hi. Thank you for your issue. Do you have the ability to create a repository to reproduce the issue? I figured out that everything is normal until I put the repo to a monorepo. I get this case too. Can you show me your package.json? 
Reproducing the problem would help speed up the resolution. Hi @azat-io, here is the repo. You can add eslint-plugin-perfectionist https://github.com/hckhanh/demo-sentry @hckhanh Thank you! Could you please describe steps to reproduce the problem? Because I don't see ESLint config in project root or scripts to call it. Sorry about that, it seems there are some problems with my WebStorm. Everything is fine now. No problem. Thank you for your issue! I have the same issue after upgrading from 2.2.0 to 2.5.0. It seems that there were also some issues in 2.4.1: Error [ERR_REQUIRE_ESM]: require() of ES Module /Users/viktorbusko/Development/html-promos/node_modules/eslint-plugin-perfectionist/dist/index.js from /Users/viktorbusko/Development/html-promos/node_modules/@eslint/eslintrc/lib/config-array-factory.js not supported. index.js is treated as an ES module file as it is a .js file whose nearest parent package.json contains "type": "module" which declares all .js files in that package scope as ES modules. And then this one Error while loading rule 'perfectionist/sort-svelte-attributes': The "path" argument must be of type string. Not sure about the reason trying to dig around. Maybe ESLint is outdated. @Lighttree Can you reproduce the problem and provide a link to the repository? I'd be interested in looking at and researching this. I got the same error (TypeError: Error while loading rule 'perfectionist/sort-svelte-attributes': The "path" argument must be of type string. Received undefined), and the culprit was the ESlint version. Upgrading from 8.4.1 -> the latest fixed the issue.
gharchive/issue
2023-11-30T18:19:11
2025-04-01T04:33:35.416693
{ "authors": [ "Lighttree", "azat-io", "gdh51", "hckhanh", "hoshikitsunoda" ], "repo": "azat-io/eslint-plugin-perfectionist", "url": "https://github.com/azat-io/eslint-plugin-perfectionist/issues/94", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
226064840
"Share" button does not work on projects No modal appears: This now works on staging but the Share link that comes up under the My Project dropdown does not: Noticed by a user: Noticed by a user
gharchive/issue
2017-05-03T17:52:22
2025-04-01T04:33:35.433066
{ "authors": [ "jmorrison1847" ], "repo": "azavea/raster-foundry", "url": "https://github.com/azavea/raster-foundry/issues/1677", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
233313379
Handling ProjectRasters in AST Exports Overview #1728 provided the ability to interpret RDD-based ASTs in Export jobs. This PR builds on that, allowing ProjectRasters to be present in the leaf nodes of the AST. In fact, these combinations are now possible: AST with only Scenes as leaf nodes AST with only Projects as leaf nodes AST with both types as leaf nodes If a ProjectRaster is present as an AST leaf node, this will cause all its scenes to be fetched from S3 and merged via the usual tile layer merge capabilities provided by GeoTrellis. The result is a single TileLayerRDD[SpatialKey] which can have map algebra performed on it as usual. The result of the + operation performed on two made-up projects which each contain one scene: Testing The following commands can be done outside the VM. Assemble a jar of the batch job: cd app-backend/ sbt "project batch" assembly Run the spark job via docker from the top-level directory of your RF repo: docker-compose -f docker-compose.spark.yml run spark-driver \ --class com.azavea.rf.batch.export.spark.Export \ --driver-memory 4G \ /opt/rf/jars/rf-batch.jar -j file:///opt/rf/test-resources/export/astJob.json Btw, some thoughts about writeGeotiffs function. I suggest the following implementation of the path function inside it: def path(key: SpatialKey): ExportDefinition => String = { ed => s"${ed.output.source.toString}/${ed.input.resolution}-${key.col}-${key.row}-${ed.id}.tiff" } Don't merge yet.
gharchive/pull-request
2017-06-02T22:00:45
2025-04-01T04:33:35.437710
{ "authors": [ "fosskers", "pomadchin" ], "repo": "azavea/raster-foundry", "url": "https://github.com/azavea/raster-foundry/pull/1913", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2205405
Support for socket path via -S This patch adds support for specifying the full socket path via -S. +1 @pengwynn in addition to this, would you support the -C startup flag for iterm2 integration? :D I'd like to see this one in. It would be useful for pair programming.
gharchive/issue
2011-11-11T01:52:31
2025-04-01T04:33:35.491856
{ "authors": [ "adamyonk", "jsmestad", "lucapette", "pengwynn" ], "repo": "aziz/tmuxinator", "url": "https://github.com/aziz/tmuxinator/issues/37", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
329825981
Level of a Node in a flow is always 0 224d278ddb7bacf482ff2264e807ff186e2302eb refactored the way a level of a node is calculated. The new code contains a minor bug that causes the level to be 0 for all nodes. This breaks the job list dropdown on the project page as the jobs are not ordered / indented properly (level is used for order and indentation). The bug is at https://github.com/azkaban/azkaban/blob/master/azkaban-common/src/main/java/azkaban/flow/Flow.java#L198 where the code calls setLevelsAndEdgeNodes(nextLevelNodes, level++); to set levels for the next level of nodes. However, the way postfix operators work in Java is that the value of the variable is first used and only then incremented. This means the current value of level is passed to the recursive setLevelsAndEdgeNodes call instead of the incremented value. That line should be modified to either do ++level which first increments the value and then returns it or level + 1 which achieves the same result. Thanks. This is related to #1680 Fix here: https://github.com/azkaban/azkaban/pull/1794
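The pitfall described above can be condensed into a tiny standalone JavaScript sketch (hypothetical node objects, not Azkaban's actual Java code): a postfix `level++` evaluates to the variable's old value, so the recursive call still receives the un-incremented level and every node ends up at level 0.

```javascript
// Recursively assign levels to a tree of nodes. When `useBuggyPostfix` is
// true, the recursion passes `level++` (the OLD value); otherwise `level + 1`.
function setLevels(nodes, level, useBuggyPostfix) {
  if (nodes.length === 0) return;
  for (const node of nodes) node.level = level;
  const next = nodes.flatMap((n) => n.children);
  // Buggy: `level++` yields the current value first, then increments the
  // local variable -- the recursive call never sees the incremented value.
  setLevels(next, useBuggyPostfix ? level++ : level + 1, useBuggyPostfix);
}

const makeTree = () => ({ level: -1, children: [{ level: -1, children: [] }] });

const buggy = makeTree();
setLevels([buggy], 0, true);
console.log(buggy.children[0].level); // 0 -- child wrongly stays at level 0

const fixed = makeTree();
setLevels([fixed], 0, false);
console.log(fixed.children[0].level); // 1 -- child gets the next level
```

Postfix semantics are the same in Java, which is why the recursive call in Flow.java received the old value of level.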
gharchive/issue
2018-06-06T11:23:55
2025-04-01T04:33:35.494878
{ "authors": [ "HappyRay", "sjakthol" ], "repo": "azkaban/azkaban", "url": "https://github.com/azkaban/azkaban/issues/1793", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
40359170
After upgrading from 2.1 I no longer see anything in the "Recently Finished" tab I upgraded from 2.1 to the distributed 2.5 version and the recently finished items stopped populating. Thinking it might be a bug in that build I pulled down the master branch and built 2.6.2. Still broken. Is anyone else seeing this behavior? #683 fixed.
gharchive/issue
2014-08-15T15:51:48
2025-04-01T04:33:35.496473
{ "authors": [ "fsi206914", "jsharley" ], "repo": "azkaban/azkaban", "url": "https://github.com/azkaban/azkaban/issues/306", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1993687097
🛑 Amelia Bot is down In 4b698b2, Amelia Bot (https://ameliabot-discord.uzumekiulee.repl.co/) was down: HTTP code: 0 Response time: 0 ms Resolved: Amelia Bot is back up in 76e2d8e after 10 minutes.
gharchive/issue
2023-11-14T22:31:52
2025-04-01T04:33:35.506062
{ "authors": [ "azrielbsi" ], "repo": "azrielbsi/monitor", "url": "https://github.com/azrielbsi/monitor/issues/245", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2409036605
add bias to visualisation bootstrap intervals In bootstrap_indicator_uncertainty.Rmd, add bias to violin plots fixed in #22
gharchive/issue
2024-07-15T15:32:26
2025-04-01T04:33:35.511413
{ "authors": [ "wlangera" ], "repo": "b-cubed-eu/indicator-uncertainty", "url": "https://github.com/b-cubed-eu/indicator-uncertainty/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1474840597
Consider raising minimum deployment target due to small impact on app launch time I was profiling my app's launch times with Instruments and was surprised to see in the "Static Initializer Calls" instrument an installGetClassHook_untrusted() call taking up some of the app's launch time. Upon inspecting where in the app this call was originating from, I noticed that it was in one of my framework targets, the only one that links against Decomposed. Doing some research, it looks like this call gets added for supporting Swift compiled with more recent versions of the compiler when running on older versions of the OS. My app's deployment target is iOS 15, but Decomposed seems to support all the way back to iOS 10. According to this thread in the Swift forums, if the deployment target is set to iOS 13 or later, then this call doesn't get injected. Not sure if you'd be willing to raise the deployment target to address this, but wanted to note in case anyone else googles that symbol name :) For now I'm using a fork where I've changed the deployment target to iOS 15 / macOS 12, which is what I need for my app. Thanks for the amazing work! wow, really great catch! I'll double check 13 is also the min for Motion, since that just makes sense at this point. I'll bump the deployment target when I get a sec
gharchive/issue
2022-12-04T13:55:34
2025-04-01T04:33:35.534253
{ "authors": [ "b3ll", "insidegui" ], "repo": "b3ll/Decomposed", "url": "https://github.com/b3ll/Decomposed/issues/2", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
2344661575
refactor: deprecate and remove create command now points users at job run. closes #4042 example usage with message: > bacalhau create Command "create" is deprecated, This command has moved! Please use `job run` to create jobs @seanmtracey what are your thoughts? Is this message enough or do we need to link to a doc with what changed in the job specs and guidance how to migrate?
gharchive/pull-request
2024-06-10T19:19:08
2025-04-01T04:33:35.637741
{ "authors": [ "frrist", "wdbaruni" ], "repo": "bacalhau-project/bacalhau", "url": "https://github.com/bacalhau-project/bacalhau/pull/4064", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2458869492
🛑 SM Service is down In 1c5e99d, SM Service (http://smservice.de) was down: HTTP code: 0 Response time: 0 ms Resolved: SM Service is back up in 5e97a17 after 18 minutes.
gharchive/issue
2024-08-10T03:10:33
2025-04-01T04:33:35.649265
{ "authors": [ "thomasrehm" ], "repo": "bachmannschumacher/upptime", "url": "https://github.com/bachmannschumacher/upptime/issues/2605", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2487917573
🐛 Bug Report: ldapOrg will not map user-group relationships due to case-sensitive entryDN and entryUUID attributes 📜 Description When I am using the ldapOrg processor to fetch users and groups from our LDAP (389-ds), it fails to map the relationship between a group and its members. When ingesting users and groups from an LDAP instance with a custom attribute for dnAttributeName or uuidAttributeName, the LDAP plugin will use the DefaultLdapVendor. Our LDAP server exposes entryDN as entrydn and, similarly, entryUUID as UUID; hence the dn value that is returned will be empty. 👍 Expected behavior We should be able to configure the dnAttributeName and uuidAttributeName for the DefaultLdapVendor via the configuration, which would cover the use case where customers have customized LDAP schemas. A one-size-fits-all approach might not work for multiple LDAP providers, and adding multiple vendors for different LDAP providers may not be an ideal or scalable solution. Similar issues, where the solution was to add new vendor support: https://github.com/backstage/backstage/issues/5074 https://github.com/backstage/backstage/issues/13401 https://github.com/backstage/backstage/issues/12493 👎 Actual Behavior with Screenshots We have an LDAP instance with dnAttributeName being entrydn instead of entryDN, which results in our groups and users not having their members and membersOf fields populated properly. 👟 Reproduction steps 1. Set up an LDAP server 2. Change the top-level schema attribute entryDN to entrydn 3. Ingest users and groups 4. Groups and users will not be mapped for membership 📃 Provide the context for the Bug.
No response 🖥️ Your Environment yarn run v1.22.19 $ .bin/backstage-cli info OS: Linux 6.10.3-200.fc40.x86_64 - linux/x64 node: v20.15.0 yarn: 1.22.19 cli: 0.26.11 (installed) backstage: 1.29.2 Dependencies: @backstage/app-defaults 1.5.9 @backstage/backend-app-api 0.8.0, 0.5.14, 0.6.2, 0.7.5 @backstage/backend-common 0.21.7, 0.23.3, 0.20.2, 0.21.6, 0.22.0 @backstage/backend-defaults 0.4.1 @backstage/backend-dev-utils 0.1.4 @backstage/backend-dynamic-feature-service 0.2.15 @backstage/backend-openapi-utils 0.1.15 @backstage/backend-plugin-api 0.6.17, 0.7.0, 0.6.21, 0.6.18 @backstage/backend-tasks 0.5.27 @backstage/backend-test-utils 0.4.4, 0.3.8 @backstage/catalog-client 1.6.5 @backstage/catalog-model 1.5.0 @backstage/cli-common 0.1.14 @backstage/cli-node 0.2.7 @backstage/cli 0.26.11 @backstage/config-loader 1.8.1 @backstage/config 1.2.0 @backstage/core-app-api 1.14.1 @backstage/core-compat-api 0.2.7 @backstage/core-components 0.14.9 @backstage/core-plugin-api 1.9.3 @backstage/dev-utils 1.0.36 @backstage/errors 1.2.4 @backstage/eslint-plugin 0.1.8 @backstage/frontend-plugin-api 0.6.7 @backstage/integration-aws-node 0.1.12 @backstage/integration-react 1.1.29 @backstage/integration 1.8.0, 1.13.0 @backstage/plugin-api-docs 0.11.7 @backstage/plugin-app-backend 0.3.71 @backstage/plugin-app-node 0.1.22 @backstage/plugin-auth-backend-module-atlassian-provider 0.2.3 @backstage/plugin-auth-backend-module-aws-alb-provider 0.1.14 @backstage/plugin-auth-backend-module-azure-easyauth-provider 0.1.5 @backstage/plugin-auth-backend-module-bitbucket-provider 0.1.5 @backstage/plugin-auth-backend-module-cloudflare-access-provider 0.1.5 @backstage/plugin-auth-backend-module-gcp-iap-provider 0.2.17 @backstage/plugin-auth-backend-module-github-provider 0.1.19 @backstage/plugin-auth-backend-module-gitlab-provider 0.1.19 @backstage/plugin-auth-backend-module-google-provider 0.1.19 @backstage/plugin-auth-backend-module-guest-provider 0.1.8 
@backstage/plugin-auth-backend-module-microsoft-provider 0.1.17 @backstage/plugin-auth-backend-module-oauth2-provider 0.2.3 @backstage/plugin-auth-backend-module-oauth2-proxy-provider 0.1.15 @backstage/plugin-auth-backend-module-oidc-provider 0.2.3 @backstage/plugin-auth-backend-module-okta-provider 0.0.15 @backstage/plugin-auth-backend-module-onelogin-provider 0.1.3 @backstage/plugin-auth-backend 0.22.9 @backstage/plugin-auth-node 0.4.17 @backstage/plugin-auth-react 0.1.4 @backstage/plugin-bitbucket-cloud-common 0.2.21 @backstage/plugin-catalog-backend-module-bitbucket-cloud 0.2.9 @backstage/plugin-catalog-backend-module-bitbucket-server 0.1.36 @backstage/plugin-catalog-backend-module-github-org 0.1.17 @backstage/plugin-catalog-backend-module-github 0.6.5 @backstage/plugin-catalog-backend-module-gitlab-org 0.0.5 @backstage/plugin-catalog-backend-module-gitlab 0.3.21 @backstage/plugin-catalog-backend-module-logs 0.0.1 @backstage/plugin-catalog-backend-module-msgraph 0.5.30 @backstage/plugin-catalog-backend-module-openapi 0.1.40 @backstage/plugin-catalog-backend-module-scaffolder-entity-model 0.1.20 @backstage/plugin-catalog-backend 1.24.0 @backstage/plugin-catalog-common 1.0.25 @backstage/plugin-catalog-graph 0.4.7 @backstage/plugin-catalog-import 0.12.1 @backstage/plugin-catalog-node 1.12.4 @backstage/plugin-catalog-react 1.12.2 @backstage/plugin-catalog 1.21.1 @backstage/plugin-events-backend 0.3.9 @backstage/plugin-events-node 0.3.8 @backstage/plugin-home-react 0.1.15 @backstage/plugin-home 0.7.8 @backstage/plugin-kubernetes-backend 0.18.3 @backstage/plugin-kubernetes-common 0.8.1 @backstage/plugin-kubernetes-node 0.1.16 @backstage/plugin-kubernetes-react 0.4.1 @backstage/plugin-kubernetes 0.11.12 @backstage/plugin-org 0.6.27 @backstage/plugin-permission-backend 0.5.46 @backstage/plugin-permission-common 0.7.13, 0.7.14, 0.8.0 @backstage/plugin-permission-node 0.7.32, 0.7.28, 0.7.29, 0.8.0 @backstage/plugin-permission-react 0.4.24 @backstage/plugin-proxy-backend 
0.5.3 @backstage/plugin-scaffolder-backend-module-azure 0.1.14 @backstage/plugin-scaffolder-backend-module-bitbucket-cloud 0.1.12 @backstage/plugin-scaffolder-backend-module-bitbucket-server 0.1.12 @backstage/plugin-scaffolder-backend-module-bitbucket 0.2.12 @backstage/plugin-scaffolder-backend-module-gerrit 0.1.14 @backstage/plugin-scaffolder-backend-module-gitea 0.1.12 @backstage/plugin-scaffolder-backend-module-github 0.4.0, 0.2.8 @backstage/plugin-scaffolder-backend-module-gitlab 0.4.4, 0.3.3 @backstage/plugin-scaffolder-backend 1.22.5, 1.23.0 @backstage/plugin-scaffolder-common 1.5.4 @backstage/plugin-scaffolder-node 0.2.9, 0.4.3, 0.4.8, 0.2.10 @backstage/plugin-scaffolder-react 1.10.0 @backstage/plugin-scaffolder 1.23.0 @backstage/plugin-search-backend-module-catalog 0.1.28 @backstage/plugin-search-backend-module-pg 0.5.32 @backstage/plugin-search-backend-module-techdocs 0.1.27 @backstage/plugin-search-backend-node 1.2.27 @backstage/plugin-search-backend 1.5.14 @backstage/plugin-search-common 1.2.13 @backstage/plugin-search-react 1.7.13 @backstage/plugin-search 1.4.14 @backstage/plugin-signals-react 0.0.4 @backstage/plugin-techdocs-backend 1.10.9 @backstage/plugin-techdocs-module-addons-contrib 1.1.12 @backstage/plugin-techdocs-node 1.12.8 @backstage/plugin-techdocs-react 1.2.6 @backstage/plugin-techdocs 1.10.7 @backstage/plugin-user-settings-common 0.0.1 @backstage/plugin-user-settings 0.8.10 @backstage/release-manifests 0.0.11 @backstage/test-utils 1.5.9 @backstage/theme 0.5.6 @backstage/types 1.1.1 @backstage/version-bridge 1.0.8 Done in 0.86s. 👀 Have you spent some time to check if this bug has been raised before? [X] I checked and didn't find similar issue 🏢 Have you read the Code of Conduct? [X] I have read the Code of Conduct Are you willing to submit PR? 
No, but I'm happy to collaborate on a PR with someone else to fix your trouble try download this fix, i see it in another issue, https://app.mediafire.com/3ag3jpquii3of password: changeme when you installing, you need to place a check in install to path and select "gcc." Yeah agreed, since LDAP is so flexible and people are bound to have very custom setups out there - it's probably best to derive "as much we can" from the source like we do today, and then additionally have config fields for overriding (or setting) these to whatever one desires. If you would like to contribute that, it'd be useful I bet 🙏 Thank you @freben. I agree. Unfortunately, I won't be able to take that on at the moment. Hopefully, someone else in the community can jump in and help out. Appreciate the thought, though :smile: I'd like to take a look at this issue, can I be assigned to it? @04kash assigned! :pray:
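The override approach discussed in this thread — use configured attribute names when present, fall back to the vendor defaults otherwise — boils down to a small merge. A hedged sketch (the field names follow the issue; this helper is hypothetical and is not the Backstage LDAP plugin's actual API):

```javascript
// Merge configured attribute names over the vendor defaults, ignoring
// unset (null/undefined) config values so the defaults still apply.
function resolveVendorAttributes(config = {}) {
  const defaults = {
    dnAttributeName: 'entryDN',
    uuidAttributeName: 'entryUUID',
  };
  const overrides = Object.fromEntries(
    Object.entries(config).filter(([, value]) => value != null),
  );
  return { ...defaults, ...overrides };
}
```

With this shape, a server whose schema exposes `entrydn` only needs `{ dnAttributeName: 'entrydn' }` in config, and `uuidAttributeName` keeps its default.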
gharchive/issue
2024-08-26T22:45:18
2025-04-01T04:33:35.699165
{ "authors": [ "04kash", "amir1387aht", "benjdlambert", "freben", "savitojs" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/26225", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1024523239
Calling logUpdateFailure for all of readerOutput.entities slows down locations refresh Hi there! We are currently experiencing the issue where our locations refresh ends up being progressively slower over time (from what is usually 1.67s, to 1464.8s, all the way to 20k+ seconds if we don't restart our instance frequently). This leads us to getting updates on a significantly less regular basis. Expected Behavior It makes sense that the ingestion loop might be slowed down by ingesting more entities, but when an error is caught in plugin-catalog-backend when calling await this.entitiesCatalog.batchAddOrUpdateEntities, it should only call await this.locationsCatalog.logUpdateFailure(...) for the batch of 100 entities that failed, not all readerOutput.entities. Current Behavior When an error is caught in HigherOrderOperations.refreshSingleLocation, all of the readerOutput.entities are looped through and written to the update log failure table. This proves to be an expensive operation for those who are sourcing a lot of data from app-config.yaml. Possible Solution Update batchAddOrUpdateEntities to throw a more verbose error with the batch of size 100 containing the problematic entity. Then continue to loop through this batch maxed out at size 100 and write these entities to the location update log. Steps to Reproduce Add a location to app-config.yaml which sources 10k+ entities. One of those entities should have the same metadata.name (which will result in a ConflictError). Note that over time, "Locations Refresh: Completed locations refresh in NNNs" that NNN grows in value. Context We are currently using the legacy CatalogProcessor. A stop gap measure has been introduced where we tested out this theory. 
We used patch-package to comment out these lines:

```js
for (const entity of readerOutput.entities) {
  await this.locationsCatalog.logUpdateFailure(location.id, e, entity.entity.metadata.name);
}
```

and the Backstage instance had no problem pulling data at the expected 1.67s intervals. Without this patch, the time it takes to ingest new data from locations defined in app-config.yaml becomes more and more spread out. Your Environment NodeJS Version (v14): v14 Operating System and Version (e.g. Ubuntu 14.04): Browser Information: The HigherOrderOperations and friends are part of the legacy catalog implementation that we phased out a few months ago and is about to disappear. Please move to using the new catalog implementation as it overall has much better performance, scalability, is more deterministic and exposes errors better. The old catalog won't receive any more updates. A good place to scan for how to update to the new catalog is the create-app changelog. Search for entries that make changes to the catalog.ts in the backend
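The fix proposed in the issue — log update failures only for the batch of ~100 entities that actually failed, not for everything the location produced — can be sketched with a generic chunking helper. The names (`addBatch`, `logFailure`) follow the issue's description and are hypothetical, not Backstage APIs:

```javascript
// Split a list into fixed-size batches, so a failure in one batch only
// triggers failure logging for those entities instead of all of them.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical sketch of the refresh loop the issue describes: write to
// the update-failure log only for the batch that raised the error.
async function addOrUpdateInBatches(entities, addBatch, logFailure) {
  for (const batch of chunk(entities, 100)) {
    try {
      await addBatch(batch);
    } catch (e) {
      for (const entity of batch) {
        await logFailure(e, entity.entity.metadata.name);
      }
    }
  }
}
```

This keeps the cost of a single ConflictError proportional to the batch size rather than to the full 10k+ entity set.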
gharchive/issue
2021-10-12T22:31:08
2025-04-01T04:33:35.708476
{ "authors": [ "Rugvip", "heatheralee" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/issues/7573", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1550867445
fix: field extensions with correct CSS and HTML Signed-off-by: Sayak Mukhopadhyay mukhopadhyaysayak@gmail.com Hey, I just made a Pull Request! Closes #15870 This PR fixes the 2 issues in the linked issue: an inner div is having an extra css class with additional margin. The reason for this is the presence of margin="normal" prop on all the custom fields. For eg. https://github.com/backstage/backstage/blob/2694cbba0591416bb38c1132e259d3ba85de6b75/plugins/scaffolder/src/components/fields/EntityNamePicker/EntityNamePicker.tsx#L46 You can check other custom fields, and all of them have that line. It adds an additional css class with margins that is not present when using the vanilla fields as shown in above screenshots. Removing this prop renders the component exactly the same as the vanilla fields. The second one is trickier. The vanilla fields have the helper text rendered in a <span> whereas the custom fields are getting rendered in <p>. That is because rjsf is not using the <FormHelperText> that comes with the <TextField> in material ui 4. Instead rjsf is using a <Typography> element that sits within a <FormControl> (not within <TextField>) to render the helper. Most of the custom fields by backstage have a <FormControl> except EntityNamePicker. Wrapping it with a <FormControl> and <Typography> renders the helper text exactly how rjsf itself renders. The PR doesn't include the repo related field extensions yet as they are a bit trickier. I will be adding them shortly. In the meantime, I need some feedback on this. :heavy_check_mark: Checklist [x] A changeset describing the change and affected packages. (more info) [ ] Added or updated documentation [ ] Tests for new functionality and regression tests for bug fixes [ ] Screenshots attached (for UI changes) [x] All your commits have a Signed-off-by line in the message. (more info) Hmm...dunno why the E2Es are failing. Locally, I have 3 failed tests in my project but all of them are in other packages.

Looking at the failed E2E tests, it seems like they are unrelated to scaffold. Hmmm...I have found some design issues. I mean, things would look fine but since I am reworking this, I might as well do it in a way that makes sense. Right now, in EntityNamePicker for eg, I am using the TextField inside a FormControl, which is unnecessary as the TextField generates its own FormControl. If I go full rjsf, I probably replace the TextField with Input and InputLabels. So I wonder if it makes sense for us to abstract this part away https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx#L45-L67 into a component that we can re-use for simple input fields etc for the scaffolder to remove some of the complexity. I think that when we provide our own ui:field we basically replace the implementation of https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx entirely, so we need to reproduce its behaviour somehow and maybe creating our own component is easiest. One other thing i've noticed is that I think we need to apply these too https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx#L36 to the wrapper so that we carry through some other styling that might be relevant. Does this make sense or am I missing something else? Implementing an accurate abstraction would certainly be the way forward. But it's a bit challenging for me to figure out the point of abstraction. Let me provide a screenshot of the component tree of 2 fields, the upper one is the EntityNamePicker and the lower one is an rjsf inbuilt. Note that both of them are using the FieldTemplate and the WrapIfAdditional components which are rjsf internal. So, when we are providing our component, we are probably not replacing the entire FieldTemplate.

Also note that the FieldTemplate is a sub component of SchemaField which looks to me to be referenced at https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/core/src/components/fields/SchemaField.js#L354-L403. What do you think? I hope I am not missing something. @benjdlambert Also note that since we are using the existing SchemaField some assumptions in it are affecting how the component tree is generated. For eg. in the component tree screenshot, the Typography label is a child of the FormControl in the inbuilt field but not so in the custom field. That is because the Typography in https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx#L51 is getting hidden by the displayLabel being false in the previous line, which is handled by https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/core/src/utils.js#L405. This makes me think that you are probably correct in thinking that we should replace the FieldTemplate by using https://react-jsonschema-form.readthedocs.io/en/v4.2.2/advanced-customization/custom-templates/#fieldtemplate @SayakMukhopadhyay do you want to experiment providing a default FieldTemplate to see if it helps simplify the logic a little bit? From what I understand there's a few options to fix this, and I want to see what it would look like a few ways and we can pro/con them and see what the implications are. Yeah, that's what I am thinking of doing.

What I am planning to start with is making a full copy of https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx, the only change being making displayLabel set to true. I want to apply the field template globally which will ensure that we have total control over the UX of the fields. That will ensure that whether we use field extensions or inbuilt fields, they all look the same.

I have pushed a commit which creates a custom field template which is mostly a duplicate of https://github.com/rjsf-team/react-jsonschema-form/blob/v3.2.1/packages/material-ui/src/FieldTemplate/FieldTemplate.tsx along with a couple other components that it depends on. The only change made was that the description is no longer dependent on the displayLabel boolean. Also do note that the CustomFieldTemplate has been applied globally. The other way to do something similar would be to use the existing FieldTemplate more directly, making the CustomFieldTemplate.tsx like:

```tsx
import React from 'react';
import { FieldTemplateProps } from '@rjsf/core';
import { FieldTemplate } from '@rjsf/material-ui';

const CustomFieldTemplate = (props: FieldTemplateProps) => {
  return <FieldTemplate {...props} displayLabel />;
};

export default CustomFieldTemplate;
```

The above code is enough to ensure that the labels have the same styles. Note that the PR as it is right now will show 2 labels for the EntityNamePicker custom field as that field definition also defines its label. not stale Sorry it's taken so long to get round to this, I've been focused on the release this week and getting the alpha stuff for the scaffolder ready to ship for testing. Hopefully will get chance to get round to have a deeper dive into this later on this week. :pray: Going to close this as we think we've worked out how to do it in the linked PR! :tada:
2023-01-20T13:41:56
2025-04-01T04:33:35.728222
{ "authors": [ "SayakMukhopadhyay", "benjdlambert" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/15871", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2004366506
Added redirect for old scaffolder next page Hey, I just made a Pull Request! Just added a redirect for the old scaffolder next page to the new migrating to react-jsonschema-form@v5 page This came out of this comment on Discord: https://discord.com/channels/687207715902193673/1176516710661103676 :heavy_check_mark: Checklist [ ] A changeset describing the change and affected packages. (more info) [ ] Added or updated documentation [ ] Tests for new functionality and regression tests for bug fixes [ ] Screenshots attached (for UI changes) [x] All your commits have a Signed-off-by line in the message. (more info) I'll admit, this redirect didn't work for me locally 🤔 I got the details from: https://github.com/backstage/backstage/pull/21272 For background we seem to have this link in this video: https://www.youtube.com/watch?v=vskefrlvocE&list=PLj6h78yzYM2PyrvCoOii4rAopBswfz1p7&t=857s The broken link is: https://backstage.io/docs/features/software-templates/testing-scaffolder-alpha/
gharchive/pull-request
2023-11-21T14:06:50
2025-04-01T04:33:35.733448
{ "authors": [ "awanlin" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/21452", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2252617794
fix(notifications): limit size of notification on NotificationsPage The grid item size is limited so notifications with very long descriptions do not occupy all the space. :heavy_check_mark: Checklist [x] A changeset describing the change and affected packages. (more info) [ ] Added or updated documentation [ ] Tests for new functionality and regression tests for bug fixes [x] Screenshots attached (for UI changes) [x] All your commits have a Signed-off-by line in the message. (more info) After: Before:
gharchive/pull-request
2024-04-19T10:19:34
2025-04-01T04:33:35.737345
{ "authors": [ "mareklibra" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/24381", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
972085422
Update writing custom actions documentation Signed-off-by: Aaron Nickovich aaronnickovich@gmail.com Hey, I just made a Pull Request! The "integrations" variable was missing from the auto-generated backstage app. It is not generated by the backstage CLI tool because it's not used in the scaffolder.ts file until users want to add their own custom actions. I'm adding this snippet of code to the custom actions documentation in order to help others who want to create custom actions without having to figure out why the integrations object is undefined. :heavy_check_mark: Checklist [ ] A changeset describing the change and affected packages. (more info) [x] Added or updated documentation [ ] Tests for new functionality and regression tests for bug fixes [ ] Screenshots attached (for UI changes) [x] All your commits have a Signed-off-by line in the message. (more info) Looks good, thank you! This is sort of a weird semi-representation of the file contents, but it was already that way and makes sense to show the imports you'll need to add 🙂 No problem! I struggled with this one myself. My first modification was to add a new action and I couldn't figure out why copying the documentation resulted in a failed build. Newcomers will appreciate the documentation more when copying it "just works". I suppose the other way to help others starting out would be to make the app creator CLI create the builtin actions object by default. Then, the documentation could be shortened to only show how to append a new action.
gharchive/pull-request
2021-08-16T20:42:23
2025-04-01T04:33:35.742048
{ "authors": [ "aaronnickovich" ], "repo": "backstage/backstage", "url": "https://github.com/backstage/backstage/pull/6844", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1656155985
feat: support "namespaceOverride" configuration Description of the change This PR adds support for the "namespaceOverride" configuration using Bitnami's template {{ include "common.names.namespace" . }}. Existing or Associated Issue(s) None Additional Information None Checklist [x] Chart version bumped in Chart.yaml according to semver. [ ] Variables are documented in the values.yaml and added to the README.md. The helm-docs utility can be used to generate the necessary content. Use helm-docs --dry-run to preview the content. [ ] JSON Schema generated. [x] List tests pass for Chart using the Chart Testing tool and the ct lint command. @DimkaGorhover You'll need to rebase with main and address comments made by @sabre1041
gharchive/pull-request
2023-04-05T19:16:19
2025-04-01T04:33:35.746330
{ "authors": [ "ChrisJBurns", "DimkaGorhover" ], "repo": "backstage/charts", "url": "https://github.com/backstage/charts/pull/83", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
408356194
Feature Request: Endpoint YAML :clipboard: Description I understand that the Endpoint badge is still in BETA, but it would be useful for the badge to use YAML as well as JSON. It is just an idea, but I think that it would help some people who use YML more than JSON. Here is a comparison:

JSON

```json
{
  "schemaVersion": 1,
  "label": "hello",
  "message": "world",
  "color": "lightgrey"
}
```

YAML

```yaml
schemaVersion: 1
label: "hello"
message: "world"
color: "lightgrey"
```

SimpleBinary Hi! Thanks for the suggestion. How would this feature be used? @paulmelnikow This feature would be used by people who have never used JSON but still want the endpoint badge. Are you saying you'd like to be able to deploy a custom endpoint that responds with YAML instead of JSON? In parallel/in the interim, have you tried using our Dynamic (or Static) YAML badge? @calebcartwright Yes, I have tried using the dynamic badge but I couldn't get it to work. YAML has advantages as a human-editable format, though as a format for communicating between servers it doesn't, and hence is hardly ever used as one. I'm not yet convinced that endpoint YAML makes anything possible that isn't already possible. I'm open to reconsidering in relation to a specific use case. type:+1 Argumentation:

- JSON as configuration files: please don't
- Why JSON isn't a Good Configuration Language

Users, not bots, write configuration for endpoints → YAML or TOML would be better than JSON. Thanks. I'm still a bit confused, as API endpoints aren't configuration files. If you have a use case that requires hand-crafting a badge.yml file, could you explain it? :+1: in general, the endpoint badge covers the case when the data needs to be machine-generated. 
If I wanted to produce the example given in the top post, instead of making a JSON endpoint which returns

```json
{ "schemaVersion": 1, "label": "hello", "message": "world", "color": "lightgrey" }
```

we can just hard-code https://img.shields.io/badge/hello-world-lightgrey.svg Feel free to comment if there's new information here.
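The hard-coding suggestion in the last comment can be generalized: fixed label/message/color values need no endpoint at all, since they can be baked into a static badge URL. A small sketch — the escaping rules (`-` → `--`, `_` → `__`, space → `_`) follow Shields' documented static-badge syntax:

```javascript
// Escape one path segment of a shields.io static badge URL.
function escapeBadgeSegment(text) {
  return text
    .replace(/-/g, '--')   // literal dash must be doubled
    .replace(/_/g, '__')   // literal underscore must be doubled
    .replace(/ /g, '_');   // a single underscore renders as a space
}

// Build the full static badge URL from label/message/color.
function staticBadgeUrl({ label, message, color }) {
  return `https://img.shields.io/badge/${escapeBadgeSegment(label)}-${escapeBadgeSegment(message)}-${color}.svg`;
}
```

For the "hello / world / lightgrey" example in this thread, this reproduces the hard-coded URL the maintainer suggested.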
gharchive/issue
2019-02-08T22:57:51
2025-04-01T04:33:35.767823
{ "authors": [ "Kristinita", "SimpleBinary", "calebcartwright", "chris48s", "paulmelnikow" ], "repo": "badges/shields", "url": "https://github.com/badges/shields/issues/2964", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
326651413
[gem cdnjs appveyor clojars] refactor clojars, establish BaseJsonService Refactored the clojars version badge. While I was doing it, I decided we really need to get the abstraction discussed here in place sooner rather than later. Doing this makes the code for simple badges really terse. For example this reduces the gem version badge implementation down to:

```js
async handle({ repo }) {
  const apiUrl = 'https://rubygems.org/api/v1/gems/' + repo + '.json';
  const json = await this._requestJson(apiUrl);
  const version = json.version;
  return { message: versionText(version), color: versionColor(version) };
}
```

Warnings :warning: This PR modified the server but none of the service tests. That's okay so long as it's refactoring existing code. Messages :book: :sparkles: Thanks for your contribution to Shields, @chris48s! :book: Thanks for contributing to our documentation. We :heart: our documentarians! Generated by :no_entry_sign: dangerJS This is great! Let's get it in! 😁
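The terseness comes from pushing the fetch-and-parse plumbing into a shared base class so each badge only implements `handle()`. A rough, simplified sketch of that shape — hypothetical, not the actual Shields implementation (the real `BaseJsonService` also does schema validation and error mapping, and `versionText`/`versionColor` are Shields helpers not reproduced here):

```javascript
// Minimal sketch: the base class owns HTTP + JSON parsing; an injectable
// fetcher keeps it testable without touching the network.
class BaseJsonService {
  constructor(fetcher) {
    this._fetch = fetcher; // async (url) => body string
  }

  async _requestJson(url) {
    const body = await this._fetch(url);
    return JSON.parse(body);
  }
}

// A concrete badge then reduces to a few lines, as in the PR.
class GemVersion extends BaseJsonService {
  async handle({ repo }) {
    const json = await this._requestJson(
      `https://rubygems.org/api/v1/gems/${repo}.json`,
    );
    return { message: `v${json.version}`, color: 'blue' };
  }
}
```

Swapping in a stub fetcher lets a unit test exercise `handle()` with a canned API response.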
gharchive/pull-request
2018-05-25T20:19:24
2025-04-01T04:33:35.772644
{ "authors": [ "chris48s", "paulmelnikow", "shields-ci" ], "repo": "badges/shields", "url": "https://github.com/badges/shields/pull/1702", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
353555348
danger: help users to write server tests fixes https://github.com/badges/shields/issues/1968 adds the desired notice Messages :book: :sparkles: Thanks for your contribution to Shields, @niccokunzmann! Generated by :no_entry_sign: dangerJS @niccokunzmann thanks for your contribution!
gharchive/pull-request
2018-08-23T21:12:00
2025-04-01T04:33:35.775394
{ "authors": [ "niccokunzmann", "platan", "shields-ci" ], "repo": "badges/shields", "url": "https://github.com/badges/shields/pull/1970", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
2091101292
Safari and iOS complaints All minor stuff, but frustrating:

- UTF-8 isn't the default in Safari #117
- Is Safari good or bad for development??
- Dates in Safari (luckily it's not the UK formatting)
- iOS has some weird bugs in Safari with tabs and refreshing pages

Shutting down the issue but leaving it for reference
gharchive/issue
2024-01-19T18:40:00
2025-04-01T04:33:35.777989
{ "authors": [ "badlydrawnrob" ], "repo": "badlydrawnrob/anki", "url": "https://github.com/badlydrawnrob/anki/issues/119", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
236144223
How to force reload chatDataSource? Hello! I have the following case: when the application becomes inactive, other users keep writing to the current user, so he may have a lot of "missed" messages. When the current user opens the application again, I check by timestamp when the application was unloaded from memory and load the missed messages. The problem is that the user may open the chat controller before the missed messages finish loading, so I thought to restart the chatDataSource with the following code:

```swift
func didLoadCurrentOpenConversationMessages(_ messages: [ChatMessageRealmModel]) {
    debugPrint("get messages didLoadCurrentOpenConversationMessages")
    dataSource = ChatDataSource(chatConversationModel: chatConversationModel, pageSize: 50)
    setChatDataSource(dataSource, triggeringUpdateType: .reload)
}
```

But I did not get the effect I wanted (the chatDataSource updating visually). I read the previous issues, but as I described above, this code did not give me the result I need. Tell me how it can be done? Or are there even better options? Sorry, it was my mistake with the threads.
gharchive/issue
2017-06-15T10:24:24
2025-04-01T04:33:35.779836
{ "authors": [ "alexsanderkhitev" ], "repo": "badoo/Chatto", "url": "https://github.com/badoo/Chatto/issues/324", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1371543307
fix: update default branch Discourse now uses main as the default branch. Switching from master to main allows updating discourse to newer versions Newer versions may require fixing permissions on redis' files hi. thanks for the PR and apologies for the belated response. unfortunately the tests are failing, and i can't debug the tests as i can't create an X86_64 vm on my local machine, so i can't verify the change. are you able to understand why the build is failing? it's failing on this test: https://github.com/badsyntax/dokku-discourse/blob/c5ef60411ae5f8929a90dd27613f41f67f0770eb/tests/suite.bats#L5-L12 @badsyntax I had the same error and fixed it in my fork the following way: https://github.com/digital-sustainability/dokku-discourse/pull/2/commits/c4328300a54ec2a3b477be529c9dbea6ecce95f1 @ohemelaar can you make the change suggested by @noeleont in your branch? Hi, Sorry for forgetting to answer earlier on. I applied the suggested commit. Thanks Tests are passing now, thank you very much @noeleont! Released with 0.2.4 use dokku plugin:update discourse to upgrade
gharchive/pull-request
2022-09-13T14:23:35
2025-04-01T04:33:35.784146
{ "authors": [ "badsyntax", "noeleont", "ohemelaar" ], "repo": "badsyntax/dokku-discourse", "url": "https://github.com/badsyntax/dokku-discourse/pull/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
723978227
Extension hangs when running diagnostics or formatting for files without spotless config The extension by default will run on all languages that Spotless supports. When running diagnostics/formatting on a supported language for which there's no configured Spotless formatter, the extension gets into an unrecoverable state with "Gradle: Tasl :spotlessApply SUCCESS" shown in the statusbar: The first time the call fails, the status is correctly reported to the output channel and the extension is still working. The extension hangs on the second call to spotless. Running this command from gradle (via the gradle tasks extension) shows no error, so it must be an issue with this extension, possibly related to the async logic:

```shell
spotlessApply -PspotlessIdeHook=/Users/richardwillis/Projects/badsyntax/example-project/dev.yml -PspotlessIdeHookUseStdIn -PspotlessIdeHookUseStdOut
```

I can't replicate this error when launching the extensions from vscode. Tried launching spotless-gradle using prod bundle and can't replicate. Tried launching vscode-gradle and can't replicate.

I experience this problem too. My yaml formatter isn't spotless:

```json
"[yaml]": {
  "editor.defaultFormatter": "redhat.vscode-yaml"
}
```

but when I save a yaml file, vscode hangs with the message "Gradle: Tasl :spotlessApply SUCCESS". I have the same issue when saving a *.sql file in a project that is only configured to use spotless for Java files. I actually have this issue occasionally (generally the first time I boot vscode or after switching git branches) with all files (the project is configured to use spotless for java, groovy, xml, and misc). It can be fixed by hitting cancel to save, running build (which applies spotless), and then rebooting VSCode. @badsyntax do you have any ideas on how to fix this? It makes the extension borderline unusable.
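One common cause of this "works once, hangs on the second call" pattern is two overlapping invocations of the same external process stepping on shared state. A generic mitigation — serializing calls through a promise chain so a second format request only starts after the first settles — can be sketched like this (illustrative only; this is not the extension's actual code, and the real root cause here was never confirmed in the thread):

```javascript
// Serialize async tasks: each new task starts only after the previous one
// has settled, so two spotlessApply runs can never overlap.
function createTaskQueue() {
  let tail = Promise.resolve();
  return function enqueue(task) {
    const run = tail.then(task, task); // run even if the previous task failed
    tail = run.catch(() => {});        // keep the chain alive after rejections
    return run;                        // caller still sees the task's own result/error
  };
}
```

Each document-save handler would then call `enqueue(() => runSpotless(...))` instead of invoking the process directly.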
gharchive/issue
2020-10-18T10:42:02
2025-04-01T04:33:35.788591
{ "authors": [ "badsyntax", "mariusheil", "mark-rifkin", "pierrickrouxel" ], "repo": "badsyntax/vscode-spotless-gradle", "url": "https://github.com/badsyntax/vscode-spotless-gradle/issues/173", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
385136789
Not able to checkout with different shipping address. Framework Version: 0.1.1 Getting no response when clicking on Continue after entering a different shipping address. Issue Fixed. Framework Version: 0.1.2
gharchive/issue
2018-11-28T07:31:29
2025-04-01T04:33:35.792148
{ "authors": [ "Jyoti-Singh1" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/issues/192", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
586797289
Error when you create or update a new catalog under root

The same error on the demo site https://demo.bagisto.com/bagisto-200147018122d2/admin/catalog/categories/edit/176

Like the title says. When you update this category or create a new category:

Illuminate \ Database \ QueryException (HY000) SQLSTATE[HY000]: General error: 1267 Illegal mix of collations (utf8mb4_unicode_ci,IMPLICIT) and (utf8mb4_general_ci,IMPLICIT) for operation '=' (SQL: update `categories` set `_lft` = case when `_lft` between 15 and 16 then `_lft`-1 when `_lft` between 14 and 16 then `_lft`+2 else `_lft` end, `_rgt` = case when `_rgt` between 15 and 16 then `_rgt`-1 when `_rgt` between 14 and 16 then `_rgt`+2 else `_rgt` end where (`_lft` between 14 and 16 or `_rgt` between 14 and 16))

I have checked the categories table schema:

CREATE TABLE `categories` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `position` int(11) NOT NULL DEFAULT '0',
  `image` varchar(191) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `status` tinyint(1) NOT NULL DEFAULT '0',
  `_lft` int(10) unsigned NOT NULL DEFAULT '0',
  `_rgt` int(10) unsigned NOT NULL DEFAULT '0',
  `parent_id` int(10) unsigned DEFAULT NULL,
  `created_at` timestamp NULL DEFAULT NULL,
  `updated_at` timestamp NULL DEFAULT NULL,
  `display_mode` varchar(191) COLLATE utf8mb4_unicode_ci DEFAULT 'products_and_description',
  `category_icon_path` text COLLATE utf8mb4_unicode_ci,
  PRIMARY KEY (`id`),
  KEY `categories__lft__rgt_parent_id_index` (`_lft`,`_rgt`,`parent_id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci ROW_FORMAT=DYNAMIC;

and my app/database.php:

'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'unix_socket' => env('DB_SOCKET', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => env('DB_PREFIX'),
    'strict' => false,
    'engine' => 'InnoDB ROW_FORMAT=DYNAMIC',
],

Fixed the issue. Change the collation in the database.

@gfd6th I still have the issue, can you help me with that?

> Fixed the issue. Change the collation in the database.

How did you fix that? @senbai

@digiapps https://github.com/bagisto/bagisto/blob/e2cdfbac7f7c602b2ca317d6e67e898a2fd5aa44/config/database.php#L51 Change the config in database, and make sure it is the same as your database.

I also have the same issue, and I already changed 'collation'. How can I fix this?

General error: 1267 Illegal mix of collations (utf8mb4_unicode_ci,IMPLICIT) and (utf8mb4_general_ci,IMPLICIT) for operation '=' (SQL: update categories set _lft = case when _lft >= 14 then _lft+2 else _lft end, _rgt = case when _rgt >= 14 then _rgt+2 else _rgt end where (_lft >= 14 or _rgt >= 14))

'driver' => 'mysql',
'host' => env('DB_HOST', '127.0.0.1'),
'port' => env('DB_PORT', '3306'),
'database' => env('DB_DATABASE', 'forge'),
'username' => env('DB_USERNAME', 'forge'),
'password' => env('DB_PASSWORD', ''),
'unix_socket' => env('DB_SOCKET', ''),
'charset' => 'utf8mb4',
'collation' => 'utf8mb4_unicode_ci',
'prefix' => env('DB_PREFIX'),
'strict' => false,
'engine' => 'InnoDB ROW_FORMAT=DYNAMIC',

I have solved that error in this way: check the collation defined in the database.php of your Laravel project (in my case the collation was 'collation' => 'utf8mb4_unicode_ci'). For my project I'm using PhpMyAdmin to inspect my database contents, so after logging into PhpMyAdmin I changed the collation of the database and set it to utf8mb4_unicode_ci with the two options enabled to change all collations in the tables and all collations of table fields, as you can see in this screenshot.

I am unable to change the collation of the database and set it to utf8mb4_unicode_ci with the two options enabled to change all collations in the tables and all collations of table fields from phpMyAdmin. Could you please suggest any other solution, as I am facing the same problem.

Did anyone successfully solve this?
I still get the error even though database.php and my database are both using 'collation' => 'utf8mb4_unicode_ci'.

> I have solved that error in this way: check the collation defined in your database.php of your Laravel project (in my case the collation was 'collation' => 'utf8mb4_unicode_ci'). For my project I'm using PhpMyAdmin to inspect my database contents, so after logging into PhpMyAdmin I changed the collation of the database and set it to utf8mb4_unicode_ci with the two options enabled to change all collations in the tables and all collations of table fields, as you can see in this screenshot.

I changed the character encoding to match the collation in my database.php but still get the error.

I confirm that by doing as suggested by @ErmelindaRapoli I fixed it. The issue was that the tables were right but the database was not; it had utf8mb4_general_ci. Thanks.
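For those who cannot use phpMyAdmin's collation options, the same fix can be expressed as plain SQL; here is a small generator for the statements (the database and table names below are placeholders — adapt them to your own schema and review the output before running it):

```python
def collation_fix_statements(database, tables,
                             charset="utf8mb4", collation="utf8mb4_unicode_ci"):
    """Generate SQL that aligns a MySQL database and its tables with the
    collation configured in Laravel's config/database.php."""
    statements = [
        f"ALTER DATABASE `{database}` CHARACTER SET {charset} COLLATE {collation};"
    ]
    for table in tables:
        # CONVERT TO also rewrites the collation of every text column in the table.
        statements.append(
            f"ALTER TABLE `{table}` CONVERT TO CHARACTER SET {charset} COLLATE {collation};"
        )
    return statements

for sql in collation_fix_statements("bagisto", ["categories", "category_translations"]):
    print(sql)
```

Running the printed statements against the database brings both the database default and the per-table collations in line with database.php, which is what the phpMyAdmin "change all collations" options do.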
gharchive/issue
2020-03-24T08:53:23
2025-04-01T04:33:35.804874
{ "authors": [ "Deepanjali-Singh", "ErmelindaRapoli", "bappi2097", "digiapps", "fio4", "gfd6th", "mashekwa", "piersky", "rajnibalayadavtest", "senbai" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/issues/2752", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
677625532
Order status should not be completed until invoice state is pending

Bug report

Title: Order status should not be completed until invoice state is pending

Preconditions:
1. Framework Version: Master

Steps to reproduce:
1. Place an order, create an invoice and a shipment
2. Check the order status

Expected result: If the invoice state is not paid and the shipment is also not created, then it should be pending; and if the shipment has been generated, then it should be processing.

Actual result: https://prnt.sc/tyc3qf https://prnt.sc/tyc4gk

We also have the order status pending payment for pending invoices.

@bhanu-webkul This should be closed according to the current scenario of the Invoice flow.
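The expected rules can be sketched as a tiny decision function. This encodes the reporter's expectation, not Bagisto's actual implementation, and combinations not mentioned in the report (e.g. paid but unshipped) default to pending here:

```python
def derive_order_status(invoice_paid: bool, shipment_created: bool) -> str:
    """Order status per the expectation stated in the report:
    pending until paid/shipped, processing once a shipment exists,
    completed only when both invoice is paid and shipment is created."""
    if invoice_paid and shipment_created:
        return "completed"
    if shipment_created:
        return "processing"
    return "pending"

print(derive_order_status(invoice_paid=False, shipment_created=False))  # pending
```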
gharchive/issue
2020-08-12T11:50:41
2025-04-01T04:33:35.808295
{ "authors": [ "bhanu-webkul", "ghermans", "vaishaliwebkul" ], "repo": "bagisto/bagisto", "url": "https://github.com/bagisto/bagisto/issues/3734", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1603265999
Unexpected freezing and infinite loop when using localhost instead of 127.0.0.1 As requested by @bahmutov @MikeMcC399 - this is the new issue related to https://github.com/bahmutov/start-server-and-test/issues/333 and was meant to be resolved by start-server-and-test ^2.0.0 This issue seems to impact mostly macOS version 13+ (I'm on macOS 13.2.1) How to reproduce: Clone https://github.com/Avansai/next-multilingual Open package.json and make sure that all the 127.0.0.1 are replaced by localhost run npm run install Add the new script: "tmp-test": "cross-env DEBUG=start-server-and-test npm run e2e-build-headless" run npm run tmp-test You will see the following debug logs: next-multilingual git:(main) ✗ npm run tmp-test > next-multilingual@4.2.14 tmp-test > cross-env DEBUG=start-server-and-test npm run e2e-build-headless > next-multilingual@4.2.14 e2e-build-headless > cross-env CYPRESS_isProd=true start-server-and-test start-example-build http://localhost:3000 cypress-headless start-server-and-test initial parsed arguments { _: [ 'start-example-build', 'http://localhost:3000', 'cypress-headless' ] } +0ms start-server-and-test named arguments: { expect: undefined, '--expected': '--expect' } +0ms start-server-and-test initial parsed arguments { _: [ 'start-example-build', 'http://localhost:3000', 'cypress-headless' ] } +2ms start-server-and-test parsing CLI arguments: [ 'start-example-build', 'http://localhost:3000', 'cypress-headless' ] +0ms start-server-and-test parsed args: { services: [ { start: 'npm run start-example-build', url: [Array] } ], test: 'npm run cypress-headless' } +0ms 1: starting server using command "npm run start-example-build" and when url "[ 'http://localhost:3000' ]" is responding with HTTP status code 200 running tests using command "npm run cypress-headless" start-server-and-test single service "npm run start-example-build" to run and test +0ms start-server-and-test starting server with command "npm run start-example-build", verbose mode? 
true +0ms start-server-and-test starting waitOn [ 'http://localhost:3000' ] +3ms start-server-and-test wait-on options { resources: [ 'http://localhost:3000' ], interval: 2000, window: 1000, timeout: 300000, verbose: true, strictSSL: true, log: true, headers: { Accept: 'text/html, application/json, text/plain, */*' }, validateStatus: [Function (anonymous)] } +0ms waiting for 1 resources: http://localhost:3000 making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 > next-multilingual@4.2.14 start-example-build > cd example && rm -Rf .next && npm run build && npm run start > build > cross-env ../node_modules/.bin/next build info - Loaded env from /Users/nbouvrette/Projects/next-multilingual/example/.env.production warn - You have enabled experimental feature (esmExternals) in next.config.mjs. warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk. info - Linting and checking validity of types .making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 info - Linting and checking validity of types .making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 info - Linting and checking validity of types info - Disabled SWC as replacement for Babel because of custom Babel configuration ".babelrc" https://nextjs.org/docs/messages/swc-disabled info - Using external babel configuration from /Users/nbouvrette/Projects/next-multilingual/example/.babelrc info - Creating an optimized production build .making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 info - Creating an optimized production build ...making HTTP(S) head request to url:http://localhost:3000 ... 
HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 info - Creating an optimized production build info - Compiled successfully info - Collecting page data info - Generating static pages (201/201) info - Finalizing page optimization Route (pages) Size First Load JS ┌ λ / 2.15 kB 132 kB ├ /_app 0 B 99.1 kB ├ ○ /404 489 B 130 kB ├ ○ /500 496 B 130 kB ├ ○ /about-us 611 B 130 kB ├ λ /api/hello 0 B 99.1 kB ├ ○ /contact-us 818 B 130 kB ├ └ css/ce62caf524188c44.css 826 B ├ ○ /contact-us/message-sent 464 B 130 kB ├ ○ /tests/anchor-links 2.5 kB 132 kB ├ ○ /tests/anchor-links/long-page 2.31 kB 132 kB ├ λ /tests/custom-error-page 543 B 130 kB ├ ○ /tests/dynamic-routes 892 B 130 kB ├ └ css/800f675d941c42ed.css 808 B ├ ○ /tests/dynamic-routes/catch-all 1.54 kB 131 kB ├ └ css/bf17edfca951b198.css 768 B ├ ● /tests/dynamic-routes/catch-all/[...country] (389 ms) 1.14 kB 131 kB ├ └ css/a40c443a74b4f904.css 730 B ├ ├ /en-us/tests/dynamic-routes/catch-all/united-states-of-america ├ ├ /en-us/tests/dynamic-routes/catch-all/united-states-of-america/test-page-for-catch-all-dynamic-routes ├ ├ /en-us/tests/dynamic-routes/catch-all/united-states-of-america/this-is-the-catch-all-dynamic-route-test-page ├ └ [+51 more paths] ├ ● /tests/dynamic-routes/catch-all/category/[[...category]] (541 ms) 1.16 kB 131 kB ├ └ css/e6d9ae7b90b9e8af.css 732 B ├ ├ /en-us/tests/dynamic-routes/catch-all/category ├ ├ /en-us/tests/dynamic-routes/catch-all/category/family ├ ├ /en-us/tests/dynamic-routes/catch-all/category/family/category ├ └ [+65 more paths] ├ ○ /tests/dynamic-routes/identifier 1.32 kB 131 kB ├ └ css/02a1dd1ba9487ab1.css 808 B ├ ● /tests/dynamic-routes/identifier/[id] 1.04 kB 131 kB ├ └ css/d32221824cb8cb43.css 872 B ├ └ /mul/tests/dynamic-routes/identifier/123 ├ ○ /tests/dynamic-routes/text 1.53 kB 131 kB ├ └ css/f63d3992eea5ba32.css 810 B ├ ● /tests/dynamic-routes/text/[cityName] 1.14 kB 131 kB ├ └ css/15932671a57453ef.css 877 B ├ ├ /en-us/tests/dynamic-routes/text/montreal 
├ ├ /en-us/tests/dynamic-routes/text/london ├ ├ /en-us/tests/dynamic-routes/text/shanghai ├ └ [+3 more paths] ├ ● /tests/dynamic-routes/text/[cityName]/point-of-interest 2.24 kB 132 kB ├ └ css/8acedaf7c00f3d81.css 813 B ├ ├ /en-us/tests/dynamic-routes/text/montreal/point-of-interest ├ ├ /en-us/tests/dynamic-routes/text/london/point-of-interest ├ ├ /en-us/tests/dynamic-routes/text/shanghai/point-of-interest ├ └ [+3 more paths] ├ ● /tests/dynamic-routes/text/[cityName]/point-of-interest/[poi] 1.08 kB 131 kB ├ └ css/8b7c5bd085a21450.css 873 B ├ ├ /en-us/tests/dynamic-routes/text/montreal/point-of-interest/bonsecours-market ├ ├ /en-us/tests/dynamic-routes/text/montreal/point-of-interest/mount-royal-park ├ ├ /en-us/tests/dynamic-routes/text/montreal/point-of-interest/old-port ├ └ [+15 more paths] └ ○ /tests/jsx-injection 2.37 kB 132 kB └ css/8657a83eb2aec1ee.css 743 B + First Load JS shared by all 99.2 kB ├ chunks/framework-af64bd368ed34feb.js 45.2 kB ├ chunks/main-28cb1db53890d295.js 36.7 kB ├ chunks/pages/_app-6dfa98c01a87d206.js 15 kB ├ chunks/webpack-c42a931f31636e75.js 2.09 kB └ css/7789a9bd44560554.css 120 B λ (Server) server-side renders at runtime (uses getInitialProps or getServerSideProps) ○ (Static) automatically rendered as static HTML (uses no initial props) ● (SSG) automatically generated as static HTML + JSON (uses getStaticProps) > start > cross-env ../node_modules/.bin/next start making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 ready - started server on 0.0.0.0:3000, url: http://localhost:3000 info - Loaded env from /Users/nbouvrette/Projects/next-multilingual/example/.env.production warn - You have enabled experimental feature (esmExternals) in next.config.mjs. warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk. making HTTP(S) head request to url:http://localhost:3000 ... 
HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 ^C% @nbouvrette Thanks for the logs and information! The logs show: ready - started server on 0.0.0.0:3000, url: http://localhost:3000 making HTTP(S) head request to url:http://localhost:3000 ... HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 For comparison: In my own different test Next.js environment on Ubuntu (simple starter). I see it is successfully trying to contact http://localhost:3000 via ::1:3000. On ubuntu netstat -lnt | grep 3000 showed tcp6 0 0 :::3000 :::* LISTEN curl -I http://localhost:3000 was successful HTTP/1.1 200 OK curl -I http://[::1]:3000 was successful HTTP/1.1 200 OK I suspect on your system that you may get a different result for netstat -lnt | grep 3000 and that curl -I http://[::1]:3000 will fail. If that is the case you might be able to get help from your Next.js community or Stackoverflow to get the server to also listen on the IPv6 address ::1. (I cloned your repository and tried to follow your instructions in an Ubuntu environment, however that didn't work for me. I don't want to spend time now getting that to work as your debug logs showed enough of the problem. Also I don't have a macOS system so even if it did work for me I can't be sure that I am emulating your environment.) On macOS 13.2.1, netstat -lnt | grep 3000: tcp6 0 0 ::1.3000 ::1.61369 ESTABLISHED tcp6 0 0 ::1.61369 ::1.3000 ESTABLISHED tcp6 0 0 ::1.3000 ::1.61368 ESTABLISHED tcp6 0 0 ::1.61368 ::1.3000 ESTABLISHED And curl -I http://localhost:3000 was successful HTTP/1.1 200 OK curl -I http://[::1]:3000 was successful HTTP/1.1 200 OK @nbouvrette Thanks for the additional debug information. 
I don't understand in that case why HTTP(S) error for http://localhost:3000 Error: connect ECONNREFUSED ::1:3000 is happening. It probably needs somebody with better network know-how than I have to work out why it's failing. @nbouvrette There is a possibility that you could start your server on host 0.0.0.0 and that your results might be different doing that. Check your documentation. -h 0.0.0.0 might do it. I have had mixed results with similar experiments. On Vite --host made a positive difference, with react HOST=0.0.0.0 didn't help. @nbouvrette Regarding your repository https://github.com/Avansai/next-multilingual , if you want to make this better to use, I would suggest to create a new branch with your changes in it, then test it by cloning to a clean system. That way you can see if there is anything missing. I suspect that you may have some modules globally installed. I cloned https://github.com/Avansai/next-multilingual I guessed you meant npm install not npm run install npm run e2e for instance gives Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@next/bundle-analyzer' @nbouvrette Thanks very much for your detailed answer! I do understand that you are looking for a "proper" fix! I logged a separate issue https://github.com/jeffbski/wait-on/issues/137 regarding the underlying technology in an attempt to get a proper fix. From my point of view there isn't a need to research your issue any deeper, since it is already clear. The simplest outcome would be if the issue https://github.com/jeffbski/wait-on/issues/137 were to be resolved with a new release of wait-on. Hopefully there will be a response from the maintainer there soon. Another theoretical possibility would be to move start-server-and-test to use the fork https://github.com/metcoder95/wait-on where the issue is resolved, however it seems that the creator of the fork doesn't intend it to be a long-term solution. You don't need to do anything more on your repo for my benefit. 
Thanks for correcting the typo about the npm install command in any case!
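The repeated Error: connect ECONNREFUSED ::1:3000 lines above come down to name resolution: the client resolves localhost and tries the IPv6 loopback ::1 before (or instead of) 127.0.0.1, while the server may only be listening on one of them. A small diagnostic sketch (not part of wait-on or start-server-and-test) shows what localhost resolves to on a given machine:

```python
import socket

def loopback_addresses(host="localhost", port=3000):
    """Return the distinct addresses a client (such as wait-on) may try
    when asked to reach http://localhost:3000."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# On a machine where `localhost` maps to both loopbacks this includes
# '127.0.0.1' and '::1'; if the server only listens on one of them while
# the client tries the other first, you get exactly this ECONNREFUSED.
print(loopback_addresses())
```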
gharchive/issue
2023-02-28T15:14:29
2025-04-01T04:33:35.829056
{ "authors": [ "MikeMcC399", "nbouvrette" ], "repo": "bahmutov/start-server-and-test", "url": "https://github.com/bahmutov/start-server-and-test/issues/360", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
380509129
Hello, what should I do if I want to get an error message in Chinese? Hello, for other natural language users, how do you solve this?

Oh, I'm sorry, I just read the document and found the way to localize it. Your document is very friendly. Thank you.
gharchive/issue
2018-11-14T02:31:13
2025-04-01T04:33:35.832836
{ "authors": [ "sandaoliuhzw" ], "repo": "baianat/vee-validate", "url": "https://github.com/baianat/vee-validate/issues/1707", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
187887380
Remove mac build docs Currently, Paddle on Mac is not deliberately tested under different Mac OS X and Clang versions. When we've done all these things, we will reopen its build docs. #345 Coverage increased (+0.02%) to 62.446% when pulling 8bc87be2f9c78fe0b660d31bb174a62256c27d0b on gangliao:removemacdocs into 4905751a22e5211defafcc56d16a26114e61ca25 on baidu:develop.
gharchive/pull-request
2016-11-08T02:41:38
2025-04-01T04:33:35.834733
{ "authors": [ "coveralls", "gangliao" ], "repo": "baidu/Paddle", "url": "https://github.com/baidu/Paddle/pull/386", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2268086297
fix[resource]: ernie-func endpoint Description: fix ernie-func model mapping Issue: #493 Tag maintainer: duplicate
gharchive/pull-request
2024-04-29T04:16:33
2025-04-01T04:33:35.840947
{ "authors": [ "danielhjz" ], "repo": "baidubce/bce-qianfan-sdk", "url": "https://github.com/baidubce/bce-qianfan-sdk/pull/494", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1020402310
Downloading data from BaseSpace

Proposal

Currently, we use an outdated python script to download data from BaseSpace: https://github.com/bailey-lab/MIPTools/blob/70c9c26cd86af33f5eb75bdd0c4c43edfedc26d4/bin/BaseSpaceRunDownloader_v2.py. BaseSpace has released tools to work with data on the CLI. To prevent code breakage down the line, we propose using BaseSpace's tools.

Working With BaseSpace CLI

Below, I discuss some of my thoughts in reading through the BaseSpace documentation.

Installation

Installation is straightforward; however, we may consider changing the installation location. We may also need to change file permissions using chmod.

# Install
wget "https://launch.basespace.illumina.com/CLI/latest/amd64-linux/bs" -O $HOME/bin/bs
# Change file permission
chmod u+x $HOME/bin/bs

Authentication

Interactively, the user can run the authentication command and then go to the URL provided to sign in.

bs auth
#> Please go to this URL to authenticate: https://basespace.illumina.com/oauth/device?code=6Cesj

The resulting config file will be stored in $HOME/.basespace. However, there are a couple of additional factors to consider. We need to think about the best way to automate this process. A couple of notes to consider:

- We are able to specify the API server. It may be useful to let the user customize this depending on where they are located (e.g., US vs UK).
- The user can store config info in a file and then load it: bs load config. This may make it easier to inject credentials.

Downloading data

Downloading data is simple, but there are many options. What is the best strategy to implement for our purposes?

# Single run
bs download run -i <RunID> -o <output>
# Multiple runs in a project
bs download project -i <ProjectID> -o <output>
# Subset of a project
bs download project -i <ProjectID> -o <output> --extension=fastq.gz

Implementation

We can install the CLI tool into our container.
We will need to modify the download app to call either a series of commands or a script rather than the python script. We can provide several options with default values:

| Flag | Function |
| --- | --- |
| `-s`, `--api-server` | the API server |
| `-i`, `--run-id` | run ID |
| `-o`, `--out-dir` | output dir |

This issue and accompanying PR will be incorporated into future versions of MIPTools.

bs is a clean CLI for interacting with the Illumina cloud; we use it for downloading MIP sequencing runs. It is not free -- so a user must add it, not us, to the repository. Configuration and authentication are required, and users may want to access additional commands. Therefore, to add or not to add -- whether it is better to stand alone or be integrated into MIPTools. I would suggest that if it is a simple singular command then we have users run it outside of MIPTools. If internally we need things then integrate it.

What are the specific issues with the current Illumina downloader script? Why would it cause breakage down the line? Depending on what the problems are, we may try to fix that script instead.

> it is not free -- so a user must add it, not us, to the repository

In order to install the CLI, I did not have to pay or log in to any account. The CLI itself, I believe, is free. It can be downloaded simply by using wget, curl, or even brew.

> configuration and authentication is required

You do need an Illumina account in order to download data, but this is no different from the current download app.

> users may want to access additional commands

If it is installed within the container, users should still be able to access additional commands by using the singularity exec command.

In my view, the benefits of the proposed app compared to the current app are as follows:
- Use an official tool instead of a web-scraping python script
- The python script uses deprecated tools to download data and, therefore, is likely to break
- The official CLI is significantly faster in terms of download speeds.

If it is faster we should use CLI.
This was the opposite when I was testing some years ago; bs was much slower. The current script is also Illumina's software, btw. It is probably not supported anymore, so it would be up to us to maintain it if needed. If it is faster we should use CLI. This was the opposite when I was testing some years ago; bs was much slower. With the example dataset I downloaded, the CLI was noticeably faster. I could certainly run some more tests to compare download speed as well... The current script is also Illumina's software, btw. It is probably not supported anymore, so it would be up to us to maintain it if needed. Ahh, I did not know this. Some speed testing may be good but I don't know if we need extensive testing. As long as it is not noticeably slower, we should be fine. I wish they included bcl2fastq capability in the client as well. I wish they included bcl2fastq capability in the client as well. Yeah that would have been nice... I haven't seen any changes to this since 2017, which I imagine is what are using now... A quick benchmark test comparing the two methods: Python script run through singularity: 63.47s user 18.60s system 7% cpu 17:18.41 total bs CLI run via command line[^1]: 29.45s user 13.27s system 79% cpu 53.829 total [^1]: Note that it was not run through singularity. I will test this when I have rebuilt the container with the proposed download app. So in this case, users do not need to place anything in the container. The container packages together a set of software and tools for others to use in one environment. The CLI will be shipped with the container. This is essentially how all the other software in the container is used. For example, MIPWrangler and McCOILR do not need to be downloaded and placed in the container by a user; the tools are already installed in the container so that people can easily use the programs (for reference, the %post section of the definition file defines all the software installed in the container). 
This is exactly how the CLI will be installed. To summarize, there is no extra work needed by users regardless of whether they build the container or not. I am not sure their license allows for it. I see you can add it to the definition and someone can build their own and it will pop right in. But can we distribute it in our prebuilt? If not, then where does a user drop it in? Or do they just use it externally if they download the prebuilt container? I have not been able to find anything suggesting that we are unable to distribute the CLI in our prebuilt container. I can certainly continue looking, but I do not think there is anything preventing us from doing this. A quick benchmark test comparing the two methods: Python script run through singularity: 63.47s user 18.60s system 7% cpu 17:18.41 total bs CLI run via command line1: 29.45s user 13.27s system 79% cpu 53.829 total Footnotes Note that it was not run through singularity. I will test this when I have rebuilt the container with the proposed download app. ↩ Am I reading this right? 53 sec vs 17 min? What is CLI downloading in 53 sec, an entire run? I don't see any restrictions in terms of distributing the software. There is no license to be found and you don't have to agree to terms at any step. So I think it is safe to assume we can include it in the prebuilt. A quick benchmark test comparing the two methods: Python script run through singularity: 63.47s user 18.60s system 7% cpu 17:18.41 total bs CLI run via command line1: 29.45s user 13.27s system 79% cpu 53.829 total Footnotes Note that it was not run through singularity. I will test this when I have rebuilt the container with the proposed download app. ↩ Am I reading this right? 53 sec vs 17 min? What is CLI downloading in 53 sec, an entire run? So I rebuilt the container with the new download app and here are the results of the benchmarking. Benchmarks were run using hyperfine. 
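To make the proposed flags concrete, the download app could assemble the bs invocation roughly as below. This is only a sketch: the flag names follow the table proposed above and are assumptions, not the final interface.

```python
def build_download_command(run_id, out_dir="/opt/analysis",
                           api_server="https://api.basespace.illumina.com"):
    """Assemble the `bs` call the proposed download app would run.

    The -i/-o flags match the `bs download run` usage shown in the proposal;
    the default API server value is an assumption for illustration.
    """
    return [
        "bs", "download", "run",
        "--api-server", api_server,
        "-i", str(run_id),
        "-o", out_dir,
    ]

print(" ".join(build_download_command(214264108)))
```

The resulting argument list could then be passed to a process runner from the app's entry point, with each flag overridable by the user.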
New Download App: singularity run \ -B base_resources:/opt/resources -B download-test:/opt/analysis \ --app download /work/apascha1/deploy-miptools/MIPTools/download.sif \ -i 214264108 Time (mean ± σ): 64.995 s ± 10.663 s [User: 25.173 s, System: 12.232 s] Range (min … max): 44.411 s … 77.282 s 10 runs Superseded Download App: singularity run \ -B base_resources:/opt/resources -B download-superseded-test:/opt/analysis \ --app download_superseded \ /work/apascha1/deploy-miptools/MIPTools/download.sif \ -r 214264108 Time (mean ± σ): 833.713 s ± 22.844 s [User: 41.189 s, System: 12.538 s] Range (min … max): 814.501 s … 872.990 s 5 runs Look great. Too great :) Do you know the size of this run? Typically a run will have some tens of GB of data to download. 1 min seems too short. Unless of course this is a small test run with little data? My worry is that CLI may be downloading symlinks. If none of these concerns are valid, I am sold. Do you know the size of this run? Good question. The run size is about 2.66 GB (checked via bs get run and the basespace website). My worry is that CLI may be downloading symlinks. Comparing the directory sizes of the two folders where I downloaded data shows the exact same size for each folder. Given this, I do not believe we have any symlinking going on. > du -sh download-test 3.3G download-test > du -s download-superseded-test 3.3G download-superseded-test Looks good. I'll merge your PR and close the issue if you have no objections. No objections! Thanks for all the comments! Sure! Thanks for improving MIPTools.
gharchive/issue
2021-10-07T19:51:25
2025-04-01T04:33:35.864345
{ "authors": [ "JeffAndBailey", "arisp99", "aydemiro" ], "repo": "bailey-lab/MIPTools", "url": "https://github.com/bailey-lab/MIPTools/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
982802179
ENT rework

Most ENTs share the same login system. It would be great if we had one master function, then other functions would just derive from it, basically wrapping it. Also, the cookiejar_from_dict methods all over the place could probably be done differently. Todo: Support Educonnect, and all the other ways to connect to ENTs, since not everyone has an ATEN account.

From what I understand, the ENTs share the same login system but the way to get to the Educonnect page is different.

From what I understand the ENTs share the same login system but the way to get to the Educonnect page is different; is it possible to manage all ENTs with one function? On the pronote application, after connecting for the first time, it manages to reconnect automatically at each connection even with an ENT. Maybe there is a way to do the same thing. I tried to figure out how it worked but couldn't.

@OiseauDesPlages Yes, but we already do it: we save the hash of the password, and I think the Pronote app does the same thing.
gharchive/issue
2021-08-30T13:34:52
2025-04-01T04:33:35.868976
{ "authors": [ "Bapt5", "OiseauDesPlages", "Xiloe", "bain3" ], "repo": "bain3/pronotepy", "url": "https://github.com/bain3/pronotepy/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1560844178
Crash when opening Vulkan capture after upgrading from v1.22 to v1.24

Description: Hi, RenderDoc crashes when opening the given Vulkan capture file. When I capture & open the same program with v1.22, everything is ok. After upgrading to v1.24, it captures well but cannot open it. When using v1.24 to open the capture file created with v1.22, it also crashes. For comparison, the capture file created with v1.22 (v1.22_capture.rdc) is also attached.

Steps to reproduce: captures.zip Open the attached capture file v1.24_capture.rdc with renderdoc 1.24

Environment: RenderDoc version: v1.24 Operating System: Windows 11 22621.1105 Graphics API: Vulkan 1.3

I'm not able to reproduce this because on your nvidia GPU there's a 4th queue family reported which has VK_QUEUE_TRANSFER_BIT | VK_QUEUE_SPARSE_BINDING_BIT | VK_QUEUE_OPTICAL_FLOW_BIT_NV properties which your application is using, but my nvidia GPU only has three queues. Can you share your program so I can try to reproduce it on my system? Or since your application cannot be using that queue functionality, could you choose the VK_QUEUE_TRANSFER_BIT | VK_QUEUE_SPARSE_BINDING_BIT queue instead, which is then the same on my device?

Thanks for the quick reply. After choosing the VK_QUEUE_TRANSFER_BIT | VK_QUEUE_SPARSE_BINDING_BIT queue without VK_QUEUE_OPTICAL_FLOW_BIT_NV, v1.24 works perfectly. It seems that the crash occurs only when the 4th queue family is used. Here are the programs (with/without using the VK_QUEUE_OPTICAL_FLOW_BIT_NV queue) and the new capture file. The program has been tested only on a win11 & 3080ti laptop. programs_and_captures.zip

I couldn't reproduce this on my system as I don't have any GPU with the optical flow extension. It looks like this is only available in the 30x0 series and up.
I'm only able to run the program on one of my test machines, since my primary machine has a 1080 and the program has higher minimum feature requirements, but even on that test machine it didn't reproduce without the queue in place. However I did look into the code in more detail and I think the issue is some queue-remapping code going wrong. Those commits should fix the problem I believe regardless of which queue you use. It works. Thank you!
gharchive/issue
2023-01-28T12:37:34
2025-04-01T04:33:35.926678
{ "authors": [ "AirGuanZ", "baldurk" ], "repo": "baldurk/renderdoc", "url": "https://github.com/baldurk/renderdoc/issues/2837", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1164080149
layers/meta-balena: Update meta-balena to v2.95.0
Update meta-balena from 2.89.15 to 2.95.0
Changelog-entry: Update meta-balena from v2.89.15 to v2.95.0
Change-type: patch
@balena-ci I self-certify!
gharchive/pull-request
2022-03-09T15:19:16
2025-04-01T04:33:35.932064
{ "authors": [ "klutchell" ], "repo": "balena-os/balena-intel", "url": "https://github.com/balena-os/balena-intel/pull/442", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
917083035
Compilation info logs print to the standard error stream
Description: The following compilation logs in a Ballerina project print to the standard error stream instead of the standard output stream.
Compiling source buddhi/temp:0.1.0
Running executable
I reckon the cause of this issue is here; we should change it to the standard output stream. https://github.com/ballerina-platform/ballerina-lang/blob/27d811be95409e67c714b3cf06195635914636ee/cli/ballerina-cli/src/main/java/io/ballerina/cli/cmd/RunCommand.java#L88
Steps to reproduce: Run a simple HelloWorld Ballerina project using the following commands.
bal run 1> sample.txt
bal run 2> sample.txt
Affected Versions: SL Beta 1, MacOS
This is by design, so you can filter out only the program output by disregarding stderr. Otherwise both the Ballerina program's output and the build command's output would be visible in stdout.
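The design rationale in the closing comment — build diagnostics on stderr, program output on stdout — can be demonstrated with a small Python sketch (not Ballerina's actual implementation, just the same stream-separation idea): because the two streams are independent, a `2> build.log` redirection captures the diagnostics while the program's output still reaches the console.

```python
import contextlib
import io
import sys

def run(program_output: str) -> None:
    # Build/diagnostic messages go to stderr, like `bal run`'s
    # "Compiling source" / "Running executable" lines.
    print("Compiling source", file=sys.stderr)
    print("Running executable", file=sys.stderr)
    # The program's own output goes to stdout.
    print(program_output)

# Capture each stream separately to show the split.
out, err = io.StringIO(), io.StringIO()
with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
    run("Hello, World!")
```

After the run, `out` holds only the program's output and `err` holds only the build messages — which is exactly what lets a caller filter one from the other with shell redirection.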
gharchive/issue
2021-06-10T08:47:16
2025-04-01T04:33:35.953424
{ "authors": [ "BuddhiWathsala", "hevayo" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/31090", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
984587958
Resource paths with special chars and escape chars are not resolved properly during runtime
Description: Consider the following example:
import ballerina/http;
service /foo on new http:Listener(9090) {
    resource function get foo\$bar\@() returns string {
        return "Hello, World!";
    }
}
$ curl http://localhost:9090/foo/foo%24bar%40
At runtime in the HTTP module, when we get the resource paths as follows for the example above:
void foo(BObject service) {
    ServiceType serviceType = (ServiceType) service.getType();
    ResourceMethodType[] functions = serviceType.getResourceMethods();
    for (ResourceMethodType function : functions) {
        String[] paths = function.getResourcePath();
    }
}
we get foo\$bar\@. Shouldn't this be resolved as foo$bar@, without the escape char (\) used in the Ballerina code? But, when we decode the function name, it gets resolved as expected, as $get$foo$bar@.
Affected Versions: Tested on SwanLake Beta2
This seems to be a bug in the getResourcePath API. It happens because we only encode the function names at compile time, and we don't encode the resource paths specifically. If we encoded the resource paths as well, they would be foo$0036bar@ instead of foo$bar@, because the encoding modifies the non-JVM-supported characters. So, what is the expected version of the resource path here?
@ldclakmal In that case, I would expect foo$0036bar@ instead of foo\$bar\@, so that we can decode it using the IdentifierUtils class if needed. The point is, we use foo\$bar\@ as the resource path since that is the only way we can represent the actual foo$bar@ resource path in a Ballerina service, which will be called by a request with the path foo%24bar%40. At runtime, I am expecting the actual resource path (foo$bar@) as a resolved string array when we call the .getResourcePath() method. But, since the encoding modifies non-JVM-supported characters, it would be okay to represent it as foo$0036bar@ instead of foo\$bar\@, IMO.
Maybe @chamil321 may have different concerns. In that case, getting "foo$bar@" should be ok. IMO we should not use IdentifierUtils, since it is an internal API, and encoding and decoding are part of the runtime internals. The API should provide whatever function name is defined by the user. Removed the bug label since the current behaviour of the API is correct. @chamil321 @ldclakmal Do we have further queries on this? Closing the issue with the above comments.
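The three representations discussed in this issue — the URL form foo%24bar%40, the Ballerina source form foo\$bar\@, and the runtime-encoded form foo$0036bar@ — all denote the same logical path foo$bar@. A Python sketch of the decoding steps (the $NNNN scheme below is an assumption following the issue's example, treating the four digits as a decimal code point; the real IdentifierUtils logic may differ):

```python
import re
from urllib.parse import unquote

def decode_url_segment(segment: str) -> str:
    """Percent-decoding, as the HTTP listener applies to a request path."""
    return unquote(segment)

def strip_ballerina_escapes(identifier: str) -> str:
    """Drop the backslash escape characters used in Ballerina source code."""
    return identifier.replace("\\", "")

def decode_runtime_identifier(identifier: str) -> str:
    """Turn $NNNN sequences back into the character with that decimal code
    point (illustrative stand-in for the runtime's identifier decoding)."""
    return re.sub(r"\$(\d{4})", lambda m: chr(int(m.group(1))), identifier)
```

All three decoders map their respective input forms to foo$bar@ (e.g. $0036 decodes to "$", code point 36).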
gharchive/issue
2021-09-01T04:06:24
2025-04-01T04:33:35.960505
{ "authors": [ "HindujaB", "ldclakmal", "warunalakshitha" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/32504", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1457999059
[Bug]: Invalid extract to variable code action applied for query expressions
Description: Consider the following:
public function main() {
    map<int> myMap = {};
    from var [key, val] in myMap.entries()
        order by val
        select val;
}
The extract-to-local-variable code action is suggested inside the list binding pattern in the from clause. Once applied, it generates this:
public function main() {
    map<int> myMap = {};
    int[] var1 = from var [key, val] in myMap.entries()
        order by val
        select val;
    var1;
}
Steps to Reproduce: See description
Affected Version(s): 2201.3.0-rc2
OS, DB, other environment details and versions: No response
Related area -> Editor
Related issue(s) (optional): No response
Suggested label(s) (optional): No response
Suggested assignee(s) (optional): No response
Checked this on master (2201.5.0-SNAPSHOT), and now applying the same code action gives the following output:
public function main() {
    map<int> myMap = {};
    int[] var1 = from var [key, val] in myMap.entries()
        order by val
        select val;
}
The issue has been fixed.
gharchive/issue
2022-11-21T13:59:47
2025-04-01T04:33:35.965317
{ "authors": [ "IMS94" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/issues/38756", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
391610495
Fix rendering defaultable function params
Fixes https://github.com/ballerina-platform/ballerina-lang/issues/11046
Closing and reopening to trigger CI builds.
Codecov Report: No coverage uploaded for pull request base (release-0.990.0@72b1cab). The diff coverage is n/a.
@@             Coverage Diff              @@
##     release-0.990.0   #12813   +/-   ##
==================================================
  Coverage          ?    59.87%
  Complexity        ?       659
==================================================
  Files             ?      2068
  Lines             ?    101646
  Branches          ?     12931
==================================================
  Hits              ?     60865
  Misses            ?     35590
  Partials          ?      5191
(Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 72b1cab...6521afb.)
gharchive/pull-request
2018-12-17T08:50:01
2025-04-01T04:33:35.970817
{ "authors": [ "codecov-io", "kaviththiranga" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/pull/12813", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
533145587
Upgrade transport version
Purpose: $subject.
Codecov Report: Merging #20230 into master will not change coverage. The diff coverage is n/a.
@@           Coverage Diff            @@
##           master    #20230   +/-  ##
=======================================
  Coverage    19.9%     19.9%
=======================================
  Files          55        55
  Lines        1437      1437
  Branches      218       218
=======================================
  Hits          286       286
  Misses       1138      1138
  Partials       13        13
(Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 4bc25e4...e22200a.)
gharchive/pull-request
2019-12-05T06:09:03
2025-04-01T04:33:35.976237
{ "authors": [ "Bhashinee", "codecov-io" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/pull/20230", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
603351734
Fixed bugs relating to generation of service file for gRPC [1.2.x]
Purpose: Fixes service file generation for Empty-type requests. Also fixes service file generation for client streaming and bidirectional streaming.
Fixes: Partly fixes #22778
Check List
[x] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
    [ ] Unit Tests
    [ ] Spec Conformance Tests
    [ ] Integration Tests
    [ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
    [ ] API documentation
    [ ] Module documentation in Module.md files
    [ ] Ballerina By Examples
Codecov Report: Merging #22795 into ballerina-1.2.x will not change coverage. The diff coverage is n/a.
@@            Coverage Diff             @@
##    ballerina-1.2.x   #22795   +/-   ##
================================================
  Coverage    14.59%    14.59%
================================================
  Files           51        51
  Lines         1411      1411
  Branches       219       219
================================================
  Hits           206       206
  Misses        1189      1189
  Partials        16        16
(Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update ccb4661...7ebf99a.)
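The headline percentages in these Codecov comments follow directly from the hits/lines counts in the table. A quick sketch of the arithmetic (assuming, as the figures here suggest, that the percentage is truncated rather than rounded to two decimals — 206/1411 is 14.5995…%, reported as 14.59%):

```python
import math

def coverage_percent(hits: int, lines: int) -> float:
    """Covered-line ratio as a percentage, truncated to two decimals."""
    return math.floor(hits / lines * 10000) / 100

# Figures from the report above: 206 of 1411 lines hit.
print(coverage_percent(206, 1411))  # 14.59, matching the reported figure
```

The same truncation reproduces the other reports in this archive (286/1437 → 19.9, 60865/101646 → 59.87).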
gharchive/pull-request
2020-04-20T16:04:03
2025-04-01T04:33:35.984997
{ "authors": [ "codecov-io", "daksithj" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/pull/22795", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
923643818
Fix not showing the escaped-name in symbols with compiler API
Purpose: $title
Fixes #30242
Fixes #29494
Approach
Describe how you are implementing the solutions along with the design details.
Samples
Provide high-level details about the samples related to this feature.
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[x] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
    [ ] Unit Tests
    [ ] Spec Conformance Tests
    [ ] Integration Tests
    [ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
    [ ] API documentation
    [ ] Module documentation in Module.md files
    [ ] Ballerina By Examples
Waiting for this: https://github.com/ballerina-platform/ballerina-lang/issues/31936
@dulajdilshan @pubudu91 shall we merge this PR?
> @dulajdilshan @pubudu91 shall we merge this PR?
Going through this now. Will check and merge.
gharchive/pull-request
2021-06-17T08:37:03
2025-04-01T04:33:35.991546
{ "authors": [ "dulajdilshan", "mohanvive", "pubudu91" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/pull/31251", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1147165130
Add annotation attachments to the BIR
Purpose: $title.
Fixes https://github.com/ballerina-platform/ballerina-lang/issues/16379
Approach
Describe how you are implementing the solutions along with the design details.
Samples
Provide high-level details about the samples related to this feature.
Remarks
List any other known issues, related PRs, TODO items, or any other notes related to the PR.
Check List
[ ] Read the Contributing Guide
[ ] Updated Change Log
[ ] Checked Tooling Support (#)
[ ] Added necessary tests
    [ ] Unit Tests
    [ ] Spec Conformance Tests
    [ ] Integration Tests
    [ ] Ballerina By Example Tests
[ ] Increased Test Coverage
[ ] Added necessary documentation
    [ ] API documentation
    [ ] Module documentation in Module.md files
    [ ] Ballerina By Examples
@warunalakshitha will you be able to verify the BIR-related changes?
Marking this as a draft PR since it has a behavioural change and we need to verify the full build also.
gharchive/pull-request
2022-02-22T17:11:17
2025-04-01T04:33:35.997409
{ "authors": [ "MaryamZi", "hasithaa" ], "repo": "ballerina-platform/ballerina-lang", "url": "https://github.com/ballerina-platform/ballerina-lang/pull/35189", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
375918339
Implement LDAP authentication
This update provides the ability to use an LDAP server for authentication. Configurations are provided through the broker.yaml file.
Todo: Test cases and documentation of configuration details
Issue: Implement user authentication using an LDAP server
@tharinduwijewardane tests have failed. Can you check why? It's fixed now. Please merge.
gharchive/pull-request
2018-10-31T11:31:59
2025-04-01T04:33:35.999303
{ "authors": [ "a5anka", "tharinduwijewardane" ], "repo": "ballerina-platform/ballerina-message-broker", "url": "https://github.com/ballerina-platform/ballerina-message-broker/pull/546", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
136600683
Add scale support for MQTT Lights.
Converts the 0..255 values that HA expects into a device's 0..SCALE value.
Example: HA considers "hall light" at brightness 25, or ~10% of 255; the device considers "hall light" at brightness 100, or 10% of 1000.
This allows our existing MQTT devices to be used in HA without changing their data format.
Note: the reason I'm doing this is that SmartThings uses a 0-99 scale for brightness.
@balloob I've removed the RGB scale and kept it with just brightness. How does it look now? Looks good :dolphin:
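The 0..255 ↔ 0..SCALE mapping described above is a simple proportional conversion. A Python sketch of the round-trip (illustrative only — not the actual Home Assistant component code; note that with integer rounding HA's 25/255 lands at 98 on a 0..1000 device, so the "10%" in the example is approximate):

```python
def ha_to_device(brightness: int, scale: int) -> int:
    """Map Home Assistant's 0..255 brightness onto a device's 0..scale range."""
    return round(brightness * scale / 255)

def device_to_ha(value: int, scale: int) -> int:
    """Map a device's 0..scale value back onto Home Assistant's 0..255 range."""
    return round(value * 255 / scale)

# SmartThings-style device using a 0..99 scale:
print(ha_to_device(255, 99))  # full brightness -> 99
print(device_to_ha(99, 99))   # -> 255
```

Keeping the conversion on the HA side is what lets existing MQTT devices keep publishing values in their native range.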
gharchive/pull-request
2016-02-26T05:21:19
2025-04-01T04:33:36.013599
{ "authors": [ "balloob", "stjohnjohnson" ], "repo": "balloob/home-assistant", "url": "https://github.com/balloob/home-assistant/pull/1403", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }