added: string date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
created: timestamp[us] date, from 2001-10-09 16:19:16 to 2025-01-01 03:51:31
id: string, 4 to 10 characters
metadata: dict
source: string, 2 classes
text: string, 0 to 1.61M characters
2025-04-01T04:35:40.780564
2023-09-26T18:13:31
1914046026
{ "authors": [ "beef331", "termermc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11355", "repo": "termermc/nim-stack-strings", "url": "https://github.com/termermc/nim-stack-strings/pull/8" }
gharchive/pull-request
Support 1.6.14 Tests pass on 1.6.14, so there's not much reason not to support it for the time being. Please do make a git tag :smile: Docs won't generate on 1.6.14 unless uses of addr are replaced with unsafeAddr. Isn't that feature deprecated? Ugh, more work is required; it seemed so easy to add 1.6.14 support. If it's not a deprecated feature, unsafeAddr can be used in place of addr. If not, a few when blocks can be used. Yeah, I knew how to fix it of course, I just didn't want to have to go through and do it :stuck_out_tongue: The only things that need to be updated now are the README, to remove the inaccurate bit about 2.0.0 support, and runnableExamples, the latter of which could have comments explaining unsafeAddr being required pre-2.0.0.
2025-04-01T04:35:40.786667
2021-11-27T18:06:38
1065099687
{ "authors": [ "GavinMendelGleason", "spl" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11356", "repo": "terminusdb/terminusdb-client-js", "url": "https://github.com/terminusdb/terminusdb-client-js/issues/74" }
gharchive/issue
Path must be optional For performance and convenience reasons we need path to be optional. Not sure precisely what the code is, but something like this:

WOQLQuery.prototype.path = function(Subject, Pattern, Object, Path) {
    if (this.cursor['@type']) this.wrapCursorWithAnd()
    this.cursor['@type'] = 'Path'
    this.cursor['subject'] = this.cleanSubject(Subject)
    if (typeof Pattern == 'string') Pattern = this.compilePathPattern(Pattern)
    this.cursor['pattern'] = Pattern
    this.cursor['object'] = this.cleanObject(Object)
    if (typeof Path != 'undefined') {
        this.cursor['path'] = this.varj(Path)
    }
    return this
}

The path object can be very large. We should not report it if it's not needed.
2025-04-01T04:35:40.865184
2022-01-15T21:57:36
1104862728
{ "authors": [ "StoneMonarch", "ddelnano" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11357", "repo": "terra-farm/terraform-provider-xenorchestra", "url": "https://github.com/terra-farm/terraform-provider-xenorchestra/pull/186" }
gharchive/pull-request
Correct documentation for affinity_host example. Update the documentation to have a working example and make it a bit more descriptive. Issue #185. Really appreciate your help in making the documentation correct!
2025-04-01T04:35:40.945746
2022-04-11T20:40:58
1200510703
{ "authors": [ "1985-A", "ArchiFleKs", "DimamoN", "Epic55", "FeLvi-zzz", "FernandoMiguel", "MadsRC", "PLeS207", "VladoPortos", "adiii717", "alfredo-gil", "amazingguni", "bcarranza", "bryantbiggs", "csepulveda", "dcarrion87", "dempti", "dracut5", "ecoupal-believe", "evenme", "evercast-mahesh2021", "g150421", "joseph-igb", "kaykhancheckpoint", "mathewmoon", "mebays", "mesobreira", "miguelgmalpha", "nick4fake", "robpearce-flux", "rooty0", "sergiofteixeira", "sotiriougeorge", "stdmje", "sushil-propel", "tanvp112" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11358", "repo": "terraform-aws-modules/terraform-aws-eks", "url": "https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2007" }
gharchive/issue
dial tcp <IP_ADDRESS>:80: connect: connection refused

Description

I know there are numerous issues (#817) related to this problem, but since v18.20.1 reintroduced the management of the configmap, I thought we could discuss it in a new one because the old ones are closed. The behavior is still very weird. I updated my module to use the configmap management feature and the first run went fine (I was using the aws_eks_cluster_auth datasource). When I run the module with no change I have no error in either plan or apply. I then tried to update my cluster from v1.21 to v1.22, and then plan and apply began to fail with the following well-known error:

null_resource.node_groups_asg_tags["m5a-xlarge-b-priv"]: Refreshing state... [id=7353592322772826167]
╷
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp <IP_ADDRESS>:80: connect: connection refused
│
│   with kubernetes_config_map_v1_data.aws_auth[0],
│   on main.tf line 428, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  428: resource "kubernetes_config_map_v1_data" "aws_auth" {
│
╵

I then moved to the exec plugin as recommended per the documentation and removed the old datasource from state. Still got the same error. Something I don't get is that when setting the variable export KUBE_CONFIG_PATH=$PWD/kubeconfig as suggested in #817, things work as expected. I'm sad to see things are still unusable (not related to this module but on the Kubernetes provider side); the load_config_file option has been removed from the Kubernetes provider for a while, and I don't see why this variable needs to be set or how it could be set beforehand. Anyway, if someone managed to use the readded feature of managing the configmap, I'd be glad to know how to work around this and help debug this issue. PS: I'm using Terragrunt, not sure if the issue could be related but it might.

[X] ✋ I have searched the open/closed issues and my issue is not listed.

Versions

Module version [Required]:
Terraform v1.1.7 on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.9.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.10.0
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/tls v3.3.0

Reproduce

Here is my provider block:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.id]
  }
}

data "aws_eks_cluster" "cluster" {
  name = aws_eks_cluster.this[0].id
}

I have the same issue, but when I work with the state as another AWS user I get an error like:

│ Error: Unauthorized
│
│   with module.eks.module.eks.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks.eks/main.tf line 411, in resource "kubernetes_config_map" "aws_auth":
│  411: resource "kubernetes_config_map" "aws_auth" {

Would you try replacing aws_eks_cluster.this[0].id with the hard-coded cluster name? I guess aws_eks_cluster.this[0].id would be known after apply because you're going to bump up the EKS cluster version. That's why the data resource is indeterminate, and the kubernetes provider will fall back to the default <IP_ADDRESS>:80.
Not quite true - if the data source fails to find a result, it's a failure, not indeterminate. @ArchiFleKs you shouldn't need the data source at all; does this still present the same issue?

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

You can't run these in TF Cloud though, because of the local exec.

This is merely pointing to what the Kubernetes provider documentation specifies. The module doesn't have any influence over this aspect. I can confirm that this snippet works as expected:

provider "kubernetes" {
  host                   = aws_eks_cluster.this[0].endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this[0].certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.this[0].id]
  }
}

I know Hashi are hiring and have made some hires to start offering more support to the Kubernetes and Helm providers recently, so hopefully some of these quirks get resolved soon! For now, we can just keep sharing what others have found to have worked for their setups 🤷🏽‍♂️

Unfortunately, it doesn't seem to work with tf-cloud (it gets the connect: connection refused error), so I locked the module on v18.19 and it still works. Apparently using the kubectl provider instead of the kubernetes provider (even completely removing it) made it work with terraform-cloud 🤷‍♀️:

provider "kubectl" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

I just ran into this while debugging an issue during redeployment of a cluster. I'm not sure exactly how it happened, but we ended up in a state where the cluster had been destroyed, which caused terraform to not be able to connect to the cluster (duh...) using the provider, and it defaulted to <IP_ADDRESS> when trying to touch the config map...
As mentioned, I'm not sure exactly how it ended up in that state, but it got so bad that I'd get this dial tcp <IP_ADDRESS>:80: connect: connection refused error on terraform plan even with all references to the config map removed. It turns out there was still a reference to the config map in the state file, so removing that using terraform state rm module.eks.this.kubernetes_config_map_v1_data.aws_auth allowed me to redeploy... Maybe not applicable to most of you, but hopefully it's useful for someone in the future :D

I'm also experiencing this; in the meantime, are there any workarounds?

I'm experiencing the same problem with the latest version. Initial creation of the cluster worked fine, but trying to update any resources after creation I get the same error.

│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp <IP_ADDRESS>:80: connect: connection refused
│
│   with module.eks.kubernetes_config_map_v1_data.aws_auth[0],
│   on .terraform/modules/eks/main.tf line 431, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  431: resource "kubernetes_config_map_v1_data" "aws_auth" {
│

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--profile", "terraformtest"]
  }
}

Faced the same, then checked the state using terraform state list and found k8s-related entries there. Then I removed them using:

terraform state rm module.eks.kubernetes_config_map.aws_auth[0]
terraform state rm module.eks.local_file.kubeconfig[0]
terraform state rm aws_s3_bucket_object.upload_kubeconfig

And that helped to resolve the issue.

Yeah, I also deleted the aws_auth; it allowed me to continue (before that it was not letting me destroy the k8s cluster): terraform state rm 'module.eks.kubernetes_config_map_v1_data.aws_auth[0]'. I don't know what implications rm'ing this state has; is it safe to keep removing this state whenever we encounter this error?
A brand new cluster and tf state, eks 1.22:

terraform {
  required_version = ">= 1.1.8"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.10"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.13.1"
    }
  }
}

provider "aws" {
  alias  = "without_default_tags"
  region = var.aws_region
  assume_role {
    role_arn = var.assume_role_arn
  }
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

locals {
  ## strips 'aws-reserved/sso.amazonaws.com/' from the AWSReservedSSO Role ARN
  aws_iam_roles_AWSReservedSSO_AdministratorAccess_role_arn_trim = replace(one(data.aws_iam_roles.AWSReservedSSO_AdministratorAccess_role.arns), "/[a-z]+-[a-z]+/([a-z]+(\\.[a-z]+)+)\\//", "")

  aws_auth_roles = concat([
    {
      rolearn  = data.aws_iam_role.terraform_role.arn
      username = "terraform"
      groups   = ["system:masters"]
    },
    {
      rolearn  = local.aws_iam_roles_AWSReservedSSO_AdministratorAccess_role_arn_trim
      username = "sre"
      groups   = ["system:masters"]
    }
    ],
    var.aws_auth_roles,
  )
}

leads to:

# aws-auth configmap
create_aws_auth_configmap = var.self_managed_node_groups != [] ? true : null
manage_aws_auth_configmap = true
aws_auth_roles            = local.aws_auth_roles
aws_auth_users            = var.aws_auth_users
aws_auth_accounts         = var.aws_auth_accounts

│ Error: Unauthorized
│
│   with module.eks.module.eks.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks.eks/main.tf line 414, in resource "kubernetes_config_map" "aws_auth":
│  414: resource "kubernetes_config_map" "aws_auth" {

Any ideas @bryantbiggs? Thanks in advance.

@FernandoMiguel I'm seeing something similar in a configuration I'm working with. After some thought, I believe you'll need to add the assumed role to your configuration:

provider "aws" {
  alias  = "without_default_tags"
  region = var.aws_region
  assume_role {
    role_arn = var.assume_role_arn
  }
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--role", var.assume_role_arn]
  }
}

Sadly this isn't a solution for me. The configuration I'm working with uses dynamic credentials fed in. Something along the lines of...

provider "aws" {
  access_key = <access_key>
  secret_key = <secret_key>
  token      = <token>
  region     = <region>
}

This is useful if doing something where a temporary VM, container, or TFE instance is running the terraform execution. Going down this route, the provider is getting fed the connection information, and it is used entirely within the provider context (no aws config process was ever used). The problem is none of that data is stored or carried over, so when the kubernetes provider tries to run the exec, it's going to default to the methods the aws cli uses (meaning a locally stored config in ~/.aws/config or ~/.aws/credentials). In my case that doesn't exist.
@FernandoMiguel it looks like you are presumably using a ~/.aws/config, so passing the assumed role and possibly the profile (if not using a default) should help move that forward. I cannot guarantee it will fix it, but that would be the theory.

No config and no aws creds hardcoded. Everything is assume role from a global var. This works on hundreds of our projects. If you mean the cli exec, that's running from aws-vault exec --server.

@FernandoMiguel Hmm, well that's interesting. I was able to get a solution to work for me:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    # This requires aws-iam-authenticator to be installed locally where Terraform is executed
    args = ["token", "-i", module.eks.cluster_id]
  }
}

This seemed to work for me, but I also had to expose my endpoint to be public for the first run. Our network configuration was locked down too tightly for our remote execution server to hit the endpoint. That could be something else to make sure you are hitting.

If you mean the cli exec, that's running from aws-vault exec --server

What I meant was: if credentials are being passed to the aws provider, then I wouldn't necessarily see them being passed to the kubernetes provider. Some troubleshooting you could try: TF_LOG=debug terraform plan ... in order to get more information, if you haven't tried that. If you really wanted to test whether the kubernetes exec works, spin up a VM or container, pass the credentials, and see if that carries over. If my guess is correct, then a way around it would be creating a ~/.aws/credentials file using a null resource and templating out configuration that aws eks get-token can then reference. The thought process I am having is that the data being passed into the kubernetes provider contains no information about aws configuration. So I would expect it to fail if the instance running the terraform didn't have the aws cli configured.

@bryantbiggs I think the thought process I had above just reinforces your comment. I don't think there is anything in this module that can be done to fix this. I do have a suggestion of not completely removing the aws_auth_configmap_yaml output unless you have other solutions coming up. The reasoning is I could see a use case where terraform is run to provision a private cluster, which may or may not be running on an instance that can reach that endpoint. If it isn't, the aws_auth_configmap_yaml can be used in a completely separate process to hit the private cluster endpoint. It all depends on how separation of duties may come into play (a person to provision, and maybe a person to configure). It's just a thought.

I would love to know what isn't working here. I spent a large chunk of this week trying every combo I could think of to get this to work, without success. Different creds for the kube provider, different parallelism settings, recreating the code outside of the module so it would run after the eks cluster module had finished, etc. I would always get either an authentication error, that the config map didn't exist, or that it couldn't create it. Very frustrating. If we were to keep the now-deprecated output, I can at least revert my internal PR and keep using that old and terrible null exec code to patch the config map. The problem might be the terraform-provider-kubernetes and not terraform-aws-eks.
Take a look at https://github.com/hashicorp/terraform-provider-kubernetes/issues/1479, https://github.com/hashicorp/terraform-provider-kubernetes/issues/1635#issuecomment-1068468254 ... more about localhost connection refused.

@tanvp112 you are onto something there. We have this provider; notice the highlighted bit that is not available until the cluster is up. So it is possible that this provider is getting initialised with the wrong endpoint, maybe even "localhost", and of course that explains why auth fails. It also explains why the 2nd apply works fine, because now the endpoint is correct.

So my issue was with authentication, and I believe this example clearly states the issue. The example states that you must set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Doing a little more digging, those having issues with authentication could try something like this:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    # This would set up the aws cli configuration if there is no config or credential file on the host that would run the aws cli command
    env = {
      AWS_ACCESS_KEY_ID     = var.access_key_id
      AWS_SECRET_ACCESS_KEY = var.secret_access_key
      AWS_SESSION_TOKEN     = var.token
    }
    # This requires the awscli to be installed locally where Terraform is executed
    command = "aws"
    args    = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

I haven't gotten to try this myself, but it should work. The AWS_SESSION_TOKEN would only be needed for an assumed-role process, but it could possibly work.

I honestly don't know what you are trying to do... aws iam auth can be done in many ways. Not everyone has a dedicated IAM account... we use assume roles, for example.

When you assume a role, you retrieve a temporary access key, secret key, and token. My code snippet is an example for when a user is running things in a jobbed-off process inside of a container, where the container contains no context for AWS (no config or credentials file).
That is my use case, where my runs are on an isolated instance that does not persist (Terraform Cloud follows this same structure, but does not have aws installed by default), and run in a CI/CD pipeline fashion, not on a local machine. When the aws provider is used, the configuration information is passed into the provider, as in this example. (I'm keeping it simple. My context actually uses dynamic credentials via HashiCorp Vault, but I don't want to introduce that complexity in this explanation.)

provider "aws" {
  region     = "us-east-1"
  access_key = "<access key | passed via variable or some data query>"
  secret_key = "<secret access key | passed via variable or some data query>"
  token      = "<session token | passed via variable or some data query>"
}

In this instance the AWS provider has all information passed in, using the provider configuration method. On this run no local aws config file or environment variables exist, so it needs this to make any aws connection. All aws resources create successfully in this process, besides the aws-auth configmap, when using the suggested example:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    # This requires the awscli to be installed locally where Terraform is executed
    command = "aws"
    args    = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

The reason this is failing is that the Kubernetes provider has no context on what you use for the aws command, because no config or environment variables are being used. Therefore this will fail. NOTE: this will also fail if you have a local AWS config loaded, via a config file or environment variables, that does not run as the same role that created the EKS cluster. The only auth by default is the user or role that created the cluster. So if the local user cannot assume the role used with the above aws provider, the kubernetes commands will fail as well. That is how the suggested route came to be:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    # This would set up the aws cli configuration if there is no config or credential file on the host that would run the aws cli command
    env = {
      AWS_ACCESS_KEY_ID     = "<same access key passed to aws provider | passed via variable or some data query>"
      AWS_SECRET_ACCESS_KEY = "<same secret access key passed to aws provider | passed via variable or some data query>"
      AWS_SESSION_TOKEN     = "<same session token passed to aws provider | passed via variable or some data query>"
    }
    # This requires the awscli to be installed locally where Terraform is executed
    command = "aws"
    args    = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
  }
}

In this provider block it is purposely passing in the required credentials/configuration needed for the aws cli to successfully call aws eks get-token --cluster-name <cluster name>, because the kubernetes provider does not care what was passed in to the aws provider. There is no shared context, because there is no local configuration file and no environment variables being leveraged. @FernandoMiguel does this make sense regarding what I was trying to attain? This may not be your use case, but it is useful information for anyone trying to run this module using some external remote execution tool.
I'm going to add that this module does not contain the issue, but adding the above snippet to the documentation may help out those that may be purposely providing configuration to the aws provider vs. utilizing environment variables or local config files.

It does. I've been fighting issues using the kube provider for weeks, with what seems like a race condition or a failure to initialise endpoint/creds. Sadly, in our case, your snippet does not help since creds are already available via the metadata endpoint. But it's a good idea to always double-check whether CLI tools are using the expected creds.

@FernandoMiguel, according to the discussion here, the use of data "aws_eks_cluster" could resolve the chicken & egg issue.

I was having the same issue, but the solution that worked for me is to configure the kubernetes provider to use the role, something like this:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--role", "arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ROLE_NAME}"]
  }
}

Ohh, that's an interesting option... need to try that.

I have the same issue, but like this: Post "http://localhost/api/v1/namespaces/kube-system/configmaps": dial tcp [::1]:80: connect: connection refused when I set "manage_aws_auth_configmap = true" while deploying an EKS managed group. Is there a known way to solve it?

Related, if someone is not aware of it: https://github.com/hashicorp/terraform/issues/27728#issuecomment-779392630

My team has suffered this ongoing problem for a hot minute now. Even if you use the k8s provider outside of the module to update the configmap, you will hit an issue anytime your provider config relies on a computed value.
The workaround that we are implementing as I type this is to use a local-exec to call a script with kubectl. We are updating the configmap and doing some helm stuff to replace aws-cni and coredns with a proper chart. This has been a huge pain for us, with even plans failing when the cluster needs to be recreated or the EKS version updated.

Same problem with a fresh deployment. Were you able to resolve it?

We have updated successfully from the 17.x to the 18.x version, but I noticed the current problem and decided to dig deeper.

Reproduce

My steps to reproduce the issue: Create a new cluster using the latest version (18.21.0 at that moment) of the module; create_aws_auth_configmap and manage_aws_auth_configmap are true due to self-managed node groups. Worked well. Change a module parameter to force a cluster destroy/apply, e.g. add some suffix to the iam_role_name value. I got an error:

│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp <IP_ADDRESS>:80: connect: connection refused
│
│   with module.eks.kubernetes_config_map.aws_auth[0],
│   on .terraform/modules/eks/main.tf line 414, in resource "kubernetes_config_map" "aws_auth":
│  414: resource "kubernetes_config_map" "aws_auth" {

I used both configurations for the kubernetes provider, which work as expected until iam_role_name was changed:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.eks.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks.token
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id, "--role", "arn:aws:iam::${var.account_id}:role/system/${var.current_iam_role_name}"]
  }
}

As mentioned before, I suppose such behavior is caused by computed values in the kubernetes provider. I understand that a cluster recreate is not what you want to get, but you should be able to determine that something is going wrong.

Versions

Terraform v1.1.4 on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.11.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/tls v3.4.0

P.S. I forgot to add that updating the cluster version doesn't generate any errors for me.

I met the same error T_T

So everything works well, though whenever I change the cluster_name within the EKS module (the cluster will be replaced, which is OK) I get an error: Error: Get "http://localhost/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/blah:eks-editor": dial tcp [::1]:80: connect: connection refused. This is driving me crazy. Wondering if the owners of the kubernetes provider are even aware of what is going on. So far, no single workaround is working for me.

Bumping this as I have the same error with 18.26.6. It seems like trying to connect to create the aws-auth configmap is what causes the issue.

This truly sounds like the biggest issue of the module. I've been using the EKS tf module for many years, and this is something that we face with almost every upgrade and new deployment.
I had this issue when I added one of these on an existing cluster: iam_role_name / cluster_security_group_name / node_security_group_name. As the OP wrote, using export KUBE_CONFIG_PATH=$PWD/kubeconfig worked for me. So did using config_path for the provider "kubernetes" block. Which, of course, means it takes a couple of passes of terraform apply in order to get to a stable state.

Faced a similar issue and had to delete the aws_auth from the remote state, which fixed the issue. To remove the block from remote state:

terragrunt state pull > temp.tfstate

// remove the complete block of `aws_auth`
{}
"module": "module.eks",
"mode": "managed",
"type": "kubernetes_config_map_v1_data",
"name": "aws_auth",

terragrunt state push temp.tfstate

You don't want to leave aws auth unmanaged.

Hi, I'm at the same point as you @adiii717!!! Have you been able to get through this?

@bcarranza actually the error stays the same until I have to destroy and recreate; the stranger part is that the destroy recognizes the same cluster but the apply does not. So I will say the latest module is pretty unstable, which definitely creates problems in a live environment. I have been using 17.x in live so far and did not face any issue.

This is such a frustrating issue, having to do crazy workarounds to get anywhere.

I am getting the below error if I touch/change/comment/update anything on the cluster_security_group_description and cluster_security_group_name variables. I just wanted to set the name and description of the SG that is created for EKS by default.

cluster_security_group_description = "Short Description"
cluster_security_group_name        = local.name_suffix

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

with module.eks_cluster.kubernetes_config_map_v1_data.aws_auth[0],
on .terraform/modules/eks_cluster/main.tf line 443, in resource "kubernetes_config_map_v1_data" "aws_auth":
443: resource "kubernetes_config_map_v1_data" "aws_auth" {

Any solution for this? Thanks!

Hello. Regarding this problem, I also had it and found a workaround. Since this issue happens when the EKS datasources are only "known after apply" during the terraform plan, due to control plane endpoint changes, I created an external datasource that basically fetches the EKS cluster endpoint and certificates from a shell script (it uses the aws command line). The script is attached. I set it as my default data source. If this datasource fails (usually when I create a new cluster), it switches to the default EKS datasource. But with this external datasource, I no longer depend on the state of terraform, and then any "known after apply" has no impact. This is the content of the .tf file used to instantiate the kubernetes providers:

data "aws_region" "current" {}

data "external" "aws_eks_cluster" {
  program = ["sh", "${path.module}/script/get_endpoint.sh"]
  query = {
    cluster_name = "${var.kubernetes_properties.cluster_name}"
    region_name  = "${data.aws_region.current.name}"
  }
}

provider "kubernetes" {
  host                   = data.external.aws_eks_cluster.result.cluster_endpoint == "" ? data.aws_eks_cluster.this[0].endpoint : data.external.aws_eks_cluster.result.cluster_endpoint
  cluster_ca_certificate = data.external.aws_eks_cluster.result.cluster_endpoint == "" ? base64decode(data.aws_eks_cluster.this[0].certificate_authority[0].data) : base64decode(data.external.aws_eks_cluster.result.certificate_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.kubernetes_properties.cluster_name, "--role-arn", try(data.aws_iam_session_context.this[0].issuer_arn, "")]
    command     = "aws"
  }
}

The same configuration can be applied to the kubectl and helm providers. I have created clusters and changed EKS control plane configurations using this workaround and have had no issues so far. I know that an external data source is not recommended, as it's a bypass of the terraform state, but in this case it's very useful. get_endpoint.sh.gz

But with this external datasource, I no longer depend on the state of terraform, and then any "known after apply" has no impact.

That is entirely inaccurate. The kubernetes/helm/kubectl providers will always need a cluster's certificate and endpoint, in some shape or form, which are not values that you can know before the cluster comes into existence.

My bad. What I was trying to say is that after the cluster is created, I will not depend on "known after apply" in case of changes in the EKS control plane. If the cluster does not exist, of course, I cannot retrieve the EKS cluster endpoint and certificate. That's why I said, "If this datasource fails (usually when I create a new cluster), it switches to the default EKS datasource." That's why I have this condition: data.external.aws_eks_cluster.result.cluster_endpoint == "" ? data.aws_eks_cluster.this[0].endpoint : data.external.aws_eks_cluster.result.cluster_endpoint

Thank you @mesobreira and @bryantbiggs. I will try this solution.

I am getting the below error if I touch/change/comment/update anything on the cluster_security_group_description and cluster_security_group_name variables. I just wanted to set the name and description of the SG that is created for EKS by default. I am using version = "~> 18.23.0".

cluster_security_group_description = "Short Description"
cluster_security_group_name        = local.name_suffix

Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused

with module.eks_cluster.kubernetes_config_map_v1_data.aws_auth[0],
on .terraform/modules/eks_cluster/main.tf line 443, in resource "kubernetes_config_map_v1_data" "aws_auth":
443: resource "kubernetes_config_map_v1_data" "aws_auth" {

Any solution for this? Thanks!

Same issue here. I could create the clusters without any issue and modify them, but after a few hours I got the same error. I already tried a lot of changes: use data, use the module output, use the exec command. Always the same issue.

│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused
│
│   with module.eks.kubernetes_config_map_v1_data.aws_auth[0],
│   on .terraform/modules/eks/main.tf line 475, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  475: resource "kubernetes_config_map_v1_data" "aws_auth" {

@csepulveda, have you tried using the external data source, as I mentioned above? I really do not understand what the issue is with terraform.
provider "kubernetes" { host = data.aws_eks_cluster.cluster.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data) exec { api_version = "client.authentication.k8s.io/v1beta1" command = "aws" # This requires the awscli to be installed locally where Terraform is executed args = ["eks", "get-token", "--cluster-name", var.cluster_name] } } Using the data will not provide the information to the provider, despite the information clearly are in state file and are correct. Had to switch it to module.eks.cluster_endpoint and module.eks.cluster_certificate_authority_data Why the variables are not provided to provider ?? terraform -version Terraform v1.3.3 on linux_amd64 + provider registry.terraform.io/gavinbunney/kubectl v1.14.0 + provider registry.terraform.io/hashicorp/aws v4.37.0 + provider registry.terraform.io/hashicorp/cloudinit v2.2.0 + provider registry.terraform.io/hashicorp/helm v2.7.1 + provider registry.terraform.io/hashicorp/kubernetes v2.15.0 + provider registry.terraform.io/hashicorp/local v2.2.3 + provider registry.terraform.io/hashicorp/null v3.2.0 + provider registry.terraform.io/hashicorp/random v3.4.3 + provider registry.terraform.io/hashicorp/template v2.2.0 + provider registry.terraform.io/hashicorp/time v0.9.0 + provider registry.terraform.io/hashicorp/tls v4.0.4 + provider registry.terraform.io/oboukili/argocd v4.1.0 + provider registry.terraform.io/terraform-aws-modules/http v2.4.1 Was getting this error: Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused Using static values in the data section fixed the error for me. This was my configuration: data "aws_eks_cluster_auth" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } data "aws_eks_cluster" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } provider "kubernetes" { host = data.aws_eks_cluster.default.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data) token = data.aws_eks_cluster_auth.default.token } Was getting this error: Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused Using static values in the data section fixed the error for me. This was my configuration: data "aws_eks_cluster_auth" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } data "aws_eks_cluster" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } provider "kubernetes" { host = data.aws_eks_cluster.default.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data) token = data.aws_eks_cluster_auth.default.token } How do you mean static values? Was getting this error: Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused Using static values in the data section fixed the error for me. This was my configuration: data "aws_eks_cluster_auth" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } data "aws_eks_cluster" "default" { name = var.cluster_name depends_on =[aws_eks_cluster.cluster] } provider "kubernetes" { host = data.aws_eks_cluster.default.endpoint cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data) token = data.aws_eks_cluster_auth.default.token } How do you mean static values? 
Previously I had something along the lines of:

data "aws_eks_cluster_auth" "default" {
  name = aws_eks_cluster.my_cluster.name
}

Based on some of the comments above, I decided to use pre-set values, so I used variables and that got rid of the error.

Same error here using Terragrunt. Every time I have to upgrade the k8s version, I have to delete kubernetes_config_map_v1_data.aws_auth[0] from the state, otherwise I will get the following error:

kubernetes_config_map_v1_data.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: connect: connection refused
│
│   with kubernetes_config_map_v1_data.aws_auth[0],
│   on main.tf line 518, in resource "kubernetes_config_map_v1_data" "aws_auth":
│  518: resource "kubernetes_config_map_v1_data" "aws_auth" {
│
╵
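To sum up the pattern several commenters above found to work, here is a minimal sketch: feed the provider a plain input variable rather than a computed resource attribute, so the provider configuration never depends on a value that is only known after apply. The variable and data source names (var.cluster_name, data.aws_eks_cluster.this) are placeholders, not part of the module.

variable "cluster_name" {
  type = string
}

# Both lookups key off the literal variable, never off a computed attribute,
# so they remain resolvable during plan even while the cluster is being
# replaced or its version bumped.
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}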
2025-04-01T04:35:40.953178
2022-03-08T02:44:14
1162152585
{ "authors": [ "antonbabenko", "kty1965" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11359", "repo": "terraform-aws-modules/terraform-aws-lambda", "url": "https://github.com/terraform-aws-modules/terraform-aws-lambda/issues/277" }
gharchive/issue
Support self_managed_kafka event source mapping

Is your request related to a new offering from AWS? Available in the Terraform AWS provider: self-managed-apache-kafka.

Is your request related to a problem? Please describe. Currently, we can't connect to self-managed Kafka. We need to add the self_managed_event_source nested block in your module. I submitted this request at #200.

Describe the solution you'd like. If a self_managed_event_source value is set, we don't need to set event_source_arn. Only add the self_managed_event_source code block.

Describe alternatives you've considered.

code, main.tf, on aws_lambda_event_source_mapping.this:

- event_source_arn = each.value.event_source_arn
+ event_source_arn = lookup(each.value, "event_source_arn", null)
...
+ dynamic "self_managed_event_source" {
+   for_each = lookup(each.value, "self_managed_event_source", [])
+   content {
+     endpoints = self_managed_event_source.value["endpoints"]
+   }
+ }

Additional context

This issue has been resolved in version 3.1.0 :tada:
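For reference, a minimal sketch of what the underlying aws_lambda_event_source_mapping resource accepts for a self-managed Kafka source; the function, secret, broker addresses, and topic name are placeholders:

resource "aws_lambda_event_source_mapping" "kafka" {
  function_name     = aws_lambda_function.this.arn # placeholder function
  topics            = ["my-topic"]                 # placeholder topic
  starting_position = "TRIM_HORIZON"

  # No event_source_arn here: for self-managed Kafka the brokers are listed
  # directly instead of referencing an AWS-managed source.
  self_managed_event_source {
    endpoints = {
      KAFKA_BOOTSTRAP_SERVERS = "kafka1.example.com:9092,kafka2.example.com:9092"
    }
  }

  # Broker credentials, e.g. SASL/SCRAM stored in Secrets Manager.
  source_access_configuration {
    type = "SASL_SCRAM_512_AUTH"
    uri  = aws_secretsmanager_secret.kafka_auth.arn # placeholder secret
  }
}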
2025-04-01T04:35:40.955310
2019-06-19T22:10:49
458248707
{ "authors": [ "CNFIT", "antonbabenko" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11360", "repo": "terraform-aws-modules/terraform-aws-rds", "url": "https://github.com/terraform-aws-modules/terraform-aws-rds/issues/133" }
gharchive/issue
RDS module version 2.0.0 fails to download. RDS module version 2.0.0 fails to initialize; version 1.28.0 works fine.

>terraform init
Initializing modules...
- module.tsql-express
  Found version 2.0.0 of terraform-aws-modules/rds/aws on registry.terraform.io
  Getting source "terraform-aws-modules/rds/aws"

Error downloading modules: Error loading modules: module tsql-express: Error parsing .terraform\modules\68bc2aa11327a45d0cb94cead5018a06\terraform-aws-modules-terraform-aws-rds-fedd420\main.tf: At 2:35: Unknown token: 2:35 IDENT var.db_subnet_group_name

Hi! Please make sure you are using the correct version of Terraform with the correct version of the module as described here - https://github.com/terraform-aws-modules/terraform-aws-rds#terraform-versions
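The "Unknown token ... IDENT" parse error is what a pre-0.12 Terraform prints when it reads 0.12-style syntax, which is consistent with the version-compatibility note above. A minimal sketch of the pin, assuming the module block name from the log and that the 1.x series is the one compatible with older Terraform:

module "tsql-express" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 1.28" # 2.x of this module targets newer Terraform syntax

  # ... the rest of the module arguments stay unchanged
}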
2025-04-01T04:35:40.957451
2018-02-06T05:08:24
294633974
{ "authors": [ "antonbabenko", "lawliet89" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11361", "repo": "terraform-aws-modules/terraform-aws-vpc", "url": "https://github.com/terraform-aws-modules/terraform-aws-vpc/pull/69" }
gharchive/pull-request
Manage Default Route Table under Terraform So that the default route table will be named and tagged accordingly instead of being unnamed and untagged. You're right. I'll change the PR to tag aws_default_route_table instead. I didn't know that resource existed =X. @antonbabenko I've pushed the changes. This change is not as small as it may seem because it reassigns default resources. I will test it tomorrow and extend this PR a bit. v1.19.0 has been released.
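For context, a minimal sketch of the aws_default_route_table approach discussed above; the VPC reference and tag value are placeholders:

# Adopts (rather than creates) the VPC's default route table, so it can be
# named and tagged like any other managed resource.
resource "aws_default_route_table" "this" {
  default_route_table_id = aws_vpc.this.default_route_table_id # placeholder VPC

  tags = {
    Name = "my-vpc-default" # placeholder name
  }
}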
2025-04-01T04:35:40.958961
2018-10-10T04:12:55
368487233
{ "authors": [ "frank8812", "hsatterwhite-transloc" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11362", "repo": "terraform-google-modules/terraform-google-lb-http", "url": "https://github.com/terraform-google-modules/terraform-google-lb-http/issues/36" }
gharchive/issue
Do you support backend bucket? I'm trying to create a load balancer with Terraform, but it seems this module does not support a backend bucket. Do you support it? @marekaf If I'm understanding this correctly and have my local proof of concept right, then it appears that you have to have BOTH a default service and a backend bucket when using this module. The gist of what I'm saying is that even in the case of only wanting to provision a GCE LB instance with a single bucket configured as the backend, you still need to configure a default service, even if that service does nothing and is never used in path matching. Is this accurate? Or can you simply configure ONLY a single backend bucket using the module and forgo a backend service altogether?
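For reference, a minimal sketch of a backend bucket defined with the plain Google provider resources, outside this module; the bucket and resource names are placeholders:

resource "google_storage_bucket" "static" {
  name     = "my-static-assets" # placeholder, must be globally unique
  location = "US"
}

# The backend bucket can then be referenced from a URL map, alongside or
# instead of backend services.
resource "google_compute_backend_bucket" "static" {
  name        = "static-backend"
  bucket_name = google_storage_bucket.static.name
  enable_cdn  = true
}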
2025-04-01T04:35:40.960523
2021-07-15T23:17:11
945817080
{ "authors": [ "betsy-lichtenberg", "comment-bot-dev" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11363", "repo": "terraform-google-modules/terraform-google-lb-internal", "url": "https://github.com/terraform-google-modules/terraform-google-lb-internal/pull/62" }
gharchive/pull-request
Added region tags for inclusion in C.G.C. Added README. Intending to add to a new page similar to https://cloud.google.com/load-balancing/docs/https/ext-http-lb-tf-module-examples, but for L4 ILB. Thanks for the PR! 🚀✅ Lint checks have passed.
2025-04-01T04:35:40.980984
2024-08-23T20:42:10
2483860480
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11364", "repo": "terraform-ibm-modules/terraform-ibm-cos", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-cos/pull/703" }
gharchive/pull-request
fix(deps): update terraform-module

This PR contains the following updates:

Package | Type | Update | Change
terraform-ibm-modules/kms-all-inclusive/ibm (source) | module | patch | 4.15.8 -> 4.15.9
terraform-ibm-modules/observability-instances/ibm (source) | module | patch | 2.14.0 -> 2.14.1
terraform-ibm-modules/secrets-manager/ibm (source) | module | patch | 1.17.4 -> 1.17.6

Release Notes

terraform-ibm-modules/terraform-ibm-kms-all-inclusive (terraform-ibm-modules/kms-all-inclusive/ibm)
v4.15.9 Compare Source
Bug Fixes: deps: update terraform ibm to latest for the deployable architecture solution (#534) (0a1d8da)

terraform-ibm-modules/terraform-ibm-observability-instances (terraform-ibm-modules/observability-instances/ibm)
v2.14.1 Compare Source
Bug Fixes: skip auth policy creation for cloud logs buckets, as the cos module already creates the IAM policy to access the KMS (#544) (8fe8441)

terraform-ibm-modules/terraform-ibm-secrets-manager (terraform-ibm-modules/secrets-manager/ibm)
v1.17.6 Compare Source
Bug Fixes: reduce validation on event notifications for solution (#184) (780d3b4)
v1.17.5 Compare Source
Bug Fixes: enable event notifications in secrets manager DA (#178) (7a98602)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.

[ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

/run pipeline

:tada: This issue has been resolved in version 8.10.5 :tada: The release is available on: GitHub release v8.10.5 Your semantic-release bot :package::rocket:
2025-04-01T04:35:40.987077
2024-08-02T15:23:47
2445212106
{ "authors": [ "shemau", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11365", "repo": "terraform-ibm-modules/terraform-ibm-icd-rabbitmq", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-icd-rabbitmq/pull/215" }
gharchive/pull-request
style: standardize variable definitions and descriptions

Description

Standardizing style in the variables files. No changes to variable names or default values. Some description changes, and thus README updates. Ordering: type, description, default, sensitive, and a blank line before each validation block.

Release required?
[x] No release
[ ] Patch release (x.x.X)
[ ] Minor release (x.X.x)
[ ] Major release (X.x.x)

Release notes content Not applicable with no release.

Run the pipeline If the CI pipeline doesn't run when you create the PR, the PR requires a user with GitHub collaborator access to run the pipeline. Run the CI pipeline when the PR is ready for review and you expect tests to pass. Add a comment to the PR with the following text: /run pipeline

Checklist for reviewers
[ ] If relevant, a test for the change is included or updated with this PR.
[ ] If relevant, documentation for the change is included or updated with this PR.

For mergers Use a conventional commit message to set the release level. Follow the guidelines. Include information that users need to know about the PR in the commit message. The commit message becomes part of the GitHub release notes. Use the Squash and merge option.

/run pipeline

:tada: This issue has been resolved in version 1.11.5 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:40.992647
2022-12-21T12:09:06
1506201203
{ "authors": [ "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11366", "repo": "terraform-ibm-modules/terraform-ibm-icse-vpc-address-prefix", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-icse-vpc-address-prefix/pull/115" }
gharchive/pull-request
chore(deps): update common-dev-assets digest to a796e37 This PR contains the following updates: Package Update Change common-dev-assets digest 0920aa4 -> a796e37 Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, click this checkbox. This PR has been generated by Renovate Bot. :tada: This PR is included in version 1.0.2 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.004245
2024-09-22T04:04:01
2540628295
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11367", "repo": "terraform-ibm-modules/terraform-ibm-landing-zone-vpc", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone-vpc/pull/857" }
gharchive/pull-request
chore(deps): update module github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper to v1.38.3

This PR contains the following updates:

Package | Type | Update | Change
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper | require | patch | v1.38.2 -> v1.38.3

Release Notes

terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.38.3 Compare Source
Bug Fixes: deps: update gomod (#864) (802df12)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.

[ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

ℹ Artifact update notice

File name: tests/go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):
- 2 additional dependencies were updated
- The go directive was updated for compatibility reasons

Details:

Package | Change
go | 1.22 -> 1.22.0
github.com/IBM-Cloud/power-go-client | v1.7.1 -> v1.8.1
github.com/IBM/platform-services-go-sdk | v0.69.0 -> v0.69.1

/run pipeline

:tada: This PR is included in version 7.19.1 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.010213
2022-12-14T18:11:23
1497150301
{ "authors": [ "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11368", "repo": "terraform-ibm-modules/terraform-ibm-landing-zone", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone/pull/219" }
gharchive/pull-request
chore(deps): update common-dev-assets digest to 9a26bb0 This PR contains the following updates: Package Update Change common-dev-assets digest 12e7798 -> 9a26bb0 Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, click this checkbox. This PR has been generated by Renovate Bot. :tada: This PR is included in version 1.13.1 :tada: The release is available on: GitHub release v1.13.1 Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.015883
2023-02-04T06:10:42
1570787965
{ "authors": [ "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11369", "repo": "terraform-ibm-modules/terraform-ibm-landing-zone", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-landing-zone/pull/271" }
gharchive/pull-request
chore(deps): update common-dev-assets digest to 926911b This PR contains the following updates: Package Update Change common-dev-assets digest b14a2f8 -> 926911b Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, click this checkbox. This PR has been generated by Renovate Bot. :tada: This PR is included in version 3.0.0 :tada: The release is available on: GitHub release v3.0.0 Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.024666
2023-11-28T14:35:15
2014588237
{ "authors": [ "surajsbharadwaj", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11370", "repo": "terraform-ibm-modules/terraform-ibm-powervs-instance", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-powervs-instance/pull/164" }
gharchive/pull-request
chore(deps): update ci dependencies This PR contains the following updates: Package Type Update Change common-dev-assets digest b224509 -> 2f74f8e github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper require patch v1.25.1 -> v1.25.2 Release Notes terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper) v1.25.2 Compare Source Bug Fixes deps: update gomod (#​705) (0c9e048) Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline /run pipeline :tada: This PR is included in version 1.0.3 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.032966
2024-12-18T00:05:41
2746383663
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11371", "repo": "terraform-ibm-modules/terraform-ibm-scc", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-scc/pull/220" }
gharchive/pull-request
fix(deps): update terraform terraform-ibm-modules/cos/ibm to v8.15.12 This PR contains the following updates: Package Type Update Change terraform-ibm-modules/cos/ibm (source) module patch 8.15.11 -> 8.15.12 Release Notes terraform-ibm-modules/terraform-ibm-cos (terraform-ibm-modules/cos/ibm) v8.15.12 Compare Source Bug Fixes Fixed bug in fscloud submodule where object locking related config was being ignored (#​796) (dcbbd33) Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline :tada: This PR is included in version 1.8.31 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.040941
2024-04-20T07:44:57
2254435856
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11372", "repo": "terraform-ibm-modules/terraform-ibm-secrets-manager-private-cert-engine", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-secrets-manager-private-cert-engine/pull/195" }
gharchive/pull-request
chore(deps): update ci dependencies This PR contains the following updates: Package Type Update Change common-dev-assets digest 44ee19c -> 09c3d8a github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper require patch v1.30.7 -> v1.30.8 Release Notes terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper) v1.30.8 Compare Source Bug Fixes deps: update gomod (#​795) (3f738c7) Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline /run pipeline
2025-04-01T04:35:41.058121
2024-02-16T23:48:44
2139643454
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11374", "repo": "terraform-ibm-modules/terraform-ibm-secrets-manager", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-secrets-manager/pull/47" }
gharchive/pull-request
fix(deps): update terraform terraform-ibm-modules/cbr/ibm to v1.18.1 This PR contains the following updates: Package Type Update Change terraform-ibm-modules/cbr/ibm (source) module patch 1.18.0 -> 1.18.1 Release Notes terraform-ibm-modules/terraform-ibm-cbr (terraform-ibm-modules/cbr/ibm) v1.18.1 Compare Source Bug Fixes deps: updated required provider constraints to not allow major version updates (#​400) (ecacb57) Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline /run pipeline :tada: This PR is included in version 1.1.3 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.065975
2023-06-03T02:09:08
1739068971
{ "authors": [ "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11375", "repo": "terraform-ibm-modules/terraform-ibm-zvsi", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-zvsi/pull/4" }
gharchive/pull-request
chore(deps): update common-dev-assets digest to b5456c0 This PR contains the following updates: Package Update Change common-dev-assets digest 6565341 -> b5456c0 Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline :tada: This PR is included in version 1.0.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.080225
2023-09-12T12:28:34
1892414097
{ "authors": [ "ocofaigh", "terraform-ibm-modules-dev", "terraform-ibm-modules-ops" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11376", "repo": "terraform-ibm-modules/terraform-ibm-zvsi", "url": "https://github.com/terraform-ibm-modules/terraform-ibm-zvsi/pull/70" }
gharchive/pull-request
chore(deps): update ci dependencies This PR contains the following updates: Package Type Update Change common-dev-assets digest 0e5e0eb -> cc6f9f5 github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper require minor v1.20.10 -> v1.21.1 Release Notes terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper) v1.21.1 Compare Source Bug Fixes deps: update gomod (#639) (97a2c5c) v1.21.0 Features / Fixes getRemoteURL updated to getRemoteOriginURL for a more descriptive name getSymbolicRef replaced by getCurrentBranch, which tries multiple approaches in case one fails on the runtime environment CleanTerraformDir added; it removes Terraform metafiles like state, cache and lock files from the target directory DisableTempWorkingDir added to the test options; useful if you need to keep files around after the test when teardown is disabled Updated upgrade test to find the upstream URL and origin branch; this is to allow a fork to work and have the base branch and repo auto-detected. Option added for BaseTerraformRepo and BaseTerraformBranch to manually set the base repo and branch for an upgrade test if users are not using a default configuration. These will be overridden by environment variables BASE_TERRAFORM_REPO and BASE_TERRAFORM_BRANCH if set. Option added DisableTempWorkingDir to disable the temporary working directory. Workspace collisions when running in parallel could occur if this is set to true. Configuration 📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied. ♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired. [ ] If you want to rebase/retry this PR, check this box This PR has been generated by Renovate Bot. /run pipeline /run pipeline /run pipeline /run pipeline /run pipeline :tada: This PR is included in version 1.0.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.192763
2024-01-01T10:42:15
2061477549
{ "authors": [ "amalykhi", "bardielle" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11377", "repo": "terraform-redhat/terraform-provider-rhcs", "url": "https://github.com/terraform-redhat/terraform-provider-rhcs/pull/479" }
gharchive/pull-request
OCM-5422: Adding a commits validation for each PR Your commit message should start with a JIRA issue ('JIRA-1111'), a GitHub issue ('#39'), or a BugZilla issue ('Bug 123') with a following colon(:). i.e. 'MGMT-42: Summary of the commit message' You can also ignore the ticket checking with 'NO-ISSUE' for master only. /retest /retest /retest
2025-04-01T04:35:41.195774
2015-10-14T14:24:28
111409676
{ "authors": [ "dlebauer", "rachelshekar" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11378", "repo": "terraref/computing-pipeline", "url": "https://github.com/terraref/computing-pipeline/issues/21" }
gharchive/issue
check compression, write to disk speed, and if this can be parallelized What information does Dan need to determine what can be done in memory, what needs to be done on site, or whether we should ingest the stream without local storage (could have data loss)? Have spoken w. C. Zender about using NCO to process and compress in memory. Replaced by #38 and #39.
2025-04-01T04:35:41.197471
2018-08-16T18:47:12
351324359
{ "authors": [ "craig-willis", "max-zilla" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11379", "repo": "terraref/computing-pipeline", "url": "https://github.com/terraref/computing-pipeline/issues/494" }
gharchive/issue
Implement Pegasus workflow for bin2tif - canopy cover Based on the existing https://github.com/terraref/workflow-pilot/, implement the "real" version of the bin2tif pipeline, given the implementation defined in https://github.com/terraref/computing-pipeline/issues/480 Check for issue to create Condor pool & test. This is complete; will create a follow-up issue to test on Campus Cluster next.
2025-04-01T04:35:41.199305
2023-02-01T20:48:43
1566805075
{ "authors": [ "BarryNolte" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11380", "repo": "terrastruct/d2-vscode", "url": "https://github.com/terrastruct/d2-vscode/pull/42" }
gharchive/pull-request
Custom task based execution Changed d2 execution to a custom task to take advantage of 'jump to error' (#36) Eliminated the need for temp files (#39) Hot key for show command only active when d2 document is active (#41) Don't forget to run the 'webpack --watch' task. I just spent 15 min wondering why my changes weren't running. Could you also run yarn prettier -w .? It's been prettied.
2025-04-01T04:35:41.210446
2024-01-26T00:44:58
2101392722
{ "authors": [ "sgrimm" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11381", "repo": "terraware/terraware-server", "url": "https://github.com/terraware/terraware-server/pull/1644" }
gharchive/pull-request
Add extension method to create rectangles We need to construct rectangles in several places in the code; add a helper function as an extension of GeometryFactory to do it. [!WARNING] This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. Learn more Current dependencies on/for this PR: main PR #1642 PR #1644 👈 PR #1645 This stack of pull requests is managed by Graphite. Merge activity Jan 26, 1:17 PM: @sgrimm started a stack merge that includes this pull request via Graphite.
2025-04-01T04:35:41.215776
2024-07-19T01:49:45
2417654616
{ "authors": [ "tommylau523" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11382", "repo": "terraware/terraware-web", "url": "https://github.com/terraware/terraware-web/pull/2881" }
gharchive/pull-request
Added failure message bar to map [!WARNING] This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. Learn more #2881 👈 #2880 main This stack of pull requests is managed by Graphite. Learn more about stacking. Join @tommylau523 and the rest of your teammates on Graphite Merge activity Jul 19, 1:21 PM EDT: @tommylau523 started a stack merge that includes this pull request via Graphite.
2025-04-01T04:35:41.218094
2023-05-31T15:17:55
1734469728
{ "authors": [ "FritzHoing", "dnlkoch" ], "license": "BSD-2-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11383", "repo": "terrestris/shogun-gis-client", "url": "https://github.com/terrestris/shogun-gis-client/pull/886" }
gharchive/pull-request
fix: makes the footer extendable via plugins The footer can now be extended via plugins. :tada: This PR is included in version 6.4.0 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
2025-04-01T04:35:41.247722
2015-09-14T07:46:43
106287953
{ "authors": [ "johnnyman727", "wprater" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11384", "repo": "tessel/t2-cli", "url": "https://github.com/tessel/t2-cli/pull/305" }
gharchive/pull-request
add factory method to make a tessel command Adds default options for name, usb, lan, and timeout. @johnnyman727 some methods did not have all of the name, usb, lan, and timeout options; for example, erase. With the new makeCommand method, they will. Is this what you intended? @wprater just two comments but this is a great refactor. Good to merge when you are, @johnnyman727 Love it! Need a quick hotfix! Nevermind! I tricked myself while I was on another branch (:
2025-04-01T04:35:41.250620
2022-07-15T21:09:43
1306523005
{ "authors": [ "johnwason", "marip8" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11385", "repo": "tesseract-robotics-packaging/descartes-light-feedstock", "url": "https://github.com/tesseract-robotics-packaging/descartes-light-feedstock/pull/1" }
gharchive/pull-request
Updated Boost version pinning Updates the version pinning of boost to require versions in the range 1.58.x <= boost < 2.x.x. Version 1.58 is the version distributed on Ubuntu Xenial, which is the earliest distribution for which we have a CI build on the repo. Addresses this issue. I tested this locally with the build script and it seemed to succeed; however, I wasn't really able to verify that it worked. What do I need to look for in the docker image that was generated? The versions of dependencies like Boost are controlled by conda-forge to make sure that everything remains compatible. conda-smithy manages updating these versions. They call them "pinning". All of the other packages in this organization will also need to be updated. Can you try tesseract-robotics-superpack? That has all the tesseract stuff in one package. @marip8 I have spent some time trying to get these individual packages to work, and the builds are still failing. The dependency resolution is not being consistent. I am going to delete all these feedstock repositories so they don't confuse people. Clone them if you want to tinker with them later. I will create backups locally.
2025-04-01T04:35:41.448188
2023-09-12T14:20:15
1892631176
{ "authors": [ "HofmeisterAn", "MeikelLP", "minoseah629" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11386", "repo": "testcontainers/testcontainers-dotnet", "url": "https://github.com/testcontainers/testcontainers-dotnet/issues/998" }
gharchive/issue
[Bug]: Test Container locks up testhost after Windows Update August update Testcontainers version 3.3.0 and 3.5.0 Using the latest Testcontainers version? Yes Host OS Windows 11 Host arch x64 .NET version 7.0.400 Docker version Client: Cloud integration: v1.0.35-desktop+001 Version: 24.0.5 API version: 1.43 Go version: go1.20.6 Git commit: ced0996 Built: Fri Jul 21 20:36:24 2023 OS/Arch: windows/amd64 Context: default Server: Docker Desktop 4.22.1 (118664) Engine: Version: 24.0.5 API version: 1.43 (minimum version 1.12) Go version: go1.20.6 Git commit: a61e2b4 Built: Fri Jul 21 20:35:45 2023 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.6.21 GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8 runc: Version: 1.1.7 GitCommit: v1.1.7-0-g860f061 docker-init: Version: 0.19.0 GitCommit: de40ad0 Docker info Client: Version: 24.0.5 Context: default Debug Mode: false Plugins: buildx: Docker Buildx (Docker Inc.) Version: v0.11.2-desktop.1 Path: C:\Program Files\Docker\cli-plugins\docker-buildx.exe compose: Docker Compose (Docker Inc.) Version: v2.20.2-desktop.1 Path: C:\Program Files\Docker\cli-plugins\docker-compose.exe dev: Docker Dev Environments (Docker Inc.) Version: v0.1.0 Path: C:\Program Files\Docker\cli-plugins\docker-dev.exe extension: Manages Docker extensions (Docker Inc.) Version: v0.2.20 Path: C:\Program Files\Docker\cli-plugins\docker-extension.exe init: Creates Docker-related starter files for your project (Docker Inc.) Version: v0.1.0-beta.6 Path: C:\Program Files\Docker\cli-plugins\docker-init.exe sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.) Version: 0.6.0 Path: C:\Program Files\Docker\cli-plugins\docker-sbom.exe scan: Docker Scan (Docker Inc.) Version: v0.26.0 Path: C:\Program Files\Docker\cli-plugins\docker-scan.exe scout: Command line tool for Docker Scout (Docker Inc.) Version: 0.20.0 Path: C:\Program Files\Docker\cli-plugins\docker-scout.exe Server: Containers: 39 Running: 2 Paused: 0 Stopped: 37 Images: 54 Server Version: 24.0.5 Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Using metacopy: false Native Overlay Diff: true userxattr: false Logging Driver: json-file Cgroup Driver: cgroupfs Cgroup Version: 1 Plugins: Volume: local Network: bridge host ipvlan macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog Swarm: inactive Runtimes: io.containerd.runc.v2 runc Default Runtime: runc Init Binary: docker-init containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8 runc version: v1.1.7-0-g860f061 init version: de40ad0 Security Options: seccomp Profile: unconfined Kernel Version: <IP_ADDRESS>-microsoft-standard-WSL2 Operating System: Docker Desktop OSType: linux Architecture: x86_64 CPUs: 20 Total Memory: 15.47GiB Name: docker-desktop ID: 1a6d71d1-9038-4cf4-b6ae-7a1a745742f9 Docker Root Dir: /var/lib/docker Debug Mode: false HTTP Proxy: http.docker.internal:3128 HTTPS Proxy: http.docker.internal:3128 No Proxy: hubproxy.docker.internal Experimental: false Insecure Registries: hubproxy.docker.internal:5555 <IP_ADDRESS>/8 Live Restore Enabled: false WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile What happened? 
Run a test in Visual Studio or with dotnet test; 1-2 tests would pass fine, then the test runner tries to continue running tests but is currently locked up. The test run is locked up because WerFault.exe is catching an error and reporting the problem to Windows. Also happening on a coworker's machine. Can provide video if requested. Relevant log output From the dotnet logs, the logs did say the testcontainer stopped before the 2nd or 3rd test starts. Additional information No response I have not noticed any issues on my Windows test environment. The test runner tries to continue running tests but is currently locked up; the test run is locked up because WerFault.exe is catching an error and reporting the problem to Windows. Does this imply that the test keeps running indefinitely, or is there something causing the test process to fail (can you share the WerFault.exe report)? Could you please look into the output of the docker ps and docker inspect commands? My initial assumption is that the dependent container may not be initializing correctly, causing Testcontainers to wait for the readiness check confirmation (can you share the container builder configuration). Occasionally, resetting Docker Desktop to its default factory settings can also be helpful 😅. Looks like testhost is the program that is using WerFault. At the start of the test run, the test container starts up fine, but by the 2nd or 3rd test of the whole test run, the container I am starting up with Testcontainers is no longer shown in Docker Desktop. I am sorry, I do not think I can help much with this information. Have you tried to reset Docker Desktop? Are you possibly exceeding the resources allocated to your test host? But not seeing logging info. Run the tests with the Debug configuration and check the test / debug window in Visual Studio (View > Output). In addition to that, try to run dotnet test --verbosity detailed to get a verbose output. I have not reset Docker Desktop; I want to preserve my existing containers. Trying code modifications; will provide a dotnet test -v:diag shortly
[testcontainers.org 00:00:00.05] Connected to Docker: Host: npipe://./pipe/docker_engine Server Version: 24.0.5 Kernel Version: <IP_ADDRESS>-microsoft-standard-WSL2 API Version: 1.43 Operating System: Docker Desktop Total Memory: 15.47 GB InitializeAsync Called [testcontainers.org 00:00:00.17] Docker container 8e84a8c7fd52 created [testcontainers.org 00:00:00.23] Start Docker container 8e84a8c7fd52 [testcontainers.org 00:00:01.53] Wait for Docker container 8e84a8c7fd52 to complete readiness checks [testcontainers.org 00:00:01.54] Docker container 8e84a8c7fd52 ready [testcontainers.org 00:00:01.59] Docker container d5e4bb66c8f0 created [testcontainers.org 00:00:01.60] Start Docker container d5e4bb66c8f0 [testcontainers.org 00:00:01.92] Wait for Docker container d5e4bb66c8f0 to complete readiness checks [testcontainers.org 00:00:01.93] Execute "/bin/sh -c true && (grep -i ':01538' /proc/net/tcp || nc -vz -w 1 localhost 5432 || /bin/bash -c '</dev/tcp/localhost/5432')" at Docker container d5e4bb66c8f0 [testcontainers.org 00:00:03.05] Execute "/bin/sh -c true && (grep -i ':01538' /proc/net/tcp || nc -vz -w 1 localhost 5432 || /bin/bash -c '</dev/tcp/localhost/5432')" at Docker container d5e4bb66c8f0 [testcontainers.org 00:00:04.19] Execute "/bin/sh -c true && (grep -i ':01538' /proc/net/tcp || nc -vz -w 1 localhost 5432 || /bin/bash -c '</dev/tcp/localhost/5432')" at Docker container d5e4bb66c8f0 [testcontainers.org 00:00:04.32] Docker container d5e4bb66c8f0 ready Fi=000,Fa=000,Pe=000 # > FEATURE: GetRecentlyAccessedForUserHandlerTests Before Start Fi=000,Fa=000,Pe=001 # 1> SCENARIO: Validates QueryResults RecentlyAccessedWorkflow ReturnsCreatedWorkflow Fi=000,Fa=000,Pe=001 # 1> STEP 1/6: GIVEN There Is A Person... Fi=000,Fa=000,Pe=001 # 1> STEP 1/6: GIVEN There Is A Person (Passed after 867ms) Fi=000,Fa=000,Pe=001 # 1> STEP 2/6: AND A Workflow Exists... Fi=000,Fa=000,Pe=001 # 1> STEP 2/6: AND A Workflow Exists (Passed after 409ms) Fi=000,Fa=000,Pe=001 # 1> STEP 3/6: AND The Workflow Has A Version... Fi=000,Fa=000,Pe=001 # 1> STEP 3/6: AND The Workflow Has A Version (Passed after 1s 158ms) Fi=000,Fa=000,Pe=001 # 1> STEP 4/6: AND A Recently Accessed Exists... Fi=000,Fa=000,Pe=001 # 1> STEP 4/6: AND A Recently Accessed Exists (Passed after 92ms) Fi=000,Fa=000,Pe=001 # 1> STEP 5/6: WHEN Query Is Invoked... Before End Fi=000,Fa=000,Pe=001 # 1> STEP 5/6: WHEN Query Is Invoked (Passed after 572ms) Fi=000,Fa=000,Pe=001 # 1> STEP 6/6: THEN QueryResult IncludesRecentlyCreatedWorkflow... Fi=000,Fa=000,Pe=001 # 1> STEP 6/6: THEN QueryResult IncludesRecentlyCreatedWorkflow (Passed after 13ms) Fi=001,Fa=000,Pe=000 # 1> SCENARIO RESULT: Passed after 3s 211ms After Start [testcontainers.org 00:00:24.40] Delete Docker container d5e4bb66c8f0 <!-- After End interesting. removing [assembly: CollectionBehavior(DisableTestParallelization = true, MaxParallelThreads = 2)] got me working again. i thought i saw something to ensure 1 process is running database connections. I'm having the same issue. Sadly I don't get any logs to post here. The Ryuk container does start (logs exist) but it fails to start my custom container (Typesense). It takes about 60s until the StartAsync is finished and then I get an exception: Unhandled exception. System.InvalidOperationException: Could not find resource 'TypesenseContainer'. Please create the resource by calling StartAsync(CancellationToken) or CreateAsync(CancellationToken). 
brickhub-backend-1 | at DotNet.Testcontainers.Guard.ThrowIf[TType](ArgumentInfo`1& argument, Func`2 condition, Func`2 ifClause) brickhub-backend-1 | at DotNet.Testcontainers.Resource.ThrowIfResourceNotFound() brickhub-backend-1 | at DotNet.Testcontainers.Containers.DockerContainer.GetMappedPublicPort(String containerPort) brickhub-backend-1 | at DotNet.Testcontainers.Containers.DockerContainer.GetMappedPublicPort(Int32 containerPort) My setup is a little more complex: I try to start the testcontainer from within another container in a Docker Compose setup. Yes, I did mount the volume for Docker: volumes: - /var/run/docker.sock:/var/run/docker.sock Funnily enough, this issue does not occur when running the app natively (not from within the docker container). I also noticed that the container is never created - so this is not an issue with the WaitStrategy. Is there any way you can help me debug this behavior? This looks like a Compose configuration. Did you set the environment variable mentioned here: https://dotnet.testcontainers.org/examples/dind/#compose? The Wormhole configuration varies a bit for each environment. I need more information about your setup if it is still not working. @HofmeisterAn thank you! I missed that part...
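For anyone hitting the same wall: the linked DinD docs pair the docker.sock mount with an environment variable so Testcontainers running inside the container can reach mapped ports. A sketch of the relevant Compose fragment; the service name is made up, and the variable value should be checked against those docs:

services:
  backend:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Assumed per the DinD compose example: resolve mapped ports via the host gateway.
      - TESTCONTAINERS_HOST_OVERRIDE=host.docker.internal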
2025-04-01T04:35:41.453586
2023-06-28T17:32:16
1779400319
{ "authors": [ "eddumelendez" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11387", "repo": "testcontainers/testcontainers-java", "url": "https://github.com/testcontainers/testcontainers-java/pull/7246" }
gharchive/pull-request
Add explicit SPI for R2DBC Testcontainers has been using com.google.auto.service:auto-service to declare R2DBCDatabaseContainerProvider implementations. In order to be consistent with JdbcDatabaseContainerProvider, dependency com.google.auto.service:auto-service is dropped and implementations are declared explicitly under META-INF/services/org.testcontainers.r2dbc.R2DBCDatabaseContainerProvider. I'll close it for now. It can be revisited in the future and avoid extra dependencies.
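For reference, an explicit SPI declaration is just a plain text file named after the service interface, listing one implementation class per line. A sketch of what such a file could look like; the implementation class name below is illustrative, not necessarily the exact class in the repository:

# File: META-INF/services/org.testcontainers.r2dbc.R2DBCDatabaseContainerProvider
org.testcontainers.containers.PostgreSQLR2DBCDatabaseContainerProvider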
2025-04-01T04:35:41.487441
2022-08-03T15:50:03
1327444109
{ "authors": [ "CLAassistant", "vikram-chaitanya" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11388", "repo": "testsigmahq/testsigma", "url": "https://github.com/testsigmahq/testsigma/pull/77" }
gharchive/pull-request
1.8.0 Release [x] End-to-End Testing Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 3 out of 4 committers have signed the CLA. :white_check_mark: tarun-testsigma :white_check_mark: shabarish-testsigma :white_check_mark: PratheepV :x: vikram-chaitanya You have signed the CLA already but the status is still pending? Let us recheck it.
2025-04-01T04:35:41.492961
2022-05-08T05:13:00
1228795595
{ "authors": [ "codefromthecrypt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11389", "repo": "tetratelabs/archive-envoy", "url": "https://github.com/tetratelabs/archive-envoy/pull/36" }
gharchive/pull-request
Updates to latest GHActions Intentionally not fixing the script or doing a release as there's a pending PR that needs to be redone. This just does maintenance in preparation for whenever that happens. thx again @mathetake!
2025-04-01T04:35:41.494043
2023-10-01T12:05:55
1920731857
{ "authors": [ "caibirdme", "mathetake" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11390", "repo": "tetratelabs/proxy-wasm-go-sdk", "url": "https://github.com/tetratelabs/proxy-wasm-go-sdk/issues/397" }
gharchive/issue
[Question] Same vm_id with different code I see from the OVERVIEW that we could reuse a VM in each thread to save resources. I want to know what will happen if two different pieces of wasm code use the same vm_id (which means they have their own RootContext implementations?). Assuming what you mean by wasm code is a Wasm binary, then Envoy creates an entirely different VM, not sharing the VM.
2025-04-01T04:35:41.528256
2022-03-18T17:09:56
1173821561
{ "authors": [ "PeterPumpkinEater69real", "canedoly", "lnx00", "nofhdgtf778" ], "license": "WTFPL", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11391", "repo": "tf2cheater2013/Fedoraware", "url": "https://github.com/tf2cheater2013/Fedoraware/issues/79" }
gharchive/issue
make fware that u can dual inject i injected fware and lbox and works but kicked me from the server so wtf is going on Disconnect: client disconnect? some compatibility issues it used to work with lbox but some update caused it to break #69 #60 #52 #48 please stop asking already #69 #60 #52 #48 please stop asking already JUST MAKE IT DUAL INJECTABLE M8 XDD
2025-04-01T04:35:41.543204
2020-07-04T18:11:19
650933595
{ "authors": [ "FoggyFinder", "tforkmann" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11392", "repo": "tforkmann/Fumble", "url": "https://github.com/tforkmann/Fumble/issues/1" }
gharchive/issue
Docs contain invalid information ^ The current page is confusing: the page is almost empty and, even worse, contains wrong information: "Thin F# API for Sqlite for easy data access to ms sql server with functional seasoning on top" The project is not about MSSQL. Also, it says that the NuGet version is 0.6.3, though it is 0.1 so far. Hi there, sorry I didn't have much time to work on the docs yet. Just fixed the docs main page. Will add some more docs in the next week.
2025-04-01T04:35:41.625906
2017-02-08T11:15:52
206167664
{ "authors": [ "elhigu", "jehy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11393", "repo": "tgriesser/knex", "url": "https://github.com/tgriesser/knex/issues/1903" }
gharchive/issue
Add check for column existence when renaming I use MariaDB with the MySQL client. If I try to rename a non-existent column, I get TypeError: Cannot read property 'Type' of undefined at /web/quest/node_modules/knex/lib/dialects/mysql/schema/tablecompiler.js:104:83 at tryCatcher (/web/quest/node_modules/bluebird/js/release/util.js:16:23) at Promise._settlePromiseFromHandler (/web/quest/node_modules/bluebird/js/release/promise.js:510:31) at Promise._settlePromise (/web/quest/node_modules/bluebird/js/release/promise.js:567:18) at Promise._settlePromiseCtx (/web/quest/node_modules/bluebird/js/release/promise.js:604:10) at Async._drainQueue (/web/quest/node_modules/bluebird/js/release/async.js:138:12) at Async._drainQueues (/web/quest/node_modules/bluebird/js/release/async.js:143:10) at Immediate.Async.drainQueues (/web/quest/node_modules/bluebird/js/release/async.js:17:14) at runCallback (timers.js:651:20) at tryOnImmediate (timers.js:624:5) at processImmediate [as _immediateCallback] (timers.js:596:5) The error is on this line: var sql = 'alter table ' + table + ' change ' + wrapped + ' ' + column.Type; The real reason for the error (a missing column) is very hard to understand. A simple check for column === undefined could be very useful. Closing in favor of #2155, which has some extra discussion on how to fix this.
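A minimal sketch of the guard being suggested, written against the quoted tablecompiler.js line; the error message and the from variable name are assumptions for illustration, not knex's actual code:

// Sketch: fail fast with a readable message instead of dereferencing
// the undefined column description returned for a missing column.
if (column === undefined) {
  throw new Error('renameColumn: column "' + from + '" does not exist in table "' + table + '"');
}
var sql = 'alter table ' + table + ' change ' + wrapped + ' ' + column.Type;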
2025-04-01T04:35:41.628829
2019-07-10T17:27:33
466430312
{ "authors": [ "gDelgado14", "kibertoad" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11394", "repo": "tgriesser/knex", "url": "https://github.com/tgriesser/knex/issues/3342" }
gharchive/issue
Version mismatch between NPM and github releases Noticed that knex on NPM is set to version 0.18.3 while the latest release of knex as per this repo is 0.17.2 Can someone please explain if and how these two versions are related? Not sure if we should be updating to version 0.18.3 since there's no mention of it in this repo other than in the package.json @gDelgado14 You are reading too much into it, I'm just really terrible at remembering to tag versions on GitHub 😅. That said, there is a known regression in 0.18.x currently: https://github.com/tgriesser/knex/issues/3333 If it doesn't concern you, and breaking changes in 0.18 typings are not a problem for you, then by all means 0.18 is superior to 0.17 and update is recommended. 0.18.4 with correct tag has just landed.
2025-04-01T04:35:41.803146
2016-05-06T07:42:32
153396379
{ "authors": [ "BalooUriza", "HumbleBeeBumbleBee" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11395", "repo": "tgwizard/sls", "url": "https://github.com/tgwizard/sls/issues/259" }
gharchive/issue
Avoid scrobbling advertisements Google Play Music Radio now inserts ads every 2-5 songs. It would be nice not to scrobble these fake tracks during the ads, as otherwise the ad quickly becomes the top listened-to track. Hello, thank you for this info. I had the same problem with Spotify. Hopefully there is a good fix. I have been testing this on my phone and it doesn't seem to be a problem. I am going to close this, because the updated version of Google Play doesn't seem to scrobble commercials.
2025-04-01T04:35:41.857404
2024-12-05T11:57:10
2720211756
{ "authors": [ "GiedriusS", "MichaHoffmann", "midhun-mohan", "yeya24" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11396", "repo": "thanos-io/thanos", "url": "https://github.com/thanos-io/thanos/issues/7963" }
gharchive/issue
Receive : Receive 0.36.1 in router / ingestor mode has a memory leak and goes OOM every 2 hrs Thanos, Prometheus and Golang version used: Thanos: 0.36.1, Prometheus 2.54.1 / $ thanos --version thanos, version 0.36.1 (branch: HEAD, revision: 99a5742a15f107d4607d280c825eca5b7f09a253) build user: root@3b4bc83e1037 build date: 20240813-11:33:32 go version: go1.21.13 platform: linux/amd64 tags: netgo Object Storage Provider: Azure Blob Storage What happened: Memory leak for Thanos receive in router / ingestor mode. What you expected to happen: No memory leaks happen. How to reproduce it (as minimally and precisely as possible): Not sure. Full logs to relevant components: Anything else we need to know: Is it the same with 0.37.1? Haven't tried 0.37.1, but I am happy to try it. Will report back today. @GiedriusS It's still the same with the new version. Heap memory keeps increasing. I don't think it's a bug - it's how the Prometheus TSDB works. It truncates the WAL and produces a new block every 2h for optimal compression. That's why you see memory usage increase and drop every 2h. It's not exactly 2 hrs, it's around 2 hrs. If you see this instance, it was started at 13:40 and the memory reached all the way to the roof, which is at 32 GB, and then OOMed at 15:15, which is around 1 hr and 35 minutes. This is not the block size, which is 2 hrs by default in our environment. Adding a bigger window. @MichaHoffmann I am now running with more resources for Thanos receive to see how it behaves after 2 hrs. But I have a few questions: I believe the block is stored in the attached persistent volume and not in memory; if so, what would be causing this increase? Is receive supposed to ingest data to object storage while it receives, or is it creating a block and then uploading it? Where is this block stored? Testing with more resources to pass over 2 hours. Block is stored on disk and mmapped - later it is uploaded to object storage. Head block is stored in memory. You could try decreasing the length of blocks to 1h maybe. To me this looks like a memory leak. Every two hours, the metric prometheus_tsdb_head_series in receive is being flushed, as seen in the below screenshot, but the memory is not cleaned up. It eats up whatever is available and then went OOM at around 03:00 - 04:00. The current config for reference: replication factor is changed to 2, the number of ingestor pods is increased to 6. The only change I have seen is that it took more time to fall over as compared to the previous settings. Explanation of graphs: the first tile shows the memory used by each of the ingestor pods (limit is at 45 GB); the second row shows the rate of series / samples received by receive; the third row shows the metric prometheus_tsdb_head_series as seen from receive and our Prometheus instance; the fourth row shows the metric prometheus_tsdb_head_samples_appended_total as seen from receive and our Prometheus instance. Can you take a heap profile and upload to pprof.me and share here please? It would be interesting to see what it contains! Can you also share the configuration of the ingestor component? Do you by chance have out-of-order writes enabled? Yes.
I do have config for out-of-order writes. Here is the config for the ingestor. - args: - receive - --log.level=info - --log.format=json - --grpc-address=<IP_ADDRESS>:10901 - --http-address=<IP_ADDRESS>:10902 - --tsdb.max-exemplars=1000000 - --tsdb.too-far-in-future.time-window=180s - --tsdb.out-of-order.time-window=1800s - --remote-write.address=<IP_ADDRESS>:19291 - --tsdb.path=/var/thanos/receive - --tsdb.retention=2h - --label=replica="$(NAME)" - --label=receive="true" - --objstore.config=$(OBJSTORE_CONFIG) - --receive.local-endpoint=$(NAME).thanos-receive-ingestor-default.$(NAMESPACE).svc.cluster.local:10901 That looks pretty normal except for the amount of exemplars; can you try without them just for an experiment? This is the config for the router. Yes, I can remove exemplars from this one and do a deploy. - args: - receive - --log.level=info - --log.format=json - --tsdb.max-exemplars=1000000 - --grpc-address=<IP_ADDRESS>:10901 - --http-address=<IP_ADDRESS>:10902 - --remote-write.address=<IP_ADDRESS>:19291 - --receive.replication-factor=2 - --receive.hashrings-file=/var/lib/thanos-receive/hashrings.json - --receive.hashrings-algorithm=ketama - --label=replica="$(NAME)" - --label=receive="true" @MichaHoffmann Here is a pprof from one of the pods @MichaHoffmann Good news. After turning exemplars off, the memory is very, very stable at 1.6 GB 😱 Do you have any idea of how often exemplars are flushed from memory?
2025-04-01T04:35:41.859139
2020-11-30T11:58:54
753417048
{ "authors": [ "tharmes42" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11397", "repo": "tharmes42/vCamDesk", "url": "https://github.com/tharmes42/vCamDesk/issues/5" }
gharchive/issue
crop zoom is not saved Crop zoom is not saved if I reuse the last used webcam. Auto-crop makes this obsolete: https://github.com/tharmes42/vCamDesk/releases/tag/v0.9.7645.37214
2025-04-01T04:35:41.866849
2024-12-26T03:51:12
2759238629
{ "authors": [ "FoolishFool4202", "tharunbirla" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11398", "repo": "tharunbirla/FetchIt", "url": "https://github.com/tharunbirla/FetchIt/issues/21" }
gharchive/issue
Tiktok link error Links from TikTok are not working; it says it is not able to retrieve them. FetchIt relies on the Cobalt API to access and download content from YouTube, TikTok, Tumblr and other platforms. However, following the recent discontinuation of the Cobalt API, the app is no longer able to download videos from these platforms. imputnet/cobalt#860
2025-04-01T04:35:41.868225
2015-06-05T00:50:58
85352694
{ "authors": [ "thatJavaNerd", "zglazer" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11399", "repo": "thatJavaNerd/JRAW", "url": "https://github.com/thatJavaNerd/JRAW/pull/75" }
gharchive/pull-request
Closes thatJavaNerd/JRAW#70 Closes issue #70. Created a removalReason() method and added two test cases. Javadoc probably still needs to be updated. Hmm... Any idea what happened? Travis doesn't use secure environment variables with pull request builds for security reasons. When the test suite sees that it's a Travis build, it searches for these variables and, when it doesn't find them, it throws an error. Don't worry, your code is fine. It's just something Travis does.
2025-04-01T04:35:41.876689
2022-04-01T05:30:39
1189259982
{ "authors": [ "thautwarm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11400", "repo": "thautwarm/Traffy.UnityPython", "url": "https://github.com/thautwarm/Traffy.UnityPython/issues/36" }
gharchive/issue
limited IO operations UnityPython is designed with security concerns in mind, which is to say, Python scripts do not have permission to access IO operations, protected .NET APIs and so on. However, for game use, access to a specific part of the file system is required to save game states. You can only use IO operations within Application.persistentDataPath. For instance, we can provide a module uio: import uio uio.open("a/b/c") # open ${Application.persistentDataPath}/a/b/c
2025-04-01T04:35:41.932220
2016-11-07T09:37:45
187664058
{ "authors": [ "fishmad", "handiwijoyo" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11401", "repo": "the-control-group/voyager", "url": "https://github.com/the-control-group/voyager/issues/116" }
gharchive/issue
How to create drop-down selections in the BREAD form maker Can't see any instructions on how to create selection items for dropdowns in the form maker. Would it be safe to assume we would enter these as JSON code into the empty text field on the far right? @fishmad the examples of how to use additional field options are in the docs https://the-control-group.github.io/voyager/docs/#voyager-docs-database-tools-additional-field-options For a dropdown you can use something like: { "default" : "option1", "options" : { "option1": "Option 1 Text", "option2": "Option 2 Text" } }
2025-04-01T04:35:41.937285
2019-07-21T04:12:02
470740320
{ "authors": [ "ahmedlab311", "fletch3555" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11402", "repo": "the-control-group/voyager", "url": "https://github.com/the-control-group/voyager/issues/4277" }
gharchive/issue
Hello, when I use the rich text box in the Voyager admin panel, it is correct, but on the front-end side it appears as symbols. How can I solve this? Version information Laravel: v#.#.# Voyager: v#.#.# PHP: #.# Database: [type] [version] (e.g. MySQL 8.0) Description A clear and concise description of what the bug is. Steps To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Additional context Add any other context about the problem here. Issue template is required. Please edit and provide the necessary information. Then this can be reopened
2025-04-01T04:35:41.945148
2016-12-27T15:50:48
197721251
{ "authors": [ "Jaquedeveloper", "adriangordon1231", "fletch3555", "jonathanvh", "kiranpalkathait" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11403", "repo": "the-control-group/voyager", "url": "https://github.com/the-control-group/voyager/issues/450" }
gharchive/issue
Many to Many relationship not working Laravel Version: 5.3 Voyager Version: 10.5 PHP Version: 5.6 Database Driver & Version: MySQL Description: I have a table avaliacaos and a table servicos, and in avaliacaos I want to refer to multiple services. I've followed the docs and created a Many to Many relationship and used the Multiple Select for selecting the services (servicos) I want to evaluate (avaliacaos). I've used the following code on my Avaliacaos Model: public function servico(){ return $this->belongsToMany(Servicos::class,'avaliacaos_servicos'); } I've created the pivot table avaliacaos_servicos on my database: But when I add the avaliacao through the BREAD, it returns this: Here's the structure of my avaliacaos table: I want to add an avaliacao item and select multiple Servicos. Therefore, the pivot table should associate the servico with the avaliacao. What did I do wrong? How can I fix this? Am I supposed to do anything else I haven't done in order to achieve the result I need? I've seen a similar issue, https://github.com/the-control-group/voyager/issues/354 . Still not fixed... Does anybody have any idea of what's going on? Did anybody create a Many to Many relationship following only the steps of the docs, just like me, but get the expected result? What did you do differently? Thank you very much. This is not an issue with Voyager. Please refer to Ok, but if I don't put servico on my BREAD table, how will this field appear in my view? Apologies, I misunderstood. Voyager does not currently support Many-to-Many relations in its views, though I believe someone is currently working on that. My instructions above were for using Laravel functionality to build the relation. As a workaround, you could add BREAD to the pivot table, and configure fields using one-to-many relations there. Hi, did you find any approach that could fulfill this functionality? I'm now going through the same situation. Best regards, Has anyone come up with a solution to this problem as yet? SQLSTATE[42S22]: Column not found: 1054 Unknown column 'track_cd.teck_list_id' in 'field list' (SQL: select id, name, track_cd.teck_list_id as pivot_teck_list_id, track_cd.cd_id as pivot_cd_id from cds inner join track_cd on cds.id = track_cd.cd_id where track_cd.teck_list_id in (7, 6, 5, 4, 3, 2, 1))
2025-04-01T04:35:41.948670
2024-04-06T22:21:24
2229444061
{ "authors": [ "AphidRS", "GrantBirki" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11404", "repo": "the-hideout/tarkov-dev", "url": "https://github.com/the-hideout/tarkov-dev/pull/917" }
gharchive/pull-request
Spanish translation.json 50% completed [Spanish translation] It contains the spanish translation for the file "translation.json" at 50% complete. related: https://github.com/the-hideout/tarkov-dev/issues/175 .deploy .deploy
2025-04-01T04:35:42.015332
2018-04-09T00:20:08
312356252
{ "authors": [ "Paarsec", "the3dadvantage" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11405", "repo": "the3dadvantage/Modeling-Cloth", "url": "https://github.com/the3dadvantage/Modeling-Cloth/issues/17" }
gharchive/issue
Sharp corners collider sticking out + tests Hi! I love this addon, so first of all, THANK YOU for making it available! I have an issue with sharp-corner colliders and tried some workarounds, with no success. Unfortunately I don't know Python, even though your plugin certainly is very inspiring as a reason to start studying it. In this case, the cube, what I've tried so far is this: adding subdivision/faces to the collider object doesn't seem to help; adding more margin to the collide value of the cloth doesn't help; adding some collide margin to the collider object seems to help a bit, because of the distance, but I have to use extremely high values that leave a big gap, and the cloth seems to get more slippery on it (?). I attached some gifs of what happens depending on the cloth subdivision. It seems to happen when the faces don't fall flat against the collider. Again, I don't know Python and I apologize if this doesn't help much. Hope this can be useful to anyone willing to check this out, or if there's a workaround I'm very interested in knowing it. Thanks! Looking forward to it! I've made some impressive cloths with it, especially with spheres. It works like a charm. Great! I'd love to see what you came up with. Thanks, I'm very happy to show you some renders! Your add-on is the most promising tool for cloth I've come across lately. I'm on a very tight budget right now and it was a godsend, I really wanted to work with some realtime cloth! Later I will post some recordings of a couple of real-time tests I've made. And, of course, thank you for all the work you've put into this!
2025-04-01T04:35:42.020434
2018-03-28T20:10:22
309513513
{ "authors": [ "bolivaralejandro", "qw3rt33" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11406", "repo": "theJasonHelmick/PS-AutoLab-Env", "url": "https://github.com/theJasonHelmick/PS-AutoLab-Env/issues/146" }
gharchive/issue
Password issues? My PS_AutoLab setup seems to have gone smoothly, but when I try to log into the Win10 VM, it doesn't seem to be accepting the default password set? Anyone know if the password is different or a way to bypass the security altogether? I think I wrote this issue prematurely...turns out AutoLab has a US configuration and I'm in the UK...different keyboard layout! The second character for the password is not &, it is @.
2025-04-01T04:35:42.021907
2016-10-10T21:21:29
182118541
{ "authors": [ "greggoindenver", "theJasonHelmick" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11407", "repo": "theJasonHelmick/PS-AutoLab-Env", "url": "https://github.com/theJasonHelmick/PS-AutoLab-Env/issues/19" }
gharchive/issue
Document use of Lability script with VMware Workstation Specifically, setting the OS configuration to "Hyper-V (unsupported)" and not "Windows 10 x64". Thank you Greg! I'm adding it to the documentation now. I'm going to leave this open for a while just in case no one reads the documentation ;) Updated -- closing issue
2025-04-01T04:35:42.024918
2023-05-12T17:02:12
1707943635
{ "authors": [ "theNullCrown" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11408", "repo": "theNullCrown/FrogAI", "url": "https://github.com/theNullCrown/FrogAI/issues/1" }
gharchive/issue
Mobile website optimization The website is currently not functioning optimally on mobile browsers. The CSS files in vite-project/src/ are all set to adapt when the width is less than 40rem, and it works perfectly on desktop, but in mobile browsers there are some inconsistencies such as: the top bar not extending the full width of the viewport on the landing page; the center column overflowing horizontally on the landing page. Overall, the UI is not optimized for mobile browsers, as it requires a significant amount of scrolling to set the parameters and go through the recommendations, so a better design would be helpful. The Landing Page .tsx and .css are in vite-project/src. Some elements in the Landing Page are borrowing CSS classes from App.css. The components for taking inputs have their styles in Component.css, and the Recommendation boxes have their styles in Recommendations.css.
2025-04-01T04:35:42.080175
2021-09-21T11:32:33
1002285543
{ "authors": [ "Dygear", "theangryangel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11411", "repo": "theangryangel/insim.rs", "url": "https://github.com/theangryangel/insim.rs/issues/2" }
gharchive/issue
API Design @Dygear I've been toying with your idea from Twitter and trying to take into account what a general-purpose insim client might need to look like, so that we could build stuff like PRISM in Rust. I'm definitely still at the tinkering phase and I was hoping you would have some input (as I suspect you're further along with Rust than I am). I kind of like the macro style that projects like serenity and rocket use. I just don't know if that's a good or bad idea. Or we could go for a callback per packet type (which seems horrible given the number of packets in Insim). Or there's just receiving everything in a single handler func/lambda/whatever and letting upstream projects (like PRISM) handle dispatching. Any thoughts? :) I was thinking more along the lines of a client_connected callback when someone connects to the server, like AMX Mod's native functions. Basically the same documentation too. I shall investigate further <3 https://www.lfs.net/forum/post/1969418#post1969418
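To make the third option concrete, here is a hedged Rust sketch of the "receive everything in a single handler" design — the Packet variants are placeholders, not actual insim.rs types:

```rust
/// Hypothetical packet type; real Insim has many more variants.
#[derive(Debug)]
enum Packet {
    Tiny { subtype: u8 },
    MessageOut { text: String },
}

/// One callback for everything; upstream projects (like PRISM)
/// do their own dispatch by matching on the packet.
trait EventHandler {
    fn on_packet(&mut self, packet: &Packet);
}

struct Logger;

impl EventHandler for Logger {
    fn on_packet(&mut self, packet: &Packet) {
        match packet {
            Packet::MessageOut { text } => println!("chat: {}", text),
            other => println!("unhandled: {:?}", other),
        }
    }
}

fn main() {
    let mut handler = Logger;
    handler.on_packet(&Packet::MessageOut { text: "hello".into() });
    handler.on_packet(&Packet::Tiny { subtype: 0 });
}
```

This keeps the library surface tiny and pushes the per-packet ergonomics question to the callers.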
2025-04-01T04:35:42.109008
2018-04-17T15:42:13
315118465
{ "authors": [ "gzg365", "thefab" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11412", "repo": "thefab/tornadis", "url": "https://github.com/thefab/tornadis/issues/46" }
gharchive/issue
Why can't I connect to the Redis DB??? tornadis 0.8.0, tornado 5.0.1, python 3.6.5 class MyVerification(RequestHandler): @tornado.gen.coroutine def get(self): self.redisdb = tornadis.Client(db=1) result = yield self.redisdb.call("SET", self.myId, myValue) self.write(result) Traceback (most recent call last): File "/app/python3/lib/python3.6/site-packages/tornado/web.py", line 1543, in _execute result = yield result File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 1099, in run value = future.result() File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 1107, in run yielded = self.gen.throw(*exc_info) File "/app/tornadoWeb/com/verification.py", line 30, in get result = yield self.redisdb.call("SET", self.myId, myValue) File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 1099, in run value = future.result() File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 1107, in run yielded = self.gen.throw(*exc_info) File "/app/python3/lib/python3.6/site-packages/tornadis/client.py", line 212, in _call_with_autoconnect yield self.connect() File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 1099, in run value = future.result() File "/app/python3/lib/python3.6/site-packages/tornado/gen.py", line 315, in wrapper yielded = next(result) File "/app/python3/lib/python3.6/site-packages/tornadis/client.py", line 101, in connect self.__connection = Connection(cb1, cb2, **kwargs) File "/app/python3/lib/python3.6/site-packages/tornadis/connection.py", line 87, in __init__ self._ioloop) TypeError: __init__() takes 3 positional arguments but 4 were given Just fixed in master (API break in tornado 5). @thefab 👍❤
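For reference, a sketch of the usage pattern after the fix, sharing one client across requests instead of creating one per GET (host/port values here are assumptions):

```python
import tornado.gen
import tornadis
from tornado.web import RequestHandler

# one shared client for the process; tornadis reconnects on demand
redisdb = tornadis.Client(host="localhost", port=6379, db=1, autoconnect=True)

class MyVerification(RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        result = yield redisdb.call("SET", "my-key", "my-value")
        self.write(str(result))
```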
2025-04-01T04:35:42.110839
2023-02-15T12:26:49
1585771972
{ "authors": [ "Fuglen" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11413", "repo": "theflyingbirdsmc/TFB-Network", "url": "https://github.com/theflyingbirdsmc/TFB-Network/issues/126" }
gharchive/issue
[BUILD] Vanilla Spawn Randomly found this build on Planet Minecraft because they sent me an email lol. I think it would be great for a spawn in Vanilla, with the Phoenix on the top or in a tree? https://www.planetminecraft.com/project/steampunk-spawn-1-16-free-download-5849931/ Now we finally have a new spawn for Vanilla :D
2025-04-01T04:35:42.134296
2015-03-25T15:25:26
64297969
{ "authors": [ "cowboyd", "miguelcobain" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11414", "repo": "thefrontside/emberx-select", "url": "https://github.com/thefrontside/emberx-select/pull/9" }
gharchive/pull-request
add blockless version. closes #8 This is working in my app! Tests and README update still missing. Just fixed an edge case. Your content is "item 1, item 2 and item 3" and you have "item 1" selected. If you remove item 1 from content, your selection must be cleared (set to null). Any problem with this PR? How can I help? If you could add the tests, I'll go ahead and add the README. Please let me know if the tests are covering everything. looks good! :+1: thanks @miguelcobain! this has been released as v1.1.2
2025-04-01T04:35:42.178737
2017-10-19T09:49:22
266783307
{ "authors": [ "JanKoehnlein", "meysholdt" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11415", "repo": "theia-ide/yang-lsp", "url": "https://github.com/theia-ide/yang-lsp/pull/79" }
gharchive/pull-request
Fix rename refactoring Signed-off-by: Moritz Eysholdt<EMAIL_ADDRESS> Closing as outdated
2025-04-01T04:35:42.179816
2023-10-21T17:36:39
1955594132
{ "authors": [ "AimenYaseen", "sushma1031", "theinit01" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11416", "repo": "theinit01/PyFuzz", "url": "https://github.com/theinit01/PyFuzz/issues/6" }
gharchive/issue
Add Status Code Filtering Allow users to specify which HTTP status codes are considered successful or failed responses, making it more adaptable to different scenarios. Hello, I'd like to work on this, could you please assign it to me? I want to work on this. Please explain this a little bit more.
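A rough sketch of what the feature could look like — the flag name and helper functions below are hypothetical, not existing PyFuzz API:

```python
def parse_codes(raw: str) -> set:
    """Turn a CLI value like '200,301,403' into {200, 301, 403}."""
    return {int(code) for code in raw.split(",") if code.strip()}

def is_hit(status: int, match_codes: set) -> bool:
    """Count a response as successful iff its status is in the user's set."""
    if not match_codes:       # no filter configured: fall back to a default
        return status != 404
    return status in match_codes

# hypothetical usage: pyfuzz ... --match-codes 200,301,403
print(is_hit(301, parse_codes("200,301,403")))  # True
```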
2025-04-01T04:35:42.192004
2021-12-16T18:25:44
1082512278
{ "authors": [ "gtca", "ivirshup" ], "license": "bsd-3-clause", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11417", "repo": "theislab/anndata", "url": "https://github.com/theislab/anndata/issues/665" }
gharchive/issue
How to express write_attribute(file, name, adata) when writing to root group? Question: what should the API be to write an AnnData or MuData to a store without creating a new group? Currently the API for writing elements is write_elem(group, key, value), which writes value into a new element key in group. How do we specify that we would like to write the element into the current group? Two cases where we want this are writing an AnnData or MuData, though you could also want this for an arbitrary mapping. I figure the API should be either passing key=None or key="/". So this could look like: with h5py.File(pth, "w") as f: write_elem(f, None, adata) An implementation detail here: what do we do if the group we are trying to write to isn't empty? When a key is passed, we delete anything that previously existed at that key and then write the new element. This doesn't work when it's the root of a store, since you generally can't delete that. Ideally our solution doesn't make working around this complicated. My initial opinion would be to go with key="/". One might discuss if we want to also accept key="" to follow the existing semantics (key="obsm" to write into the "obsm" group, key="" to write into the current group). The latter might be error-prone though, so we can go exclusively with the former. For the second question, we also have to think about whether we want this to behave in the same way as anndata behaves currently when writing to a file with content (it does re-write it fully). If e.g. write_elem(f, "obsm", data) overwrites what was in this group before, we should probably overwrite all the groups with write_elem(f, "/", adata) but also delete extra ones then. "/" may actually break some semantics of hdf5, since it refers to the root group. f = h5py.File("pbmc.h5ad") uns = f["uns"] uns["/"].keys() <KeysViewHDF5 ['X', 'obs', 'obsm', 'obsp', 'raw', 'uns', 'var', 'varm']> This works for writing to the root of a store, but does not generalize to writing to the current group. Zarr seems to allow "" to refer to the current group, but h5py uses "." (which does make a lot of sense). h5py does not allow "..". I have gone with write_elem(f, "/", adata) on master.
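A short usage sketch of the API that was settled on (the import location of write_elem has moved between anndata versions, so treat that line as an assumption):

```python
import h5py
import numpy as np
import anndata as ad
from anndata.experimental import write_elem  # location varies by version

adata = ad.AnnData(X=np.ones((3, 2)))

with h5py.File("out.h5ad", "w") as f:
    write_elem(f, "/", adata)   # write the AnnData into the root group
    # vs. writing an element into its own sub-group:
    # write_elem(f, "layers", dict(adata.layers))
```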
2025-04-01T04:35:42.197117
2024-11-02T22:06:16
2630825691
{ "authors": [ "eroell" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11418", "repo": "theislab/ehrdata", "url": "https://github.com/theislab/ehrdata/pull/62" }
gharchive/pull-request
Towards v0.1.0 Towards a Prototype 0.1.0 This PR entails multiple in-sync, forward-moving developments. It should be a major step towards (although not yet completing) a prototype with limited, but partially stable, functionality for further testing. [x] Fixes #28 [x] Add/fix tests for mimic_iv_omop, gibleed_omop, synthea27nj_omop [x] Fixes #60 Only if all units for a feature are the same; otherwise raises an error [ ] Fixes #61 [x] Use download slightly adapted/fixed from ehrapy.data._dataloader.py in ehrdata.dt.dataloader since #64 for omop demo datasets [x] Use logging instead of print and rich.print [x] Able to handle column names of different capitalization in tables; internally puts all column names to lowercase. The failing test with the pre-release candidates for 1.1.4 (1.1.4.dev1919 currently) of duckdb for table drug_exposure in mimic-iv-demo-data-in-the-omop-common-data-model-0.9 comes from the use of a % in the column drug_source_value of drug_exposure, e.g. row 14299 with the value Syringe (0.9% Sodium Chloride) 1 Syringe. It can be fixed by adding the escapechar="%" argument to duckdb.read_csv. Consider raising this with duckdb, as it works with the latest stable release, 1.1.3.
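A sketch of the workaround described above for the literal % in drug_source_value (the file path is an assumption):

```python
import duckdb

# escape the '%' so values like "Syringe (0.9% Sodium Chloride)" parse
rel = duckdb.read_csv("drug_exposure.csv", escapechar="%")
print(rel.limit(5).fetchall())
```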
2025-04-01T04:35:42.200233
2021-03-25T21:32:39
841325198
{ "authors": [ "ivirshup", "simjbaum" ], "license": "BSD-3-Clause", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11419", "repo": "theislab/scanpy", "url": "https://github.com/theislab/scanpy/issues/1761" }
gharchive/issue
read_mtx obs and var mixed Hi, I was reading an mtx file from here: https://www.ebi.ac.uk/gxa/sc/experiments/E-HCAD-4/downloads adata = sc.read_mtx("./data/mtx/E-HCAD-4.aggregated_filtered_counts.mtx") AnnData object with n_obs × n_vars = 25052 × 606606 sc.__version__ '1.7.1' When loading the mtx file, the obs and vars are mixed up. That happened with another mtx file before. I was wondering if a fix already exists to specify the obs and vars (or switch them if necessary). Thanks ![image](https://user-images.githubusercontent.com/7283790/112545551-a19f4280-8db8-11eb-8e0d-7d56ee0443b5.png) You can transpose the AnnData object with adata.T or adata.transpose(). The issue is a difference in convention: R defaults to Fortran-order arrays, while Python defaults to C-order arrays.
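Applying the suggested fix at read time looks like this (a sketch; the transpose simply swaps obs and var):

```python
import scanpy as sc

adata = sc.read_mtx("./data/mtx/E-HCAD-4.aggregated_filtered_counts.mtx").T
print(adata)  # obs and var are now swapped relative to the raw read
```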
2025-04-01T04:35:42.217991
2017-06-20T05:21:24
237095505
{ "authors": [ "YaManicKill", "astorije" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11420", "repo": "thelounge/lounge", "url": "https://github.com/thelounge/lounge/pull/1240" }
gharchive/pull-request
Improve the PR tester script a bit I know it's probably not the cleanest / most error-proof solution, but it does the job in most cases. What I wanted to add most obviously is the git rebase master part, to make sure what we are testing with these PRs is as up-to-date with master as possible. Of course, if your master is not up-to-date itself, there will be trouble, but devs' masters should always be up-to-date... right? 😉 Anyway, I'm using this myself a lot to test PRs when reviewing, so I'd surely appreciate those! but I doubt many people other than you use this script @astorije, so go for it. Yeah, and I'd be totally willing to improve that script if other reviewers find this useful (how can you review without it? I literally use it multiple times a week 😅), I'd be totally happy to make it more resilient. Just a thing to remember: if the rebase fails, the script will continue to run the other commands. Actually, it would stop thanks to the set -e at the top, which is exactly why I'm not too concerned about having it fail. how can you review without it I mean, I have an equivalent, but it's not exactly the same. It's just a 1-liner. Actually, it would stop thanks to the set -e at the top Huh, TIL
2025-04-01T04:35:42.222680
2023-04-18T16:04:32
1673435927
{ "authors": [ "devformatters", "robertu7" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11421", "repo": "thematters/matters-web", "url": "https://github.com/thematters/matters-web/issues/3360" }
gharchive/issue
Crowdin workflow sync Tasks [x] Sync the current workflow of copies management [x] Set a better workflow Ref https://matterslab.slack.com/archives/C88CK7Q7L/p1683273470856389?thread_ts=1682656947.308119&cid=C88CK7Q7L https://www.notion.so/Crowdin-101-960b7ef1f1d44c439119c416463ee9e8
2025-04-01T04:35:42.229287
2024-03-23T18:27:44
2203994524
{ "authors": [ "Hann1bal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11422", "repo": "themesberg/flowbite-react", "url": "https://github.com/themesberg/flowbite-react/issues/1312" }
gharchive/issue
DarkModeToggle button doesn't show the light theme icon when switching theme [x] I have searched the Issues to see if this bug has already been reported [x] I have tested the latest version Steps to reproduce: Add <DarkThemeToggle iconLight={FaRegMoon} iconDark={IoSunny}/> to the navbar. Click the button: the dark mode icon is shown, but the light mode icon is not. Current behavior: (screenshots of light mode and dark mode omitted). Expected behavior: (screenshots of the expected dark and light icons omitted). I think the SVG needs to change the value of the dark: parameter, because adding hidden before dark: has no effect and doesn't change the currently active SVG. After playing around with the code I came to the conclusion that changing dark:hidden to dark:block changes the icon. Context: browser: Chrome, latest release version; node version >18; package.json: { "name": "graphedior", "private": true, "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "tsc && vite build", "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0", "preview": "vite preview" }, "dependencies": { "@microsoft/signalr": "^8.0.0", "autoprefixer": "^10.4.18", "axios": "^1.6.2", "flowbite-react": "^0.7.3", "mkcert": "^3.2.0", "mobx": "^6.12.0", "mobx-react-lite": "^4.0.5", "postcss": "^8.4.35", "react": "^18.2.0", "react-contexify": "^6.0.0", "react-dom": "^18.2.0", "react-icons": "^4.12.0", "react-notifications-component": "^4.0.1", "react-router": "^6.21.0", "react-router-dom": "^6.21.0", "react-select": "^5.8.0", "reactflow": "^11.10.1", "tailwindcss": "^3.4.1" }, "devDependencies": { "@types/react": "^18.2.43", "@types/react-dom": "^18.2.17", "@typescript-eslint/eslint-plugin": "^6.14.0", "@typescript-eslint/parser": "^6.14.0", "@vitejs/plugin-react": "^4.2.1", "eslint": "^8.55.0", "eslint-plugin-react-hooks": "^4.6.0", "eslint-plugin-react-refresh": "^0.4.5", "typescript": "^5.2.2", "vite": "^5.0.8", "vite-plugin-mkcert": "^1.17.1" } } Sorry, I found the error. 1. First I modified the Tailwind config like this: export default { content: [ './src/**/*.{js,jsx,ts,tsx}', 'node_modules/flowbite-react/**/*.{js,jsx,ts,tsx}', ], theme: { extend: {}, }, plugins: [ // ... require('flowbite/plugin'), ], }; 2. Generate the Tailwind CLI CSS file following the instructions in the official Tailwind docs. 3. Do not use the Tailwind CDN in your code, because it breaks some of the library's logic.
2025-04-01T04:35:42.244760
2020-09-19T06:15:25
704815300
{ "authors": [ "Niekon01", "kristykjlee", "webdeveloperswj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11426", "repo": "thenewboston-developers/Account-Manager", "url": "https://github.com/thenewboston-developers/Account-Manager/issues/304" }
gharchive/issue
When user1 adds user2 as a friend, user1's name should be displayed in user2's friends list Bug Description When user1 adds user2 as a friend, user1's name is not displayed in user2's friends list. Steps to Reproduce Add user2 as a friend of user1. Go to user2 and check the friends list; user1's name is not displayed. Actual Result When user1 adds user2 as a friend, user1's name is not displayed in user2's friends list. Expected Result When user1 adds user2 as a friend, user1's name should be displayed in user2's friends list. OS version Windows This does not need to happen. If people you don't know add you as a friend, your friends list will be full of accounts and people unknown to you. This has been moved to the Design Repository
2025-04-01T04:35:42.250026
2020-11-13T05:48:49
742166132
{ "authors": [ "buckyroberts", "webdeveloperswj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11427", "repo": "thenewboston-developers/Account-Manager", "url": "https://github.com/thenewboston-developers/Account-Manager/issues/470" }
gharchive/issue
[Linux] TNB_VER_28: Account number can be used as a signing key (major security issue) Bug Description The account number is accepted as a signing key. Steps to Reproduce 1. Install TNB Ver 28. 2. Launch the TNB app on a Linux machine. 3. Click on My Accounts (+). 4. Select Create New Account. 5. Enter a nickname & click the Create button. 6. Copy that account number. 7. Click on My Accounts (+). 8. Select Add Existing Account. 9. Enter a nickname & paste the account number into the signing key field. 10. Click the Add button. Expected behavior The account number should not be accepted as a signing key. Actual behavior The account number is accepted as a signing key. Screenshots/Recordings https://prnt.sc/vieb2p https://prnt.sc/viebtu OS and Browser OS: [Linux] Browser: [Chrome] Closing, duplicate of - https://github.com/thenewboston-developers/Account-Manager/issues/500
2025-04-01T04:35:42.252332
2021-04-13T00:12:52
856450850
{ "authors": [ "angle943", "jamessspanggg", "thesanjeevsharma" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11428", "repo": "thenewboston-developers/Website", "url": "https://github.com/thenewboston-developers/Website/issues/1702" }
gharchive/issue
AppWide UI Update - Update Headers here is our new style guide: https://www.figma.com/file/6AGSvP6DJIIvy5Ayyp1a6U/Design-System?node-id=12%3A4 as you can see from that Figma link, we will have standards when it comes to headers (h1, h2, h3, h4) and what they are calling Display. Find an elegant way to tackle this issue app-wide. Make sure you test each page that will be affected to ensure that it will not be broken @angle943 Figma link is not working! @thesanjeevsharma I'll bring it up with Kristy. Thanks for bringing that to our attention! hi @kristykjlee do we have a color system as well? I realised while working with the projects that some colors do not exist within the codebase.
2025-04-01T04:35:42.253669
2021-09-20T12:49:10
1000930751
{ "authors": [ "jamessspanggg" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11429", "repo": "thenewboston-developers/Website", "url": "https://github.com/thenewboston-developers/Website/pull/2056" }
gharchive/pull-request
[#2055] Remove developer portal Fixes #2055 @buckyroberts when navigating to the https://developer.thenewboston.com/ home page, it works fine. There is a problem, however, when redirecting to the subroutes, e.g. https://developer.thenewboston.com/whitepaper or https://developer.thenewboston.com/projects
2025-04-01T04:35:42.254622
2016-01-11T12:45:52
125936488
{ "authors": [ "ScottSpittle", "thenikso" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11430", "repo": "thenikso/angular-inview", "url": "https://github.com/thenikso/angular-inview/issues/81" }
gharchive/issue
in-view-container using window height When using in-view-container to limit the number of in-view items to 7 horizontally on a 1080p screen, in-view seems to think that all elements within the window height are visible, instead of just the 7 in the 265px-height element. A code example would be useful to debug the issue.
2025-04-01T04:35:42.276897
2016-01-13T21:58:33
126525625
{ "authors": [ "MichaelButkovic", "benghaziboy", "mparent61" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11431", "repo": "theonion/django-bulbs", "url": "https://github.com/theonion/django-bulbs/pull/100" }
gharchive/pull-request
2025-04-01T04:35:42.281808
2022-06-28T09:07:28
1287032168
{ "authors": [ "Fabilin", "francoisno" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11432", "repo": "theopenconversationkit/tock", "url": "https://github.com/theopenconversationkit/tock/issues/1380" }
gharchive/issue
[Web] Allow sending an image's width and height Large images currently cause resizing issues in tock-react-kit once they fully load (most apparent being the scrollbar jumping up). Specifying the width and height when sending an image from the server would ensure the corresponding img tags get the right size from the start. Hello @Fabilin, good idea. As long as it remains optional, of course. How would you implement this: Specify size in pixels (like <img> attributes), or CSS properties enabling more units like em, %, etc.? Specify absolute size or max size? What happens if the image is smaller than the configured size? Specify size per image (programmatically or in Tock Studio / Stories / Add Media), per bot (Settings / Application Configuration) and/or globally for the whole Tock platform instance (envvar)? cc @pi-2r @correi-f @elebescond I had not considered setting the dimensions through CSS; it is true that this would allow for a lot more flexibility, and possibly responsiveness as well. On the other hand, here are the benefits I can think of from specifying size via the HTML attributes: easier validation and typing (Kotlin API), as only whole numbers would be accepted; no risk of CSS injection; allows more styling in tock-react-kit (e.g. using client-side options to force all images to be the same width would work as intended and keep the aspect ratio if dimensions are specified through HTML, but I believe it would be completely ignored if we specified the size in CSS). For these reasons I would still recommend setting the HTML width and height, if only for simplicity's sake.
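An illustrative TypeScript sketch (not the actual tock-react-kit code) of what forwarding the optional dimensions to the img tag could look like:

```tsx
import React from "react";

// hypothetical message shape; width/height are the proposed optional fields
interface ImageMessage {
  url: string;
  title?: string;
  width?: number;
  height?: number;
}

export const BotImage = ({ url, title, width, height }: ImageMessage) => (
  // with width/height set, the browser reserves the final box before the
  // file loads, so the scrollbar no longer jumps
  <img src={url} alt={title ?? ""} width={width} height={height} />
);
```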
2025-04-01T04:35:42.294483
2016-12-16T08:59:55
196010236
{ "authors": [ "marc-mabe", "theory" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11433", "repo": "theory/sqitch", "url": "https://github.com/theory/sqitch/issues/322" }
gharchive/issue
Warning: Calling Utils::JSON.load is deprecated! I'm not sure if this is the right issue tracker but on installing sqitch with Homebrew I get the following message: Warning: Calling Utils::JSON.load is deprecated! Use JSON.parse instead. /usr/local/Homebrew/Library/Taps/theory/homebrew-sqitch/Formula/sqitch_dependencies.rb:20:in `block in install' Please report this to the theory/sqitch tap! Thanks Thanks. Moved to /theory/homebrew-sqitch/issues/27.
2025-04-01T04:35:42.307263
2015-01-26T13:56:12
55484542
{ "authors": [ "Ortix92", "jasonlewis" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11434", "repo": "thephpleague/fractal", "url": "https://github.com/thephpleague/fractal/issues/152" }
gharchive/issue
Strange behaviour when including a model in a transformer using Laravel I have 2 transformers: EpisodeTransformer and ShowTransformer. A show hasMany episodes. In my EpisodeTransformer I have the following method to include the show in the response: public function includeShow(Episode $episode) { $show = $episode->show; return $this->item($show, new ShowTransformer); } However, Laravel throws the following exception: call_user_func() expects parameter 1 to be a valid callback, class 'Animekyun\Transformers\ShowTransformer' does not have a method 'includeShow' So I just put an empty method in my ShowTransformer: public function includeShow() { } Note that I do have a transform() method in my ShowTransformer class. And then everything works. I don't think this is how it should work? What is going on? Why do I need to include that method? It's not even mentioned in the docs. You shouldn't need to, unless you're defining the available includes on both of the transformers.
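Concretely, the maintainer's hint translates to declaring the include on the transformer itself, along the lines of this sketch:

```php
<?php

use League\Fractal\TransformerAbstract;

class EpisodeTransformer extends TransformerAbstract
{
    // tells Fractal that includeShow() lives on *this* transformer
    // (use $availableIncludes instead for opt-in includes)
    protected $defaultIncludes = ['show'];

    public function transform(Episode $episode)
    {
        return ['id' => (int) $episode->id];
    }

    public function includeShow(Episode $episode)
    {
        return $this->item($episode->show, new ShowTransformer);
    }
}
```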
2025-04-01T04:35:42.310094
2023-11-15T20:35:00
1995524904
{ "authors": [ "codespearhead", "murraycollingwood", "zerkms" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11435", "repo": "thephpleague/oauth2-client", "url": "https://github.com/thephpleague/oauth2-client/issues/1017" }
gharchive/issue
Basic example improvements Hello, I have been working with this page for a few weeks now: https://oauth2-client.thephpleague.com/usage/ I would like to suggest a couple of modifications: // Try to get an access token using the authorization code grant. $accessToken = $provider->getAccessToken('authorization_code', [ 'code' => $_GET['code'] ]); // We have an access token, which we may use in authenticated // requests against the service provider's API. echo 'Access Token: ' . $accessToken->getToken() . "<br>"; echo 'Refresh Token: ' . $accessToken->getRefreshToken() . "<br>"; echo 'Expired in: ' . $accessToken->getExpires() . "<br>"; echo 'Already expired? ' . ($accessToken->hasExpired() ? 'expired' : 'not expired') . "<br>"; Can this be changed to: // Try to get an access token using the authorization code grant. $tokens = $provider->getAccessToken('authorization_code', [ 'code' => $_GET['code'] ]); // We have an access token, which we may use in authenticated // requests against the service provider's API. echo 'Access Token: ' . $tokens->getToken() . "<br>"; echo 'Refresh Token: ' . $tokens->getRefreshToken() . "<br>"; echo 'Expired in: ' . $tokens->getExpires() . "<br>"; echo 'Already expired? ' . ($tokens->hasExpired() ? 'expired' : 'not expired') . "<br>"; In what way is it an improvement? It seems that the variable $accessToken contains both the access token and the refresh token, so it should've been called $oauthTokens instead.
2025-04-01T04:35:42.324612
2021-03-18T23:22:42
835358359
{ "authors": [ "fr1t2", "jackalyst" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11436", "repo": "theqrl-community/tipbot", "url": "https://github.com/theqrl-community/tipbot/issues/13" }
gharchive/issue
[IMPROVEMENT] - Faucet pull countdown/timer in negative faucet response Right now, when someone issues the +faucet command after they've already withdrawn within a 24-hour period, it responds that they've already pulled from the faucet recently. As it can be difficult to discern when the faucet will be ready to pull from again, adding a timer to the faucet response to indicate the next time someone can initiate the faucet command would help. Suggested by Discord user MrTr3 Closed with #14
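A hypothetical sketch of the countdown logic (Python purely for illustration — the bot's actual stack may differ):

```python
from datetime import datetime, timedelta

FAUCET_COOLDOWN = timedelta(hours=24)

def time_until_next_pull(last_pull: datetime, now: datetime = None) -> timedelta:
    """How long until the 24h faucet window reopens (zero if it already has)."""
    now = now or datetime.utcnow()
    return max(last_pull + FAUCET_COOLDOWN - now, timedelta(0))

# e.g. append to the negative response:
# "You already used the faucet; try again in {remaining}."
```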
2025-04-01T04:35:42.356333
2023-05-24T21:18:51
1724752763
{ "authors": [ "Polygonalr", "c4em", "thesadru" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11437", "repo": "thesadru/genshin.py", "url": "https://github.com/thesadru/genshin.py/issues/120" }
gharchive/issue
cookie_token no longer present in cookies Seems like they've updated the cookies on both hoyoverse.com and hoyolab.com to not include the cookie_token anymore, thus breaking code claiming. The cookie from hoyolab.com now looks like this: G_ENABLED_IDPS=google; ltoken=xxx; ltuid=xxx; mi18nLang=en-us; DEVICEFP_SEED_ID=xxx; DEVICEFP_SEED_TIME=xxx; _MHYUUID=xxx; DEVICEFP=xxx Just got hit with this problem when my old cookie_tokens stopped working. It seems like HoYoverse is in the midst of rolling out a new cookie system with the HttpOnly flag for cookie_token_v2 set to true (and thus it can't be grabbed with JavaScript 😕). Grabbing cookies for code claiming with the library just got more tedious, and the wrapper for code claiming has to be updated to accommodate the new API. I'm pretty sure the library already supports cookie_token_v2 and also getting http-only cookies from the browser. What specifically is failing?
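For anyone landing here, a rough sketch of passing the v2 cookies to the client — the cookie names come from this thread, but whether your genshin.py version exposes set_cookies/redeem_code exactly like this is an assumption worth checking against the docs:

```python
import asyncio
import genshin

async def main():
    client = genshin.Client()
    # v2 cookie names as discussed above (assumption: accepted as kwargs)
    client.set_cookies(
        ltuid_v2="...", ltoken_v2="...",
        account_id_v2="...", cookie_token_v2="...",
    )
    await client.redeem_code("GENSHINGIFT")

asyncio.run(main())
```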
2025-04-01T04:35:42.359083
2022-03-15T12:55:40
1169638403
{ "authors": [ "LucasMSg" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11438", "repo": "thesandboxgame/sandbox-smart-contracts", "url": "https://github.com/thesandboxgame/sandbox-smart-contracts/pull/604" }
gharchive/pull-request
Task/tsbbloc 524 gems catalyst refac Description Checklist: [ ] Pull Request references Jira issue [ ] Pull Request applies to a single purpose [ ] I've added comments to my code where needed [ ] I've updated any relevant docs [ ] I've added tests to show that my changes achieve the desired results [ ] I've reviewed my code [ ] I've followed established naming conventions and formatting [ ] I've generated a coverage report and included a screenshot [ ] All tests are passing locally now working on Task/tsbbloc 524 gems catalyst refac2
2025-04-01T04:35:42.392132
2023-12-08T10:44:56
2032417338
{ "authors": [ "kdruart29", "thesps" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11439", "repo": "thesps/conifer", "url": "https://github.com/thesps/conifer/issues/63" }
gharchive/issue
RF results Hello there, I am currently experimenting with RandomForests from sklearn on a ZCU102 board. I first tried the classic HLS/Vivado/Vitis flow but was struggling with the results. I tried using pynq + the HLS accelerator and my results are still weird. So, for the example I am using the basic wine dataset from sklearn, with an RF (100 trees with a max depth of 100). With sklearn I obtain these predictions (using clf.predict_proba), which are fine: [0.97 0.03 0. ] [0.93 0.05 0.02] [0.06 0.12 0.82] [0.91 0.08 0.01] [0.07 0.85 0.08] Then, with the model converted and compiled I obtain this (using model.decision_function): [ 8.59375000e-01 6.23525391e+01 2.60214844e+01] [ 7.51953125e-01 -3.56474609e+01 2.61230469e+01] [ 1.75781250e-01 8.43525391e+01 2.62246094e+01] [ 7.03125000e-01 -8.66474609e+01 2.63261719e+01] [ 2.83203125e-01 -9.96474609e+01 2.64277344e+01] These results are strange and I don't understand them; what would be the explanation for them? Finally, on the PL, here are the results provided by accelerator.decision_function(np.float32(X_test)): [0.859375 0. 0. ] [0.7519531 0. 0. ] [0.17578125 0. 0. ] [0.703125 0. 0. ] [0.28320312 0. 0. ] These correspond to the preceding results given by the converted model. For the conversion I used the examples: clf = RandomForestClassifier(n_estimators=100, max_depth=100) clf.fit(X_train, y_train) cfg = conifer.backends.xilinxhls.auto_config() accelerator_config = {'Board' : 'zcu102', 'InterfaceType': 'float'} cfg['AcceleratorConfig'] = accelerator_config cfg['OutputDir'] = 'prj_{}'.format(int(datetime.datetime.now().timestamp())) model = conifer.converters.convert_from_sklearn(clf, cfg) model.compile() y_hls = model.decision_function(X_test) y_skl = clf.predict_proba(X_test) model.build(bitfile=True, package=True) What am I doing wrong? Thank you in advance. Hi, thanks for reaching out. I think there are a few things going on, but it seems to me that the Random Forest conversion is not working correctly, at least for multi-class problems. I tried working with the same wine dataset and see similar nonsense results to yours, and I can see 'missing' trees in the converted model firmware under firmware/parameters.h (missing tree indices). For a binary classification example the results looked more compatible between sklearn and the conifer HLS. One effect that is smaller, but would eventually need to be taken into account for this dataset, is the data types. The defaults probably don't work well for the features in this case. In general this is dataset dependent, but for the wine example a better configuration might be: # Create a conifer config cfg = conifer.backends.xilinxhls.auto_config(granularity='full') cfg['InputPrecision'] = 'ap_fixed<18,16>' cfg['ThresholdPrecision'] = 'ap_fixed<18,16>' cfg['ScorePrecision'] = 'ap_fixed<18,8,AP_RND_CONV,AP_SAT>' Besides your issue, it seems that you used the accelerator support and ran on a device. Since this is a quite new feature I'm also looking for feedback on that part of the workflow. Was it easy enough to make the bitfile and run it on the board? Hi! Actually the conversion is doing great; the trees are correctly saved in the parameters.h file. The issue is with how RFs and BDTs are implemented in sklearn. In sklearn, BDTs are converted into subtrees for each class in each estimator, whereas an RF uses a single tree, so CDT_rolled.cpp can't do the other classes because it expects subtrees for each class.
I solved this by modifying the way the value field is converted and adapting the BDT header and cpp file to accept the multiclass RF. The issue is that it's not compatible with BDTs now, just RFs in my case. I plan on committing my code when it's fully compatible. The accelerator workflow is surprisingly easy and works very well. The only difficulty was finding a compatible PYNQ image for my ZCU102.
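The structural difference described above is easy to verify directly in sklearn (a small sketch):

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = load_wine(return_X_y=True)

gbdt = GradientBoostingClassifier(n_estimators=5).fit(X, y)
rf = RandomForestClassifier(n_estimators=5).fit(X, y)

print(gbdt.estimators_.shape)               # (5, 3): one tree per class per stage
print(len(rf.estimators_))                  # 5: one multi-class tree each
print(rf.estimators_[0].tree_.value.shape)  # (n_nodes, 1, 3): per-class counts in one tree
```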
2025-04-01T04:35:42.396556
2017-06-26T16:06:29
238592213
{ "authors": [ "D3m0n92", "bddckr", "thestonefox" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11440", "repo": "thestonefox/VRTK", "url": "https://github.com/thestonefox/VRTK/issues/1324" }
gharchive/issue
VRTK.SDK_OculusHeadset.GetHeadset NullReferenceException With the latest update, just starting my scene I immediately get this repeating error: NullReferenceException: Object reference not set to an instance of an object VRTK.SDK_OculusHeadset.GetHeadset () (at Assets/VRTK/SDK/Oculus/SDK_OculusHeadset.cs:55) VRTK.SDK_OculusHeadset.GetHeadsetCamera () (at Assets/VRTK/SDK/Oculus/SDK_OculusHeadset.cs:69) VRTK.VRTK_SDK_Bridge.GetHeadsetCamera () (at Assets/VRTK/SDK/VRTK_SDK_Bridge.cs:503) VRTK.VRTK_TransformFollow.OnCamPreRender (UnityEngine.Camera cam) (at Assets/VRTK/Scripts/Utilities/ObjectFollow/VRTK_TransformFollow.cs:119) UnityEngine.Camera.FireOnPreRender (UnityEngine.Camera cam) (at C:/buildslave/unity/build/artifacts/generated/common/runtime/CameraBindings.gen.cs:719) When I change the scene, the error stops. You need to provide steps to reproduce the error in an example scene. Sounds like you didn't set up the needed SDK Setups. Check the example scenes. Nope. I solved it by copying an SDK Manager from a sample scene and copying my scripts over; it really didn't change anything, though (in fact, after some update everything worked properly). Go to example 005, tick onPersistentOnLoad and press Play; the VRTK.SDK_OculusHeadset.GetHeadset() error starts spamming. Sounds like #1316? The PR for that one should reduce the potential for errors in case you were on a previous version and it was working fine. We recommend switching to additive scenes or putting the SDK stuff into every scene, though, as you can read in that issue. @D3m0n92 we're phasing out persist on load as it doesn't work as we expect it to.
2025-04-01T04:35:42.413011
2015-03-23T17:36:28
63785545
{ "authors": [ "dvstudio", "thetylerwolf" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11441", "repo": "thetylerwolf/sketchfindatagen", "url": "https://github.com/thetylerwolf/sketchfindatagen/issues/2" }
gharchive/issue
Not working? Apologies if I'm doing something wrong... I'm selecting a shape, then Plugins > Generate Chart... and.. nothing... using Sketch 3.2.2 on OSX 10.10. It seems to have loaded correctly.. wondering if I missed something. Anyone else? Thanks in advance. --Dave Does a prompt come up when you select "generate chart"? Oh hey.. thanks for checking in… I'm not getting any of the prompts as you laid out in the instructions. Weird… Very unusual. I have two ideas. When was the last time you closed and re-opened Sketch? That's resolved issues for me in the past. If you can send me an error log, that would be a huge help to resolve the problem. Can you follow this to capture the error log? http://www.bohemiancoding.com/sketch/support/developer/01-introduction/03.html Wow... I totally missed your last message... Just tried it again... Here's the log: 3/31/15 11:17:17.021 AM Sketch[4737]: Couldn't #import script './functions/inputs.js' 3/31/15 11:17:17.022 AM Sketch[4737]: Couldn't #import script './globals/chartTypes.js' 3/31/15 11:17:17.393 AM Generate Chart (Sketch Plugin)[4737]: ReferenceError: Can't find variable: askForInput. Plugin "Generate Chart", line 7. » input = askForInput('Generate how many [points,series]?'); « 3/31/15 11:17:17.394 AM Sketch[4737]: Exception: { column = 20; line = 7; sourceURL = "/Users/druehontz/Library/Application Support/com.bohemiancoding.sketch3/Plugins/Generate Chart.sketchplugin"; } Hope this helps! That is very helpful. It looks like you grabbed the "Generate Chart.sketchplugin" file and put it in your plugins folder. The plugin has a few core dependencies between both the generate table and generate chart functionalities (files in the "functions" and "globals" folders), so you need the whole project folder in your Plugins folder for it to work. If you want to reduce the number of files you take, I can point out which ones you don't need. Awesome! All good! Thanks man!!!
2025-04-01T04:35:42.416380
2020-03-03T10:41:46
574574268
{ "authors": [ "joshuagl", "lukpueh" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11442", "repo": "theupdateframework/tuf", "url": "https://github.com/theupdateframework/tuf/pull/989" }
gharchive/pull-request
Use name for loggers, per convention Fixes issue #: N/A Description of the changes being introduced by the pull request: As in secure-systems-lab/securesystemslib#212 replace hard-coded logger names with the conventional pattern logging.getLogger(__name__). Please verify and check that the pull request fulfills the following requirements: [ ] The code follows the Code Style Guidelines [ ] Tests have been added for the bug fix or new feature [ ] Docs have been added for the bug fix or new feature Hooray for passing AppVeyor tests (see #985). :)
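For reference, the pattern being adopted is simply:

```python
import logging

# one logger per module, named after the module itself
logger = logging.getLogger(__name__)  # e.g. "tuf.repository_tool"

logger.warning("records now carry the originating module name")
```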
2025-04-01T04:35:42.423050
2017-10-10T13:58:41
264238983
{ "authors": [ "crysallis", "jneale" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11443", "repo": "thewhitespace/UI16-Developer-Patch", "url": "https://github.com/thewhitespace/UI16-Developer-Patch/issues/3" }
gharchive/issue
Wording on module context menu Just noticed that you have "Edit Module" in your readme images; however, it only says "Edit" on the actual context menu. Thanks. Updated everything in 2.4.
2025-04-01T04:35:42.426344
2015-08-03T15:28:45
98768386
{ "authors": [ "JohnSmith-LT", "ronm123" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11444", "repo": "thexerteproject/xerteonlinetoolkits", "url": "https://github.com/thexerteproject/xerteonlinetoolkits/issues/367" }
gharchive/issue
Hide [Logout] button in guest mode It doesn't really make sense in this mode or do anything, so it should be hidden or disabled... This has always been the case, so I don't think it's a biggie. But perhaps detect guest mode and, if so, try to close the window on clicking Logout?
2025-04-01T04:35:42.462176
2018-03-04T21:15:06
302122859
{ "authors": [ "allforabit", "postspectacular" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11445", "repo": "thi-ng/umbrella", "url": "https://github.com/thi-ng/umbrella/issues/13" }
gharchive/issue
Support context in hdom Maybe there's already a way of achieving this using the existing API or via native ES6 methods, but I am looking to pass values down deeply into the hdom tree. Thus far I have explored ideas of a higher-order function to mimic behaviour found in the redux connect method. The idea of this is to wrap the component and pass in relevant props to the child. To do this, it uses context to pull out the store which has been injected higher up in the component tree (using a HOC called a provider). It's this second part that I'm hitting a stumbling block with. From digging into the code I can see that the bulk of the work that is carried out to render the hiccup with hdom is done in the normalizeTree function. Would this be a good place to add the possibility to inject globally available state (well, global to the hdom tree)? I understand that an option would be to make the state available in an outer scope, but I am hoping to avoid this so that everything is as self-contained as possible, in a similar way to redux components. I don't see a way of adding labels here, I'm wondering if I have access? Thanks! Hey @allforabit , did you have a look at the login form example I posted last week? This shows the overall pattern of using a central state (like redux does too), but also shows how derived views attached to the state work within components. For brevity the example uses a few global vars, but this can be easily refactored to pass views as args to component functions instead (or have a component create its own derived view). This still keeps the state centralized, since views only act as readonly pointers to somewhere within the larger app state. Let me know if that helps. If not I will make up another example... Also, related, here's some more info about using atoms and derived views: https://github.com/thi-ng/umbrella/tree/master/packages/atom#derived-views Yes I had a look at this and I can see how this can fulfil a similar role. I probably just need to change my conceptual frame of mind a little! I wonder what the best way is of achieving updates from components that are derived from read-only views? I'm guessing it would be a matter of passing in callbacks to be handled by the ancestor component that has access to the atom/cursor. In the example, the updater functions that the components use access the global db reference. I know it's probably not the end of the world if a single global db is used, and this is how re-frame does it. This does have implications for tools like devcards though, and makes it difficult to have nested apps. https://github.com/Day8/re-frame/issues/137 I think I see where you're going, but am not entirely sure what that "context" you're asking for would allow you to do (am not familiar w/ devcards). Can you please provide/describe a concrete example? How does this pattern work in redux? Also, unlike re-frame which defines the central state atom as a singleton in the library/framework itself (at least it used to, haven't used in a few years...), hdom does not care about your app state at all, so it's not really comparable (IMO same goes for comparing w/ redux). I've kept state handling separate for exactly this reason: to be able to experiment with new/different approaches, rather than "complecting" all these tasks: state handling, event/action dispatch and DOM creation/diffing... AFAIK redux is based on the same approach as re-frame and both piggyback on other vdom implementations (React), whereas hdom only handles the vdom parts by design.
Therefore I think this issue boils down to how you get state and context information into your components when they're created, and with that I think you listed the main options already: define your components as closures and pass any context info when creating them. Will mock up a new example for that tonight. Btw. the two things I always loved in re-frame the most are the idea of event batching and its use of interceptors to augment event processing. There's another module in the works for that (just need to find time to refactor it first). A concrete example: provide an atom to a hdom tree and allow children of the hdom tree that are nested deeply to access this atom. So imagine a tree path such as app -> sidebar -> userProfile -> editUserProfile. To get a user cursor down there we'll need to pass it down through the sidebar if we want it to flow down from the root component. Alternatively we can pull it out of the scope. What something like context in React does is allow you to avoid passing props right down through the tree and create a provider/consumer relationship between distant ancestor components. Redux is one example of its usage with React. Others include localization, theming and routing (react router and styled components). I hear you about keeping things separated out, and the examples I've provided are more like frameworks. My point is more that React the library provides this extension point for frameworks to use, and it makes certain things a bit more convenient. A key difference between redux and re-frame is that redux doesn't use a singleton, and this is achieved by using the React context API. As far as I can see it's the one downside of re-frame vs redux. Yes I love the idea of interceptors (as well as their concept of sidefx and cofx). I can't wait to see what you come up with for that! I'm wondering whether a very opinionated low-level method to allow some sort of preprocessing to take place on each of the components would make sense? At the moment both the hdom start and normalizeTree take "span" as a boolean argument. I wonder if this could be made more generic and turned into a function. As this function could be stateful, it would make it possible to create a context-like system that could create relationships between distantly related components. I could try to put together a pull request for this if you think it's worth exploring? I also understand if you'd prefer not to go down this route as it has the potential of complicating/"complecting" things. Actually to clarify, I will put together an example using React's context and the umbrella atom. I will follow up with this in the next few days. Hi again, since you referenced Bruce's Devcards earlier, I just added & uploaded a new example, hopefully showing some of this (and some options) in a bit more detail (lots of comments in the source too). https://github.com/thi-ng/umbrella/tree/master/examples/devcards http://demo.thi.ng/umbrella/devcards/ Will respond to your other points tomorrow... Just read through the React docs about context, and this section looks potentially feasible to support as an optional feature, but personally I still don't fully understand the point or benefit over passing these things manually as a more obvious/clear/readable solution to which data is ending up where. In the end you're still passing it manually anyhow by having to declare .contextTypes = {...}.
So far this all smells a little of "convenience magic" and it also encourages a kind of local state in components, which I always try to avoid (though am aware this local state is largely about component config data). But don't get me wrong, am not saying no. Looking forward to your example... Btw. I also like this quote on the React docs page, and it is fundamentally what I proposed above as the more natural solution: "Before you build components with an API similar to this, consider if there are cleaner alternatives. For example, you can pass entire React components as props if you'd like to." This pattern is also demonstrated in the above-mentioned devcards demo... Thank you very much for that demo. It's amazing how much functionality and interactivity can be expressed in such a small amount of code! Yes I can see your point about keeping things clearer by manually passing down data. Having something like this somewhat moves it away from being a strict tree of data. Have you seen this project? https://github.com/roman01la/citrus/blob/master/README.md It's an evolution of some of the ideas from re-frame and uses batched updates and side effects. It avoids using the singleton db by passing a "reconciler" object through the tree. The reconciler is similar to a redux store. He mentions that it is possible to inject this into the tree by using React's context but advises against it: "Passing reconciler explicitly is annoying and makes components impossible to reuse since they depend on reconciler. Can I use DI via React context to avoid this? Yes, you can. But keep in mind that there's nothing more straightforward and simpler to understand than data passed as arguments explicitly. The argument on reusability is simply not true. If you think about it, reusable components are always leaf nodes in UI tree and everything above them is application specific UI. Those leaf components doesn't need to know about reconciler, they should provide an API which should be used by application specific components that depend on reconciler and pass in data and callbacks that interact with reconciler." So I'm not even convinced myself if using context is the best approach, particularly since hdom aims to be much simpler! It might be nice to have it as an advanced feature that in general shouldn't be used directly but by libraries or frameworks that are using hdom (similarly to React). Just to note, I'm going to use the latest alpha version of React (16.3) for my demo. This has a more refined context system and will be a public API when it's released. Details of this are here: https://github.com/reactjs/rfcs/blob/master/text/0002-new-version-of-context.md Okay, a long night made worth it, I hope... :) Just pushed @thi.ng/atom 0.9.0 and a new demo showcasing my existing event & interceptor handling, which has been used v. successfully for a bunch of apps already in production... (having said that, am sure there's ample scope for further refactoring, always! :) ) http://demo.thi.ng/umbrella/interceptor-basics/ To make more sense of the demo, the important bits are here (in the comments): https://github.com/thi-ng/umbrella/blob/master/examples/interceptor-basics/src/index.ts https://github.com/thi-ng/umbrella/blob/master/packages/atom/src/interceptors.ts https://github.com/thi-ng/umbrella/blob/master/packages/atom/src/event-bus.ts This is amazing, thanks so much for pushing it live (and for the rest of this treasure box of libraries!) I hope you're not too tired today!!!
It's like Christmas morning every morning, seeing all the new additions :) I will use this event bus for the React integration that I'm working on (which I will post here asap). This more or less clarifies how you'd go about tying the various pieces of the library together. Feel free to close this issue, as I think it's unneeded and can be addressed by passing around the event bus as a prop, similarly to how it's recommended with Citrus. I've got a demo of the new React context API in action, with the Umbrella libraries managing more or less everything except for the rendering. https://github.com/allforabit/umbrella/tree/react-context-example/examples/react-context, demo here: https://allforabit.com/react-context/ It's an autocomplete widget that searches Wikipedia (inspired by an Om Next demo). It's a little rough around the edges and I ended up using normal js/jsx instead of TypeScript because the new React APIs aren't available on DefinitelyTyped just yet. To illustrate the independence of the connected components I've placed them in a "dumb" layout component that just calls the connected components (without any arguments/props). The context system makes sure that the "smart" connected components receive the state that they need as well as the dispatch function from the central bus. I can see that you've put together a very comprehensive demo of your approach to app structure, and this is very helpful and instructive. I like the way a lot of the wiring between the different components and events is set up in the config, and there's only one entry point through the app object. I think the big difference in the React approach is that everything is a component, whether it's a router or a state management tool. There are probably trade-offs for both approaches, but for now I will use the example you gave as a template for new projects. Slightly separately, the React lib that is included with the demo (https://github.com/allforabit/umbrella/blob/react-context-example/examples/react-context/src/lib.jsx) is the beginning of a reasonably comprehensive state management solution for React (thanks to Umbrella!) Do you think this could be a good addition to the libraries, or should I work on it in a separate repo? I would like to get the hiccup syntax working with React too. Thanks for the effort, Kevin! This is really great and the comparison really helps - or rather - it will help me to think about this some more and then also give you a proper answer to your other question. Alas no capacity to fully check this in detail over the next few days :( Just one more thing: there's no reason why the router (or the app itself) could not be packaged up as components too. Essentially the little derived view function defined here is all that's needed to turn the router into a component. I just personally prefer to have functionality (here "routing", but also more generally) available in standalone form (without assuming it will be used with components). Though maybe offering a component wrapper is a good idea... Many roads lead to Rome! Finally, yesterday I started extracting core pieces from that latest demo into a new repo to be used as a re-usable app skeleton, possibly with a little config "wizard"... anyhow, thanks again & more real soon! Thanks very much for that, yes the derived view approach is very flexible and I'm using it in a React app that I'm working on now to integrate the thi.ng router. It works almost the same as your example except that the state views return jsx instead of hiccup.
In general the pattern looks really nice to work with in that it is very transparent as to how the components, events, state and router all interact with each other. Looking forward to the skeleton app. I'm not sure if this is relevant but the create-react-app project could be a good model to go with. When a repo is called create-[something-here], yarn can run "yarn create [something-here]". E.g. "yarn create react-app my-app". https://yarnpkg.com/lang/en/docs/cli/create/ The main thing this does is add a dependency to a "react-scripts" package that has all the latest dependencies as well as the webpack config.

Hey, sorry still had no chance to look through your context demo (on weekend!) - but since I needed it urgently I've pushed an initial version of the project generator: https://github.com/thi-ng/create-hdom-app. That yarn create approach was exactly what I intended, but in the end I didn't base it on create-react-app (only used a tiny part from it) since that seemed way too OTT with its dozens of options/extensions. That new skeleton app is v. close to the router-basics example from a few days ago, but has some small additions and also is configurable (there're actually 4 project templates).

Thanks, no rush with that! To be honest there's not that much in it and I think you've got the gist of it based on the thread. I think it's very low priority anyway. yarn create works a treat! One very small thing is I had to do a yarn upgrade --latest to get the latest version of the atom package for a new feature that it depends on (forward side fx). Really nifty though apart from that.

Hi @allforabit - so sorry for the long pause on this front, but lots of other things (more pressing things) to sort out meanwhile. I've been thinking about a simple solution to this, which 1) makes the use of context more or less optional, 2) isn't too intrusive inside hdom and 3) doesn't break existing components (not sure yet, but also would be v. easy to fix/update). So I will play around with this preliminary plan over the weekend:

- add an optional context arg to start()
- any function or object component function in the DOM tree will be called with this context arg (and any others provided)

For example:

```ts
// ctx will be injected as first arg to all component functions
const foo = (ctx: any, body: string) => ["h1", ctx.foo, body];

// ctx will be injected as first arg to all lifecycle methods
const bar = {
    init: (ctx: any, el: Element, ...args: any[]) => { ... },
    render: (ctx: any, ...args: any[]) => ["section", ctx.bar, ...args.map(foo)]
};

start(
    "app",
    // root component fn
    () => [bar, "hello", "world"],
    // arbitrary context object
    // here just using element attribs, but could store event bus, app db views etc.
    {
        foo: { class: "f-headline lh-solid" },
        bar: { class: "bg-light-gray" },
        etc: ...
    }
);
```

Thoughts?

Thanks Karsten, yes this seems like a good way of adding it. Although I initially had envisaged it as an additional argument at the end, I'm sure you have good reasons to make it the first argument. This looks like it'll add some convenience and avoid some unnecessary property passing, hopefully without compromising the simpler structure. Looking forward to seeing what you come up with. I will do another version of the autocomplete demo with it to test it out and see how it compares with the react api.

Thanks very much for adding this. From looking at the updated router-basics it looks very straightforward to use. I will take it for a spin over the next few days!
Hi Kevin, I just pushed v3.0.0 & thanks a lot for the idea & input! adding you to contributors list... :)
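(To make the combination above concrete, here is a minimal, hedged sketch tying the two ideas from this thread together, i.e. putting the event bus into the context object passed to start(). The start() signature follows the v3.0.0 proposal above; the EventBus/FX_STATE imports are named after the later @thi.ng/interceptors package, while at the time of this thread the same functionality lived in @thi.ng/atom v0.9, so treat the exact import paths as assumptions.)

```ts
import { Atom } from "@thi.ng/atom";
import { EventBus, FX_STATE } from "@thi.ng/interceptors";
import { start } from "@thi.ng/hdom";

const db = new Atom({ count: 0 });

const bus = new EventBus(db, {
    // event handler returning a side effect map; FX_STATE replaces the app state
    "inc-counter": (state: any) => ({ [FX_STATE]: { ...state, count: state.count + 1 } }),
});

// component fns receive the user context as their first argument
const counterBtn = (ctx: any) => [
    "button",
    { onclick: () => ctx.bus.dispatch(["inc-counter"]) },
    `clicks: ${db.deref().count}`,
];

// signature as proposed above: start(rootID, rootComponentFn, ctx)
start("app", () => ["div", counterBtn], { bus });

// process any queued events (roughly once per frame)
setInterval(() => bus.processQueue(), 16);
```

The trade-off debated in this thread is visible here: the bus never needs to be threaded through intermediate components as a prop, but components using ctx.bus are no longer pure functions of their arguments.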
2025-04-01T04:35:42.487013
2024-06-25T16:36:20
2373133735
{ "authors": [ "bit-app-3000", "postspectacular" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11446", "repo": "thi-ng/umbrella", "url": "https://github.com/thi-ng/umbrella/issues/477" }
gharchive/issue
[rdom] unexpected dispatch $replace after change route / reload layout

https://github.com/thi-ng/umbrella/assets/42169423/d887122f-e34e-4555-b044-84a956a66144

```js
// DropDown
import { $replace } from '@thi.ng/rdom'
import { fromViewUnsafe } from '@thi.ng/rstream'
import { states$, toggle, useSlot } from '../../modules/index.js'
import { Menu } from '../index.js'

const slot = x => {
  const { state } = x
  return [
    'dropdown',
    { onpointerdown: toggle(x) },
    ['button', { type: 'button' }, 'Drop Down'],
    state ? Menu(x) : null
  ]
}

export const Dropdown = seed => [
  'control',
  {},
  $replace(fromViewUnsafe(states$, { path: useSlot(seed), tx: slot }))
]

// Menu
import { $list } from '@thi.ng/rdom'
import { cancel, end, start } from '../../modules/slot.js'
import { Icon } from '../index.js'

const Item = ({ ico, label }) => ['li', {}, Icon({ id: ico }), label]

export const Menu = x => {
  // LOG('MENU:RDR')
  const { id, state, placement, data$ } = x
  return $list(
    data$,
    'menu',
    {
      id,
      state,
      placement,
      onanimationstart: start(x),
      onanimationcancel: cancel(x),
      onanimationend: end(x)
    },
    Item
  )
}

// State from Atom
import { defAtom } from '@thi.ng/atom'

export const states$ = defAtom({})

// useSlot
import { defCursor } from '@thi.ng/atom'
import { states$ } from './hub.js'

export function useSlot (seed) {
  const { id } = seed
  if (!Reflect.has(states$.deref(), id)) {
    const cursor = defCursor(states$, id)
    seed.placement = 'bottom'
    seed.state = 'hidden'
    cursor.reset(seed)
  }
  return id
}

// Page / Layout
import { data$ } from '../../modules/index.js'
import { Dropdown, Top } from '../index.js'

export const Dashboard = () => [
  'main',
  {},
  Top(),
  ['header', {}, 'DASHBOARD'],
  ['section', {}, ['bar', {}, Dropdown({ id: 'x1', data$ })]]
]
```

@bit-app-3000 Can you please describe a little more where/what the issue is or rather what the expected behavior should be and what you feel is going wrong? I.e. a bit more context please... Sorry, I watched the video a few times, but can't really figure out what parts are wrong... Even better would be a codesandbox or upload the example somewhere so it can be debugged... thanks! :)

I'm sorry, @bit-app-3000 — I've downloaded the video and went through it step by step, but without a working example or at least a little description of what you're doing I don't even know which part (or component) I'm supposed to focus on (I'm guessing the dropdown?). I'm busy with lots of things, but I really do want to help — though, that also requires from you, as the person asking for help, to provide a bit more context/details, please... thank you! 👍

made an example
src: https://github.com/bit-app-3000/dummy
public: https://dummy-3xu.pages.dev
I hope it helps

Hey again — thank you for this, but again: Please describe and explain in a few sentences what I should be looking at here? Which component(s) has/have the problem? When is the issue occurring? (e.g. when you navigate away or when a new route is starting/initializing...?) Does the issue occur only once or every time a route is switched? All I understand from the little (context) you provided so far is that some component is being unmounted/replaced multiple times (from the log messages it seems maybe the dropdown), but again if you cannot describe the problem a bit more, I unfortunately really don't know how I can help you... That example project of yours has way too many files (most unrelated to the problem at hand) and from the title of this issue, I really just don't have enough information to understand what the actual problem is here... and it's not for a lack of trying!
Maybe I'm missing something obvious, so also grateful if anyone else would like to chip in here...

Re: unexpected dispatch $replace after change route / reload layout: the dropdown is dispatched after a route change and executed more than once; maybe it's not unsubscribing correctly. You can see in the logs that the executions of the function are accumulating, in this case for a dropdown component.

Hi @bit-app-3000 — so because your project has way too many files and I couldn't get it to run locally, I still couldn't figure out what is going wrong in your case, but I've just created & uploaded a new example with a similar setup/task, i.e. a router & atom-based component switcher (incl. dynamic/reactive lists of images). I hope this much more stripped down example will help you figure out what might be wrong on your end, but I'm afraid that's all I can offer you here... obviously always happy to answer any other related questions

Demo: https://demo.thi.ng/umbrella/rdom-router/
Source: https://github.com/thi-ng/umbrella/blob/develop/examples/rdom-router/src/index.ts

Really hope that helps!

@bit-app-3000 I just updated the example with some more features and more comments (also requires a newer version of thi.ng/router, just published) :) Hth!

The unexpected behavior occurs when using the component function signature (hiccup) [fn, arg1, arg2, ...]; if you use the direct call in the tree it works as it should: [tag, {}, fn(args)]. Please add an example using such a signature: ["tag", {...}, "body", 23, function, [...]] / [function, arg1, arg2, ...] (https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup#what-is-hiccup)

Those should be working like this:

```ts
const myComponent = (...items: any[]) => [
    "div",
    { class: "custom" },
    ...items.map((x) => x.toUpperCase()).join(", "),
];

$compile([myComponent, "yabba", "dabba", "doo"]).mount(document.body);
```

Zero-arg functions in child/body positions... (only supported since rdom v1.5.0, released just now)

```ts
const random = () => ["li", {}, Math.floor(Math.random() * 100)];

$compile(["ul", {}, random, random, random]).mount(document.body);
```

Hi @postspectacular, made a minimal example with unexpected behavior of the component:

```js
import { defAtom } from '@thi.ng/atom'
import { ConsoleLogger, ROOT } from '@thi.ng/logger'
import { $compile, $replace, $switch } from '@thi.ng/rdom'
import { EVENT_ROUTE_CHANGED, HTMLRouter } from '@thi.ng/router'
import { fromView } from '@thi.ng/rstream'
import { cycle } from '@thi.ng/transducers'

ROOT.set(new ConsoleLogger())

const routes = [
  { id: 'home', match: ['home'] },
  { id: 'about', match: ['about'] },
  { id: 'profile', match: ['profile'] }
]

const router = new HTMLRouter({
  routes,
  default: 'home',
  useFragment: true
})

const db = defAtom({
  route: { id: 'home' },
  x1: { state: false, label: '🤠' },
  x2: { state: false, label: '☠️' }
})

// Pop Component
const emojis = cycle(['🥳', '🙂‍', '️😏', '😒', '🙂‍', '️😞', '😔', '😕', '🙁'])

const toggle = id => () =>
  db.swapIn(id, last => ({
    ...last,
    state: !last.state,
    label: last.state ? emojis.next().value : last.label
  }))
```
```js
const slot = x => {
  const { state, label } = x
  return state
    ? ['emoji', {}, label]
    : null
}

export const pop = (id, desc) => [
  'pop',
  {},
  ['button', { onpointerdown: toggle(id) }, desc],
  $replace(fromView(db, { path: id, tx: slot }))
]

// PageNav
const nav = () => [
  'nav',
  {},
  ...Array.from(routes, ({ id }) => ['a', { href: router.format(id) }, id])
]

// PageContent
const container = (title, ...body) => [
  'header',
  {},
  nav(),
  ['main', {}, ['h1', {}, title], ...body]
]

const home = () =>
  container(
    'Home',
    [
      'div',
      {},
      // component behavior
      // expected
      ['p', {}, pop('x1', 'expected behavior')],
      // unexpected
      ['p', {}, [pop, 'x2', 'unexpected behavior']]
    ]
  )

const about = () => container('About', ['section', {}, 'About us'])
const profile = () => container('Profile', ['section', {}, 'Profile'])

router.addListener(EVENT_ROUTE_CHANGED, ({ value }) => db.resetIn('route', value))
router.start()

$compile(
  $switch(
    fromView(db, { path: ['route'] }),
    ({ id }) => id,
    { home, profile, about }
  )
).mount(document.getElementById('app'))
```

I hope for your help.

You know, it'd be really good of you in the future to please actually describe what is the unexpected behavior you're observing. You keep on having me guess and spend a lot of time trying to figure out which parts are unexpected — it's not really helpful! From what I could figure, it seems in some circumstances components containing the [fn, arg...] form don't seem to properly unmount and hence when switching to another route and then back, the earlier non-cleared reactive fromView() subscription is still active, plus a new one is being created and so on...

About these embedded function forms, in general:

1. These forms are a legacy feature from the older thi.ng/hdom approach and are actually not that useful at all with rdom. I.e. there's no real benefit of using [fn, arg1, arg2] vs calling fn(arg1, arg2) directly in rdom (or it can already be handled via other means).
2. Because of the previous point, I've actually been tempted for a while to completely remove support for these forms in a future version of rdom.
3. Their handling adds unnecessary complexity with no real gain (and obviously some edge cases are still not 100% right anyway).

So two more questions for you: Can you please explain WHY you're intending to use these forms? WHAT is your specific need of using this embedded form compared to using normal function calls? Thanks

@postspectacular I plan to use it in a declarative design system:

- components definition
- serialize
- deserialize/parse
- transfer

example.json:

```json
[
  "page", {},
  ["HeaderLayout"],
  ["main", {},
    ["section", {},
      ["bar", {},
        ["PopOver", {
          "id": "x1",
          "state": "show",
          "placement": "top-end",
          "layout": ["Tooltip", "contentId"]
        }],
        ["PopOver", {
          "id": "x2",
          "placement": "top",
          "layout": "Description"
        }],
        ["PopOver", {
          "id": "x3",
          "placement": "top",
          "layout": ["h1", {}, "Tooltip"]
        }]
      ]
    ]
  ],
  ["FooterLayout"]
]
```

lower case: HTML tag
CamelCase: component definition

@bit-app-3000 that's very helpful to learn & a great use case — thank you! I've updated the $compile() function to add checks for embedded function forms, call the function and then only compile the result... I'm doing some more testing and then release asap (your example above has no more problems now, as far as i can tell!)

thx works as it should! Regards
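(For reference, a minimal sketch of the deserialization step described above, i.e. swapping the CamelCase tags in the parsed JSON for live component functions before compiling. Only $compile() is from @thi.ng/rdom; the registry and the stub components are hypothetical.)

```ts
import { $compile } from "@thi.ng/rdom";

// hypothetical component definitions (stubs for illustration only)
const HeaderLayout = () => ["header", {}, "..."];
const FooterLayout = () => ["footer", {}, "..."];
const PopOver = (attribs: any) => ["pop", attribs];

const registry: Record<string, Function> = { HeaderLayout, FooterLayout, PopOver };

// recursively replace CamelCase tags with their component functions,
// leaving lower case (plain HTML) tags untouched
const hydrate = (tree: any): any =>
    Array.isArray(tree)
        ? [registry[tree[0]] ?? tree[0], ...tree.slice(1).map(hydrate)]
        : tree;

const layout = await (await fetch("example.json")).json();
$compile(hydrate(layout)).mount(document.body);
```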
2025-04-01T04:35:42.489507
2020-07-17T03:34:02
658809350
{ "authors": [ "shaunc" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11447", "repo": "thiagobustamante/typescript-rest", "url": "https://github.com/thiagobustamante/typescript-rest/issues/138" }
gharchive/issue
How to respond with multipart/form-data I would like to send a response whose body is multipart/form-data encoded, including a file. Is there a recipe? I guess, for some reason, this isn't supported anywhere, so I guess I can't complain that typescript-rest doesn't support it -- closing.
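(Since typescript-rest runs on Express, one workaround is to build the body with the form-data package and stream it into the underlying response. A rough sketch, not verified against the library: @Path/@GET/@ContextResponse are typescript-rest decorators, everything else is assumption.)

```ts
import * as express from "express";
import * as FormData from "form-data";
import { ContextResponse, GET, Path } from "typescript-rest";

@Path("/report")
export class ReportService {
    @GET
    public report(@ContextResponse res: express.Response): void {
        const form = new FormData();
        form.append("meta", JSON.stringify({ ok: true }), { contentType: "application/json" });
        form.append("file", Buffer.from("hello"), {
            filename: "hello.txt",
            contentType: "text/plain",
        });
        // form-data computes the multipart content-type header (incl. boundary)
        res.set(form.getHeaders());
        form.pipe(res);
    }
}
```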
2025-04-01T04:35:42.501128
2023-12-15T10:32:16
2043432709
{ "authors": [ "reey", "reubenmiller" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11448", "repo": "thin-edge/thin-edge.io", "url": "https://github.com/thin-edge/thin-edge.io/pull/2529" }
gharchive/pull-request
adjust the expected return code when user is missing "Tenant manager" admin rights

Proposed changes: I guess this was just a small typo in the docs. Added Forbidden as well just to make it clear.

Types of changes

- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Improvement (general improvements like code refactoring that doesn't explicitly fix a bug or add any new functionality)
- [x] Documentation Update (if none of the other choices apply)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)

Paste Link to the issue

Checklist

- [x] I have read the CONTRIBUTING doc
- [ ] I have signed the CLA (in all commits with git commit -s)
- [ ] I ran cargo fmt as mentioned in CODING_GUIDELINES
- [ ] I used cargo clippy as mentioned in CODING_GUIDELINES
- [ ] I have added tests that prove my fix is effective or that my feature works
- [x] I have added necessary documentation (if appropriate)

Further comments

Thanks, I was also in the middle of some other additional tips, so I will push a few commits of my own to this PR, e.g.:

- notice that user/email is case sensitive
- add the same tips to the additional cert upload page (the same tips which are in the c8y getting started section)

Thanks @reey for the PR. I just pushed a commit which expands on the common error reasons and added a screenshot as well of the Cumulocity IoT Global Role which is required. @reubenmiller maybe also mention that SSO users are not supported? Do you know how a user can check if they are an SSO user or not?
2025-04-01T04:35:42.558329
2023-10-11T03:38:46
1936734123
{ "authors": [ "Manas5353", "PBJI" ], "license": "Unlicense", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11449", "repo": "thinkswell/javascript-mini-projects", "url": "https://github.com/thinkswell/javascript-mini-projects/pull/855" }
gharchive/pull-request
modifications added: made it simpler and sleeker

Developer Checklist

- [ ] Followed guidelines mentioned in the readme file.
- [ ] Followed directory structure. (e.g. ProjectName/{USERNAME}/...yourfiles)
- [ ] Starred ⭐ the Repo (Optional)

Summary: add a summary here
Screenshot: attach screenshots/gifs here
Live Project Link: add a working project link here

hi @Manas5353, consider commenting before and after screenshots here for better review purposes.
2025-04-01T04:35:42.565492
2024-08-15T23:19:09
2469136791
{ "authors": [ "MananTank" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11450", "repo": "thirdweb-dev/js", "url": "https://github.com/thirdweb-dev/js/pull/4123" }
gharchive/pull-request
Migrate Engine list UI to shadcn/tailwind Problem solved Short description of the bug fixed or feature added PR-Codex overview This PR updates UI components in the dashboard app for better user experience. Detailed summary Updated tooltip styles and added new components Refactored modal components and added new functionalities Improved button styles and error handling Organized imports and removed unused code The following files were skipped due to too many changes: apps/dashboard/src/components/engine/engine-instances-table.tsx ✨ Ask PR-Codex anything about this PR by commenting with /codex {your question} [!WARNING] This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. Learn more #4123 👈 #4119 #4113 : 1 other dependent PR (#4116 ) main This stack of pull requests is managed by Graphite. Learn more about stacking. Join @MananTank and the rest of your teammates on Graphite
2025-04-01T04:35:42.590664
2012-02-14T04:31:22
3213887
{ "authors": [ "artiom", "assembler", "benzheren", "daddyz", "dnagir", "jrissler", "lukesaunders", "masterkain", "ouranos", "rekky", "saiko-chriskun", "thomas-mcdonald", "utkarshkukreti" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11451", "repo": "thomas-mcdonald/bootstrap-sass", "url": "https://github.com/thomas-mcdonald/bootstrap-sass/issues/62" }
gharchive/issue
3 minutes to precompile

Hi, Precompiling assets takes ~3mins when I import bootstrap. I have no idea why. Maybe somebody can shed some light. This is my layout:

```
> tree app/assets/
app/assets/
├── images
│   └── icons
│       ├── apple-touch-icon.png
│       └── favicon.ico
├── javascripts
│   ├── application.js
│   └── frameworks.js.coffee
└── stylesheets
    ├── application.css
    ├── frameworks.css.sass
    └── layout.css.sass

> cat app/assets/stylesheets/application.css app/assets/stylesheets/frameworks.css.sass
/*
 *= require_self
 *= require_tree .
 */
@import "bootstrap"
@import "bootstrap-responsive"

> cat app/assets/javascripts/application.js app/assets/javascripts/frameworks.js.coffee
//= require jquery
//= require jquery_ujs
//= require_tree .
jQuery(function(){
  $("body").addClass('dom-loaded');
});
#= require bootstrap

# Gemfile
group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'bootstrap-sass', '~> 2.0'
  gem 'compass', " >= 0.12.alpha.0"
  gem 'uglifier', '>= 1.0.3'
  gem 'therubyracer' # To compile CoffeeScript
  gem 'jquery-rails'
end
```

So there's really not much going on. Pretty standard stuff. When I remove the @import bootstrap from the sass file then the compilation is pretty "snappy" (~15 secs including Rails startup). Any thoughts on what could be causing this? Cheers.

All on a mid-2010 Macbook Pro (2.4GHz Core 2 Duo). Rails 3.1 app (with a reasonable set of additional stylesheets):

```
tom:qa tom$ time rake assets:precompile
/Users/tom/.rvm/rubies/ruby-1.9.3-p0/bin/ruby /Users/tom/.rvm/gems/ruby-1.9.3-p0/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets
/Users/tom/.rvm/rubies/ruby-1.9.3-p0/bin/ruby /Users/tom/.rvm/gems/ruby-1.9.3-p0/bin/rake assets:precompile:nondigest RAILS_ENV=production RAILS_GROUPS=assets

real 1m3.207s
user 0m34.134s
sys 0m3.595s
```

Almost fresh Rails 3.2 app:

```
tom:librarian tom$ time rake assets:precompile
/Users/tom/.rvm/rubies/ruby-1.9.3-p0/bin/ruby /Users/tom/.rvm/gems/ruby-1.9.3-p0/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets

real 0m39.350s
user 0m21.538s
sys 0m1.657s
```

I can't really reproduce the 3+ minutes you're seeing. 40 seconds (Rails 3.2) is still relatively long. In a moderate-sized app, assets are compiled within ~15 secs or so if I don't use bootstrap-sass. But for me the new Rails 3.2 bootstrap-sass compiles very fast. So there must be something specific to my config (https://gist.github.com/1832369). Here's the full output just in case:

```
> time RAILS_ENV=production bundle exec rake assets:precompile
/Users/dnagir/.rvm/rubies/ruby-1.9.3-p0/bin/ruby /Users/dnagir/.rvm/gems/ruby-1.9.3-p0/bin/rake assets:precompile:all RAILS_ENV=production RAILS_GROUPS=assets

real 3m0.787s
user 2m46.590s
sys 0m5.156s
```

There might be an issue with something else, not bootstrap-sass. Ohh, I think I have nailed it down. In the config I have `config.autoload_paths += %W(#{config.root}/lib)`. Removing this line from application.rb makes assets compile within 11 seconds. (There are no assets in lib at all.) I suspect that's a sass-rails issue. Actually, that's not right. The compilation just quits with an error which I took as success. So, no, it's not the autoload_paths thing :(

getting a similar issue, cpu shoots to 100% and just seems to stall when I include a file identical to frameworks.css.sass as described above. If I remove it and include twitter bootstrap normally, everything's fine. I also see slow precompile times for my sass files which import bootstrap.
But what seems odd is that in development mode, where it just recompiles when source files change, it seems to be able to regenerate those files in just one or two seconds. Why the difference? +1 It takes around 2 minutes to precompile on Heroku +2 takes at least 2-3 minutes 5 minutes here in production 3-4 minutes for me using ruby 1.9.3-p125 & Rails 3.2.1. 5 minutes for me :( 5 minutes for me on Heroku with responsive, 2 minutes without responsive. 2 minutes on rails 3.2.2 and ruby 1.9.3-p125 also I tried to run it in development environment and it took about 10 seconds +1 +1

Hey guys, sorry it's taken a while for me to sit down and get back to you all on this one, I've had the allure of much more pleasant tasks than profiling calling me. From browsing through previous issues, it looks like this issue will most likely lie within Sprockets, specifically the way it handles finding files. However, I can't be sure until I get a chance to sit down with ruby-prof; it's particularly odd considering how fast reloading files is in development as opposed to production. With Bootstrap 2.0.2 coming soon I intend to spend most of tomorrow going through the issues and getting everything in order with the Gem framework, before looking at the style changes. Hopefully I'll make some progress somewhere.

@thomas-mcdonald, no worries. Thanks for getting on the issue. Not sure I can trace it down too much further, but if you need any additional info please let us know. ASIDE: Do you know what the changes are in the 2.0.2?
2025-04-01T04:35:42.611741
2020-03-11T15:19:57
579343209
{ "authors": [ "DavidGonzalezGutierrez", "Sinon02" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11452", "repo": "thomasdondorf/puppeteer-cluster", "url": "https://github.com/thomasdondorf/puppeteer-cluster/issues/264" }
gharchive/issue
PDF loss

I add, for example, 1000 URLs to the cluster to convert to PDF files. When the cluster ends, it has generated only 850 or so. If I set maxConcurrency to 1, it does generate all 1000 PDFs.

```js
await cluster.task(async ({ page, data: url }) => {
    const contentHtml = fs.readFileSync(url, 'utf8');
    await page.setContent(contentHtml);
    await page.pdf({
        path: "./generated-pdfs/" + Date.now() + "_g.pdf",
        format: "A4",
        margin: { top: '2mm', right: '2mm', bottom: '2mm', left: '2mm' },
        printBackground: true
    });
    fs.unlinkSync(url);
});
```

I have this code inside the cluster.

I ran into similar problems when I tried to render 1000 images and save them with page.screenshot(). If maxConcurrency is not 1, some images will not be saved. So can someone tell me what causes this problem and how to prevent it?
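(Not confirmed in the thread, but one plausible cause of the missing files is the Date.now()-based filename: with maxConcurrency > 1, two tasks finishing in the same millisecond write to the same path and overwrite each other. A hedged sketch of a collision-free variant, reusing the cluster from the snippet above:)

```ts
import * as fs from "fs";
import * as path from "path";

let counter = 0;
await cluster.task(async ({ page, data: url }) => {
    const contentHtml = fs.readFileSync(url, "utf8");
    await page.setContent(contentHtml);
    // unique per task: timestamp plus a monotonic counter plus the source name,
    // so two tasks finishing in the same millisecond can't overwrite each other
    const name = `${Date.now()}_${counter++}_${path.basename(url, ".html")}.pdf`;
    await page.pdf({ path: `./generated-pdfs/${name}`, format: "A4" });
    fs.unlinkSync(url);
});
```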
2025-04-01T04:35:42.616566
2023-07-04T05:51:31
1787196693
{ "authors": [ "Alex-developer", "EricClaeys" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11453", "repo": "thomasjacquin/allsky", "url": "https://github.com/thomasjacquin/allsky/issues/2856" }
gharchive/issue
[BUG] Missing ERROR message for text fields totally outside of image dimensions

The following entry in overlay.json will produce no ERROR/WARNING message in allsky.log nor in the Overlay Editor:

```json
{
    "label": "Mean: ${MEAN}",
    "x": 40000,
    "y": 50000,
    "id": "oe-field-7",
    "fontsize": 40,
    "strokewidth": 1,
    "stroke": "#000000",
    "sample": "",
    "tlx": 40000,
    "tly": 50000,
    "format": "{:.3f},{:.4f}",
    "empty": "NO_mean,no_mean"
}
```

even though 40,000px and 50,000px are not within any sensor's image. FYI, I had smaller numbers but they also were outside the sensor's size. Checking for this at runtime should be easy since the actual image dimensions are known. Checking in the Overlay Editor is a little trickier since the underlying image might be a "notification" image which is generally much smaller than real images, although trying to use the Overlay Editor on a "notification" image gives terrible results.

Added an indication in the overlay editor if a text field is off of the screen; obviously this is only for fields added via the overlay editor. This functionality already exists for images. Added an error in the Allsky log if any text field is off of the image; this will capture any fields added via extra files. Testing; will then do a PR.
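(A sketch of the runtime check suggested above. The x/y field names come from the overlay.json entry; the image dimensions, the field list shape and the logging call are assumptions, written here as an illustrative TypeScript sketch rather than Allsky's actual code:)

```ts
interface OverlayField {
    id: string;
    label: string;
    x: number;
    y: number;
}

// warn about fields whose anchor lies outside the captured image
const checkFieldBounds = (fields: OverlayField[], width: number, height: number) => {
    for (const f of fields) {
        if (f.x < 0 || f.x >= width || f.y < 0 || f.y >= height) {
            console.error(
                `ERROR: field '${f.id}' at (${f.x}, ${f.y}) lies outside the ${width}x${height} image`
            );
        }
    }
};

// usage with the entry shown above and a 4056x3040 HQ-camera frame
checkFieldBounds([{ id: "oe-field-7", label: "Mean: ${MEAN}", x: 40000, y: 50000 }], 4056, 3040);
```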
2025-04-01T04:35:42.659002
2021-02-10T16:27:12
805667874
{ "authors": [ "CuriousDran", "EricClaeys", "IanLauwerys", "Jonk2", "bleara", "f29pc", "jcauthen78", "lumdiniz", "matkovic", "paolobar54", "pclanon", "sbkirby" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11454", "repo": "thomasjacquin/allsky", "url": "https://github.com/thomasjacquin/allsky/issues/323" }
gharchive/issue
PiHq stops capture after update

I did a fresh install of Raspbian on a 128GB SanDisk Extreme, on an RPi 4 (4GB) with the RPiHQ. I had everything running with an earlier install but had the "no timelapse" issue. Now with a fresh install of Allsky with the config that has the resize setting for the timelapse, everything starts out ok but after 2 to 3 hrs (day or night settings) it stops capturing. Allsky seems to be running: I can log in via the GUI, change the camera settings, view the system info (shows 52% CPU load every time it stops capturing). If I look at the device manager (pi desktop) it shows less than 4%. The Pi is not locked up and the log does not show anything except for capturing with the time the last image was taken. Stopping and restarting Allsky makes no difference. I can reboot the Pi and everything starts ok but then stops capturing after 2 to 3 hrs. I have the same version running on a Pi3B+ with a ZWO120MC-S that has been working fine since the update... I'm a noob to Allsky and love it, just don't know what to try next. Any suggestions??? Thanks

Same problem here: Just new to Allsky, done a fresh install on an RPi4 4GB and RPiHQ, 32GB SD. Using the GUI is ok and the night and day capturing is working, but I've never been able to reach the end of a night. Of two attempts the first stopped just before 1AM and the other at 5AM. No strange messages in the log, and the GUI continues to work. No ftp, web server or other add-on, just the allsky and GUI install. Any possible idea for debugging? Thanks in advance Paolo

I'm getting the same behavior all of a sudden on an RPi 4, 4GB, RPi-HQ camera, updated and upgraded. Allsky crashes unexpectedly and doesn't recover. I thought it might be related to power issues (I operate overnight on a rechargeable battery), but same behavior when plugged into the grid inside the house. Syslog always spits out something nearly identical to this when the crash happens, but I don't have the skills to interpret:

```
Feb 14 07:53:45 allsky allsky.sh[648]: Capturing & saving image...
Feb 14 07:53:45 allsky allsky.sh[648]: Capture command: nice raspistill --nopreview --thumb none --output image.jpg --burst -st --mode 3 --exposure auto --analoggain 1 --awb auto --vflip --saturation 50 --quality 95 -a 1104 -a 1036 -a "San Francisco, CA" -ae 32,0xff,0x808000
Feb 14 07:53:45 allsky allsky.sh[648]: Capturing & saving image done, now wait 30 seconds...
Feb 14 07:54:33 allsky kernel: [67546.764067] ------------[ cut here ]------------
```
```
Feb 14 07:54:33 allsky kernel: [67546.764098] WARNING: CPU: 0 PID: 22681 at drivers/firmware/raspberrypi.c:64 rpi_firmware_transaction+0xec/0x128
Feb 14 07:54:33 allsky kernel: [67546.764108] Firmware transaction timeout
Feb 14 07:54:33 allsky kernel: [67546.764117] Modules linked in: cmac bnep hci_uart btbcm bluetooth ecdh_generic ecc 8021q garp stp llc brcmfmac bcm2835_codec(C) brcmutil v3d v4l2_mem2mem bcm2835_isp(C) bcm2835_v4l2(C) bcm2835_mmal_vchiq(C) videobuf2_dma_contig videobuf2_vmalloc videobuf2_memops sha256_generic videobuf2_v4l2 videobuf2_common raspberrypi_hwmon cfg80211 vc4 rfkill cec videodev drm_kms_helper gpu_sched mc vc_sm_cma(C) drm drm_panel_orientation_quirks rpivid_mem snd_bcm2835(C) snd_soc_core snd_compress snd_pcm_dmaengine snd_pcm snd_timer snd syscopyarea sysfillrect sysimgblt fb_sys_fops backlight uio_pdrv_genirq uio nvmem_rmem ip_tables x_tables ipv6
Feb 14 07:54:33 allsky kernel: [67546.764661] CPU: 0 PID: 22681 Comm: kworker/0:2 Tainted: G C 5.10.11-v7l+ #1399
Feb 14 07:54:33 allsky kernel: [67546.764667] Hardware name: BCM2711
```

Just the same here:

```
Feb 14 15:06:55 allsky kernel: [ 281.907599] ------------[ cut here ]------------
Feb 14 15:06:55 allsky kernel: [ 281.907637] WARNING: CPU: 0 PID: 131 at drivers/firmware/raspberrypi.c:64 rpi_firmware_transaction+0xec/0x128
Feb 14 15:06:55 allsky kernel: [ 281.907649] Firmware transaction timeout
Feb 14 15:06:55 allsky kernel: [ 281.907659] Modules linked in: cmac rfcomm bnep hci_uart btbcm bluetooth ecdh_generic ecc fuse 8021q garp stp llc vc4 cec v3d drm_kms_helper gpu_sched brcmfmac brcmutil drm sha256_generic raspberrypi_hwmon drm_panel_orientation_quirks cfg80211 rfkill bcm2835_v4l2(C) bcm2835_codec(C) bcm2835_isp(C) snd_soc_core v4l2_mem2mem bcm2835_mmal_vchiq(C) videobuf2_dma_contig videobuf2_vmalloc videobuf2_memops snd_compress videobuf2_v4l2 snd_pcm_dmaengine videobuf2_common snd_bcm2835(C) videodev snd_pcm mc vc_sm_cma(C) snd_timer snd syscopyarea rpivid_mem sysfillrect sysimgblt fb_sys_fops backlight nvmem_rmem uio_pdrv_genirq uio i2c_dev ip_tables x_tables ipv6
Feb 14 15:06:55 allsky kernel: [ 281.908335] CPU: 0 PID: 131 Comm: kworker/0:2 Tainted: G C 5.10.14-v7l+ #1401
Feb 14 15:06:55 allsky kernel: [ 281.908343] Hardware name: BCM2711
Feb 14 15:06:55 allsky kernel: [ 281.908364] Workqueue: events dbs_work_handler
```

I'm not an expert in the Linux environment, but from my research and tracing (you can use sudo -r -p allsky-PID ) it looks like the raspistill call never returned. The problem is to discover why. I also have a fully upgraded system, running relatively cold (max temp is 52C), with power from an official RPi4 power supply. Are there other people running an RPi4 with the HQ camera without problems?

BTW: why is the raspistill command run with "nice" without an option, so I suppose a niceness of 10? Why downgrade the niceness? Anyway, I did a test without the "nice" just for fun and no change: freeze after 45 minutes of capture...

I bought 2 new SD cards and did a fresh install on both, one a 64GB and the other a 128GB. Both failed to capture after running about 2 hrs on the RPi4 (also new). I put the same (64GB) SD card in an older RPi3B that I had, and it ran fine all night. I also have it running ok on an older RPi4. I'm pretty sure that when I first installed on the new RPi4 I ran an update and an upgrade after installing Raspbian. (I didn't on the older Pi4 that is running ok.)
Not sure what it means but I hope the info helps..

I might be wildly off here, but there was an issue in the latest Raspbian update I believe, that caused a POE hat's fan to stop working. If you don't have one of these, then there may be another issue that is causing throttling / too much CPU work. Perhaps try 1 version back and see what happens? I've not had any issues with my allsky crashing, new RPi 4 and M.2 running via USB3. It may also be camera driver related - I'm using a ZWO camera without any issues.

Update.. On the same install on which the PiHQ was failing, I removed the PiHQ camera and replaced it with my ZWO120mc and it has been running all day with no issues. So it appears to be a problem only with the PiHQ camera.

OK, I'm confused (OK, I'm MORE confused). I've had my system (RPi4+HQ) running since yesterday evening, 18 hours so far, a real world record. I obtained that simply by closing and deleting all the Chrome tabs that were attached to the GUI interface. I just access the RPi using VNC or SSH sporadically, just to check that it is still running... I don't know if that has any statistical significance or not; maybe somebody wants to try... While I continue the experiment, I'm setting up an old PC to receive the files with FTP to avoid "disturbing" the RPi.

Same problem here: /var/log/allsky.log

```
Feb 15 20:57:49 allsky kernel: [43640.096551] ------------[ cut here ]------------
Feb 15 20:57:49 allsky kernel: [43640.096581] WARNING: CPU: 3 PID: 28009 at drivers/firmware/raspberrypi.c:64 rpi_firmware_transaction+0xec/0x128
Feb 15 20:57:49 allsky kernel: [43640.096590] Firmware transaction timeout
Feb 15 20:57:49 allsky kernel: [43640.096599] Modules linked in: cmac rfcomm bnep hci_uart btbcm bluetooth ecdh_generic ecc fuse joydev uinput 8021q garp stp llc brcmfmac brcmutil sha256_generic v3d raspberrypi_hwmon gpu_sched cfg80211 bcm2835_codec(C) bcm2835_v4l2(C) rfkill v4l2_mem2mem vc4 videobuf2_vmalloc bcm2835_isp(C) bcm2835_mmal_vchiq(C) videobuf2_dma_contig cec videobuf2_memops videobuf2_v4l2 videobuf2_common drm_kms_helper drm snd_bcm2835(C) videodev drm_panel_orientation_quirks mc vc_sm_cma(C) snd_soc_core snd_compress snd_pcm_dmaengine snd_pcm snd_timer snd syscopyarea sysfillrect sysimgblt fb_sys_fops backlight rpivid_mem uio_pdrv_genirq uio nvmem_rmem i2c_dev ip_tables x_tables ipv6
Feb 15 20:57:49 allsky kernel: [43640.097130] CPU: 3 PID: 28009 Comm: kworker/3:2 Tainted: G C 5.10.14-v7l+ #1401
Feb 15 20:57:49 allsky kernel: [43640.097136] Hardware name: BCM2711
Feb 15 20:57:49 allsky kernel: [43640.097149] Workqueue: events dbs_work_handler
Feb 15 20:57:49 allsky kernel: [43640.097159] Backtrace:
Feb 15 20:57:49 allsky kernel: [43640.097179] [] (dump_backtrace) from [] (show_stack+0x20/0x24)
Feb 15 20:57:49 allsky kernel: [43640.097189] r7:ffffffff r6:00000000 r5:60000013 r4:c12e69fc
Feb 15 20:57:49 allsky kernel: [43640.097199] [] (show_stack) from [] (dump_stack+0xcc/0xf8)
Feb 15 20:57:49 allsky kernel: [43640.097211] [] (dump_stack) from [] (__warn+0xfc/0x114)
Feb 15 20:57:49 allsky kernel: [43640.097221] r10:dec01008 r9:00000009 r8:c099ae6c r7:00000040 r6:00000009 r5:c099ae6c
Feb 15 20:57:49 allsky kernel: [43640.097227] r4:c0e9a114 r3:c1205094
Feb 15 20:57:49 allsky kernel: [43640.097238] [] (__warn) from [] (warn_slowpath_fmt+0xa4/0xd8)
Feb 15 20:57:49 allsky kernel: [43640.097246] r7:00000040 r6:c0e9a114 r5:c1205048 r4:c0e9a134
Feb 15 20:57:49 allsky kernel: [43640.097257] [] (warn_slowpath_fmt) from [] (rpi_firmware_transaction+0xec/0x128)
```
```
Feb 15 20:57:49 allsky kernel: [43640.097266] r9:c1a7a340 r8:00000018 r7:00000000 r6:ffffff92 r5:c1a7a340 r4:c1205048
Feb 15 20:57:49 allsky kernel: [43640.097277] [] (rpi_firmware_transaction) from [] (rpi_firmware_property_list+0xbc/0x170)
Feb 15 20:57:49 allsky kernel: [43640.097285] r7:c1205048 r6:dec01000 r5:00001000 r4:dec01024
Feb 15 20:57:49 allsky kernel: [43640.097297] [] (rpi_firmware_property_list) from [] (rpi_firmware_property+0x70/0x118)
Feb 15 20:57:49 allsky kernel: [43640.097306] r10:c6d6e08c r9:00030002 r8:00000018 r7:c1a7a340 r6:c8d55d48 r5:0000000c
Feb 15 20:57:49 allsky kernel: [43640.097312] r4:c6d6e080
Feb 15 20:57:49 allsky kernel: [43640.097324] [] (rpi_firmware_property) from [] (raspberrypi_clock_property+0x54/0x7c)
Feb 15 20:57:49 allsky kernel: [43640.097332] r10:00000000 r9:00000000 r8:c1abf780 r7:00000000 r6:3b9aca00 r5:c8d55d70
Feb 15 20:57:49 allsky kernel: [43640.097345] r4:c1205048 r3:0000000c
...
```

~/allsky/log.txt (just before it freezes):

```
Saving image-20210215205637.jpg
Saving image-20210215205714.jpg
-angle -8 -autofocus 1 -autogain 0 -awb 0 -background 0 -bin 1 -brightness 50 -darkframe 0 -daytimeDelay 15000 -delay 10 -exposure 30000 -filename image.jpg -flip 2 -fontcolor 255 -fontsize 50 -gain 8 -gamma 50 -height 0 -latitude 0.0N -longitude 0.0E -quality 100 -rotation 0 -showDetails 0 -text rem -time 1 -wbb 2.0 -wbr 2.8 -width 0 -daytime 1
```

I think Jonk2 might be on to something.. Not a solution but a workaround.... I did a fresh install of raspbian 2021-01-11. On the first boot, this time I skipped the update option (I ran the update on the system that would fail), then installed allsky and the gui. So far, using the same hardware, it has been up and running for over 5 hrs. Before, it would repeatedly stop capturing after 2 to 2.5 hrs.

Call me a chicken, but I give up... I ordered an RPi3B+ and finally ran an entire night of capture. Now I'm fighting timelapses that are generated but not playing, and installing a web server... but those are other stories (and my conviction is that the problem is some race condition inside the raspistill-side software, not hardware or environment).

Data point: I've had two straight full nights without a crash and without reverting the OS to an earlier version (or making any other change). Assuming I haven't just jinxed it, I'll keep running the RPi 4B-4GB updated and upgraded and report back here. And no, timelapse and startrails don't work reliably for me either. I just added my own ffmpeg step to the "additional steps to run at end of night" script and that works fine for timelapses.

I get exactly the same problem with a very similar backtrace to Matkovic, starting with this:

```
WARNING: CPU: 2 PID: 20428 at drivers/firmware/raspberrypi.c:63 rpi_firmware_transaction+0xec/0x128
```

I only started with allsky at the beginning of the month and I am adding support for a Veye imx327 camera which uses a similar but customised raspistill() call. It all worked brilliantly to start with, running several nights, but I must have done an apt-get update and now it runs for about 3 1/2 hours before the process errors. I also get the 50% cpu load on the web page but it seems to be an incorrect reading. I first had the problem with the January Buster release so I reverted to December and did an update on that and I still get the error. I need to revert to December and be careful to apply no updates to try that. Then also Jan with no updates too, like f29pc has done.
The RPI December image and no explicit update with a new install of allsky also now fails for me, so I don't know which part causes the problem. I did not install the Gui this time as I wanted a minimal install. Bizarrely, on both of the last two nights the failure has occurred at 21:51 GMT, so the last two files are image-20210219215117.jpg and image-20210220215155.jpg, i.e. 24 hrs and 38 seconds apart. A coincidence?? Unfortunately I don't possess an HQ camera so can't test with the standard platform.

I reverted back to December 23 firmware, and two nights now of no crash.

I've been struggling for 3 days with the new installation because my card burned out and I had to install again; I have the same problem of the image freezing. Any solution?

lumdiniz, I reverted the operating system back to a December 23 version, and the problem hasn't come back for me. Here's how I went back:

```
sudo rpi-update 611beaaa346c8c2b285d816ed796f0fe6daf2417
```

Obviously, don't update or upgrade after reverting.

OK, thanks. My camera worked for 1 year; now the SD card burned out and I had no backup, so I'm doing everything from scratch. I'm having difficulties activating GPS sunwait, do you have tips? The image isn't going to the site in full and it's missing the date and time. Thanks.

Possibly resolved. I've been experiencing this issue since December on a brand new Pi 4 and HQ Cam with Raspbian Buster (fully updated straight after install). Hopefully recent Pi firmware updates have resolved it. The Pi would run for a random number of hours (usually between three and twelve) and then stop capturing. The final capture would usually have some corruption, either covered in pink stripes or red/pink pixel "noise" all over the darkest areas of the image. Sometimes the Pi would remain responsive and generate the startrails, timelapse, etc. at the end of the night (with whatever images it had before the camera hung), and it would be possible to remote desktop to it. Other times it would completely hang and require a power-cycle. On the occasions where it remained responsive, restarting with sudo shutdown -r now would cause the Pi to hang and require a power-cycle. Instead, forcing a hard reboot as below would kick it back into life (after a delay, presumably because of the unclean shutdown).

```
echo s | sudo tee /proc/sysrq-trigger
echo u | sudo tee /proc/sysrq-trigger
echo b | sudo tee /proc/sysrq-trigger
```

There are a few threads that seem to suggest it is a firmware problem, though the WARNING: CPU: 2 PID: 20428 at drivers/firmware/raspberrypi.c:63 rpi_firmware_transaction+0xec/0x128 error is fairly generic from what I understand:

https://github.com/raspberrypi/linux/issues/4047
https://github.com/raspberrypi/linux/issues/4033
https://github.com/raspberrypi/firmware/issues/1552

I understand the issue may have been fixed in a Pi firmware update some time in March. I tried updating the Pi with:

```
sudo apt update
sudo apt full-upgrade
```

I believe this should update the firmware, but upon checking with /opt/vc/bin/vcgencmd version I was still getting a firmware version dated 25th February.
I then tried:

```
sudo rpi-update
sudo shutdown -r now
```

This forces a firmware update but I don't think it is recommended unless there is a specific reason to do so. Checking the firmware version again with /opt/vc/bin/vcgencmd version gives:

```
Apr 21 2021 15:48:42
Copyright (c) 2012 Broadcom
version a48d332c35ee1c1c1ab433228e23317f62dcc5fb (clean) (release) (start_x)
```

I've been up and running for two days now with no hangs or corrupted images. I do have a cron job that restarts the Pi every 24 hours during the daytime, but hopefully this is the fix as I haven't had a solid 24 hour run for months.

I was having a similar problem with a fresh Pi 4b, 8GB and the PiHQ cam - lots of lock-ups; sudo reboots would hang and be unresponsive on the reboot, forcing a hard power cycle.. got up to about 3 a day, and some nights wouldn't complete because of it. I ended up creating a .sh script to check the age of the live-view.jpg image, and if it's older than 2 min, to try and reboot. Also put in a script to email me when it reboots. Followed the firmware update you mentioned earlier today, and it's been decently stable so far, crossing my fingers for a smooth run tonight. we'll see if i get any reboot emails in the morning.

I was experiencing the same problem of "hangs" as described above after updating my RPi. Prior to the fix, the firmware installed was dated Feb 2021. After following the instructions posted by @IanLauwerys, my RPi 4 is working fine... No hangs.

Hello, I seem to be having the same issue as everyone in these threads. Running a 4b with HQ camera, it is intermittently freezing during raspistill calls. Using the --verbose output, I see a line stating "raspistill" Camera App (commit 4a0a19b88b43 Tainted). I've tried as @IanLauwerys said with rpi-update. It froze after about 3 hours, which is an improvement, but I am still getting the commit error I quoted above. Link to another comment I made with more information in post #1152 https://github.com/raspberrypi/firmware/issues/1552#issuecomment-873247512

Closing issue based on multiple people in this thread and others saying the most recent firmware fixed the problem of the RPiHQ camera stopping after a few hours. This issue wasn't related to the Allsky software.
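(For reference, a sketch of the freshness watchdog described a few comments up: check the age of the most recent capture and reboot when it goes stale. The original poster used a shell script; this is the same idea as a Node/TypeScript cron task, and the image path is an assumption.)

```ts
import { statSync } from "fs";
import { execSync } from "child_process";

const IMAGE = "/home/pi/allsky/image.jpg"; // assumed capture path
const MAX_AGE_MS = 2 * 60 * 1000; // 2 minutes, as in the comment above

try {
    const age = Date.now() - statSync(IMAGE).mtimeMs;
    if (age > MAX_AGE_MS) {
        // stale capture: assume the camera hung and force a reboot
        execSync("sudo reboot");
    }
} catch (e) {
    console.error("watchdog: cannot stat image", e);
}
```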
2025-04-01T04:35:42.665275
2021-10-04T19:53:17
1015570018
{ "authors": [ "EricClaeys", "linuxkidd", "maserowik" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11455", "repo": "thomasjacquin/allsky", "url": "https://github.com/thomasjacquin/allsky/issues/559" }
gharchive/issue
wlan0 instead of eth0

@EricClaeys If I recall correctly you pointed me to the place to modify the ethernet dashboard. I know it is purely cosmetic, but is there a way the default can have this changed in the repo?

```
cd /var/www/html/includes
sudo nano -l dashboard_eth0.php
```

line 65: change wlan0 to eth0

Hi @maserowik ... the file you're looking for is: /var/www/html/includes/dashboard_eth0.php You'll see it around line number 65. Just change wlan0 on that line to eth0.

Hi @maserowik -- I submitted a PR to fix this, and it was merged. Thanks for reminding us about that one! :+1: I'll go ahead and close the ticket.

@maserowik I could have sworn I fixed this after you first pointed it out. Either way, thanks to @linuxkidd for actually fixing it. @linuxkidd The LAN and WLAN pages on the GUI display the information correctly but (for me and another user) the "start" and "stop" buttons don't work. If I manually execute sudo ifdown eth0 or sudo ifup eth0 (or the same for wlan0) they don't work either. Do they work on your Pi?

Hmm... I'll have to get a different test setup going to play with that... I've opened a new issue against allsky-portal -- I'll investigate/update there.
2025-04-01T04:35:42.700766
2015-06-26T00:17:31
91109820
{ "authors": [ "bloudermilk", "thomseddon" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11459", "repo": "thomseddon/node-oauth2-server", "url": "https://github.com/thomseddon/node-oauth2-server/issues/198" }
gharchive/issue
Question: Is it possible to use the password grant type without passing the client secret? I'm considering using this module to secure an iOS app API. The app is trusted so we're interested in using the "password" grant type so that we can avoid using a webpage and redirects. It looks like this module supports the grant type but it requires the client secret, which for obvious reasons we can't store safely in the app. Does this module support this or is there another recommended method? I'm currently passing in an arbitrary value for the client secret and ignoring it in getClient.

You can use the password grant type but you should be prepared for the fact that if someone finds your secret (or finds you're ignoring it) they can generate tokens themselves with correct credentials. My approach here has been to use a secret with a view to rotating it if there ever was observed misuse; I'd be interested to know how others approach this.

Thanks for the tips. We do intend to use the client secret for other trusted endpoints so unfortunately we won't be able to go with the rotation strategy you mentioned. For now we'll continue to pass a placeholder secret and ignore it, unless you have any other ideas :)

The rotation should still work: you can create a client/secret pair and put these in your application; then as time goes on you may generate a new pair, then when ready you can disable the original pair. The idea is you just treat yourself as another (perhaps more privileged) API client.
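(A sketch of the "placeholder secret" approach described above, against the 2.x model callback API; the client id, the public-client policy and the grant check shown here are assumptions, not this module's prescribed usage.)

```ts
const model = {
    // 2.x signature: getClient(clientId, clientSecret, callback)
    getClient(clientId: string, clientSecret: string | null, callback: (err: any, client?: any) => void) {
        if (clientId === "ios-app") {
            // hypothetical trusted first-party app: don't verify the placeholder secret
            return callback(null, { clientId });
        }
        // ...look up confidential clients and verify clientSecret here...
        callback(null, null);
    },
    grantTypeAllowed(clientId: string, grantType: string, callback: (err: any, allowed?: boolean) => void) {
        callback(null, clientId === "ios-app" && grantType === "password");
    },
    // getUser, saveAccessToken, etc. omitted
};
```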
2025-04-01T04:35:43.289531
2024-04-03T09:29:15
2222401666
{ "authors": [ "ja573", "rhigman", "tosteiner" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11460", "repo": "thoth-pub/thoth", "url": "https://github.com/thoth-pub/thoth/issues/588" }
gharchive/issue
Dissemination: Handling of DOIs from registration agencies other than Crossref

Describe the bug Thoth's auto-dissemination workflow currently assumes all DOIs are eligible for submission to / update via Crossref. As we've learned recently, there are some cases where publishers have existing DOIs that have previously been registered with DataCite (in this case: punctum's older books), and when the auto-dissem workflow now sends these on to Crossref for updating, Crossref throws an error because they don't recognise the DOI prefix. For now, a pragmatic fix proposed by @rhigman would be to list all books with that specific DOI prefix in a special EXCLUDE rule to tell the Crossref auto-dissem workflow to omit those records. In the future, we might have more publishers with legacy books registered with DataCite, so it might be pertinent to consider a more systematic approach, e.g. to include DOI prefixes in the publisher-level metadata, which would then enable us to tailor the dissemination workflow to a particular set of DOI prefixes. (also tagging @hannahhillen :) )

To be decided whether this requires fixing/mitigating at the thoth end or the thoth-dissemination end (or both). The thoth-dissemination code could be made more defensive, given that very few checks are made by Crossref at the time of submission (the error is only raised later in a report via email). For example, there could be an initial check of the DOI prefix against the Crossref Get Prefix Publisher endpoint (https://doi.crossref.org/getPrefixPublisher/?prefix=[prefix]) - although note that the docs mark this as "legacy". https://api.crossref.org/swagger-ui/index.html#/Prefixes/get_prefixes__prefix_

Side note: this also raises the more fundamental question around exploring DataCite membership (and potential sponsorship structure similar to Crossref's) - but this seems not urgent at this stage
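(A sketch of the defensive prefix check proposed above, using the non-legacy REST endpoint linked from the issue; the surrounding submission hook is left out and the example prefix is hypothetical.)

```ts
// returns true if Crossref knows the DOI prefix (e.g. "10.12345"),
// so records whose DOIs were registered elsewhere (e.g. with DataCite) can
// be skipped before submission instead of failing later in the emailed report
const isCrossrefPrefix = async (prefix: string): Promise<boolean> => {
    const res = await fetch(`https://api.crossref.org/prefixes/${prefix}`);
    return res.ok; // Crossref returns 404 for unknown prefixes
};
```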
2025-04-01T04:35:43.291461
2023-03-08T11:20:54
1615103659
{ "authors": [ "rhigman" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11461", "repo": "thoth-pub/thoth", "url": "https://github.com/thoth-pub/thoth/pull/484" }
gharchive/pull-request
Support filtering works by multiple work_statuses and by updated_at_with_relations Required for automatic dissemination (initially to Crossref), where we need to find works which are either ACTIVE or FORTHCOMING, and which have had their metadata updated within a certain timeframe (i.e. since the last scheduled dissemination - as we will be making both initial submissions and updates). Also enables filtering by multiple language_relations. Filtering by updated_at_with_relations requires specifying a timestamp and whether the field value should be greater than (after) or less than (before) this timestamp. As discussed, due to other priorities, we'll keep the basic TimeExpression filtering for now and improve it later under #486.
2025-04-01T04:35:43.362066
2017-07-18T20:19:25
243838810
{ "authors": [ "Andrewpk", "gfontenot", "johnnysay", "tkohout", "tonyd256", "wongzigii" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11462", "repo": "thoughtbot/Curry", "url": "https://github.com/thoughtbot/Curry/pull/39" }
gharchive/pull-request
Fix Swift 4 ambiguity Latest Xcode 9 beta 3 shows ambiguous use of curry when the swift 4 version flag is set for the project. This is likely related to changes in the compiler implementation in swift 4 - see more here: https://github.com/Swinject/SwinjectAutoregistration/issues/29 The solution is to disambiguate by adding parentheses to functions with more than one parameter. As far as I tested this, it should be backward compatible with earlier versions of swift. I would wait before merging this to see whether it is a compiler bug in the beta or a real change. Is there anything holding this up from getting merged? Not sure who's maintaining this anymore ... @sharplet ? It's surprising to me that this works at all, beyond that it's needed. My understanding was that Swift moved away from letting you pass tuples to functions that take multiple arguments, and yet this seems to be exactly what this is doing? Bump Any clue about when this will be merged? I'll get this into a release today. Curry 4.0 is now released, which includes this change and so should fix compatibility with Swift 4. Sorry for the delay on this btw, and thank you for your PR and your patience. Thank you @gfontenot for your fast reaction! However when running pod install / pod update I still have version 3.0.0. I have also noticed that in https://github.com/thoughtbot/Curry/blob/master/Curry.podspec the version is still 3.0.0. d'oh! Thanks. I'll get that pushed.
2025-04-01T04:35:43.366229
2019-01-29T20:32:34
404474178
{ "authors": [ "maymillerricci", "thadwoodman" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:11463", "repo": "thoughtbot/bamboo", "url": "https://github.com/thoughtbot/bamboo/issues/449" }
gharchive/issue
Key should be "filename" instead of "filname" https://github.com/thoughtbot/bamboo/blob/23b8dc85d1cb095448e0f596759166fcc2caaf0c/lib/bamboo/email.ex#L217 Thanks for reporting! #454 fixes this.