I don't think CUDA is really much use for computational algebraic/general topology. Sure, you can use it to mess around with homology groups etc., but that's rather algebra than topology, which itself tends to be too abstract/"dynamic" to really benefit from SIMD. If you don't have a clear idea, I'd first try some CPU implementations and only port to CUDA as a later optimisation. Anyway, what you describe sounds rather like you're mainly interested in creating visual representations of topological spaces, i.e. in giving concrete embeddings T → ℝ³. That's rather in the realm of differential topology, which I reckon could quite well make use of GPGPU processing. However, for the final "visualisation step" you want to use something more specific; OpenGL + GLUT is fine. You can use that from many languages. I'd recommend Haskell (undisputedly great for everything mathematical), but C or C++ is of course closer to the library; you will find more examples and can more easily get CUDA in.
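As a concrete illustration of the "concrete embedding T → ℝ³" idea (my own sketch, not from the answer), here is the standard parametric torus; a vertex grid like this is exactly what you would hand to OpenGL as a mesh. The function name and default radii are my own choices.

```python
import math

def torus_points(R=2.0, r=1.0, n=32, m=16):
    """Sample the parametric embedding T^2 -> R^3 of a torus with
    major radius R and minor radius r, on an n x m grid."""
    pts = []
    for i in range(n):
        u = 2 * math.pi * i / n
        for j in range(m):
            v = 2 * math.pi * j / m
            pts.append(((R + r * math.cos(v)) * math.cos(u),
                        (R + r * math.cos(v)) * math.sin(u),
                        r * math.sin(v)))
    return pts

pts = torus_points()
print(len(pts))  # 32 * 16 = 512 vertices
```

The same pattern (two parameters, one formula per coordinate) works for a Möbius strip or a knot; only the coordinate formulas change.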
I have never had any experience with creating simulations or 3D objects, but I want to start learning, and to create a small application which will simulate topological objects in 3D. By "topological object" I mean mathematical topology (algebraic/general topology): torus, knot, Möbius strip, etc. So I don't mean anything like network topology. I have been searching the Internet for some example code, but I wasn't able to find anything useful. If you could offer me some materials, I would be glad. On the other hand, I want to hear your opinion about which programming language/paradigm/extension I should use. I also plan to use CUDA in the project to achieve speed-ups.
How to create 3D mathematical topology simulations?
I think the described answers work only if every build job has an exclusive cache. If I have different Jenkins jobs running on Docker slaves, I will run into trouble with this scenario: if the jobs run at the same time and write to the same mounted cache in the host filesystem, it can become corrupted. Either that, or you must mount a folder with the job name as part of the filesystem path (since one Jenkins job runs only once at a time).
We have a Jenkins Docker slave template that successfully builds a piece of software, for example a Gradle project. This is based on https://hub.docker.com/r/evarga/jenkins-slave/. When we fire up the Docker slave, the dependencies are downloaded every time we do a build. We would like to speed up the build so that downloaded dependencies can be reused by the same build or even by other builds. Is there a way to specify an external folder so that the cache is used? Or another solution that reuses the same cache?
How to cache downloaded dependencies for a Jenkins Docker SSH Slave (Gradle)
No, you don't. You can log in using the aws-sdk like this:

const cognito = new aws.CognitoIdentityServiceProvider({ region });
cognito.adminInitiateAuth({
  AuthFlow: 'ADMIN_NO_SRP_AUTH',
  ClientId: clientId,
  UserPoolId: poolId,
  AuthParameters: {
    USERNAME: email,
    PASSWORD: password,
  },
});
I have a JavaScript project where I use the aws-sdk. Now I want to use amazon-cognito-identity-js. On its page it says: "Note that the Amazon Cognito AWS SDK for JavaScript is just a slimmed down version of the AWS JavaScript SDK namespaced as AWSCognito instead of AWS. It references only the Amazon Cognito Identity service." And indeed, I can for example create a CognitoIdentityServiceProvider with:

CognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

But how do I do things like authenticate a user? According to the amazon-cognito-identity-js documentation:

authenticationDetails = new CognitoIdentityServiceProvider.AuthenticationDetails({Username: ..., Password: ...});
cognitoUser.authenticateUser(authenticationDetails, ...)

But the CognitoIdentityServiceProvider object does not have an AuthenticationDetails property. Do I have to do something different when I use the aws-sdk instead of amazon-cognito-identity-js? Or is my assumption wrong, and do I need both the aws-sdk and amazon-cognito-identity-js?
Do I need amazon-cognito-identity-js if I already have the aws-sdk (javascript)?
It's more a definitional thing, IIUC. A valid certification path is defined in RFC 5280, and one condition is that its first certificate is signed by a trust anchor (and that the certificate's issuer name matches that trust anchor's name). (Trust anchors need not be certificates.)
In the PKIX documentation it mentions: "The certificate representing the TrustAnchor should not be included in the certification path." My question is, where does this restriction come from? In RFC 5280 I only found: "A certificate MUST NOT appear more than once in a prospective certification path." Does statement (2) in the RFC somehow imply statement (1)? Because I cannot see it. What problem would be created by having the trust anchor in the path as well? In the end, the TA certificate can validate itself. Could anyone please explain this?
Why should the trust anchor not be included in the PKIX certification path?
There is no real need to issue aws configure; instead, as long as you populate the env vars

export AWS_ACCESS_KEY_ID=aaaa
export AWS_SECRET_ACCESS_KEY=bbbb

(also export your zone and region) and then issue

aws ecr get-login --region ${AWS_REGION}

you will achieve the same desired AWS login status. As far as troubleshooting goes, I suggest you remote-login into your running container instance using

docker exec -ti CONTAINER_ID_HERE bash

and then manually issue the above aws-related commands interactively to confirm they run OK before putting them into your Dockerfile.
So I have a Docker container running Jenkins and an ECR registry on AWS. I would like to have Jenkins push containers back to the ECR registry. To do this, I would like to be able to automate the aws configure and get-login steps on container startup. I figured that I would be able to

export AWS_ACCESS_KEY_ID=*
export AWS_SECRET_ACCESS_KEY=*
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_OUTPUT=json

which I expected to cause aws configure to complete automatically, but that did not work. I then tried creating configs as per the AWS docs and repeating the process, which also did not work. I then tried using aws configure set, also with no luck. I'm going bonkers here; what am I doing wrong?
'aws configure' in docker container will not use environment variables or config files
The answer is a .gitignore file. See https://git-scm.com/docs/gitignore for more information.
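A minimal demo in a hypothetical throwaway repository (paths mirror the question's .gitignore). One important detail the short answer skips: .gitignore only affects untracked files, so anything committed before it existed must be removed from the index with `git rm -r --cached <paths>` and committed again.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
printf 'bin/\n.settings/\n.classpath\n.project\n' > .gitignore
mkdir bin && echo build-output > bin/out.txt
echo eclipse-metadata > .classpath
echo 'public class Main {}' > Main.java
git add .
git ls-files   # only .gitignore and Main.java are staged; bin/ and .classpath are ignored
```

If bin/ or .classpath had already been committed, `git add .` would keep tracking them; that is when `git rm -r --cached bin .classpath` is needed before the push.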
This is specifically for Java Eclipse projects. Is there a reason to have anything other than the src and lib directories on GitHub? How much value does providing /bin, /.settings, .classpath, .project, etc. add? I'd like to have them locally, but not displayed on GitHub. Is there a way to do this? Thanks! EDIT: contents of my .gitignore file (which is located in my local git project directory):

bin/
.settings/
.classpath
.project

I did a git add (to add this new .gitignore file) and a git commit to my local repo. However, when I push to my remote GitHub repo (https://github.com/VKkaps/Breakout), I still see everything, including the .gitignore file now. Help?
How to not push certain directories/files to Github?
I have 99,561 small files totalling 466MB and experimented with uploading these to S3 as quickly as possible from my m4.16xlarge (64 CPU) EC2 instance.

aws s3 sync took 10 minutes.
CyberDuck promised to take 17 minutes, but hung; I terminated it after 45 minutes.
CloudBerry seems to be Windows-only at first glance, so I did not test it.
s3-parallel-put --put=stupid took 9 minutes.
s3-parallel-put --put=stupid --processes=64 took 1 minute.
s3-parallel-put --put=stupid --processes=256 took 19 seconds.
s3-parallel-put --put=stupid --processes=512 took 18 seconds.

I'll be moving forward with an s3-parallel-put solution.
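The fan-out idea behind s3-parallel-put can be sketched as follows. This is my own illustration: the uploader is stubbed out, and a real version would call something like boto3's s3.upload_file(path, bucket, key) inside upload_one.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub standing in for one S3 PUT; a real version would perform the
# network call here (e.g. boto3's s3.upload_file).
def upload_one(key):
    return key

keys = [f"file-{i:05d}" for i in range(1000)]

# Many small objects are latency-bound rather than bandwidth-bound, so a
# large worker pool helps, exactly like s3-parallel-put's --processes flag.
with ThreadPoolExecutor(max_workers=64) as pool:
    uploaded = list(pool.map(upload_one, keys))

print(len(uploaded))  # 1000
```

The timings in the answer show this effect directly: going from 1 to 64 to 256 workers cuts the wall-clock time by roughly the parallelism factor until overhead dominates.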
I have to upload 100k small files (total size: 200MB). I've tried to do that via the web browser (AWS Console), but in the first 15 minutes I'd uploaded only 2MB. What is the fastest way to upload 100k small files to S3?
S3: how to upload large number of files
First make sure your identity pool and user pool are set up for Google authentication. Then note that federatedSignIn has a capital last I. And finally, just change your second param in the call to federatedSignIn as follows:

Amplify.Auth.federatedSignIn('google', { token: googleResponse.id_token, expires_at: googleResponse.expires_at }, {email, name})
I'm trying to use AWS Amplify to support email/password and Google authentication. Now, I want to store the details from Google into my user pool in AWS. I don't understand the flow here; there are many blog posts I read, but most of them are just confusing. Here's what I tried to do:

// gapi and Amplify included
googleSigninCallback(googleUser => {
  const googleResponse = googleUser.getAuthResponse();
  const profile = googleUser.getBasicProfile();
  const name = profile.getName();
  const email = profile.getEmail();
  Amplify.Auth.federatedSignin('google', googleResponse, {email, name})
    .then(response => { console.log(response); }) // is always null
    .catch(err => console.log(err));
});

In DevTools I see the following error on the request in the Network tab:

{"__type":"NotAuthorizedException","message":"Unauthenticated access is not supported for this identity pool."}

Why should I enable unauthenticated access to this pool? I don't want to. Am I doing this right? Is it even possible, or is it a good practice to store Google user details in the AWS user pool? If it's not a good practice, then what is? Also, if I want to ask the user for further details not provided by Google in the app and store them, how do I do that if we can't store the user in the user pool?
Using AWS Amplify to authenticate Google Sign In - federatedSignin returns null?
There's probably an easier way to do it (e.g. using --cli-input-json and providing the JSON in a file), but I got this working:

aws sns subscribe \
  --topic-arn arn:aws:sns:region:accountId:my_topic \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:region:differentAccountId:my_sqs_queue \
  --attributes '{\"RawMessageDelivery\": \"true\", \"FilterPolicy\": \"{\\\"filter\\\": [\\\"value1\\\", \\\"value2\\\"]}\"}'

The problem was the JSON included in a string, which needed \" to be escaped as \\\".
I'm trying to write a cross-account AWS CLI command to subscribe to a topic and create a filter for that subscription at the same time. Below is how my command looks:

aws sns subscribe --topic-arn arn:aws:sns:region:accountId:my_topic --protocol sqs --notification-endpoint arn:aws:sqs:region:differentAccountId:my_sqs_queue --attributes "{'RawMessageDelivery': 'true', 'FilterPolicy': '{\"filter\": [\"value1\", \"value2\"]}'}"

I'm getting the below error when I run this:

Unknown options: --attributes, [\value1\,, \value2\]}'}, {'RawMessageDelivery': 'true', 'FilterPolicy': '{" filter\:

I have admin access to both AWS accounts. Any suggestions on what I'm doing wrong? EDIT: I'm running this in a VS Code PowerShell terminal on Windows.
aws cli command to subscribe to a topic with filters
An Aurora cluster instance might be either a writer or a reader. Aurora clusters allow one writer and up to 15 readers. An instance's role might change when a failover happens. The writer DNS endpoint (the cluster writer endpoint) always resolves to the writer instance. The reader endpoint DNS randomly resolves to one of the reader instances, with a TTL of 1 second. (Note: it might point to the writer instance only if the writer is the only healthy instance available in the cluster fleet.) Update: the TTL is 5 seconds now.
I created an Amazon Aurora instance in my VPC. When the instance was created, it came with two endpoints, a writer and a reader endpoint. The instance is using a security group with an ingress rule (Type: All Traffic, Protocol: All, Port: All, Source: 0.0.0.0/0). I tried both MySQL Workbench and the mysql command-line client to connect to the endpoints. The connection to the reader endpoint worked, but the one to the writer endpoint didn't. The reader endpoint was read-only, so I was unable to build my DB using it. Any idea?
AWS RDS Writer Endpoint vs Reader Endpoint
There is nothing wrong with those patches; they look exactly how they should. Unified diff includes 3 lines of context (by default; this can usually be changed by the diff provider, and for git diff it is -U<n> or --unified=<n>). Let's look at the hunk in your first example:

@@ -362,7 +362,7 @@

It says that the hunk starts at line 362, and that 7 lines are included in the diff. If we look at the diff, we can see that it indeed starts at line 362 and is 7 lines long. Looking at the diff in more detail, we see that lines 362, 363, 364 are reproduced verbatim. Line 365 is labeled with a - (respectively +) because it got removed and another line re-inserted. This is highlighted by the red/green color in the output. One thing that is not in the actual diff file is GitHub's highlighting of exactly which parts of the line changed; that's a custom enhancement of GitHub's. After that, the next three context lines, which are not changed, are displayed verbatim. Unified diff simply provides context lines and includes them in the diff, and GitHub shows it this way, too. You have 1 changed line (365), plus three lines before and after for context; that makes 7 lines in total included in the patch/diff file (starting at 362).
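The arithmetic above can be reproduced with Python's difflib (my own illustration): a single change at line 365 with 3 context lines yields a hunk whose header starts at line 362 with length 7, exactly as in the question.

```python
import difflib

# 400 identical lines, except line 365 (1-based) is changed.
old = [f"line {i}\n" for i in range(1, 401)]
new = list(old)
new[364] = "line 365 CHANGED\n"

# n=3 is the default amount of context, same as git diff's -U3.
diff = list(difflib.unified_diff(old, new, n=3))
hunk_header = next(line for line in diff if line.startswith("@@"))
print(hunk_header.strip())  # @@ -362,7 +362,7 @@
```

The change itself sits 3 lines into the hunk, which is exactly the offset between the hunk start (362) and the highlighted line (365).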
When you look at a diff of a file, it will show you the diff info at the top, and then it highlights the changes below. However, in every example I have looked at, the line number that GitHub highlights with the change is always different from the line number that Git specified in the diff/patch info. For example, this commit (note the diff data says @@ -362,7 +362,7 @@ def association_instance_set(name, association, yet GitHub begins the highlighting at line 365). Or this one. Or this. Or finally this one. It seems as if the actual line number highlighted by GitHub is usually around 3 lines higher than the patch/diff data from Git specifies. When I check their API, pull down the first file I highlighted and linked above, split it into an array and then do a line count using the array index, I get a different result too. The line where the diff specifies the change is made, i.e. 362, comes out via my array-conversion method to 364, and not 365 as GitHub highlighted it. So something is a bit off. Why is that?
Why is the Git Diff/Patch info different than the Github representation of that patch?
It's detailed fairly well in the Kubernetes Service documentation here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

To summarise: NodePort exposes the service on a port on each node, which can then be accessed externally. LoadBalancer uses a cloud provider's option for exposing a port, e.g. Azure Load Balancers are used and can potentially expose multiple public IP addresses and load-balance them against a larger pool of backend resources (Kubernetes nodes).
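The distinction can be seen in two minimal Service manifests (a hypothetical sketch; names, selector, and the nodePort value are my own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc-nodeport
spec:
  type: NodePort            # reachable on <any-node-ip>:30080
  selector: {app: my-app}
  ports:
  - port: 80
    nodePort: 30080
---
apiVersion: v1
kind: Service
metadata:
  name: my-svc-lb
spec:
  type: LoadBalancer        # the cloud provider allocates an external IP
  selector: {app: my-app}
  ports:
  - port: 80
```

With NodePort you bring your own external load balancing (or clients hit node IPs directly); with LoadBalancer the cloud provider provisions and wires one up for you.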
In https://kubernetes.github.io/ingress-nginx/deploy/baremetal/, in MetalLB mode, one node attracts all the traffic for ingress-nginx. With a NodePort we can also gather all the traffic and load-balance it to pods via a service. What is the difference between NodePort and MetalLB?
metalLB vs nodeport in kubernetes
You have to set up a webhook in the git repository and enable a flag in the job config. There is a simple write-up with images.
I have a Jenkins pipeline job. I want my job to be triggered whenever there is a commit in my GitHub repository. Note: I am able to do this as a freestyle project. Now I want it as a pipeline project.
Trigger a Jenkins Job when a git commit happens in my repo
I am not an expert about this, so please take this with a grain of salt. I guess that [NSString stringWithString:@"a"] will probably just return the literal string @"a", i.e. it just returns its argument. As @"a" is a literal, it probably resides in constant memory and can't be deallocated (so it should be initialized with a very high retain count).
@property (retain) NSString *testString; self.testString = [[NSString alloc] initWithString:@"aaa"]; [self.testString retain]; self.testString = [NSString stringWithString:@"a"]; [self.testString release]; [self.testString release]; Let's go line by line: Line 2: retain count of testString = 2 Line 3: retain count of testString = 3 Line 4: retain count of testString = 1 Line 5: retain count of testString = 0 Line 6: it should crash Even if there's other stuff holding to testString in CoreFoundation, it eventually will go away. But the app never crash due to this. Anyone could explain this? Thanks!
Over-release an object and the app does not crash
What happens if I send a message with MessageDeduplicationId test, the consumer processes the message and then deletes it, and then, about 1 minute later, I send the exact same message again? Will this message be ignored? The answer seems to be: yes. "Amazon SQS continues to keep track of the message deduplication ID even after the message is received and deleted." https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html
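The behavior can be sketched as a toy model (my own illustration, not SQS code): deduplication is keyed on MessageDeduplicationId, survives receive/delete, and expires only after the 5-minute deduplication interval.

```python
DEDUP_WINDOW = 300  # seconds; SQS FIFO uses a fixed 5-minute interval

class FifoQueueSketch:
    """Toy model of FIFO deduplication. Dedup state lives independently
    of the message body, so deleting the message does not reset it."""
    def __init__(self):
        self.seen = {}        # dedup_id -> timestamp of the accepted send
        self.messages = []

    def send(self, dedup_id, body, now):
        first = self.seen.get(dedup_id)
        if first is not None and now - first < DEDUP_WINDOW:
            return False      # duplicate: the send "succeeds" but is dropped
        self.seen[dedup_id] = now
        self.messages.append(body)
        return True

q = FifoQueueSketch()
assert q.send("test", "hello", now=0)        # first send is delivered
q.messages.clear()                           # consumer processes and deletes it
assert not q.send("test", "hello", now=60)   # 1 minute later: still ignored
assert q.send("test", "hello", now=301)      # after the 5-minute window: accepted
```

So the ID is not banned forever; it is only suppressed within the deduplication interval, regardless of whether the original message is still in flight.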
I'm trying to use the AWS SQS FIFO service together with an Elastic Beanstalk worker environment. Let's say I send a message with MessageDeduplicationId test; if I continue sending this exact message in the next 5 minutes, the message will be ignored, correct? What happens if I send a message with MessageDeduplicationId test, the consumer processes the message and deletes it, and then, about 1 minute later, I send the exact same message again? Will this message be ignored? My question is, does deduplication occur only as long as the same MessageDeduplicationId is still in queue/in flight? Or is the ID banned forever, so that no other message with the same ID can be sent? Thanks.
How does AWS FIFO SQS deduplication ID work?
Another tricky way might be creating a new organization and configuring organization-wide templates.
Github recently announced the addition of pull request templates. This is an awesome feature that has been heavily requested in the community for some time. These templates are added by including a special file named PULL_REQUEST_TEMPLATE.md to the root of the project or within the .github directory. I have multiple projects for which I would like to use the same template. What is the best way to keep these templates in sync across projects? (Git submodules are the only thing I can think of, but that seems pretty heavy-handed and complexity-prone for such a simple use case).
Best way to share Github pull request templates between projects?
Here is what I did:

volumeMounts:
{{- range $key, $value := pluck .Values.service_name .Values.global.mountPath | first }}
  - name: {{ $key }}
    mountPath: {{ $value }}
    subPath: {{ $key }}
{{- end }}

helm template --set service_name=hello [...] seems to render exactly what you want. Notice I changed the line with the mountPath field, $value -> {{ $value }}, and the line with range, .Values.global.mountPath.serviceName -> .Values.global.mountPath
I have a values.yml file that takes in a list of mountPaths with this format:

global:
  mountPath:
    hello:
      config: /etc/hello/hello.conf
      node: /opt/hello/node.jks
      key: /opt/hello/key.jks
      cert: /opt/hello/cert.jks

I want the resulting rendered template to be:

volumeMounts:
  - name: config
    mountPath: /etc/hello/hello.conf
    subPath: config
  - name: node
    mountPath: /opt/hello/node.jks
    subPath: node
  - name: key
    mountPath: /opt/hello/key.jks
    subPath: key
  - name: cert
    mountPath: /opt/hello/cert.jks
    subPath: cert

How would I accomplish this? I tried the following in my deployment.yaml template file:

volumeMounts:
{{- range $key, $value := pluck .Values.service_name .Values.global.mountPath.serviceName | first }}
  - name: {{ $key }}
    mountPath: $value
    subPath: {{ $key }}
{{- end }}

with the following helm command, but it won't work for me. How do I get to the format I want above, based on the input?

helm upgrade --install \
  --namespace ${NAMESPACE} \
  --set service_name=hello \
  --set namespace=${NAMESPACE} \
  hello . \
  -f values.yaml
Helm Chart configmap templating with toYaml
Kubernetes Pods have an imagePullPolicy field. If you set it to Never, it will never try to pull an image, and it's up to you to ensure that the Docker daemon which the kubelet is using contains that image. The default policy is IfNotPresent, which should work the same as Never if an image is already present in the Docker daemon. Double-check that your Docker daemon actually contains what you think it contains, and make sure your imagePullPolicy is set to one of the two that I mentioned.

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-image
    image: local-image-name
    imagePullPolicy: Never
How do I refer to a local image that exists?

kubectl run u --rm -i --tty --image my_local_image -- bash

results in ImagePullBackOff, and kubectl is obviously trying to pull from a remote repository instead of the local registry. This answer is unhelpful, and the follow-up refers to minikube and kubernetes. Some event logs:

Events:
  Type     Reason                 Age               From                         Message
  ----     ------                 ----              ----                         -------
  Normal   Scheduled              1m                default-scheduler            Successfully assigned u-6988b9c446-zcp99 to docker-for-desktop
  Normal   SuccessfulMountVolume  1m                kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-q2qm7"
  Normal   SandboxChanged         1m                kubelet, docker-for-desktop  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                23s (x3 over 1m)  kubelet, docker-for-desktop  pulling image "centos_postgres"
  Warning  Failed                 22s (x3 over 1m)  kubelet, docker-for-desktop  Failed to pull image "centos_postgres": rpc error: code = Unknown desc = Error response from daemon: pull access denied for centos_postgres, repository does not exist or may require 'docker login'
  Warning  Failed                 22s (x3 over 1m)  kubelet, docker-for-desktop  Error: ErrImagePull
  Normal   BackOff                9s (x5 over 1m)   kubelet, docker-for-desktop  Back-off pulling image "centos_postgres"
  Warning  Failed                 9s (x5 over 1m)   kubelet, docker-for-desktop  Error: ImagePullBackOff
kubectl run from local docker image?
Run the command aws eks --region us-west-2 update-kubeconfig --name my-app-prd to update the kubectl config, and then run kubectl get svc.
My company has given me a replacement laptop because the previous one died. Using aws configure, I have configured AWS, and I have also downloaded kubectl and added it to the path. I have updated the kubectl config using

aws eks update-kubeconfig \
  --region us-west-2 \
  --name my-app-prd \
  --role-arn arn:aws:iam::xxxxxxxxxxxx:role/role_name

I changed the cluster using

kubectl config use-context arn:aws:eks:us-west-2:xxxxxxxxxxxx:cluster/my-app-prd

When I run kubectl get svc I get the following error message:

An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::xxxxxxxxxxxx:user/<AWS_USERNAME> is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxxxxxx:role/role_name
E0201 09:59:49.896242    7160 memcache.go:265] couldn't get current server API group list: Get "https://...eks.amazonaws.com/api?timeout=32s": getting credentials: exec: executable aws failed with exit code 254
User: arn:aws:iam::xxxxxxxxxxxx:user/<AWS_USERNAME> is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxxxxxx:role/role_name
As you mentioned that only kubelet.crt has expired and apiserver-kubelet-client.crt is valid, you can try to renew it with the command kubeadm alpha certs renew, based on the documentation. A second way to renew kubeadm certificates is to upgrade the version, as in this article. You can also try using kubeadm init phase certs all; this was explained in this Stack Overflow case. Let me know if that helped. If not, provide more information with more logs.
We currently have a 2-master, 2-worker node cluster on Kubernetes v1.13.4. The cluster is down, as the kubelet certificate located in /var/lib/kubelet/pki/kubelet.crt has expired and the kubelet service is not running. On checking the kubelet logs I get the following error:

E0808 09:49:35.126533   55154 bootstrap.go:209] Part of the existing bootstrap client certificate is expired: 2019-08-06 22:39:23 +0000 UTC

The certificates ca.crt and apiserver-kubelet-client.crt are valid. We are unable to renew the kubelet.crt certificate manually using the kubeadm-config.yaml. Can someone please provide the steps to renew the certificate? We have tried setting the --rotate-certificates property and also using kubeadm-config.yaml, but since we are on v1.13.4, the kubeadm --config flag is not present.
Renewing Kubernetes cluster certificates
You are correct. kubectl top pod also presents the current usage in millicores, so it can also be used to calculate a possible limit reduction.

kubectl top pod | grep prometheus-k8s | awk '{print $2}'
405m

So I set my limit with (405 * 100) / 75 = 540m.
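The calculation in both the answer and the question is the same one-liner: since utilization is reported as a percentage of the limit, the limit that puts current usage at a target utilization is usage divided by the target fraction. A small sketch (function name is my own):

```python
def limit_for_target(usage_millicores, target_fraction):
    """Return the CPU limit (millicores) at which the given usage
    equals the target utilization fraction of the limit."""
    return round(usage_millicores / target_fraction)

print(limit_for_target(405, 0.75))           # 540m, as in the answer
print(limit_for_target(0.1501 * 1000, 0.75)) # ~200m, as in the question
```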
Let's say I have the following resource requests:

cpu 25m
mem 256Mi

And I have the following limits:

cpu 1
mem 1Gi

And I have the following utilization:

cpu 15.01%
mem 17.24%

Question: is utilization % of limits or % of requests? My presumption is that it would be % of limits. So then, if I want my CPU at 75% utilization, I would just have to scale it down, which would get me to 200m using the math below:

(15.01% * 1000) / 0.75 = 200

[Update] I was looking at GCP, in the monitoring section of GKE pods.
Understanding Kubernetes pod resource utilization in GCP monitoring
The values you have there are OK, but meta http-equiv is highly unreliable. You should be using real HTTP headers (the specifics of how you do this will depend on your server, e.g. for Apache).
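Assuming Apache with mod_headers enabled (a sketch; the directive names are standard mod_headers, but check your server's module setup), the three meta tags translate into real response headers like this:

```apache
# Send the same no-cache directives as real HTTP headers
# instead of relying on <meta http-equiv>.
<IfModule mod_headers.c>
  Header set Cache-Control "no-cache, no-store, must-revalidate"
  Header set Pragma "no-cache"
  Header set Expires "0"
</IfModule>
```

Other servers have equivalents (e.g. add_header in nginx); the point is that intermediaries and browsers honor the transport-level headers far more reliably than markup.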
I have an HTML page. The problem is that I do not want the users to have to refresh the page each time I put up new content. I have the following code in order to make sure that the page is not cached:

<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate"/>
<meta http-equiv="Pragma" content="no-cache"/>
<meta http-equiv="Expires" content="0"/>

The problem is, I still need to do a refresh on the page in order to get the most current content to show up. Am I doing something wrong? Should I be using some other tags?
Prevent caching of HTML page [duplicate]
Updated answer (August 2022): the problem is fixed in the official v9.6 image.

Updated answer (June 2022): I was able to build a working image by using this Dockerfile:

FROM sonarqube:community
USER root
RUN apk add --no-cache --upgrade 'zlib>=1.2.12-r1';
USER sonarqube

(credits to this GitHub comment)

Original answer (April 2022): as stated in this issue from the Docker-SonarQube repository, this bug appeared when the Alpine base image was updated to 3.14.5+. So you have two choices for now: either roll back to SonarQube 9.2.4 (prior versions have the Log4j vulnerability), or re-build the latest SonarQube image yourself with the Alpine version set to 3.14.3 (as explained in this comment).
I'm trying to create a Docker container with SonarQube inside it, but I get this error while composing for the first time:

Caused by: java.util.concurrent.ExecutionException: org.apache.lucene.index.CorruptIndexException: checksum failed (hardware problem?) : expected=f736ed01 actual=298dcde2 (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/opt/sonarqube/data/es7/nodes/0/_state/_7w.fdt")))

I tried installing it on a fresh instance with a fresh Docker installation. I even tried to install it on a different server to rule out hardware failure, and I still get the same error. What could be the cause of it?

docker-compose.yml:

version: "3"
services:
  sonarqube:
    image: sonarqube:community
    depends_on:
      - db
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonar
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
    ports:
      - "9000:9000"
  db:
    image: postgres:12
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  postgresql:
  postgresql_data:
SonarQube Docker Installation CorruptIndexException: checksum failed
Did you try proxy_cache_valid 200 1d;?

location ~ ^/1 {
    proxy_pass http://10.10.52.126:8090;
    proxy_cache api_cache;
    proxy_cache_valid 200 1d;
}

Link
My nginx server is configured like this:

......
server {
    # Status page
    location /nginx_originserver {
        stub_status on;
    }
    listen 80;
    location ~ ^/1 {
        proxy_pass http://10.10.52.126:1239;
        proxy_cache api_cache;
    }
    ......
}

In this case, when I browse http://localhost/1/thumbnail.jpg, the image file is cached. But when I change the proxy to a location which returns JSON, like below, and browse http://localhost/1/api_service, the JSON is not cached. Why is just the image file cached but not the JSON, and how can I cache the JSON?

location ~ ^/1 {
    proxy_pass http://10.10.52.126:8090;
    proxy_cache api_cache;
}
nginx doesn't cache json, but caches .jpg file
There's no standard way to do this, but various malloc debugging tools may have a way of doing it. For example, if you use valgrind, you can use VALGRIND_CHECK_MEM_IS_ADDRESSABLE to check this and related things.
I want to know if a pointer points to a piece of memory allocated with malloc/new. I realize that the answer for an arbitrary address is "no, you can't", but I do think it is possible to override malloc/free and keep track of allocated memory ranges. Do you know a memory management library providing this specific tool? Do you know something for production code? Valgrind is great, but it is too much instrumentation (slow), and as Will said, we don't want to use Valgrind like this (making the software crash is good enough). Mudflap is a very good solution, but it is dedicated to GCC, and sadly a check does not simply return a boolean (see my answer below). Note that checking that memory writes are legal is a security issue, so looking for performance is motivated.
Check if a pointer points to allocated memory on the heap
I don't know why this error appeared, but after I did

[root@vm5808 ~]# /etc/init.d/nginx stop
Stopping nginx:                       [  OK  ]
[root@vm5808 ~]# /etc/init.d/nginx start
Starting nginx:                       [  OK  ]

the error was gone:

[root@vm5808 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

I had tried restart before stop and start, but nginx wouldn't restart:

[root@vm5808 ~]# /etc/init.d/nginx restart
nginx: [emerg] listen() to 0.0.0.0:80, backlog 511 failed (98: Address already in use)
nginx: configuration file /etc/nginx/nginx.conf test failed
I have two servers on CentOS: nginx (proxy) + Apache. I need to restart nginx, but if I try to test the configuration before the restart, I get the following error:

[root@vm5808 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [emerg] listen() to 0.0.0.0:80, backlog 511 failed (98: Address already in use)
nginx: configuration file /etc/nginx/nginx.conf test failed

How can I solve this problem? Thanks! P.S. nginx listens on port 80, Apache listens on 81.
Nginx (proxy) + Apache: two processes listening on the same port
docker pull (and push) run over HTTPS, and I believe if you use the default HTTP/TLS port 443 for your server then you won't need to specify it in your image tags.
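One common way to get port-free tags is to terminate TLS on 443 in front of the registry with a reverse proxy. A minimal sketch (the domain name, certificate paths, and a registry container on its default port 5000 are all assumptions, not from the question):

```nginx
server {
    listen 443 ssl;
    server_name docker.mydomain.com;              # hypothetical domain
    ssl_certificate     /etc/nginx/certs/registry.crt;
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # image layers can be large; don't cap the request body size
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://127.0.0.1:5000;         # registry's default port
        proxy_set_header Host $http_host;
    }
}
```

With something like this in place, `docker push docker.mydomain.com/image:version` needs no explicit port.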
This is to enable one to use docker tag image:version docker.mydomain.com/image:version instead of explicitly specifying the port.
Is there a default port for a docker private registry
In general, you can check if the imageId of your container matches the one of your image:

```shell
docker inspect -f '{{ .Image }}' bashapp
```

As you won't be able to remove an image that is still used, you should be able to find the image of your container with the following command:

```shell
docker images | grep $(docker inspect -f '{{ .Image }}' bashapp | cut -d ":" -f 2 | head -c 12)
```

If you know which image is used, you should also be able to tell whether the image is newer or not.
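A more direct variant of the check above is to compare the two IDs themselves: `docker inspect -f '{{.Image}}'` on a container and `docker inspect -f '{{.Id}}'` on an image both return the full ID, usually with a `sha256:` prefix. A small sketch (the helper function is hypothetical; the container and image names are the ones from the question):

```shell
# hypothetical helper: are two image IDs the same, ignoring the sha256: prefix?
same_image() {
  a=${1#sha256:}
  b=${2#sha256:}
  [ "$a" = "$b" ]
}

# against a live daemon you would feed it the real IDs, e.g.:
#   same_image "$(docker inspect -f '{{.Image}}' bashapp)" \
#              "$(docker inspect -f '{{.Id}}' me/myapp:latest)"

# demonstration with fixed strings:
if same_image "sha256:fe22fc800843" "fe22fc800843"; then
  echo "same image"
else
  echo "different images"
fi
```

If the IDs match, the container is running exactly the image you loaded; if not, the `CREATED` timestamps from `docker inspect` can tell you which is newer.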
Suppose I have one running container:

```
CONTAINER ID   IMAGE             COMMAND   CREATED       STATUS       PORTS                  NAMES
59f00e5c96d6   me/myapp:latest   "bash"    9 hours ago   Up 7 hours   127.0.0.1:80->80/tcp   bashapp
```

Suppose I found a mysterious docker image archive file me-myapp-latest.tar.gz out of nowhere. I want to know, relative to the image used to start the running container, whether that file contains an older or newer version of the image. I load the archive into docker using docker load --input me-myapp-latest.tar.gz. docker images now shows:

```
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
me/myapp     latest   fe22fc800843   12 hours ago   123MB
```

This gives no indication of whether or not the me/myapp:latest shown by docker images is the same as that shown by docker ps -a. They are both named me/myapp:latest, but they could be different. How can I determine if the images are the same, or if they're not the same, which one is newer and which is older?
How to know which version of an image a container is running?
Yes, you can set the namespace as per the docs like so:

```shell
kubectl config set-context --current --namespace=NAMESPACE
```

Alternatively, you can use kubectx for this.
Can I set the default namespace? That is:

```shell
$ kubectl get pods -n NAMESPACE
```

It saves me having to type it in each time, especially when I'm on the one namespace for most of the day.
Can I set a default namespace in Kubernetes?
Not all resources are the same. Always check the documentation for the particular resource. It has a "Return Values" section, and you can easily verify that an SNS topic has the ARN as its Ref value, so you don't have to use the GetAtt function: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sns-topic.html

Edit: Thanks to the comment which points out that not every resource provides its ARN. A notable example is the Autoscaling group. Sure, the key thing in my answer was "check the documentation for each resource"; this is an example that not every resource has every attribute. Having said that, the ARN missing from the ASG output is a really strange thing. It cannot be constructed easily either, because the ARN also contains a GroupId, which is a random hash. There is probably some effort to solve this, at least for the use-case of ECS Capacity Providers (https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/548 and https://github.com/aws/containers-roadmap/issues/631#issuecomment-648377011), but I think this is a significant enough issue that it should be mentioned here.
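Since `Ref` on an `AWS::SNS::Topic` returns the topic ARN, the failing output could equally be written without `GetAtt` at all. A sketch using the question's resource name:

```yaml
Outputs:
  NotificationsTopicArn:
    Description: The notifications topic Arn.
    Value: !Ref NotificationsTopic   # for SNS topics, Ref returns the ARN
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicArn'
```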
I am trying to create an AWS CloudFormation stack using a YAML template. The goal is to create an SNS topic for some notifications. I want to output the topic ARN, to be able to subscribe multiple functions to that topic by just specifying the topic ARN. However I am getting an error when I try to create the stack from the AWS console: "Template validation error: Template error: resource NotificationsTopic does not support attribute type Arn in Fn::GetAtt". I have done exactly the same for S3 buckets and DynamoDB tables, all working fine, but for some reason, with the SNS topic I cannot get the ARN. I want to avoid hardcoding the topic ARN in all functions that are subscribed, because if one day the topic ARN changes, I'll need to change all functions; instead I want to import the topic ARN in all functions and use it. This way I will have to modify nothing if for any reason I have a new topic ARN in the future. This is the template:

```yaml
Parameters:
  stage:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - int
      - uat
      - prod

Resources:
  NotificationsTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: !Sub 'notifications-${stage}'
      Subscription:
        - SNS Subscription
      TopicName: !Sub 'notifications-${stage}'

Outputs:
  NotificationsTopicArn:
    Description: The notifications topic Arn.
    Value: !GetAtt NotificationsTopic.Arn
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicArn'
  NotificationsTopicName:
    Description: Notifications topic name.
    Value: !Sub 'notifications-${stage}'
    Export:
      Name: !Sub '${AWS::StackName}-NotificationsTopicName'
```
AWS cloudformation error: Template validation error: Template error: resource NotificationsTopic does not support attribute type Arn in Fn::GetAtt
You need to untrack the file, and then git will happily ignore it using the pattern in .gitignore. If a file is already being tracked, changes to it are not ignored even if it is in the ignore list. From the current status, it is apparent that you have removed the file from your filesystem, but have not untracked it yet. So, run the following:

```shell
git rm --cached content/data.db
git commit -m "removed db file"
```

In general, it is better to run your git rm commands with the --cached flag so that the local copies of the files do not get deleted. Since, in your case, you already have the file in question deleted from the filesystem, git rm content/data.db will work all right as well.

EDIT

As discussed in the comments below, check if your .gitignore has the entry as { content/data.db }. That entry, with the braces and spaces, would be incorrect. Just have a single line, with the text content/data.db, in your .gitignore. Make sure there is no redundant whitespace before or after the ignore rule. The way git works, each complete line within .gitignore is treated as a pattern, and any redundant characters, like whitespace or even comments beginning with #, alter the pattern itself, causing the file to not be ignored.
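When debugging this kind of problem, `git check-ignore -v` is handy: it reports which `.gitignore` line (if any) matches a path, so a malformed entry like `{ content/data.db }` shows up immediately as "no match". A quick sketch in a throwaway repository (paths mirror the question; the scratch directory is hypothetical):

```shell
# set up a scratch repository
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
mkdir content
echo data > content/data.db
printf '%s\n' 'content/data.db' > .gitignore

# -v prints the gitignore file, line number and pattern that matched the path;
# with the broken braced entry this would print nothing and exit non-zero
git check-ignore -v content/data.db
```

Remember that this only tells you whether the pattern matches; an already-tracked file still needs the `git rm --cached` step above.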
So, I'm pretty sure that I'm doing something wrong, but I can't work out what. My .gitignore: { content/data.db }. When I change this document and I do git status I get:

```
Changes not staged for commit:
  (use "git add/rm <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        deleted:    content/data.db
```

Expected result: not track this file. And, of course, if I do git add -A && git commit -m "this is a test" && git push, the file is pushed to the repo. I don't know what's wrong with .gitignore. Thanks in advance.
Gitignore is not untracking
Looks like Beanstalk and the S3 bucket are in different regions. You stated that Beanstalk is in ap-southeast-2, while it seems that the S3 bucket is in ap-southeast-1. Create that bucket in ap-southeast-2.
I am trying to deploy the WAR file from Jenkins to Elastic Beanstalk. The build is successful, but when it tries to upload to S3, it shows this error:

```
Uploading file awseb-2152283815930847266.zip as s3://elasticbeanstalk-ap-southeast-1-779583297123/jenkins/My App-jenkins-Continuous-Delivery-MyApp-Stage-promotion-Deploy-14.zip
Cleaning up temporary file /tmp/awseb-2152283815930847266.zip
FATAL: Deployment Failure
java.io.IOException: Deployment Failure
```

Further, the error shows: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code:". Regarding the Jenkins configuration for Elastic Beanstalk: my Beanstalk is in the "ap-southeast-2" region, and the bucket name is "elasticbeanstalk-ap-southeast-1-779583297123".
The bucket you are attempting to access must be addressed using the specified endpoint, while uploading from Jenkins to S3
I have solved this on Tectonic 1.7.9-tectonic.4. In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system. In the config maps shown, you should see one named "tectonic-custom-error". Open it and go to the YAML editor. In the data field you should have an entry like this: custom-http-errors: '404, 500, 502, 503' which configures which HTTP responses will be captured and be shown with the custom Tectonic error page. If you don't want some of those, just remove them, or clear them all. It should take effect as soon as you save the updated config map. Of course, you could to the same from the command line with kubectl edit: $> kubectl edit cm tectonic-custom-error --namespace=tectonic-system Hope this helps :)
I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it. These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution. However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide. There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation. For reference, this is the output of kubectl get ingress <ingress-name> (redacted names and IPs):

```
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com  /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:  <none>
```
How to disable interception of errors by Ingress in a Tectonic kubernetes setup
Yes, you should create a new keystore for the new environment, as the Common Name (CN) field in the keystore should match your dev/test/prod environment. I assume your CN is localhost or a loopback address, since you created the certificate for your local environment. Alternatively, if you want to match based on wildcards such as *.test.abc.com, you can do it using Subject Alternative Names (SAN).
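To illustrate the SAN approach (using openssl here rather than the Java keytool, purely for brevity; the host names are made up, and the resulting certificate would still need importing into a keystore per environment):

```shell
# requires OpenSSL 1.1.1+ for -addext; generates a self-signed cert that
# covers both the wildcard domain and localhost via Subject Alternative Names
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout dev.key -out dev.crt \
  -subj "/CN=test.abc.com" \
  -addext "subjectAltName=DNS:*.test.abc.com,DNS:localhost"

# confirm the SANs made it into the certificate
openssl x509 -in dev.crt -noout -text | grep -A1 'Subject Alternative Name'
```

A certificate like this validates against any host under test.abc.com as well as localhost, so one keystore can cover local and test environments at once.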
I have a Spring Boot web service, and I want to make it use https instead of http. To this end, I have already created a keystore containing a self-signed certificate, and configured the server using Spring Boot properties. The certificate has the name localhost, to match the local environment. Now I want to test it on a proper dev environment, but the certificate doesn't work any more, because the name needs to be the name of the environment. What is the preferred way of dealing with this? Should I create a separate keystore/certificate per environment, as well as a separate yml file with their respective properties?
Managing test ssl certificates on dev environments
As I'd mentioned on MSDN, immediate consistency is expensive. You have to accept some tradeoffs in consistency, or lay out a lot of cash to be immediately consistent. Using the isolated read/write model we discussed on MSDN, along with queues, probably provides you the best performance/consistency bang for the buck. Multiple tiers of cache as suggested by David are also excellent, depending on your overall architecture/design. Using your own local in-proc or localhosted cache implementation also offers a lot of value -- not a fan of AppFabric's OOTB local cache myself. --ab
Our web application is deployed in a web farm (more than 20 servers). The site has huge traffic (millions of page views per day). In the first release, this application is using EntLib's CacheManager (Enterprise application block caching). We call this "Local Server Cache". There are many benefits, but we still have a major drawback: each server manages its own cache and access to the database (not distributed). That's why we are trying to implement the AppFabric caching feature in order to reduce database round trips. One of the major problems we have is data synchronisation: with GetAndLock/PutAndUnLock (aka distributed lock), page response time is highly affected; with Get/Put + a simple server-side lock, we have so many requests hitting the local cache that there are no benefits. So what are caching strategies for large scale web sites? Thanks,
AppFabric Caching for large scale web sites
It could be multiple reasons. I'm assuming your virtualenv is called venv; if not, change it accordingly.

You have not activated your virtualenv. Fix:

```shell
source venv/bin/activate
```

You activated your virtualenv, but the path does not match. Check:

```shell
echo $PATH
/Users/[yourpath]/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin
```

If /Users/[yourpath]/venv/bin is not correct, check:

```shell
grep "VIRTUAL_ENV=" venv/bin/activate
VIRTUAL_ENV="/Users/[yourpath]/venv"
```

Then update venv/bin/activate accordingly.

Note: the virtualenv folder should not be added into your repo. This may be the source of your issues. If you want to ensure everyone has matching dependencies, run:

```shell
python -m pip freeze > requirements.txt
```
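Besides inspecting `$PATH`, you can ask the interpreter itself whether a venv is active: inside a `venv`-style environment, `sys.prefix` differs from `sys.base_prefix` (note this applies to the stdlib `venv` and virtualenv 20+; very old virtualenvs used `sys.real_prefix` instead):

```shell
# prints True when the current python3 runs inside a virtualenv/venv
python3 -c 'import sys; print(sys.prefix != sys.base_prefix)'
```

If this prints False after `source venv/bin/activate`, something on the PATH is shadowing the venv's interpreter.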
I'm attempting to deploy a chalice application following the tutorial. I'm using a virtualenv with Python 3.6. My application depends on a GitHub repo in the requirements.txt, and that repo's requirements.txt depends on several libraries. I can run the application just fine with python -i app.py, and I can execute my endpoints properly live in the REPL. However, when I run chalice deploy I get an error complaining about a module required by the GitHub repo I require:

```
File "/usr/local/lib/python2.7/dist-packages/chalice/deploy/packager.py", line 715, in download_all_dependencies
    raise NoSuchPackageError(str(package_name))
NoSuchPackageError: Could not satisfy the requirement: PyQt5>=5.8.1
```

Notice, however, that this chalice library being used is 2.7. I'm in a virtualenv that is set to Python 3.6. I realized that I had previously installed chalice globally, which might have been a mistake. So I pip uninstalled chalice globally, but it's still installed in my virtualenv. Now rerunning chalice, I get:

```shell
$ chalice --version
bash: /usr/local/bin/chalice: No such file or directory
```

I tried rerunning the install of chalice into the local virtualenv, but it didn't change anything. What am I doing wrong here?
Why is AWS's Chalice for Python not respecting my virtualenv?
Add pdf.FreeReader(reader); right before reader.Close(); to make sure that as little memory as possible is required for managing copied content from that reader. That method (of the parent class of PdfCopy) is documented as: "Writes the reader to the document and frees the memory used by it. The main use is when concatenating multiple documents to keep the memory usage restricted to the current appending document."
I am merging a large number of PDF files using iTextSharp in ASP.NET MVC C#. It works fine for a small number of files, but when it reaches about 1000 files it breaks on the line pdf.AddDocument(reader); and throws 'System.OutOfMemoryException'. I am already using PdfCopy and FileStream to better utilize memory, which is suggested everywhere I searched. My code is given below. Please suggest how to handle this.

```csharp
using (FileStream stream = new FileStream(outMergeFile, FileMode.Create))
{
    Document document = new Document();
    PdfCopy pdf = new PdfCopy(document, stream);
    PdfReader reader = null;
    PdfReader.unethicalreading = true;
    try
    {
        document.Open();
        for (int i = 0; i < fileList.Count; i++)
        {
            var fileStr = Request.MapPath(fileList[i].Path);
            reader = new PdfReader(fileStr);
            pdf.AddDocument(reader);
            reader.Close();
        }
    }
    catch (Exception ex)
    {
        reader.Close();
    }
    finally
    {
        document.Close();
    }
}
```
How to avoid 'System.OutOfMemoryException' when merging a large number of PDF files using iTextSharp in ASP.NET MVC C#
The community edition of Nginx does not provide such functionality. A commercial version of Nginx does: there is a max_conns parameter on the upstream's servers:

```nginx
upstream my_backend {
    server 127.0.0.1:11211 max_conns=32;
    server 10.0.0.2:11211 max_conns=32;
}
```

The documentation is here.
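In the open-source build, the closest thing to a persistent pool is the `keepalive` directive in the `upstream` block, which caches idle connections to the backend rather than guaranteeing a fixed count. A sketch (the addresses are the example ones above; 32 is an arbitrary choice):

```nginx
upstream my_backend {
    server 127.0.0.1:11211;
    server 10.0.0.2:11211;
    keepalive 32;                 # idle connections cached per worker process
}

server {
    location / {
        proxy_pass http://my_backend;
        # keepalive to the upstream requires HTTP/1.1 and a cleared Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```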
I am trying to use NGINX as a reverse proxy, and would like to have a constant number of open connections to the backend (upstream) at all times. Is this possible with nginx (maybe haproxy..?)? I'm running on Ubuntu, if it makes any difference.
Is it possible for NGINX to have a pool of N open connections to a backend?
The problem is that liquidtelecom.dl.sourceforge.net does not have a valid https certificate, and wget is refusing to connect. If you are sure you want to connect to it anyway, then provide the --no-check-certificate flag to wget to make it skip certificate validation:

```shell
!wget --no-check-certificate https://liquidtelecom.dl.sourceforge.net/project/easc-corpus/EASC/EASC.zip
```
I'm trying to get access to the dataset using:

```shell
!wget https://liquidtelecom.dl.sourceforge.net/project/easc-corpus/EASC/EASC.zip
```

But I got the following error message:

```
ERROR: cannot verify liquidtelecom.dl.sourceforge.net's certificate, issued by 'CN=R3,O=Let's Encrypt,C=US':
  Issued certificate has expired.
To connect to liquidtelecom.dl.sourceforge.net insecurely, use `--no-check-certificate'.
```

Can you please suggest a solution?
How can I fix the error "use --no-check-certificate"?
Thanks to @icza's pointers, the code I've mentioned above is not quite complete. The error I was receiving had nothing to do with the last bit. Apparently, it's part of the completeMultiPartUpload function. That's where the error is being thrown. Details: the uploadPart is taking my data, whatever my chunk size is, and storing it in a pool of some sort. The error was being thrown because maxPartSize was too small. Changing that to anything greater than 5 MB solved the issue. (As pointed out in the comments, the documentation isn't fully correct: the minimum part size isn't 5 MB = 5 × 1000 × 1000, it's 5 MiB = 5 × 1024 × 1024.)
I'm trying to upload a file to my S3 bucket using multipart upload. I'm following the exact same code as found here. The problem is, whenever it reaches the last part of my file, I get an EntityTooSmall: Your proposed upload is smaller than the minimum allowed size error.

Issue: S3's multipart upload restricts file part sizes to a minimum of 5 MB.
Solution: S3 allows the last part to be less than 5 MB large.
Issue: This is my last part, and it's not being recognized.

Is there anything I'm missing?
EntityTooSmall when uploading to S3
Had a similar issue myself, and while researching I stumbled across your question. I found this was quite easy to do programmatically; however, it isn't really explained in the Jetty docs. The structure of the Jetty XML configuration files is matched by the structure of the Java API, so you can just replicate it in code. So, following the Jetty guide on how to configure using the XML configuration file here, I was able to configure the embedded server programmatically like this:

```java
Server server = new Server( port );

// Create HTTP Config
HttpConfiguration httpConfig = new HttpConfiguration();

// Add support for X-Forwarded headers
httpConfig.addCustomizer( new org.eclipse.jetty.server.ForwardedRequestCustomizer() );

// Create the http connector
HttpConnectionFactory connectionFactory = new HttpConnectionFactory( httpConfig );
ServerConnector connector = new ServerConnector(server, connectionFactory);

// Make sure you set the port on the connector; the port in the Server constructor
// is overridden by the new connector
connector.setPort( port );

// Add the connector to the server
server.setConnectors( new ServerConnector[] { connector } );
```
I am running a Spring Boot application in AWS. The application is running behind an Elastic Load Balancer (ELB). The ELB is configured to use https (port 443) to the outside world, but passes through http (port 8080) to the application. The ELB is configured to pass through the x-forwarded-proto header. I am using Jetty 9.0.0.M0, with Spring Boot 1.1.5 RELEASE. I appear to be getting incorrect redirects sent back from the application via the ELB where the redirect responses are coming back as http, rather than https. Now, I read here that I should set the "forwarded" header to true using: 4430 I can't see how to do this with the embedded version of Jetty in Spring Boot because there is no XML configuration file as part of my source. I have looked at the 4431 infrastructure but I still can't get the right incantation to get this setup to work. The application is built and tested outside of the AWS 4432 environment, so the application needs to transparently work with 4433 too. Directly hitting the application endpoints without going through the ELB works. It's just that the ELB to application route that's not working. Any ideas?
Configuring embedded Jetty 9 for X-FORWARDED-PROTO with Spring Boot
No, you can't. Amazon Linux does not have a repo for X-server packages. Also, it was meant to be used for server-side roles, and hence all the X-related stuff is not available. Consider using an Ubuntu or RHEL AMI, where you can configure the X environment manually by following this and this.
I know GUI is for the weak, but unfortunately strictly using the terminal isn't an option for me. I have an instance of the Amazon Linux AMI and I have it all set up, but I can't find a guide on how to get a GUI on Amazon and how to remote desktop / VNC into it. I have seen stuff on how to do this for the Ubuntu instance, but that is different from the Amazon Linux AMI, and I don't want to mess up my system or something like that. So if anyone could point me to where I can find how to do this, or tell me how, I'd appreciate it.
Amazon Linux AMI ec2 GUI / Remote Desktop
For specifics: http://www.csc.villanova.edu/~mdamian/Sockets/TcpSockets.htm describes the C library for TCP sockets. I think the key is that after a process forks while holding a socket file descriptor, the parent and child are both able to call accept() on it. So here's the flow. Nginx, started normally:

- Calls socket() and bind() and listen() to set up a socket, referenced by a file descriptor (integer).
- Starts a thread that calls accept() on the file descriptor in a loop to handle incoming connections.

Then Nginx forks. The parent keeps running as usual, but the child immediately execs the new binary. exec() wipes out the old program, memory, and running threads, but inherits open file descriptors: see http://linux.die.net/man/2/execve. I suspect the exec() call passes the number of the open file descriptor as a command line parameter. The child, started as part of an upgrade:

- Reads the open file descriptor's number from the command line.
- Starts a thread that calls accept() on the file descriptor in a loop to handle incoming connections.
- Tells the parent to drain (stop accept()ing, and finish existing connections), and to die.
According to the Nginx documentation: "If you need to replace nginx binary with a new one (when upgrading to a new version or adding/removing server modules), you can do it without any service downtime - no incoming requests will be lost." My coworker and I were trying to figure out: how does that work? We know (we think) that:

- Only one process can be listening on port 80 at a time
- Nginx creates a socket and connects it to port 80
- A parent process and any of its children can all bind to the same socket, which is how Nginx can have multiple worker children responding to requests

We also did some experiments with Nginx, like this:

- Send a kill -USR2 to the current master process
- Repeatedly run ps -ef | grep unicorn to see any unicorn processes, with their own pids and their parent pids
- Observe that the new master process is, at first, a child of the old master process, but when the old master process is gone, the new master process has a ppid of 1

So apparently the new master process can listen to the same socket as the old one while they're both running, because at that time, the new master is a child of the old master. But somehow the new master process can then become... um... nobody's child? I assume this is standard Unix stuff, but my understanding of processes and ports and sockets is pretty darn fuzzy. Can anybody explain this in better detail? Are any of our assumptions wrong? And is there a book I can read to really grok this stuff?
How can Nginx be upgraded without dropping any requests?
Do not call `retainCount`; it is useless. Without seeing the contents of your block, it is impossible to say. If your block is effectively a static block, then copying it does nothing. Are you seeing a crash?
I'm trying to send a Block as an argument to a method called by an NSInvocation (which, for context, is fired by an NSInvocationOperation). The invocation should be retaining the arguments, and it seems to be working for the "regular" object parameters, but the Block's retainCount is staying at 1. I could release it after it is used in the method call, but that could theoretically leak it if the queue is dissolved before the operation is called. Some code:

```objectivec
NSInvocationOperation *load = [[NSInvocationOperation alloc] initWithInvocation:loadInvoc];
NSAssert([loadInvoc argumentsRetained], @"Arguments have not been retained");
[loader release];

NSInvocation *completionInvoc = [NSInvocation invocationWithMethodSignature:[self methodSignatureForSelector:@selector(serviceCompletionBlock:afterInvocationCompleted:)]];
[completionInvoc setTarget:self];
[completionInvoc setSelector:@selector(serviceCompletionBlock:afterInvocationCompleted:)];

MFEImageCallback callback = [completionBlock copy];
[completionInvoc setArgument:&callback atIndex:2];
[completionInvoc setArgument:&load atIndex:3];

NSInvocationOperation *completion = [[NSInvocationOperation alloc] initWithInvocation:completionInvoc];
NSAssert([completionInvoc argumentsRetained], @"Completion handler not retaining");
[callback release];
[completion addDependency:load];
```

The block that I'm using (defined in an accessor method for an NSManagedObject subclass):

```objectivec
^(UIImage *image, NSError *err){
    [self setValue:image forKey:key];
}
```
Construct NSInvocation w/ Block argument
So, the kubectl command needs to be configured! Currently kubectl is trying to connect to the kube-api on your local machine (127.0.0.1); clearly there is no Kubernetes there, so it throws the error "No connection could be made...". To change the kubectl settings you need to find the kubectl config file at the path:

Windows: %USERPROFILE%\.kube\config
Linux: $HOME/.kube/config

Open the file with any editor, and then change the IP 127.0.0.1 to an external IP of one of your nodes! This will solve your connection problem, but there might be another issue with your certificate! I will update my answer when you tell me what Kubernetes distro you are using (e.g. k3s, k8s, ...).
```shell
kubectl get pods
kubectl get namespace
kubectl describe ns.yml
```

When I try to execute any of the commands above, I am getting the following error:

```
E0824 14:41:42.499373   27188 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:53721/api?timeout=32s": dial tcp 127.0.0.1:53721: connectex: No connection could be made because the target machine actively refused it.
```

Could you please help me out with how to resolve it? Can anyone tell me what is wrong with my kubectl utility?
I installed kubectl locally, but when I try to use that utility I get an error
Add it on a separate line. When you use && with cron, both commands end up on the same crontab line for the same schedule, e.g.:

```
0 * * * * a && b
```

Hence why it says "*/5: not found" - that's the b above; the shell thinks it's a command to run. Add your */5 * * * * script on a separate line as its own entry.
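Concretely, the `RUN echo` from the question could be replaced with a `printf` that emits one crontab entry per line (a sketch; the script paths are the ones from the question):

```shell
# each cron entry must be its own line in /etc/crontabs/root
printf '%s\n' \
  '* * * * * /usr/local/bin/feeds.sh' \
  '*/5 * * * * /usr/local/bin/reminders.sh'

# in the Dockerfile, the equivalent would be (sketch):
#   RUN printf '%s\n' '* * * * * /usr/local/bin/feeds.sh' \
#       '*/5 * * * * /usr/local/bin/reminders.sh' > /etc/crontabs/root
```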
I want to perform cron jobs inside a docker container: every minute for feeds.sh and every 5 minutes for reminder.sh. It is able to run feeds.sh every minute. However for reminder.sh, it cannot run every 5 minutes, as it keeps throwing the error /bin/ash: */5: not found inside the docker container. The code is shown below:

```dockerfile
FROM alpine:latest

# Install curl
RUN apk add --no-cache curl

# Copy scripts to Docker image
COPY reminders.sh /usr/local/bin/reminders.sh
COPY feeds.sh /usr/local/bin/feeds.sh

# Add the cron job
RUN echo ' * * * * * /usr/local/bin/feeds.sh && */5 * * * * /usr/local/bin/reminders.sh' > /etc/crontabs/root

# Run crond -f for foreground
CMD ["/usr/sbin/crond", "-f"]
```
How to perform cron jobs every 5 minutes inside Docker
There are two active pull requests on the GitHub Issue Tracker which pertain to this problem:

- https://github.com/EllisLab/CodeIgniter/pull/661
- https://github.com/EllisLab/CodeIgniter/pull/1403

They are both a little different, and while one of them looks pretty old (I didn't notice it), the second, more recent issue might be right up your alley. If you can verify either of the fixes then give the pull request a +1 and I'll merge it. I had an issue a while back where I wanted to use Memcache on a server which didn't have it installed correctly, and instead of it falling back to Files it just bitched at me, but I never found the time to fix it. If this is the same issue then great, I can get it merged.
I'm trying to use the cache driver which CodeIgniter supplies in their PHP framework. However, it seems to fully ignore the backup adapter. If I use:

```php
$this->load->driver('cache', array('adapter' => 'apc', 'backup' => 'dummy'));
```

Then I assume that it will use APC if available, otherwise it'll fall back to dummy cache (do nothing). This is obviously very handy since not everyone will have APC installed. This doesn't seem to be the case - since I get an error when testing the following code:

```php
if (!$config = $this->cache->get('config'))
{
    // Get config from database
    $config = $this->db->from('core')->get()->row_array();

    // Cache it
    $this->cache->save('config', $config, 600);
}
```

(Fatal error: Call to undefined function apc_cache_info())
CodeIgniter cache driver ignoring backup adapter [closed]
Have you looked at TeamCity's pre-tested commit feature? It doesn't work exactly as you described you would like your workflow to operate, but it might be useful. I've used it with Subversion in the past and it works pretty well; I haven't used it with Git, but JetBrains state it also works with Git. However, the most common workflow for Git is to create feature/bugfix branches for everything you do, which allows you to commit and push freely, and merge to master when you are ready. GitHub makes the merge (and optional code review) step painless, and TeamCity has built-in support to automatically build branches; see the TeamCity documentation on feature branches for the specific details it provides.
What I want to do: I want to set up Continuous Integration with TeamCity for a project that's hosted on GitHub.

What's currently working: I'm properly connected to GitHub. Commits, pushing, etc. all seem to be fine. TeamCity is set up and I can kick off a build which will run my unit tests, but...

What's not working: When I do a TeamCity build, it looks like it's pulling down code from GitHub before doing the build or running unit tests. I want to trigger a TC build when I do a commit, before it does the push to GitHub. I don't really want it to pull any code out of GitHub before running the TC build. This doesn't seem to be working at all.

I've set up a build trigger which is a VCS trigger. I've checked the box that says "Trigger a build on each check-in". I added a rule to the trigger with the VCS (GitHub) source and my username.

When I do a commit, I don't notice TC doing anything. When I then push the commit to GitHub, TC doesn't do anything either. I see no builds queuing or anything like that. Any clues on what I'm doing incorrectly? Thanks!
Having some problems integrating TeamCity, GitHub and Visual Studio
I think you get back to the login page because you are redirected, and since your code doesn't send back your cookies, you can't have a session. You are looking for session persistence; requests provides it:

Session Objects — The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3's connection pooling. So if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase (see HTTP persistent connection).

s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get('http://httpbin.org/cookies')
print(r.text)
# '{"cookies": {"sessioncookie": "123456789"}}'

http://docs.python-requests.org/en/master/user/advanced/
I have tried logging into GitHub using the following code:

url = 'https://github.com/login'
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
           'login': 'username',
           'password': 'password',
           'authenticity_token': 'Token that keeps changing',
           'commit': 'Sign in',
           'utf8': '%E2%9C%93'
           }
res = requests.post(url)
print(res.text)

Now, res.text prints the code of the login page. I understand that it may be because the token keeps changing continuously. I have also tried setting the URL to https://github.com/session, but that does not work either. Can anyone tell me a way to generate the token? I am looking for a way to log in without using the API. I had asked another question where I mentioned that I was unable to log in. One comment said that I am not doing it right and that it is possible to log in just by using the requests module, without the help of the GitHub API.

ME: So, can I log in to Facebook or GitHub using the POST method? I have tried that and it did not work.
THE USER: Well, presumably you did something wrong.

Can anyone please tell me what I did wrong? After the suggestion about using sessions, I have updated my code:

s = requests.Session()
headers = {Same as above}
s.put('https://github.com/session', headers=headers)
r = s.get('https://github.com/')
print(r.text)

I still can't get past the login page.
How can I use POST from requests module to login to Github?
Figured out an alternative: instead of using ForceType I used:

<Files "apple-app-site-association">
    Header set Content-type 'application/json'
</Files>

I don't know why the other alternative didn't work.
I want to upload an AASA (apple-app-site-association) file. The problem I'm having is that I can't set the MIME type. I placed the file in the root and in the .well-known folder, without an extension. After that I tried (like I did with success on another hosting provider) to change the MIME type to application/json. This is my .htaccess file:

RewriteEngine On
# BEGIN Thunermay (Admin)
# set apple-app-site-association as application/json for Apple's crawler
<Files "apple-app-site-association">
    ForceType 'application/json'
</Files>
# END Thunermay

When I used an AASA validator (this and this one), I can't get the right content-type. The second one shows me at least that the file is parsed, but without a content-type:

No Redirect: Pass
Content-type: []
JSON Validation: Pass
JSON Schema: Pass

I don't know where the mistake in my .htaccess is, nor do I know how to debug the .htaccess. Thanks for help in advance!
rewrite file's MIME-Type in .htaccess isn't working
Using mod_rewrite you could use conditions to verify the id and grab whatever comes after it, if anything:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/forumdisplay\.php
RewriteCond %{QUERY_STRING} ^fid=6(&.*|.*)
RewriteRule ^.*$ /blog.php?%1 [R=301,L]
Trying to make the URL www.google.com/forum.php?fid=5 redirect to www.google.com/new.php?fid=5, but I also need it to keep everything else intact, because for example the link can be www.google.com/forum.php?fid=5&sortby=asc and I need the sortby portion to be there upon redirect. What the redirect needs to do is look for forumdisplay.php and fid=6, and when both are found in the same URL, redirect to blog.php, removing fid=6 but keeping any other parameters. I searched and found how to do it with one string, but not two. Also, what's the difference between a redirect and a rewrite? This is related to MyBB forum software. I made a separate php file that uses forumdisplay but with a new name.
Redirect when two strings are matched
I figured it out - it happened because I reinstalled TortoiseGit since starting to work on the project. Pulls went smoothly, but as soon as I tried to push back my changes, TortoiseGit needed my authentication key, which was not configured.

Edit: to resolve this, I simply cleared all authentication data from my TortoiseGit, under: context menu "TortoiseGit" -> Settings -> Saved Data -> Authentication data [Clear].
I started a GitHub project a few weeks ago. I was able to push changes without any problems (I'm using TortoiseGit). Suddenly today when I tried to push my changes, I got a "PuTTY Fatal Error" window: "Disconnected: No supported authentication methods available". Anything you can recommend to remedy the problem?
Suddenly getting "No supported authentication methods available" when pushing to github
I am not an expert in this area, but I have deployed Django using uWSGI on nginx with this method. A socket file represents a Unix socket. In this case, uWSGI creates it, and it is through this socket that uWSGI and nginx talk to each other. The "Concept" section of the link you provided talks about it:

uWSGI is a WSGI implementation. In this tutorial we will set up uWSGI so that it creates a Unix socket, and serves responses to the web server via the WSGI protocol. At the end, our complete stack of components will look like this:

the web client <-> the web server <-> the socket <-> uwsgi <-> Django

The first part of the tutorial talks about using a TCP port socket to achieve the same result. If you have already followed those steps, you should skip the Unix socket part. However, the tutorial also mentions that Unix sockets are preferable because of their lower overhead.
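To make this concrete, here is a hedged sketch of the uWSGI side (the file name and paths are placeholders matching the question's example, not verbatim from the tutorial). uWSGI creates the socket file at startup, and the nginx line from the question (server unix:///path/to/your/mysite/mysite.sock;) then points at that same path:

```ini
; mysite_uwsgi.ini -- a minimal sketch; uWSGI creates the socket on startup
[uwsgi]
chdir = /path/to/your/mysite
module = mysite.wsgi
; the Unix socket nginx will talk to:
socket = /path/to/your/mysite/mysite.sock
; make the socket readable/writable by the web server:
chmod-socket = 664
; remove the socket file on shutdown:
vacuum = true
```

Started with uwsgi --ini mysite_uwsgi.ini, this produces the mysite.sock file; you do not create it by hand, and if it is missing it usually just means uWSGI is not running (or is configured with a different socket path).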
I followed this doc, and almost everything went well until "mysite.sock" occurred. It occurred like this:

server unix:///path/to/your/mysite/mysite.sock; # for a file socket
# server 127.0.0.1:8001; # for a web port socket (we'll use this first)

The doc did not mention anything about the "mysite.sock", and after one day's searching, I found nothing.
In django + nginx + wsgi, what is a "mysite.sock"
I would suggest you slice and dice your cluster by using Namespaces. You can easily create a namespace with the following command:

kubectl create namespace my-project

Now you can feed all your manifest files (deployments, services, secrets, PersistentVolumeClaims) to the API server in that my-project namespace. For instance:

kubectl create -f my-deployment.yaml --namespace my-project

Do not forget to use the namespace flag, otherwise these manifests would be applied to the default namespace. If you want to delete your project, you just need to delete the namespace, and that will delete all of the resources related to the project:

kubectl delete namespace my-project

Furthermore, you can limit the resource quota of each namespace, and you can dig further into this in the Namespace documentation.

Edit: namespaces are virtual clusters within a physical cluster.
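Since the answer mentions limiting quota per namespace, here is a hedged sketch of what such a ResourceQuota manifest could look like (the name and limits below are made up for illustration):

```yaml
# quota.yaml -- caps total resource usage inside the my-project namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-project-quota
  namespace: my-project
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Applied with kubectl create -f quota.yaml, the cluster then rejects new workloads in that namespace once the totals are exceeded.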
I have a two-cluster, multi-region, HA-enabled setup in production on MS Azure. I was asked to reuse the same cluster to manage several new projects using microservices. What is the best practice here? Should I create a cluster per app? Is it better to isolate every project in different clusters and cloud account subscriptions? Looking forward to opinions. Thanks.
Kubernetes Cluster per app?
You need to use a tool that is capable of supporting database migrations. The two I'd recommend in the Java space would be:

liquibase
flyway

These are by no means the only tools in this category. What they do is keep a record of the schema changes that have already been applied to your database instance, ensuring that the schema matches the desired state captured in your version control system. Example: Best choice to generate scripts for different databases.
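As an illustration of the record-keeping idea both tools implement, here is a minimal Python sketch (using sqlite purely for demonstration; real deployments would point at MySQL/Postgres, and real tools additionally take a lock so concurrently booting instances don't race each other):

```python
import sqlite3


def migrate(conn, migrations):
    # Migration tools keep a ledger table of applied versions
    # (Flyway calls it flyway_schema_history, Liquibase uses DATABASECHANGELOG).
    conn.execute("CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    ran = []
    for version, sql in migrations:
        if version in applied:
            continue  # already applied by a previous boot of some instance
        conn.execute(sql)
        conn.execute("INSERT INTO schema_history (version) VALUES (?)", (version,))
        ran.append(version)
    conn.commit()
    return ran


MIGRATIONS = [
    ("001", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002", "ALTER TABLE users ADD COLUMN email TEXT"),
]

conn = sqlite3.connect(":memory:")
print(migrate(conn, MIGRATIONS))  # first boot applies both migrations
print(migrate(conn, MIGRATIONS))  # second boot is a no-op
```

Because the ledger makes a re-run a no-op, every instance of the service can attempt the migration on startup without "fighting it out".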
I want to deploy my microservices in Docker containers. I want these microservices to be as stateless as possible, persisting state only to a database. This means there are these requirements:

The services are deployed as Docker containers and orchestrated using Kubernetes.
Each service can be deployed and scaled to multiple instances.
Every instance of a service is identical: they must all receive the same environment variables and configuration.
Each instance should not care or know about any other instance.
The instances should be stateless and should not elect a leader or form a quorum.

That leads to my problem with handling schema creation and migrations: if I have a service that uses MySQL or Postgres as the data store, how do I create the tables/schemas on first launch? Should I just use CREATE IF NOT EXISTS statements and let the instances "fight it out" during boot? I am not able to set an environment variable to ask for table/schema creation for just one of the instances. And how do I handle schema migrations with the above constraints? There are numerous actions, like dropping/adding columns, that cannot be encapsulated in a transaction.
Handling database schema creation and migrations when launching multiple instances of a containerized microservice
You can try a sparse checkout along with submodules: Set Git submodule to shallow clone & sparse checkout?
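The linked question covers the submodule-specific details; as a self-contained illustration of the sparse-checkout half (plain repos, no submodule — the repo layout below is invented for the demo), this shows how a clone can check out only one subdirectory:

```shell
set -e
demo=$(mktemp -d)
# Build a source repo with two top-level directories
git -c init.defaultBranch=master init -q "$demo/src"
mkdir -p "$demo/src/core" "$demo/src/extras"
echo a > "$demo/src/core/a.txt"
echo b > "$demo/src/extras/b.txt"
git -C "$demo/src" add .
git -C "$demo/src" -c user.email=demo@example.com -c user.name=demo commit -qm init
# Clone without checking out, then restrict the working tree to core/
git clone -q --no-checkout "$demo/src" "$demo/work"
git -C "$demo/work" config core.sparseCheckout true
echo "core/" > "$demo/work/.git/info/sparse-checkout"
git -C "$demo/work" checkout -q master
ls "$demo/work"
```

Only core/ ends up in the working tree. For a submodule, the same core.sparseCheckout trick applies, but note the submodule's .git is usually a file pointing at the superproject's .git/modules/<name> directory, so the sparse-checkout file lives there instead.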
A lot of Python repos on GitHub use a setuptools project structure (for example https://github.com/omab/django-social-auth). But I want to make the folder containing the core module of my project a submodule. Can I use a subdirectory from a remote repo as a submodule in my repo?
Subdirectory from github as a git submodule
This is pretty simple, but I like to use a JavaScript wrapper rather than making the AJAX calls myself. You will need to generate an OAuth token. Here is a working example. If you are not familiar with OAuth, it is basically a special password you create for your account which allows your extension to access your application's (GitHub) data. You can limit the access of this special password to only the information needed, to prevent security issues.

Download the Github.js library
Go to https://github.com/settings/tokens/new
REMEMBER to uncheck all checkboxes! This will only allow this password access to public information on your GitHub account.
Click Generate token.
Copy your access key.
Use the code below to get the repo list.

Github.js requires Underscore.js.

// get access to account
var github = new Github({
  token: "e7e427e4a8cc0f424a77a0******************",
  auth: "oauth"
});

// get user
var user = github.getUser();

// get repos
user.repos(function(err, repos) {
  // loop through repos
  for (var i = 0; i < repos.length; i++) {
    var repo = repos[i];
    // access repo here
  }
});
How can I use my GitHub account in a Chrome extension? I mean, how can I get repo lists or other data in my own extension with my account? I have found a lot of extensions for GitHub, but I can't find the source showing how they are made.
Github in chrome extension
Turns out there was another reverse proxy in front of my server that I had no control of. I changed my server setup to be directly internet-facing and it works as expected.
I am using nginx and proxying to my app, which uses socket.io on node.js for the websocket connection. I get the error above when accessing the app through the domain. I have configured nginx according to https://github.com/socketio/socket.io/issues/1942 to ensure that websockets are properly proxied to the node.js backend. My nginx configuration is below:

server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://xxx.xx.xx.xx:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

In my React client, I start the websocket connection this way:

import io from 'socket.io-client';

componentWillMount() {
    this.socket = io();
    this.socket.on('data', (data) => {
        this.state.output.push(data);
        this.setState(this.state);
    });
}

Any advice is appreciated!

Edit 1: After more investigation, my server setup is as follows:

Domain accessible from the internet: web-facing.domain.com
Domain accessible from the intranet: internal.domain.com

When I access the app from the intranet, it works fine, but I get the error when accessing from the internet. I suspect it's due to the creation of the socket using this.socket = io(), which sets up a socket connection with the current domain. Since the socket in node is listening on ws://internal.domain.com, when connecting via web-facing.domain.com, the wrong socket, ws://web-facing.domain.com, is created. So now the question is: how can I create a socket to the internal domain when accessing from a different domain?

Edit 2: I have added app.set('trust proxy', true) to configure Express to accept proxy connections, but it still does not work.
WebSocket connection to 'ws://.../socket.io/' failed: Error during WebSocket handshake: net::ERR_CONNECTION_RESET
Your free can mirror your allocation:

int i;
for (i = 0; i < y; i++)
    free(array[i]);
free(array);

But if you assign the new matrix created by matrixTranspose to temp, then you'll lose your pointer to the old memory. So keep track of it with another pointer, or assign the result of matrixTranspose to a different pointer:

float **transposedMatrix = matrixTranspose(...);

If you consider your matrices as mutable, you could also transpose them in place: rather than allocating a new matrix in the matrixTranspose function, you move the numbers around in the existing array. For a square matrix you can do this in place with a single float temp.
I have a function which creates a 2D array:

float** createMatrix(int x, int y){
    float** array = malloc(sizeof(float*) * y);
    for(int i=0; i<y; i++)
        array[i] = malloc(sizeof(float) * x);
    return array;
}

Now I can create a 2D array:

float** temp = createMatrix(2,2);

I also have a function which, for example, transposes my "matrix" (2D array):

float** matrixTranspose(float** m, int x, int y){
    float** result = createMatrix(y, x);
    for(int i=0; i<y; i++){
        for(int j=0; j<x; j++)
            result[j][i] = m[i][j];
    }
    return result;
}

Now if I do this:

temp = matrixTranspose(temp, 2, 2);

what happens with the old memory previously allocated to temp? My transpose function allocates a new memory chunk. Obviously I would have to somehow free the "old temp" after the transposition, but how (elegantly)?
Memory leaking when allocating memory
You can have the following .htaccess in the site root:

RewriteEngine On
RewriteRule ^$ app/ [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule .+ app/$0 [L]

Then you need this code in app/.htaccess:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.+?)/?$ $1.php [L]
My root folder has too many files and folders, because I am using some tools (PHP Composer, Node, gulp, Bower...) for a larger project.

Q1: How do I change the root folder? I don't want my project to be too messy, so I save all the files served over FTP into a directory named /app/. E.g. /www/app/home.php = https://xxxx.com/home.php. I tried the condition below, but with it I can't stay compatible with both the 'trailing /' and 'without /' forms from Q2:

RewriteCond %{REQUEST_URI} !^/app

Q2: How do I hide the .php extension and stay compatible with both 'trailing /' and 'without /'? I.e. /www/app/home.php = https://xxxx.com/home/ or https://xxxx.com/home. I tried this, but it is not compatible with both forms at the same time:

RewriteRule ^food/?$ /app/food.php [L]

Here is the relevant part of my code:

RewriteEngine On
RewriteCond %{REQUEST_URI} !^/app
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.*)$ $1.php [NC,L]
RewriteRule ^([^\/]+)?$ /app/$1 [L,NC]
RewriteRule ^food/?$ /app/food.php [L]
htaccess RewriteRule set the root to sub folder and auto hide .php/.html
If you don't want to use the "raw" button, you can (since June 2021) add ?plain=1 to your GitHub markdown file URL: Appending ?plain=1 to the url for any Markdown file will now display the file without rendering. As with other code files, it will also show line numbers, and can be used to link other users to a specific line or lines. For example, appending ?plain=1#L52 will highlight line 52 of a plain text Markdown file. Example: https://github.com/git/git/blob/master/README.md?plain=1#L49-L51 Since Sept. 2021, there is also a button in the file view which adds the ?plain=1 for you.
Github helpfully renders Markdown (.md) files to HTML when viewing on github.com (for example, this README.md). When viewing any other source file, it is shown as unrendered source code (for example, this .gitignore). A handy feature this gives is linking directly to a line in the source by clicking the line number (for example, like this). How can I view the unrendered source of Markdown files on Github (so I can link to a particular line in the source)? note: I know of the "Raw" button, however it does not provide any of the nice UI Github has.
How do I view the source of Markdown files on Github?
My guess is that when you borrowed your colleague's credentials that one time, your Git tool cached them and has been reusing them since. Note also that the author shown on a commit comes from your local user.name/user.email settings, not from the credentials used to push. Look into git config to see how to manually set your username from the bash:

git config --global user.name "Shorya Sharma"

Then confirm that the settings are sticking:

git config --global user.name
I have been seeing this weird problem on git wherein I push a certain piece of code from my github account and it shows up as being pushed by my colleague. Git specifically asks for my credentials so I know it is my account, yet all commits are made from this other account. I once did login from his account temporarily but there were no default settings set up on my system. I can't wrap my head around why it might be happening. Do tell ?
Why is the code being pushed by myself on git show up as from somebody else?
Can you try setting client-IP based session affinity by setting service.spec.sessionAffinity to "ClientIP" (the default is "None")? See http://kubernetes.io/docs/user-guide/services/. You can also try running an ingress controller, which can better manage the routing internally; see https://github.com/kubernetes/kubernetes/issues/13892#issuecomment-223731222
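Applied to the service from the question, the change would look roughly like this (only the sessionAffinity line is new; the rest mirrors the question's manifest):

```yaml
kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP   # route each client IP to the same pod
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1
```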
Trying to understand how sticky sessions should be configured when working with a service of type=LoadBalancer in AWS. My backend is 2 pods running a Tomcat app. I see that the service creates the AWS LB as well, and I set the right cookie value in the AWS LB configuration, but when accessing the system I keep switching between my pods/Tomcat instances.

My service configuration:

kind: Service
apiVersion: v1
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  labels:
    app: app1
  name: AWSELB
  namespace: local
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: app1

Are there any additional settings that are missing? Thank you, Jack
Kubernetes: How to setup sticky session with AWS loadbalancing
While following https://www.cyberciti.biz/faq/configure-nginx-ssltls-passthru-with-tcp-load-balancing/ I found that proxy_ssl on; was missing. The stream configuration below worked perfectly, although we have to access the nginx server over the http protocol (not https) on port 8443; I still need to figure out why it is not working over https.

stream {
    upstream apiservers {
        server 192.168.1.2:8443 max_fails=3 fail_timeout=10s;
        server 192.168.1.3:8443 max_fails=3 fail_timeout=10s;
        server 192.168.1.4:8443 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 8443;
        proxy_ssl on;
        proxy_pass apiservers;
        proxy_next_upstream on;
    }
}
I am trying to load balance 3 SSL-enabled REST web service instances using nginx. I need to use SSL pass-through in my case. I have followed https://www.cyberciti.biz/faq/configure-nginx-ssltls-passthru-with-tcp-load-balancing/ and https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru but I am not getting the default JSON when hitting the load balancer endpoint. I am getting the output shown in the image for all types of requests. Please help me with this.
Nginx SSL pass-thru not working with a REST web service
Here's what I ended up doing - it took a while to get right. While ideally I would have used the Prometheus Python client designed specifically for this purpose, it appears that it doesn't support multiple labels in some cases and the documentation is virtually non-existent - so I went with a home-brewed solution. The code below uses gevent and supports multiple (comma-delimited) pushgateway URLs (like "pushgateway1.my.com:9092, pushgateway2.my.com:9092").

import gevent
import requests


def _submit_wrapper(urls, job_name, metric_name, metric_value, dimensions):
    dim = ''
    headers = {'X-Requested-With': 'Python requests', 'Content-type': 'text/xml'}
    for key, value in dimensions.iteritems():
        dim += '/%s/%s' % (key, value)
    for url in urls:
        requests.post('http://%s/metrics/job/%s%s' % (url, job_name, dim),
                      data='%s %s\n' % (metric_name, metric_value),
                      headers=headers)


def submit_metrics(job_name, metric_name, metric_value, dimensions={}):
    from ..app import config
    cfg = config.init()
    urls = cfg['PUSHGATEWAY_URLS'].split(',')
    gevent.spawn(_submit_wrapper, urls, job_name, metric_name, metric_value, dimensions)
I wish to push a multi-labeled metric into Prometheus using the Pushgateway. The documentation offer a curl example but I need it sent via Python. In addition, I'd like to embed multiple labels into the metric.
How to push metrics with Python and Prometheus Pushgateway
You can use it this way with mod_setenvif:

SetEnvIfNoCase Request_URI ^/testing SECURED
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /path/to/your/.htpasswd
AuthGroupFile /
Require valid-user
Satisfy any
Order Allow,Deny
Allow from all
Deny from env=SECURED
I want to add a login restriction on a certain page, like www.example.com/testing. Whenever someone hits this URL it should ask for a password; otherwise the whole website should work password-free.

Htaccess code:

SetEnvIfNoCase Request_URI ^/testing-softwares SECURED
AuthName "Restricted Area"
AuthType Basic
AuthUserFile /path/to/your/.htpasswd
AuthGroupFile /
Require valid-user
Satisfy any
Order Allow,Deny
Allow from all
Deny from env=SECURED

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /nrl_shareware/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /abc_shareware/index.php [L]
</IfModule>
# END WordPress

Htpasswd code:

test:$apr1$O9AK9s5s$H0IuOqkTnB0yJ5k35kX2a1

Updated code:

SetEnvIfNoCase Request_URI ^/testing-softwares SECURED
AuthName "Restricted Area"
AuthType Basic
AuthUserFile "D:/DONT DELETE 80-261209/ABCProjects/xampp/htdocs/abc_shareware/.htpasswd"
AuthGroupFile /
Require valid-user
Satisfy any
Order Allow,Deny
Allow from all
Deny from env=SECURED

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /abc_shareware/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /abc_shareware/index.php [L]
</IfModule>
# END WordPress
How to protect a specific url through htaccess
Obviously, you didn't follow the right uninstallation method. Please note that if you install different toolkit versions there won't be any conflict between them, and you can keep both; you will be asked during the installation process to link your /usr/local/cuda-x.y to /usr/local/cuda. Check Section 2.6 of CUDA 7.0 Getting Started on Linux. The proper way to uninstall, as described in that document, is to use the commands below, depending on how you did the installation (i.e., the runfile method or the package method):

$ sudo /usr/local/cuda-X.Y/bin/uninstall_cuda_X.Y.pl

Use the following command to uninstall a driver runfile installation:

$ sudo /usr/bin/nvidia-uninstall

Use the following commands to uninstall an RPM/Deb installation:

$ sudo apt-get --purge remove <package_name>  # Ubuntu
$ sudo yum remove <package_name>              # Fedora/Redhat/CentOS
$ sudo zypper remove <package_name>           # OpenSUSE/SLES

I hope it works for you. I don't know about the Caffe deep learning library, but I am assuming that you previously configured it with the path to the CUDA 6.5 compiler and its libraries. If this is the case, first uninstall the old CUDA 6.5 properly, then configure the library from scratch and rebuild it.
On Ubuntu, I previously had an installation of CUDA 6.5 and wanted to upgrade to CUDA 7.0. So I deleted the directory at /usr/local/cuda-6.5 and installed CUDA 7.0 into /usr/local/cuda-7.0. I then changed the symbolic link at /usr/local/cuda to point to /usr/local/cuda-7.0. In my .bashrc file, I also updated the environment variables accordingly:

export CUDA_HOME=/usr/local/cuda-7.0
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64
export PATH=${CUDA_HOME}/bin:${PATH}

If I type "nvcc --version", then I get the following, as expected:

Cuda compilation tools, release 7.0, V7.0.27

However, I am now compiling some code (the Caffe deep learning library, to be precise) which uses CUDA, and I am getting the following error message:

error while loading shared libraries: libcudart.so.6.5: cannot open shared object file: No such file or directory

So for some reason, it is still looking for the CUDA 6.5 libraries rather than the CUDA 7.0 libraries. Why is this? How do I tell the compiler to look for the 7.0 libraries? I cannot find any reference to libcudart.so.6.5 in the source code I am compiling, so the CUDA toolchain itself is looking for the wrong version.
Compiling CUDA looks for wrong CUDA version
Tim, you're indeed hitting on the two key approaches:

NOT GOOD ENOUGH: store the secret key "secretly" in the app. There is indeed a grave risk of someone just picking it out of the app code. Some mitigations might be to (a) use the DPAPI to store the key outside the app binary, or (b) obtain the key over the wire from your web service each time you need it (over SSL), but never store it locally. No mitigation can really slow down a competent attacker with a debugger, as the cleartext key must end up in the app's RAM.

BETTER: Push the content that needs to be protected to your web service and sign it there. The good news is that only the request name and timestamp need to be signed -- not all the uploaded bits (I guess Amazon doesn't want to spend the cycles on verifying all those bits either!). Below are the relevant code lines from Amazon's own "Introduction to AWS for C# Developers". Notice how Aws_GetSignature gets called only with "PutObject" and a timestamp? You could definitely implement the signature on your own web service without having to send the whole file and without compromising your key. In case you're wondering, Aws_GetSignature is a 9-line function that does a SHA1 hash on a concatenation of the constant string "AmazonS3", the operation name, and the RFC822 representation of the timestamp -- using your secret key.

DateTime timestamp = Aws_GetDatestamp();
string signature = Aws_GetSignature("PutObject", timestamp);
byte[] data = UnicodeEncoding.ASCII.GetBytes(content);
service.PutObjectInline("MainBucket", cAWSSecretKey, metadata, data,
                        content.Length, null, StorageClass.STANDARD, true,
                        cAWSAccessKeyId, timestamp, true, signature, null);

EDIT: note that while you can keep the secret key portion of your Amazon identity hidden, the access key ID portion needs to be embedded in the request. Unless you send the file through your own web service, you'll have to embed it in the app.
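For readers outside .NET, here is a rough sketch of that signing scheme in Python — a keyed SHA1 (HMAC-SHA1) over the fields the answer lists. This is a reconstruction of the legacy S3 SOAP signature as described above, not code from Amazon's sample, and the key/operation values are placeholders:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate  # RFC 822 date formatting


def aws_soap_signature(secret_key: str, operation: str, timestamp: str) -> str:
    # Sign "AmazonS3" + operation + timestamp with the secret key (HMAC-SHA1),
    # then base64-encode the digest. Only these small fields are signed --
    # never the uploaded file bits.
    message = ("AmazonS3" + operation + timestamp).encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), message, hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")


sig = aws_soap_signature("my-secret-key", "PutObject", formatdate(usegmt=True))
print(sig)  # a short base64 string
```

Because the input is so small, a web service holding the secret key can compute this on behalf of the desktop app without the file ever passing through it.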
I'm Looking at using Amazon S3 and simpleDB in a desktop application. The main issue I have is that I either need to store my aws credentials in the application or use some other scheme. I'm guessing that storing them in the application is out of the question as they would be easily picked out. Another option is to create a web service that creates the aws authentication signature but this has its own issues. Does the signature require all the data from a file thats being uploaded? If so I would have to transfer all the data twice. There would then be a central failure point which was one of the main reasons for using aws. Any ideas? UPDATE: I needed to make it a bit clearer that I'm wanting to store my aws credentials in an application handed out to others. DPAPI or any other encryption would be only stop people simply using reflector to get the credentials. Using any encryption still needs the key that is easy to get. UPDATE 2 - Sept 2011 Amazon have released some details on using the AWS Security Token Service, which allows for authentication without disclosing your secret key. More details are available on this blog post.
Storing My Amazon Credentials in C# Desktop App
The Resque GitHub repository has this secret gem, a god task that will do exactly this: watch your tasks and kill stale ones. https://github.com/resque/resque/blob/master/examples/god/stale.god

# This will ride alongside god and kill any rogue stale worker
# processes. Their sacrifice is for the greater good.

WORKER_TIMEOUT = 60 * 10 # 10 minutes

Thread.new do
  loop do
    begin
      `ps -e -o pid,command | grep [r]esque`.split("\n").each do |line|
        parts = line.split(' ')
        next if parts[-2] != "at"
        started = parts[-1].to_i
        elapsed = Time.now - Time.at(started)

        if elapsed >= WORKER_TIMEOUT
          ::Process.kill('USR1', parts[0].to_i)
        end
      end
    rescue
      # don't die because of stupid exceptions
      nil
    end

    sleep 30
  end
end
I have an application that uses resque to run some long-running jobs. Sometimes they take 8 hours or more to complete. In situations where the job fails, is there a way to monitor resque itself to see if the job is running? I know I can update the job's status in a database table (or in redis itself), but I want to know if the job is still running so I can kill it if necessary. The specific things I need to do are:

Determine if the job is still running
Determine if the job has stopped
Kill jobs that are stuck
Find out if a resque job is still running and kill it if it's stuck
The /media/ path is not found, so it is returning a 404 error. You need to check from where it is being called: the request comes from an AJAX script requesting a path that does not exist, so find where it is called and simply comment that call out.
I am getting lots of 404 errors in Firebug's Net tab. I have attached a screenshot to show more. The error count keeps incrementing in Firebug. I have searched for solutions, but none relate to this type of problem.

Console output (keeps incrementing, thereby slowing down the website):

"NetworkError: 404 Not Found - http://test.xxx.com/media/" /media/
404 media urls in Fire Bug
Have you tried this:

gitk path/to/file

answered Mar 21, 2012 by Pierre Mage

Comments:
- Chris: Seems to display the last commit that touched path/to/file.
- Pierre Mage: It should show you the gitk interface for path/to/file, including commits and diffs, not only the last commit.
- Chris: Actually, this looks like exactly what I need: the whole timeline is oriented around path/to/file.
- Tola Odejayi: This doesn't work for me; I just get a blank screen with the text 'No commits selected'.
- Fred Qian: On the latest macOS, gitk does not scale well to high DPI. Apart from that, it's awesome.
This question already has answers here. Closed 11 years ago. Possible duplicate: View the change history of a file using Git versioning.

Sometimes I want to step through the history of a particular file. In the past I used P4V and this was very quick and intuitive: right-click on a file and select history, then scroll through the dates and see a nice diff of exactly what changed in that file on that date. Simple.

Switching to Git, this is now a grueling task:

1. "git log filename"
2. Look at the history, pick a date, copy the hash
3. "git diff hash"
4. Scroll through the diff for the stuff that changed in the file I am interested in
5. Nope, that's not it, let's try a different date; back to step 2, rinse and repeat

I've searched SO, and I've tried a few of the commonly suggested GUIs: GitHub, gitk, gitg, git-gui. These all remove the need to manually run commands, but the workflow is the same: view the history of a file; view a commit; search through a diff of lots of irrelevant files. It's slow and repetitive.

All the data is in the repo, so I see no reason this simple, common use case could not be more streamlined. Can anyone recommend a tool that does this, or a more efficient way to use the command line to do what I want?

Thanks for any suggestions.
Show history of a file? [duplicate]
You are matching your referer against ^https://(.+\.)*mydomain\.com. Which means if some completely other site, say http://stealing_your_images.com/, links to something on protect.mydomain.com, the first condition will fail, thus the request is never redirected to https://unprotected.mydomain.com/. You want to approach it from the other direction: only allow certain referers to pass through, then redirect everything else:

RewriteEngine On
RewriteBase /

# allow these referers to pass through
RewriteCond %{HTTP_REFERER} ^https://(protect|unprotected)\.mydomain\.com
RewriteRule ^ - [L]

# redirect everything else
RewriteRule ^ https://unprotected.mydomain.com/ [R,L]
I'm trying to set up an htaccess file that would accomplish the following: only allow my website to be viewed if the viewing user is coming from a specific domain (link).

So, for instance, I have a domain called protect.mydomain.com. I only want people coming from a link on unprotected.mydomain.com to be able to access protect.mydomain.com.

The big outstanding issue I have is that if you get to protect.mydomain.com from unprotected.mydomain.com and click on a link in protect.mydomain.com that goes to another page under protect.mydomain.com, then I get sent back to my redirect because the http_referer is protect.mydomain.com. So to combat that, I put in a check to allow the referer to be protect.mydomain.com as well. It's not working, and access is allowed from everywhere. Here is my htaccess file (all this is under HTTPS):

RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_REFERER} ^https://(.+\.)*mydomain\.com
RewriteCond %1 !^(protect|unprotected)\.$
RewriteRule ^.*$ https://unprotected.mydomain.com/ [R=301,L]
htaccess only accept traffic from specific http_referer
If you have only one Mat pointing to your data, you can do this trick:

Mat img = imread("myImage.jpg");
// do some operations
img = Mat(); // release it

If more than one Mat is pointing to your data, what you should do is release all of them:

Mat img = imread("myImage.jpg");
Mat img2 = img;
Mat roi = img(Rect(0,0,10,10));
// do some operations
// release all of them
img = Mat();
img2 = Mat();
roi = Mat();

Or use the bulldozer approach (are you sure? this sounds like inserting bugs in your code):

Mat img = imread("myImage.jpg");
Mat img2 = img;
Mat roi = img(Rect(0,0,10,10));
// do some operations
char* imgData = (char*)img.data;
delete[] imgData; // frees the pixel data out from under every Mat sharing it
imshow("Look, this is called access violation exception", roi);
I'm working on a program where we do some image processing of full quality camera photos using the Android NDK. So, obviously memory usage is a big concern. There are times where I don't need the contents of a Mat anymore - I know that it'll be released automatically when it goes out of scope, but is there a good way of releasing it earlier, so I can reduce the memory usage? It's running fine on my Galaxy S II right now, but obviously that is not representative of the capabilities of a lot of the older phones around!
Explicitly releasing Mat with opencv 2.0
You can use Project from Version Control; it has Git. What I did was log in through a token; it also tells you what needs to be added to the token, and that worked for me. You can create the token here: https://github.com/settings/tokens

answered Nov 16, 2020 by Kage Krigeren
I've started working with Android Studio and I found a problem when trying to connect to GitHub. I've tried restarting Android Studio and even creating a new project, but I am not able to log in. I installed Git and it's working in the local repository. The problems are:

- Incorrect credentials
- Request response: 401 unauthorized
Android Studio, Github login problem incorrect credentials
I found a better solution: http://www.littlebigextra.com/use-spring-profiles-docker-containers/

I can compile without using a profile, and then select the Spring profile when running java -jar. Thanks for your comments and responses!
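Following that approach, the profile can be picked per container at run time instead of at build time. A minimal sketch using docker-compose, assuming the core service from the compose file in the question; SPRING_PROFILES_ACTIVE is the standard Spring Boot environment variable for selecting active profiles:

```yaml
version: "3"
services:
  core:
    build:
      context: ./Core
      dockerfile: Dockerfile
    ports:
      - 8082:8082
    environment:
      # Spring Boot reads this at startup, so the same jar can run
      # with the "docker" profile without recompiling
      - SPRING_PROFILES_ACTIVE=docker
```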
I am running multiple Spring Boot microservices within Docker containers. I am writing a docker-compose.yml file to automate the deploy, but I have a problem. All the Spring Boot microservices have different profiles depending on whether you want to run them locally without Docker or run them with Docker (basically, each profile changes the URLs of clients between microservices). If I run docker-compose up and the jar files that have been compiled were not compiled with the "docker" Spring Boot profile, it will create the images with the wrong jar.

My docker-compose.yml example:

version: "3"
services:
  imageserver:
    build:
      context: ./ImageServer
      dockerfile: Dockerfile
    ports:
      - 8081:8081
    networks:
      - my-private-network
  core:
    build:
      context: ./Core
      dockerfile: Dockerfile
    ports:
      - 8082:8082
    networks:
      - my-private-network
networks:
  my-private-network:
  my-public-network:

One of my Dockerfile examples:

FROM openjdk:8-jdk-alpine
LABEL maintainer="[email protected]"
VOLUME /tmp
EXPOSE 8082
COPY ./target/*.jar Core.jar
ENTRYPOINT ["java","-jar","/Core.jar"]

Is there any way to execute a command like "mvn clean install -P docker" in each directory before running docker-compose, to ensure that the jar that will be included in the image has been compiled with the right profile? The command must be executed before creating the container (outside the container, to compile the jar that will be included within the container, not inside the container).

Thanks
Execute mvn clean install before docker-compose commands
You can check the name servers by opening resolv.conf in your container: /etc/resolv.conf

You can test by adding Google's public name server, like:

nameserver 8.8.8.8

Adding the above line to /etc/resolv.conf should resolve the internet issue.

A better way is to use Kubernetes's own CoreDNS, as described here.
Pod hosted on K8s cluster is not able to connect to the internet:[root@master micro-services]# kubectl exec -it webapp-77896f4bf8-8ptbb -- ping google.com ping: bad address 'google.com' command terminated with exit code 1
Pod is not able to connect to internet
Create a function in the component's frontend controller file. Say your component name is com_myform; then modify the frontend controller file like this:

<?php
class MyFormController extends JController
{
    public function myCronTask()
    {
        $app = JFactory::getApplication();
        $today = date('Y-m-d');
        $db = JFactory::getDBO();
        $query = "UPDATE `#__my_form_table` SET `state`=0 WHERE `date`='$today'";
        $db->setQuery($query);
        $db->query();
        $app->close();
    }
}
?>

Then activate the cron job from cPanel. Set the cron job to run once a day, and the URL will be:

http://yoursite.com/index.php?option=com_myform&task=myCronTask

If you have a problem understanding this, please reply.
I am not very familiar with cron jobs in Joomla. I have created a custom component from Component Creator. I have created two fields: 1. Title and 2. Cron (a date field). Now I want to disable that title when the Cron date matches the current date. I will be thankful if anyone can help me.

Thanks, Manan
Joomla update mysql query with Cron Job
As of kubectl v1.24, it is possible to patch subresources with an additional flag, e.g. --subresource=status. This flag is considered "Alpha" but does not require enabling the feature.

As an example, with a YAML merge:

kubectl patch MyCrd myresource --type=merge --subresource status --patch 'status: {healthState: InSync}'

The Sysdig "What's New?" for v1.24 includes some more words about this flag:

Some kubectl commands like get, patch, edit, and replace will now contain a new flag --subresource=[subresource-name], which will allow fetching and updating status and scale subresources for all API resources. You now can stop using complex curl commands to directly update subresources.

The --subresource flag is scheduled for promotion to "Beta" in Kubernetes v1.27 through KEP-2590: graduate kubectl subresource support to beta. The lifecycle of this feature can be tracked in #2590 Add subresource support to kubectl.
I am trying to update the status subresource for a Custom Resource, and I see a discrepancy between curl and kubectl patch commands. When I use a curl call it works perfectly fine, but when I use the kubectl patch command it says patched but with no change. Here are the commands I used.

Using curl: when I connect to kubectl proxy and run the below curl call, it's successful and updates the status subresource on my CR.

curl -XPATCH -H "Accept: application/json" -H "Content-Type: application/json-patch+json" --data '[{"op": "replace", "path": "/status/state", "value": "newState"}]' 'http://127.0.0.1:8001/apis/acme.com/v1alpha1/namespaces/acme/myresource/default/status'

kubectl patch command: using kubectl patch says the CR is patched, but with no change, and the status subresource is not updated.

$ kubectl -n acme patch myresource default --type='json' -p='[{"op": "replace", "path": "/status/state", "value":"newState"}]'
myresource.acme.com/default patched (no change)

However, when I do the kubectl patch on other subresources like spec it works fine. Am I missing something here?
kubectl patch doesn't update status subresource
In this case the solution was to delete the old config from $HOME/.kube/ and re-initialize it, after az login with the user in question.
My private AKS cluster is accessible only to the root user using kubectl on a jumphost. But for a non-root user it throws the below error message:

someuser@jump-vm$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "XX-XX-XX-XX-XX" cannot list resource "XX" in API group " " at the cluster scope

How to resolve this error?
Error from server (Forbidden): pods is forbidden: User cannot list resource "pods" in API group at the cluster scope
Either/or:

- Upgrade the size of your main node disks, with something like this.
- Check what pods are taking up space. Is it logs? Is it cached data? Is it swap? Every application is different, so you will have to go case by case.
- Set local ephemeral-storage limits at the pod level for your workloads so that they don't go over. Pods using a lot will get evicted.
- Use Persistent Volumes for your workloads, especially storage that is not local and is reserved for your applications.
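For the ephemeral-storage option, a minimal sketch of what that looks like in a pod spec; the pod name, image, and sizes here are placeholder values, not from the tutorial:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-agent            # hypothetical pod name
spec:
  containers:
  - name: agent
    image: my-build-agent      # placeholder image
    resources:
      requests:
        ephemeral-storage: "1Gi"   # scheduler reserves this much node disk
      limits:
        ephemeral-storage: "2Gi"   # kubelet evicts the pod if it exceeds this
```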
I followed this tutorial to create a Kubernetes cluster on Azure to run build agents: http://www.chrisjohnson.io/2018/07/07/using-azure-kubernetes-service-aks-for-your-vsts-build-agents/

To recap what is there: a Helm chart to do a deployment with a secret and a config map. For this deployment, I created a Kubernetes cluster on Azure with all default settings, and it is pulling an image from Docker Hub with the VSTS build agent installed.

All was working fine, but recently pods started to be evicted pretty regularly; the message on them is:

Message: Pod The node was low on resource: [DiskPressure].

How can I fix this issue?
How to clean up disk space for Azure Kubernetes Node?
It looks like you can't run more than one AWS service using a single moto standalone server. If you want, say, both the ec2 and acm services to be run with moto, run both of these commands:

moto_server ec2 -p 5000 -H 0.0.0.0
moto_server acm -p 5001 -H 0.0.0.0

However, if you want multiple AWS services for testing, you could consider LocalStack here. It claims that it internally uses moto and a few other open source applications. It has a few limitations, though, such as the ACM service not being available, and the implementation of a few AWS APIs varies slightly.
I am trying to run integration tests against AWS services; to do this I chose moto. Because I am doing this in Java, I wanted to run moto_server and execute these tests against this mock. The problem I have is that moto_server allows only one service to be mocked, and I need a couple of them. I can launch a moto_server instance per service, but this way they will not share state (like EC2 instances or IAM roles). Is there another way I can mock more than one service with moto_server?
How to run multiple AWS services with moto_server
To use a concrete example, let's say you have 3 nodes and 9 tasks. You now want to go to 2 nodes and 6 tasks, without any unnecessary rescheduling (e.g. 2 nodes and 9 tasks, or 3 nodes and 6 tasks).

To scale down a service and 'drain' a node at the same time, you can do this:

docker service update --replicas 6 --constraint-add "node.hostname != node_to_be_removed_hostname" service_name

If your existing setup is balanced, this should only cause the tasks running on the host to be removed to be killed. After this, you can proceed to drain the node (docker node update), remove it from the swarm, and remove the constraint that has just been added.

answered Oct 9, 2018 by Jay
I have a set of tasks for a given service, t1, t2, ..., tk, across nodes N1, N2, ..., Nw. Due to lower usage, I do not need as many tasks as k; I need only l tasks (l < k). In fact, I do not need w nodes, so I want to start removing machines and pay less. Removing one machine at a time is fine. Each service has its own state. The services are started in replicated mode.

1) How can I remove a single node and force the Docker swarm not to recreate the same number of tasks for the service?

Notes: I can ensure that no work is rerouted to tasks running on a specific node, so removing that node is safe. This is the easiest solution; I will end up with w - 1 nodes and l tasks, assuming that k - l tasks were served on the removed node.

or

2) How can I remove specific containers (tasks) from the Docker swarm and keep the number of replicas of the service lower by the number of removed tasks?

Notes: I assume that I have already removed a node. The services from the node were redeployed to other nodes. I monitor myself which containers (tasks) serve no traffic, so no state needs to be maintained.

or

3) Any other solution?
Docker swarm mode: scale down a node and remove services
In an EC2 instance, the best way to authorize running code to access AWS resources is to use an IAM role.

You assign a role to an instance when starting it. Any policy can be attached to the role. Inside the instance, any process can connect to a known URL to retrieve temporary keys in order to authenticate to any AWS service.

Boto, the Python library used by the Ansible s3 module, has automatic support for IAM roles. So if no key is provided directly or in an environment variable, Boto will query the known URL to get the instance keys.

More details on how IAM roles work can be found here: http://docs.aws.amazon.com/IAM/latest/UserGuide/roles-usingrole-ec2instance.html#role-usecase-ec2app-permissions
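Concretely, with an IAM role attached to the instance, the task from the question can simply drop the key parameters; this is a sketch reusing the question's own bucket and object names:

```yaml
# no aws_access_key / aws_secret_key here: Boto falls back to the
# temporary credentials served by the EC2 instance metadata URL
- name: upload data import file
  s3: bucket=my-bucket object=/data.zip mode=get
```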
I'm trying to use Ansible to download some files to my various EC2 instances. The problem I'm having is when it comes to my AWS credentials. The AWS Ansible modules all work great, including the s3 module. The following (when I substitute in my AWS credentials) works like a charm:

- name: upload data import file
  s3: aws_access_key=<accesskey> aws_secret_key=<secretkey> bucket=my-bucket object=/data.zip mode=get

However, I need the Ansible playbooks and roles I'm writing to be usable by anyone, and I don't want any AWS credentials hardcoded. Everywhere else I use the Ansible AWS modules, I've eliminated aws_access_key and aws_secret_key and it works just fine, as Ansible looks for those values in environment variables. However, in every other use, I'm running them as local actions, so it's pulling the credentials from my local machine, which is what I want. The problem is when I'm running the s3 module on one of my instances: if I eliminate the credential parameters, I get:

failed: [54.173.19.238] => {"failed": true}
msg: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials

I imagine that this is because, since I've not specified the credentials, it's looking for them in environment variables on my instance, where they are not set. Nor would I want to set them in environment variables on the instance.

Is there a way I can download a file from S3 with Ansible without having to specify my AWS credentials?
Ansible and s3 module
S3-hosted sites are static HTML. No POST handling, no PHP renders, no nothing... So why do you care about Google indexing AJAX sites?

For a static website, simply upload well-formed robots.txt and sitemap.xml files to your root path.

answered May 23, 2014 by ma.tome
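For completeness, a minimal robots.txt of that kind; the sitemap URL is a placeholder for your own site's domain:

```text
User-agent: *
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```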
Google webmaster guide explains that web server should handle requests for url that contains _escaped_fragment_ (The crawler modifies www.example.com/ajax.html#!mystate to www.example.com/ajax.html?_escaped_fragment_=mystate) http://support.google.com/webmasters/bin/answer.py?hl=en&answer=174992 My site is located on AWS S3 and I have no web server to handle such requests. How can I make sure the crawler gets feed and my site gets index?
How to make sure web crawler works for site hosted on AWS S3 and uses AJAX
The expression 0 45/30 07-17 * * ? fires at minute 45 of every hour from 07 to 17, every day (the /30 step starts at 45, and 75 is out of the minute range, so only minute 45 ever matches). In my opinion, it's not possible to schedule a job to start at 7:45 and end at 17:15 using only one cron expression.

Thanks
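To see why the two expressions from the question fit together, one can enumerate their fire times. A quick sketch in plain Python (Quartz semantics reduced to a fixed minute over an hour range), showing that their union is exactly the every-30-minutes schedule from 7:45 to 17:15:

```python
def fire_times(minute, hours):
    """(hour, minute) pairs for a Quartz pattern with a fixed minute
    field and a range in the hour field."""
    return {(h, minute) for h in hours}

# 0 15 8-17 ? * *  -> minute 15, hours 8..17
expr_a = fire_times(15, range(8, 18))
# 0 45 7-16 ? * *  -> minute 45, hours 7..16
expr_b = fire_times(45, range(7, 17))

# Desired schedule: every 30 minutes from 07:45 to 17:15 inclusive
desired = set()
t = 7 * 60 + 45                 # minutes since midnight
while t <= 17 * 60 + 15:
    desired.add(divmod(t, 60))  # (hour, minute)
    t += 30
```

So splitting into two expressions is not just a workaround; together they cover the wanted schedule exactly, with no extra firings.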
I am having problems scheduling my job with quartz... I cannot find an expression which lets me run my job from 7:45 to 17:15 every half hour... I have tried this0 15/30 7-17 ? * * *but it fires at 17:45, and I don't want it. Does anybody know any way to do this without splitting the expression? I know that these expressions0 15 8-17 ? * * * 0 45 7-16 ? * * *would fit but I'd rather use a single one if possible.
Scheduling with quartz every 30 minutes from 7:45 to 17:15
<div class="s-prose js-post-body" itemprop="text"> <p>In your case it was some other process that was using the port and, as indicated in the comments, <code>sudo netstat -pna | grep 3000</code> helped you solve the problem.</p> <p>In other cases (I myself encountered it many times) it is mostly the same container running at some other instance. In that case <code>docker ps</code> was very helpful, as I often left the same containers running in other directories and then tried running them again elsewhere, where the same container names were used.</p> <p><strong>How <code>docker ps</code> helped me:</strong></p> <blockquote> <p><code>docker rm -f $(docker ps -aq)</code> is a short command which I use to remove all containers.</p> </blockquote> <p><strong>Edit:</strong> Added how <code>docker ps</code> helped me.</p> </div>
<div class="s-prose js-post-body" itemprop="text"> <p>When I run <code>docker-compose up</code> in my Docker project it fails with the following message:</p> <blockquote> <p>Error starting userland proxy: listen tcp 0.0.0.0:3000: bind: address already in use</p> </blockquote> <pre><code>netstat -pna | grep 3000 </code></pre> <p>shows this:</p> <pre><code>tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN - </code></pre> <p>I've already tried <code>docker-compose down</code>, but it doesn't help.</p> </div>
Docker Error bind: address already in use
Whether or not private contributions are cuonted and shown is a setting in your GitHub profile. The default is to only show activity from public repositories (https://docs.github.com/en/github/setting-up-and-managing-your-github-profile/publicizing-or-hiding-your-private-contributions-on-your-profile)Your GitHub profile shows a graph of your repository contributions over the past year. You can choose to show anonymized activity from private and internal repositories in addition to the activity from public repositories.Note that the e-mail address of the commits must be linked to your GitHub account (https://docs.github.com/en/github/setting-up-and-managing-your-github-profile/viewing-contributions-on-your-profile):Note: Commits will only appear on your contributions graph if the email address you used to author the commits is connected to your account on GitHub. For more information, see "Why are my contributions not showing up on my profile?"
I have been contributing to a private repo of an organization on GitHub. There is no member added in the organization and at the organization page, it shows this messageThis organization has no public members. You must be a member to see who’s a part of this organization.So, I am the part of this organization and I have made a few commits, verified by GPD signature verification but these commits are not being counted by my contribution heat map, even though I have checked the option to show contributions on the private repos. Does GitHub count the commits you made in a private repo of an org for your heatmap ?
Does GitHub count the commits you made that are in a private repo of an organization
Destroy your container and start a new one up with the new environment variable using docker run -e .... It's identical to changing an environment variable on a running process: you stop it and restart it with a new value passed in. Replace the concept of restarting a process with destroying and recreating a new container.

If your container contains files that cannot be lost, then you should be using volumes. The other contents of the container filesystem should be either disposable or immutable.

answered Jul 26, 2016 by BMitch

Comments:
- Victor.dMdB: Well yes, true, one could default back to recreating the container, and I did in the end since I'm using a volume, but I was more curious if there was a way to do so as a kind of running patch?
- BMitch: Docker doesn't provide a way to modify an environment variable in a running container because the OS doesn't provide a way to modify an environment variable in a running process. You need to destroy and recreate.
- Lamp: That doesn't make sense. You can stop and start containers.
- BMitch: The update command doesn't require that you restart the container to take effect. If you change something that requires a container restart, then the workflow is to replace the container instead.
I've set up a server with multiple Docker containers, accessible with jwilder's nginx reverse proxy. When you run the containers you can set the VIRTUAL_HOST environment variable. I've been trying to figure out a way of updating these after a container was launched.

A solution posted here:

You just stop the Docker daemon and change the container config in /var/lib/docker/containers/[container-id]/config.json

requires you to stop the Docker daemon, but I would prefer not to have to resort to that.

Another one here uses docker commit to preserve the instance information:

Having said that, you -can- preserve filesystem changes in the container, by committing it as a new image;

$ docker run -it --name=foobar alpine sh
$ docker commit foobar mynewimage
$ docker rm foobar
$ docker run -it --name=foobar mynewimage sh

Though this also seems to be a bit over the top for just changing an environment variable.

I've looked into docker update, but that is mainly for reconfiguring container resources. Of course, if I have no other choice I will use either of the methods above, but I'm wondering if anyone has found some other solution?
Updating Environment Variables of a Container using Docker
0 You could also declare a webhook on GitHub, in order to generate an event payload that can be listened to by a process on your server. That process (a listener) would be in charge of triggering the git pull. You will find various examples of such listeners, like: a basic php one one in go or in python Share Improve this answer Follow answered Sep 12, 2014 at 19:06 VonCVonC 1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges Add a comment  | 
I develop on my Mac and push it to Github. I login to my server by SSH and I git pull the changes to the server. I want the changes to automatically be pulled to the server when I push them to Github so I a file .git/hooks/post-update with this info #!/bin/sh echo echo "**** Pulling changes into Live [Hub's post-update hook]" echo cd /mydirector/html || exit unset GIT_DIR git pull exec git-update-server-info What else should I do to get it working? Thanks in advance for your answer. It will be very much appreciated.
How do I setup a git hook to pull from Github?
7 I had exactly the same problem - couldn't successfully make request on HTTP to app from Selenium-controlled browsers (Chrome or Firefox) in other Docker container on same network. cURL from that container though worked fine! Connect on HTTP, but something seemed to be trying to force HTTPS. Identical situation right down to the name of the container "app". The answer is... it's the name of the container! "app" is a top level domain on the HSTS preloaded list - that is, browsers will force access through HTTPS. Fix is to use a container name that isn't on HSTS preloaded lists. HSTS - more reading Share Improve this answer Follow answered Jul 17, 2019 at 14:51 ryanpryanp 5,01311 gold badge3131 silver badges4141 bronze badges 4 oh my good lord. thank you internet stranger, i have no idea how long it might have taken me to figure this out without you. – hwjp Apr 6, 2020 at 16:39 A truly horrible gotcha, that's for sure... I'm glad my pain at least meant someone else didn't need to suffer as much! – ryanp Apr 9, 2020 at 20:37 1 if you - like me - don't want to change your service name, you could also add an alias so the selenium browsers can connect to your app service via an alternative hostname: services: app: networks: tests: aliases: - app.local – Stefan Jul 3, 2020 at 12:53 For future visitors also driven crazy by this, you can check a name at hstspreload.org to see if it is in the HSTS preloaded list. – chucknelson Aug 20, 2020 at 19:05 Add a comment  | 
I have a functional app running in a docker on port 3000. I have selenium tests that works when I set my host to http://localhost:3000. I created a container to launch the selenium tests and it fails with the following error: WebDriverError:Reachederrorpage:about:neterror?e=nssFailure2&u=https://app:3000/&c=UTF-8&f=regular&d=An error occurred during a connection to app:3000. SSL received a record that exceeded the maximum permissible length. Error code: <a id="errorCode" title="SSL_ERROR_RX_RECORD_TOO_LONG">SSL_ERROR_RX_RECORD_TOO_LONG</a> Snippet of my docker-compose.yml app: build: context: . dockerfile: Dockerfile.dev volumes: - ./:/usr/src/app/ ports: - "3000:3000" - "3001:3001" networks: tests: selenium-tester: build: context: . dockerfile: Dockerfile.selenium.tests volumes: - ./:/usr/src/app/ - /dev/shm:/dev/shm depends_on: - app networks: tests: I replaced the host by http://app:3000 but firefox seems to want to redirect this http to https (which is not working). And finally I build my driver like this: const ffoptions = new firefox.Options() .headless() .setPreference('browser.urlbar.autoFill', 'false'); // test to disable auto https redirect… not working obviously const driver = Builder() .setFirefoxOptions(ffoptions) .forBrowser('firefox') .build(); When manually contacting the http://app:3000 using curl inside the selenium-tester container it works as expected, I get my homepage. I'm short on ideas now and even decomposing my problem to write this question didn't get me new ones
Selenium firefox driver forces https
Usually when you clone a Laravel repository, you have to do these steps:

- Run composer install (this creates the missing vendor/autoload.php)
- Copy .env.example to .env and set the right values inside .env
One of my friends has created a Laravel app and made a repository on GitHub. But once I cloned it and ran the php artisan serve command, it displays the following errors:

Warning: require(C:\Users\j\Desktop\newsLankaPhp2-2\newsLankaPhp2-2\bootstrap/../vendor/autoload.php): failed to open stream: No such file or directory in C:\Users\j\Desktop\newsLankaPhp2-2\newsLankaPhp2-2\bootstrap\autoload.php on line 17

Fatal error: require(): Failed opening required 'C:\Users\j\Desktop\newsLankaPhp2-2\newsLankaPhp2-2\bootstrap/../vendor/autoload.php' (include_path='.;C:\xampp\php\PEAR') in C:\Users\j\Desktop\newsLankaPhp2-2\newsLankaPhp2-2\bootstrap\autoload.php on line 17

I am new to Laravel and I need some help regarding this.
Cannot serve Cloned Git repository in my local machine
I'm not sure but I guess he didn't received a notification. However such GitHub behaviors often change over time with UI updates and new features... Anyway, it will be clearer for the repo owner, as for anybody else later reviewing what happened, if you'd explicitly notify him about the PR update. He might/should have asked you for those changes as a comment to your initial PR? Then answer you fulfilled his request with a new comment.
I've created a pull request on GitHub, been asked for changes by the repo owner, committed the changes and pushed them to the same branch.Now I can see the updates in the pull request but I'm not sure if the repo owner have received any kind of automatic notification (e.g. email) about the update in the pull request. Should I trigger such a notification manually by adding a comment? something else?Readingthis,thisandthisdoesn't answer the question either.
Repo owner automatic notification after updating a pull request
In the fog of eclipse plug-ins I was not able to find the exact answer, but this was the solution. The problem was in the IBM Websphere and Bluemix plugins. There are several options for installing those plugins which results in different content. From the Eclipse Marketplace: The bad set came from Eclipse Marketplace from "IBM Liberty Developer Tools for Oxygen" The good set came from "IBM Eclipse Tools for Bluemix Oxygen" The problem plug-ins are these: IBM Bluemix Tools 1.0.2.v20171004_2101 com.ibm.wdt.bluemixtools.feature.feature.group IBM OSGi Application Development Tools 17.0.3000.v20171004_2101 com.ibm.osgi.wdt.feature.feature.group IBM Web Development Tools 17.0.3000.v20171004_2101 com.ibm.wdt.webtools.top.feature.feature.group IBM Web Services Development Tools 17.0.3000.v20171004_2101 com.ibm.wdt.ast.ws.tools.feature.feature.group IBM The ones that work are these: IBM Bluemix Tools 17.0.3000.v20171004_2330 com.ibm.cftools.ext.feature.feature.group IBM WebSphere® Application Server Liberty Tools 1.0.2.v20171004_2330 com.ibm.cftools.server.tools.feature.feature.group IBM These two came with either package, but are required in a websphere/bluemix installation. Cloud Foundry Tools Core 1.2.3.v201709130027 org.eclipse.cft.server.core.feature.feature.group Eclipse Tools for Cloud Foundry Cloud Foundry Tools UI 1.0.10.v201709130027 org.eclipse.cft.server.ui.feature.feature.group Eclipse Tools for Cloud Foundry As of this writing this link has the current information on installing Bluemix / Websphere: https://console.bluemix.net/docs/manageapps/eclipsetools/eclipsetools.html#eclipsetools
I'm using Eclipse with Egit/Github and Maven on Windows. Often but not always, when checking out a branch an error message is thrown indicating the pom.xml file could not be renamed which causes the checkout to fail. The file is locked by Windows, preventing the rename. Using Handle as suggested below shows Eclipse has the lock. Colleagues don't see this problem. I've installed an entirely different instance of Eclipse and cloned the repository to a different location and have the same results. This all causes a great mess in my repository because Git does not have a rollback function on the checkout failure. All of the files from the go-to branch were copied in but git keeps the come-from branch as checked out. All of the files that differ between the branches are shown as modified. Cleanup takes a bit of work.
Locked pom.xml causes git branch checkout failure in eclipse
NSMutableArray uses slightly more memory for two (er, four, see comments) reasons:

1) Because it can change size, it can't store the contents inside the object and must store a pointer to out-of-line storage, plus the extra malloc node for the storage.

2) Because it would be very slow to resize one element at a time as things are added, it resizes in chunks, which may result in some unused space.
I have a question regarding NSArray and NSMutableArray. I understand the primary difference between the two: NSArray is immutable and NSMutableArray is mutable. And as far as my research goes, their performance is roughly the same too. There is one thing that I could not find a good answer for, and that is whether NSMutableArray uses more memory than NSArray, or whether NSMutableArray is somehow harsher on memory than NSArray. I would really appreciate suggestions and explanations. Thanks, Vik
Is NSMutableArray better than NSArray for memory reasons?
You can mount the host directory into the container. Your host directory will then be accessible inside the Jenkins container and you can access the files there as well. This way, just mount your .kube folder, which stores the kubeconfig file, into the Jenkins container, and you can use that path in the Jenkins config.

1. Create a home directory for Jenkins on the host: `sudo mkdir /mykubeconfig`
2. Copy the kubeconfig file into the directory created above.
3. Run the latest Jenkins container using the following command: `docker run -d -p 8080:8080 -p 50000:50000 -v /mykubeconfig:/var/jenkins_home jenkins`

Now you will be able to access Jenkins on host port 8080.
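The same bind mount can also be sketched as a Compose file, which is easier to keep in version control. This is only a sketch: the host path `~/.kube`, the image tag, and the in-container mount point are assumptions, not anything mandated by the Jenkins Kubernetes plugin.

```yaml
# docker-compose.yml — hypothetical sketch: expose the host's kubeconfig
# folder inside the Jenkins container so the Kubernetes plugin can read it.
services:
  jenkins:
    image: jenkins/jenkins:lts        # assumed image tag
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      # host ~/.kube becomes visible inside the container at this path;
      # point the plugin's kubeconfig setting at /var/jenkins_home/.kube/config
      - ~/.kube:/var/jenkins_home/.kube
```

Note that with k3d the kubeconfig's server address usually points at a host port, so from inside the container you may additionally need an address reachable from the container's network (e.g. `host.docker.internal`), depending on your setup.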
I am running a Jenkins docker container, in which I want to configure the Kubernetes plugin, which requires passing a kubeconfig file. How can I point the kubeconfig file on my local machine to the Jenkins running in the container? I am running the k3d Kubernetes cluster on my host machine.
How can I connect to k3d kubernetes running on my host to Jenkins docker container
We opened an issue to discuss this: https://github.com/kubernetes/kubernetes/issues/13858 The recommended way to go here is to use IAM instance profiles. kube-up does configure this for you, and if you're not using kube-up I recommend looking at it to emulate what it does! Although we did recently merge in support for using a .aws credentials file, I don't believe it has been back-ported into any release, and it isn't really the way I (personally) recommend. It sounds like you're not using kube-up; you may find it easier if you can use that (and I'd love to know if there's some reason you can't or don't want to use kube-up, as I personally am working on an alternative that I hope will meet everyone's needs!) I'd also love to know if IAM instance profiles aren't suitable for you for some reason.
We are trying to configure a Kubernetes RC in an AWS instance with AWS Elastic Block Store (EBS). Here is the key part of our controller yaml file:

```yaml
volumeMounts:
  - mountPath: "/opt/phabricator/repo"
    name: ebsvol
volumes:
  - name: ebsvol
    awsElasticBlockStore:
      volumeID: aws://us-west-2a/vol-*****
      fsType: ext4
```

Our RC can start the pod and works fine without mounting it to an AWS EBS, but with volume mounting to an AWS EBS it gives us:

```
Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedMount Unable to mount volumes for pod "phabricator-controller-zvg7z_default": error listing AWS instances: NoCredentialProviders: no valid providers in chain
Fri, 11 Sep 2015 11:29:14 +0000 Fri, 11 Sep 2015 11:29:34 +0000 3 {kubelet 172.31.24.103} failedSync Error syncing pod, skipping: error listing AWS instances: NoCredentialProviders: no valid providers in chain
```

We have a credentials file with appropriate credentials in the .aws directory, but it's not working. Are we missing something? Is it a configuration issue?

Kubectl version: 1.0.4 and 1.0.5 (tried with both)
Kubernetes with AWS Elastic Block Storage
You can indicate that you don't want GitHub Pages to build your page with Jekyll by adding an empty file called .nojekyll in the root of your publishing source (see docs). You can also use a custom GitHub Actions workflow to publish your page; such a workflow usually has four steps: Check out the repo with actions/checkout Build your site using a static site generator Upload your static files with actions/upload-pages-artifact Deploy the artifact with actions/deploy-pages In your case, you can skip step 2, as you don't need to build anything. This starter workflow does that.
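A minimal workflow covering those steps (minus the build) might look like the sketch below. The file name, branch name, and action versions are assumptions; the `path: .` assumes your HTML and CSS files live at the repository root.

```yaml
# .github/workflows/pages.yml — hypothetical sketch: deploy a plain
# HTML/CSS site to GitHub Pages with no build step.
name: Deploy static site to Pages

on:
  push:
    branches: [main]        # assumed default branch

permissions:
  contents: read
  pages: write              # required to deploy to Pages
  id-token: write           # required by actions/deploy-pages

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/checkout@v4
      - uses: actions/upload-pages-artifact@v3
        with:
          path: .           # upload the repo root as-is (no build output)
      - id: deployment
        uses: actions/deploy-pages@v4
```

For this to take effect, the repository's Pages settings must have "GitHub Actions" selected as the build and deployment source.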
I want to create my own website and have been working on the page using Bootstrap and my own CSS style sheet. Is there a way of just pushing my HTML and CSS files straight to GitHub-pages without using Jekyll? Here is a simple project structure of my files: The index.html is the home page and the other pages are accessed from the nav-bar. The CSS files are linked to each HTML file as well. Any information or links to tutorials is appreciated.
Is there a way to add my own HTML and CSS files into GitHub-Pages without having to use Jekyll layouts?
Try installing the app onto the device using iTunes instead of Xcode. First drag the app's ipa file and drop it onto iTunes; you should see the app appear in the apps list. Then just sync the device with iTunes. You can install the provisioning profile using iTunes in the same way. This is the way that I've sent beta versions of my apps to testers in other countries: send them both the ipa file and the provisioning profile. I'd recommend creating a separate ad-hoc provisioning profile with just the devices you need defined, instead of using the team provisioning profile.

When emailing a copy of the app, you should compress the ipa file into a zip file first. When the user unzips the file on a Mac, they'll get an ipa file; on a PC they'll get a folder of the same name as the ipa file. You can drag the ipa folder onto iTunes in the same way.

It's also a good idea to change the bundle display name to something different when installing apps this way. Otherwise you won't be able to distinguish between the beta version and the same app purchased from the App Store.
I have several people with my app on iPhones, iPod Touches, and iPads, that helped me with development. However, I just discovered that apps put onto the devices through XCode are not backed up by an iTunes sync, and so are not restored. How can I ensure that apps I put on devices this way get backed up, or restored? Is there a way of getting data files out of the bundle, and putting them back in later, in case they need to restore the app from scratch and then restore the data files?Thanks for your answers.
How to back up and restore apps on ios development devices?
As Benjamin mentioned, as of March 7th, 2023, there is no way to perform this action. Here is the community discussion tab about this, although it remains unresolved.
I've recently moved a repository from my account to an organization that I've created. I can't create any entries from my issues on the repository since. Is there a way to pass ownership of my Project to the Organization so I get back to creating entries on it for the stuff I want to do in the repository? I would rather not need to recreate my entries again, if possible. I've tried searching for this before asking here and I've only found this other, similar question, that also wasn't answered.
Is it possible to transfer a Project from an user to an organization?
As described here, to pass values into a subchart you need to define them under a section named after the dependent chart, like the following:

```yaml
prometheus-operator:
  # Change default node-exporter port
  prometheus-node-exporter:
    service:
      port: 30206
      targetPort: 30206
  prometheus:
    prometheusSpec:
      storageSpec:
        volumeClaimTemplate:
          spec:
            storageClassName: efs
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 5Gi
          selector: {}

fluentd-elasticsearch:
  elasticsearch:
    hosts: ["https://vpc-logs-abcd:443"]
```
Chart.yaml:

```yaml
dependencies:
  - name: prometheus-operator
    version: 8.16.1
    repository: https://kubernetes-charts.storage.googleapis.com/
  - name: fluentd-elasticsearch
    version: 9.4.2
    repository: https://kiwigrid.github.io
```

Custom-values.yaml:

```yaml
# Change default node-exporter port
prometheus-node-exporter:
  service:
    port: 30206
    targetPort: 30206
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: efs
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
        selector: {}
elasticsearch:
  hosts: ["https://vpc-logs-abcd:443"]
```

Running command:

```
helm install --namespace dependency test -f /root/custom-values.yaml /root/customchart
```

Error/Problem: custom-values.yaml is NOT applied to the chart! The chart installs with its default values.
how to apply custom-values.yaml on two subchart which is part for main chart