I think you are correct in assessing that the current Cloud Run for Anthos setup (unintentionally) does not let you see the origin IP address of the user. As you said, the gateway created for Istio/Knative in this case is a Cloud Network Load Balancer (TCP), and this LB doesn't preserve the client's IP address on a connection when the traffic is routed to Kubernetes Pods (due to how Kubernetes networking works with iptables etc.). That's why you see an x-forwarded-for header, but it contains internal hops (e.g. 10.x.x.x). I am following up with our team on this. It seems that it was not noticed before.
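For what it's worth, once the load balancer actually preserves the client address, the convention is that the left-most entry of X-Forwarded-For is the original client and later entries are proxy hops. A minimal sketch (the header value here is made up):

```python
def client_ip_from_xff(header_value):
    # X-Forwarded-For is "client, proxy1, proxy2, ..."; the left-most entry
    # is the original client, assuming trusted proxies appended the rest.
    return header_value.split(",")[0].strip()

print(client_ip_from_xff("203.0.113.7, 10.8.0.3, 10.8.1.9"))  # 203.0.113.7
```

This is only safe when the header is set by infrastructure you trust, since clients can send a forged X-Forwarded-For of their own.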
We use Google Cloud Run on our K8s cluster on GCP, which is powered by Knative and Anthos. However, it seems the load balancer doesn't amend the x-forwarded-for header (and this is not expected, as it is a TCP load balancer), and Istio doesn't do it either. Do you have the same issue, or is it limited to our deployment? I understand Istio supports this as part of their upcoming Gateway Network Topology, but not in the current GCP version.
Getting client ip using Knative and Anthos
In my circumstance, it was because kube-proxy (v1.1.4) was missing the --proxy-mode=iptables flag. Evidently in 1.1.4 the default is something other than iptables, and specifying that flag made the logs immediately stop spewing those messages.
I got errors in my kube-proxy:

E0107 21:48:57.738867 1 proxysocket.go:160] I/O error: read tcp 10.2.11.253:37568: connection reset by peer

How can I quickly trace which pod has IP 10.2.11.253? And how can I know which request that was, from which pod to which pod? Or can we change the kube-proxy log level to verbose or debug? I got another error, the same connection reset error, but the IP is a node's IP:

E0107 21:52:53.483363 1 proxysocket.go:160] I/O error: read tcp 192.168.166.180:11732: connection reset by peer

192.168.166.x is my kubernetes node subnet, but how can kube-proxy forward a request to a node IP? I'm using kubelet 1.0.1 and CoreOS v773.1.0 (docker 1.7.1, kernel 4.1.5) as my cluster nodes. Thanks for any help!
How to debug error in kube-proxy: Connection reset by peer
find /opt/files -name 'Backup*.zip' -a -mtime +14 -ls

Note that the backup files live directly in /opt/files and start with "Backup", so match on the name rather than treating "Backup" as a directory. If you are satisfied the files being matched are the ones to delete, replace -ls with -exec rm {} \;
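If you'd rather do the cleanup from a script, the same "delete zips older than 14 days" logic can be sketched in Python (path handling simplified; adjust the directory to match your setup):

```python
import time
from pathlib import Path

def delete_old_backups(directory, days=14):
    # Delete *.zip files in `directory` whose mtime is older than `days` days,
    # mirroring what find -mtime +14 -exec rm does.
    cutoff = time.time() - days * 24 * 60 * 60
    removed = []
    for zip_file in Path(directory).glob("*.zip"):
        if zip_file.stat().st_mtime < cutoff:
            zip_file.unlink()  # actually delete the file
            removed.append(zip_file.name)
    return sorted(removed)
```

Run it from the same daily cronjob, or as a second job right after the backup one.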
I am having the following directory with multiple "Backup from $(date +"%d.%m.%Y at %H:%M:%S").zip" files:

/opt/
/opt/files/
/opt/files/private/*
/opt/files/backup.sh
/opt/files/backup.txt
/opt/files/Backup from $(date +"%d.%m.%Y at %H:%M:%S").zip

With a daily cronjob

0 0 * * * cd /opt/files/ && ./backup.sh > /opt/files/backup.txt

I am currently managing my backups. As you can imagine, this directory gets bigger and bigger over time. I now would like to create another script (or cronjob, if it works with one command) to delete the oldest "Backup from $(date +"%d.%m.%Y at %H:%M:%S").zip" files after 14 days (so that I have the 14 most recent backups at all times). It would be great if you could explain your answer.
Command line to remove oldest backup
No official article mentions it, so I submitted a UserVoice suggestion here: Docker image cache on Hosted Linux agent, which you can vote for and follow.
We are building a docker image on VSTS using the VSTS Hosted Linux Preview agent. The microsoft/aspnetcore-build image is used to build an asp.net core application. Each time a build is triggered, the agent pulls the microsoft/aspnetcore-build image from the registry, and this takes some time. We would like to avoid it by using a specific image pre-cached on the agents. Is there a list of container images that have been cached on the Hosted Linux Preview agent? Such information is available for the Hosted VS2017 agent, but not for the Linux one.
Cached Docker images on Hosted Linux Preview agent
It's a tricky problem since var2=anything can really appear anywhere in the query string. This code should work for you:

Options +FollowSymLinks -MultiViews

# Turn mod_rewrite on
RewriteEngine On
RewriteBase /

RewriteCond %{QUERY_STRING} ^(.+?&|)var2=[^&]*(?:&(.*)|)$ [NC]
RewriteRule ^ %{REQUEST_URI}?%1%2 [R=301,L]
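If it helps to see what the rule is doing, here is the same "strip var2 wherever it sits" logic sketched in Python, with the separator handling the RewriteCond captures take care of:

```python
import re

def strip_param(query, name="var2"):
    # Remove "name=value" together with the separator that glued it to its
    # neighbours, whether it appears first, last, or in the middle.
    cleaned = re.sub(r'(^|&)' + re.escape(name) + r'=[^&]*', '', query)
    return cleaned.lstrip('&')

print(strip_param("var1=123&var2=456&var3=789"))  # var1=123&var3=789
```

The Apache rule does the equivalent in one pass by capturing everything before and after the var2 pair into %1 and %2.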
I would like to use mod_rewrite to remove a specific query parameter from a URL. Example:

1) User enters URL: http://localhost/intra/page.htm?var1=123&var2=456&var3=789
2) mod_rewrite removes "var2=456"
3) New URL: http://localhost/intra/page.htm?var1=123&var3=789

My problem is that I only know the parameter name (var2), not the value (456), and that I never know the order of the parameters. It might be placed at the beginning as well as at the end of the query string. I would appreciate any help, as I have spent a lot of time searching the web without finding any working solution.
Use mod_rewrite to remove parameter
Experiments at a previous employer showed that the standard Linux and Solaris malloc/free implementations were not particularly efficient in high-concurrency multicore environments. We realized significant performance improvements by creating a custom allocator. I think it is definitely worthwhile to do experiments with alternative allocators. If you are still working on this project, please post your findings! Note that this was for a web service written in C. I have no experience with nginx, Hoard, or Lockless.
I am doing some experiments to find the ceiling of my requests-per-second rate for haproxy and nginx on RHEL or CentOS. Part of my setup in nginx uses embedded Lua in the form of LuaJIT. My question is this: does anybody have any experience or advice about the usefulness of testing these apps after building with alternative heap allocators such as Hoard or Lockless? Any thoughts gratefully received. Dave.
Is it worth experimenting with different heap allocators on Linux for multi core servers for nginx or haproxy
According to SNS architecture and design:

If a subscription is APPROVED, then no matter whether there is a topic associated with it or not, the user will be able to delete the subscription.

If a subscription is PENDING, then no matter whether there is a topic associated with it or not, Amazon will delete the subscription automatically 3 days after creation.

See the FAQ for more info (scroll down to the question titled "How long will subscription requests remain pending, while waiting to be confirmed?").
I am trying to delete a subscription to an SNS topic (specifically an email address) that is unconfirmed, but the AWS console won't let me. It will let me delete subscriptions that are confirmed however. Any ideas?
How to delete an unconfirmed AWS SNS subscription
Somebody posted that this would have been fixed in a Zend Studio update in June 2013, but I didn't get it to work by installing the update. However, I got it to work by: 1) Import Git -> Projects from Git, 2) Import as General Project, 3) right-clicking in the PHP Explorer -> Configure -> Add PHP Support.
When I use Zend Studio to create a project from GitHub, I get the following error message: "Cannot retrieve branches, check if the provided repository location is valid". Can anyone explain how to solve this?
Cannot retrieve branches, check if the provided repository location is valid
From AWS Support (August 10, 2015):

Thank you for reaching out to AWS Support with your question about Lambda and UTF-8. We are presently researching this issue as other customers have brought this to our attention. There is no ETA on when this will be resolved or if this is something we can resolve.
Update Oct 12: The issue is fixed now. See this post in the AWS forum for details.

I wrote a nodejs function that simply responds with some Chinese characters, but it responds with the wrong characters.

exports.handler = function(event, context) { context.succeed('Hello 世界!'); };

The function result becomes: "Hello ������������!"

I came across this problem when I wrote a function to parse some Chinese websites and retrieve their page titles. I managed to convert them into utf-8 (I used needle for the request), and console.log(title) correctly displays those Chinese characters. But the result from context.succeed() shows up like the example above. What should I do to deal with these non-latin characters when responding with the result?
How to respond with non-latin characters in AWS Lambda?
You can try doing it this way:

$ docker run --rm -p 4444:4444 -p 5900:5900 \
    -v /tmp/chrome_profiles:/tmp/chrome_profiles \
    -e JAVA_OPTS selenium/standalone-chrome:latest

or

# To execute this docker-compose yml file use `docker-compose -f up`
# Add the `-d` flag at the end for detached execution
version: '2'
services:
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
      - /tmp/chrome:/tmp/chrome_profiles
    ports:
      - "5900:5900"
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  hub:
    image: selenium/hub:latest
    ports:
      - "4444:4444"

The profile path then needs to be passed through ChromeOptions; keep in mind that this is the path inside the container. Example code:

ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("-profile", "/tmp/chrome_profiles/.selenium");
WebDriver driver = new RemoteWebDriver(new URL("http://hub:4444/wd/hub"), chromeOptions);
I need to launch selenium inside a docker container. It's important to pass a browser profile to webdriver. Here's the docker-compose:

version: '2'
services:
  worker_main:
    build: ./app
    volumes:
      - /Users/username/Library/Application Support/Google/Chrome/Profile 1:/profile
    restart: always
    env_file:
      - config.env
    networks:
      - backend
    depends_on:
      - chrome
  chrome:
    image: selenium/standalone-chrome
    restart: always
    ports:
      - 4444:4444
    hostname: chrome
    networks:
      - backend
networks:
  backend:

Here's the driver code:

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=/profile")
driver = webdriver.Remote("http://chrome:4444/wd/hub", options=options)

As a result I catch this error:

selenium.common.exceptions.WebDriverException: Message: unknown error: cannot create default profile directory
What's the right way to pass browser profile to selenium inside docker container?
If you have a look at the cron.service unit for the Debian series, you will see:

[Unit]
Description=Regular background program processing daemon
Documentation=man:cron(8)
After=remote-fs.target nss-user-lookup.target

[Service]
EnvironmentFile=-/etc/default/cron
ExecStart=/usr/sbin/cron -f $EXTRA_OPTS
IgnoreSIGPIPE=false
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target

From ExecStart=/usr/sbin/cron -f $EXTRA_OPTS, I guess that unlike on alpine, the main program on such Debian-series linux is cron, not crond. (PS: python:3.9.12-bullseye is based on debian, while python:3.6.12-alpine is based on alpine.)
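Based on that, a sketch of the question's Dockerfile with the one change that should matter on a bullseye base (untested, everything else kept as in the question):

```dockerfile
FROM python:3.9.12-bullseye
COPY . .
RUN apt-get update -y
RUN apt-get install cron -y
RUN crontab crontab
# Debian ships the daemon binary as "cron"; "crond" is the Alpine/busybox name
CMD python task.py && cron -f
```

On the alpine image the original `crond -f` form would keep working, which matches the behaviour described in the question.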
I have a dockerfile

FROM python:3.9.12-bullseye
COPY . .
RUN apt-get update -y
RUN apt-get install cron -y
RUN crontab crontab
CMD python task.py && crond -f

And a crontab

* * * * * python /task.py

I keep running into the error /bin/sh: 1: crond: not found when I run the docker file. Docker build is fine. Anyone know why this happens? If I use python:3.6.12-alpine everything works fine, but with python:3.9.12-bullseye I keep getting that error.
/bin/sh: 1: crond: not found when cron already installed
Disclaimer: both solutions are for educational purposes and I would not recommend using them in any real program. If you need to solve homework with strict requirements, then that may be ok.

First:

istream& operator>>(istream& is, Employee& e)
{
    Employee tmp;
    tmp.name = new char[1024];
    is >> tmp.num >> tmp.rate >> tmp.name;
    e = tmp;
    return is;
}

Second, a more ugly and more "effective" solution:

istream& operator>>(istream& is, Employee& e)
{
    char buffer[1024];
    Employee tmp;
    tmp.name = buffer;
    is >> tmp.num >> tmp.rate >> tmp.name;
    e = tmp;
    tmp.name = 0;
    return is;
}

Again, both solutions were created under the condition "use the existing assignment operator"; real code should be different.

Note: if (name != NULL) delete [] name; is redundant, since deleting a null pointer is a no-op; write delete [] name; instead.
Hello, I am confused with my istream& operator>>. I have to overload this operator to take input for a class that is using dynamic memory allocation for a C string. My Employee.h file is:

#include <iostream>
using namespace std;

const double MIN_WAGE = 10.25;

class Employee
{
    int num;
    char * name;
    double rate;
public:
    Employee();
    Employee(const Employee&);
    Employee operator=(const Employee&);
    friend istream& operator>>(istream& is, Employee& employee);
    friend ostream& operator<<(ostream& is, const Employee& employee);
    friend bool operator>(const Employee& a, const Employee& b);
    ~Employee();
};

I have a copy constructor which calls the assignment operator:

Employee::Employee(const Employee & e)
{
    name = NULL;
    *this = e;
}

Employee Employee::operator=(const Employee & e)
{
    if (this != e)
    {
        num = e.num;
        rate = e.rate;
        if (name != NULL)
            delete [] name;
        if (e.name != NULL)
        {
            name = new char[strlen(e.name) + 1];
            strcpy(name, e.name);
        }
        else
            name = NULL;
    }
    return *this;
}

And in the assignment operator I have dynamically assigned memory for the length of the C string I am using. My istream function so far:

istream& operator>>(istream& is, Employee & e)
{
    int n;
    double r;
}

My question is: how do I use the new dynamic memory allocation in my assignment operator in my istream function?
Overloading istream operator with dynamic memory allocation
Pass in parameter values in a mapping template:

{
  "startStop":"$input.params('startStop')",
  "vertical":"$input.params('vertical')"
}

Read parameter values via the event object:

startStop = event['startStop']
vertical = event['vertical']
I have written an AWS Lambda function in Python that filters through instances and turns them on or off depending on how they are tagged. This will show you a working function and the set-up needed to get it working. If you have questions on anything, post them in the comments. Here is my Lambda function as of now:

import boto3

def lambda_handler(event, context):
    startStop = event['startStop']
    vertical = event['vertical']
    isRunning = ''

    if(startStop == 'start'):
        isRunning = 'stopped'
    elif (startStop == 'stop'):
        isRunning = 'running'

    ec2 = boto3.resource('ec2')
    filters = [
        {
            'Name': 'instance-state-name',
            'Values': [isRunning]
        },
        {
            'Name': 'tag:Vertical',
            'Values': [vertical]
        }
    ]

    instances = ec2.instances.filter(Filters=filters)
    runningInstances = [instance.id for instance in instances]

    if len(runningInstances) > 0:
        if(startStop == 'start'):
            shuttingDown = ec2.instances.filter(InstanceIds=runningInstances).start()
        elif (startStop == 'stop'):
            shuttingDown = ec2.instances.filter(InstanceIds=runningInstances).stop()

For reference, here is my mapping template:

{
  "startStop":"$input.params('startStop')",
  "vertical":"$input.params('vertical')"
}

And this is how I am passing in the parameters within the URL:

https://awslambdaapiurl.../prod/-params?startStop=start&vertical=TEST
How to pass parameters to an AWS Lambda Function using Python
You can use the following to load an image (note that the chain needs a load() call for the image source):

Glide.with(context)
    .load(imageUrl)
    .signature(new StringSignature(yourVersionMetadata))
    .into(imageView)

Just change yourVersionMetadata when you load the image and it will not load from the cache if yourVersionMetadata is different.
I'm writing an app which needs to load a lot of images from the internet (a manga reader). I need to cache some thumbnail images for offline use; any others should be cleared when the app closes. I read a bit about cache invalidation on the Glide page; they say the best way is to change the content URL, but how does Glide know whether it is a modified URL of old content or a new one? I'm new to Glide. https://github.com/bumptech/glide/wiki/Caching-and-Cache-Invalidation Thanks in advance :)
How to invalidate Glide cache for some specific images
As described in the GitHub help page for forks, the best policy here is to:

- define a remote called upstream pointing to the original repo (the one where the author accepted your pull request)
- pull from that upstream repo

Pull in upstream changes: if the original repo you forked your project from gets updated, you can add those updates to your fork by running the following:

$ git fetch upstream
$ git merge upstream/master

Or you could, after the fetch, reset your master branch to upstream/master, in order to have the exact same history. So, when you fork a repo and clone that forked repo to your workstation:

- remote 'origin' refers to your fork
- remote 'upstream' refers to the original repo that you forked; you need to explicitly add that remote reference to your repo.
Not sure if this is the place to ask questions about GitHub. I have forked a public repo and added two commits to it, then sent a pull request to the original author. The author has complied with the request, and now I wish to fast track my own repo to the HEAD of the author's repo. All of my new commits are in the author's repo now, so there aren't any side-tracked commits (what's the proper name for this, by the way? I thought it was a fork, but that sounded weird considering how GitHub refers to forks). Thanks!
How to fast track branch after pull request in Github
The file travis-ci?per_page=100.json is not a valid filename on Windows ('?' is a reserved character there). You can see that there are actual files named like this in the repo, e.g. repos?per_page=9999.json. You can maybe clone this repo under cygwin (such a filename would be valid in a cygwin shell), remove the offending files, manually or by filtering the branch with git filter-branch, and then proceed to put your fork back on GitHub.
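As a quick way to spot such files before pushing to a repo that Windows users will clone, here is a small sketch of a filename check (the reserved-name list is the usual Windows one; this checks bare filenames, not full paths):

```python
import re

# Windows forbids the characters <>:"/\|?* in filenames, plus reserved
# device names such as CON, NUL, COM1..COM9, LPT1..LPT9.
FORBIDDEN_CHARS = re.compile(r'[<>:"/\\|?*]')
RESERVED_NAMES = {"CON", "PRN", "AUX", "NUL"} \
    | {f"COM{i}" for i in range(1, 10)} \
    | {f"LPT{i}" for i in range(1, 10)}

def is_windows_safe(filename):
    if FORBIDDEN_CHARS.search(filename):
        return False
    return filename.split(".")[0].upper() not in RESERVED_NAMES

print(is_windows_safe("travis-ci?per_page=100.json"))  # False
print(is_windows_safe("repos.json"))                   # True
```

Running something like this over `git ls-files` output would flag the offending fixtures before a Windows checkout fails.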
I am really confused with this. I am a avid github user and never have had a problem before. However, when checking out a fork I just made of the repo travis-ci/travis-core, whether using https or ssh, I run into this bug after tortisegit finished downloading the git repo but before checking it out for the first time. Anything that could cause this? Thanks for the help! remote: Counting objects: 29130, done. remote: Compressing objects: 100% (16427/16427), done. Receiving objects: 100% (29130/29130), 8.37 MiB | 265.00 KiB/s, done. Resolving deltas: 100% (13171/13171), done. remote: Total 29130 (delta 13171), reused 27543 (delta 11662) error: unable to create file spec/fixtures/github/api.github.com/orgs/travis-ci?per_page=100.json (Invalid argument) error: unable to create file spec/fixtures/github/api.github.com/users/svenfuchs?per_page=100.json (Invalid argument) fatal: unable to checkout working tree warning: Clone succeeded, but checkout failed. You can inspect what was checked out with 'git status' and retry the checkout with 'git checkout -f HEAD'
Git can't checkout a repo from github
Use a named location and an internal rewrite. For example:

location / {
    try_files $uri $uri/ @rewrite;
}
location @rewrite {
    rewrite ^/(.*)$ /index.php?url=$1 last;
}

See this document for more.
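The reason this drops the leading slash is that the slash sits outside the capture group, so $1 never contains it. The same capture, sketched in Python for clarity:

```python
import re

def rewrite(uri):
    # Mirrors: rewrite ^/(.*)$ /index.php?url=$1
    # The leading "/" is matched outside the parentheses, so the capture
    # group (here \1) starts after it.
    return re.sub(r'^/(.*)$', r'/index.php?url=\1', uri)

print(rewrite("/login"))  # /index.php?url=login
```

With the question's original try_files fallback, $uri itself (including the slash) was passed straight through, which is why url arrived as /login.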
My Nginx conf file:

location / {
    try_files $uri $uri/ /index.php?url=$uri;
}

## PHP conf in case it's relevant
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include /etc/nginx/fastcgi.conf;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

Trying the following URL: http://example.org/login

expected behavior: http://example.org/index.php?url=login
actual behavior: http://example.org/index.php?url=/login
Nginx conf how to remove leading slash from $uri
Crontab needs the full path on your server. 0 0 * * * php /var/www/vhosts/domain.com/httpdocs/scripts/example.php This will execute every day at midnight.
I would like to run a PHP script every day at midnight. After researching how to do this, it appears that the best way is to use a CRON job. If my php script is located at http://example.com/scripts/scriptExample.php, can somebody show the most simple example of what this CRON command would look like? I have looked through numerous posts but I cannot find a simple enough example for me to learn and build upon.
Executing a PHP script with a CRON Job [closed]
So, after a few days working on it, I was finally able to solve it :) Here is the code that worked for me:

exports.handler = async (event, context, callback) => {
    // Get Secret
    var AWS = require('aws-sdk');
    var MyPromise = new AWS.SecretsManager();
    var Vsecret = await MyPromise.getSecretValue({
        SecretId: 'enter-the-secret-id-here'
    }).promise();
    var MyOpenSecret = JSON.parse(Vsecret.SecretString);

    // From here, we can use the secret:
    var Vhost = MyOpenSecret.host;
    var Vuser = MyOpenSecret.username;
    var Vpassword = MyOpenSecret.password;
    var Vdatabase = .....
Can anyone provide a simple, complete node.js lambda function where I can get a secret from Secrets Manager and use it? I am struggling with the async/await process. I have already tried several suggestions from other posts, but in all of them, at the end, I can't really use the secret in the main function. For example, I have a main function and call a second function to retrieve the secret: xxx = retrieve_secret('mysecret'); Then, in the retrieve_secret function I am able to retrieve the secret, and I can print it using console.log, but when I try to use it in the main function, it says "Promise ". Please help. Thanks in advance!
Get secrets in AWS lambda node.js
IIS cache settings have no effect on service worker caching. Remember, the server code and the client code are completely decoupled. What you are setting in IIS is the Cache-Control header value. This value is used by the browser cache, not the service worker cache. You are 100% in control of what gets cached, and for how long, in the service worker cache.
Specifically the Cache-Control property:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Cache-Control" value="no-cache" />
      </customHeaders>
    </httpProtocol>
    <staticContent>
      <remove fileExtension=".json" />
      <mimeMap fileExtension=".json" mimeType="application/json; charset=utf-8"/>
    </staticContent>
  </system.webServer>
</configuration>

I'm developing locally with a Node server and everything works fine, but on our deployment server the app runs in an IIS instance and the ServiceWorker isn't caching the requested assets. It's not throwing errors either, so I'm wondering if it's just this "no-cache" declaration getting in the way. I'm super new to ServiceWorkers and not at all a devops guy. Not hunting for the exact solution, just trying to narrow down the diagnosis so I have a clearer idea what to ask my back-end developer. Thank you!
Do the web.config settings for IIS interfere with a ServiceWorker caching?
I found a way to achieve this by building my own docker image which uses the --model_config_file option instead of --model_name and --model_base_path. So I'm running tensorflow serving with the command below:

docker run -p 8501:8501 -v {local_path_of_models.conf}:/models -t {docker_image_name}

Of course, I wrote models.conf for multiple models as well.

Edit: below is what I modified from the original docker file.

Original version:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \

Modified version:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_config_file=${MODEL_BASE_PATH}/models.conf \
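For reference, a sketch of what such a models.conf can look like (model names and paths here are illustrative). The model_version_policy block is what makes the server keep every version loadable instead of only the latest, which addresses the multiple-versions part of the question:

```protobuf
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "tensorflow"
    # Serve all versions found under base_path, not just the newest
    model_version_policy { all {} }
  }
  config {
    name: "other_model"
    base_path: "/models/other_model"
    model_platform: "tensorflow"
  }
}
```

A specific version can then be targeted in REST requests via /v1/models/my_model/versions/N.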
I'm new to Tensorflow serving, I just tried Tensorflow serving via docker with this tutorial and succeeded. However, when I tried it with multiple versions, it serves only the latest version. Is it possible to do that? Or do I need to try something different?
How to serve multiple versions of model via standard tensorflow serving docker image?
I am quoting the original question here:

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080

"But is it fine to use this in production? I would like to use an Infrastructure-as-Code approach."

Your first approach is a list of commands that must be executed in a certain order. In addition, these commands are not idempotent, so you cannot run them multiple times.

Also from the original question:

"Is there a way to avoid working with kubernetes template files directly and still follow best practices? E.g. generating Yaml files from docker-compose files or similar? What is the expected kubectl usage in production? Just kubectl apply -f <folder>?"

The second approach is declarative: you only describe what you want, and the command is idempotent, so it can be run many times without problems. Your desired state is written in text files, so any change can be managed with a version control system, e.g. Git, and the process can be done with validation in a CI/CD pipeline. For production environments, it is best practice to use a version control system like git for what your cluster contains. This makes it easy to recover or recreate your system.
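For comparison, a minimal sketch of what the declarative equivalent of those two kubectl commands might look like as manifests (field values taken from the question's commands, everything else is the standard boilerplate), applied with kubectl apply -f:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-app
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server
spec:
  type: LoadBalancer
  selector:
    app: hello-server
  ports:
    - port: 80
      targetPort: 8080
```

Applying this file twice is harmless, which is exactly the idempotency the imperative commands lack.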
I am trying to find the simplest method to use kubernetes in production. YAML templates look like overhead to me. E.g. all I want is to expose a simple backend service. I can do it with kubectl with 2 lean commands:

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080

But is it fine to use this in production? I would like to use an Infrastructure-as-Code approach. Is there a way to avoid working with kubernetes template files directly and still follow best practices? E.g. generating Yaml files from docker-compose files or similar? What is the expected kubectl usage in production? Just kubectl apply -f <folder>, while it is the developers' job to maintain template files in <folder>? Is there Declarative Management with kubectl without writing kubernetes templates myself? E.g. some files that contain the minimal info needed for the templates to be generated. I really want to use Kubernetes, so please advise the simplest way to do this!
kubectl instead of yaml files in production?
I'm not sure if this helps. I ran into this same problem recently, and it seems like AWS made some changes to how we define our CORS configurations. For example, if you wanted to allow certain methods on your S3 bucket, in the past you had to do something like this in the editor:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

The config below is equivalent to the one on top but takes the form of a JSON array:

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "HEAD", "DELETE"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]

Let me know if this helps. Thank you!
I needed to change my AWS S3 bucket CORS policy to enable the upload of files for my ReactJS to AWS S3, but I keep getting this API response: Expected params.CORSConfiguration.CORSRules to be an Array. I am at a loss right now. Can anyone help?
Unable to update AWS S3 CORS POLICY
Templates use the same interpolation syntax as all other strings in Terraform; documentation is available. So in your case it will look like this:

path = "${is_enabled ? "/one/path/" : "/another/path"}"
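Alternatively, you can resolve the conditional in the template_file vars block so the template only ever sees the final value. A sketch based on the question's config (0.11-era syntax; quoting details may need adjusting for your Terraform version):

```hcl
data "template_file" "config" {
  template = "${file("${path.module}/templates/${var.json_config}")}"

  vars {
    is_enabled = "${var.is_enabled}"

    # Resolve the conditional here; the template just interpolates ${path}
    path = "${var.is_enabled == "true" ? "/one/path/" : "/another/path"}"
  }
}
```

This keeps the template file itself free of logic, which tends to be easier to maintain.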
I have a template in my terraform config to which I pass the values of a variable like this:

data "template_file" "config" {
  template = "${file("${path.module}/templates/${var.json_config}")}"

  vars {
    is_enabled = "${var.is_enabled}"
  }
}

Now is_enabled is a boolean string which is either set to true or false. Based on whether this is true or false, I want to set another variable. In pseudocode it would look like this:

if is_enabled == true
  path = /one/path/
else
  path = /another/path

I had a look at conditional values, but that seems to be for bringing up resources. How would I use this to set a variable in a template file?
Terraform Conditional Variables
Although this question is 2 years old, there are two ways to do static analysis of a Dockerfile:

1. using FromLatest
2. using Hadolint

Option 2 is mostly preferable, since it can be used as an automated process inside CI/CD pipelines. Hadolint also provides ways to exclude messages/errors using a ".hadolint.yml" file.
I was wondering if there is any tool support for analyzing the content of Dockerfiles. Syntax checks of course, but also highlighting references to older packages that need to be updated. I'm using SonarQube for static code analysis of other code, but if it does not support this (I could not find any information that it does), is there any other tool that does?
Static code analysis of Dockerfiles?
You cannot clone another repository using the secrets.GITHUB_TOKEN. That token is only scoped to the repository running the workflow. If you wish to clone another repository, you will need set a repository secret with a PAT that has the permissions to perform the clone. https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret When you enable GitHub Actions, GitHub installs a GitHub App on your repository. The GITHUB_TOKEN secret is a GitHub App installation access token. You can use the installation access token to authenticate on behalf of the GitHub App installed on your repository. The token's permissions are limited to the repository that contains your workflow. For more information, see "Permissions for the GITHUB_TOKEN."
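So the fix is to create a PAT with access to repo-b, store it as a repository secret on repository A, and reference that secret instead. A sketch of the adjusted step (the secret name REPO_B_PAT is just an example):

```yaml
- name: Checkout repo-b
  uses: actions/checkout@v2
  with:
    repository: myorg/repo-b
    fetch-depth: 1
    ref: master
    # A PAT with repo scope, saved under Settings > Secrets on repository A
    token: ${{ secrets.REPO_B_PAT }}
```

The permissions block in the workflow cannot help here, since it only widens what GITHUB_TOKEN may do within repository A itself.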
Following is the step in the GitHub workflow of repository A:

- name: Checkout repo-b
  uses: actions/checkout@v2
  with:
    repository: myorg/repo-b
    fetch-depth: 1
    ref: master
    token: ${{ secrets.GITHUB_TOKEN }}

The github action throws the following error:

...
Fetching the repository
/usr/bin/git -c protocol.version=2 fetch --no-tags --prune --progress --no-recurse-submodules --depth=1 origin +refs/heads/master*:refs/remotes/origin/master* +refs/tags/master*:refs/tags/master*
remote: Repository not found.
Error: fatal: repository 'https://github.com/myorg/repo-b/' not found
...

I have specified the following permissions in the workflow job containing this step:

permissions:
  contents: write
  packages: write

Do I need to enable any repository settings on these repos? Using the same GITHUB_TOKEN it is able to access github's npm/docker registry in other steps.
How do I add a step in my repository A's github workflow to checkout repository B, which is in the same org as repository A, using GITHUB_TOKEN?
Python code cannot be run without the required libraries, but you can tell people how to install them. For example, you can run pip freeze > requirements.txt to write all the dependencies to a file. When people want to install the dependencies, they can run pip install -r requirements.txt. Inside a repository's README.md, you can list the required dependencies. Dependencies aren't usually bundled because of space concerns. Likewise, if you have a bash file, people need bash to run it. If you want to bundle your project so it includes the language and all the dependencies, you can use a program such as the ones listed in this question
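On top of requirements.txt, a small script can report which listed packages are missing from the current environment. A stdlib-only sketch (the parsing below is deliberately naive and ignores extras and environment markers):

```python
# Report which packages from a requirements.txt are not installed.
# Naive parsing: keep everything before the first version/extras marker.
import re
from importlib import metadata

def missing_packages(requirements_lines):
    missing = []
    for line in requirements_lines:
        name = re.split(r"[=<>!~;\[ ]", line.strip(), maxsplit=1)[0]
        if not name or name.startswith("#"):
            continue  # skip blank lines and comments
        try:
            metadata.version(name)  # raises if the distribution is absent
        except metadata.PackageNotFoundError:
            missing.append(name)
    return missing
```

People cloning the repo could run something like this before pip install -r requirements.txt to see what is missing.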
If I have a python library I use in my GitHub project, will someone without that python package still be able to run my code after cloning my project? If not, can a python library be attached to a repo? Also, if I have a bash file, will it still run for people without bash? Lastly, how do you attach whole languages like Python, C#, or any language inside of your repo so everyone can use your project?
How do you attach necessary files to GitHub repo?
I had the same issue and the comment from Ed Harper resolved it: "SSMS uses a comma rather than a colon to delimit between server name and port number. Try localhost,1433." The server name field in SSMS requires the format localhost,[dockerport]. So in my case I needed: localhost,32768
I'm quite new to Linux OSs so I hope this isn't a stupid question!! Software: Windows 10 Pro, Docker for Windows (1.13.0-beta38 (9805)), SQL Server Management Studio v17.0 RC1. Issue: I'm trying to connect to my SQL Server Linux container using SSMS. It hasn't worked, so looking over the documentation it seems you need SQL Server Tools installed (bottom of page) on top of the SQL Server Linux image. I followed these instructions to install SQL Server Tools on Ubuntu (base image of the SQL Server Linux image). Error: bash: curl: command not found Steps to reproduce the error: Pull the latest microsoft/sql-server-linux image. Run according to instructions: docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d microsoft/mssql-server-linux Attach to the container using: docker exec -it <container_id> /bin/bash Attempt to import the public repository GPG keys: curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add - Things I've tried: running apt-get install curl, but all I get is E: Unable to locate package curl. Googling - to no end. First edit - Connecting via SSMS: I've tried connecting to the container from SSMS using the following server names (I am using SQL authentication with the credentials specified during the docker run command): localhost:1433, localhost\[container_name], [container_ip_address]:1433 Solution (kind of): OK so I got this working, but I'm not 100% sure what did the trick. I used localhost as the server name (defaults to port 1433). I also mounted a volume to the container as part of my docker-compose.yml file: volumes: - C:\local\volume\path:/var/opt/mssql
Unable to connect to SQL Server Linux Docker container via SQL Server Management Studio
A gist operates like any other repository. So let's say you've cloned something like git://gist.github.com/2322786.git: $ git clone [email protected]:2322786.git (If you just wanted to try this without pushing, you can use git://gist.github.com/2322786.git, which will demonstrate the merge principle and works anonymously, but does not allow you to push.) And now you want to merge in changes from git://gist.github.com/2661995.git. Add it as an additional remote: $ git remote add changes git://gist.github.com/2661995.git $ git fetch changes And then merge in the changes like this: $ git merge changes/master And you should be all set. This should work regardless of whether the new gist was forked from yours at some previous point or is completely unrelated. Taking Romain's comment into account, you would then issue a push: $ git push This would only work if your original clone URL allows writing.
I have a gist on GitHub that someone forked and made changes to. I like their changes. Is there a way to merge the changes back into my original gist?
How to merge a gist on GitHub?
I would recommend looking through this post on caching in Rails; it's a tremendously thorough post that goes through various strategies that may provide the outcome you're looking for in this situation. Though he doesn't mention it in the post, adding some sort of cache-busting parameter (like a cache version id) to the arguments list might give you a way to expire the cache in a more universal way. For example: cache(:version => Posts.cache_version, :direction => params[:direction], :sort => params[:sort], :page => params[:page]) do # Later on, to bust the cache Posts.cache_version = 2 The details of how you would implement cache_version and cache_version= could vary depending on how you're handling other data in the application. There may also be a more elegant solution than this, but it's what came to mind.
I have a page where on the where the index action shows a list of Posts, with custom sort columns, pagination, etc. Although I can cache every individual page / sort option with cache(:direction => params[:direction], :sort => params[:sort], :page => params[:page]) do I can't expire all of these at once using a single call to expire_action (which is a problem). I know expire_action has a regex option, but that is messy (using a regex to search for keys created with a hash), and I am using memcached which will not work. How can I expire all the cache members of an action with a single call to expire_action? If this is not possible, are there any other caching options you could recommend?
Rails 3.1 wildcard expire cache for action with query string
You can run a shell in the image with: docker run -t -i --entrypoint bash paintedfox/nginx-php5 Then change the configuration files as you like. Note the container ID (it appears in the prompt, e.g. root@9ffa2bafe2bb:/#), then commit it to a new image: docker commit 9ffa2bafe2bb my-new-nginx You can then run the new image (my-new-nginx).
I installed a docker image from the registry by doing: docker pull paintedfox/nginx-php5 Now I wish to make some changes to this nginx's config files to add some domains. I believe the config files are somehow held inside the docker image, but where is the image? How can I change these config files?
After pulling a Docker from the repository, how to change the images files?
You can open Developer Tools by pressing Ctrl+Shift+J, and you'll find a cog icon in the bottom right. When you click on it you should see an option to disable caching. (In newer versions of Chrome the settings are behind the "3 dots" icon at the top right of DevTools.)
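Disabling the cache in DevTools works per-browser; a complement is to have the development server itself send no-cache headers so no browser caches the files. A minimal stdlib-only Python sketch (assumes serving the current directory is acceptable for local testing):

```python
# Minimal dev server that tells browsers not to cache anything it serves
# (HTML, CSS, ...), by adding a Cache-Control header to every response.
import http.server
import threading
import urllib.request

class NoCacheHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Runs for every response, just before the headers are finalized.
        self.send_header("Cache-Control", "no-store, no-cache, must-revalidate")
        super().end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def demo_no_cache_header():
    # Start the server on an ephemeral port, fetch one response,
    # and return the Cache-Control header it carried.
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), NoCacheHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    try:
        url = "http://127.0.0.1:%d/" % srv.server_address[1]
        with urllib.request.urlopen(url) as resp:
            return resp.headers["Cache-Control"]
    finally:
        srv.shutdown()
```

For day-to-day use you would run it on a fixed port, e.g. http.server.ThreadingHTTPServer(("127.0.0.1", 8000), NoCacheHandler).serve_forever(), and point the browser there; with no-store the browser re-fetches the stylesheet on every reload.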
When I make a page, link it to a CSS file, and open it in a browser, it works fine. But if I make a change and refresh the page again within a very short time, the change is not reflected. After some time, when I refresh the page again, the changes appear. So somehow the browser keeps the CSS file cached and expires it after some time. How can I make the browser cache no CSS or HTML files? It would be better if I could block it on a particular domain. I'm on Ubuntu, using Chrome and Firefox, trying to prevent browsers from caching CSS files on 'localhost'. How to do it? Thanks...
How to Prevent Browsers from Caching CSS Files?
For posterity: backing up Rocket.chat on SERVER 1 and restoring it on SERVER 2, based on the official docker image. SERVER 1 cd /backups docker run -it --rm --link db -v /backups:/backups mongo:3.0 mongodump -h db -o /backups/mongoBACKUP tar czf mongoBACKUP.tar.gz mongoBACKUP/ Then send mongoBACKUP.tar.gz to SERVER 2 in /backups. SERVER 2 (+ test on :3000) docker run --name db -d mongo:3.0 --smallfiles cd /backups tar xzf mongoBACKUP.tar.gz docker run -it --rm --name mongorestore -v /backups/mongoBACKUP:/var/dump --link db:db mongo mongorestore --host db /var/dump docker run -p 3000:3000 --name rocket --env ROOT_URL=http://yourwebsite.test --expose 3000 --link db -d rocket.chat
I use this docker image : https://hub.docker.com/_/rocket.chat/ So here is the code i used : docker run --name db -d mongo:3.0 --smallfiles docker run --name rocketchat --link db -d rocket.chat I tried several things, but I can't find a way to have a clean backup/restore system. Any advice ?
Backup and restore Rocket.chat on docker with mongodb
QWizard::addPage internally calls setPage, which calls page->setParent(...) as one of the first things done. So yes, the wizard does take ownership of the pages, and they will be subject to normal QObject lifetimes. Deleting the wizard will delete all of the pages.
If I have QWizard, and I instantiate this without specifying parent, will it delete its pages when it goes out of scope or will they leak? { WelcomeWizard wiz; wiz.addPage(new QWizardPage); } I think QWizard will delete them however I would really appreciate any more detailed explanation.
Will QWizard delete QWizardPage or will it leak?
When you pushed the refs/for/master reference to the remote, you created a new namespace for references and gave it the name for. Long story short, namespaces allow creating a default subset of references for each user to operate on, and avoid name conflicts for refs used by different groups of repository users. Users can set their own configuration in the remote.remote_name.fetch and remote.remote_name.push configuration values so that the most common operations with branches and tags (pull, push, fetch, etc.) use their preferred namespace instead of the default one defined by the repository owner. Maybe it is easier to understand if you consider that git creates its own hardcoded namespaces when you create a repository with default configuration: one for branches (heads) and one for tags (tags), so that you do not need to prefix your branch and tag names in every operation with branches or tags.
If I understand correctly, refs/for/ is a special namespace that's used in Gerrit for uploading changes. However, out of habit, instead of git push origin master, I've just done git push origin HEAD:refs/for/master on a non-Gerrit repo, which apparently worked: $ git push origin HEAD:refs/for/master Enumerating objects: [...] [...] To github.com:fstanis/myrepo.git * [new branch] HEAD -> refs/for/master This apparently created a new branch on origin, but this branch isn't listed when I try git branch -r and isn't shown in GitHub's UI. What exactly happened here? Where do commits pushed to refs/for/master "go" when not using Gerrit?
What is refs/for/master when not using Gerrit?
My understanding is that every distinct labelset of a metric is stored as a separate time series. Prometheus will not create the labelsets {endpoint="/", user_id="1"} if they aren't exposed, just as it would not create the labelset {endpoint="/foo/"}. So your second estimate of (99*1 + 1*10,000) is the correct one. On the other hand, what you are doing feels more like it belongs in logs (or traces), not metrics. Especially since the number of users is usually not that stable, and sudden growth of that number might occur at any time. Please refer to this answer by @brian-brazil (author of robustperception.io).
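The two estimates in the question can be written out directly; a quick sketch of the arithmetic, with the numbers taken from the question:

```python
# Cardinality is driven by the label combinations that actually appear in
# scrapes, not by the cross product of all possible label values.
endpoints = 100
users = 10_000

# Worst case, if every endpoint carried every user_id:
potential_series = endpoints * users

# Actual case from the question: user_id only varies on /users/,
# and each of the other 99 endpoints exposes a single series.
actual_series = (endpoints - 1) * 1 + 1 * users
```

So memory usage tracks the roughly ten thousand series that actually exist, not the million-series cross product.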
I have a Prometheus metric request_duration with a label "endpoint". A service is running, and being scraped, and is reporting metrics to prometheus for 100 different endpoints that are all being hit, e.g. {endpoint="/users/"} and 99 other endpoints. A new label is added, "user_id" (and there are 10,000 users), but "user_id" is only set on the /users/ endpoint and not when the endpoint label is set to anything else. Assume metrics are being reported to prometheus for all possible "user_id" and "endpoint" label values (but "user_id" will only vary for a single endpoint, /users/, and be unset for all other endpoints). Is there potential for a "cardinality explosion" here that would cause memory issues if more high-cardinality labels were added? Is a cardinality explosion in Prometheus based on the potential number of label combinations (100 * 10,000) or the actual number of label combinations (99*1 + 1*10,000)?
Does cardinality explode in Prometheus if two high cardinality metrics never vary together?
This is the expected behaviour, because the new StatefulSet will create a new set of PVs and start over (if there is no other choice it can randomly land on old PVs as well, for example with local volumes). StatefulSet doesn't mean that kubernetes will remember what you were doing in some old StatefulSet that you have deleted. StatefulSet means that if a pod is restarted or re-created for some reason, the same volume will be assigned to it. It doesn't mean that the volume will be assigned across StatefulSets.
I made Kafka and zookeeper StatefulSets and exposed Kafka outside of the cluster. However, whenever I delete the Kafka StatefulSet and re-create one, the data seems to be gone (when I try to consume all the messages using kafkacat, the old messages seem to be gone), even though it is using the same PVC and PV. I am currently using EBS as my persistent volume. Can someone explain to me what happens to the PV when I delete the StatefulSet? Please help me.
What happens to persistent volume if the StatefulSet got deleted and re-created?
You need to provide more information about your environment (OS, Docker installation, etc.), but basically, if you start your Redis container like this: docker run --name=redis-devel --publish=6379:6379 --hostname=redis --restart=on-failure --detach redis:latest it should expose the port no matter what. The only reason you might not be able to connect to it is if you've messed up your bridge interface (if you're on Linux), or you're using a docker machine with its own network interface and IP address and you're not connecting to that IP address. If you're using Docker for Mac, then that only supports routing to the localhost address, since bridging on Mac hosts doesn't work yet. Anyway, on MacOS with Docker for Mac (not the old Docker Toolbox), the following should be enough to get you started: ➜ ~ docker run --name=redis-devel --publish=6379:6379 --hostname=redis --restart=on-failure --detach redis:latest 6bfc6250cc505f82b56a405c44791f193ec5b53469f1625b289ef8a5d7d3b61e ➜ ~ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6bfc6250cc50 redis:latest "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:6379->6379/tcp redis-devel ➜ ~ redis-cli ping PONG ➜ ~
I just pulled the redis docker image: $ docker pull redis After which I ran it like this: $ docker run --name=redis --detach=true --publish=6379:6379 redis I get the following from $ docker ps: key redis "/sbin/entrypoint.sh" 22 minutes ago Up 22 minutes 0.0.0.0:6379->6379/tcp redis To me the above means that it is now running and listening on port 6379 on localhost, 127.0.0.1 or 0.0.0.0. But to my great surprise, when I try to connect it responds with connection refused. Please can someone throw some light.
Redis Docker connection refused
Your browser uses a certificate store and checks whether the public certificate of the site you're visiting is available. If it is, you'll have no problems visiting the site. Java also checks its certificate store, but it's different from the one used by your browser. This is explained in the white paper on digital signatures. Search the document for PKIX, and you'll discover what's going on. To add a certificate to the JRE's keystore, use the keytool application that comes with the JDK; it doesn't matter whether you're working on Linux or Windows. If you can't install it on the server, just copy the cacerts file, add the certificate on whatever machine you want, and replace the existing cacerts file with the updated one.
There is an option to generate a PDF from an HTML page in my web application. The following exception occurs while doing that. In this HTML page we are accessing css files over https. However, I am able to access the web application over https successfully. javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Please provide your valuable suggestions. Thanks!!
getting SSL handshake exception while generating Pdf from Html in java
Try an SSL checker to verify whether the SSL setup is the problem or not. It will verify your server certificate and tell you where the problem is.
Here is my Nginx conf file: upstream app { server unix:/home/deploy/example_app/shared/tmp/sockets/puma.sock fail_timeout=0; } server { listen 80; listen 443 ssl; # ssl on; server_name localhost example.com www.example.com; root /home/deploy/example_app/current/public; ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem; try_files $uri/index.html $uri @app; location / { proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Connection ''; proxy_pass http://app; } location /.well-known { allow all; } location ~ ^/(assets|fonts|system)/|favicon.ico|robots.txt { gzip_static on; expires max; add_header Cache-Control public; } error_page 500 502 503 504 /500.html; client_max_body_size 4G; keepalive_timeout 10; } The paths to the certificates are correct, but when I access https://example.com it stays loading forever. Is there any problem with my SSL setup?
Enable SSL on Ruby on Rails app with Nginx and Puma
You are right, an official importer from Assembla to GitHub does not exist. So you need to do it manually (implement a utility) consuming the Assembla API to pull the information and upload it to GitHub. You can try to find existing hand-made tools here
I want to migrate a project from Assembla to GitHub with all its tickets and sources. Sources: the sources are not the problem, because I can push them easily to the new environment. Tickets: my problem is the linking between commits that include an issue number (e.g. #123 Increased build number) and the related issue/ticket. GitHub counts everything into its IDs (tickets, pull requests, etc.). That means it is highly unlikely that my Assembla ticket #123 will be my GitHub issue #123, and therefore the connection between the commit and the ticket will break. I tried to export all my tickets from Assembla, but the label "Backup successfully scheduled." has stayed for hours now. Question: please correct me if this assumption is false. My question is whether someone could provide an idea how to solve the problem. It seems that there is no official importer to GitHub or exporter from Assembla.
How do I migrate an Assembla project (Issues and Source) to Github?
Add one of the delegated or cached options to the volume mounting your app directory. I've experienced significant performance increases using cached in particular: volumes: - ~/.composer-docker/cache:/root/.composer/cache:delegated - ./:/usr/src/app:cached
I have a significant delay and high CPU usage when running my vue.js app in a docker instance. This is my docker setup: docker-compose.yml version: '2' services: app: build: context: ./ dockerfile: docker/app.docker working_dir: /usr/src/app volumes: - ~/.composer-docker/cache:/root/.composer/cache:delegated - ./:/usr/src/app stdin_open: true tty: true environment: - HOST=0.0.0.0 - CHOKIDAR_USEPOLLING=true ports: - 8080:8080 app.docker # base image FROM node:8.10.0-alpine # Create app directory WORKDIR /usr/src/app # Install app dependencies COPY package*.json ./ RUN npm install # Bundle app source COPY . . EXPOSE 8080 CMD [ "npm", "run", "serve"] This setup works fine when I type docker-compose up -d, and my app loads at http://localhost:8080/, but hot reloading happens after 10 seconds, then 15 seconds, and so on; it keeps increasing, and my laptop's CPU usage reaches 60% and is still climbing. I am on a MacBook Pro with 16 GB RAM, and for docker I have enabled 4 CPUs and 6 GB RAM. How can this issue be resolved?
Vue.js app on a docker container with hot reload
I used the cron entry below: php /full-path-to-cron-file/cron.php /test/index source: http://www.asim.pk/2009/05/14/creating-and-installing-crontabs-using-codeigniter/ This works for me. Thanks to all.
I am using CodeIgniter for my website. I have to use a cron job to run one of my controller functions. I am using routes on the website, and I am not using index.php in the URL. E.g. http://example.com/welcome/show, where welcome is my controller and show is the function name of that controller. I have used: 0 * * * * php /home/username/public_html/welcome/show It gives 'No such directory'. How can I set a cron job in cPanel for the above URL?
How to set cron job URL for CodeIgniter?
Heap consumption, internally (programmatically): you can use the GetProcessMemoryInfo function: https://msdn.microsoft.com/en-us/library/ms683219.aspx Heap consumption, externally and non-programmatically: you can use MS Technet's VMMap: https://technet.microsoft.com/en-us/sysinternals/vmmap.aspx Stack consumption, internally (programmatically): you can use the Windows Thread Information Block: https://stackoverflow.com/a/1747249/1996740 Stack consumption, externally and programmatically: here is a nice answer showing how you can access an external thread's Thread Information Block: https://stackoverflow.com/a/8751576/1996740
I'm using Visual Studio 2013 on Windows 7 - 64 bit machine. I'm writing a program on C. How can I check how much heap and stack storage my program is using?
How to check how much from heap and from stack my program is using?
As mentioned by @aerokite, this question seems to have already been answered in this community post. Posted as a community wiki.
I have deployed airflow via docker on a kubernetes cluster and now I need to increase the persistent volume's storage capacity. While editing the yaml file via the UI, I get this error: PersistentVolumeClaim "data-pallet-airflow-worker-0" is invalid: spec: Forbidden: field is immutable after creation
How to increase storage capacity of already deployed cluster in Kubernetes?
Most likely it is because mod_rewrite is enabled but .htaccess files are disabled via AllowOverride None, which disables checking .htaccess files (this gives you some performance gains, but you then have to put your mod_rewrite rules directly in the apache configuration files). Change your virtual host to: AllowOverride All
I have a local environment working fine. I pasted a test route in .htaccess and it works as expected (re-routes me to google): RewriteEngine on RewriteRule testpage\.html http://www.google.com [R] I pasted the same thing on my development server (Ubuntu 12.04) and it simply gives me a Not Found page. When I verify it on the dev server by running: sudo a2enmod rewrite it says "Module rewrite already enabled". Edit: It also appears in the "loaded modules" section of phpinfo() and I have restarted the apache server several times since it was installed. Any ideas?
Mod_rewrite not working on Ubuntu Server (works locally, though)
You don't have to generate the CSR on your linux server. You can use the pem or p12 file you created (using your MacBook) on any server. If your code works when you test it on your MacBook, it will work on any server. You just have to copy the pem or p12 file to that server.
I am going to use a linux server for push notifications. Is the following correct? 1. Generate a CSR on the linux server 2. Upload the file to Apple to generate a cert 3. Change this cer to pem and then combine it with my linux private key pem 4. Use the combined pem in my code Is this correct? The Apple documentation confuses me; I can currently only test push notifications from my MacBook, and can't test from other servers.
Linux APNS server which cert should I create?
No. A VSTS account and a GitHub Enterprise instance backed by the same Azure Active Directory do not share a file system.
We're using VSTS, backed by Git, on our Azure tenant. We're considering buying a GitHub Enterprise subscription, to be installed on the same Azure tenant. In this configuration, can both front-ends point to the same file system, so that they can be used simultaneously for the same repos?
Can VSTS and GitHub Enterprise on the same Azure tenant share a file system?
TLS itself has no concept of the certificate being self-signed or not. When you initiate a TLS connection (either by connecting to a specific port or via STARTTLS) the server and client negotiate the TLS connection. As part of the TLS negotiation it is up to the client and server to decide whether the certificate they're presented is valid or not. If the certificate is self-signed, it's possible that the client (I'm assuming you're the server) may reject the certificate because it's not issued by a known CA, or it might accept it. It's therefore possible to use TLS with self-signed certificates (we do it), but it's also possible that a client could reject the connection because it cannot verify the certificate. If you fully control the clients (which you do here) you can of course help this along and ensure they always accept your host's certificate.
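The question is about Zend_Mail, but the trust decision the answer describes is the same in any client stack. For illustration, here is how the two options for accepting a self-signed certificate look in Python's ssl module (a sketch of the idea, not Zend code):

```python
# Two ways a TLS client can accept a self-signed server certificate.
import ssl

def insecure_context():
    # Testing only: skip verification entirely (accepts any certificate).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def pinned_context(ca_file):
    # Preferred: trust exactly the server's own self-signed certificate,
    # exported to a PEM file, instead of the default CA bundle.
    return ssl.create_default_context(cafile=ca_file)
```

Pinning the server's certificate keeps the integrity guarantees of TLS; disabling verification only keeps the encryption.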
I'm using the SMTP transport. I would like to use TLS, but my hosting has a self-signed certificate. Is it possible to use TLS in such a situation?
How to use Zend_Mail_Transport_Smtp with tls and self-signed certificate?
I solved it another way, using 2 batch files, so here is my code. The first one creates a folder in c:, then creates a text file, copies the name of the current user into it, copies the other batch file into the same folder, and finally runs it as the local admin (note that the password will not appear as "*" characters as you type it): mkdir c:\tempfiles$ break>c:\tempfiles$\temp.txt echo %username% >> "c:\tempfiles$\temp.txt" copy "%~dp0\admin.bat" "c:\tempfiles$" runas /noprofile /env /user:%computername%\<LOCAL ADMIN USER> "C:\tempfiles$\admin.bat" pause rmdir /s /q "c:\tempfiles$" The admin.bat takes the user name written in the text file (if it didn't, it would take %username% as the local admin username to add, because we run it as the local admin). Copying the batch file is only necessary so you can run it from anywhere; for example, if it were on a server's mapped drive it would not work. set /p u=<c:\tempfiles$\temp.txt net localgroup Administrators /add <DOMAIN NAME>\%u% I have tried it on multiple computers and it runs on most of them. On some computers it does not, probably because of my company's local policy; I have not figured that out yet. For any questions or suggestions, feel free to write your opinion.
I would like to write a script that will add a domain user to the local administrators group. I already tried NET LOCALGROUP Administrators "domain\domainuser" /ADD but I get an Access Denied error. The problem is that if I run it as the domain user, it does not have local admin rights, and if as the local admin, it does not have access to the domain names (I don't want to use a domain admin). If I manually right-click the computer icon, then Manage, type in the computer name/local admin user/pass, then in the Local Users and Groups -> Groups folder try to add the user to Administrators, I am prompted to log in again. If I then log in with a domain user, it works. My question is whether it is possible to do the same (or something similar) with a batch script?
A bit more challenging - Batch script to add domain user to local administrator group Windows 7
I don't know exactly what you want, but I will list the possibilities here anyway: use Markdown; create/import a card from an issue; use a GitHub Application/the API in order to add automation functions (e.g. move an issue if a label has been added); create custom categories (and add automation).
Can we customise the cards on the Kanban board? Is there any plugin available?
How to customize card in Kanban board in GitHub Projects
I think your cron_tab.py didn't read the django configuration from settings.py. What happens when you run the script from the shell? To run it as a python script rather than a shell script, add #!/usr/bin/env python as the first line and make the file executable; otherwise cron will hand it to the shell, which produces exactly the "command not found" errors in your traceback. Anyway, you should consider using a custom management command for this task.
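If the script doesn't actually need Django models, a plain-stdlib script sidesteps the settings problem entirely. A sketch mirroring the multipart message from the question (the addresses and SMTP host are placeholders, not values from the question):

```python
#!/usr/bin/env python
# Standalone mail script suitable for a crontab entry; no Django required.
import smtplib
from email.message import EmailMessage

def build_message():
    # Plain-text body plus an HTML alternative, like EmailMultiAlternatives.
    msg = EmailMessage()
    msg["Subject"] = "hello"
    msg["From"] = "from@example.com"   # placeholder address
    msg["To"] = "to@example.com"       # placeholder address
    msg.set_content("This is an important message.")
    msg.add_alternative(
        "<p>This is an <strong>important</strong> message.</p>", subtype="html"
    )
    return msg

def send(msg, host="localhost"):
    # Assumes an SMTP server is reachable at `host`; adjust for your setup.
    with smtplib.SMTP(host) as server:
        server.send_message(msg)
```

Cron would then call send(build_message()) from a small __main__ guard at the bottom of the file.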
I would like to send an automated mail using python, django and crontab. So I did the following things. I created a cron_tab.py (inside the folder home/myhome/django/myapp/registration/cron_tab.py) which looks like below: from django.core.mail import send_mail, EmailMessage,EmailMultiAlternatives subject, from_email, to = 'hello', '[email protected]', '[email protected]' text_content = 'This is an important message.' html_content = '<p>This is an <strong>important</strong> message.</p>' msg = EmailMultiAlternatives(subject, text_content, from_email, [to]) msg.attach_alternative(html_content, "text/html") msg.send() Then, using the terminal, I entered crontab by issuing the command crontab -e and scheduled the task like * 1 * * * /home/myhome/django/myapp/registration/cron_tab.py But I didn't receive the mail. What am I doing wrong? Please somebody help me. After changing the file mode I get the following error; the traceback is pasted below: myhome@myhome:~/django/myapp/registration$ ./cron_tab.py from: can't read /var/mail/django.core.mail ./cron_tab.py: line 3: subject: command not found ./cron_tab.py: line 4: from_email: command not found ./cron_tab.py: line 5: to: command not found ./cron_tab.py: line 6: text_content: command not found ./cron_tab.py: line 7: html_content: command not found ./cron_tab.py: line 8: syntax error near unexpected token `(' ./cron_tab.py: line 8: `msg = EmailMultiAlternatives(subject, text_content, from_email, [to])'
How to schedule a job using python and crontab?
Yes, you can use Spark to overwrite the content. You can still read your data with Glue methods, but then convert the DynamicFrame to a Spark DataFrame and overwrite the files:

val datasink = inputTable.toDF()
datasink.write
  .format("orc")
  .mode("overwrite")
  .save("s3://my_out_path")
Consider this code:

val inputTable = glueContext
  .getCatalogSource(database = "my_db", tableName = "my_table")
  .getDynamicFrame()

glueContext.getSinkWithFormat(
  connectionType = "s3",
  options = JsonOptions(Map("path" -> "s3://my_out_path")),
  format = "orc",
  transformationContext = ""
).writeDynamicFrame(inputTable)

When I run this code twice, new orc files are added to the old ones in "s3://my_out_path". Is there a way to always overwrite the path? Note: the written data has no partitions.
How to overwrite data in AWS Glue?
Finally I achieved what I needed with the following steps (still tricky and manual work):

1. Stop the running pod (otherwise you could not use the volume in the next steps).
2. Create a new PVC with the desired capacity (ensure that the spec and labels match the existing PVC).
3. Run this job: https://github.com/edseymour/pvc-transfer — in the spec of job-template.yaml, set the source and destination volumes.
4. Set the reclaimPolicy of the newly created PV to Retain. This ensures the PV won't be deleted after we delete the temporary PVC in the next step.
5. Delete the source and destination PVCs.
6. Create a new PVC with the old name and the new storage capacity.
7. On the new PV, point the claimRef to the new PVC.
I have multiple persistent volumes which need to be shrunk to reduce hosting costs. I already figured out that Kubernetes does not provide such an option. I also tried to clone or restore the volumes from a snapshot to a new, smaller volume, with the same result (requested volume size XXX is less than the size XXX for the source snapshot). Nevertheless I need a solution or workaround to get this done. The cluster is deployed with Rancher and the volumes are mounted from a Ceph cluster. Everything is provided by an external hoster.
Shrink Kubernetes persistent volumes
Azure said that the cert needs to be imported for each resource group.
I'm trying to set up SSL for our new Azure app service. We have several other app services but they're in a different resource group and under a different app service plan.On all my previous app services, there is a section on the SSL blade that shows Private Certificates. And there's an informational block in that section that states "Private certificates list includes all valid private certificates from your subscription." This text shows on all app services, even the new one. However, on previously created app services I see my wildcard cert (i.e., *.domainname.com). On this newly created app service (and new app service plan) I do not see any certs.We only have the one Azure subscription. So if the private certificates list includes all valid private certs from our subscription, why doesn't the new one show them? Is there something I have to do with a new app service and/or app service plan (and maybe resource group as I segregated this into a new one) to make these certs from our subscription appear?Thanks in advance for any advice you have, Denise
Private certificates from my Azure subscription not shown in new appservice
You put your files in the project package, but you didn't put them under version control. All you need is to add them to the VCS with the "git add" command; here is good documentation. Run this command in Git Bash or in a terminal (if you have added git to your PATH). Also, I recommend using a build manager such as Maven, Gradle or Ant to manage your dependencies. Good luck!
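The staging step can be sketched end to end (hypothetical throwaway repo and file names; requires git on the PATH):

```python
import pathlib
import subprocess
import tempfile

# Hypothetical throwaway repo with a lib/ folder holding a fake .jar.
repo = tempfile.mkdtemp()
subprocess.run(["git", "init", "-q", repo], check=True)
lib = pathlib.Path(repo, "lib")
lib.mkdir()
(lib / "helper.jar").write_bytes(b"\x50\x4b\x03\x04")  # jars are zip files

def status() -> str:
    out = subprocess.run(["git", "-C", repo, "status", "--porcelain"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

before = status()
print(before)  # "?? lib/" -- untracked, so a commit would not include it

subprocess.run(["git", "-C", repo, "add", "lib"], check=True)
after = status()
print(after)   # "A  lib/helper.jar" -- staged for the next commit
```

If `.gitignore` still contained `*.jar`, the `git add` step would silently skip the files, which matches the symptom described in the question.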
I have a project in Eclipse where I recently created a new folder called lib and added some .jar files inside. I cannot seem to commit the changes to git; the jars just don't appear in the list of tracked files. I have removed .jar from the .gitignore in my branch and committed that change, and still the same.
Add .jar files into a git repository
docker's --user parameter changes just the uid, not the group id, within the container. So, within the container I have:

id
uid=1002 gid=0(root) groups=0(root)

and it is not like the original system, where I have groups=1000(users). So, one workaround might be mapping the passwd and group files into the container:

-v /etc/docker/passwd:/etc/passwd:ro -v /etc/docker/group:/etc/group:ro

The other idea is to mount a tmp directory owned by the running --user and, when the container's work is complete, copy the files to the final location:

TMPDIR=`mktemp -d`; docker run -v $TMPDIR:/working_dir/ --user=$(id -u) ...; cp -r $TMPDIR/. $NEWDIR

This discussion, "Understanding user file ownership in docker: how to avoid changing permissions of linked volumes", brings some light to my question.
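Why the passwd/group mapping helps: the container resolves a numeric uid to a name and login group only through its own /etc/passwd and /etc/group. A small parser over sample lines (hypothetical users, illustrative only) shows the lookup that fails when the uid is unknown inside the container:

```python
# /etc/passwd fields: name:pw:uid:gid:gecos:home:shell
# /etc/group  fields: name:pw:gid:members
passwd_text = "user2:x:1002:1000::/home/user2:/bin/bash\n"
group_text = "users:x:1000:user1,user2\n"

def login_group(uid: int, passwd: str, groups: str):
    """Return the login group name for a uid, or None if the uid is unknown."""
    for line in passwd.splitlines():
        name, _pw, puid, gid, *_rest = line.split(":")
        if int(puid) == uid:
            for g in groups.splitlines():
                gname, _gpw, ggid, _members = g.split(":")
                if ggid == gid:
                    return gname
    return None  # unknown uid -> the container falls back to gid 0 (root)

print(login_group(1002, passwd_text, group_text))  # users
print(login_group(9999, passwd_text, group_text))  # None
```

With the host's files bind-mounted read-only, uid 1002 resolves to its real login group instead of defaulting to root's gid 0.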
I've played with a lot of permission combinations to make docker work, but first my environment: Ubuntu Linux 15.04 and Docker version 1.5.0, build a8a31ef. I have a directory '/test/dockervolume' and two users, user1 and user2, in a group users:

chown user1.users /test/dockervolume
chmod 775 /test/dockervolume
ls -la
drwxrwxr-x 2 user1 users 4096 Oct 11 11:57 dockervolume

Both user1 and user2 can write and delete files in this directory. I use the standard docker ubuntu:15.04 image. user1 has id 1000 and user2 has id 1002. I run docker with this command:

docker run -it --volume=/test/dockervolume:/tmp/job_output --user=1000 --workdir=/tmp/job_output ubuntu:15.04

Within docker I just do a simple 'touch test', and it works for user1 with id 1000. When I run docker with --user 1002, I can't write to that directory:

I have no name!@6c5e03f4b3a3:/tmp/job_output$ touch test2
touch: cannot touch 'test2': Permission denied

Just to be clear, both users can write to that directory when not in docker. So my question: is this behavior by docker design, is it a bug, or did I miss something in the manual?
Docker with '--user' can not write to volume with different ownership
If you are running this:

docker run -p 8080:8080 jenkins

then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):

http://127.0.0.1:8080 or http://localhost:8080

If you are just running this:

docker run jenkins

you can connect to Jenkins using the container's IP:

http://<container-ip>:8080

The Dockerfile the Jenkins image is built from already EXPOSEs port 8080.
So, I'm trying to get Jenkins working inside of docker as an exercise to get experience using docker. I have a small linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and I have no issues getting the container to start up and connecting to Jenkins over my local network. My issue comes when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server running the container, and a port checker says the port is open. However, when I actually try to go to my-ip:8080, I get either nothing, if I started the container just with -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080. To make sure it wasn't Jenkins, I tried a simple hello-world Flask application and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to allow outside connections to reach my containers? EDIT: I'm using the official Jenkins image from docker hub.
Can't get docker to accept request over the internet
You can re-mount your volume from inside the container in rw mode, like this:

mount -o remount,rw /mnt/data

The catch is that the mount syscall is not allowed inside Docker containers by default, so you would have to run in privileged mode:

docker run --privileged ...

or enable the SYS_ADMIN capability ("perform a range of system administration operations"):

docker run --cap-add=SYS_ADMIN --security-opt apparmor:unconfined

(note that I also had to add --security-opt apparmor:unconfined to make this work on Ubuntu). Also, remounting the rw volume back to ro might be tricky, as some process(es) might already have files inside it open for writing, in which case the remount will fail with a "busy" error message. But my guess is that you can just restart the container instead (as it would be the one running the old version of the app).
I have a dockerized application that uses the filesystem to store lots of state. The application code is contained in the docker image. I am considering an update strategy which involves sharing the volume between two containers, but making sure that at most one container at a time can write to that filesystem. The workflow would be:

- start container A with /data mounted rw
- start container B with /data mounted ro, and a newer version of the application
- stop serving requests to container A
- for container A, make the /data mount read-only
- for container B, make the /data mount read-write
- start serving requests to container B
Is it possible to change the read-only/read-write status of a docker mount at runtime?
OpenResty allows for loading more complex lua code through files. https://github.com/openresty/lua-nginx-module#init_by_lua_file That is just one directive. There are multiple ways you can load lua code. This way worked for me.
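Whichever directive loads the code, the matching logic itself can move from one giant regex alternation to a set lookup. The idea, sketched in Python purely for illustration (the Lua table-based version is analogous; this is not the OpenResty API):

```python
# Whitelist membership via a set instead of a "^(d1|d2|...)$" regex.
allowed = {"domain1.com", "domain2.com", "domain3.com"}

def allow_domain(domain: str) -> bool:
    # O(1) average lookup, no huge pattern to compile, and no risk of an
    # unescaped '.' matching arbitrary characters as it would in a regex.
    return domain.lower() in allowed

print(allow_domain("domain2.com"))   # True
print(allow_domain("domain2Xcom"))   # False (a regex with unescaped dots
                                     # would wrongly accept this)
```

Note also that plain substring search, such as Lua's string.find in the asker's second snippet, can false-positive when one entry happens to contain another domain as a substring, so exact membership is the safer check.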
I'm using OpenResty with nginx to auto-obtain SSL certs from Let's Encrypt. There's a Lua function where you can allow certain domains. In this function, I have a regex to whitelist my domains. After I add a certain amount (not sure the exact amount), I start getting this error:

nginx: [emerg] too long lua code block, probably missing terminating characters in /usr/local/openresty/nginx/conf/nginx.conf:60.

Shrinking down that string makes the error go away. I'm not familiar with Lua, but here's the example code; I have a few hundred domains to add in here:

auto_ssl:set("allow_domain", function(domain)
  return ngx.re.match(domain, "^(domain1.com|domain2.com|domain3.com....)$", "ijo")
end)

Do I need to define this string ahead of time, or maybe specify its length somewhere?

EDIT: OK, so I was thinking about this another way. Does anyone see an issue if I were to try this? Any performance or Lua-related problems, or maybe a more efficient way of doing this?

auto_ssl:set("allow_domain", function(domain)
  domains = [[
  domain1.com
  domain2.com
  domain3.com
  -- continues up to domain300.com
  ]]
  i, j = string.find(domains, domain)
  return i ~= nil
end)
OpenResty auto_ssl too long lua code block error
You need to use SSH to access the repository, not HTTPS. Change the URL for the remote from the https://github.com/... form to the SSH form ([email protected]:user/repo.git). You can use the green Code button towards the right side of the GitHub repository code page to copy the correct URL. The other thing you need to do is make sure you have an up-to-date local installation of Git; you may run into errors if you have an old version of Git installed.
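As a plain string exercise (no GitHub API involved), the remote-URL change amounts to:

```python
def https_to_ssh(url: str) -> str:
    """Rewrite an HTTPS GitHub remote URL to the SSH form."""
    prefix = "https://github.com/"
    if not url.startswith(prefix):
        raise ValueError("not an https github.com url")
    path = url[len(prefix):].rstrip("/")   # e.g. "user/repo.git"
    if not path.endswith(".git"):
        path += ".git"
    return "[email protected]:" + path

converted = https_to_ssh("https://github.com/jacquibo/neo4jDataSets.git")
print(converted)  # [email protected]:jacquibo/neo4jDataSets.git
```

The actual switch is then done with git remote set-url origin <ssh-url>.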
I am using GitHub Desktop v2.5.7 and Git v2.29.1 on Windows 10 64 bit. My GitHub account has 2FA enabled.

- I can clone repositories from GitHub using GitHub Desktop or the command line
- I have generated an SSL key and followed all instructions to add it locally and to GitHub
- I have generated a personal access token and tried using this in the command line, and also my password

Problem: when I attempt to push changes to a repository I get an authentication error.

Error in the command line, with sslverify turned on:

fatal: unable to access 'https://github.com/jacquibo/neo4jDataSets.git/': SSL certificate problem: unable to get local issuer certificate

With sslverify turned off:

info: please complete authentication in your browser...
fatal: incorrect_client_credentials: The client_id and/or client_secret passed are incorrect. [https://docs.github.com/apps/managing-oauth-apps/troubleshooting-oauth-app-access-token-request-errors/#incorrect-client-credentials]
Username for 'https://github.com': [email protected]
Password for 'https://[email protected]@github.com':
remote: No anonymous write access.
fatal: Authentication failed for 'https://github.com/myusername/myrepo.git/'

I have tried:

- all the suggestions in the error messages
- adding an SSH key locally and on GitHub (when I use ssh -i mycertname -vT [email protected] I get a permission denied message, see below)
- adding a personal access token
- checking that origin is using HTTPS

Error using the command line to test the SSH connection:

OpenSSH_8.4p1, OpenSSL 1.1.1h 22 Sep 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to github.com [140.82.121.4] port 22.
debug1: connect to address 140.82.121.4 port 22: Permission denied
ssh: connect to host github.com port 22: Permission denied

How can I fix this problem? Only being able to edit files directly on GitHub.com is not very practical.
Authentication failed when pushing to a repository on GitHub (from GitHub Desktop and command line)
It sounds like you're missing the step where you have SonarQube upgrade its own database schema. Specifically, navigate to [your SonarQube URL]/setup and click the button on that screen. That triggers SonarQube to make the database changes required to support your new version.
We are trying to upgrade our SonarQube instance from 5.6.3 to 6.7.1. We have upgraded our SQL Server to 2014, and 5.6.3 has been tested and is working fine. When I try to start the 6.7.1 server, it shows the warning that the database needs an upgrade, but the process seems to be running. Is the issue caused by the plugins we are supporting, or by the version of the database? I am attaching all the logs that were generated.

Database version: Microsoft SQL Server 2014 (MSSM Studio 12.0.5207.0)
Plugin versions: SonarJava sonar-java-plugin-4.6.0.8784; C# sonar-csharp-plugin-5.10.1.1411

Does the plugin version play any role? (I doubt it, as that is highly unlikely.) Let me know in case any further details are required. Please advise. Thanks in advance!
Upgrade SonarQube 5.6.3 to 6.7.1
Since all your gauges reference the same currentStatus field, when a new value comes in, every gauge's source changes. Instead, use a map to track the current status per id:

public class PrometheusStatusLogger {

    private Map<String, Integer> currentStatuses = new HashMap<>();

    public void statusArrived(String id, int value) {
        if (!currentStatuses.containsKey(id)) {
            Tags tags = Tags.of("product_id", id);
            Gauge.builder("product_status", currentStatuses, map -> map.get(id))
                 .tags(tags)
                 .register(Metrics.globalRegistry);
        }
        currentStatuses.put(id, value);
    }
}
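The underlying pitfall is language-agnostic: a gauge is a callable that is sampled later, so it must close over per-id state, not one shared field. A Python sketch of the same fix (illustrative analogue only, not Micrometer):

```python
# Each "gauge" is a callable sampled at scrape time, so it must read
# its own id's entry in a shared map rather than one shared variable.
current_statuses = {}
gauges = {}

def status_arrived(product_id, value):
    if product_id not in gauges:
        # Bind product_id as a default argument so each gauge
        # permanently reads its own key, not the latest one.
        gauges[product_id] = lambda pid=product_id: current_statuses[pid]
    current_statuses[product_id] = value

status_arrived("1", 2)
status_arrived("2", 3)
print(gauges["1"]())  # 2 -- each gauge now reports its own id's value
print(gauges["2"]())  # 3
```

Without the default-argument binding (or the map lookup in the Java version), every gauge would report the most recently written value, which is exactly the symptom in the question.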
I use Micrometer gauges in a Spring Boot 2 application to track statuses of objects. On status change, the statusArrived() method is called. This method should update the gauge related to that object. Here is my current implementation:

public class PrometheusStatusLogger {

    private int currentStatus;

    public void statusArrived(String id, int value) {
        currentStatus = value;
        Tags tags = Tags.of("product_id", id);
        Gauge.builder("product_status", this::returnStatus)
             .tags(tags)
             .strongReference(true)
             .register(Metrics.globalRegistry);
    }

    private int returnStatus() {
        return currentStatus;
    }
}

This works quite well, but the problem is that when this method is called, all gauge values are updated. I would like only the gauge with the given product_id to be updated.

Input:

statusArrived(1, 2);
statusArrived(2, 3);

Current output:

product_status{product_id=1} 3
product_status{product_id=2} 3

All gauges are updated.

Desired output:

product_status{product_id=1} 2
product_status{product_id=2} 3

Only the gauge with the given product_id tag is updated. How can I achieve that?
How to update MicroMeter gauge according to labels
location /location/ {
    rewrite ^/location/(.*) /$1 break;
    proxy_pass http://localhost:5008;
}

I hope this script helps. The default path "/" works fine as you have written it, but for specific paths like "/api" or "/location" you need the rewrite directive with a regular expression, so the "/location" prefix is stripped and only the remaining path ("/somepath" for "/location/somepath") is forwarded to the upstream.
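The rewrite's regex can be sanity-checked outside nginx; an equivalent substitution in Python (purely illustrative) behaves like:

```python
import re

# Mirror of: rewrite ^/location/(.*) /$1 break;
def rewrite(path: str) -> str:
    return re.sub(r"^/location/(.*)", r"/\1", path)

print(rewrite("/location/somepath"))  # /somepath
print(rewrite("/location/a/b"))       # /a/b
print(rewrite("/other"))              # /other (no match, left unchanged)
```

Note that nginx applies the rewrite to the URI only; the query string is appended automatically unless the replacement ends with "?".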
I'm trying to split the FE (angularjs) from the BE (nodejs) so the UI will be served from a different container than the backend. The setup is pretty simple, but I'm absolutely new to nginx, and even though I went through a lot of posts and tried several configurations, I can't get it working as expected. When I spin up the containers and hit localhost in the browser, it starts loading index.html, but to load the full UI it first waits for the response of the configuration call to the backend. The API call to the backend is http://localhost/configuration and it fails with status code 502 Bad Gateway:

192.168.192.1 - - [12/Jun/2021:21:18:42 +0000] "GET /configuration HTTP/1.1" 502 559 "http://localhost/"

I cannot figure out how to make it work. Below are some details about the setup.

Three containers:
- Postgres
- Backend (nodejs)
- Frontend (nginx serving static files + acting also as a reverse proxy)

docker-compose.yml:

version: '1'
services:
  backend:
    build: .
    depends_on:
      - postgres
    ports:
      - '8085:80'
  postgres:
    image: postgres
    ports:
      - '35432:5432'
    volumes:
      - ./core/db/initdb.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      POSTGRES_PASSWORD: ****redacted****
  frontend:
    build:
      context: .
      dockerfile: Dockerfile_UI
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - '80:80'
    restart: always

NGINX config:

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /configuration {
        proxy_pass http://app:8085/;
    }

    location /cm/2 {
        proxy_pass http://app:8085/;
    }
}

Examples of API calls to the backend:

http://localhost:8085/cm/2/purpose
http://localhost:8085/configuration
Docker container running nginx serving static files and reverse proxy
You can't reliably force the creation of separate objects. Some classes may use tagged pointers, and the set of classes doing that can change over time with releases of the OS. A tagged pointer really just encodes the value of the object into a pointer-sized value; it doesn't allocate any memory. By definition, any two objects represented as tagged pointers whose values are equal will have equal "addresses". Also, an init method is just a method. It can return any object it wants; there's no rule that it has to return the receiver. It can release the alloc'ed object it is sent to (self) and return a different object. If it can determine that an existing object (such as the parameter you're passing to -initWithTimeInterval:sinceDate:) meets its needs, it may return that object (with an extra retain). This sort of thing is common in immutable value classes, like NSDate or NSString. You're going to have to reconsider your supposed need to "make sure 2 different NSDate instances are really two different instances of allocated memory".
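An analogous effect is easy to observe in other runtimes. CPython, for instance, caches small integers, so equal values can share a single identity; this is an illustrative analogy to tagged pointers, not the Objective-C mechanism itself:

```python
# Equality vs identity: cached/interned values can share one "address",
# much like equal tagged-pointer NSDates compare pointer-equal.
a = 256
b = 255 + 1
print(a == b)  # True: equal values
print(a is b)  # True: CPython caches ints in -5..256, so one shared object

x = 10 ** 100
y = 10 ** 100
print(x == y)  # True
# Whether `x is y` holds is implementation-dependent for big ints --
# exactly why tests should never assert on object identity for value types.
```

The practical takeaway matches the answer: assert on equality (isEqual:), never on pointer identity, for immutable value classes.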
I'm creating tests where I have to make sure 2 different NSDate instances are really two different instances of allocated memory. So I have this example code:

NSDate *date1 = [NSDate date];
NSDate *date2 = [[NSDate alloc] initWithTimeInterval:0 sinceDate:date1];
XCTAssertEqualObjects(date1, date2);
XCTAssertNotEqual(date1, date2);

The first assert compares object values using isEqual, and it works great! The second assert should compare pointers using ==. The bizarre thing is that it sometimes fails randomly, telling me that both pointers have the same value (i.e., they are pointing to the same allocated memory). As I'm allocating twice, they are supposed to be different memory areas... So why does this test fail randomly sometimes? Maybe Xcode is reusing memory areas somehow?
Different instances of NSDate pointing to the same allocated memory?
web does not see db on localhost. When the containers are linked, Docker injects the db container's address into web as environment variables (for the alias db, e.g. DB_PORT_5432_TCP_ADDR and DB_PORT_5432_TCP_PORT) and as the hostname db in /etc/hosts. Your services should connect to that host, not localhost. The db container exposes ports (via EXPOSE) to its linked containers, again reflected in those variables. You can run the db on whatever port you want, as long as it's EXPOSEd.
From my understanding of docker compose / fig, one main reason for creating a link between two services/images is that you do not want to expose ports to others. Here db does not expose any ports and is only linked:

web:
  build: .
  links:
    - db
  ports:
    - "8000:8000"
db:
  image: postgres

Does web think db runs on its localhost? Would I connect from a script/program in web to localhost:5432 (the standard PostgreSQL port) to get a database connection? And if this is correct, how can I change port 5432 to 6432 without exposing it? Would I just run postgresql on a different port?

Update: useful links after some input:
http://docs.docker.com/userguide/dockerlinks/
https://docs.docker.com/compose/yml/#links
Understanding ports and links in docker compose
Probably this doc is not 100% complete and some default variables still require certificates. Try to generate the certificates manually.
I've been attempting to follow the instructions listed in https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker-multinode/master.md#starting-the-kubernetes-master, but the apiserver won't stay up; it exits with code 255 almost immediately. The last thing in the logs for the container is:

F0222 21:45:10.776761 1 server.go:319] Invalid Authentication Config: open /srv/kubernetes/ca.crt: no such file or directory

I've tried both the 1.2.0-alpha.7 and 1.1.2 versions of the docker container with:

sudo docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    --pid=host \
    -d \
    gcr.io/google_containers/hyperkube-amd64:v1.1.2 \
    /hyperkube kubelet \
    --allow-privileged=true \
    --api-servers=http://localhost:8080 \
    --v=2 \
    --address=0.0.0.0 \
    --enable-server \
    --hostname-override=127.0.0.1 \
    --config=/etc/kubernetes/manifests-multi \
    --containerized \
    --cluster-dns=10.0.0.10 \
    --cluster-domain=cluster.local

The only thing I could find suggested openssl as a dependency, but I've installed that and I still get the error. It seems to suggest I'm missing a certificate, but I can't find any documentation on it; any pointers would be appreciated.
Kubernetes Docker Multi Node Setup issues
In Grafana there are "variables": go to dashboard settings → Variables, and from there create a new variable with type "query", name it e.g. "area", and give it the SQL query (select area from test). Save it, and then in your select query use: select ... where area in ($area).
Reference: https://docs.timescale.com/timescaledb/latest/tutorials/grafana/grafana-variables/#using-grafana-variables
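Conceptually, Grafana interpolates the variable into the panel SQL before it reaches the database; for a multi-value variable, $area becomes a quoted, comma-separated list. A rough illustration of that substitution (hypothetical helper; Grafana's exact formatting depends on the data source):

```python
def expand(query: str, name: str, values) -> str:
    """Replace $name in the query with a quoted, comma-separated SQL list."""
    rendered = ", ".join("'" + v.replace("'", "''") + "'" for v in values)
    return query.replace("$" + name, rendered)

q = "select group_name, sum(sales) from test where area in ($area) group by group_name"
expanded = expand(q, "area", ["AREA1", "AREA2"])
print(expanded)
# ... where area in ('AREA1', 'AREA2') ...
```

Because of this expansion, writing `where area in ($area)` keeps the query valid whether the user selects one area or a hundred.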
So I am new to Grafana. I have a table and a bar chart. The bar chart query is as follows:

select coalesce(group_name) as group_name,
       sum(sales) filter (where device_type = 'HEAD_PHONES') as head_phones,
       sum(sales) filter (where device_type = 'GUITAR') as guitar,
       sum(sales) filter (where device_type = 'XBOX') as xbox,
       sum(sales) filter (where device_type is null) as other
from test
where area = 'AREA1' -- make this a grafana variable
group by coalesce(group_name);

I would like that every time the user clicks a row within the table, the bar chart is updated using the same query, but with the area changed depending on the row clicked. How do I make it a Grafana variable, supposing there are more than 100 areas?
Grafana create query variable from table
I believe the preferred method of downloading files with AFNetworking is by setting the "outputStream" property. According to the AFNetworking documentation: "The output stream that is used to write data received until the request is finished. By default, data is accumulated into a buffer that is stored into responseData upon completion of the request. When outputStream is set, the data will not be accumulated into an internal buffer, and as a result, the responseData property of the completed request will be nil. The output stream will be scheduled in the network thread runloop upon being set." I was having the same problem and solved it by using outputStream.
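The memory difference the docs describe is the generic buffer-vs-stream trade-off. A small Python sketch (not AFNetworking; the 20 MB source is simulated) shows incoming chunks being written straight to disk, so only one chunk is ever held in memory:

```python
import io
import os
import tempfile

# Simulated 20 MB "download" source.
source = io.BytesIO(b"x" * (20 * 1024 * 1024))

out_path = tempfile.mktemp()
chunk_size = 64 * 1024
largest_buffer = 0

with open(out_path, "wb") as out:
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        largest_buffer = max(largest_buffer, len(chunk))
        out.write(chunk)  # write through to disk; nothing accumulates in RAM

size_ok = os.path.getsize(out_path) == 20 * 1024 * 1024
os.remove(out_path)
print(size_ok)                       # True: the whole file reached disk
print(largest_buffer <= chunk_size)  # True: peak in-memory buffer stayed tiny
```

Buffering the whole response instead (the default responseData behaviour) would hold all 20 MB per download, which compounds quickly when several cell downloads run at once.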
I am downloading movie files from UIGridViewCells. My code is:

NSMutableURLRequest *rq = [[APIClient sharedClient] requestWithMethod:@"GET" path:[[self item] downloadUrl] parameters:nil];
[rq setTimeoutInterval:5000];
_downloadOperation = [[AFHTTPRequestOperation alloc] initWithRequest:rq];
_downloadOperation.outputStream = [NSOutputStream outputStreamToFileAtPath:[[self item] localUrl] append:NO];
__weak typeof(self) weakSelf = self;
[_downloadOperation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    NSLog(@"Successfully downloaded file to %@", [weakSelf.item localUrl]);
    [Helper saveItemDownloaded:weakSelf.item.productId];
    weakSelf.isDownloading = NO;
    [weakSelf.progressOverlayView removeFromSuperview];
    [weakSelf setUserInteractionEnabled:YES];
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    NSLog(@"Error: %@", error);
    [weakSelf.progressOverlayView removeFromSuperview];
    [weakSelf setUserInteractionEnabled:YES];
    weakSelf.isDownloading = NO;
}];
[_downloadOperation setDownloadProgressBlock:^(NSUInteger bytesRead, long long totalBytesRead, long long totalBytesExpectedToRead) {
    float progress = totalBytesRead / (float)totalBytesExpectedToRead;
    weakSelf.progressOverlayView.progress = progress;
}];
[[NSOperationQueue mainQueue] addOperation:_downloadOperation];

And the property in ItemCell is:

@property (nonatomic, retain) AFHTTPRequestOperation *downloadOperation;

After 1-2 successful downloads (20 MB each), I receive a memory warning. Memory use increases with each download and never decreases when a download finishes.
Receiving memory warning when downloading multiple files with AFNetworking
Escape it with \+ in your regex (or add + inside the character class, where it is already literal).
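The pattern change can be checked outside Apache; mod_rewrite uses PCRE, and the character-class semantics are the same in Python's re (illustrative check only):

```python
import re

# The rule's character class with a literal '+' added; inside [...] the
# plus needs no escaping (an escaped \+ works identically).
segment = re.compile(r"^[A-Za-z\s_+-]+$")

print(bool(segment.match("my+page+title")))  # True: '+' is now allowed
print(bool(segment.match("my page title")))  # True: spaces still allowed
print(bool(segment.match("my/page")))        # False: '/' is not in the class
```

Keep in mind that a literal + in a URL's query string is decoded as a space by PHP; in the path portion it stays a plus, which is what makes this trick work for pretty URLs.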
I want my URL rewrite to allow plus signs in the string, so I don't have the yucky %20 all over:

RewriteEngine on
RewriteRule ^/?([A-Za-z-\s_]+)/([A-Za-z-\s_]+)/([A-Za-z-\s_]+)$ display.php?a=$1&b=$2&c=$3 [L]

How would I do this?
Allow +'s in URL Rewrite
It seems the typographic quote characters (“ ”) might be causing the problem. Try replacing them by typing the quotes in again, or copy and paste my example below. This should definitely work:

{"find":"terms","field":"sourceEnvironment"}
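The parser failure is reproducible with any strict JSON parser; a quick Python illustration of why the curly quotes break at position 1:

```python
import json

curly = '{“find”: “terms”, “field”: “sourceEnvironment”}'    # word-processor quotes
straight = '{"find": "terms", "field": "sourceEnvironment"}'  # real JSON quotes

try:
    json.loads(curly)
    parsed_curly = True
except json.JSONDecodeError:
    parsed_curly = False

print(parsed_curly)                  # False: “ ” are not JSON string quotes
print(json.loads(straight)["find"])  # terms
```

Curly quotes usually sneak in via copy-paste from blogs or word processors, which is exactly what the Grafana error message ("Unexpected token “ in JSON at position 1") points at.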
I am fighting with building a proper query for a templated variable in Grafana. I would like to build a query-type variable which will take all values from the field sourceEnvironment. Document example:

{
  "host": "10.6.0.132",
  "memoryFree": 927296,
  "type": "system",
  "path": "/appl/Axway-7.5.3/apigateway/events/group-6_instance-9.log",
  "memoryTotal": 16258844,
  "@timestamp": "2019-06-17T00:00:27.216Z",
  "@version": "1",
  "memoryUsed": 16073968,
  "sourceEnvironment": "test"
}

I have searched a lot of articles and the official documentation, but no hint works for me. Based on https://grafana.com/blog/2016/03/09/how-to-effectively-use-the-elasticsearch-data-source-in-grafana-and-solutions-to-common-pitfalls/ it should be:

{“find”: “terms”, “field”: “sourceEnvironment”}

But I still get the error: Template variables could not be initialized: Unexpected token “ in JSON at position 1. Any idea what's wrong?

Thanks and regards, Reddy
Unable to build query in Grafana to elastic source in variables templating
You can run the following commands in the projectAPrivate root directory:

git remote add public https://github.com/exampleuser/old-repository.git
git pull public master

Then you will get all the updates from the projectA repository. Merge the changes, resolve conflicts etc., and after that you can run the following command to push them to projectAPrivate:

git push origin master
I have bare-cloned a public GitHub repository (say projectA) and created a private GitHub repository (say projectAPrivate), then mirror-pushed the cloned projectA to projectAPrivate (as outlined at https://help.github.com/articles/duplicating-a-repository):

git clone --bare https://github.com/exampleuser/old-repository.git   # Make a bare clone of the repository
cd old-repository.git
git push --mirror https://github.com/exampleuser/new-repository.git  # Mirror-push to the new repository
cd ..
rm -rf old-repository.git   # Remove our temporary local repository

Since then, I have pushed changes to projectAPrivate. Now projectA has released new versions. How do I pull/merge changes from the public projectA repository into my private projectAPrivate repository?
Pull changes from a github public repository to github private repository
This worked on my end (just replace the container ID):

docker exec 1d3595c0ce87 sh -c 'mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql'
mysqldump: [Warning] Using a password on the command line interface can be insecure.
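The asker's errors come from where the > redirection is interpreted: unquoted, the host shell applies it before docker ever runs; wrapped in sh -c, the redirection happens inside the container, where /dumps exists. The same distinction can be sketched with plain subprocess calls (Python, illustrative):

```python
import os
import subprocess
import tempfile

os.chdir(tempfile.mkdtemp())

# Without a shell wrapper, ">" is just another argument to the command:
subprocess.run(["echo", "hello", ">", "out1.txt"], stdout=subprocess.DEVNULL)
no_redirect = os.path.exists("out1.txt")
print(no_redirect)  # False: no file was created, nothing was redirected

# Wrapping the whole command line in `sh -c '...'` makes the inner shell
# perform the redirection -- the same reason `docker exec ... sh -c` works
# while an unquoted `>` would be consumed by the host shell instead.
subprocess.run(["sh", "-c", "echo hello > out2.txt"])
content = open("out2.txt").read().strip()
print(content)  # hello
```

So the single quotes around the mysqldump pipeline are what hand the redirection to the shell inside the container.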
I want to create MySQL dumps for a database which is running in a docker container. However, I do not want to get into the container and execute the command; I want to run it from the host machine. Is there a way to do it? I tried a few things, but I am probably getting the commands wrong:

docker exec -d mysql sh mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql
docker exec -d mysql sh $(mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql)
docker exec -d mysql mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql

The dumps directory is already bind-mounted to the host machine. These commands always end up with an error:

bash: /dumps/MyNewDump.sql: No such file or directory

But if I just run mysqldump -uroot -pSomePassword DBName > /dumps/MyNewDump.sql inside the container, it works fine.
How to execute mysqldump command from the host machine to a mysql docker container
I suggest you make the lock not the file itself, but an actual file lock:

$fp = fopen($trigger_file2, "c+"); // "c+" creates the file if it doesn't exist
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    die("another script running");
}

LOCK_NB isn't respected on Windows, but you don't seem to be using it. Just prepend your script with that; the lock is released automatically when the process exits, so a timed-out run can never leave a stale lock behind the way a trigger file can.
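The same single-instance pattern in Python (fcntl is POSIX-only; illustrative analogue of the PHP flock call, with a hypothetical lock-file name):

```python
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "myjob.lock")

# First "instance": open (creating if needed) and take the lock.
fp1 = open(lock_path, "a")
fcntl.flock(fp1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # succeeds

# Second "instance": the non-blocking attempt fails while fp1 holds it.
fp2 = open(lock_path, "a")
try:
    fcntl.flock(fp2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_got_lock = True
except BlockingIOError:
    second_got_lock = False

print(second_got_lock)  # False: only one instance runs at a time

fp1.close()  # closing releases the lock (this also happens on process exit)
```

Because the kernel drops the lock when the holder exits or is killed, there is no stale state to clean up, unlike the exists-then-create trigger-file approach, which is also racy between the file_exists check and the fopen.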
I can't seem to figure this one out. I have a validator-type script that validates certain files on my server (checks if they exist, their contents, etc). I want it to run continuously with only one instance of it, not more. The current solution I have doesn't seem to work, I'm guessing due to timeouts (possibly other reasons too). For each file that I validate, I set the PHP timeout to 60 seconds. I have this at the top of the script:

$trigger_file2 = 'path/to/a/randomfile';
if (file_exists($trigger_file2)){
    // another script is working, exit
    exit;
}
// create file inside app
$handle = fopen($trigger_file2, 'w');
fwrite($handle, '0');
fclose($handle);

At the end of the script, I just use unlink() on the trigger file. Due to possible timeouts, and possibly other reasons, the trigger file isn't always getting deleted and is causing problems. I also tried adding code that checks the time the trigger file was modified and deletes it if it is older than 10 minutes, but this is really inefficient, as the problem occurs often. Any ideas on how to get around this? I thought of doing something with PIDs of running scripts, but I'm using CakePHP for the script, so every process looks like it's using index.php (I think). Any other ideas on how to do this? Help would be immensely appreciated.
Having Trouble Allowing Only One Instance of a PHP Script at a Time, Using Cron
You can use the backup directive on the server line and option allbackups in the relevant backend section. You can also add a weight for each server to define which backup server should be used, in priority order.

answered Dec 19, 2020 by Aleksandar
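A sketch of what that can look like in haproxy.cfg (server names and addresses are made up):

```
backend app
    balance roundrobin
    option allbackups
    server main1 10.0.0.1:80 check
    server main2 10.0.0.2:80 check
    server main3 10.0.0.3:80 check
    server bak1  10.0.1.1:80 check backup
    server bak2  10.0.1.2:80 check backup
    server bak3  10.0.1.3:80 check backup
```

Note that without option allbackups, only the first listed backup server receives traffic, and backup servers are only used once no non-backup server is operational.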
Let's say I have 3 main servers and 3 backup servers. I want HAProxy to replace a main server with a backup server as soon as it goes down. To elaborate: if main server 1 goes down, HAProxy should still continue to use 3 servers in total, where 2 are main and 1 is backup. Similarly, if 2 main servers go down, HAProxy should still use a total of 3 servers, 1 main and 2 backup. Also, once a main server is active again, HAProxy should stop using the backup and switch back to the main server.
How to set a custom number of backup backend servers in HAproxy?
The alternative in many cases is to suggest that users disable the firewall entirely or OK the prompt Windows raises when your server ports begin to listen. Both of these are bad options: one risks leaving the machine open to anything, and the other trains them to approve security prompts uncritically.

You could easily have your applications add and remove themselves as firewall exceptions. They could even disable/enable the firewall (a really bad idea). But it's usually a better idea to add and remove exceptions as part of install/uninstall operations. One place where Microsoft describes the process is "Windows Firewall for Game Developers".

If your security is so poor that you rely on a software firewall alone for protection, you can always set "no exceptions" mode on the firewall. Of course, this reduces your users to passive consumers of Internet services, à la broadcast television reception.
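For illustration, the install-time exception the answer recommends can be added from an installer with netsh; this is a sketch, not taken from the linked article, and the rule name and program path are placeholders:

```
netsh advfirewall firewall add rule name="My App" dir=in action=allow program="C:\Program Files\MyApp\MyApp.exe" enable=yes

rem ...and removed again at uninstall time:
netsh advfirewall firewall delete rule name="My App" program="C:\Program Files\MyApp\MyApp.exe"
```

Running netsh requires elevation, which installers typically already have.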
I've heard that you can, during installation, add an exception for your app to give permission for it to access the internet through the firewall.Anyone know how to do this?
How do you add firewall permission to an app during installation?
Git is a VCS (version control system).

How Git is generally used: a feature branch is made where you need to make changes, while work goes on on the master branch (e.g. bug fixes), and when the feature branch is ready to be added into master (i.e. you're ready to add the feature to your master code base) you run git merge <feature branch>.

How you're using it: it's a bit messy. Try following a regular method where you commit every change you need in your code and in the end merge it with master. Another suggestion: use git rebase (there are many resources explaining it visually). Also look up the git fetch then git merge origin/master method to reflect changes of master into your working repo. Use rebase so as to see only one commit (not commit + merge commit) on your repo.

Basically, go through a Git tutorial. It's easy and you'll understand the basics in less than 10 minutes.

answered Aug 1, 2018 by clamentjohn

From the comments: the asker used cherry-pick instead of merge because they don't want to merge all commits of work_branch into master. The answerer notes you need not have an extra branch just to make changes and finally commit into master; you can just commit on master.
When I was working on projects that use the version control system Git, each time I wanted to commit and then push my modifications I of course had to git pull first. But the problem I meet is that many times Git prohibits me from pulling because I should first commit my local changes. What I used to do is keep another clean Git repo that I don't modify directly; when needed, I git pull there, then merge my modifications into it using meld from the repo where I work. But I think this way is not the best and loses a lot of time; it's time to optimize it.

What I'm thinking of doing is creating another local branch "work_branch", committing my modifications to it locally, then merging commits from "work_branch" to "master", then pushing from master after the pull. So the scenario would look like this:

git branch work_branch
git checkout work_branch
# modify in branch work_branch
git commit
git add <files list>
git commit -m "fixes branch work"   # local commit
gitk   # get the id of the "fixes branch work" commit (example: 5b099287c229e16c24bfcdbfd6fba384cfe165e6)
git checkout master
git pull
git cherry-pick 5b099287c229e16c24bfcdbfd6fba384cfe165e6   # merge the "fixes branch work" commit from work_branch into master
git push

The problem I met is that after the third step (# modify in branch work_branch), each modification in work_branch is seen by the master branch, but what I want is for master to see only the cherry-picked commit from work_branch. Is there a way to improve my solution, or is there another good way to optimize working with Git?
git: better way in commit/push
You ask: How/why can Swift guarantee that object memory allocation will be successful, as implied by a non optional initializer return? It can't. What happens if I try to instantiate a Swift object in the minuscule but nonzero chance that the system that is out of memory? Generally, it will just crash. Fortunately, the app will often receive memory warnings before it gets to that point (at least if the app is consuming memory slowly enough for the OS to have a chance to supply a warning). So apps often have a chance to free up a little memory before everything goes south. And suspended apps may be jettisoned to free up a little memory, too. But Swift does not gracefully handle these degenerate scenarios. We are often stuck looking at crash reports, looking for memory related crashes. And when we have memory issues, they often manifest in inconsistent errors (i.e., not always the same line of code), but there are tricks to identify the source of the problem. We will also often “profile” our apps periodically with Instruments to identify memory issues.
Many Swift initializers return non-optional objects. This means they cannot be nil and are always successful. However, behind the scenes, Swift has to allocate the memory somehow, and in general memory allocation failure is a possibility. For example, memory returned by C function malloc() should be checked for NULL. How/why can Swift guarantee that object memory allocation will be successful, as implied by a non-optional initializer return? What happens if I try to instantiate a Swift object in the minuscule but nonzero chance that the system that is out of memory?
What happens if Swift cannot allocate memory?
Thanks for the answers. I need to perform this action in a Lambda, and this is the result:

import boto3
import json

s3 = boto3.client('s3')

def lambda_handler(event, context):
    file = 'test/data.csv'
    bucket = "my-bucket"
    response = s3.get_object(Bucket=bucket, Key=file)
    fileout = 'test/dout.txt'
    rout = s3.get_object(Bucket=bucket, Key=fileout)
    data = []
    it = response['Body'].iter_lines()
    for i, line in enumerate(it):
        # Do the modification here
        modification_in_line = line.decode('utf-8').xxxxxxx  # xxxxxxx is the action
        data.append(modification_in_line)
    r = s3.put_object(Body='\n'.join(data), Bucket=bucket, Key=fileout)
    return {
        'statusCode': 200,
        'body': json.dumps(data),
    }

answered Jan 14, 2021 by vll1990
I am working with Python and I need to check and edit the content of some files stored in S3. I need to check whether they contain a char or string, and in that case replace it. For example, I want to replace ; with . in the following file:

File1.txt: This is an example;

After the replace:

File1.txt: This is an example.

Is there a way to do the replace without downloading the file?
How to edit S3 files with python
To your question: the code you posted is correct and will work. In my opinion it would be preferable (cleaner/safer) to use C++/CLI as a wrapper for your native C++ classes, so that all the public methods receive only managed objects as parameters; otherwise just use COM.

answered Jun 5, 2013 by makc

From the comments: the answerer clarifies he didn't mean C++/CLI should be used only as a wrapper; rather, the advantages of C++/CLI are lost the second your public methods start receiving native pointers, so as long as you avoid that, everything should be fine and you'll be able to integrate native C++ with C#.
I was doing some C++/CLI programming recently in order to integrate some of our company's native C++ classes into .NET. My question may sound trivial, but this is one thing I'm always not sure about: If there is a ref class with a native pointer, say public ref class ManagedClass { private: NativeClass* pObj1; NativeClass* pObj2; void DoStuff(NativeClass* obj); public: ManagedClass(); bool Activate(); } and a constructor like ManagedClass::ManagedClass() : pObj1(new NativeClass()), pObj2(new NativeClass()) {;} instances of that class will be created on the managed heap. However, pObj1 and pObj2 do point to objects created on the native heap? So there is no pinning needed to use those pointers, even since they are members of a managed class? Especially, if the DoStuff function calles a external native library function, say void ManagedClass::DoStuff(NativeClass* obj) { int returnCode = External::Function(obj); if (returnCode == 0) return true; else return false; } is there no need to write something like pin_ptr<NativeClass> pinPtr = obj etc.? I guess the situation is different if a reference to the pointer is needed; here, however, I understand that the location of the pointer itself may vary due to memory reallocation, but its content, i.e. a memory adress on the native heap, stays valid since the garbage collector won't touch that memory. Is this correct and code like the one above safe to use? Thanks for your help! Matthew
Native pointers in managed class
As far as I know, connecting to a DB just helps to store data, not to display data. You can check the stored data in SonarQube's GUI: click on the project, then click on Activity.

answered Apr 17, 2018 by George312
I connected my SonarQube server to my Postgres DB; however, when I view the "metrics" table, it lacks the actual value of the metric. Those are all the columns I get, which are not particularly helpful. How can I get the actual values of the metrics? My end goal is to obtain metrics such as duplicated code, function size, complexity etc. on my projects. I understand I could also use the REST API to do this; however, another application I am using will need a DB to extract data from.
SonarQube DB lacking values
Easy answer after much head scratching: don't use Cygwin for GitHub access. An alternative is to do all your normal terminal functions in Cygwin and then use the Windows command line for git push origin. Be sure to have SSH keys added to your account; here are steps to add SSH keys to GitHub. Also be sure your SSH keys have a passphrase.
Attempting to push my development branch to my GitHub repo:

git push origin develop -v

The connection hangs and hangs and hangs and never times out. I never receive error messages, nor "writing objects", nor any sort of communication. Connecting via SSH. I have verified that I can connect to GitHub via SSH, meaning my public keys are valid:

git remote set-url origin git@github.com:username/Forkedrepo.git
ssh -T git@github.com

What else can I do? FWIW, I can connect to other sites via SSH and git push. I also know I CANNOT connect via HTTPS over this router. I am using a Windows workstation, and git push works with other non-GitHub remote repos.
Git push hangs when pushing via ssh
I guess the problem is the line feed character at the end: ngx.say always adds a linefeed, while ngx.print just outputs the data as-is. Using ngx.print solved the problem.

answered Dec 30, 2020 by SweetNGX
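The one-byte discrepancy is easy to reproduce outside NGINX; a small Python sketch (the payload is a stand-in for the real 80-byte JSON body):

```python
payload = b'[{"X":"1","Y":"One","Z":0}]'  # stand-in for the encoded JSON body

ngx_print_body = payload           # ngx.print: sends the data as-is
ngx_say_body = payload + b"\n"     # ngx.say: appends a linefeed

# The "+1 byte" the browser reports is exactly this trailing linefeed.
print(len(ngx_say_body) - len(ngx_print_body))  # 1
```

So either switch to ngx.print, or keep ngx.say and set Content-Length to #js_content + 1.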
I am having an interesting issue. Lua reports that the js_content variable has a length of 80 bytes, but when I don't send the "Content-Length" header, Firefox reports that 81 bytes of data are transferred. I don't know where the +1 excess byte comes from. I will be glad if you can help. An application I wrote with VB.NET gives an error when parsing JSON data from my remote server while the "Content-Length" header is 80 bytes, but it works fine when I add +1.

local ref_array = {1, 2, 3}
local sArray = {}
sArray["1"] = "One"
sArray["2"] = "Two"
sArray["3"] = "Tree"
local ctable = {}
for index, data in ipairs(ref_array) do
    if sArray[tostring(data)] ~= nil then
        local cinfo = {}
        cinfo["X"] = tostring(data)
        cinfo["Y"] = sArray[tostring(data)]
        cinfo["Z"] = 0
        table.insert(ctable, cinfo)
    end
end
local js_content = cjson.encode(ctable)
ngx.header['Content-Type'] = 'application/json'
ngx.header['Content-Length'] = #js_content -- 80 byte
ngx.say(js_content)
ngx.exit(200)
NGINX LUA Content-Length +1 Byte Lost
Can you just use robocopy? This line will copy all files in c:\source and its subfolders that have been modified in the last day to d:\test:

robocopy c:\source d:\test *.* /s /maxage:1

Of course, if you forget to run it one day, you'll miss any files touched that day. So if this is really for backups, the better approach is to use the archive bit:

robocopy c:\source d:\test *.* /s /m

When a file is created or edited, Windows sets its archive bit. robocopy with the /m switch will only copy files with the archive bit set (meaning only the ones that have changed since the last time you ran your script), then clears the archive bit.

answered Jun 10, 2013 by Nate Hekman
Here is what I want to do: I want to write a "bat" file that will check all the files in a single partition to determine whether any file has been revised/created today, and if so, copy these files to a folder. So if I run this bat every day before I leave my office, I can back up all the files I used into a single folder. The bat file I have now copies the folder instead of the file, and sometimes it doesn't work at all... Could you help me debug it? You might want to put it in a root directory such as C: or D:, and then change d:\test to whatever folder you plan to use to test copying the targeted files. Here is the code I have for now:

@echo off
set t=%date%
set t=%t:~0,10%
echo %t%
setlocal ENABLEDELAYEDEXPANSION
for /f "tokens=*" %%i in ('dir /b /a-d') do (
    set d=%%~ti
    set d=!d:~0,10!
    echo !d!
    if "!d!"=="%t%" (if not "%~nx0"=="%%i" copy "%%i" d:\test))
for /f "tokens=*" %%j in ('dir /b /ad') do (
    set d=%%~tj
    set d=!d:~0,10!
    echo !d!
    if "!d!"=="%t%" (echo d|xcopy /e /y "%%j" d:\test\%%j))
bat file debug "back up used files"
Instead of mounting a volume, you could open a new bash in your running container with docker exec:

docker exec -it <id of running container> bash

That way, you can directly go to the folder managed by the webapp from within the container.
I have a Tomcat running in a Docker container and would like to watch what is going on in the webapps directory from the Docker host. Is it possible to do this by mounting a volume, without setting up sshd in the container and without starting a shell inside?
How to make a docker container directory accesible from host?
Take a look at https://github.com/marketplace/actions/branch-merge. The example does almost exactly what you want: it merges every pushed branch called "release/*" into "master". Just change release/* to bugfix/* and master to development and you should be set.

answered Jan 22, 2022 by SebDieBln
Scenario multiple people are working on multiple branches (one to one relationship) for bug fixes. These branches are designated bugfix/[name-here] There is a system that automatically syncs with a branch called development. I'm looking for a way using GitHub, to have all bugfix/* branches automatically merged with the development branch on push. Considerations I don't want to have to approve PR's for this as it requires more manual work. I'm looking for a solution that will allow the development branch to be constantly updated and available for testing at any time. I'm not worried about merge conflicts, since there will not be any cross-over between files between the branches. I've looked into GitHub actions, but the available actions I've already found seem to be one-to-many merging, where I need many-to-one merging. New bugfix/[name-here] branches will be created new all of the time, I need a method that will use RegEx or something like that when getting a list of the branches so that every time new ones are created, the GitHub action doesn't have to be updated as well. PR's will need to be created when something is done testing, and so that particular bugfix/12345 can be merged into master. I'm under the current assumption that the best case here is for me to write my own GitHub action to achieve what I want. Question Does anyone know of an automation I can just plug and play or is there a better way to go about what I'm trying to achieve?
Automatic Merging of branches under bugfix/[name-here] to dev in GitHub
Put the line below in your .htaccess file, and put that file at www.example.com/base_ini/:

Options -Indexes
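One caveat: Options -Indexes disables directory listings only, so a direct request such as /base_ini/default.ini can still be served. To block those as well, an Apache 2.4 fragment along these lines (the .ini pattern is an assumption about the file names) can be added to the same .htaccess:

```
Options -Indexes

# Deny direct requests to the config files themselves
<FilesMatch "\.ini$">
    Require all denied
</FilesMatch>
```

On Apache 2.2, the equivalent inside FilesMatch would be Order allow,deny / Deny from all.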
I am working on a CI application. I have a folder in my project called base_ini, in which I have some config files. I want to secure this folder so no one can view its content directly from the browser. So I tried these two ways:

1. Put an index.html saying that directory access is forbidden.
2. Put a .htaccess file in that folder with Deny from all.

If I request www.example.com/base_ini, I do get a proper error message. But if from the browser I request the path www.example.com/base_ini/default.ini, I can still view its content. How can I stop this access?
How to prevent direct access to files from browser?
The commenter probably means for you to download and install the release of yt-dlp they linked, rather than whatever's in git for youtube-dl. Generally an unhelpful comment, since the project yt-dlp diverged from youtube-dl somewhere around 2020-09-22; they're not the same software anymore, and patches from one are going to be really hard to apply to the other. The two projects do have related history, but they've been diverging on purpose for a while.
My problem with youtube-dl seems to be a well documented albeit recent bug: https://github.com/ytdl-org/youtube-dl/issues/31542

In the thread a contributor writes "use this patch if u [sic] cannot wait for release fix": https://github.com/ytdl-patched/yt-dlp/releases/tag/2023.02.17.334

I have tried a couple of different ways to apply this patch with no success:

git diff > https://github.com/ytdl-patched/yt-dlp/releases/tag/2023.02.17.334
bash: https://github.com/ytdl-patched/yt-dlp/releases/tag/2023.02.17.334: No such file or directory

and

curl https://github.com/critrolesync/critrolesync.github.io/commit/b34accf6638e2dae957b14fb14c4895a92eb2324 | git apply
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed
100 170k 0 170k 0 0 137k 0 --:--:-- 0:00:01 --:--:-- 137k
error: unrecognized input

Can anyone tell me how to do it?
How to (step by step) add a git patch
/var/spool/cron/username

Use su to access the file.
Closed. This question is off-topic and is not accepting answers (closed 11 years ago).

Where is the cron file for a user and for root saved after executing crontab -e and saving the data?
Where crontab -e saves data? [closed]
The usual possible causes are:

- those files are ignored: see if that is the case with git check-ignore -v -- /path/to/file
- those files are part of a nested Git repository (look for a .git in the parent folders)
- those files are part of a submodule (look for a .gitmodules in the root folder of your main repository)

answered Jun 3, 2018 by VonC

From the comments: via git check-ignore -v, the asker found a mis-added "Java" entry at line 25 of their .gitignore, which solved the problem. What is nice with git check-ignore -v is that it gives you the exact path of the various .gitignore files involved.
These days, on my Mac (El Capitan 10.11.6) with IntelliJ (2017.2.6), I just encountered this weird thing. I just created a new class, DumpVersionEnum, but I cannot add it and commit it to my GitHub repository. I checked lots of posts, and articles mentioned these solutions:

- Settings -> Version Control -> Git -> Test button: it's working well;
- the git command git add * in a plain terminal: not working;
- right click the file -> Git (popup menu) -> Add (Option + Command + A): it's actually grey (disabled);
- git clone the repo into another folder and then move the files in: not working either.

The newly added classes just cannot be tracked and pushed to Git. Can anyone help? Thank you.

A workaround I achieved for now: move the files into the root folder of the project -> add -> commit, and they are pushed to the remote repo; but as mentioned, they cannot be tracked in the project package folder. I also tried to then move them into the package in IntelliJ, but then they will again be undetected by Git, even if I do the moving in a plain terminal or the Mac Finder. I tried the moving in a new folder (newly git cloned), just did not do
IntelliJ new-ed class files cannot be detected by git
You may want CURAND.jl, which provides curand_poisson:

using CURAND
n = 10
lambda = .5
curand_poisson(n, lambda)

answered Jul 24, 2022 by Bill

From the comments: the asker had a hard time installing that package; it seems to be phased out, and the dependencies (CUDAdrv) report incompatibility. maleadt notes that CURAND is part of CUDA.jl nowadays, so you shouldn't use the CURAND.jl package (it's old, and probably not even installable); the equivalent function in CUDA.CURAND is rand_poisson(!). The asker reports similar non-results with that as well.
For a stochastic solver that will run on a GPU, I'm currently trying to draw Poisson-distributed random numbers. I will need one number for each entry of a large array. The array lives in device memory and will also be deterministically updated afterwards. The problem I'm facing is that the mean of the distribution depends on the old value of the entry. Therefore, I would have to do naively do something like: CUDA.rand_poisson!(lambda=array*constant) or: array = CUDA.rand_poisson(lambda=array*constant) Both of which don't work, which does not really surprise me, but maybe I just need to get a better understanding of broadcasting? Then I tried writing a kernel which looks like this: function cu_draw_rho!(rho::CuDeviceVector{FloatType}, λ::FloatType) idx = (blockIdx().x - 1i32) * blockDim().x + threadIdx().x stride = gridDim().x * blockDim().x @inbounds for i=idx:stride:length(rho) l = rho[i]*λ # 1. variant rho[i] > 0.f0 && (rho[i] = FloatType(CUDA.rand_poisson(UInt32,1;lambda=l))) # 2. variant rho[i] > 0.f0 && (rho[i] = FloatType(rand(Poisson(lambda=l)))) end return end And many slight variations of the above. I get tons of errors about dynamic function calls, which I connect to the fact that I'm calling functions that are meant for arrays from my kernels. the 2. variant of using rand() works only without the Poisson argument (which uses the Distributions package, I guess?) What is the correct way to do this?
Correct way to generate Poisson-distributed random numbers in Julia GPU code?
In order to bind your Google service account (GSA) to your Kubernetes service account (KSA), you need to enable Workload Identity on the cluster. This is explained in more detail in Google's documentation (https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity).

To enable Workload Identity on an existing cluster, you can run:

gcloud container clusters update MY_CLUSTER \
    --workload-pool=PROJECT_ID.svc.id.goog
I am trying to bind my Google Service Account (GSA) to my Kubernetes Service Account (KSA) so I can connect to my Cloud SQL database from the Google Kubernetes Engine (GKE). I am currently using the follow guide provided in Google's documentation (https://cloud.google.com/sql/docs/sqlserver/connect-kubernetes-engine).Currently I have a cluster running on GKE namedMY_CLUSTER, a GSA with the correct Cloud SQL permissions namedMY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com, and a KSA namedMY_K8S_SERVICE_ACCOUNT. I am trying to bind the two accounts using the following command.gcloud iam service-accounts add-iam-policy-binding \ --member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/MY_K8S_SERVICE_ACCOUNT]" \ --role roles/iam.workloadIdentityUser \ MY_GCP_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.comHowever when I run the previous command I get the following error message.ERROR: Policy modification failed. For a binding with condition, run "gcloud alpha iam policies lint-condition" to identify issues in condition. ERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) INVALID_ARGUMENT: Identity Pool does not exist (PROJECT_ID.svc.id.goog). Please check that you specified a valid resource name as returned in the `name` attribute in the configuration API.Why am I getting this error when I try to bind my GSA to my KSA?
Unable to Bind Google Service Account to Kubernetes Service Account
You need to commit to your local repository; then you can push. But that's probably not going to work, since you aren't logged in (you cloned anonymously, so you have no push access to the author's repository).
I've just cloned a repository, made some changes and now I'd like to send the author my patch. What should I do? I cloned from github anonymously. git push origin ?
I've git clone, now what?
If the metric can have either 0 or 1 values, then sum_over_time(metric[d]) calculates the number of 1 values in the specified lookbehind window d. For example, sum_over_time(up[1h]) returns the number of up samples with value 1 during the last hour. The number of 0 values can then be calculated as count_over_time(up[1h]) - sum_over_time(up[1h]).

If the metric can have values other than 0 and 1, then Prometheus doesn't provide functions for counting the number of samples with a particular value yet :(

There is another Prometheus-like system which allows counting the number of raw samples with a given value in the specified lookback window: VictoriaMetrics (I'm the core developer of this system). It provides the count_eq_over_time function for this task. For example, the following MetricsQL query returns the number of samples for some_metric time series with the value 42 over the last hour:

count_eq_over_time(some_metric[1h], 42)

answered Apr 12, 2022 by valyala
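The 0/1 counting identity from the answer can be sketched offline (the sample values are made up):

```python
# Hypothetical samples of a 0/1 metric (e.g. `up`) scraped during the window.
samples = [1, 1, 0, 1, 0, 0, 1]

total = len(samples)   # what count_over_time(up[1h]) counts: all samples
ups = sum(samples)     # what sum_over_time(up[1h]) counts: the 1-valued samples
downs = total - ups    # samples where the service was down

print(downs)  # 3
```

Multiplying the down-sample count by the scrape interval gives a rough downtime duration.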
I don't speak English very well, but I need some advice. I have Prometheus. How can I calculate the amount of downtime for a service over a period of time? This is my expression:

irate(ALERTS{job="blackbox", alertstate="firing"}[2h])
Prometheus: Count metric value over a period of time
From your comments on your question, it seems that you haven't tried configuring the path to the report, so it's natural that no coverage data is imported. The analysis cannot intuit where reports are, or that it should read them.

Having said that, you also indicate that you're generating a cobertura.xml file, but that's not one of the formats currently supported by SonarCFamily for Objective-C. So you'll need to get your coverage data into the Generic Coverage format, and then include the path to that report using the sonar.coverageReportPaths analysis property.
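Concretely, once a report in the generic format exists, pointing the scanner at it is a one-line property, e.g. in sonar-project.properties (the path is a placeholder):

```
# sonar-project.properties -- path to a report in SonarQube's generic coverage format
sonar.coverageReportPaths=reports/generic-coverage.xml
```

The same property can also be passed on the command line as -Dsonar.coverageReportPaths=... when invoking the scanner.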
I am using Fastlane for building and testing my Objective-C project. I use the scan action to run unit test cases and the slather action to generate the code coverage report. I am able to generate a cobertura.xml report using the slather action, but I am unable to publish the report to SonarQube. I am using SonarQube 6.4 and fastlane 2.64.0.

Fastfile:

scan(
    workspace: "Sample.xcworkspace",
    scheme: "SampleTests",
    code_coverage: true,
    output_types: "html"
)
slather(
    cobertura_xml: true,
    output_directory: "./reports",
    proj: "Sample.xcodeproj",
    workspace: "Sample.xcworkspace",
    scheme: "SampleTests",
)
sonar

The analysis is published to Sonar, but the code coverage report is not updated. Please let me know what I am missing.
Publishing Slather Report to SonarQube
You've disabled CGO for your build, but you're not disabling CGO for your tests, which you must do:

CGO_ENABLED=0 GOOS=linux go test -v ./...
I use Docker to add my project to it; now I want to run some tests on it, and I get errors that the tests failed. Any idea what I missed here?

# build stage
FROM golang:1.11.1-alpine3.8 AS builder
RUN apk add --update --no-cache make \
    git
ADD https://github.com/golang/dep/releases/download/v0.5.0/dep-linux-amd64 /usr/bin/dep
RUN chmod +x /usr/bin/dep
RUN mkdir -p $GOPATH/src/github.company/user/go-application
WORKDIR $GOPATH/src/github.company/user/go-application
COPY Gopkg.toml Gopkg.lock ./
RUN dep ensure --vendor-only
COPY . ./

Now I build the Docker image, which finishes successfully, and I want to run tests on it. I ran the container:

docker run -it goapp

which runs successfully. And now I use the command go test -v ./... and I get this error:

# runtime/cgo
exec: "gcc": executable file not found in $PATH
FAIL github.company/user/go-application [build failed]
FAIL github.company/user/go-application/integration [build failed]

Any idea how to resolve this? I tried another step in the Dockerfile like the following, which doesn't help:

RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix nocgo -o /go-application .
How to disable CGO for running tests
kubectl logs -f <pod-id>

You can use the -f flag:

-f, --follow=false: Specify if the logs should be streamed.

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs

Note from the comments: if the stream stops while the pod is still producing output, it may be related to log rotation (see github.com/kubernetes/kubernetes/issues/59902); restarting the command resumes the stream.
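As a sketch, a few related invocations that are often useful alongside -f (pod, container, and label names below are placeholders, and some flags depend on your kubectl version):

```shell
# Follow the logs of a single pod (placeholder pod name)
kubectl logs -f my-pod

# Follow a specific container in a multi-container pod
kubectl logs -f my-pod -c my-container

# Start from the last 100 lines instead of the full history
kubectl logs -f --tail=100 my-pod

# Follow logs from all pods matching a label selector
kubectl logs -f -l app=my-app
```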
kubectl logs <pod-id> gets the latest logs from my deployment. I am working on a bug and am interested in seeing the logs at runtime. How can I get a continuous stream of logs?

edit: corrected question at the end.
kubectl logs - continuously
From: https://github.com/kubernetes/kubernetes/pull/12717/files

The function func ReadDockerConfigFile() (cfg DockerConfig, err error) is used to parse config which is stored in:

GetPreferredDockercfgPath() + "/config.json"
workingDirPath + "/config.json"
$HOME/.docker/config.json
/.docker/config.json
GetPreferredDockercfgPath() + "/.dockercfg"
workingDirPath + "/.dockercfg"
$HOME/.dockercfg
/.dockercfg

The first four are the new type of secret, and the last four are the old type. This helps explain why moving the file to /.dockercfg fixed your issue, but not why there was an issue in the first place.
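As a sketch of the legacy file format itself: the "auth" field is simply the base64 encoding of "username:password". The credentials below are placeholders, not real ones.

```shell
# Build a legacy .dockercfg by hand (placeholder credentials).
AUTH=$(printf '%s' 'myuser:mypass' | base64)
cat > .dockercfg <<EOF
{
  "https://index.docker.io/v1/": {
    "auth": "${AUTH}",
    "email": "[email protected]"
  }
}
EOF
echo "auth=${AUTH}"
# → auth=bXl1c2VyOm15cGFzcw==
```

Placing this file at one of the paths listed above (e.g. /.dockercfg on the node) is what makes kubelet pick it up.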
I am attempting to pull private Docker images from Docker Hub.

Error: image orgname/imagename:latest not found

The info I am seeing on the internet...

http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry
https://github.com/kubernetes/kubernetes/issues/7954

...leads me to believe I should be able to put something like

{
  "https://index.docker.io/v1/": {
    "auth": "base64pw==",
    "email": "[email protected]"
  }
}

in the kubelet user's $HOME/.dockercfg, and kubelet will then authenticate with the container registry before attempting to pull. This doesn't appear to be working. Am I doing something wrong? Is this still possible?

I am using the vagrant provisioner located in https://github.com/kubernetes/kubernetes/tree/master/cluster

Also: I am aware of the ImagePullSecrets method but am trying to figure out why this isn't working.

Update: I moved /root/.dockercfg to /.dockercfg and it now appears to be pulling private images.
How to use .dockercfg to pull private images with Kubernetes
Finally I found the solution by myself. The clue is to apply NSURLIsExcludedFromBackupKey to the root folder, not to every file you want to exclude from backup. So at the very beginning you should call this on (for example) the "Library/Application Support" folder.
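A minimal Objective-C sketch of the idea, reusing the API call from the question; the path and the call site (early in app launch) are assumptions based on this answer:

```objc
// Exclude the whole Application Support tree from iCloud backup once,
// e.g. early in application:didFinishLaunchingWithOptions:.
NSString *path = [NSHomeDirectory()
    stringByAppendingPathComponent:@"Library/Application Support"];
NSURL *url = [NSURL fileURLWithPath:path];
NSError *error = nil;
BOOL ok = [url setResourceValue:@YES
                         forKey:NSURLIsExcludedFromBackupKey
                          error:&error];
if (!ok) {
    NSLog(@"Failed to exclude from backup: %@", error);
}
```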
I'm using iOS 5.1 and this piece of code:

[pathURL setResourceValue:[NSNumber numberWithBool:YES] forKey:NSURLIsExcludedFromBackupKey error:nil];

The folder where I put my content is (inside the app sandbox) .../Library/Application Support/, not the /Documents folder. I do not receive any errors, and the result of setResourceValue: is YES. Why do I still see 2 MB in Settings -> iCloud -> ... etc., where I can check the app's data size?
Excluding files from iCloud backup
I created a plugin for this behavior (and, by extension, to link Sonar to my Maven projects): https://github.com/VandeperreMaarten/sonar-maven-plugin.

The only things you need to do are:

add the following plugin to your pom.xml

<plugin>
  <groupId>com.viae-it.maven</groupId>
  <artifactId>sonar-maven-plugin</artifactId>
  <version>LATEST</version>
</plugin>

call the plugin to validate the quality gate:

mvn com.viae-it.maven:sonar-maven-plugin:validate-qualitygate
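As a hypothetical sketch, if you want the check to run automatically rather than via an explicit mvn invocation, a goal like this can usually be bound to a lifecycle phase; whether this plugin supports such a binding is an assumption, not verified against its documentation:

```xml
<!-- Hypothetical: bind the quality-gate check to the verify phase -->
<plugin>
  <groupId>com.viae-it.maven</groupId>
  <artifactId>sonar-maven-plugin</artifactId>
  <version>LATEST</version>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>validate-qualitygate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```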
I want to be able to completely stop a Maven build process in case SonarQube detects new rule violations during incremental analyses on the developers' machines. I want to do this in order to force the developers to verify their code quality prior to checking their changes into the SCM (Apache Subversion, in our case). The Build Breaker plugin is not a possibility, since it doesn't actually break the build: it lets the build finish successfully and only reports a status of BUILD FAILURE. Furthermore, Build Breaker is no longer supported for SonarQube 5.1+ in preview/incremental modes.
It is possible to break Maven builds when SonarQube detects new violations, without using the Build Breaker plugin?