Response (string, lengths 15–2k) | Instruction (string, lengths 37–2k) | Prompt (string, lengths 14–160) |
|---|---|---|
tls is an array/slice, so you have to refer to it like that and include it in the original patch:

$ kubectl patch ing hello-world -p '{"spec":{"tls":[{"hosts":["my.host"], "secretName": "updated"}]}}'

A good way to get the -p payloads right (that works for me) is to convert them from YAML to JSON. You can try an online tool like this. | I have this ingress object where I am trying to patch the secretName:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  ...
spec:
  rules:
  - host: my.host
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 8080
  tls:
  - hosts:
    - my.host
    secretName: my-secret

I would like to update the secret name using kubectl patch. I have tried:

$ kubectl patch ing hello-world -p '{"spec":{"tls":{"secretName":"updated"}}}'
Error from server: cannot restore slice from map

and:

$ kubectl patch ing hello-world -p '[{"op": "replace", "path": "/spec/tls/secretName", "value" : "updated"}]'

Error from server (BadRequest): json: cannot unmarshal array into Go value of type map[string]interface {}

Any suggestions? | Kubectl patch gives: Error from server: cannot restore slice from map |
It looks like you did not install all of the intermediate certificates. You can see this where it says "Extra Download" next to two certificates in the "Certification Paths" section of your SSL Labs report.
Most desktop browsers ship with the common intermediate certificates, so they will handle this for you, but some mobile operating systems might not have all of them by default.
You need to chain your certificates together using something like https://certificatechain.io/ and then update your nginx configuration to point to the new chained certificate.
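A minimal sketch of the chaining step itself (the file names and contents below are placeholders, not your real certificates): it is just concatenation, with the server certificate first and the intermediate(s) after it.

```shell
# Demo of building a chained certificate for nginx.
# The .crt contents here are placeholders standing in for real PEM blocks.
mkdir -p /tmp/chain-demo
printf 'SERVER-CERT\n'       > /tmp/chain-demo/teatrclub.crt      # your server certificate
printf 'INTERMEDIATE-CERT\n' > /tmp/chain-demo/intermediate.crt   # the CA's intermediate certificate

# Order matters: server certificate first, then the intermediate(s).
cat /tmp/chain-demo/teatrclub.crt /tmp/chain-demo/intermediate.crt \
    > /tmp/chain-demo/teatrclub.chained.crt

# Prints SERVER-CERT then INTERMEDIATE-CERT
cat /tmp/chain-demo/teatrclub.chained.crt
```

In the nginx config you would then point ssl_certificate at the chained file (e.g. ssl_certificate /etc/ssl/teatrclub.chained.crt;) and reload nginx.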
answered Mar 22, 2017 at 20:09 by Collin Barrett
I spent two days trying to understand why my Android app did not work with my HTTPS server. And the reason was exactly as you wrote. Thank you very much!
– yaskovdev
Dec 1, 2018 at 17:20
|
|
I have a problem: my SSL certificate is not valid, but only on Android devices. Here is my nginx domain conf:
server {
    listen 80;
    server_name teatrclub.pl www.teatrclub.pl;
    return 301 https://$server_name$request_uri;
}

server {
    listen 8080;
    listen 443 ssl;
    ssl on;
    ssl_certificate /etc/ssl/teatrclub.crt;
    ssl_certificate_key /etc/ssl/teatrclub.key;
    server_name teatrclub.pl www.teatrclub.pl;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Here is also my domain analyze:
https://www.ssllabs.com/ssltest/analyze.html?d=teatrclub.pl
I'm green when it comes to nginx configuration and SSL, so maybe someone can give me a hint?
| nginx + SSL certificate not valid on Android |
By default there are no resource requests or limits, which means every pod is created with the BestEffort QoS class. If you want to configure default values for requests and limits, you should make use of a LimitRange.
By definition, "for a Pod to be given a QoS class of BestEffort, the Containers in the Pod must not have any memory or CPU limits or requests." BestEffort pods have the lowest priority for the Kubernetes scheduler and can be evicted first in case of resource contention.
All of the above is true for all Kubernetes distributions, including OpenShift.
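As a sketch, a minimal LimitRange (the namespace and the values are assumptions, pick your own) that gives every container in the namespace default requests and limits, so its pods no longer end up in BestEffort:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: my-namespace   # assumption: substitute your namespace
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container specifies no request
      cpu: 100m
      memory: 128Mi
    default:                 # applied when a container specifies no limit
      cpu: 500m
      memory: 256Mi
```

Apply it with kubectl apply -f limitrange.yaml; containers created afterwards in that namespace pick up these defaults.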
|
Hello fellow K8s users,
I'm trying to understand whether there is a base request and/or limit per pod/container in k8s as it stands today, or whether you know of a future change regarding that.
I've seen this answer:
What is the default memory allocated for a pod
stating that there isn't any, at least for Google's implementation of k8s, and I'd like to know for sure whether that also holds for the current state of k8s in on-prem deployments.
Are there any base request or limit values for a container/pod?
EDIT:
Also, is there a way k8s could predict the container memory request from the application's development language or from environment variables set for the deployment (like from a Java container's RUN command, or env: JVM_OPTS -Xms1G -Xmx1G)?
| Is there a default memory request and\or limits for pods\containers in k8s? [closed] |
First clone one of the repositories:
git clone https://github.com/superbitcoin/SuperBitcoin.git
The superbitcoin/SuperBitcoin repository is your origin. Now add the other repository as a second remote and fetch its commits:
cd SuperBitcoin
git remote add upstream https://github.com/bitcoin/bitcoin.git
git fetch upstream
Now you can use merge-base, as suggested by max630 in the comments:
git merge-base origin/master upstream/master
Note that you must compare specific branches.
This gives c2704ec98a1b7b35b6a7c1b6b26a3f16d44e8880, which is the last common commit between the two branches. You can see this commit in each repository.
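If you want to sanity-check what merge-base reports, the same behaviour can be reproduced on a tiny local repository (everything below is a throwaway demo, assuming git is installed; it does not touch the real repos):

```shell
# Tiny throwaway repo: one shared commit, then two diverging branches.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)    # default branch name varies (master/main)
git commit -q --allow-empty -m "common ancestor"
base=$(git rev-parse HEAD)
git branch fork                           # stand-in for the forked repository's branch
git commit -q --allow-empty -m "later work on $main"
git checkout -q fork
git commit -q --allow-empty -m "later work on fork"
# The merge-base of the diverged branches is the shared ancestor commit:
mb=$(git merge-base "$main" fork)
[ "$mb" = "$base" ] && echo "merge-base found the common ancestor"
```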
|
I want to know what commit a specific project was forked from its parent so I can fork from the same commit number.
How can I achieve this?
More specifically, I want to know the commit number of when
https://github.com/superbitcoin/SuperBitcoin
was forked from
https://github.com/bitcoin/bitcoin
| Find when a GitHub project forked from its GitHub fork parent |
From the Boto3 docs, there is no support for getting the identity ARN, but you can construct it easily. Assuming you are in us-east-1 (or get the region using Boto3):

account_num = '1234567890'
identity = '[email protected]'
print('arn:aws:ses:us-east-1:' + account_num + ':identity/' + identity)

ARN: arn:aws:ses:us-east-1:1234567890:identity/[email protected] | How to get the Identity ARN of a verified SES email?
I have an SES email address which is verified, and I want to get the Identity ARN to assign it to Cognito programmatically. Looking into the Boto3 documentation, I cannot find any method giving me this information. The closest I get is list_verified_email_addresses, which just gives me the email if it is verified... no way to get the ARN from that. | SES how to get ARN of identity? |
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the webserver in this container (depending on Firewall settings etc. of course...)
answered Apr 25, 2019 at 13:08 by bellackn
|
|
I want to expose the container IP to the external network where the host is running, so that I can directly ping the Docker container's IP from an external machine.
If I ping the Docker container's IP from an external machine, where the Docker host and the machine I am pinging from are on the same network, I need to get a response from those containers.
| How to expose the docker container ip to the external network? |
Assigning zeros takes time and is not always what the programmer wants to do. Consider this:
int a;
std::cin >> a;
Why waste time loading a zero into the memory when the first thing you are going to do is store a different value there?
|
I always wondered why there are garbage values stored in a memory space. Why can't the memory be filled with zeros? Is there a particular reason?
For example:
int a ;
cout << a; // garbage value displayed
| Why are memory locations assigned garbage values? |
Try this out:
cd /opt/nginx/cache
and do rm -rf *
and restart the server again
let me know if this helps
answered Oct 20, 2013 at 16:58 by amit karsale
|
|
I just deployed, and everything ran fine without errors; it was running fine previously, too. The code in production is updated to the current version, but when I browse using a new browser, it still serves the old code.
I have tried restarting unicorn and nginx, but the problem persists and there's no error message.
I tried it on staging environment, everything is good. Just doesn't work on production.
I tried to redeploy too. Nothing changed.
UPDATE 1
Not sure what happened, but after restarting the entire server it went fine again. Does anyone know why?
| Updated code on production server, but no changes on browser |
Your suggestion is a good starting point. Scan the image row by row, and when you meet a black pixel, start flood filling it. While you fill, you can keep the bounding box updated. After filling, you just continue the scan.

Fill(img, x, y, box):  # box = [xm, xM, ym, yM], updated in place
    img[x][y] = white
    box[0] = min(box[0], x); box[1] = max(box[1], x)
    box[2] = min(box[2], y); box[3] = max(box[3], y)
    if x > 0 and img[x-1][y] == black:
        Fill(img, x-1, y, box)
    if x < w-1 and img[x+1][y] == black:
        Fill(img, x+1, y, box)
    if y > 0 and img[x][y-1] == black:
        Fill(img, x, y-1, box)
    if y < h-1 and img[x][y+1] == black:
        Fill(img, x, y+1, box)

FloodFill(img):
    for y in range(h):
        for x in range(w):
            if img[x][y] == black:
                box = [x, x, y, y]
                Fill(img, x, y, box)
                Store(x, y, box)

As flood filling is stack-intensive, a scanline-based approach is recommended. | I have a binary image that will have one or more blobs. I want a list of pixels for each blob. If I can find one seed point for each blob, I can flood fill to find the pixels of the blob.

Doing some research on this problem, I think the algorithm I want is "connected component labeling." Most examples I see just color-code the blobs in the output. With this algorithm, will I be able to gather: one point on the blob, and the axis-aligned bounding box of the blob?

Does connected component labeling sound like the right algorithm for what I need? Does anyone have a good CUDA implementation? | GPU blob bounding box connected component labeling |
Passenger author here. Byebug integration is available in Passenger Enterprise: https://www.phusionpassenger.com/library/admin/nginx/debugging_console/ruby/ | I developed a simple rails app which works in the development environment under WEBrick. However, when I move to the production environment it doesn't work. I fixed trivial errors relating to assets and the like, but some things just don't work. It would be extremely helpful to be able to see what is going on interactively with a debugger.

I can insert byebug in the code, and it pauses the code from running, but since neither passenger nor nginx logs to STDOUT (by default) I can't get to the byebug prompt (and neither reads STDIN).

Is there a way to run byebug under passenger + nginx?

Edit: this time my issue was related to https. | Any way to run byebug under passenger + nginx? |
The best way to organize destruction in Delphi is to always think about who creates the given vars.
If you also free them in the same context (for your private vars, the destructor of the class), it's much less likely that you will encounter memory leaks.
Actually, the destructor of a class is normally not called directly via

myInstance.Destroy();

Instead, the typical way of doing it is via

FreeAndNil(myInstance);

or

myInstance.Free();

as Delphi will then take care of calling the destructor.
|
As there is no garbage collection in Delphi, where exactly do you unload variables?
Say I have a type with a set of private vars.
Would it suffice to have a Destroy method that does this work?
Do I need to explicitly call this destroy method in my consuming classes?
| Where should a class free its private variables? |
You can pass any valid directory as the value of sonar.java.binaries, for example:

mkdir /tmp/empty
mvn sonar:sonar -Dsonar.java.binaries=/tmp/empty

This will bypass the problem raised by the Java analyzer, but keep in mind that the analysis results won't be perfectly accurate. It's very common to have some false positives when the analyzer doesn't have access to the bytecode binaries. | I have a java codebase I need to scan in sonarqube, but when I run the scanner I get:

Please provide compiled classes of your project with sonar.java.binaries property

I don't have the classes; the code I was given wasn't compiled. It's also a pretty complex application and I don't really have time to figure out how to build it myself. Is there a way I can force the analysis to run without any binaries available?

Thanks for any help/ideas!
-Jason

(Also, I ran sonarqube 5.x last year on java code, and definitely did not have to use class files for that analysis. I figured this was a new "feature" for version 6, but the documentation says this has existed since version 4.12 (?!)) | How do I disable the bytecode requirement for scanning Java projects in sonarqube 6.5.0.27846? |
Answering my own question: you can override the entry point in the Dockerfile and run an ls or cat command to see inside, e.g.:

ENTRYPOINT ls /etc/fluentd

| This question already has answers here: Exploring Docker container's file system (33 answers). Closed 3 years ago.

Sometimes running the docker image fails, so ssh'ing into the container is not an option. In those cases, how do we see the content inside the container?

There is an existing question, but it was mistakenly marked as a duplicate: how to browse docker image without running it?

NOTE: To the moderators: please read the question properly before making a judgement about closing it. Don't assume you know better than others. | How to view files inside docker image without running it? (NOTE: THIS QUESTION IS HOW TO READ FILES WITHOUT RUNNING THE CONTAINER) [duplicate] |
I had the same problem.
Just delete the folders under %appdata%/code/backups/ and restart VS.
answered Apr 25, 2022 at 10:33 by Ingo Siedermann (edited Sep 16, 2022 by desertnaut)
I have tried this, but it did not help... has anyone found another solution?
– Bo rislav
Jul 14, 2022 at 8:38
It did not help me either.
– Ali
Jul 18, 2022 at 13:57
Thank you. If someone posted this solution on the myriad of unanswered github issues, it would help a lot of people. Another common solution seems to be switching off gpu acceleration, although that did not help for me.
– James Westgate
Jul 21, 2022 at 12:14
|
|
I am trying to open a folder that I opened before, but it crashes. I can open other projects, and restarting the computer didn't help.
Maybe it's because I had a big file (400 MB) open in this folder, but I can't close the file because VS Code crashes every time I try to open the workspace.
https://github.com/microsoft/vscode/issues/126127
https://github.com/microsoft/vscode/issues/130375
| VScode crashed (reason: 'oom', code: '-536870904') |
I was facing the same issue. This worked for me:

RUN touch /var/lib/rpm/* \
 && yum -y install java-1.8.0-openjdk-devel

answered Jan 27, 2019 at 14:42 by Anand Prakash

Touching the files solved the problem for me too. Do you know what the root cause of the error message is? Why/how does this fix the problem? – bdrx, Sep 27, 2021

Would yum -y update be a better thing to run vs touch /var/lib/rpm/* (assuming that fixes the problem)? I would think the update would also touch those files and would make sure it had the latest repo data. – paulie4, May 2, 2022

| Details of the error: we have a custom Docker image built on top of the CentOS 7 base image. While building the image, we got this error:

Rpmdb checksum is invalid: dCDPT(pkg checksums): dbus-libs.x86_64 1:1.10.24-7.el7 - u

The command '/bin/sh -c yum clean all && yum -y swap fakesystemd systemd && yum clean all && yum -y update && yum clean all' returned a non-zero code: 1
07/10/18 [04:54:22]# TRACE : Error Trace: | docker image build getting checksum error - Rpmdb checksum is invalid: dCDPT |
I did a workaround for my problem using an out-of-band solution. Basically, I use a group webhook to trigger a Lambda function in AWS, which in turn triggers the pipeline that I want to run in GitLab. This ensures that I'm still using the same runners as previously.

Group webhook -> AWS Lambda -> trigger pipeline through the API

| Currently, I'm using required pipeline configuration to "inject" a pipeline into all projects in a top-level group and its sub-groups. This allows me to run my pipeline first, then run the pipeline at the local project level. However, required pipeline configuration is deprecated in v15.9 and will be removed in v17.0.

I understand that I can switch to a compliance framework/pipeline, but I would need to manually make changes to each and every project (we have tens of thousands) in the top-level group and subgroups, and there is a possibility that new projects might be left out.

The schema of the groups looks something like this:

Top-level-Group
|-- Subgroup1
|---- Project1
|---- Project2
|---- Sub-Subgroup1
|------ Project3
|-- Subgroup2
|---- Project2

I know there is a question on this previously, but the solution requires me to go to each and every project to include the "general CI/CD configuration file", which is very tedious, and the possibility of missing new projects is high.

I want to be able to run my pipeline (not necessarily in GitLab) for all projects in the top-level group and subgroups, then continue with whatever local pipeline is configured for the projects.

I'd appreciate it if anyone can provide any insights or recommendations. Currently, I have no idea where/how to start. | Injecting a gitlab CI/CD pipeline at a group level |
For Windows, create a .bat file with the needed command, and then create a scheduled task that runs that .bat file according to a schedule.

Create a .bat file in this fashion, replacing your username, password, and database name as appropriate:

mysqldump --opt --host=localhost --user=root --password=yourpassword dbname > C:\some_folder\some_file.sql

Then go to the start menu, control panel, administrative tools, task scheduler. Hit action > create task. Go to the actions tab, hit new, browse to the .bat file and add it to the task. Then go to the triggers tab, hit new, and define your daily schedule. Refer to http://windows.microsoft.com/en-US/windows/schedule-task

You might want to use a tool like 7zip to compress your backups all in the same command (7zip can be invoked from the command line). An example with 7zip installed would look like:

mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\some_file.7z

I use this to include the date and time in the filename:

set _my_datetime=%date:~-4%_%date:~4,2%_%date:~7,2%_%time:~0,2%_%time:~3,2%_%time:~6,2%_%time:~9,2%_
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
echo %_my_datetime%
mysqldump --opt --host=localhost --user=root --password=yourpassword dbname | 7z a -si C:\some_folder\backup_with_datetime_%_my_datetime%_dbname.7z

| I want to make a daily dump of all the databases in MySQL using the Event Scheduler. By now I have this query to create the event:

DELIMITER $$
CREATE EVENT `DailyBackup`
ON SCHEDULE EVERY 1 DAY STARTS '2015-11-09 00:00:01'
ON COMPLETION NOT PRESERVE ENABLE
DO
BEGIN
mysqldump -user=MYUSER -password=MYPASS all-databases > CONCAT('C:\Users\User\Documents\dumps\Dump',DATE_FORMAT(NOW(),%Y %m %d)).sql
END $$
DELIMITER ;

The problem is that MySQL seems to not recognize the command 'mysqldump' and shows me an error like this: Syntax error: missing 'colon'.
I am not an expert in SQL and I've tried to find the solution, but I couldn't. I hope someone can help me with this. | How to create a daily dump in MySQL? |
You could set up an SSH key with no passphrase and put it in the .ssh folder of the cron user's home directory so that it is used automatically. Generally, passphrase-less SSH keys are a bad idea, but if they never leave your server, maybe not so bad. | I'm just getting started with GitHub, so I'm sorry if this is a stupid question.
I'm trying to set up a cron job that pushes to a GitHub repo. I'm doing great until the git push part. If I'm doing a push via SSH, I'll just do git push origin master, which then asks me for the password. How can I include the passphrase in the push request? Something like git push origin master -pPASSPHRASE? Thanks a lot! | Send passphrase with git push |
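The passphrase-less key setup suggested in the answer above can be sketched as follows (the paths here are placeholders for the demo; in real use you would generate the key under the cron user's ~/.ssh and register the public key with GitHub):

```shell
# Generate a passphrase-less deploy key for non-interactive (cron) pushes.
keydir=/tmp/cron-key-demo            # placeholder; use the cron user's ~/.ssh in practice
mkdir -p "$keydir"
rm -f "$keydir/id_ed25519" "$keydir/id_ed25519.pub"
ssh-keygen -q -t ed25519 -N "" -f "$keydir/id_ed25519" -C "cron deploy key"
ls "$keydir"
# 1. Add id_ed25519.pub as a deploy key (with write access) on the GitHub repo.
# 2. Point git at the key in the cron job, e.g.:
#    GIT_SSH_COMMAND="ssh -i /tmp/cron-key-demo/id_ed25519" git push origin master
```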
-o/--output is not a universal flag: it is not included in the default kubectl flags (1.18), and kubectl describe does not support the --output (or shorthand -o) flag. If you want structured output for the secret, use kubectl get instead, e.g. kubectl get secret serviceaccount-foo -n development -o yaml, since kubectl get does support -o.
But, somehow I get the following error when I run the following kubectl command.Can you please tell me why? Thank you.mamun$ kubectl describe secret -n development serviceaccount-foo -o yaml
Error: unknown shorthand flag: 'o' in -o
See 'kubectl describe --help' for usage. | kubectl describe unknown shorthand flag -o |