| Response | Instruction | Prompt |
|---|---|---|
Make it so that CSS makes up the majority of your repository.
One approach could be having your examples as full CSS files in their own right and linking in from the markdown.
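Another option, added here as my own suggestion rather than something from the original answer, is a .gitattributes override; GitHub's Linguist honours linguist-language hints (the examples/ path below is hypothetical):

```
# .gitattributes: count the example files as CSS in the language statistics
examples/* linguist-language=CSS
```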
Documentation on how GitHub decides the language for a repo is here. | I have a GitHub repo that is just a Markdown file, although it's reference material specifically about CSS. I don't have any .css files in the repo, but I'd like GitHub to be able to flag the repo as a CSS repo the same way it tracks other languages in repos. Any suggestions on how to best do this? | Marking a Non-Language GitHub Repo With a Language? |
Following the link suggested in the comments, I ran Gunicorn from the command line and sent a big data request. I saw Gunicorn was reporting:
a "Request body exceeded settings.DATA_UPLOAD_MAX_MEMORY_SIZE" error.
As the documentation says, the default value is about 2.5MB: https://docs.djangoproject.com/en/dev/ref/settings/#data-upload-max-memory-size. After setting it to None, the problem was solved.
The strange thing is that I didn't get any error in my Django project logs!
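For reference, this is how the setting looks in Django's settings.py; the 10 MB figure below is my own illustrative choice, not a value from the original post:

```python
# settings.py
# Django's default DATA_UPLOAD_MAX_MEMORY_SIZE is 2621440 bytes (~2.5 MB).
# Raise it for larger POST bodies, or set it to None to disable the check.
DATA_UPLOAD_MAX_MEMORY_SIZE = 10 * 1024 * 1024  # 10 MB
```

Disabling the check entirely with None works, as the answer found, but an explicit limit keeps some protection against oversized requests.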
|
I have setup my server to use nginx plus gunicorn for hosting a project. When I send POST request of small sizes everything is OK. But when I send POST requests of size about 5MB I get 400 error from server.
I had set client_max_body_size in my nginx configuration to 100M. Can anyone help with this error? Following is how I send request to server :
r = requests.post(url, json=data, timeout=180, cookies=cookies, headers=headers)
400 Error depends on data size. With large data size I get this error!
| nginx + gunicorn 400 error |
You need to obtain the promise for each page when the previous one completes, rather than all at once, e.g.:
function fetchAndProcessPages(i, handlePage) {
retrievePage(pages[i]).then(page => {
handlePage(page);
if (i+1 < pages.length) fetchAndProcessPages(i+1, handlePage);
});
}
fetchAndProcessPages(0, page => console.log(page));
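If async/await is available, the same strictly sequential behaviour can be written iteratively, which also avoids growing a recursion chain; retrievePage below is a stub standing in for the real fetch from the question:

```javascript
// stub standing in for the question's real paged fetch
const retrievePage = (n) => Promise.resolve(`page ${n}`);

// each retrieval starts only after the previous page has been fully handled
async function fetchAndProcessAll(pages, handlePage) {
  for (const p of pages) {
    const page = await retrievePage(p);
    handlePage(page);
  }
}

fetchAndProcessAll([1, 2, 3], (page) => console.log(page));
```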
|
I'm having some issues with what I think is a basic problem regarding use of Promises in my node.JS server side application - unfortunately I can't see how to resolve it despite seeing other similar questions (I think).
Basically my issue is this:
I am trying to retrieve some external data and then process it. There is a lot of data so I have to retrieve it page by page. Additionally given the size of the data, my server cannot execute multiple calls/processes at once as I run out of memory and the server crashes. I don't know until execution time how many pages I have to retrieve to get all the data.
I have tried executing a forEach loop with an array of the number of pages however this clearly doesn't work. e.g.:
pages = [1,2,3,4];
pages.forEach( function(pageNumber){
veryMemoryExpensiveFunctionRetrievingAndProcessingPage(pageNumber).then(
// handle the results);
})
(the behaviour here is that all functions execute synchronously and the server runs out of memory).
I'm pretty stuck here - I know I need to execute that function multiple times sequentially but don't know where to start with doing so! I've also attempted recursion, however this again causes out-of-memory as each call adds to the stack.
| Iterating through expensive async function - memory constraints, recursion? |
In addition to "NodePort" services, there are some additional ways to interact with Kubernetes services from outside the cluster: Use service type "LoadBalancer". It works only for some cloud providers and will not work for VirtualBox, but I think it is good to know about that feature. Link to the documentation. Use one of the latest features, called "Ingress". Here is the description from the manual: "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation. If Kubernetes is not a strict requirement and you can switch to the latest OpenShift Origin (which is "Kubernetes on steroids"), you can use the Origin feature called "router". Information about OpenShift Origin. Information about OpenShift Origin routes. | I run the CoreOS k8s cluster on Mac OSX, which means it's running inside VirtualBox + Vagrant. I have in my service.yaml file: spec:
type: NodePort
When I type kubectl get services, I see:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR
kubernetes 10.100.0.1 <none> 443/TCP <none>
my-frontend 10.100.250.90 nodes 8000/TCP name=my-app
What is the "nodes" external IP? How do I access my-frontend externally? | How to expose a Kubernetes service externally using NodePort |
The way pull requests work is to apply a commit from a fork on top of the upstream repo. For that, the easiest way is to make your fix on the same branch as the one you intend to apply it to (by making a pull request) on the upstream repo. In other words, all your changes should be done in a custom branch, except for the fix, which you should make (or port by cherry-picking) on the same branch as the one used in the original upstream repo. If you want to fix a bug on master from upstream, make your fix in the master branch of your fork, by first making sure your master branch is identical (git pull) to the one in upstream. | I've forked a repo on github to do my own customisations. However, along the way, I discovered a bug and fixed it and would like to send a pull request upstream. I followed the guide at: http://gun.io/blog/how-to-github-fork-branch-and-pull-request/ and have created a branch with just the bugfix on it - but when I go to submit a pull request to the upstream, it lists all the changes I've made since I forked; I can't seem to find a way to isolate the bug fix patch.
I don't want to send all my changes, and I'm guessing they don't want to receive them - so how do I send just the bug fix? If it helps, the repo is https://github.com/chrisjensen/ankusa and the branch is untrainfix | How to push a bug upstream on github |
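The answer's advice can be seen end-to-end in a throwaway repository; everything below is a constructed demo (the branch name untrainfix comes from the question, the rest is illustrative):

```shell
set -e
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
BASE=$(git rev-parse HEAD)                    # stand-in for upstream/master
echo fix > fix.txt && git add fix.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "bugfix"
FIX=$(git rev-parse HEAD)
echo custom > custom.txt && git add custom.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "my customisations"
# branch off the clean base and bring over only the fix commit
git checkout -q -b untrainfix "$BASE"
git -c user.name=demo -c user.email=demo@example.com cherry-pick "$FIX"
```

A pull request opened from untrainfix now carries only the bugfix, none of the customisations.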
Have it like this:# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^bob/?$ /about [R,NC,L]
RewriteRule ^properties/([a-zA-Z]+)/([a-zA-Z+'.]+)/?$ /properties/?prov=$1&city=$2 [R,NC,QSA,L]
RewriteRule ^(?:properties/)?([0-9]+)/?$ /properties/?id=$1 [R,QSA,NC,L]
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /wordpress/index.php [L]
</IfModule>
# END WordPress | I have a wordpress site with some custom redirect rules set up. The weird thing is I am sure all of these were working before, but now some of them no longer function. Here is the complete htaccess file:
RewriteRule ^properties/([a-zA-Z]+)/([a-zA-Z\+\'.]+) /properties/?prov=$1&city=$2&%{QUERY_STRING} [R,NC]
RewriteRule ^properties/([0-9]+) /properties/?id=$1 [R,NC]
RewriteRule ^([0-9][0-9][0-9][0-9][0-9])$ /properties/?id=$1 [R,NC]
RewriteRule ^expand.php?id=([0-9]+) /properties/?id=$1 [R,NC]
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /wordpress/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /wordpress/index.php [L]
</IfModule>
# END WordPress
Right now the only rule that actually works (other than the directory change for wordpress itself) is RewriteRule ^([0-9][0-9][0-9][0-9][0-9])$ /properties/?id=$1 [R,NC]. I've tried throwing in simple rules to test, like RewriteRule ^/bob /contact [R,NC], but that doesn't work either. *Edit: the issue below was fixed and is definitely not related to the issue above (but I'll leave it here in case a comment referenced it).* Also, not sure if this gives any insight, but on the page where the redirect actually works my wordpress theme is broken: the wp_footer never fires and the rest of the page fails | htaccess regex wordpress not working |
You can use rewrite of nginx. Something like this should work
location /app/ {
rewrite /app/(.*) /$1 break;
proxy_pass http://localhost:8084;
}
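As an aside (my addition, not part of the original answer): when the location ends with a slash and proxy_pass is given a URI part, nginx performs the same prefix replacement without an explicit rewrite:

```nginx
location /app/ {
    # the trailing slash makes nginx swap the /app/ prefix for /
    proxy_pass http://localhost:8084/;
}
```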
answered Apr 2, 2017 at 13:30
Bipul Jain
|
|
Usually, when doing a reverse proxy setup, you have a server on your backend which looks like this:
http://localhost:8084/app-root
and you can proxy_pass
location /app-root {
proxy_pass http://localhost:8084;
}
It will proxy www.my-domain.com/app-root to the internal server http://localhost:8084/app-root.
Great!
Can someone explain what needs to be done if the server insists on hosting from the root, like so:
http://localhost:8084/index.html
http://localhost:8084/images/image1.jpg
I want this to be accessible via
http://www.my-domain.com/app/index.html
http://www.my-domain.com/app/images/image1.jpg
| NGINX reverse proxy to a host with no root subdirectory |
Yes, the intuition for this scenario is to use multiple threads or even GPUs to accelerate. But the important thing is to figure out whether the scenario is suited for parallel computation. You suggested that these datasets are independent of each other, yet when you run the multi-threaded version on 8 cores there is no obvious improvement: this suggests potential issues. Either your statement about the independence of the datasets is wrong, or your implementation of the multi-threaded code is not optimized. I would suggest you tune your code first to see an improvement, and then seek methods to transplant it to GPU platforms. You can also take a look at OpenCL, which is intended for both parallel CPU threads and GPU cores.
But the important thing is to figure out whether your problem is really suited for parallel computing. | I am creating a database using C#. The problem is I have close to 4 million datapoints and it takes a lot of time to complete the database (maybe several months). The code looks something like this:
int[,,,] Result1=new int[10,10,10,10];
int[,,,] Result2=new int[10,10,10,10];
int[,,,] Result3=new int[10,10,10,10];
int[,,,] Result4=new int[10,10,10,10];
for (int i=0;i<10;i++)
{
for (int j=0;j<10;j++)
{
for (int k=0;k<10;k++)
{
for (int l=0;l<10;l++)
{
Result1[i,j,k,l]=myFunction1(i,j,k,l);
Result2[i,j,k,l]=myFunction2(i,j,k,l);
Result3[i,j,k,l]=myFunction3(i,j,k,l);
Result4[i,j,k,l]=myFunction4(i,j,k,l);
}
}
}
}
All the elements of the Result arrays are completely independent of each other. My PC has 8 cores and I have created a thread for each of the myFunction methods, but the whole process would still take a long time simply because there are many cases. I am wondering if there is any way to run this on the GPU rather than the CPU. I have not done it before and I do not know how it's going to work. I would appreciate it if someone could help me with this. | Parallel computing of array elements on GPU |
I would remove the provider key. The carrierwave-aws gem readme (I'm guessing you are using that or something similar) does not even mention the provider key. That might have been an old requirement that has since been deprecated. (answered Nov 13, 2015 by Ryan K) | I have been bumping my head against the wall trying to get this working on production. For some reason, it works locally but not up on Heroku. I keep getting this error message: ArgumentError in Sessions#index: invalid configuration option :provider. At first I assumed it was because of this! But later, after further digging, I found out it's pointing to my initializers/aws.rb: CarrierWave.configure do |config|
config.storage = :aws
config.aws_bucket = 'thehatgame'
config.aws_acl = :public_read
config.aws_authenticated_url_expiration = 60 * 60 * 24 * 365
config.aws_credentials = {
:provider => 'AWS',
:access_key_id => ENV['SECRET_KEY'],
:secret_access_key => ENV['SECRET_ACCESS_KEY'],
:region => ENV['S3_REGION']
}
end
Any help is welcomed; I did find a link to a similar question, but that didn't work either | invalid configuration option `:provider' |
Use docker run -i -t <your-options>. Here, -i stands for interactive mode and -t allocates a pseudo-terminal. In your scenario that would be: docker run -p 4000:80 -it newapp. Hope it helps! | I am new to Docker. I have a script named ApiClient.py.
The ApiClient.py script asks the user to input some data such as the user's email, password, the input file (where the script will get some input information) and the output file (where the results will be output). I have used this Dockerfile: FROM python:3
WORKDIR /Users/username/Desktop/Dockerfiles
ADD . /Users/username/Desktop/Dockerfiles
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 80
ENV NAME var_name
CMD ["python", "ApiClient.py"]
1st issue: I have used this WORKDIR and ADD because that's where the input and output files exist. Is it wrong to declare these directories?
2nd issue: The script asks the user to input some info such as the email and password. However, when I run docker run -p 4000:80 newapp I get the following error:
username = ("Please enter your username")
EOFError: EOF when reading a line
Why am I getting this error? | Docker-Python script input error |
After struggling with this same issue for some days, I found that the problem was that the firewall was preventing the WebSocket from working. I had Panda Antivirus installed with its firewall enabled. When I turned it off, used the Windows firewall instead, and opened that incoming port, it started working. Hope it helps | I'm new to front-end web app development. I'm receiving a WebSocket connection failure as follows: WebSocket connection to 'ws://127.0.0.1:7983/websocket/' failed: Error in connection establishment: net::ERR_EMPTY_RESPONSE. I looked up this WebSocket error and was directed to the following pages: Shiny & RStudio Server: "Error during WebSocket handshake: Unexpected response code: 404"; WebSocket connection failed with nginx, nodejs and socket.io; Rstudio and shiny server proxy setting. I then downloaded nginx on my Windows 7 machine, added the following to nginx.conf, saved, and executed runApp(): location /rstudio/ {
rewrite ^/rstudio/(.*)$ /$1 break;
proxy_pass http://localhost:7983;
proxy_redirect http://localhost:7983/ $scheme://$host/rstudio/;
}
This didn't seem to solve the issue. I think I may need to add some extra stuff to the nginx.conf file or put it in a specific directory. Please assist. Thanks!
EDITED the nginx.conf script as follows: location /rstudio/ {
rewrite ^/rstudio/(.*)$ /$1 break;
proxy_pass http://127.0.0.1:5127;
proxy_redirect http://127.0.0.1:5127/ $scheme://$host/rstudio/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
} | Shiny Websocket Error |
Upgrading to the latest MongoDB (3.0.2) helped resolve this issue for me. P.S. Make sure you kill the already-running mongod process using killall -15 instead of pkill -9, as the latter could cause damage. (answered Apr 20, 2015 by Harshil Gurha) | We just migrated our infrastructure on AWS from one account to another.
The mongo version installed on the server is 2.4.9
I am new to MongoDB and faced the following 2 errors when I ran the web app:
{"name":"MongoError","errmsg":"exception: FieldPath field names may not start with '$'.","code":16410,"ok":0}
and
{"name":"MongoError","errmsg":"exception: the $cond operator requires an array of 3 operands","code":16019,"ok":0}
The web app was working on our previous instances. Can anyone point me in the right direction? | MongoError exception: FieldPath field names may not start with '$' |
That's expected. If you really want what's in the entrypoint.sh, you can do something like this: kubectl exec -it <pod-name> -c <container-name> -- /path/to/entrypoint.sh. Hope it helps! | I have an entrypoint defined in my container image, and it runs before the args specified in my deployment manifest, as it should. But when I execute a command on that container using kubectl exec, it seems to bypass the container's defined entrypoint. Is this the expected behavior? Can I somehow force it to always use the entrypoint commands? | Does kubectl exec bypass the container's defined entrypoint? |
No, it is not possible. The Twitter API, like more or less every other web site API, does not let you set up such a feature; it only lets you request data. It is up to you to request this data regularly and do something with it. In theory, Twitter could allow you to set up a command to be run whenever you tweet, but bear in mind that the service is free. The only way I can think of to get this sort of setup is to use the Twitter API to send the update to Twitter yourself, and then at the same time do whatever else you want to do. This still lets you get away with tweeting from just one place, but you will always have to use that one other place. | I have a PHP script on my server that I want to run every time I post a new tweet to Twitter. Is there a way to automate this? I could of course set up a cron job to run the script every five minutes, or run the script manually every time after tweeting, but neither of those is instant, and that's exactly what I'm looking for. Is it possible to use the Twitter API to run a script / get a URL every time my timeline is updated? | Use the Twitter API to run a script every time I post a new tweet |
Use the following command to import an image from an ACR in a different subscription.
az acr import --name <DESTINATION-ACR> --force\
--source <SOURCE-ACR-NAME>/IMAGE-NAME:IMAGE-TAG \
--image IMAGE-NAME:IMAGE-TAG \
--username <USERNAME> \
--password <PASSWORD>
edited Jul 4, 2022 at 7:19
Harsh Manvar
answered Jun 27, 2022 at 8:54
Vipul Sharda
|
|
When building my docker image locally and importing it to Azure Container Registry (ACR), the image platform is Windows. It should be Linux. How can I import my image with a Linux platform?
I am using this command to import the image to my ACR.
az acr import
| How can I import a container to ACR and configure it to be a linux platform container? |
Nothing much to do there:
Create an empty directory and add it to git or create a project on github and clone it to get the empty directory
Add your projects one by one to that directory and checkin
NOTE: Tracking changes will become a bit tedious with this approach. You will need to be careful if you are working on multiple projects at the same time.
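A minimal sketch of those two steps; the project names below are illustrative stand-ins, and the push lines are left commented since they need a real GitHub remote:

```shell
mkdir Android-projects && cd Android-projects
git init -q
# copy each existing project in, then check it in
mkdir ProjectOne && echo "# Project One" > ProjectOne/README.md
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m "Add ProjectOne"
# git remote add origin https://github.com/<username>/Android-projects.git
# git push -u origin master
```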
|
I have several projects on my laptop. I would like to upload all of them to GitHub and in one repository called Android-projects. After searching the web and being bombarded by different material I got confused. How can I do that? I didn't quite understand the answers found on this website. All my attempts failed.
I have GitHub Desktop and Git Shell installed on my laptop, but don't know how to use them.
| Upload all projects to one github repository |
You can cache blocks in Twig with this extension: https://github.com/asm89/twig-cache-extension. It allows you to cache blocks of a template based on a TTL, a changing cache key, etc. (answered Dec 10, 2013 by asm89)
Comments: "Is it compatible with ZF2 modules?" (Oscar Fanelli) / "As far as I know there is no module integrating it with ZF2; it's just a couple of services and an extension to add to Twig, though. :)" (asm89) / "@OscarFanelli go for it! Make a module for it, or patch the original repo itself if it's supposed to also be a module." (Ocramius) | I've switched from PHPTAL to Twig: a much better experience.
However, in phptal I did "tal:cache" to cache some blocks of code... with Twig, how can I accomplish that? | Cache block with twig |
In short: yes. Typically, you'd overwrite the existing config file(s) in place while nginx is running, test them using nginx -t, and once everything is fine, reload nginx using nginx -s reload. This will cause nginx to spawn new worker processes which use your new config while the old worker processes are shut down gracefully. Graceful means closing listen sockets while still serving currently active connections; every new request/connection will use the new config. Note that in case nginx is not able to parse the new config file(s), the old config will stay in place. (answered May 17, 2015 by VF_) | I have nginx that I am using to receive traffic for multiple domains on port 80, each with an upstream to a different application server on an application-specific port, e.g. abc.com:80 --> :3345
xyz.com:80 --> :3346
Is it possible to:
1. add/delete domains (abc/xyz) without downtime
2. change application-level port mapping (3345, 3346) without downtime
If nginx can't do it, is there any other service that can do it without restarting the service and incurring downtime? Thanks in advance | Can Nginx do Reverse Proxy updates without downtime |
And it seems that just after I asked this, I made a trivial change. This was picked up and pushed. So it seems that you have to wait until you've made a new commit in order for hg-git to pick it up.
|
I'm trying to get the hg-git extension working under Windows and after hours of fiddling, I finally seem to have it working. However, nothing shows up in my git repository even though the output of hg push reads:
importing Hg objects into Git
creating and sending data
github::refs/heads/master => GIT:8d946209
[command completed successfully Wed Oct 20 15:26:47 2010]
| No changes are pushed when using hg-git |
Solution 1: use this answer as a template to see how to configure the whole node to that sysctl value; you can use something like echo 4096 > /proc/sys/net/core/somaxconn. Thereafter you can put a label on the nodes whose VMs have the needed sysctl configuration, and use a nodeSelector in the Pod spec to force scheduling to those nodes. (This only works with non-namespaced settings; net.core.somaxconn appears to be namespaced. I would like to leave this solution here as it might help others.) Solution 2: again, starting from the same answer, you can add --experimental-allowed-unsafe-sysctls=net.core.somaxconn to the kubelet command line (this only works with namespaced settings; net.core.somaxconn is namespaced). Then you can simply do something like (source): apiVersion: v1
kind: Pod
metadata:
name: sysctl-example
annotations:
security.alpha.kubernetes.io/sysctls: net.core.somaxconn=4096
I hope this helps. | Scenario: I have a container image that needs to run with net.core.somaxconn > default_value. I am using Kubernetes to deploy and run in GCE. The nodes (VMs) in my cluster are configured with the correct net.core.somaxconn value. Now the challenge is to start the docker container with the flag --sysctl=net.core.somaxconn=4096 from Kubernetes. I cannot seem to find the proper documentation to achieve this. Am I missing something obvious? | How to pass `sysctl` flags to docker from k8s? |
If the page was created recently, it might not show up in Google search results immediately, because Google has not indexed the page yet.
Apart from this, there are multiple other factors which might be the reason your page is not showing up on Google.
Read here for more details: GitHub repository not listing in Google search
answered Jul 9, 2021 at 18:48
Pbk1303
|
|
I created my repository with GitHub.
When I try to google it, it's not listed in the search results.
https://github.com/Subathra19/An-innovative-packet-labelling-scheme-TCP-PLATO-for-Data-Center-Networks
Thanks in advance
| My GitHub repositories are not known by Google |
With Node.js 10 it's bad, because on a 64-bit OS the heap could exceed your limit value. It should be fine with Node.js 12. However, if you want to scale pods based on CPU activity, setting the max old space size is a good idea: the GC will try to stay under this value (so under your k8s memory request) and the CPU activity will go up. I've written a post here on this subject. | I have a nodejs application running on Kubernetes with limits: apiVersion: apps/v1
kind: Deployment
..
spec:
..
template:
..
spec:
containers:
- name: mynodejsapp
..
resources:
requests:
memory: "1000Mi"
cpu: "500m"
limits:
memory: "2000Mi"
cpu: "1000m"
..
And my Dockerfile for my nodejs application is based on node:10-slim: FROM node:10-slim
..
COPY . .
RUN npm run build
EXPOSE 3000
ENTRYPOINT ["dumb-init", "--"]
CMD [ "node", "./build/index.js" ]
I found some older posts saying that --max_old_space_size should be set. Is that correct? Or will the nodejs process automatically find the existing memory limitations? | Should I pass any memory options to nodejs application running on Kubernetes pod with limits? |
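For illustration (my own sketch, not from the thread): the flag can be baked into the image's CMD; 1536 is an arbitrary value chosen to leave headroom under the question's 2000Mi container limit for stack, buffers and other non-heap memory:

```dockerfile
# cap V8's old-generation heap below the container memory limit
CMD [ "node", "--max-old-space-size=1536", "./build/index.js" ]
```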
If you have already typed $ git remote add origin [email protected]:<username>/<reponame>.git, you cannot type it again, because origin already exists.
It will respond fatal: remote origin already exists, but the address that origin links to may be wrong.
Try typing $ git remote remove origin and then $ git remote add origin [email protected]:<username>/<reponame>.git again. Then type $ git push origin master. If both the address and the SSH key are correct, it may work. (answered Jan 22, 2019 by DDK) | I've checked stackoverflow a lot trying to figure out why I could be receiving this error, because I do have a repo on github for what I am trying to push to. I even regenerated my SSH key and added it to github. I also see: Please make sure you have the correct access rights and the repository exists. When I try to add the repo remotely I see: $ git remote add origin [email protected]:<username>/<reponame>.git
> fatal: remote origin already exists.
$ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin master
Then I get the error. When I try ssh -T [email protected], I see the correct username. Not sure what else to try. | Repo not found fatal error while using git and unable to push to github |
Motion Chart Plugin 1.7 is not compatible with SonarQube 6.0. For the time being, there's no plan to make it compatible. Edit: Nevertheless, I think that 2 new core features may help you manage your team on a day-to-day basis: the Leak Period and the new Quality Model. These features are central to the way we use our software every day. | Since I updated SonarQube to version 6.0, I can't get the Motion Chart plugin to work... All I get is a blank widget and a red top band with a [hide] button. Besides, I have found errors like this in the server log: 2016.08.26 14:23:19 ERROR web[rails] undefined method `snapshot' for #MeasureFilter::Row:0x641ff8d3. Has someone got the Motion Chart plugin 1.7 to work with SonarQube version 6.0? | sonarqube 6.0 and motion chart plugin |
? is a special char in regex, so you need to escape it in the cond's pattern to match ? literally:
RewriteCond %{THE_REQUEST} /entrar-na-sua-conta.html\?redirecionar=/([^\s&]+) [NC]
RewriteRule ^ https://www.portal-gestao.com/%1? [L,R=302]
(answered Apr 19, 2016 by Amit Verma) | How do I redirect https://www.portal-gestao.com/entrar-na-sua-conta.html?redirecionar=/f%C3%B3rum-perguntas-e-respostas/conversation/read.html?id=25 to https://www.portal-gestao.com/f%C3%B3rum-perguntas-e-respostas/conversation/read.html?id=25 using .htaccess and regex? I'm trying:
RewriteCond %{THE_REQUEST} /entrar-na-sua-conta.html?redirecionar=/([^\s&]+) [NC]
RewriteRule ^ https://www.portal-gestao.com/%1? [L,R=302]
And: RewriteRule ^entrar-na-sua-conta.html?redirecionar=/(.*)$ /$1 [R=301,L] | Redirect URL with htaccess and regular expression |
You've most likely run into the WebSocket connection limit.
The JavaScript client of Gremlin does not manage a connection pool. The documentation recommends using a single connection per Lambda lifetime and manually handling retry (if the Gremlin client doesn't do it for you). See: Neptune Limits, AWS Documentation. | I am working on an app that uses AWS Lambda which eventually updates Neptune.
I noticed, that in some cases I get a 429 Error from Neptune: Too Many Requests.
Well, as descriptive as it might sound, I would love to hear an advice on how to deal with it.
What would be the best way to handle that? Although I am using a dead-letter queue, I'd rather not have it go down this road in the first place. Btw, the Lambda is triggered by an SQS (standard) queue. Any suggestions? | AWS Neptune access from Lambda - Error 429 "Too Many Requests" |
I believe if you set the domain attribute on the forms element in your web.config to the same as the one in your custom cookie, it should work. (EDIT: that approach won't work, because the SignOut method on FormsAuthentication sets other flags on the cookie that you are not, like HttpOnly.) The SignOut method basically just sets the cookie's expiration date to 1999, and it needs the domain to set the right cookie. If you can't hardcode the domain, you can roll your own sign-out method: private static void SignOut()
{
var myCookie = new HttpCookie(FormsAuthentication.FormsCookieName);
myCookie.Domain = "mysite.com";
myCookie.Expires = DateTime.Now.AddDays(-1d);
HttpContext.Current.Response.Cookies.Add(myCookie);
}
An authentication cookie is just a plain cookie, so you would remove it the same way you would any other cookie: expire it and make it invalid. | Title should say it all. Here's the code to set the cookie: // snip - some other code to create custom ticket
var httpCookie = new HttpCookie(FormsAuthentication.FormsCookieName, encodedTicket);
httpCookie.Domain = "mysite.com";
httpContextBase.Response.Cookies.Add(httpCookie);
Here's my code to sign out of my website: FormsAuthentication.SignOut(). Environment: ASP.NET MVC 3 Web Application, IIS Express, Visual Studio 2010, custom domain: "http://localhost.www.mysite.com". So when I try to log off, the cookie is still there. If I get rid of the httpCookie.Domain line (e.g. default to null), it works fine. Another weird thing I noticed is that when I set the domain, Chrome doesn't show my cookie in the Resources portion of developer tools, but when I don't set the domain, it does. And secondly, when I actually create the cookie with the custom domain, on the next request when I read in the cookie from the request (to decrypt it), the cookie is there, but the domain is null? I also tried creating another cookie with the same name and setting the expiry to yesterday. No dice. What's going on? Can anyone help? | FormsAuthentication.SignOut Not Working With Custom-Domain Cookie |
I used to virtualize all my development environments using VirtualBox. Basically, I have a Debian VirtualBox image file stamped onto a DVD. When I have a new project, I copy it to one of my external HDDs and customize it for the project. Once my project is delivered, I copy the image from my external HDD to a blank DVD and file it. | I'm getting pretty tired of my development box dying and then having to reinstall a laundry list of tools that I use in development. This time I think I'm going to set the development environment up on a VirtualBox VM and save it to an external HDD, so that I can bring the development environment back up quickly after I fix the real computer. It seems like a good way to make a "hardware agnostic backup" and be able to get back up to speed quickly after a disaster. Has anybody tried this? How well did it work? Did it save you time? | Does anyone use Virtualization to create a quicker disaster recovery of a development environment? |
The only way to allow an app on subdomain.example.com to read a cookie from www.example.com would be for www.example.com to set a top-level example.com cookie. This would allow subdomain.example.com to read it, but it would also allow every other subdomain of example.com to see it - which you said you don't want. To follow this through - cookies are retrieved by name and scoped by the browser. If there are multiple cookies with the same name, you will have collisions. I believe the more generic example.com cookie will be the only one ever returned for subdomain.example.com if BOTH example.com and subdomain.example.com cookies exist. TL;DR: Don't use top-level domain cookies unless you want the data to be the authoritative cookie across all domains (like single sign-on). If you do this for Google Analytics you're going to collide on your different subdomains. | In order to store some Google Analytics data, I would like to access the GA "__utmz" domain cookie (domain=.example.com) from my www subdomain (domain=www.example.com). Is it possible to read this domain cookie from a subdomain? If yes, how can I do that with Rails? cookies[:__utmz] doesn't seem to work with all browsers. I know I could configure my app by setting the cookie domain to '.example.com' in my production.rb (config.action_controller.session = { :domain => ".example.com" }), but I'd rather not (because I don't want my www-subdomain's cookie to be shared among all subdomains). I hope my question is clear enough... Thanks in advance for your help (and sorry for the possible mistakes in my language...) | Read domain's cookie from subdomain with Rails |
...dns.lookup('http://my-service'... The lookup function takes as its first parameter the host name that you want to look up, e.g. google.com, not a URL. You should remove "http://" from the name you passed in, i.e. call dns.lookup('my-service', ...). | I've tried to measure DNS latency in my docker-compose/kubernetes cluster.
setInterval(() => {
console.time('dns-test');
dns.lookup('http://my-service', (_, addresses, __) => {
console.log('addresses:', addresses);
console.timeEnd('dns-test');
});
}, 5000);
But I get addresses: undefined. Any ideas? | Node.js dns.lookup() to internal docker-compose service
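The same mistake is easy to demonstrate outside Node: resolvers expect a bare host name, so strip the scheme first. A small Python illustration (my-service is the hypothetical service name from the question):

```python
from urllib.parse import urlparse

raw = "http://my-service"          # URL-shaped string, as passed in the question
hostname = urlparse(raw).hostname  # extracts just the host part
print(hostname)  # my-service

# `hostname` is what a resolver (dns.lookup, socket.getaddrinfo, ...) expects;
# passing the full "http://my-service" string would fail to resolve.
```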
If it's deleted on the remote as well, you can simply use git fetch --prune and it will prune all the branches that are not on the remotes anymore. On the other hand, if it's not deleted on the remote repository and you won't ever fetch it again, you can simply use git branch -rd <branch_name>. | I have fetched a few remote branches, e.g. git fetch jason sprint-36. I have attached a screenshot (reds are remotes). How would I stop tracking/delete a remote branch from my list? For example, 4 months ago I did git fetch ken retain-cycle-fix and I will never fetch or look at this branch again; how do I remove it? | How to remove fetched remote branches from Git?
GitHub wikis allow you to embed HTML, but not all HTML tags are supported.
To embed supported HTML:
Edit a wiki page.
Ensure the edit mode is on "Markdown".
Type the HTML directly into the main content area (there's no "code" view, like you often see in tools like WordPress).
Which tags aren't supported?
I couldn't find documentation on this, but we have a few clues:
GitHub wikis are built on a software tool called Gollum. We can see which tags are supported in Gollum by default here in the Gollum docs. GitHub may customize these defaults for their use case, but I'll bet it's pretty similar.
I went ahead and created a test wiki here with all the major visual HTML elements added to it (copied from Poor Man's Styleguide). It looks like the main tags that don't display are iframe, video, audio, and all of the various form inputs (textarea, input, select, etc.).
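As a purely hypothetical illustration of mixing the two (actual tag support is subject to the caveats above), a page edited in Markdown mode might contain:

```markdown
Regular **Markdown** works as usual, and supported inline HTML can be mixed in:

<table>
  <tr><th>Version</th><th>Status</th></tr>
  <tr><td>1.2</td><td><b>stable</b></td></tr>
</table>

<sup>Unsupported tags such as iframe, video, audio and form inputs simply won't render.</sup>
```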
|
I would like to create a wiki page that is a preamble (standard markdown) followed by an HTML/JS code listing followed by (in a frame I suppose) the page that this code would generate.
Is this possible?
PS The code is: http://pipad.org/MathBox/slides_simple.html
| Can a GitHub wiki embed HTML |
I didn't see any cron pattern matching your requirement, but you can do this:
var CronJob = require('cron').CronJob;
var x = 1;
new CronJob('0 35 14 * * 0', async function () {
if (x / 2 != 1) {
x++;
// do something
} else {
x = 1;
}
}, null, true, 'America/Los_Angeles'); | The cron job below runs every Sunday at 14:35:00, but I want to run the job every second Sunday at 14:35:00. Is it possible to do that?
var CronJob = require('cron').CronJob;
new CronJob('0 35 14 * * 0', async function () {
}, null, true, 'America/Los_Angeles'); | Nodejs - How to set cron job to run on every 2 Sunday |
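The counter in the answer above fires on every other invocation, but loses its state if the process restarts. An alternative sketch (shown in Python for clarity) derives the on/off decision from the date itself via ISO week parity; note the parity can hiccup once across a 53-week year boundary:

```python
from datetime import date

def runs_this_sunday(d: date) -> bool:
    """Every-other-Sunday gate: fire only on even-numbered ISO weeks,
    so no counter state has to survive process restarts."""
    return d.isocalendar()[1] % 2 == 0

# Two consecutive Sundays fall in consecutive ISO weeks,
# so exactly one of them fires.
print(runs_this_sunday(date(2024, 1, 7)))   # ISO week 1 -> False
print(runs_this_sunday(date(2024, 1, 14)))  # ISO week 2 -> True
```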
I know this question is old, but I thought it was worth adding that if you are using Docker For Mac, you can navigate to Docker > Preferences > Resources > Advanced, and on that page are several options to control resource settings such as:
Number of CPUs
Memory
Swap
Disk Image Size
and other settings. I've noticed that if I specify 2GB of memory, as long as Docker Desktop is running, it will use the entire 2GB of memory.
|
|
I am new to docker. I have a nodejs(sails.js) application. I have deployed it using docker. There is only one docker container running on my Ubuntu machine.
When I monitor the memory usage of my docker container using the "docker stats" command, below are the stats I get (as shown in the image):
My question is: why is this single docker container eating so much memory (~207MiB)? In the future, if I want to increase the number of containers running per host, will memory consumption grow in these multiples? It doesn't seem a feasible solution if I want to run 100 containers of the same app on my machine. Is there any way to optimize the memory consumption of docker containers?
(When I run the same application without docker (sails lift / node app.js) it only consumes 80MB of memory.)
| Why docker container is consuming lot of memory? [closed] |
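As a follow-up to the question above: besides the Docker Desktop-wide settings, memory can also be capped per container. A hedged sketch in Compose form (the service name and image are placeholders; mem_limit/mem_reservation are the Compose v2 file-format spellings, so adjust to your Compose version):

```yaml
version: "2.4"
services:
  app:
    image: my-sails-app:latest   # placeholder image name
    mem_limit: 256m              # hard ceiling on container memory
    mem_reservation: 128m        # soft reservation below the ceiling
```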
The problem is that if User B has the right to commit, you cannot remove their access to the merge button.
The only way to remove User B's right to press the button is to remove it for everyone using branch protections, and only allow a GitHub App to merge.
You could leverage an app such as Mergify and then write a rule that matches what you described.
|
|
I want to allow only the pull request author to merge it.
I have a GitHub repository with branch protection, ownership to request mandatory reviews, and a minimum number of reviews set for pull requests.
However, all these checks happen before someone clicks on merge.
Here is an example:
User A creates a pull request.
User A cannot merge it until it passes all the checks and has at least one approval.
User B approves the pull request (User B has the right to commit to the repository)
Now, I don't want User B to merge the pull request. However, because User B didn't commit, User B triggered no checks.
Two solutions come to mind, but both could be wrong:
Create a GitHub Action that triggers on "Click Merge" (will have to deal with merge queue)
Create a check that verifies the username
I don't know how to set up either of these solutions. The only partial answer I found after much research was this article: How to get the author of a PR?
I will appreciate any help,
Thank you very much,
| How to allow only the pull request author to merge it? |
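For the "check that verifies the username" idea in the question above: the PR author is exposed in GitHub's REST payload as user.login (and as github.event.pull_request.user.login inside a workflow). A minimal, hedged Python sketch of the comparison, run against a hard-coded sample payload rather than the live API:

```python
def merge_allowed(pr_payload: dict, actor: str) -> bool:
    """Return True only when the acting user is the pull request author.
    pr_payload is shaped like GitHub's 'get a pull request' REST response."""
    return actor == pr_payload["user"]["login"]

# Sample payload with made-up values, mirroring the REST response shape.
pr = {"number": 42, "user": {"login": "alice"}}
print(merge_allowed(pr, "alice"))  # True: the author may proceed
print(merge_allowed(pr, "bob"))    # False: anyone else is rejected
```

As the answer explains, GitHub itself won't block the merge button based on this; such a check can only fail a required status, not intercept the click.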
Copy the main.dart file into the lib folder: navigate to the 'lib' folder in your project and paste the main.dart file into this folder.
Set the Dart path in Android Studio: open Android Studio, go to 'File' > 'Settings' > 'Languages & Frameworks' > 'Dart', and set the Dart SDK path to the appropriate location.
Run the code: open the main.dart file in Android Studio, right-click on the file and select 'Run'.
Code execution in console: after running the code, check the console in Android Studio for any output or errors.
Check screenshots: capture screenshots of the running application for reference or documentation.
Ensure each step is followed accurately to execute the code successfully. If you encounter any issues, refer to the console output for debugging. (Screenshot: Android Studio project structure)
So the project that I want to run it from flutter android studio is below.Github Link:I want to run this project from github in andoid studio flutter but unable to do it that's why I want to know the whole process | trying to run a project from github but unable to do it in andoird studio flutter |
Hi, let me walk you through the basic steps of pushing with git. Just check that you are following all of these.
If you have a new project and it does not have git initialized, then do:
git init
Then, whenever you make changes in the project folder and save them, go back to git and do:
git status
If it shows some file names in red, those files have been changed and you now need to stage the changes you want to commit.
If you want to stage a single file, copy the file path shown in the bash window and do:
git add <file name>
Or, if you want to stage all the changes, do:
git add .
The next step is to commit the staged changes. To do that, write:
git commit -m "<your commit message here>"
A shortcut for staging all the modified tracked files and committing them is:
git commit -am "<your commit message here>"
Once you have committed the files, simply do:
git push
Some additional commands that are useful:
git branch --- to know the current branch you are on
git checkout <branchname> --- to switch to some other branch
git checkout -b <branchname> --- to create a new branch
git gc --prune=now --- to prune unreachable objects immediately
I hope this helps. | I am unable to add any files to git when I run 'git add .' in a Git Bash session. It shows the error: 'EG' does not have a commit checked out.
fatal: adding files failed. I am unable to do any operation in Git now. I tried all the git checkout commands and deleted branches. I even removed Git Bash from my system. I tried to search for commits in Git GUI, but there too I found nothing. I did all this but with no success. Please help me. Attaching an image of the steps I followed and the error I am getting. | Unable to add files to git due to a checkout issue
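The init/status/add/commit cycle from the answer above, exercised end to end in a throwaway repository. This sketch drives git from Python via subprocess (it assumes a git binary on PATH; push is omitted since it needs a remote):

```python
import pathlib
import subprocess
import tempfile

repo = tempfile.mkdtemp()  # throwaway working directory

def git(*args: str) -> str:
    """Run a git command inside the throwaway repo and return its stdout."""
    result = subprocess.run(["git", "-C", repo, *args],
                            check=True, capture_output=True, text=True)
    return result.stdout

git("init", "-q")
git("config", "user.email", "you@example.com")  # identity required to commit
git("config", "user.name", "You")

pathlib.Path(repo, "notes.txt").write_text("hello\n")
print(git("status", "--short"), end="")  # '?? notes.txt' -> untracked change
git("add", "notes.txt")                  # stage it
git("commit", "-q", "-m", "add notes")   # commit the staged change
print(git("log", "--oneline"), end="")   # exactly one commit in the history
```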
I think a more illustrative version of your flow would be something like this:
Github
/ | \
Staging Server | Production Server
\ | /
\ | /
\ | /
Development Machine
So you would push to GitHub from your dev machine; then when you deploy to staging or production (using, I assume, Capistrano), it will check out code from GitHub on the respective remote server, each from its own branch. I would use master for production and maybe a branch called dev for staging. There are lots of scenarios here, but another common one is to use webhooks on GitHub to create an event each time you push to a branch. That event could, for example, deploy code to your staging or continuous integration server. They are pretty neat, but if you are just starting out, I would keep it simple. There is a short and sweet Railscast on setting up a staging environment here (sorry, not free). | I am new to all this, so my apologies if this sounds very basic, but I am looking to use GitHub and a staging server (staging.example.com) for my RoR app, and then I will be moving the staging code to example.com. So I will be having something like this: Local System <----> Github <----> staging server <---> Live Server/site? | How to use staging server and github for my RoR App ?
Many projects have modeline options (:help modeline) in the comments at the bottom of the file. For example:
# vim: noet
These kinds of options will override the options in your ~/.vimrc. Project maintainers often put these in for consistency across the code. Some projects don't have them but still conform to a set of rules; for example, they may use tabs instead of hard spaces. If you use hard spaces and are contributing to a project that uses tabs, it's best to keep it consistent. You can accomplish this by several means. One example would be to set an autocommand for that project in your vimrc, like this:
au BufRead,BufNewFile /path/to/project/** set noet
so that you don't forget to adhere to that project's wishes. Another way is to set it manually. You could also change the project's tabs to hard spaces and then change them back. For example, you can do :retab! to change the tabs to your preferences. Then, when you're done editing, you can change the settings back to their preferences and do another:
:set noet
:retab! | I am pretty new to "contributing to a GitHub project" but excited to do so. I am wondering how I can match the existing indentation while contributing to a GitHub project. For example, I use Vim and have my own .vimrc file, but when I fork and then clone a Ruby project to contribute, how do I use the same indentation? | Indentation of code while contributing to a github project
synchronized ensures you have a consistent view of the data. This means you will read the latest value and other caches will get the latest value. Caches are smart enough to talk to each other via a special bus (not something required by the JLS, but allowed). This bus means that reads don't have to touch main memory to get a consistent view. (From the comments: if you only use synchronized, you wouldn't need volatile; volatile is useful for very simple operations for which synchronized would be overkill, and synchronized reads don't mean anything without synchronized writes.) | When a synchronized method is completed, will it push only the data modified by it to main memory, or all the member variables? Similarly, when a synchronized method executes, will it read only the data it needs from main memory, or will it clear all the member variables in the cache and read their values from main memory? For example:
public class SharedData
{
int a; int b; int c; int d;
public SharedData()
{
a = b = c = d = 10;
}
public synchronized void compute()
{
a = b * 20;
b = a + 10;
}
public synchronized int getResult()
{
return b*c;
}
}
In the above code, assume compute is executed by threadA and getResult is executed by threadB. After the execution of compute, will threadA update main memory with a and b, or with a, b, c and d? And before executing getResult, will threadB get only the values of b and c from main memory, or will it clear the cache and fetch values for all member variables a, b, c and d? | Synchronized data read/write to/from main memory
Serialization of 'Closure' is not allowed: you can use the PHP SuperClosure library to get rid of this. Also, you can try other memory storages like Redis or Memcache to cache your objects. See this resolved Stack Overflow question. | I use a PHP library and create an instance of one of its classes: $elnew = new LibClass()
If I make like this$elem = $cache->getItem($ig_name);
if (!$elem->isHit()) {
$elem->set($elnew);
$cache->save($ig);
}
$elem->isHit() is always false. I checked how the cache works with a string - all is OK.
Also, I'm not able to serialize/unserialize this object because it says Serialization of 'Closure' is not allowed, and there is no way to modify LibClass. How can I save $elnew to the cache? Are there any variants with Symfony components? Or maybe other libs can help me? | Symfony cache doesn't work with class/object
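The underlying limitation isn't PHP-specific: general-purpose serializers refuse closures because a function's captured environment can't be written out by value. Python's pickle raises the analogous error, for example (illustrative only - this is not Symfony code):

```python
import pickle

# Plain data round-trips fine...
blob = pickle.dumps({"answer": 42})
print(pickle.loads(blob))  # {'answer': 42}

# ...but a closure/lambda cannot be serialized, much like
# PHP's "Serialization of 'Closure' is not allowed".
try:
    pickle.dumps(lambda x: x + 1)
except Exception as exc:
    print(type(exc).__name__)  # a pickling error is raised
```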
Like you did: clone the repo
$ git clone https://github.com/adityai/dashboards.git
This repo does contain a Dockerfile (a file which describes the setup of your docker image). You can build a docker image from it:
$ cd dashboards
$ docker build -t my-dashboard .
The dockerfile starts from base image httpd (apache).
After the build of your dockerfile you can see your image:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my-dashboard latest 81a5607c03ba About a minute ago 204 MB
And you can create a container instance from that image. I must admit there is not much info about the docker run command on the github page or docker hub page.
Now you can run the image. I saw that port 80 was exposed in the dockerfile so I mapped port 80 of the container on port 80 of my local machine.
$ docker run -d -p 80:80 my-dashboard
Now I can visit the dashboards in my browser at localhost:80
|
|
I forked the keen/dashboards github repo and I am trying to create a Dockerfile for running the dashboard in a Docker container.
My fork: https://github.com/adityai/dashboards
I am not familiar with node and npm. The Docker image was built successfully.
https://hub.docker.com/r/adityai/dashboards/
I am not sure if I am using the right command to start the dashboards app (npm start) because when I try to run the docker container locally, it does not start. It exits right away.
docker run -d -p 3000:3000 --name=keen-dashboard adityai/dashboards:gh-pages
| How-to install and run keen/dashboards |
The /tmp filesystem is often backed by tmpfs, which stores files in memory rather than on disk. My guess is that this is the case on your nodes, and the memory is being correctly charged to the container. Can you use an emptyDir volume instead?
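A sketch of the suggested swap in the Job spec from the question: emptyDir is backed by the node's disk by default and only becomes RAM-backed (tmpfs) if you explicitly set medium: Memory, so the existing volumeMounts entry can stay the same:

```yaml
volumes:
  - name: workspace
    emptyDir: {}   # node-disk backed by default; not charged as container memory
```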
|
I have a Kubernetes cluster that takes jobs for processing. These jobs are defined as follows:
apiVersion: batch/v1
kind: Job
metadata:
name: process-item-014
labels:
jobgroup: JOB_XXX
spec:
template:
metadata:
name: JOB_XXX
labels:
jobgroup: JOB_XXX
spec:
restartPolicy: OnFailure
containers:
- name: worker
image: gcr.io/.../worker
volumeMounts:
- mountPath: /workspace
name: workspace
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
volumes:
- name: workspace
hostPath:
path: /tmp/client-workspace
Note that I'm trying to mount a folder in the host into the container (workspace). Note also the memory limits defined.
On my container, I download a number of files into workspace, some of them being pretty large (They are downloaded with gsutil from GCS, but don't think that's too important).
When the files I download exceed the memory limits, my code breaks with a "device out of space" error. This doesn't completely make sense, because I'm storing the files in a mount that is backed by the host's storage, which is more than enough. It's also mentioned in the docs that the memory limit caps the amount of RAM available to the container, not storage. Still, when I set the limit to X Gi, it breaks after X Gi of download pretty consistently.
My container is based on ubuntu:14.04, running a shell script with a line like this:
gsutil -m cp -r gs://some/cloud/location/* /workspace/files
What am I doing wrong? Will definitely need to have some limits for my containers, so I can't just drop the limits.
| Kubernetes pod running out of memory when writing to a volume mount |
I think there is only one assumption you can rely on: this is the interface provided by AWS, and what is not worded specifically in the AWS documentation is not supported. When a new programmatic access key is created it always differs, so it does not make much sense that it will "stay the same between calls". I think the name "AWS_SESSION_TOKEN" hints that you should use it during one session. If it were for one call, it might have been named "AWS_ONECALL_TOKEN". I would assume that STS assume-role is a much slower operation than a regular API call. My suggestion is to treat this as a "three-part password for one session" and not think much about it, unless you want to create your own implementation of a similar thing. Then it may be very instructive to analyze the advantages and disadvantages of this approach. | aws sts assume-role returns three fields as the issued Temporary Security Credentials:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
The first two have the same format as a user's Access Key, but the third field, AWS_SESSION_TOKEN, is special to the temporary credential. I have two questions:
1. If AWS_SESSION_TOKEN is meant to represent/encode the temporary validity, why do we still need the first two fields (because after the expiration, we will need to get another AWS_SESSION_TOKEN anyway)?
2. If my client calls the STS API twice, between the two responses returned from aws sts assume-role, will/could AWS_ACCESS_KEY_ID be the same? | What's the functionality of AWS_SESSION_TOKEN returned from STS API?
OK, got this done. The problem is in the smart HTTP buffer chunking mechanism; whoever is stuck at this, just run this git command:
git config http.postBuffer 104857600
or any other number of bytes - but please not a very big one; up to 50MB, as they suggest on their website.
|
I spent all day trying to successfully push a piece of code into our Git repo. When I generate a public key for SSH, I get a Git fatal error saying "Access denied", and I've read it's better for VS to go with HTTP.
Now, when I switch to HTTP everything works well, but the final git push origin master this time ends with "The remote end hung up unexpectedly".
When I run ssh -T... I get the same "Permission denied".
What am I doing wrong?
Still, if you know that SSH works well with VS, then please suggest what I need to try to successfully add the public key (I went through the steps on the git website and ran into Permission denied),
but the rest works fine, I can pull data from the repo.
| How to properly set up Visual Studio 2013 Git Source Control Provider for Http? |
Actually, as in absolutely? On a modern operating system, no. In some environments, yes.
It's always a good plan to clean up everything you allocate as this makes it very easy to scan for memory leaks. If you have outstanding allocations just prior to your exit you have a leak. If you don't free things because the OS does it for you then you don't know if it's a mistake or intended behaviour.
You're also supposed to check for errors from any function that might return them, like fread, but you don't, so you're already firmly in the danger zone here. Is this mission critical code where if it crashes Bad Things happen? If so you'll want to do everything absolutely by the book.
As Jean-François pointed out, the way this trivial code is composed makes it a bad example. Most programs will look more like this:
void do_stuff_with_buf(char* arg) {
long buflen = atol(arg);
char *buf = malloc(buflen);
fread(buf, 1, buflen, stdin);
// Do stuff with buf
free(buf);
}
int main(int argc, char *argv[]) {
if (argc < 2)
return 1;
do_stuff_with_buf(argv[1]);
return 0;
}
Here it should be more obvious that the do_stuff_with_buf function should clean up for itself, it can't depend on the program exiting to release resources. If that function was called multiple times you shouldn't leak memory, that's just sloppy and can cause serious problems. A run-away allocation can cause things like the infamous Linux "OOM killer" to show up and go on a murder spree to free up some memory, something that usually leads to nothing but chaos and confusion.
|
Suppose I have a program like the following
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
if (argc < 2) return 1;
long buflen = atol(argv[1]);
char *buf = malloc(buflen);
fread(buf, 1, buflen, stdin);
// Do stuff with buf
free(buf);
return 0;
}
Programs like these typically have more complex cleanup code, often including several calls to free and sometimes labels or even cleanup functions for error handling.
My question is this: Is the free(buf) at the end actually necessary? My understanding is that the kernel will automatically clean up unfreed memory when the program exits, but if this is the case, why is putting free at the end of code such a common pattern?
BusyBox provides a compilation option to disable calling free at the end of execution. If this isn't an issue, then why would anyone disable that option? Is it purely because programs like Valgrind detect memory leaks when allocated memory isn't freed?
| Should you free at the end of a C program [duplicate] |
Nginx would not normally specify the port as part of an external redirect if the port number is the same as the default port for the scheme. Port 80 for http and port 443 for https.
You can specify the port explicitly in the rewrite statement.
For example:
location = /order.pl {
return 301 $scheme://$host:$server_port/home;
}
Note: I used curl to test this, as the browser dropped the port from the address-bar for exactly the same reasons.
|
I am having difficulty rewriting a URL and reverse proxying the request to a Spring Boot app. The rewrite works, but I am losing the port number, and because of that it is not working. For example:
localhost:80/order.pl converts into localhost/home. The port gets lost and the app is not receiving the request.
Similar examples online don't work.
server
{
listen 80;
server_name localhost;
set $upstream localhost:8050;
location ~ "^\/order.pl$"
{
rewrite "^\/order.pl$" "/home" permanent;
}
location /
{
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
proxy_connect_timeout 30;
proxy_send_timeout 30;
proxy_read_timeout 30;
proxy_pass http://$upstream;
}
}
If I don't do the rewrite, the reverse proxy works, but with the rewrite I lose the port number. Any info would be appreciated.
Thanks
| NGINX: Rewrite url and reverse proxy to a different port |
My understanding of this "to request certificates with multiple identifiers" is that you will be able to associate multiple domains with one certificate. These domains will most likely be stated in the subjectAltName extension of the certificate. Each domain will be validated by the CA, and only validated domains will be placed in the issued certificate. It is not written clearly in the specification, but it makes sense to me after reading section 5.6, specifically:
The CSR encodes the client's requests with regard to the content of
the certificate to be issued. The CSR MUST contain at least one
extensionRequest attribute [RFC2985] requesting a subjectAltName
extension, containing the requested identifiers.
The values provided in the CSR are only a request, and are not
guaranteed. The server or CA may alter any fields in the certificate
before issuance. For example, the CA may remove identifiers that are
not authorized for the key indicated in the "authorization" field. | I'm interested in the upcoming Automated Certificate Management Environment (ACME). I downloaded the demos and tried it out with my main domain.
I still have a question though:
Using the regular certification process, I'm able to get a certificate with SAN, so I can set it on my server (Node.js) and serve it for all the subdomains (which are vhosts). The problem is that the current draft states the following:
Key Authorization
This process may be repeated to associate multiple identifiers to a key pair (e.g., to request certificates with multiple identifiers)
Does this mean that I need to issue a new certificate generated from the same key for each individual subdomain, even if they are part of the same main domain ("main identifier")? Thank you for your answers. | ACME - Acquire certificate for subdomains with SAN
The first solution that comes to my mind is to volume mount your external directory (storage base path) to an internal directory in your container.
Then, you can create the directories in that internal directory, and these directories will also be created in your storage base path.
The code will then need to use this internal directory as the path.
Docker Volumes
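A sketch of the mapping the answer describes, in Compose form (the service name, image, and configuration variable are placeholders; the app should then use the internal /storage directory as its base path):

```yaml
services:
  myapp:
    image: myapp:latest                 # placeholder
    volumes:
      - "D:\\STORAGE_ROOT:/storage"     # host directory -> internal directory
    environment:
      - Storage__BasePath=/storage      # hypothetical setting consumed by the app
```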
|
What I'm trying to do is simply write files to a path:
private string CreateIfMissing(string path)
{
path = Path.Combine(_options.baseBath, path);
if (!Directory.Exists(path))
Directory.CreateDirectory(path);
return path;
}
Suppose my storage base path is D:\STORAGE_ROOT
It works fine until I run it in a docker container; then it creates the full path again inside my app folder:
C:\App\MyRunningAppFolder\D*\STORAGE_ROOT\path_value.
Is there a way to force the directory to point to the given path, not the app path?
| Docker: writing a file to physical or shared path, using docker and netcore3.1 |
You need to use the repository's URL when you clone a github project. You can find this by clicking on the Code tab at the top of the project's web page. For the project you linked, the URL is https://github.com/playframework/Play20.git. If you are using the command line, you can type
git clone https://github.com/playframework/Play20.git
to clone the project.
|
I want to clone the files from here:
https://github.com/playframework/Play20/tree/master/samples/java/forms
and it's the first time I'm using GitHub.
I couldn't understand which .git file I should try to clone.
Many thanks.
| Cloning from github - how do you find the .git? |
Hi, configure the keystore along with the request test step: select the request test step, go to its properties, and select the keystore which you want to send. I hope the keystore you are using has the server certificate imported. | There is one SOAP web service working with 2-way SSL. Our client certificate (public key) has been shared with the web service provider. We are trying to call this SOAP web service with the latest SoapUI 5.5 on a Windows 2012 R2 machine. We have configured our certificate (private key) in SoapUI, and we are capturing the logs with Wireshark during execution. Wireshark says the client certificate is not being sent, as you can see in the screenshot below. I can give more details if required... You can also see the SoapUI configuration and the service call below... Client Certificate Configuration in SoapUI: Service call execution in SoapUI: Edit 1: We have spent 8 days trying to figure out this problem. If anyone believes they can solve this problem for us, we are OK to pay for it. Thank you. | Client certificate is not being sent to the server
The .virtualenv/ directory should not be included in the zip file.
If the directory is located in the same directory as serverless.yml, then it should be added to exclude in the serverless.yml file, else it gets packaged along with the other files:
package:
exclude:
- ...
- .virtualenv/**
include:
- ...
|
|
I have a python script that I want to run as a lambda function on AWS. Unfortunately, the unzipped package is bigger than the allowed 250 MB, mainly due to numpy (85 MB) and pandas (105 MB).
I have already done the following but the size is still too big:
1) Excluded unused folders:
package:
exclude:
- testdata/**
- out/**
- etc/**
2) Zipped the python packages:
custom:
pythonRequirements:
dockerizePip: true
zip: true
If I unzip the zip file generated by serverless package, I find a .requirements.zip which contains my python packages, and there is also my virtual environment in the .virtualenv/ folder which contains, again, all the python packages. I have tried to exclude the .virtualenv/../lib/python3.6/site-packages/** folder in serverless.yml, but then I get an Internal server error when calling the function.
Are there any other parameters to decrease the package size?
| Reduce size of serverless deploy package |
It greatly depends on the requirements of your app. Room allows you to save and organise the data. Specific queries and extraction of distinct objects are very powerful if needed. Besides that, you can be sure the data won't be deleted when the device needs storage and clears the cache folders. One problem, however, is data integrity, which would require some sort of synchroniser between your app and the backend server. I would advise you to use Room if you do any sort of data manipulation and/or want to offer a certain and reliable offline user experience. HTTP cache is simpler and quite a straightforward solution: you only need to add an interceptor to your OkHttp client and you are ready to go. This would be the solution if your app's main purpose is simply displaying data. | I looked into solving the problem of accessing data offline in Android and came across the Room library and HTTP cache-control. I already have all of the Retrofit / OkHttp responses done in my app. Which is better to implement when there is no Internet connection? | For accessing data offline, better to use Room library or HTTP cache-control?
If you need automation, you can consider Argo CD Image Updater, whose update strategies include:
latest/newest-build - Update to the most recently built image found in a registry
It is important to understand that this strategy considers the build date of the image, not the date when the image was tagged or pushed to the registry.
If you are tagging the same image with multiple tags, these tags will have the same build date.
In this case, Argo CD Image Updater will sort the tag names lexically descending and pick the last tag name of that list.
For example, consider an image that was tagged with the f33bacd, dev and latest tags.
You might want to have the f33bacd tag set for your application, but Image Updater will pick the latest tag name.
argocd-image-updater.argoproj.io/image-list: myimage=some/image
argocd-image-updater.argoproj.io/myimage.update-strategy: latest
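The tie-break described above is easy to reproduce. As a small Python sketch (not Argo CD's actual code), here is why latest wins over f33bacd and dev when all three share a build date:

```python
# Tags with the same build date are ordered by name; the lexically last one wins.
tags = ["f33bacd", "dev", "latest"]

picked = sorted(tags)[-1]  # equivalent to max(tags)
print(picked)  # latest
```

So with equal build dates, f33bacd can never be picked by this strategy; pin the tag explicitly if you need it.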
(Answered by VonC, Sep 12, 2022.)
Comment (asturm, Jun 1, 2023): Please note that ArgoCD Image Updater is not considered production-ready by its own docs. I have yet to find a production-ready solution for that problem; does anybody have a suggestion?
Comment (VonC, Jun 1, 2023): @asturm Good point. I did not find a production-ready solution either.
|
I have started to learn GitOps with ArgoCD, and I have one basic doubt. I am unable to test ArgoCD because I do not have a cluster, so it would be kind of you to clear my doubt.
As an example, currently I am running my deployment using the test:1 Docker image. Then, using Jenkins, I upload test:2 and put test:2 in place of test:1; ArgoCD detects the change and applies the new image in the cluster.
But what if I had used test:latest before, and then, using Jenkins, I upload a new image with the same name, test:latest? What will happen now? Will ArgoCD deploy the image (the name and tag of the new and previous images are the same)?
| Deploy through ArgoCD with same image name and tag ( image:latest ) |
With a LOT of help from AWS paid support, I got this working. The reality is I was not far off; it came down to some sed syntax. Here's what currently works (Gist):
option_settings:
  - option_name: AWS_SECRET_KEY
    value:
  - option_name: AWS_ACCESS_KEY_ID
    value:
  - option_name: PORT
    value: 8081
  - option_name: ROOT_URL
    value:
  - option_name: MONGO_URL
    value:
  - option_name: MONGO_OPLOG_URL
    value:
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: ProxyServer
    value: nginx
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: GzipCompression
    value: true
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public

container_commands:
  01_nginx_static:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
      proxy_set_header Upgrade $http_upgrade;\
      proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
In addition to this, you need to go into your Load Balancer and change the Listener from HTTP to TCP, as described here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.elb.html | I am running Meteor on AWS Elastic Beanstalk. Everything is up and running, except that it's not running WebSockets, with the following error: WebSocket connection to 'ws://MYDOMAIN/sockjs/834/sxx0k7vn/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400. My understanding was to add something like:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
to the proxy config, via my YML config file. Here is my .ebextensions config file:
files:
"/etc/nginx/conf.d/proxy.conf" :
mode: "000755"
owner: root
group: root
content: |
proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
I have ssh'd into the server and I can see proxy.conf with those two lines in it. When I hit my webserver, I still see the "Error during WebSocket handshake" error. I have my Beanstalk load balancer configured with sticky sessions and the following ports. BTW, I did see https://meteorhacks.com/load-balancing-your-meteor-app.html and I tried to:
Enable HTTP load balancing with Sticky Session on Port 80
Enable TCP load balancing on Port 8080, which allows websockets. But I could not seem to get that working either. Adding another shot at some YAML that does NOT work, here: https://gist.github.com/adamgins/0c0258d6e1b8203fd051 Any help appreciated? | How do I customize nginx on AWS elastic beanstalk to loadbalance Meteor? |
I have also run into this problem many times. By default, MySQL only allows root to be accessed by the localhost user; that means that even if you have opened port 3306:3306, you will still need to add the user. Follow these commands and the error will be resolved: https://stackoverflow.com/a/11225588 | I'm creating a Laravel project in a Docker container, along with MySQL and phpMyAdmin. When trying to migrate (or access the database from phpMyAdmin), I get an access denied error. I've tried several SO solutions but none of them worked; I also tried ones in GitHub issues. Here is my docker-compose.yml:
version: "3"
services:
  web:
    container_name: ${APP_NAME}_web
    build:
      context: ./docker/web
    ports:
      - 9000:80
    volumes:
      - ./:/var/www/app
    networks:
      - mynet
  db:
    image: mysql:5.7
    container_name: db
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: laracocodb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - mysqldata:/var/lib/mysql/
    networks:
      - mynet
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpma
    links:
      - db:db
    ports:
      - 9191:80
    environment:
      MYSQL_USERNAME: root
      MYSQL_ROOT_PASSWORD: root
      PMA_HOST: db
    networks:
      - mynet
networks:
  mynet:
    driver: bridge
volumes:
  mysqldata:
    driver: local
No matter where I access the database from (the db container's bash, the phpMyAdmin index page, or the web service when trying to migrate the database), the error is always access denied. | How to fix "Access denied for user 'root'@'172.22.0.4' (using password: YES)" when connecting to mysql container? |
For anyone coming into this error: the correct way to get the hosted zone ID varies between target types. To find the correct hosted zone ID, go to the official documentation, scroll down to HostedZoneId, and look for your target type. In this particular case, if you're using an ALB or NLB, you need the CanonicalHostedZoneID attribute of said LB. If it's a Classic LB, you need the CanonicalHostedZoneNameID attribute. | I am trying to update an existing alias_dns_name to a different new ELB using python boto '2.38.0':
def a_record_alias_update(myRegion, myDomain, elbName, elbZone):
    dnsConn = route53.connect_to_region(myRegion)
    myZone = dnsConn.get_zone(myDomain + '.')
    changes = route53.record.ResourceRecordSets(dnsConn, myZone.id)
    add_change_args_upsert = {
        'action': 'UPSERT',
        'name': 'dev.' + myDomain + '.',
        'type': 'A',
        'alias_hosted_zone_id': elbZone,
        'alias_dns_name': elbName,
        'alias_evaluate_target_health': True
    }
    change = changes.add_change(**add_change_args_upsert)
    result = changes.commit()
    return result
Error:
result = changes.commit()
File "/Library/Python/2.7/site-packages/boto/route53/record.py", line 168, in commit
return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
File "/Library/Python/2.7/site-packages/boto/route53/connection.py", line 475, in change_rrsets
body)
boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
<?xml version="1.0"?>
<ErrorResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to create an alias that targets <alias_dns_name>., type A in zone <alias_hosted_zone_id>, but the alias target name does not lie within the target zone</Message></Error><RequestId>afhh08-ckki9f2b5</RequestId></ErrorResponse>
Any help is appreciated! | AWS Route 53 with Boto: alias target name does not lie within the target zone |
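The boto 2.x API shown in the question is long obsolete. As a hedged sketch with today's boto3, the same UPSERT builds a change batch like the one below; all names and the zone ID are placeholders, and the key point from the answer above still applies: AliasTarget's HostedZoneId must be the load balancer's canonical hosted zone, not the zone that owns the record.

```python
def alias_upsert_batch(record_name, lb_dns_name, lb_canonical_zone_id):
    """Build the ChangeBatch for boto3's route53 change_resource_record_sets()."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "AliasTarget": {
                    # The LB's CanonicalHostedZoneID (ALB/NLB) or
                    # CanonicalHostedZoneNameID (Classic LB), NOT your domain's zone.
                    "HostedZoneId": lb_canonical_zone_id,
                    "DNSName": lb_dns_name,
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    }

batch = alias_upsert_batch("dev.example.com.",
                           "my-lb-1234.eu-west-1.elb.amazonaws.com.",
                           "ZLBPLACEHOLDER")
print(batch["Changes"][0]["Action"])  # UPSERT
```

Passing this dict as ChangeBatch to client.change_resource_record_sets(HostedZoneId=<your record's zone>, ChangeBatch=batch) succeeds where the mismatched zone ID produced the InvalidChangeBatch error above.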
The descheduler, a Kubernetes incubator project, could be helpful. From its introduction:
As Kubernetes clusters are very dynamic and their state changes over time, it may be desirable to move already-running pods to other nodes, for various reasons:
- Some nodes are under- or over-utilized.
- The original scheduling decision no longer holds true, as taints or labels were added to or removed from nodes, and pod/node affinity requirements are no longer satisfied.
- Some nodes failed and their pods moved to other nodes.
- New nodes were added to the cluster.
| What should I do with pods after adding a node to the Kubernetes cluster? I mean, ideally I want some of them to be stopped and started on the newly added node. Do I have to manually pick some for stopping and hope that they'll be scheduled for restarting on the newly added node? I don't care about affinity, just semi-even distribution. Maybe there's a way to always have the number of pods be equal to the number of nodes? For the sake of having an example: I'm using juju to provision a small Kubernetes cluster on AWS, one master and two workers. This is just a playground. My application is Apache serving PHP and static files, so I have a deployment, a service of type NodePort, and an ingress using nginx-ingress-controller. I turned off one of the worker instances and my application pods were recreated on the one that remained working. I then started the instance back up; the master picked it up and started the nginx ingress controller there. But when I tried deleting my application pods, they were recreated on the instance that had kept running, not on the one that was restarted. Not sure if it's important, but I don't have any DNS setup; I just added the IP of one of the instances to /etc/hosts with the host value from my ingress. | Redistribute pods after adding a node in Kubernetes |
First point: do not use regular expressions for HTML parsing. Use an HTML parser instead.
Second, if you already have this pattern and just want to fix it a little bit, try to understand what it does. It actually replaces charset=GB2312, charset=GBK or charset=GB18030 with charset=utf-8, in a very unoptimized way.
So, first change your regex to the following: charset=GB(?:2312|K|18030)
I believe this will already give you some advantage. But this regular expression is case sensitive. Instead of manually writing each character in lower and upper case, use Pattern directly:
Pattern p = Pattern.compile("charset=GB(?:2312|K|18030)", Pattern.CASE_INSENSITIVE);
String newHtml = p.matcher(oldHtml).replaceFirst("charset=utf-8"); | I use httpclient to crawl HTML pages. In my code, I found html = html.replaceFirst("[cC][hH][aA][rR][sS][eE][tT]\\s*?=\\s*?([gG][bB]2312|[gG][bB][kK]|[gG][bB]18030)","charset=utf-8"); The above code causes java.lang.OutOfMemoryError. The total program uses 251 MB; the replaceFirst method uses 64.8% (157 MB), and it is growing. How can I avoid this? I need some help, thanks~ | how to optimize "replaceFirst" method in java |
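For readers outside Java, the same three fixes (one alternation under a shared GB prefix, case-insensitive matching, and a pattern compiled once) look like this in Python's re module; the sample HTML string is made up:

```python
import re

# Compile once and reuse; covers GB2312, GBK and GB18030 in any letter case.
CHARSET = re.compile(r"charset=GB(?:2312|K|18030)", re.IGNORECASE)

html = '<meta charset=gbk><p>later mention: charset=GB2312</p>'
fixed = CHARSET.sub("charset=utf-8", html, count=1)  # count=1 ~ Java's replaceFirst
print(fixed)  # only the first occurrence is replaced
```

As in the Java version, precompiling avoids re-parsing the pattern on every call; the memory problem itself usually comes from holding whole HTML strings in memory, which the HTML-parser approach above addresses better.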
Work group sizes can be a tricky concept to grasp.
If you are just getting started and you don't need to share information between work items, ignore the local work size and leave it NULL. The runtime will pick one itself.
Hardcoding a local work size of 10*8 is wasteful and won't utilize the hardware well. Some hardware, for example, prefers work group sizes that are multiples of 32.
OpenCL doesn't specify what order the work will be done in, just that it will be done. It might do one work group at a time, or it may do them in groups, or (for small global sizes) all of them together. You don't know and you can't control it.
To your question "why?": the hardware may run work groups in SIMD (single instruction, multiple data) and/or in "wavefronts" (AMD) or "warps" (NVIDIA). Too small a work group size won't leverage the hardware well. Too large, and your registers may spill to global memory (slow). "Just right" will run fastest, but it is hard to pick this without benchmarking. So for now, leave it NULL and let the runtime pick for you. Later, when you become an OpenCL expert and understand more about how the hardware works, you can try specifying the work group size. However, be aware that the optimal size may differ across hardware, and there are other rules (like the global size must be a multiple of the local size). | I use OpenCL for image processing. For example, I have one 1000*800 image. I use a 2D global size of 1000*800, and the local work size is 10*8. In that case, will the GPU create 100*100 computing units automatically? And do these 10000 units work at the same time, so it can be parallel? If the hardware has no 10000 units, will one unit do the same thing more than once? I tested the local size and found that a very small size (1*1) or a big size (100*80) are both very slow, but a middle value (10*8) is faster. So, last question: why? Thanks! | how can opencl local memory size works? |
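The arithmetic behind the question above is worth spelling out. Assuming the stated sizes, a quick Python check (nothing OpenCL-specific) of the work-group count and the divisibility rule:

```python
global_size = (1000, 800)  # one work-item per pixel
local_size = (10, 8)       # the hard-coded work-group size from the question

# Rule from the answer: global size must be a multiple of local size.
assert all(g % l == 0 for g, l in zip(global_size, local_size))

groups = tuple(g // l for g, l in zip(global_size, local_size))
total_items = global_size[0] * global_size[1]
print(groups)       # (100, 100), i.e. 10000 work-groups
print(total_items)  # 800000 work-items in total
# The device schedules those groups in batches over its compute units,
# so hardware with fewer units simply takes more rounds.
```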
(Expanding on the comment:)
The GitHub Actions docs use actions/setup-python@v2:
- name: Setup Python
  uses: actions/setup-python@v2
  with:
    python-version: ${{ matrix.python }}
You can also try python -m tox.
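The error itself is just the shell failing a PATH lookup for the tox script. A small Python illustration of that lookup (the bogus tool name is obviously made up):

```python
import shutil

def run_or_complain(tool: str) -> str:
    """Roughly what the shell does before printing 'tool: command not found'."""
    found = shutil.which(tool)
    return f"would run {found}" if found else f"{tool}: command not found"

print(run_or_complain("surely-not-installed-anywhere"))
# surely-not-installed-anywhere: command not found
```

python -m tox sidesteps the lookup entirely by asking the interpreter to import the module, which is why it works even when the console script is not on PATH.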
|
I am new to tox and GitHub actions, and I am looking for a simple way to make them work together. I wrote this simple workflow:
name: Python package
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 4
      matrix:
        python-version: [3.7]
    steps:
      - uses: actions/checkout@v1
      - name: Install tox
        run: pip install tox
      - name: Run tox
        run: tox
which just installs tox and then runs it. But when I run this workflow, the tox installation works fine, but the run command returns an error:
tox: command not found
What is the correct way to run tox from a GitHub action?
| How to run tox from github actions |
I think you can delete the entire folder using the following code:
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.EUCentral1;
string bucketName = "your bucket name";
AmazonS3Client s3Client = new AmazonS3Client("your access key", "your secret key", cfg);
S3DirectoryInfo directoryToDelete = new S3DirectoryInfo(s3Client, bucketName, "your folder name or full folder key");
directoryToDelete.Delete(true); // true will delete recursively in folder inside
I am using amazon AWSSDK.Core and AWSSDK.S3 version 3.1.0.0 for .net 3.5.
I hope it can help you
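For comparison, the same prefix-delete pattern in Python against a boto3-style client. The FakeS3 class below is only an in-memory stand-in so the sketch runs without AWS credentials; real code would pass boto3.client("s3") instead:

```python
def delete_prefix(client, bucket, prefix):
    """Delete every object whose key starts with `prefix`,
    batching up to 1000 keys per DeleteObjects call (the S3 limit)."""
    deleted, token = 0, None
    while True:
        kwargs = {"Bucket": bucket, "Prefix": prefix}
        if token:
            kwargs["ContinuationToken"] = token
        page = client.list_objects_v2(**kwargs)
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        for i in range(0, len(keys), 1000):
            batch = keys[i:i + 1000]
            client.delete_objects(Bucket=bucket, Delete={"Objects": batch})
            deleted += len(batch)
        if not page.get("IsTruncated"):
            return deleted
        token = page["NextContinuationToken"]


class FakeS3:
    """Tiny stand-in for the two client methods used above."""
    def __init__(self, keys):
        self.keys = set(keys)

    def list_objects_v2(self, Bucket, Prefix, **_):
        matches = sorted(k for k in self.keys if k.startswith(Prefix))
        return {"Contents": [{"Key": k} for k in matches], "IsTruncated": False}

    def delete_objects(self, Bucket, Delete):
        for obj in Delete["Objects"]:
            self.keys.discard(obj["Key"])


s3 = FakeS3({"08-10-2015/a.txt", "08-10-2015/b.txt", "09-10-2015/c.txt"})
print(delete_prefix(s3, "my-bucket", "08-10-2015/"))  # 2
print(sorted(s3.keys))  # ['09-10-2015/c.txt']
```

S3 has no real folders, so "deleting a folder" always means listing the keys under a prefix and deleting them, which is what both the .NET S3DirectoryInfo.Delete(true) above and this sketch do.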
|
I am trying to delete all the files inside a folder which is basically the date.
Suppose there are 100 files under the folder "08-10-2015"; instead of sending all those 100 file names, I want to send just the folder name.
I am trying the code below and it is not working for me.
DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest();
multiObjectDeleteRequest.BucketName = bucketName;
multiObjectDeleteRequest.AddKey(keyName + "/" + folderName + "/");
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = string.Format(servicehost)
};
using (IAmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(accesskey, secretkey, S3Config))
{
    try
    {
        DeleteObjectsResponse response = client.DeleteObjects(multiObjectDeleteRequest);
        Console.WriteLine("Successfully deleted all the {0} items", response.DeletedObjects.Count);
    }
    catch (DeleteObjectsException e)
    {
        // Process exception.
    }
}
I am using the above code and it is not working.
| Delete a folder from Amazon S3 using API |
GitHub doesn't expose a user interface to do this. You can contact GitHub support, explain the situation, and they can fix it for you manually. They are friendly and pretty fast (give it a day or two), even if you don't have a paid subscription with them. | There is repository B, which I forked from the original repository A. There are many commits I made on B. However, I am only supposed to send pull requests to repository A', which is also a forked repository of A. Can I change my original forked repo? I noticed that since I have forked from A, I am not allowed to fork from A' (it redirects to B). I understand that sending a pull request to A is possible from GitHub, but I want B to be visible on GitHub as forked from A'. How do I do that without deleting the current repo (B), re-forking from A', and (maybe) manually redoing all commits? Why am I not able to fork from A' while B is still there? | Changing forked repository |
Apache does have some facilities for this, like RewriteMap (including its prg: external-program variant). I think .htaccess files are read on every request, so I wouldn't want to make one explode to 3000 lines of text, although my gut tells me it would handle it just fine. I think a RewriteMap is only loaded once per server start or something like that, so that's a benefit.
But personally, I would just do an internal rewrite of any request to the news subdomain to a server-side script like PHP, then inspect the URI, query the database to get the most current/up-to-date URL slug for the ID, and then do an external 301 redirect to the new URL. (Answered by goat, Jun 25, 2012.)
| I am working on a site overhaul. As a result, I am moving several pages over to a new format. They aren't keeping the same file names as before, so the migration is a little tricky. Example: news.alpinezone.com/93467/ is becoming http://alpinezone.com/still-more-skiing-and-riding-at-whiteface/ The news subdomain has accumulated over 3000 articles over several years. Is it OK to put 3000+ 301 redirects into an .htaccess file? On a side note, for proper SEO, should I also make sure I use http:// instead of http://www, and also make sure URLs are fully lower case and close with a / at the end? I am redesigning in WordPress and any combination pretty much works, but I understand that for Google these can be considered unique but similar URLs, so I want to stick with one as much as possible. Thanks! | Handling several thousand redirects with .htaccess |
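The database-lookup variant of the above can be sketched in a few lines of Python (the id and slug come from the question; a real site would query its CMS table):

```python
# id -> current slug, as the redirect script would fetch it from the database
slugs = {"93467": "still-more-skiing-and-riding-at-whiteface"}

def redirect_target(article_id):
    """Return the 301 Location for an old /<id>/ URL, or None for a 404."""
    slug = slugs.get(article_id)
    return f"http://alpinezone.com/{slug}/" if slug else None

print(redirect_target("93467"))
# Alternatively, dump the same dict as an Apache RewriteMap text file:
print("\n".join(f"{k} {v}" for k, v in sorted(slugs.items())))
```

Either way, the 3000 mappings live in one lookup structure instead of 3000 RewriteRule lines parsed on every request.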
Generally your flow seems to be right.
But since it is an absolute requirement to do a fast-forward only, you should pass --ff-only to enforce it.
git checkout master
git fetch origin
git merge --ff-only origin/develop
git push origin master
--ff is not needed, since fast-forward is the default behavior.
I tried git rebase and git merge --ff, but those change hashes which
still show as uneven.
A fast-forward merge by definition never creates a merge commit; it only updates the branch pointer:
--ff When the merge resolves as a fast-forward, only update the branch pointer, without creating a merge commit. This is the default
behavior.
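To make "only update the branch pointer" concrete, here is a toy Python model of a fast-forward; real git of course checks the full commit DAG, not a single parent chain:

```python
def is_ancestor(parents, ancestor, commit):
    """Walk parent links from `commit`; True if we reach `ancestor`."""
    while commit is not None:
        if commit == ancestor:
            return True
        commit = parents.get(commit)
    return False

def fast_forward(branches, parents, target, source):
    if not is_ancestor(parents, branches[target], branches[source]):
        raise ValueError("not a fast-forward")
    branches[target] = branches[source]  # pointer move only; no new commit

parents = {"C": "B", "B": "A", "A": None}   # linear history A <- B <- C
branches = {"master": "A", "develop": "C"}
fast_forward(branches, parents, "master", "develop")
print(branches)  # both branches now point at C: identical hashes, zero merge commits
```

This is exactly why --ff-only keeps master and develop perfectly even: no new object is created, so the histories cannot diverge.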
|
I am using master as a production branch which is fetched nightly onto many servers as read only. Master should never be ahead of develop because I only push to develop. When develop is stable, I want to fast forward master to match it.
I am currently doing this with:
git checkout master
git pull origin develop
git push origin +master
This works for me and syncs everything nicely, but it feels a bit wrong. There is another dev on my team who likes to just merge develop into master like a normal person. However, merging creates a merge commit, which makes master 1 commit ahead of develop. These build up over time and make the git history unclean and annoying to look at. If I do my branch cloning hack, those merge commits disappear on the remote but remain on my team's local branches.
I tried git rebase and git merge --ff, but those change hashes which still show as uneven.
Summary: How do I cleanly fast forward master to be perfectly even with develop? Or am I just not using git the way it is meant to be?
| Sync commit history of master and develop |
This can happen when the cluster already exists. Check the portal to make sure it doesn't already exist. (Answered by hemisphire, Nov 16, 2021.) | I am trying to create a Kubernetes cluster on Microsoft Azure, but the operation fails and the following error message comes up (I use PuTTY on Windows to generate the required SSH public key). Has anyone seen this before? Thanks!
"error": {
"code": "PropertyChangeNotAllowed",
"target": "linuxProfile.ssh.publicKeys.keyData",
"message": "Changing property 'linuxProfile.ssh.publicKeys.keyData' is not allowed." | Error when creating Kubernetes cluster on Azure - error: "code": "PropertyChangeNotAllowed" |
The issue was nginx was caching the page on the browser, hence any modification in nginx.conf file was not reflecting on browser. Here is the final working version of my nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        listen [::]:80;
        server_name localhost;
        access_log logs/host.access.log main;

        location / {
            root "D:/projects/mySimpleProject/build";
            index index.html index.htm;
        }

        location /first/ {
            proxy_pass http://localhost:8080/;
        }

        location /second/ {
            proxy_pass http://localhost:8081/;
        }

        location /third/ {
            proxy_pass http://localhost:8082/;
        }

        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
For those who don't know: restart the Nginx server and hit Ctrl+F5 to hard-refresh your browser every time you make a change in nginx.conf.
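The trailing slashes in location /first/ { proxy_pass http://localhost:8080/; } do real work: when proxy_pass carries a URI part, nginx swaps the matched location prefix for it. A simplified Python model of that substitution (prefix locations only; no regex locations or URI normalization):

```python
def upstream_uri(location, proxy_pass_uri, request_uri):
    """Replace the matched location prefix with the proxy_pass URI part."""
    if not request_uri.startswith(location):
        return None  # this location would not have matched
    return proxy_pass_uri + request_uri[len(location):]

print(upstream_uri("/first/", "/", "/first/login"))     # /login
print(upstream_uri("/second/", "/", "/second/api/v1"))  # /api/v1
```

So each Spring app receives paths without the /first or /second prefix, matching the routes it actually serves.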
|
I have three Spring boot applications running on embedded tomcat on ports 8080, 8081, 8082.
I am trying to configure a reverse proxy for all of them. When hitting the URL, I get a 401 error, which comes from Spring Security.
I have configured the nginx as follows:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # also tried $remote_addr;
        location /first {
            proxy_pass http://localhost:8081;
        }
        location /second {
            proxy_pass http://localhost:8080;
        }
        location = /third {
            proxy_pass http://localhost:8082;
        }
    }
}
The URL I am trying to access is http://localhost/second/; this gives me an error saying: There was an unexpected error (type=Unauthorized, status=401).
Full authentication is required to access this resource
When I tried to access http://localhost:8080/swagger-ui.html, it gave me a 404 error.
When I tried to access http://localhost:8080/swagger-ui.html, it showed me the expected Swagger page.
| nginx - multiple reverse proxy for spring boot applications (enabled spring security) |
Answering my own question: this ended up not being a version issue, but user error. Unfortunately, the suds documentation isn't as clear as it could be. Reading it, one would think the code above would work, but (on suds v0.39+) it should be written as:
imp = Import('http://domain2.com/url')
imp.filter.add('http://domain3.com/url')
imp.filter.add('http://domain4.com/url')
imp.filter.add('http://domain5.com/url')
d = ImportDoctor(imp)
oc = ObjectCache()
oc.setduration(days=360)
url = "http://domain.com/wsdl"
client = Client(url, doctor=d, cache=oc, timeout=30)
Looking at it now, it makes complete sense that the cache has to be configured before the Client is initialized. Hopefully this will help anyone else trying to set a suds cache when it seems to be ignoring their settings. | I'm using suds 0.3.8, Python 2.4.3, and Django 1.1.1. The code I inherited has a long duration for the cached files, but it's expiring on the default cadence of once every 24 hours. The external servers hosting the schemas are spotty, so the site is going down nightly and I'm at the end of my rope. Any idea what is jacked up in this code?
imp = Import('http://domain2.com/url')
imp.filter.add('http://domain3.com/url')
imp.filter.add('http://domain4.com/url')
imp.filter.add('http://domain5.com/url')
d = ImportDoctor(imp)
url = "http://domain.com/wsdl"
client = Client(url, doctor=d, timeout=30)
clientcache = client.options.cache
clientcache.setduration(days=360) | Suds ignoring cache setting? |
git flow init does not actually create any release, hotfix or feature branches because, as opposed to the develop and master branches, these are not single, everlasting branches. They are created as feature/abc, release/42.0 or hotfix/foo for every feature, release or hotfix you create, and are merged and then deleted once you finish.
What git flow init actually asks for is the prefix for naming these branches, which means you can just pass it the default values for release, hotfix and feature without worrying about polluting your repo, as it will not create any branches until you specifically ask it to start a feature, hotfix or release. You can read up on these concepts in the official explanation of git flow. | I have a cloned repo I want to commit to using git flow, but it's not initialized as a git flow repo and it has no branches like 'release' or 'hotfix'. Can I somehow "partially" initialize it as a git flow repo? I mean, I actually need only the 'develop' branch and branches for my features, but when I run git flow init it also asks for a release branch and a hotfix branch (maybe something more, I don't remember), which I actually don't need, and it refuses to init the repo if some of those branches are absent. I don't need those branches and I don't want to create them, because I don't want to pollute the repo with branches created only to satisfy git flow. Can I somehow init the git flow repo with only the develop branch and a feature branch prefix? Or what is the common solution for such a case? | "Partially" init a git flow repo |
The ngx_http_ssl_module is installed in the official OpenResty Docker image by default.
You don't need to care about RESTY_CONFIG_OPTIONS_MORE. IMO the best solution would be to use nginx-ssl-variables as a reference implementation and set the variables you need, for example:
set_by_lua_block $ssl_client_issuer {
    if ngx.var.https == "on" then
        return require("openssl").x509.read(ngx.var.ssl_client_raw_cert):issuer():oneline()
    end
    return nil
}
You will need to install lua-openssl. IMO the simplest way to install that module is to include it in your Dockerfile (built from an image with luarocks support): RUN luarocks install openssl --server=https://rocks.moonscript.org/dev | I am trying to create a router for internal testing. I am using the openresty image with RESTY_CONFIG_OPTIONS_MORE. As the messages we are sending from the client are binary and don't have any request headers, we are trying to extract the issuer and serial number from the cert and set them as request headers.
We want to use these headers to route to our test server as opposed to production, depending on the header values. My Dockerfile grabs it like so:
ENV RESTY_CONFIG_OPTIONS_MORE "--with-ngx_http_ssl_module"
I have already tried the following in the server block, but it did not work:
rewrite_by_lua_block {
    ngx.req.set_header("x-issuer", ngx.var.ssl_client_i_dn)
}
The author has mentioned that the envsubst utility is included in all images except alpine and windows. Is that relevant to my issue in any way? If just appending the config options will not work, which do you think is the best option?
1. Use nginx-ssl-variables, which looks like it does exactly what we want it to do: https://github.com/Seb35/nginx-ssl-variables
2. Modify the openresty code to build our own image that enhances the ngx.ocsp module to make the cert available as ngx.var.ssl_client_raw_cert in rewrite_by_lua_block
3. Modify the openresty code to build our own image that overwrites the SSL handshake
4. Some combination of the above
5. Other? | Set client ssl cert variables as request headers of the message in openresty |
Did you try adding www.ourcompany.com to the list of IIS Bindings for the site? (Answered by Michael Brown, Jul 29, 2011.)
Comment (Michael Brown): Doing some searching, it seems that your firewall/proxy isn't forwarding the hostname to IIS: stackoverflow.com/questions/5844781/… Which firewall are you using?
| An internal MVC3 webapp is hosted (IIS7) on a local server: localserver/ourwebapp. We are trying to expose the webapp externally through a firewall route: www.ourcompany.com/thewebapp is mapped to localserver/ourwebapp. Even though the route is working, all links in the rendered HTML still contain ourwebapp/controller/action, which of course doesn't work. Any ideas on how to resolve this issue? Thx! | MVC3 routing(??) problem when making webserver public through a firewall rout |
I used gsutil signed url to solve the issue.
1. gsutil signurl -d 10m -r eu /home/centos/private-key.json gs://bucket-name/spark-examples_2.11-2.4.5.jar (where -r eu is my region, the Europe multi-region).
2. Did some awk transformation, awk -F '\t' 'FNR==2 {print $4}', by piping in the output of the first command.
This signed URL can be used from anywhere (for 10 minutes, in my case) to access the bucket object. | I am running Spark on k8s, version 2.4.5. I have stored the Spark images in GCS, which can be accessed via the spark.kubernetes.container.image.pullSecrets config. I am also storing the Spark application jar in GCP buckets. When making the bucket public, spark-submit works fine. My question is how I can access the private bucket; is there any config to pass to Spark? I have the service account created in GCP and also have the JSON key file. Below is my spark-submit command:
bin/spark-submit --master k8s://https://host:port --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa --conf spark.executor.instances=3 --conf spark.kubernetes.container.image.pullSecrets=cr-k8s-key --conf spark.kubernetes.container.image=eu.gcr.io/Project-ID/spark-image/spark_2.4.5/spark:0.1.0 https://storage.googleapis.com/Bucket-name/spark-examples_2.11-2.4.5.jar | unable to pull jar from GCP bucket when running spark jobs on k8s |
As far as I know there's no automated way, but you can do it by editing the HTML of the GitHub page. The automatic page generator creates a 'gh-pages' branch with the HTML, CSS & JS files for the GitHub page you created. You can check out this branch, edit it as normal, and push it back to GitHub. | I'd like to add social buttons (share on Twitter, like on Facebook, share on Google+) to a GitHub page, which was created using the automatic page generator. Is there an easy way to do that? | How do i add social buttons to my github page using the automatic page generator? |
"Ephemeral storage" here refers to space being used in the container filesystem that's not in a volume. Something inside your process is using a lot of local disk space. In the abstract this is relatively easy to debug: usekubectl execto get a shell in the pod, and then use normal Unix commands likeduto find where the space is going. Since it's space inside the pod, it's not directly accessible from the nodes, and you probably can't use tools likelogrotateto try to manage it.One specific cause of this I've run into in the past is processes configured to log to a file. In Kubernetes you should generally set your logging setup to log to stdout instead. This avoids this specific ephemeral-storage problem, but also avoids a number of practical issues around actually getting the log file out of the pod.kubectl logswill show you these logs and you can set up cluster-level tooling to export them to another system. | I am running a k8 cluster with 8 workers and 3 master nodes. And my pods are evicting repetively with the ephemeral storage issues.
Below is the error I am getting on evicted pods: Message: The node was low on resource: ephemeral-storage. Container xpaas-logger was using 30108Ki, which exceeds its request of 0. Container wso2am-gateway-am was using 406468Ki, which exceeds its request of 0. To overcome the above error, I have added ephemeral-storage limits and requests to my namespace:
apiVersion: v1
kind: LimitRange
metadata:
name: ephemeral-storage-limit-range
spec:
limits:
- default:
ephemeral-storage: 2Gi
defaultRequest:
ephemeral-storage: 130Mi
type: Container
Even after adding the above limits and requests to my namespace, my pod is reaching its limit and then being evicted.
Message: Pod ephemeral local storage usage exceeds the total limit of containers 2Gi.
How can I monitor my ephemeral storage, and where is it stored on my instance?
How can I set up docker logrotate for my ephemeral storage based on size? Any suggestions? | My kubernetes pods are Evicting with ephemeral-storage issue
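Following the answer's debugging advice, here is a sketch of how to locate the space hogs. The kubectl line in the comment is the real-world step; the demo below merely simulates a pod's writable layer with a temporary directory (the file names are made up):

```shell
# In a real cluster you would open a shell in the pod and run du there, e.g.:
#   kubectl exec -it <pod-name> -- sh -c 'du -ah / 2>/dev/null | sort -rh | head -n 20'

# Local simulation: a scratch dir stands in for the container filesystem
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/app.log" bs=1024 count=200 2>/dev/null   # a chatty log file
dd if=/dev/zero of="$scratch/cache.tmp" bs=1024 count=5 2>/dev/null   # something small

# List entries by size, biggest first; entry 2 is the largest single file
biggest=$(du -a "$scratch" | sort -rn | awk 'NR==2 {print $2}')
echo "largest file: $biggest"
rm -rf "$scratch"
```

If the biggest consumer turns out to be a log file, that points straight at the answer's log-to-stdout advice.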
Creating a docker image that runs the development webserver will leave you with a very slow solution, as the webserver is single threaded and will also serve all static files. It's meant for development.
As you don't use https it will also disable the web2py admin interface: that's only available over http if you access it from localhost.
That being said, you can get your solution up and running by starting web2py with:
python web2py.py --nogui -a admin -i 0.0.0.0
All options are important, as web2py needs to start the server without asking any questions and it needs to bind to the external network interface address.
When you want to use a production-ready docker to run web2py you would need some additional components in your docker; nginx, uwsgi and supervisord would make it a lot faster and give you the option to enable https. Note: for bigger projects you would probably need python bindings for MySQL or PostgreSQL and a separate docker with the database.
A production example, without fancy DB support, can be found here: https://github.com/acidjunk/docker-web2py
It can be installed from the docker hub with:
docker pull acidjunk/web2py
Make sure to read the instructions, as you'll need a web2py app; that will be mounted in the container. If you just want to start a web2py server to fiddle around with the example or welcome app you can use:
docker pull thehipbot/web2py
Start it with:
docker run -p 443:443 -p 80:80 thehipbot/web2py
Then fire up a browser to https://192.168.59.103 | I'm trying to build a docker image of web2py on top of ubuntu. Given the docker file
#######################
# Web2py installation #
#######################
# Set the base image for this installation
FROM ubuntu
# File Author/ Maintainer
MAINTAINER sandilya28
#Update the repository sources list
RUN apt-get update --assume-yes
########### BEGIN INSTALLATION #############
## Install Git first
RUN apt-get install git-core --assume-yes && \
cd /home/ && \
git clone --recursive https://github.com/web2py/web2py.git
## Install Python
RUN sudo apt-get install python --assume-yes
########## END INSTALLATION ################
# Expose the default port
EXPOSE 8000
WORKDIR /home/
By building an image using the above Dockerfile:
docker build -t sandilya28/web2py .
Then by building a container using the above image:
docker run --name my_web2py -p 8000:8000 -it sandilya28/web2py bash
The IP address of the host is 192.168.59.103, which can be found by using boot2docker ip.
After creating the image I'm starting the web2py server using
python web2py/web2py.py
and I'm trying to access the web2py GUI from 192.168.59.103:8000 but it is showing the page is not available.
How to access the GUI of web2py from the browser. | create a web2py docker image and access it through browser
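To have the container start web2py automatically with the options the answer recommends, the question's Dockerfile could end with a CMD instead of relying on an interactive shell. A sketch (the working directory assumes the git clone from the Dockerfile, and the admin password "admin" is taken from the answer's example):

```dockerfile
# Sketch: append to the question's Dockerfile so the server starts on "docker run"
WORKDIR /home/web2py
CMD ["python", "web2py.py", "--nogui", "-a", "admin", "-i", "0.0.0.0", "-p", "8000"]
```

With this in place, docker run -p 8000:8000 sandilya28/web2py should serve the welcome app without needing a shell inside the container.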
To configure fail2ban, make a 'local' copy of the jail.conf file in /etc/fail2ban:
cd /etc/fail2ban
sudo cp jail.conf jail.local
Try to restart with the default configuration before editing anything. If it still fails, set loglevel = 4, check /var/log/fail2ban.log for errors, and make sure SELinux is disabled. | I have installed fail2ban on my Linux server version RHEL 5.4. It's not blocking IPs after the max retry limit as described in jail.conf. When I try to restart fail2ban I get the following error message.
/etc/init.d/fail2ban restart
Stopping fail2ban: [ OK ]
Starting fail2ban: ERROR NOK: (2, 'No such file or directory')
[ OK ]
I have tried many things but failed to solve the above issue. Following is the ssh jail in the jail.conf file.
[ssh]
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
sendmail-whois[name=SSH,[email protected],[email protected], sendername="Fail2Ban"]
logpath = /var/log/secure
maxretry = 3
Can anybody suggest where the issue is? | Fail2Ban is unable to block ip after multiple try
I will assume you already have 2 Services defined for your apps (s1 and s2 below).
Kubernetes Ingress supports name-based virtual hosting (and much more):
The following Ingress tells the backing loadbalancer to route requests based on the Host header.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: s1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: s2
servicePort: 80 | I have a Kubernetes cluster with two web apps deployed, and I can't figure out how to assign the same ports 80 and 443 to both apps so that each is accessed with its own domain: web1.com and web2.com should redirect to the specific service.
Searching the web I found topics like Ingress Controller with Nginx reverse proxy, and Traefik for managing requests and routing. How can I do this?
Thank you | Kubernetes like Apache or Nginx Virtual Host
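For completeness, here is a minimal sketch of the two Services the answer assumes. The names s1 and s2 match the Ingress above; the selectors and target ports are assumptions about your deployments:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s1
spec:
  selector:
    app: web1          # assumed pod label for the first app
  ports:
    - port: 80
      targetPort: 8080 # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: s2
spec:
  selector:
    app: web2          # assumed pod label for the second app
  ports:
    - port: 80
      targetPort: 8080
```

Each Service gives the Ingress a stable backend name, so the host rules for foo.bar.com and bar.foo.com map cleanly to one app each.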
Part of the solution is the response from the user Esmaeil Mazahery, but a few more steps must be taken.
First, I changed the Angular application Dockerfile (passing additional build parameters like base-href and deploy-url):
RUN npm run ng build -- --prod --base-href /projects/sample-app1/ --deploy-url /projects/sample-app1/
Then, I changed the reverse proxy nginx.conf configuration from
location /projects/sample-app1 {
# Angular app
proxy_pass http://sample-app1:80;
}
to:
location /projects/sample-app1 {
# Angular app
proxy_pass http://sample-app1:80/;
}
Redirection did not work properly without a slash at the end.
The order in which nginx locations are matched is also important. Therefore, before the addresses /projects/sample-app1 and /projects/sample-app2 I put the symbol ^~, which causes the given locations to be taken first. This nginx location simulation tool also proved very useful: Nginx location match tester | I have created a reverse proxy using Nginx which redirects to various applications (ASP.NET APIs and Angular apps).
Reverse proxy nginx.conf (the most important settings):
...
server {
listen 80;
server_name localhost 127.0.0.1;
location /projects/sample-app1/api {
proxy_pass http://sample-app1-api:80;
}
location /projects/sample-app1 {
# Angular app
proxy_pass http://sample-app1:80;
}
location /projects/sample-app2 {
# Angular app
proxy_pass http://sample-app2:80;
}
location /api {
proxy_pass http://random-api:80;
}
location / {
proxy_pass http://personal-app:80;
}
}
All APIs are available and work properly because their path, indicated by the location parameter, is the same as in the controllers. The Angular application that runs on the URL "/" also works, but the problem is with "sample-app1" and "sample-app2". When I type the URL to go to these applications, I get an error similar to:
Uncaught SyntaxError: Unexpected token < main.d6f56a1….bundle.js:1
My suspicion is that the URL leading to the application contains additional elements (/projects/sample-app1) and its default index path is simply "/". So I would have to rewrite to remove the redundant part of the URL, but how to do it? My attempts so far have not been successful and I have tried different ways from other threads on StackOverflow and GitHub.
Angular App nginx.conf:
events{}
http {
include /etc/nginx/mime.types;
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
}} | Nginx reverse proxy for Angular apps |
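Putting the answer's pieces together, the corrected reverse-proxy locations would look something like this sketch. The ^~ modifier and the trailing slash on proxy_pass are the two changes the answer describes; everything else mirrors the question's config:

```nginx
# Sketch of the corrected reverse-proxy locations
location /projects/sample-app1/api {
    # longest-prefix match still wins, so API calls keep going here
    proxy_pass http://sample-app1-api:80;
}

location ^~ /projects/sample-app1 {
    # trailing slash strips the /projects/sample-app1 prefix before proxying
    proxy_pass http://sample-app1:80/;
}

location ^~ /projects/sample-app2 {
    proxy_pass http://sample-app2:80/;
}
```

With the prefix stripped, the Angular bundle paths resolve against the app container's root, which is what the --base-href/--deploy-url build flags expect.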
As the error implies, it's not able to resolve the domain name you are using for PostgreSQL. Try changing kong-database to the exact hostname, i.e. "localhost", if you are running all docker containers on the same VM.
The command should be as below :
docker run --rm --network=kong-net pantsel/konga -c prepare -a postgres -u postgresql://kong@localhost:5432/konga_db
As @atline suggests, please also confirm your docker network with docker network ls; it should respond as below:
NETWORK ID NAME DRIVER SCOPE
3a03dfb71f30 bridge bridge local
0501e0af7350 kong-net bridge local
1decf874e725 host host local
adc2bd16eaa3 none null local
|
This is my docker ps output:
0e60f32df539 pantsel/konga:legacy "/app/start.sh" 3 days ago Up 1 second 0.0.0.0:1337->1337/tcp konga
da8fe5294057 kong "/docker-entrypoint.…" 8 days ago Up 33 minutes 0.0.0.0:8000-8001->8000-8001/tcp, 0.0.0.0:8443-8444->8443-8444/tcp kong
0caeee73418b postgres:9.6 "docker-entrypoint.s…" 8 days ago Up 34 minutes 0.0.0.0:5432->5432/tcp kong-database
I try to install:
docker run --rm --network=kong-net pantsel/konga -c prepare -a postgres -u postgresql://kong@kong-database:5432/konga_db
This is the error:
debug: Preparing database...
Using postgres DB Adapter.
Failed to connect to DB { Error: getaddrinfo ENOTFOUND kong-database kong-database:5432
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
errno: 'ENOTFOUND',
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'kong-database',
host: 'kong-database',
port: 5432 }
How can I fix this error?
| how to fix konga_db |
Data is not exact; the above samples, for example, aren't exactly aligned to the second. This means that we need to extrapolate a bit when the data doesn't exactly cover the 10s range, which can cause artifacts like this. On average, however, the result will be correct. Counting with Prometheus goes into this in more detail. | The query:
increase(Application_hystrix_command_count_success[10s])
This seems to be the query I need, from my understanding of the function; however the data it returns does not seem to be correct, sometimes.
The data for the counter looks something like:
101 @1507897406.565
101 @1507897407.565
101 @1507897408.565
101 @1507897409.565
101 @1507897410.565
101 @1507897411.565
101 @1507897412.565
101 @1507897413.565
102 @1507897414.565
102 @1507897415.565
What I am seeing in the graph is some of the spikes are fluctuating. For instance a spike that should be 10 cycles between these values when refreshing the graph:
10
11.1111111111111
7.77777777777777 | Attempting to graph changes per second in counter |
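The fractional spikes can be reproduced with a back-of-the-envelope model of the extrapolation the answer describes (a simplified sketch, not Prometheus's exact algorithm): with samples every 1s, a 10s window typically contains 10 samples spanning only 9s, so the raw delta gets scaled up by 10/9.

```python
def extrapolated_increase(raw_delta, covered_seconds, window_seconds):
    """Simplified model of how increase() scales a raw counter delta
    up to the full query window (Prometheus's real algorithm also
    limits extrapolation near the window edges)."""
    return raw_delta * window_seconds / covered_seconds

# 10 samples taken 1s apart span 9s inside a 10s window:
print(extrapolated_increase(10, 9, 10))  # 11.11... -- matches the spike in the question
print(extrapolated_increase(7, 9, 10))   # 7.77...
```

The 11.111… and 7.777… values in the question drop straight out of this scaling, which is why they disappear on average over many evaluations.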
I found out that it was a bug within QNetworkAccessManager.
In a wireless environment, QNetworkAccessManager scans the Wi-Fi status every few seconds. Those little spikes were the evidence of that. Check the following bug report.
https://bugreports.qt.io/browse/QTBUG-40332
To solve this problem, either compile with
-D QT_NO_BEARERMANAGEMENT
option, or just remove the bearer folder from the Qt plugins directory.
|
It seems like after I create a QNetworkAccessManager object in Qt, it makes other applications (those that heavily use the network, such as multiplayer games) run slow.
For example, if I run Dota 2 while running my app in the background, the game starts to lag even though my Qt app is very light (I checked through Process Explorer and it only consumes under 1% of CPU the whole time). If I remove the QNetworkAccessManager part from the code, the game runs smoothly without any lagging.
Here is how I use QNetworkAccessManager;
QNetworkAccessManager *qnam = new QNetworkAccessManager(this);
response = qnam->get(QNetworkRequest(url));
connect(response, &QNetworkReply::finished, this, &Test::parse_response);
And in parse_response()
void parse_response() {
// Network Error occured
if (response->error() != QNetworkReply::NoError) {
response->deleteLater();
return;
}
response->deleteLater();
qnam->deleteLater();
}
The funny thing is that when I check the I/O usage of my app through Process Explorer, it shows weird I/O activity.
When I don't use QNetworkAccessManager, that weird I/O usage disappears. Hence I assume that my qnam has not been successfully deleted, yet I could not find any problem in my code.
Has anyone had similar experiences with this problem?
Or is my usage of QNetworkAccessManager just incorrect?
| (Qt) QNetworkAccessManager slows down other application |
You can get YAML from the kubectl create configmap command and pipe it to kubectl apply, like this:
kubectl create configmap foo --from-file foo.properties -o yaml --dry-run=client | kubectl apply -f -
The same pattern also works for Secrets, and using kubectl apply instead of kubectl replace works for both new and existing ConfigMaps. (Note: kubectl 1.10 had a validation bug with this, resolved in 1.10.1.) | I've been using K8S ConfigMap and Secret to manage our properties. My design is pretty simple: it keeps properties files in a git repo and uses a build server such as Thoughtworks GO to automatically deploy them as ConfigMaps or Secrets (depending on the case) to my k8s cluster.
Currently, I find it's not really efficient that I always have to delete the existing ConfigMap and Secret and create a new one to update, as below:
kubectl delete configmap foo
kubectl create configmap foo --from-file foo.properties
Is there a nice and simple way to make the above one step, and more efficient than deleting the current one?
Potentially, what I'm doing now may compromise the container that uses these ConfigMaps if it tries to mount while the old ConfigMap is deleted and the new one hasn't been created yet. | Update k8s ConfigMap or Secret without deleting the existing one
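Since the same dry-run-and-apply trick applies to Secrets, the equivalent command would look something like this (a sketch; the secret name and file are placeholders):

```shell
kubectl create secret generic foo --from-file foo.properties -o yaml --dry-run=client | kubectl apply -f -
```

As with the ConfigMap version, this updates the object in place, so consumers never see a window where the object has been deleted but not yet recreated.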
Your question is related to the difference between SIGTERM and SIGKILL.
SIGTERM: When a process receives this signal, it has the chance to perform cleanup. That's the so-called "graceful" exit; it corresponds to docker stop.
SIGKILL: The process doesn't even know it has received this signal and it has no chance to ignore or do anything about it. The process directly exits. That's the so-called "not graceful" exit; it corresponds to docker kill.
For further details look at this topic. | I'm trying out Docker and came across
docker container stop <hash> # Gracefully stop the specified container
I'm not asking about the difference between docker stop and docker kill. I'm wondering about the term "gracefully". What does "gracefully stop" mean in this context? | What does "gracefully stop" mean?
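To make the SIGTERM/SIGKILL distinction concrete, here is a small Python sketch of what "graceful" means in practice: a process that installs a SIGTERM handler gets a chance to clean up, which is exactly what docker stop relies on (SIGKILL, by contrast, could never trigger this handler):

```python
import os
import signal

cleaned_up = []

def on_sigterm(signum, frame):
    # This is the "graceful" part: flush buffers, close connections, etc.
    cleaned_up.append("done")

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate what `docker stop` does: deliver SIGTERM to the process
os.kill(os.getpid(), signal.SIGTERM)
print(cleaned_up)
```

docker stop sends SIGTERM first and only falls back to SIGKILL after a grace period (10 seconds by default), so any cleanup hook like the one above gets a chance to run.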
Took me a minute to figure out too.
Open up CloudFormation in AWS and delete the aws-sam-cli-managed-default stack then try to redeploy.
Every time your deploy fails you'll likely have to do this again.
(Per a comment: also make sure you are in the right region when looking for the stack.)
|
I'm currently working on AWS serverless lambda function deployment and trying to distribute and test with AWS SAM. However, when I followed the AWS SAM hello world template tutorial on the official website, I couldn't actually deploy my code to AWS.
I've already
Assigned a working IAM account
Install every package we need for AWS SAM (brew, aws-sam-cli...etc)
Set up AWS configuration
Using a function template provided by AWS
Yet, I got this error message:
Error: Stack aws-sam-cli-managed-default is missing Tags and/or
Outputs information and therefore not in a healthy state (Current
state:aws-sam-cli-managed-default). Failing as the stack was likely
not created by the AWS SAM CLI
| AWS SAM deployed Error under hello world template |
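If you prefer the CLI over the console, the same cleanup can be done with the AWS CLI (a sketch; adjust the --region to wherever your SAM deploy targeted):

```shell
aws cloudformation delete-stack \
    --stack-name aws-sam-cli-managed-default \
    --region us-east-1   # use the region your SAM deploy targeted

# then retry:
sam deploy --guided
```

This removes the unhealthy managed stack so SAM can recreate it cleanly on the next deploy.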
You can only free() something you got from the malloc(), calloc() or realloc() functions. Freeing something on the stack yields undefined behaviour; you're lucky this doesn't cause your program to crash, or worse.
Consider that a serious bug, and delete that line asap. | I'm supporting some C code on Solaris, and I've seen something weird, at least I think it is:
char new_login[64];
...
strcpy(new_login, (char *)login);
...
free(new_login);
My understanding is that since the variable is a local array, the memory comes from the stack and does not need to be freed; moreover, since no malloc/calloc/realloc was used, the behaviour is undefined.
This is a real-time system so I think it is a waste of cycles. Am I missing something obvious? | free() on stack memory
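If the copy really did need to outlive its scope, a heap-allocated version would look like this sketch (dup_login is a made-up name for illustration; with the question's local array, the whole fix is simply deleting the free(new_login) line):

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Heap version: the caller owns the returned buffer and must free() it. */
char *dup_login(const char *login)
{
    char *copy = malloc(strlen(login) + 1);  /* heap memory: free() is legal here */
    if (copy != NULL)
        strcpy(copy, login);
    return copy;
}
```

Pairing every malloc() with exactly one free(), and never calling free() on anything else, is the rule the original code violated.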
#standardSQL
SELECT
COUNT(*) naive_count,
COUNT(DISTINCT actor.id) unique_by_actor_id,
COUNT(DISTINCT actor.login) unique_by_actor_login
FROM `githubarchive.month.*`
WHERE repo.name = 'angular/angular'
AND type = "WatchEvent"Naive count: Some people star and un-star, and star again. This creates duplicate WatchEvents.Unique by actor id count: Each person can only star once. We can count those (but we don't know if they un-starred, so the total count will be lower than this).Unique by actor login: Some historical months are missing the 'actor.id' field. We can look at the 'actor.login' field instead (but some people change their logins).Alternatively, thanks to GHTorrent project:#standardSQL
SELECT COUNT(*) stars
FROM `ghtorrent-bq.ght_2017_01_19.watchers` a
JOIN `ghtorrent-bq.ght_2017_01_19.projects` b
ON a.repo_id=b.id
WHERE url = 'https://api.github.com/repos/angular/angular'
LIMIT 10
20567, as of 2017/01/19.
Related:
What happens when a project changes its name? https://stackoverflow.com/a/42935592/132438
How to get updated GHTorrent data, before they update it? https://stackoverflow.com/a/42935662/132438 | My goal is to track over time the popularity of my GitHub repo.
I want to use publicly available BigQuery datasets, like GitHub Archive or the GitHub dataset.
The GitHub dataset's sample_repos table does not contain a snapshot of the star counts:
SELECT
watch_count
FROM
[bigquery-public-data:github_repos.sample_repos]
WHERE
repo_name == 'angular/angular'
returns 5318.
GitHub Archive is a timeline of events. I can try to sum them all, but the numbers do not match the numbers in the GitHub UI, I guess because it does not count unstar actions. Here is the query I used:
SELECT
COUNT(*)
FROM
[githubarchive:year.2011],
[githubarchive:year.2012],
[githubarchive:year.2013],
[githubarchive:year.2014],
[githubarchive:year.2015],
[githubarchive:year.2016],
TABLE_DATE_RANGE([githubarchive:day.], TIMESTAMP('2017-01-01'), TIMESTAMP('2017-03-30') )
WHERE
repo.name == 'angular/angular'
AND type = "WatchEvent"returns 24144The real value is 21,921 | How to get total number of GitHub stars for a given repo in BigQuery? |
You are most likely using just node or /usr/local/bin/node to refer to node, instead of /usr/.nvm/versions/node/v9.11.1/bin/node | I was originally at version 8.4.0, then I installed nvm and used it to upgrade to version 9.11.1.
When running the terminal, I have version 9.11.1; however, if a cronjob runs a script, node 8.4.0 is still used.
The same ec2-user is running the cron, so for me it is strange that the user ec2-user has version 9.11.1 if used via shell and 8.4.0 if used via cron.
How can I resolve this to always use 9.11.1? | Node version inconsistent in crontab
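One way to apply the answer is to spell out the nvm-managed binary's full path in the crontab. A sketch (the node path is the one from the answer; the script path is a placeholder — check your actual path with "which node" from an interactive shell):

```
# crontab -e (for ec2-user)
* * * * * /usr/.nvm/versions/node/v9.11.1/bin/node /home/ec2-user/myscript.js
```

Cron runs with a minimal environment and does not source the shell profile where nvm prepends its bin directory to PATH, which is why the bare node resolves to the old system binary.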
It will run on the merged code if you, e.g., use the checkout action with default values.
If you look at the event data of a pull request you get something like the following (shortened):
{
"ref": "refs/pull/1/merge",
"sha": "<sha-1>",
"event": {
"number": 1,
"pull_request": {
"head": {
"ref": "<base-branch>",
"sha": "<sha-2>"
}
}
}
}
and the hash the workflow is running on is different from the hash of the commit on the base branch of your pull request. | I've tried to read the GitHub documentation on this and tried to google some information about it, but either the information is missing or I'm just unable to understand it :)
Either way, I'll use an example (Python) to illustrate the problem:
foo.py on master:
my_list = [
1,
2,
]
assert len(my_list) == 2
So far, so good.
Create a new branch:
foo.py on branch feat_a
my_list = [
0,
1,
2,
]
assert len(my_list) == 3
And a separate feature branch:
foo.py on branch feat_b
my_list = [
1,
2,
3,
]
assert len(my_list) == 3
I then merge feat_a into master. The problem now is that my PR for feat_b is perfectly mergeable, which will create the list with 4 items from 0 to 3. But the assert statement will fail after the merge commit takes place. In other words, I have two branches that run perfectly fine on their own and are mergeable, but are in a bad state AFTER the merge commit.
So my question is this:
When I run a GitHub action, how can I make sure that the action runs the merged code? Is this the default behavior or not? | During a PR action on GitHub, what version of the code is actually run? |
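A minimal workflow illustrating the answer (a sketch; with actions/checkout's default ref on a pull_request event, the runner checks out the refs/pull/N/merge commit, so a failing assert in the merged code is caught before merging):

```yaml
name: ci
on: pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # default ref = the PR's merge commit
      - run: python foo.py          # runs against the merged code
```

If you instead checked out github.event.pull_request.head.sha explicitly, you would get only the branch tip, not the merge result.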
This should work for you:
RewriteCond %{REQUEST_URI} ^/slide$
RewriteCond %{QUERY_STRING} ^page=(.*)$
RewriteRule ^(.*) /slide/issue43?page=%1 [R=301,L] | I'm trying to redirect this:
example.com/slide?page=4
to example.com/slide/issue43?page=4
But it cannot affect other URLs like example.com/slide/issue57?page=4.
I'm really stuck; these regular expressions are so weird. Here's the rewrite rule that I've come up with. This is not working:
RewriteRule ^slide?page(.*)$ http://example.com/slide/issue43?page=$1 [L,R=301]
I need to target 'slide?page=X' specifically and have it redirect or point to 'slide/issue43?page=X' | Rewrite specific URL to different directory
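As a sanity check of the answer's two conditions, here is a small Python sketch mimicking what mod_rewrite matches against. The key point is that REQUEST_URI excludes the query string, which is why the question's ^slide?page(.*)$ pattern could never match:

```python
import re

def matches(request_uri, query_string):
    """Mimic the two RewriteCond checks: exact /slide path plus a page= query."""
    return (re.fullmatch(r"/slide", request_uri) is not None
            and re.match(r"page=(.*)", query_string) is not None)

print(matches("/slide", "page=4"))          # True  -> redirected
print(matches("/slide/issue57", "page=4"))  # False -> left alone
```

Anchoring the path with ^/slide$ is what keeps /slide/issue57 untouched, matching the question's requirement.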
Simply adding --reload to the entrypoint worked for me:
ENTRYPOINT ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "5000"] | Following the documentation in uvicorn-gunicorn-fastapi-docker I should run my image with:
docker run -d -p 80:80 -v $(pwd):/app myimage /start-reload.sh
But I get:
Usage: uvicorn [OPTIONS]
Try 'uvicorn --help' for help.
Error: Got unexpected extra argument (/start-reload.sh)
I managed to mount a volume by using what I found here: Debug mode? But I think it is not elegant enough, and I have to run it every time I make a change (at least I don't have to build the image):
docker run --name ${containerName} \
--env GUNICORN_CMD_ARGS="--reload" \
-p 5000:5000 \
-v $(pwd)/app:/app \
${imageName}:${versionTag}
My Dockerfile is just:
FROM tiangolo/uvicorn-gunicorn-fastapi:latest
EXPOSE 5000
COPY ./app /app
ENTRYPOINT ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]And It works as supposed.It is possible to be able to reload as I changing my code? | Error: Got unexpected extra argument (/start-reload.sh) When setting Development live reload for FastAPI docker |
First of all, I don't have any experience with PrestaShop. This is an example which you can use for every docker container (from which you want to persist the data).
With the new version of docker (1.11) it's pretty easy to 'persist' your data.
First create your named volume:
docker volume create --name prestashop-volume
You will see this volume in /var/lib/docker/volumes:
prestashop-volume
After you've created your named volume container you can connect your container with the volume container:
docker run -ti --name some-prestashop -p 8080:80 -d -v prestashop-volume:/path/to/what/you/want/to/persist prestashop/prestashop
(when you really want to persist everything, I think you can use the path :/ )
Now you can do what you want on your database.
When your container goes down or you delete your container, the named volume will still be there and you're able to reconnect your container with the named-volume.
To make it even easier, you can create a cron job which creates a .tar of the content of /var/lib/docker/volumes/prestashop-volume/
When really everything is gone you can restore your volume by recreating the named-volume and untar your .tar-file in it.
|
There is something I'm missing in many docker examples, and that is persistent data. Am I right if I conclude that every container that is stopped will lose its data?
I got this Prestashop image running with its internal database:
https://hub.docker.com/r/prestashop/prestashop/
You just run docker run -ti --name some-prestashop -p 8080:80 -d prestashop/prestashop
Well you got your demo then, but not very practical.
First of all I need to hook up an external MySQL container, but that one will also lose all its data if, for example, my server reboots.
And what about all the modules and themes that are going to be added to the prestashop container?
It has to do with Volumes, but it is not clear to me how all the host volumes need to be mapped correctly and what path on the host is normally chosen. /opt/prestashop or something?
| Howto run a Prestashop docker container with persistent data? |
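The backup cron job the answer mentions could look something like this sketch. The real-world command in the comment uses the volume path from the answer; the demo below tars a stand-in directory so it can run anywhere:

```shell
# Real-world version (as root, e.g. from /etc/cron.daily):
#   tar czf /backups/prestashop-$(date +%F).tar.gz -C /var/lib/docker/volumes prestashop-volume

# Demo against a stand-in directory:
voldir=$(mktemp -d)
echo "db stuff" > "$voldir/data.sql"
backup="$voldir.tar.gz"
tar czf "$backup" -C "$(dirname "$voldir")" "$(basename "$voldir")"
tar tzf "$backup"
```

Restoring is the reverse: recreate the named volume and untar the archive into /var/lib/docker/volumes/prestashop-volume/, as the answer describes.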
I wrote my own HostBackup Faraday middleware for this case.
You are welcome to use it! https://github.com/dpsk/faraday_middleware
Here is an article: http://nikalyuk.in/blog/2013/06/25/faraday-using-backup-host-for-remote-request/
|
In my project I'm using the following small library for interacting with the external service:
class ExternalServiceInteraction
include Singleton
URL = Rails.env.production? ? 'https://first.production.url.com' : 'http://sandbox.url.com'
API_KEY = Rails.env.production? ? 'qwerty' : 'qwerty'
DOMAIN = Rails.env.production? ? 'prod.net' : 'stage.net'
def connection
conn = Faraday.new(url: URL) do |faraday|
faraday.response :logger # log requests to STDOUT
faraday.adapter Faraday.default_adapter # make requests with Net::HTTP
end
end
def return_response(item=true)
if @resp.status == 200
response = item ? Hash.from_xml(@resp.body)['xml']['item'] : Hash.from_xml(@resp.body)['xml']
else
response = Hash.from_xml(@resp.body)['xml']['error']
Rails.logger.info "response_error=#{response}"
end
response
end
def get_subscribers
path = 'subscribers'
data = { 'X-API-KEY' => API_KEY, 'domain' => DOMAIN }
@resp = connection.get(path, data)
return_response
end
def get_subscriber(physical_id)
path = 'subscriber'
data = { 'X-API-KEY' => API_KEY, 'Physical_ID' => physical_id, 'domain' => DOMAIN }
@resp = connection.get(path, data)
return_response
end
# and so on
end
Now I want to use 'https://second.production.url.com' if there is any error interacting with the service via the first URL. What is the best way to set this up?
At first I tried to ping / get 200 OK from the server and, if this fails, switch to the second URL. But there are situations when the server is up and running and returns 200 OK, but the API isn't reachable. My main issue is that I don't see how I can catch the error and re-run the method with another URL from the library.
| Use another source if external service is down |
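The retry-with-backup idea from the answer can be sketched in plain Ruby, independent of Faraday. Everything below is illustrative; in the real library the request lambda would wrap connection.get and raise on Faraday errors:

```ruby
PRIMARY = 'https://first.production.url.com'
BACKUP  = 'https://second.production.url.com'

# Try the request against the primary host; on failure, retry once with the backup.
def with_host_backup(request)
  request.call(PRIMARY)
rescue StandardError
  request.call(BACKUP)
end

# Illustration with a fake request that fails for the primary host only:
flaky = ->(url) { raise 'API unreachable' if url == PRIMARY; "response from #{url}" }
puts with_host_backup(flaky)
```

Because the fallback is triggered by the exception from the actual request, it also covers the "server returns 200 OK but the API isn't reachable" case, as long as the library raises on an unusable response.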
There is a multiple way to do this.
If a continious access is needed :AWatcheraccess with polling events ( WatchService API )AStream BufferFileObservablewith Java rxThen creating anNFSstorage could be a possible way with exposing the remote logs and make it as apersistant volumeis better for this approach.Else, if the access is based on pollling the logs at for example a certain time during the day then a solution consist of using anFTPsolution likeApache Commons FTP Clientor using an ssh client which have anSFTPimplementation likeJSchwhich is a native Java library. | I have an app (java, Spring boot) that runs in a container in openshift. The application needs to go to a third-party server to read the logs of another application. How can this be done? Can I mount the directory where the logs are stored to the container? Or do I need to use some Protocol to remotely access the file and read it?A remote server is a normal Linux server. It runs an old application running as a jar. It writes logs to a local folder. An application that runs on a pod (with Linux) needs to read this file and parse it | How to read a file on a remote server from openshift |
The below solution is working fine:
prom_metric2{some_label="value"} and on(label2) prom_metric1{label1='A'}
References:
https://www.robustperception.io/exposing-the-software-version-to-prometheus
https://prometheus.io/docs/prometheus/latest/querying/operators/#many-to-one-and-one-to-many-vector-matches | I am looking for the output of metric 'prom_metric2' where the input label is 'label2', the value of which has to be taken from metric 'prom_metric1'.
i.e. for the following input query:
prom_metric1{label1="A"}
the output time series is:
prom_metric1{label1="A",label2="B",label3="C"}
prom_metric1{label1="A",label2="D",label3="E"}Now, the following metric should take all the values of label2 from above time series and then show the output.prom_metric2{label2="LABEL_FROM_PROM_METRIC1 i.e. B and D"}It is equivalent to following SQL query :Select * from prom_metric2 where label2 IN (Select label2 from prom_metric1 where label1='A')Is this possible in promQL?Thanks in advance. | Nesting query in promQL |
It's now working. As expected it took re-installing the operating system, as well as the following:
http://blog.rstudio.org/2013/10/22/rstudio-and-os-x-10-9-mavericks/
Using the preview version available from that link solved the RStudio/Git issue instantly.
|
I have been trying for a few hours to use git within Rstudio on my macbook. However, the option to use git within version control is missing - the only option remains (none).
I have installed github, and then git directly, using the link given
in the rstudio website.
I have attempted to run the bash script
supplied with the git installation file.
I have verified that git is
active on the machine through both github and directly through the
command line.
I have located the git file in the hidden folder
/local/git/bin/git
and pointed Rstudio to this using global options.
I have reinstalled git a couple of times.
I have logged off and on again multiple times.
Any solutions very welcome.
Thanks,
Jon
| Reasons why git is not visible to Rstudio (OSX) |
Since you've mounted the shared Projects directory as read-only, you cannot change the permissions.
You can try either:
Correct the permissions on your local file, then mount it with the right permissions.
Try to mount it with write access (-v ~/Projects:/Projects:rw) and change the permission from the container (it won't work if you're mounting non-Linux filesystem such as NTFS or FAT32 which won't support Linux file permissions).
Run non-executable binary directly with a dynamic linker (assuming binary is in ELF format), e.g. for 64bit ELF format:
/usr/lib64/ld-linux-x86-64.so.2 ~/Projects/ch25.bin
See: What is /lib64/ld-linux-x86-64.so.2 and why can it be used to execute file?
To find the right dynamic linker, you can use this command inside your container:
find /lib /usr/lib -name "ld*.so*"
If you need a 32-bit dynamic linker, install the lib32z1 package and use /usr/lib/ld-linux.so.2 to execute the binary.
Otherwise, if your binary file is in a different format, use the right parser (for a shell script, use bash ch25.bin).
|
Is it possible to use Docker like a VM and run binaries in it? I have an ELF binary to debug/reverse engineer but I'm on a Mac so I can't run it. I've tried mounting it through a shared volume with docker run -it -v ~/Projects:/Projects ubuntu and chmod +x but it tells me no such file or directory when I tried to execute it.
So starting a docker instance seems fine, it drops me into a root shell.
$ docker run -it -v ~/Projects:/Projects ubuntu /bin/bash
root@21aee00b6c45:/# cd Projects/
root@21aee00b6c45:/Projects#
Then I attempt to run my binary which gives me
root@21aee00b6c45:/Projects# ls -la ch25.bin
-rwxr-xr-x 1 root root 12751 Apr 28 09:16 ch25.bin
root@21aee00b6c45:/Projects# ./ch25.bin
bash: ./ch25.bin: No such file or directory
| Running binaries inside docker |
See django-audit-log.
answered May 3, 2012 by Burhan Khalid
Comments:
will it save changes in some kind of table? – Andrey Baryshnikov
Yes. See the usage instructions. – Burhan Khalid
Ok, I see now, thank you. But this one creates an audit table for every model - this is overkill. – Andrey Baryshnikov
|
I'am using postgres and now for audit i'am using this util :http://dklab.ru/lib/dklab_rowlog/but in this case i cant pass USER_ID into trigger.. | Django tracking data changes done by users |
You need to configure this option in the Gateway API panel.
Choose your API and click Resources.
Choose the method and see the URL Query String section.
If there is no query string, add one.
Mark the "caching" option of the query string.
Perform the final tests and finally, deploy changes.
Screenshot
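The effect of the cache key can be modelled with a toy file-based cache: if the key is only the resource path, every query string gets the first cached response; once the parameter is part of the key, each query string gets its own entry. (The URLs and response bodies below are purely illustrative.)

```shell
#!/bin/sh
# Toy file-based cache. fetch <cache-dir> <key> <url> returns the cached
# body for <key> if present, otherwise "calls the backend" and stores it.
mkdir -p by_path by_path_and_query
fetch() {
    dir=$1; key=$2; url=$3
    if [ -f "$dir/$key" ]; then
        cat "$dir/$key"                              # cache hit
    else
        echo "response for $url" | tee "$dir/$key"   # miss: store and return
    fi
}

# Cache key = path only (caching on, parameter NOT marked as a cache key):
A=$(fetch by_path test "/api/test?search=test1")
B=$(fetch by_path test "/api/test?search=test2")   # stale: returns test1's body

# Cache key = path + query string (parameter marked as a cache key):
C=$(fetch by_path_and_query "test-search-test1" "/api/test?search=test1")
D=$(fetch by_path_and_query "test-search-test2" "/api/test?search=test2")

echo "path-only key:  $B"
echo "with query key: $D"
```

With the path-only key, the second call prints the body cached for search=test1 — exactly the behaviour described in the question.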
|
I'm configuring the caching on AWS API Gateway side to improve performance of my REST API. The endpoint I'm trying to configure is using a query parameter. I already enabled caching on AWS API Gateway side but unfortunately had to find out that it's ignoring the query parameters when building the cache key.
For instance, when I make the first GET call with query parameter "test1":
GET https://2kdslm234ds9.execute-api.us-east-1.amazonaws.com/api/test?search=test1
The response for this call is saved in the cache, so when I then make a call with another query parameter, "test2":
GET https://2kdslm234ds9.execute-api.us-east-1.amazonaws.com/api/test?search=test2
I again get the response for the first call.
The caching settings are pretty simple, and I didn't find anything related to configuring parameters.
How can I configure Gateway caching to take into account query parameters?
| AWS API Gateway caching ignores query parameters |
Never mind... apparently SMS only works in the us-east region; you need to change the region from the SNS management page. All seems normal now.
|
I'm trying to use the amazon AWS to send a text message to my phone.
In particular, I'm using the SNS service and got stuck in the process of creating a new subscription.
In the online tutorial they show this:
How come I see this?
Sorry my screenshot won't work while in the drop-down menu, so I took a ghetto picture with my phone.
Any ideas?
| amazon aws sns, sms option not available? |
This is something that comes up quite often, see e.g. this document, this forum thread or this stackoverflow question. The answer is basically no. What I would do in your situation is to run the job every Tuesday and have the first build step check whether to actually run, e.g. by checking whether a file exists and only running if it doesn't. If it exists, it would be deleted so that the job can run the next time this check occurs. You would of course also have to check whether it's Tuesday.
|
I want to schedule Jenkins to run a certain job at 8:00 am every Monday, Wednesday, Thursday and Friday, and 8:00 am every other Tuesday. Right now, the best I can think of is:
# 8am every Monday, Wednesday, Thursday, and Friday:
0 8 * * 1,3-5
# 8am on specific desired Tuesdays, one line per month:
0 8 13,27 3 2
0 8 10,24 4 2
0 8 8,22 5 2
0 8 5,19 6 2
0 8 3,17,31 7 2
0 8 14,28 8 2
0 8 11,25 9 2
0 8 9,23 10 2
0 8 6,20 11 2
0 8 4,18 12 2
which is fine (if ugly) for the remainder of 2012, but it almost certainly won't do what I want in 2013. Is there a more concise way to do this, or one that's year-independent?
| Can I set Jenkins' "Build periodically" to build every other Tuesday starting March 13?
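The flag-file approach from the answer above can be sketched as a first (shell) build step in a job triggered every Tuesday; the file name and the skip behaviour are just one possible convention:

```shell
#!/bin/sh
# First build step for a job scheduled every Tuesday (0 8 * * 2).
# A flag file kept in the workspace toggles between "on" and "off" weeks,
# so the rest of the build effectively runs every other Tuesday.
# If the trigger can fire on other days too, also guard with:
#   [ "$(date +%u)" = 2 ] || exit 0
FLAG=every_other_tuesday.flag

if [ -f "$FLAG" ]; then
    rm -f "$FLAG"
    echo "Off week - skipping the real build steps"
    # exit non-zero here (or use a plugin) if the build should be marked skipped
else
    touch "$FLAG"
    echo "On week - proceeding"
fi
```

Note this relies on the workspace surviving between runs; if the job wipes its workspace, keep the flag file somewhere persistent instead.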