| Response | Instruction | Prompt |
|---|---|---|
I had the same problem with a daily cron job. I used @daily, which runs at 00:00 every day. The crontab line I added was:

@daily /usr/local/bin/msa70_check.sh

and below is the script I run:

#!/bin/bash
# msa70 disk check
/sbin/mdadm --detail /dev/md0 /dev/md1 |
/bin/mailx -s "Disk check on server123 please check" [email protected]

I also had to edit my script and add /sbin/ and /bin/ in front of mdadm and mailx for the cron job to run. | I have a very simple script in my crontab that I want to run every day. It is located in /home:

-rwxr-xr-x 1 root root 40 Apr 15 08:01 kill_slony_stop_sql.sh

It has execute permission, and here is the content:

#!/bin/bash
slon_kill; rcpostgresql stop

and here is the cron line for it to run daily:

56 12 * * * /home/kill_slony_stop_sql.sh

But it is not working for some reason. When I type /home/kill_slony_stop_sql.sh on the command line it works fine, but it is not working in the crontab. Any thoughts? | running a script in crontab |
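A big part of why scripts like this fail under cron is cron's near-empty environment: there is no login-shell PATH, so bare command names such as slon_kill or mdadm may not resolve, while absolute paths always do. A rough local simulation, assuming a POSIX shell at /bin/sh:

```shell
# env -i starts the shell with an empty environment, similar to cron's;
# an absolute path still works even when PATH is empty or minimal
result=$(env -i /bin/sh -c '/bin/echo "absolute path works"')
echo "$result"
```

This is why prefixing every command with its full path (or setting PATH at the top of the crontab) fixes the job.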
set NO_PROXY="$NO_PROXY,192.168.211.158/8443"

That slash is not the port, it's the CIDR, which defines how many IPs should be excluded from the proxy. Separately, it appears you somehow included the colon in the one provided to --docker-env, which I think is also wrong. And the $NO_PROXY, syntax in your set command is also incorrect, since that's the unix-y way of referencing environment variables -- you would want set NO_PROXY="%NO_PROXY%,... Just be careful: unless you already have a variable named NO_PROXY, that set will expand to read set NO_PROXY=",192.168.etcetc", which I'm not sure is legal syntax for that variable. | I am trying to start minikube behind a corporate proxy on a Windows machine. I am using the following start command:

minikube start --alsologtostderr --vm-driver="hyperv" --docker-env http_proxy=http://proxyabc.uk.sample.com:3128 --docker-env https_proxy=http://proxyabc.uk.sample.com:3128 --docker-env "NO_PROXY=localhost,127.0.0.1,192.168.211.157:8443"

minikube version = 0.28.0
kubectl version = 1.9.2

I've also tried setting the no-proxy variable before the command:

set NO_PROXY="$NO_PROXY,192.168.211.158/8443"

But every time I run the "minikube start" command I end up with the following message:

Error starting cluster: timed out waiting to unmark master: getting node minikube: Get https://192.168.211.155:8443/api/v1/nodes/minikube: Forbidden

I have already tried the solutions at
https://github.com/kubernetes/minikube/issues/2706
https://github.com/kubernetes/minikube/issues/2363 | Kubernetes Minikube not starting behind corporate proxy (Windows) |
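As the answer says, the part after the slash in a NO_PROXY entry is a CIDR prefix length (0-32 for IPv4), not a port. Python's standard ipaddress module makes the distinction visible; the addresses below are just illustrative:

```python
import ipaddress

# /24 is a valid prefix length: it covers the whole 192.168.211.0-255 range
net = ipaddress.ip_network("192.168.211.0/24")
print(net.num_addresses)

# /8443 looks like a port but is rejected as a prefix length
try:
    ipaddress.ip_network("192.168.211.158/8443")
except ValueError as exc:
    print("rejected:", exc)
```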
I think you don't need to copy the postgres jar to the slaves, as the driver program and cluster manager take care of everything. I've created a dataframe from a Postgres external source in the following way:

Download the postgres driver jar:

cd $HOME && wget https://jdbc.postgresql.org/download/postgresql-42.2.5.jar

Create the dataframe:

attribute = {'url' : 'jdbc:postgresql://{host}:{port}/{db}?user={user}&password={password}' \
    .format(host=<host>, port=<port>, db=<db>, user=<user>, password=<password>),
    'database' : <db>,
    'dbtable' : <select * from table>}
df = spark.read.format('jdbc').options(**attribute).load()

Submit the spark job:

Add the downloaded jar to the driver class path while submitting the spark job:

--properties spark.driver.extraClassPath=$HOME/postgresql-42.2.5.jar,spark.jars.packages=org.postgresql:postgresql:42.2.5 | I have an existing EMR cluster running and wish to create a DF from a PostgreSQL DB source.

To do this, it seems you need to modify the spark-defaults.conf with an updated spark.driver.extraClassPath pointing to the relevant PostgreSQL JAR that has already been downloaded on the master & slave nodes, or you can add these as arguments to a spark-submit job.

Since I want to use an existing Jupyter notebook to wrangle the data, and am not really looking to relaunch the cluster, what is the most efficient way to resolve this?

I tried the following:

1. Created a new directory (/usr/lib/postgresql/) on master and slaves and copied the PostgreSQL jar to it (postgresql-9.41207.jre6.jar).

2. Edited spark-default.conf to include the wildcard location:

spark.driver.extraClassPath :/usr/lib/postgresql/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/$

3. Tried to create the dataframe in a Jupyter cell using the following code:

SQL_CONN = "jdbc:postgresql://some_postgresql_db:5432/dbname?user=user&password=password"
spark.read.jdbc(SQL_CONN, table="someTable", properties={"driver":'com.postgresql.jdbc.Driver'})

I get a Java error as per below:

Py4JJavaError: An error occurred while calling o396.jdbc.
: java.lang.ClassNotFoundException: com.postgresql.jdbc.Driver

Help appreciated. | Using Postgresql JDBC source with Apache Spark on EMR |
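One detail from the question worth flagging: the class com.postgresql.jdbc.Driver does not exist in the PostgreSQL JDBC jar; the driver class is org.postgresql.Driver. The properties passed to spark.read.jdbc are a plain Python dict, so the corrected shape (values here are the question's placeholders) looks like this:

```python
# JDBC connection options as they would be passed to spark.read.jdbc;
# building the dict needs no Spark, which makes the fix easy to illustrate
sql_conn = "jdbc:postgresql://some_postgresql_db:5432/dbname"
properties = {
    "driver": "org.postgresql.Driver",  # not com.postgresql.jdbc.Driver
    "user": "user",
    "password": "password",
}
print(properties["driver"])
```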
objc_msgSend() effectively drops messages to nil. If the method has a non-void return type, it will return something like nil, i.e. 0, NO, or 0.0, although this isn't always guaranteed for all return types on all platforms. Thus, the only errors you're likely to encounter are when your object isn't really nil (e.g. when it's a reference to a deallocated object), or when you don't handle a nil return value appropriately.
In your example, -count returns an NSUInteger, so the value of i will be 0, since count will return 0 for a message to nil that should return an NSUInteger.
|
I know it's ok to send the release message to nil objects. What about other messages? The following code prints 0 to the console. I'd like to understand why.
NSArray *a = nil;
int i = [a count];
NSLog(@"%d", i);
Does sending messages to nil objects ever cause errors?
| How does Objective-c handle messages sent to nil objects? |
If your image is unix-like, you can check whether the process is running with

$ ps aux | grep '[s]idekiq'

But this doesn't guarantee that everything is working inside sidekiq and redis. A better approach is described/developed in this sidekiq plugin: https://github.com/arturictus/sidekiq_alive. I'm facing problems with livenessProbe for k8s and trying to solve it without using this lib, but I have not been successful yet. | I'm using kubernetes on my cluster with several rails / node docker images. Most of them have a :3000/healthz health check that simply returns status 200 with OK in the body. Now I'm trying to discover the best way this health check can be performed on the docker image running sidekiq. How can I verify that the worker is running? | How to check health of docker image running sidekiq |
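The [s] in the grep pattern is a small trick: it prevents grep from matching its own entry in the ps output. A throwaway long-running process is enough to demonstrate it (this assumes a unix-like system with ps available):

```shell
# Stand-in for the sidekiq process
sleep 60 &
pid=$!
# '[s]leep' matches the sleep process but not the grep command line itself
found=$(ps aux | grep '[s]leep 60' | wc -l)
kill "$pid"
echo "matching processes: $found"
```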
First, this is only if you are on Android.
Find a terminal emulator like Termux.
Grant the emulator storage access.
Move all the relevant files into a new folder.
Install git, using whatever package manager you have (pkg or apt-get both work on Termux).
Create a git remote on the GitHub website or app.
Use your normal git commands to add the folder.
|
When I search or ask how to upload a folder to GitHub from my mobile phone, everyone says that you can only upload an entire folder from a desktop or laptop computer. Please help me.
| How can I upload an entire folder to GitHub from my mobile phone? |
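Once git is installed in Termux (pkg install git), the workflow in the steps above is ordinary git. A sketch of the local part in a scratch directory; the GitHub remote URL in the comment is a placeholder you would copy from the repository you created on the site:

```shell
workdir=$(mktemp -d)
cd "$workdir"
git init -q myproject
cd myproject
git config user.email "you@example.com"
git config user.name "Your Name"
echo "hello from my phone" > notes.txt
git add notes.txt
git commit -qm "first commit from mobile"
# On the phone you would finish with:
#   git remote add origin https://github.com/you/myproject.git
#   git push -u origin HEAD
commits=$(git rev-list --count HEAD)
echo "commits: $commits"
```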
Lots of options.
The best option is probably to make a new branch and cherry-pick your fix into that branch:
git checkout -b my-fix-branch origin/master
git cherry-pick master
git push -u origin my-fix-branch
then do a pull request from my-fix-branch on GitHub. (This assumes your working branch is named master, based off the remote master; change the branch names as appropriate).
If nobody has pulled or cloned your fork, you can rewrite history forcefully. Do git rebase -i HEAD~2 and delete the offending commit, then git push --force. This will break any other repo based on your fork, so do not do this if you suspect anyone else is using your repo.
|
OK, I did something stupid.
I forked a repo I am supposed to contribute to.
So I then literally created a file called "blafile" to check I can commit (obviously I
did not understand what a fork is) and committed with a message "check I can commit".
I pushed to my github forked repo and forgot about it.
I started fixing a bug on the next day.
I committed my fix and pushed to my forked repo with message "fixed bug xyz".
Now I wanted to issue a pull request, and all of a sudden I see my "check I can commit" commit. I'd rather not like that to appear on the pull request. :)
Can I entirely delete that commit? Can I issue a pull request on a single commit or will it pull all my commits?
I know I can locally do git reset --hard HEAD~1 (it's a small fix I could redo quickly), but that only fixes my local repo, not my github (forked) repo.
| I need to delete a commit to a fork |
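The cherry-pick route can be rehearsed locally before touching GitHub. This sketch builds a throwaway repo with the same three-commit shape (base, the unwanted "check I can commit", then the fix) and moves only the fix to a clean branch:

```shell
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git config user.email "a@b.c"
git config user.name "tester"
git commit -q --allow-empty -m "base"
git branch -M master
git commit -q --allow-empty -m "check I can commit"   # the commit to leave behind
git commit -q --allow-empty -m "fixed bug xyz"        # the commit to keep
# Branch from before the junk commit, then bring over only the fix
git checkout -q -b my-fix-branch master~2
git cherry-pick --allow-empty master
subjects=$(git log --format='%s')
echo "$subjects"
```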
You can call cudaDeviceReset() at the end of your application if you choose. In fact, this is recommended for proper usage of the visual profiler.
If you are in fact finished with the GPU and ready to exit your application, there should be no downside to using cudaDeviceReset() if you choose. Note that probably neither of these methods (cudaDeviceReset vs. many cudaFree statements) are really necessary for this scenario since application exit will also free the resources (due to the destruction of the cuda context at application exit). But note the statement above about profiler usage.
|
There are various questions regarding the proper use of cudaDeviceReset(), but I haven't been able to find an answer to the following question.
The doc on cudaDeviceReset() says that it explicitly destroys and cleans up all resources associated with the current device in the current process.
Suppose I have a program with many arrays, all allocated with cudaMalloc. Could I use cudaDeviceReset instead of many cudaFree statements at the end of my program to quickly free all the memory on the device? Are there any disadvantages to doing so?
| cudaDeviceReset v. cudaFree |
Something like this:

#!/bin/bash
set -x
TEMPDIR=$(mktemp -d)
CONFIG=$(aws cloudfront get-distribution-config --id CGSKSKLSLSM)
ETAG=$(echo "${CONFIG}" | jq -r '.ETag')
echo "${CONFIG}" | jq '.DistributionConfig' > ${TEMPDIR}/orig.json
echo "${CONFIG}" | jq '.DistributionConfig | .DefaultCacheBehavior.LambdaFunctionAssociations.Items[0].LambdaFunctionARN= "arn:aws:lambda:us-east-1:xxxxx:function:test-func:3"' > ${TEMPDIR}/updated.json
aws cloudfront update-distribution --id CGSKSKLSLSM --distribution-config file://${TEMPDIR}/updated.json --if-match "${ETAG}" | I would like to update the cloudfront distribution with the latest lambda@edge function using the CLI. I saw this documentation, but could not figure out how to update only the lambda ARN. Can someone help? | How to update lambda@edge ARN in cloudfront distribution using CLI |
I think you might want to investigate the other authentication components that CakePHP has to offer. BasicAuthenticate should be of particular interest. If you go down this route, the authentication will still happen against a userModel rather than a .htpasswd file. As for the IP restriction, that should be relatively safe. IP spoofing is possible but hard. | I am developing a website with CakePHP. I have an AdminsController for admins to authenticate. However, I want to create extra security by adding .htaccess password protection. I tried to do it by adding .htaccess and .htpasswd files in my Admins view directory, since I want the other pages of my site to work normally, but it doesn't work. So how do I add .htaccess and .htpasswd for only a specific view? In my AdminsController's beforeFilter method I've added:

if(env('HTTP_HOST') == 888.888.888.888 || ......)

the list of IP addresses that should be allowed. Can I say that it is safe now? | Prevent access to a specific view in cakephp |
I am not sure you can ever do it. fmin_l_bfgs_b is provided not by pure python code, but by an extension (a wrap of FORTRAN code). On the Win32/64 platform it can be found at \scipy\optimize\_lbfgsb.pyd. What you want may only be possible if you can compile the extension differently or modify the FORTRAN code. If you check that FORTRAN code, it has double precision all over the place, which is basically float64. I am not sure just changing them all to single precision will do the job. Among the other optimization methods, cobyla is also provided by FORTRAN. Powell's methods too. | I am trying to optimize functions with GPU calculation in Python, so I prefer to store all my data as ndarrays with dtype=float32.

When I am using scipy.optimize.fmin_l_bfgs_b, I notice that the optimizer always passes a float64 (on my 64-bit machine) parameter to my objective and gradient functions, even when I pass a float32 ndarray as the initial search point x0. This is different when I use the cg optimizer scipy.optimize.fmin_cg, where when I pass in a float32 array as x0, the optimizer will use float32 in all subsequent objective/gradient function invocations.

So my question is: can I force scipy.optimize.fmin_l_bfgs_b to optimize on float32 parameters like scipy.optimize.fmin_cg does?

Thanks! | How to enforce scipy.optimize.fmin_l_bfgs_b to use 'dtype=float32' |
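The precision gap behind this answer can be seen with the standard library alone: round-tripping a double through IEEE single precision, which is what a float32 build of the FORTRAN core would be limited to, loses information:

```python
import struct

x = 0.1  # a Python float is a float64 (double)
# Pack into 4-byte single precision and unpack back to a double
x32 = struct.unpack("f", struct.pack("f", x))[0]
print(x == x32)             # the round trip is not exact
print(abs(x - x32) < 1e-7)  # the error is around float32 epsilon, tiny but real
```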
I'm confused, though, as to why most Dockerfiles specify the OS in the
FROM line of the Dockerfile. I thought that as it was using the
underlying OS, then the OS wouldn't have to be defined.
I think your terminology may be a little confused.
Docker indeed uses the host kernel, because Docker is nothing but a way of isolating processes running on the host (that is, it's not any sort of virtualization, and it can't run a different operating system).
However, the filesystem visible inside the container has nothing to do with the host. A Docker container can run programs from any Linux distribution. So if I am on a Fedora 24 Host, I can build a container that uses an Ubuntu 14.04 userspace by starting my Dockerfile with:
FROM ubuntu:14.04
Processes running in this container are still running on the host kernel, but their entire userspace comes from the Ubuntu distribution. This isn't another "operating system" -- it's still the same Linux kernel -- but it is a completely separate filesystem.
The fact that my host is running a different kernel version than maybe you would find in an actual Ubuntu 14.04 host is almost irrelevant. There are going to be a few utilities that expect a particular kernel version, but most applications just don't care as long as the kernel is "recent enough".
So no, there is no virtualization in Docker. Just various (processes, filesystem, networking, etc) sorts of isolation.
|
I've read that on Linux, Docker uses the underlying Linux kernel to create containers. So this is an advantage because resources aren't wasted on creating virtual machines that each contain an OS.
I'm confused, though, as to why most Dockerfiles specify the OS in the FROM line of the Dockerfile. I thought that as it was using the underlying OS, then the OS wouldn't have to be defined.
I would like to know what actually happens if the OS specified doesn't match the OS flavour of the machine it's running on. So if the machine is CentOS but the Dockerfile has FROM Debian:latest in the first line, is a virtual machine containing a Debian OS actually created.
In other words, does this result in a performance reduction because it needs to create a virtual machine containing the specified OS?
| If docker uses the underlying linux os, why specify the OS in the FROM line of a Dockerfile |
When you first asked this question, it was not possible.
But it is now possible to do asynchronous memcache operations in the Python version of the SDK starting in version 1.5.4 (see the announcement) and for Java users from version 1.6.0 (announcement)
|
A typical usage of the memcache (in pseudocode) looks like this:
Map data = getFromMemcache(key);
if(data == null){
data = doSomethingThatTakesAWhile();
setMemcache(key, data);
}
return data;
If the setMemcache call could be asynchronous, that would be about 10 milliseconds less that the user has to wait for their response. The function in this scenario doesn't really care whether the setMemcache call was successful, so it doesn't need to wait for it synchronously.
Is there a way to do an asynchronous memcache set in app engine? If there isn't currently, is it something that could be possible in the future?
| Google App Engine - Is there any way to do an asynchronous memcache set? |
Iguazio uses the standard monitoring technology stack of Prometheus and Grafana, which means it is possible to see the performance of NGINX (the web server in Iguazio). In the Grafana dashboard, see 'private / NGINX - Request Handling Performance' and the view 'Total request handling time': | How is it possible to identify throughput in the MLRun solution (I use MLRun 1.3.0 with Iguazio version 3.5.2)? I am using the MLRun real-time function 'nuclio-risk-sentiment' and I would like to see the request/response time of MLRun. It is easy to see e.g. Memory, CPU, and Network I/O usage (see the board in Grafana 'private / Kubernetes Pods'). Do you know where to see the duration of the response/request (real throughput)? | MLRun, Issue with view to REST API throughput |
A nested location is the right way to create locations with regular expressions and it should do the trick for what you want to achieve.
location / {
proxy_pass http://192.168.12.12:91;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~* \.html$ {
proxy_cache cache_one;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid any 1m;
expires 1m;
}
}
I’m not totally sure if the nested location is really using the options from the outer location block. If it doesn’t (I can’t test this right now) you could create separate files.
location / {
include proxy.conf;
location ~* \.html$ {
include proxy.conf;
proxy_cache cache_one;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid any 1m;
expires 1m;
}
}
proxy.conf
proxy_pass http://192.168.12.12:91;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
This is definitely going to work.
|
I want to cache all *.html files in a Nginx reverse proxy, So I added the config:
# Original configuration
location = / {
proxy_pass http://192.168.12.12:91;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Added for cache
location ~ \.html {
proxy_pass http://192.168.12.12:91;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cache cache_one;
proxy_cache_key $host$uri$is_args$args;
proxy_cache_valid 200 301 302 1m;
proxy_cache_valid any 1m;
expires 1m;
}
Repeating proxy_pass and proxy_set_header twice feels bad.
How can I optimize this? Thanks!
| How can I optimize this nginx proxy cache configuration? |
The problem is that you installed it in /codeigniter3/. This should fix it:

// remove index.php
$config['index_page'] = "";
// allow installation in a subfolder of your webroot
$config['uri_protocol'] = "REQUEST_URI";

And keep your rewrite settings, they are ok. | I am trying to remove the index page in CodeIgniter. The first step I do is this:

// old code
$config['index_page'] = "index.php";
// new updated code (only need to remove index.php)
$config['index_page'] = "";

Then for the second step I create a .htaccess file in the root of CodeIgniter and put in this code:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]

But it's still the same problem and I can't refresh the web page. With the index page the URL works: http://localhost:8089/codeigniter3/index.php/Hello/dispdata, but without the index page it doesn't work: http://localhost:8089/codeigniter3/Hello/dispdata. Hello is the controller. Finally, thanks for the help. :) | How to Remove index.php in URL |
Change the remote url to ssh. https will keep asking you for a password every time you wish to run git pull/push/fetch. Simply follow these steps and you will set up your ssh key in no time.

Generate a new ssh key (or skip this step if you already have a key):

ssh-keygen -t rsa -C "your@email"

Once you have your key set in the home/.ssh directory (or Users/<your user>/.ssh under Windows), open it and copy the content.

How to add the ssh key to your github account?

1. Login to the github account
2. Click on the rancher on the top right (Settings)
3. Click on the SSH keys
4. Click on the Add ssh key
5. Paste your key and save

Change the remote url:

git remote set-url origin <new_ssh_url>

And you are all set to go :-) | I'm facing this issue when I try to push the code to the repository from my local machine.

user@user:~/rails_projects/first_app$ git push origin master
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

How do I resolve this issue? | How do I remove the permission in Github? |
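The final remote-url switch is easy to try in a scratch repository; git remote get-url (available since git 2.7) confirms the change, with placeholder user/repo names:

```shell
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
# Start with the https form, then switch to the ssh form
git remote add origin https://github.com/user/repo.git
git remote set-url origin git@github.com:user/repo.git
url=$(git remote get-url origin)
echo "$url"
```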
Prometheus by default doesn't accept data via the remote_write protocol. This option can be enabled by running Prometheus with the --enable-feature=remote-write-receiver command-line flag. See these docs.

Side notes: you can also write the collected data from the client-side Prometheus to any other supported centralized Prometheus-compatible remote storage from this list. Some of these systems support the Prometheus query API, so they can be used as a drop-in replacement for Prometheus in Grafana. See, for example, the system I work on - VictoriaMetrics. There are also lightweight alternatives to Prometheus, which can be used at the client side in order to reduce resource usage: Prometheus agent and vmagent. | I have been trying to set up monitoring for a server which is on the client side (unreachable). One way I tried was prometheus remote write. As I am new to prometheus, I expected that the client Prometheus would push the metrics to a central Prometheus, and then I could create a Grafana dashboard. I guess I am wrong; somehow I am getting this error:

"Failed to send batch, retrying" err="Post "http://xx.xx.xx.xx:9090/api/v1/write": context deadline exceeded"

I tried everything to solve this problem but nothing worked. Is it because the client and server Prometheus are unreachable to each other? Is it necessary even in a remote write config for Prometheus to reach the endpoint? Any input is welcomed; I have been stuck for over a month now.

UPDATE: I tried telegraf and influxdb instead of a central Prometheus. This time both the client Prometheus and telegraf can ping each other, but I am still getting the same error:

"Failed to send batch, retrying" err="Post "http://xx.xx.xx.xx:1234/receive": context deadline exceeded" | Prometheus for unreachable endpoint monitoring |
It sounds like you want an exact duplicate of the repository on GitHub without marking it as a fork. GitHub documents how to duplicate a repository in their help. To make an exact duplicate, you need to perform both a bare clone and a mirror push. Open up the command line, and type these commands:

git clone --bare https://github.com/exampleuser/old-repository.git
# Make a bare clone of the repository
cd old-repository.git
git push --mirror https://github.com/exampleuser/new-repository.git
# Mirror-push to the new repository
cd ..
rm -rf old-repository.git
# Remove our temporary local repository | What is your workflow if you have a "boilerplate" pushed to github, and you'll be creating another project out of it? Would you clone it and change the remote?

git clone link-of-github-repo
git remote set-url origin link-of-ANOTHER-repo

Then do app-specific changes, like changing the readme, package.json etc., and commit and push? If there are better ways, can you cite some? | Version control an application that is cloned from a boilerplate [closed] |
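The bare-clone-plus-mirror-push sequence works with local paths standing in for the GitHub URLs, which makes it easy to check that all history really arrives in the new repository:

```shell
workdir=$(mktemp -d)
cd "$workdir"
# A source repo standing in for the original GitHub repository
git init -q src
( cd src &&
  git config user.email "a@b.c" &&
  git config user.name "tester" &&
  git commit -q --allow-empty -m "some history" )
# An empty bare repo standing in for the new GitHub repository
git init -q --bare new-repository.git
# The duplicate procedure: bare clone, then mirror push
git clone -q --bare src old-repository.git
cd old-repository.git
git push -q --mirror "$workdir/new-repository.git"
cd "$workdir/new-repository.git"
count=$(git rev-list --count --all)
echo "commits mirrored: $count"
```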
On GitHub it's not possible to compare two unrelated repos.
On your computer you can:
Go to the working directory of your local repo
Add a remote for the other repo and fetch it
Compare using git diff
For example:
cd /path/to/repo
git remote add other URL_TO_OTHER
git fetch other
git diff other/branchname
git diff ..other/branchname # diff in the other direction
|
Git novice here, how do I compare two completely separate repos (no forks / branches between them) using github? If this is not possible, how do I compare the two repos?
| How do I compare repos from different projects through github? [duplicate] |
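The same recipe can be tried against two unrelated throwaway repos; after fetching, git diff compares any branch of the other repo to the local tree even though the histories share nothing:

```shell
workdir=$(mktemp -d)
cd "$workdir"
for name in repo_a repo_b; do
  git init -q "$name"
  ( cd "$name" &&
    git config user.email "a@b.c" &&
    git config user.name "tester" &&
    echo "content of $name" > file.txt &&
    git add file.txt &&
    git commit -qm "init" &&
    git branch -M main )
done
cd repo_a
git remote add other ../repo_b
git fetch -q other
# file.txt differs between the two repos, so one file changed
stat=$(git diff --shortstat other/main)
echo "$stat"
```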
Answer from micrometer support:

Generally I'd say this isn't necessary. The value of the metric is a floating point seconds value. If you want to display ms on a chart you can safely multiply the time series by 1000.
There is a healthy principle of using base units whenever possible. Seconds is a base unit, which makes it easier to scale the time series in either direction (either down to millis or up to minutes)
The word 'seconds' comes from that convention in the Prometheus ecosystem. Other ecosystems may suggest that this is redundant
A naming convention can be set on a registry under registry.config(). | At the moment, on my endpoint /actuator/prometheus I receive an answer for a timer like this:

...
# HELP MY_NAME_seconds
# TYPE MY_NAME_seconds summary
MY_NAME_seconds_count{application="MyApplication",smth="else",} 520.0
MY_NAME_seconds_sum{application="MyApplication",smth="else",} 1249.024
# HELP MY_NAME_seconds_max
# TYPE MY_NAME_seconds_max gauge
...

I'm creating my timer like this:

Metrics.timer(operation, tags).record(endTime - startTime, TimeUnit.MILLISECONDS);

Is it possible to change the naming from MY_NAME_seconds_count to MY_NAME_millis_count? | How can I change metrics naming in Micrometer |
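If the base-unit name stays, the scaling to milliseconds can happen at query time instead of in the metric name. A hypothetical PromQL expression for a Grafana panel, reusing the metric and label names from the question:

```promql
# Mean request time in milliseconds, computed from the seconds-based summary
  rate(MY_NAME_seconds_sum{application="MyApplication"}[5m])
/ rate(MY_NAME_seconds_count{application="MyApplication"}[5m])
* 1000
```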
<Files "\.pdf$">
Header set X-Robots-Tag "noindex, nofollow"
</Files>

You have copied the linked solution incorrectly. To match a regex with the Files directive you need the additional ~ argument, i.e. <Files ~ "\.pdf$">. (Although the FilesMatch directive is arguably preferable when using a regex.) However, you do not need a regex here. Just use the standard Files directive with a wildcard (not regex) pattern. For example:

<Files "*.pdf">
Reference:
https://httpd.apache.org/docs/current/mod/core.html#files
https://httpd.apache.org/docs/current/mod/core.html#filesmatch | My sample PDF URL is: https://askanydifference.com/wp-content/uploads/2022/09/Difference-Between-Import-and-Export.pdf

I am trying to noindex all the PDF files on my WordPress website. While doing research, I learnt that they can only be marked noindex using .htaccess and not any other means, as PDF files don't have any HTML code for a meta tag. So, following the solution given below, I added the X-Robots-Tag to my .htaccess file:

https://webmasters.stackexchange.com/questions/14520/how-to-prevent-a-pdf-file-from-being-indexed-by-search-engines

<Files "\.pdf$">
Header set X-Robots-Tag "noindex, nofollow"
</Files>
# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated, and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
<IfModule mod_headers.c>
Header set Content-Security-Policy "block-all-mixed-content"
</IfModule>

I have placed the .htaccess file in the public_html folder. But when I inspect the file in Firefox under the Network tab, the X-Robots-Tag is not present. All my caching is disabled. Any help on where I am going wrong? | X-Robots-Tag not shown in HTTP response header |
On the cron command line, type:

bash -l -c '/home4/USER/public_html/code.rb'

On top of your code.rb file add:

#!/usr/local/bin/ruby

and also open and edit .bashrc just to make sure you have the gems directory included:

export HPATH=$HOME
export GEM_HOME=$HPATH/ruby/gems
export GEM_PATH=$GEM_HOME:/lib64/ruby/gems/1.9.3
export GEM_CACHE=$GEM_HOME/cache
export PATH=$PATH:$HPATH/ruby/gems/bin
export PATH=$PATH:$HPATH/ruby/gems

Special thanks to Jordan, who gave me the answer to this issue. Note: after doing a full JustHost wipeout and running the whereis ruby command, I had to change #!/usr/local/bin/ruby to #!/usr/bin/ruby. | My cron setup is:

0 * * * * ruby /directory/to/ruby/file.rb

And I get this error:

/usr/lib64/ruby/1.9.3/rubygems/custom_require.rb:36:in `require': cannot load such file -- mechanize (LoadError)
from /usr/lib64/ruby/1.9.3/rubygems/custom_require.rb:36:in `require'
from /home4/ofixcom1/rails_apps/products.rb:3:in `<main>'

When I run that script over SSH it runs without a problem, but when cron runs it, it gives me this error. I have read a lot of solutions, even with RVM, and I tried almost all of them. A previous cron with ruby was running smoothly; I don't know why it is not working with mine. I forgot to mention, on the JustHost help they have this link with examples for other codes: Cron Setup | Command to run a RUBY cron job on JUSTHOST |
You can see all this and more if you load the SOS.dll (or PSSCOR2.dll) extension into WinDbg or even into Visual Studio. SOS is part of the .NET framework and it basically turns a native debugger such as WinDbg into a "managed code aware" debugger. SOS has commands that will let you inspect the managed heap, objects and their references, and so on. For more information see Tess' excellent blog. For another example of how to use SOS see this question. | Ok, this question is not exactly a programming question, but this is what can really make programming more practical and easy to implement. This question comes up because each time I write int c=10; or MyClass objMyClass=new MyClass(); I want to see where in memory the value has been created (though we can see the address as a hex value now). Can we see (when we declare a variable) where it is being created in memory? In which state, i.e. C# -> IL -> Machine Language, is the variable present in memory? And how do different events and functions update its value? This is just something like my CPU emulator. I am asking because this question has been popping up in my mind for a long time: whenever I learn a new concept, the reflex is, OK, how does it look in memory? | Is there a CPU emulator or a way to see how things are created and Destroyed in Memory |
The recommended method is multistage builds:
https://docs.docker.com/develop/develop-images/multistage-build/
That is, don't separate production and test Dockerfiles. Instead, keep all the requirements in one file and build the target stage you need.
An example Dockerfile:
FROM python:3.8.7-slim-buster AS production
ADD requirements.txt ${PROJECT_DIR}/requirements.txt
RUN pip install -r requirements.txt
FROM production AS test
ADD requirements-test.txt ${PROJECT_DIR}/requirements-test.txt
RUN pip install -r requirements-test.txt
and then for your production build target the correct stage:
docker build --target production -t org/service:latest .
Multistage build syntax was introduced in Docker Engine 17.05.
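If you drive the build through docker-compose, as in the question, newer Compose file formats also let you pick the stage directly with the `target` key (added in Compose file format 3.4), so both compose files can share the same Dockerfile. A sketch; the service name `app` here is hypothetical:

```yaml
# docker-compose.yaml - production builds only the "production" stage
services:
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production

# docker-compose-dev.yaml - test runs build the "test" stage instead:
#      target: test
```

With that in place, `docker-compose -f docker-compose-dev.yaml run app py.test tests/` would build the test stage without a second Dockerfile.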
|
I have a Dockerfile which installs production & test dependencies. I want to have separate image for tests, so production image is smaller, without to much code duplication. Maybe there is something like FROM statement for referencing other Dockerfiles?
Dockerfile has following lines:
ADD requirements.txt ${PROJECT_DIR}/requirements.txt
RUN pip install --no-cache --process-dependency-links --trusted-host github.com -r requirements.txt
ADD requirements-test.txt ${PROJECT_DIR}/requirements-test.txt
RUN pip install --no-cache --process-dependency-links --trusted-host github.com -r requirements-test.txt
The first two lines install dependencies for the project; the second two install dependencies for testing (pytest, pylint, etc.).
I also have docker-compose that starts database, redis cache, etc. This is how I run service and run tests:
run:
docker-compose -f docker-compose.yaml run
test:
docker-compose -f docker-compose-dev.yaml run py.test tests/
Both docker-compose files have this build config for my container:
build:
context: .
dockerfile: ./Dockerfile
So, I could reference different Dockerfiles from my docker-compose.yaml, but I don't want them to be complete copies that have only two lines difference.
| How to implement Dockerfile inheritance? |
We ran into a similar issue for an application I'm working on. The solution we ended up with is generating S3 signed URLs that have short expiration times on them. This allows us to generate a new signed link with every request to the web server and pass that link to our known auth'd user, who then has access for a very limited amount of time (a few seconds). In the case of images we wanted to display in the DOM, we had our API respond with an HTTP 303 (See Other) header and the signed URL, which expired within a couple of seconds. This allowed the browser time to download the image and display it before the link expired.
A couple of risks around this solution: we know a user could possibly request a signed URL and share it with another service before the expiration happens, and an un-auth'd user who was intercepting network traffic could potentially intercept the request and make it themselves. We felt these were edge-case enough that we were comfortable with our solution.
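The short-expiry idea itself is easy to prototype outside S3. Below is a minimal, hypothetical Python sketch of HMAC-signed expiring links. It is not S3's real signature algorithm (the AWS SDKs generate proper presigned URLs for you); it only illustrates why a leaked link stops working after the TTL:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key; keep it on the server only

def make_signed_url(path, ttl_seconds=5, now=None):
    """Return a URL for `path` that is only valid for ttl_seconds."""
    expires = int((now if now is not None else time.time()) + ttl_seconds)
    msg = "{}:{}".format(path, expires).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "{}?expires={}&sig={}".format(path, expires, sig)

def verify_signed_url(url, now=None):
    """Accept the URL only if the signature matches and it has not expired."""
    path, query = url.split("?", 1)
    params = dict(kv.split("=", 1) for kv in query.split("&"))
    if (now if now is not None else time.time()) > int(params["expires"]):
        return False  # link has expired
    msg = "{}:{}".format(path, params["expires"]).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"])
```

Because the expiry is part of the signed message, a client cannot extend a link's lifetime by editing the `expires` parameter.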
Comment (erik258): you always have to deal with users leaking your content, and SSL effectively stops any reasonable MITM. And since the request is signed and without any actual credential, it's almost certainly way safer than whatever @virepo's PHP site is doing for authentication. So in my mind this is a great solution. I worry more about the link expiring and the user being confused than anything else.
|
When I go to the url of my bucket file it downloads straight away. However I only want users that are logged into my application to have access to these files.
I have been searching for hours but cannot find out how to do this in php from my app. I am using laravel to do this so the code may not look familiar. But essentially it just generates the url to my bucket file and then redirect to that link which downloads it
$url = Storage::url('Shoots/2016/06/first video shoot/videos/high.mp4');
return redirect($url);
How can I make this file accessible only to users who are logged into my application?
| restrict access to amazon s3 file to only allow logged in users access |
Is the KubeDNS addon running? You should see the kube-dns pods in your kube-system namespace when you list pods. If you don't see those pods, try installing the addon: https://coreos.com/kubernetes/docs/latest/deploy-addons.html | I created a cluster with 2 VMs. I followed the instructions listed below. This is on RHEL 7.3.
This is after kubernetes was installed using yum.
The version of kubernetes is 1.7
Commands on Master01 only:
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
systemctl stop firewall
systemctl disable firewall
systemctl status firewall
systemctl start iptables.service
systemctl enable iptables.service
iptables -F
service kubelet restart
kubeadm init --pod-network-cidr 10.244.0.0/16
(make sure you copy the kubeadm join command that gets displayed after cluster creation)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
kubectl describe nodes
cd ~/Downloads
kubectl apply -f flannel.yml
kubectl apply -f flannel-rbac.yml
kubectl create -f rolebinding.yml
kubectl create -f role.yml
Commands on node only:
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
systemctl stop firewall
systemctl disable firewall
systemctl status firewall
systemctl start iptables.service
systemctl enable iptables.service
iptables -F
kubeadm join --token xxxxxx.xxxxxxxxxxxxxx x.x.x.x:6443
The issue I am having is that DNS is not working as expected. I have been struggling with this for the past two days and would appreciate any help. | dns issue on newly created kubernetes cluster
Because you didn't set X509KeyStorageFlags.PersistKeySet as required, the certificate is in fact not imported to the store as you wished. Further explanation can be found in KB950090 | I am trying to replicate what IIS Import does. I have an application that needs to import the certificates programmatically, but it's not working because I seem to be missing a step. If I import the same certificate through the IIS import utility, it works perfectly. In code:
private X509Certificate2Collection x509 = new X509Certificate2Collection();
private X509Store IIS = new X509Store(StoreName.My, StoreLocation.LocalMachine);
x509.Import(path, password, X509KeyStorageFlags.Exportable);
var certificate = new X509Certificate2(path, password, X509KeyStorageFlags.Exportable | X509KeyStorageFlags.MachineKeySet);
IIS.Open(OpenFlags.ReadWrite);
IIS.Add(certificate);
IIS.Close();
"netsh http add sslcert ipport=0.0.0.0:" + port.ToString() + " certhash=" + CertificateThumbprint + " appid={2d967d25-4edf-4962-9b6c-5b3c4d4de48d}";
The netsh binding FAILS with the error "a specified logon session does not exist. it may already have been terminated". IF I first import the certificate through the IIS manager, THEN run the netsh command, this all works just fine, so I must be missing something in my code that IIS is doing in the background. | Where does IIS import cert to?
Remove the equal sign:
ENV NODE_ENV production
Comments: "But I have other variables like ENV UV_THREADPOOL_SIZE=5 which do not have any issue." (Hacker) / "Syntax is not the problem in this case; ENV supports both styles as shown here: docs.docker.com/engine/reference/builder/#env" (Mostafa Hussein) | I am trying to use the code below in Node.js:
if (process.env.NODE_ENV !== 'production')
I tried to set the NODE_ENV variable from the Dockerfile like below:
FROM collinestes/docker-node-oracle:10-slim
ENV NODE_ENV=production
EXPOSE 8085
CMD ["npm","start"]
If I run my docker image, it does not start and throws an error. If I remove NODE_ENV, all runs fine. Is this the right way to set NODE_ENV from a Dockerfile? | set NODE_ENV variable in production via docker file
As far as I know this is not achievable by putting each section as an array element. Instead, you can do something like the following:
command:
- /bin/sh
- -c
- |
./kubectl -n $MONGODB_NAMESPACE exec -ti $(kubectl -n $MONGODB_NAMESPACE get pods --selector=app=$MONGODB_CONTAINER_NAME -o jsonpath='{.items[*].metadata.name}') -- /opt/mongodb-maintenance.sh | I have a misunderstanding about how to execute $() commands in exec. I'm creating a job in Kubernetes with these params:
command:
- ./kubectl
- -n
- $MONGODB_NAMESPACE
- exec
- -ti
- $(kubectl
- -n
- $MONGODB_NAMESPACE
- get
- pods
- --selector=app=$MONGODB_CONTAINER_NAME
- -o
- jsonpath='{.items[*].metadata.name}')
- --
- /opt/mongodb-maintenance.sh
But the part with $(kubectl -n ... --selector ...) is treated as a string and doesn't execute. Please tell me how to do it properly. Thanks! | how to execute an argument in kubernetes?
Some reasons not to cache entities:
- When the entities are changed frequently (you would end up invalidating/locking them in the cache and re-reading them anyway, but you pay an extra cost of cache maintenance, which is not low, since cache write operations would be frequent).
- When there are a large number of entity instances to cache and none of them is used more frequently than the others within a given period of time. Then you would basically put instances in the cache and evict them soon afterwards to make room for new ones, without reading the cached instances frequently enough to make the cache maintenance costs pay off.
- When the entities can be changed without Hibernate being aware of it (from an external application or with direct JDBC, for example). | In my experience, I have typically used the shared cache setting:
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
My process is then to think about which entities are not expected to change often and which would benefit, performance-wise, from the cache, and mark those as @Cacheable. My practice of using selective entity caching is a learned convention, but I don't fully understand this approach. Why not cache all entities? When can caching all entities become a detriment? How can I better gauge this to make a more educated decision? | Second Level Cache - Why not cache all entities?
This is a community wiki answer based on OP's comment, posted for better visibility. Feel free to expand it. The issue was caused by using different versions of docker on different nodes. After upgrading docker to v19.3 on both nodes and executing kubeadm reset, the issue was resolved. | Version: k8s v1.19.0, metrics server v0.3.6. I set up a k8s cluster and metrics server; it can check nodes and pods on the master node, but the worker node cannot be seen; it returns unknown.
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
u-29 1160m 14% 37307Mi 58%
u-31 2755m 22% 51647Mi 80%
u-32 4661m 38% 32208Mi 50%
u-34 1514m 12% 41083Mi 63%
u-36 1570m 13% 40400Mi 62%
When the pod is running on the client node, it returns: unable to fetch pod metrics for pod default/nginx-7764dc5cf4-c2sbq: no metrics known for pod
When the pod is running on the master node, it can return CPU and memory:
nginx-7cdd6c99b8-6pfg2 0m 2Mi | About k8s metrices server only some resources can be monitored |
"I am using latest version of kubernetes version 1.0.1"
FYI, the latest version is v1.2.3.
"... it says kube-system not found"
You can create the kube-system namespace by running kubectl create namespace kube-system. Hopefully once you've created the kube-system namespace the rest of the instructions will work. | enter image description here. I tried to use the instructions from this link https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md but I was not able to install it. Specifically, I don't know what this instruction means: "Ensure that kubecfg.sh is exported." I don't even know where I can find this; I did sudo find / -name "kubecfg.sh" and found no results. Moving on to the next step, "kubectl create -f deploy/kube-config/influxdb/": when I did this, it says kube-system not found. I am using the latest version of kubernetes, version 1.0.1. These instructions are broken; can anyone provide some instructions on how to install this? I have a kubernetes cluster up and running; I was able to create and delete pods and so on, and default is the only namespace I have when I do kubectl get pods,svc,rc --all-namespaces. Changing kube-system to default in the yaml files gets me one step further, but I am unable to access the UI and so on, so installing kube-system makes more sense; however, I don't know how to do it, and any instructions on installing influxdb and grafana to get them up and running will be very helpful. | How to install influxdb and grafana?
It's possible to create multiple databases in the same cluster. I have used the reference below for the pgo client to do that. First you need to create a superuser and then use that superuser to create multiple databases in the cluster.
I haven't used CRDs to create the databases, if you are looking for that specific way to create them: https://access.crunchydata.com/documentation/postgres-operator/4.6.2/pgo-client/reference/pgo_create_cluster/
yaml_body = <<YAML
apiVersion: crunchydata.com/v1
...........
...........
...........
kind: Pgcluster
ccpimage: crunchy-postgres-ha
ccpimageprefix: registry.developers.crunchydata.com/crunchydata
ccpimagetag: centos8-13.2-4.6.2
clustername: ${pgo_cluster_name}
**database: ${pgo_cluster_name}**
...........
...........
...........
YAML
}
With the definition mentioned above, I would be able to create only one database. Is there any way to create multiple databases on the same cluster using a custom resource definition on Kubernetes? | How to create multiple databases on same cluster of postgres operator?
Actually, I did some tests and concluded that if you comment out the mime.types line in the /etc/nginx/nginx.conf file:
# include /etc/nginx/mime.types
And restart nginx
sudo service nginx restart
And you clear the browser cache before accessing your page again, you will notice that nginx will not take advantage of the mime.types configuration (of course) and will NOT render the media type of your content correctly.
The following example shows how styles.css is rendered (Content-Type: application/octet-stream) instead of text/css:
To sum up, yes mime.types is required for nginx to render the right media type of content.
|
I have a custom nginx.conf file that I start nginx with using the cli, for example nginx -c /my/path/nginx.conf.
I have found that if I take out the include /my/path/mime.types line from the custom nginx.conf file, the server still starts up fine and webpages seem to load normally with no apparent errors.
I have been researching nginx directive priority, but I cannot see any reason that the default mime.types might be getting included. Is it safe to remove the custom mime.types include?
(I should clarify that there is nothing special about the contents of /my/path/mime.types. For the purposes of this question, consider it to be effectively the same as the contents of the default file.)
| What happens if no mime.types is included in nginx.conf? |
You can first squash all the commits that handled the big file.
To do so, you can juste reset (soft) to the commit before the one you tried to push the big file.For instance, if your tree is like this:--> c0 ---> c1 (commit big file) ---> c2 (revert commit big file) --> c3 (other changes)You can just reset soft to c0, and re commit your changes under a new commit without integrating the big file that caused problemsgit reset --soft <c0-commit-hash>Then, add the files you want to track and commit them:git add myfile1.txt myfile2.txt
git commit -m 'my commit message'I suppose remote branch stayed at c1.If you want to push to actual/local state of your repository without considering what has been pushed in c1.git push -fBut be careful, force push will rewrite/overwrite the remote git commits that are affected, so it is possible you lose some last changes if they werent took into account in the local commit tree. | This question already has answers here:How can I remove/delete a large file from the commit history in the Git repository?(24 answers)How do I squash my last N commits together?(46 answers)Closed6 months ago.Yesterday I tried to commit a repository to GitHub but it had a big file, so it returned an error.
After this, today I deleted the big file and tried to commit and push the repository once again.However, the Git Bash continues to commit and push the first version with the big file, so it's not working.I have tried to fix this error usinggit revertandgit resetbut it didn't work.How can I commit and push just the actual state of my repository? Not considering previous commits. | Delete previous commits and push just the actual commit [duplicate] |
You can use the Infinity plugin to visualise your data as a time series. In my case I needed to add a data transformation to treat timeStamp as time. I used an API mocking service, https://somegrafanademo.free.beeceptor.com/, to present the data.
"data": [
{
"timeStamp": "2022-07-28 12:00:00",
"val": 10
},
{
"timeStamp": "2022-07-28 13:00:00",
"val": 11
},
{
"timeStamp": "2022-07-28 14:00:00",
"val": 20
},
{
"timeStamp": "2022-07-28 15:00:00",
"val": 30
},
{
"timeStamp": "2022-07-28 16:00:00",
"val": 35
},
{
"timeStamp": "2022-07-28 17:00:00",
"val": 39
}
]
}
I want to make a graph using this data in Grafana, where the x-axis should be time and the y-axis should be the number. How can I plot a graph from this data? | how to create graph in grafana using Json data?
You can add curlygirly's repo as a remote to your original repo and merge in changes from it just like any other branch. For example, if you want to merge everything on curlygirly's master branch into your original repo's master:
git fetch curlygirly
git checkout master
git merge curlygirly/master
You can also do this using Pull Requests if you prefer, want to put it through code review, etc. Simply open a request from curlygirly:master (or any other branch) to timrobertson0122:master and go from there. The great thing about Git is that repositories, branches, commits, etc. are all just building blocks you can manage any way you like. There's nothing special about your first repo, origin, or master, so you're free to work on code anywhere and move it anywhere else later. | I'm still very new to coding and GitHub and as such am a little confused with how forking repos works, so please forgive what may be a basic question. I've been working on a project with different pair partners all week and my current code base situation is as follows: My initial repo: https://github.com/timrobertson0122/yelp_clone. This code was then forked and work continued on a second repo (can't post url). That repo was subsequently forked and contains the most recent code, that I worked on with a colleague yesterday, which I can't fork: https://github.com/curlygirly/yelp_clone-1. So my question is how do I sync my original repo? Can I just add an upstream to the most recently forked repo that points to the original repo? Do I need to submit pull requests? Thanks. | How do I merge between multiple forked repositories on GitHub?
The latest version allows you to map different branches to different environments; see the announcement Deploy Git Branches to Multiple Elastic Beanstalk Environments:
Elastic Beanstalk environments. You can also manage and configure
multiple Elastic Beanstalk environments using eb. For example, you can
configure eb and Git to deploy your development branch to your staging
environment and deploy your release branch to your production
environment. [...] | I have two different environments running off of the same git repository. it looks like in the AWS console tools for git and elastic beanstalk, I can only connect one environment at a time, is there anyway to have it push to both of my environments at the same time? | aws.push to more than one environment |
Short-term, you could likely just use the scheduling capabilities in IronWorker and have the worker hit an endpoint in your application. The endpoint will then trigger the operations to run within your app environment.Longer-term, we do suggest you look at more of a service-oriented approach whereby you break your application up to be more loose-coupled and distributed. Here's a post on the subject. The advantages are many especially around scalability and development agility.https://blog.heroku.com/archives/2013/12/3/end_monolithic_appYou can also take a look at this YII addition.http://www.yiiframework.com/extension/yiiron/Certainly don't want you rewrite your app unnecessarily but there are likely areas where you can look to decouple. Suggest creating a worker directory and making efforts to write the workers to be self-contained. In that way, you could run them in a different environment and just pass payloads to the worker. (Push queues can also be used to push to these workers.) Once you get used to distributed async processing, it's a pretty easy process to manage.(Note: I work at Iron.io) | My website is hosted on AWS Elastic Beanstalk (PHP). I use Yii Framework as an MVC.A while ago I wanted to run a SQL query everyday. I looked up how to run crons on Beanstalk and it seemed complicated to merge the concepts of Cloud and Cron. I ran into Iron Worker (http://www.iron.io/worker), and managed to create a worker that is currently doing its job fine.Today I want to run a more complex cron (Look for notifications in my database, decide whether to send an email, build an email template and send the email (via AWS SES).From what I understand, worker files are supposed to be self-contained items, with everything they need to work.
However, I have invested a lot of time and effort in building my MVC. I have complex models, verifications, an email templating engine, etc...
It seems very difficult to use the work I've done to create an Iron Worker. Even if I managed to port all of my code to a worker (which seems like a great deal of work), it means anytime I make changes to my main code I need to make sure the worker also has those changes. It means I would have a "branch" of my code. Even more so if I want to create more workers in the future.What is the correct approach? | Use IronWorkers while using my work |
Warning: This answer is outdated. You should use Environment.getExternalStorageDirectory() to get the root path of the SD card, as mentioned in the answers below. Old answer (so the comments on this make sense): Adding /sdcard/ to the root of your path should direct your Android application to use the SD card (at least it works that way with the G1). Android's file system objects give you the ability to check file sizes, so it should be possible (if tricky) to write some fail-over code. This code would adjust your root path if the internal memory filled up. | Is there a way to store android application data on the SD card instead of in the internal memory?
I know how to transfer the application sqlite database from the internal memory to the SDCard, but what if the internal memory gets full in the first place? How does everyone handle this? | storing android application data on SD Card |
You can use kustomize edit to edit the namePrefix and nameSuffix values.
kind: Deployment
metadata:
name: the-deployment
spec:
replicas: 5
template:
containers:
- name: the-container
image: registry/conatiner:latestKustomization.yamlapiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yamlThen you can runkustomize edit set nameprefix dev-andkustomize build .will return following:apiVersion: apps/v1
kind: Deployment
metadata:
name: dev-the-deployment
spec:
replicas: 5
template:
containers:
- image: registry/conatiner:latest
name: the-containerShareFollowansweredDec 23, 2020 at 13:55koolkool3,38011 gold badge1212 silver badges2828 bronze badges1better then templating, however for CI it would be if it was't necessary to make changes to file, but accepting it since I guess there is no such option–user140547Dec 23, 2020 at 14:13Add a comment| | In Helm, it is possible to specify a release name usinghelm install my-release-name chart-pathThis means, I can specify the release name and its components (using fullname) using the CLI.In kustomize (I am new to kustomize), there is a similar concept,namePrefixandnameSuffixwhich can be defined in akustomization.yamlapiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: overlook-
resources:
- deployment.yamlHowever, this approach needs a custom file, and using a "dynamic" namePrefix would mean that akustomization.yamlhas to be generated using a template and kustomize is, well, about avoiding templating.Is there any way to specify that value dynamically? | Is it possible to have a dynamic namePrefix/nameSuffix in kustomize? |
Looks like we found the bug; it was in the code. :) After some time, we found the Lambda logs on CloudWatch that matched API Gateway's logs, and we saw some database timeouts. We are still investigating the details, but the issue was with Express middleware for logging. It was accessing the database even on OPTIONS and HEAD requests, and before the database connection was ensured to be alive. The problem may be connected with the database socket timeout and the Lambda life span. However, a simple try-catch around the logging middleware apparently fixed the issue. We are still not sure why the error was not happening on production with Node 8; traffic was probably high enough to keep the database connection open. Thank you guys for all the help.
Any subsequent requests to API are working OK. Even if you refresh the website, all request go through with no problem.I can't find any logs on Lambda.API gateway log:"error": "Internal server error", "ErrorDetail": " "Internal server error"", "errorValidation": "-", "errorResponseType": "INTEGRATION_FAILURE"Also we got same issues on rumtime NodeJS 10.x, but not on NodeJs 8.Thanks for your help! | Randomly getting 502 Bad Gateway response from AWS API Gateway after changing Lambda runtime from Node 8.x to Node 12.x |
<project_name>, <build-status>, <current-phase> needed to be passed as separate values; you cannot use them for string interpolation. [doc] You will need to modify your lambda input format and construct your message inside the lambda function:
{
"channel":"#XYZ",
"project_name": <project_name>,
"current-phase": <current-phase>,
"build-status": <build-status>
} | Goal: I want to trigger notification to slack on any phase change in codebuild.
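With the separate fields above, the string interpolation moves into the Lambda itself. A minimal Python sketch of such a handler (field names follow the input format suggested above; actually posting the message to Slack is omitted):

```python
def handler(event, context=None):
    """Build the Slack payload from the transformed CloudWatch event;
    the string interpolation that the input transformer cannot do
    happens here instead."""
    message = "TESTING {} from {} to {}".format(
        event["project_name"], event["build-status"], event["current-phase"]
    )
    return {"channel": event["channel"], "message": message}
```

The returned dict matches the request shape the existing notification Lambda already expects.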
I have a lambda that does for me and it expects a request as follows:{
"channel":"#XYZ",
"message":"TESTING <project_name> from <build-status> to <current-phase>"
}So I try create a event from cloudwatch events and trigger my lambda:So I try to useInput TransformerIn which the place holders are values of input path from cloudwatch{
"project_name": "$.detail.project-name",
"current-phase": "$.detail.current-phase",
"build-status": "$.detail.build-status",
}But on adding this
i get the errorThere was an error while saving rule input_transformer_test. Details:
InputTemplate for target Id64936775145825 contains placeholder within
quotes..What am i doing wrong here ? | How to create JSON from AWS cloudwatch Input Transformer |
Yes, the code will still be there after you delete your repo. As soon as you submit your pull-request, Github internally adds that branch to the target repo (it creates a branch in a non-default namespace, so you usually don't see those).
Since PRs cannot usually be deleted, those branches will exist in the target repo indefinitely.
To answer your other question: The code will reside in both your fork and the target repo (originally, at least, unless you delete your fork).
|
Following scenario:
I forked an open source repository (GitHub -> project -> Fork). Then I cloned my project copy locally, made some changes in the master branch, commited them, and pushed to my repository:
$ git clone [email protected]:myusername/originalprojectname.git
... changes ...
$ cd originalprojectname
$ git add path/to/changed/file.php
$ git commit -m "..."
$ push
After it I started a pull request on GitHub. It has been marked as "Good to merge" and added to Milestone: x.y.z.
Where is the changed code staying? Only in my repo, or also somewhere else? The background is: I would like to delete my repository. So, finally, my question: if I delete my repository on GitHub, are the changes still available for the original project, or should I wait for the release x.y.z?
| Understanding pull requests on GitHub: What happens, when the requesting repository is deleted? |
Before deployment, open the Docker app/daemon on your machine.
|
Using AWS CDK, I am trying to deploy the Docker image with lambda function on AWS. And I am getting the following error.
[100%] fail: docker login --username AWS --password-stdin https://XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com exited with error code 1: Error saving credentials: error storing credentials - err: exit status 1, out: `Post "http://ipc/registry/credstore-updated": dial unix /Users/my_mac/Library/Containers/com.docker.docker/Data/backend.sock: connect: connection refused`
❌ MyService (prj-development) failed: Error: Failed to publish one or more assets. See the error messages above for more information.
at publishAssets (/Users/my_mac/.npm/_npx/8365afa3375eae8d/node_modules/aws-cdk/lib/util/asset-publishing.ts:44:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at CloudFormationDeployments.publishStackAssets (/Users/my_mac/.npm/_npx/8365afa3375eae8d/node_modules/aws-cdk/lib/api/cloudformation-deployments.ts:464:7)
at CloudFormationDeployments.deployStack (/Users/my_mac/.npm/_npx/8365afa3375eae8d/node_modules/aws-cdk/lib/api/cloudformation-deployments.ts:339:7)
at CdkToolkit.deploy (/Users/my_mac/.npm/_npx/8365afa3375eae8d/node_modules/aws-cdk/lib/cdk-toolkit.ts:209:24)
at initCommandLine (/Users/my_mac/.npm/_npx/8365afa3375eae8d/node_modules/aws-cdk/lib/cli.ts:341:12)
Failed to publish one or more assets. See the error messages above for more information.
make: *** [deploy-local] Error 1
What can I do, please?
| AWS CDK: Error saving credentials: error storing credentials - err: exit status 1 |
There are a couple of problems:
int main()
{
PicLib *lib = new PicLib;
beginStorage(lib);
return 0;
}
It is best to allocate and delete memory in the same scope so that it is easy to spot.
But in this case just declare it locally (and pass by reference):
int main()
{
PicLib lib;
beginStorage(lib);
return 0;
}
In beginStorage()
But I see no reason to manipulate a pointer. Pass it by reference and just use it locally.
void beginStorage(PicLib& lib)
{
....
}
In the PicLib class you have a RAW pointer: databases.
If you have a RAW pointer that you own (you create and destroy it), then you must override the compiler-generated versions of the copy constructor and assignment operator. But in this case I see no reason to use a pointer; it would be easier to just use a vector:
class PicLib
{
    private:
        std::vector<Pic>   databases;
};
I've just started combining my knowledge of C++ classes and dynamic arrays. I was given the advice that "any time I use the new operator" I should delete. I also know how destructors work, so I think this code is correct:
main.cpp
...
int main()
{
PicLib *lib = new PicLib;
beginStorage(lib);
return 0;
}
void beginStorage(PicLib *lib)
{
...
if (command != 'q')
{
//let's assume I add a whole bunch
//of stuff to PicLib and have some fun here
beginStorage(lib);
}
else
{
delete lib;
lib = NULL;
cout << "Ciao" << endl;
}
}
PicLib.cpp
...
PicLib::PicLib()
{
database = new Pic[MAX_DATABASE];
num_pics = 0;
}
PicLib::~PicLib()
{
delete[] database;
database = NULL;
num_pics = 0;
}
...
I fill my PicLib with a Pic class, containing more dynamic arrays. Pic's destructor deletes them in the same manner seen above. I think that delete [] database gets rid of all those classes properly.
So is the delete in main.cpp necessary? Everything looking hunky dory here?
Am I using delete correctly here?
If I understand correctly: with every code push, the CI pipeline creates a new image in which the new version of the application is deployed. As a result, the previously created image becomes outdated, so you want to remove it. To do so, you have to:
1. Get rid of all outdated containers, which were created from the outdated image:
- display all containers with the command docker ps -a
- if still running, stop outdated containers with the command docker stop [containerID]
- remove them with the command docker rm [containerID]
2. Remove outdated images with the command docker rmi [imageID]
To sum up why this process is needed: you cannot remove any image while it is used by any existing container (even stopped containers still require their images). For this reason, you should first stop and remove the old containers, and then remove the old images.
The detection part, and the automation of the deletion process, should be based on the image versions and container names which the CI pipeline generates while creating new images.
Edit 1
To list all images which have no relationship to any tagged images, you can use the command docker images -f dangling=true. You can delete them with the command docker images purge.
Just one thing to remember here: if you build an image without tagging it, the image will appear on the list of "dangling" images. You can avoid this situation by providing a tag when you build it.
Edit 2
The command for image purging has changed. Right now the proper command is docker image prune. Here is a link to the documentation.

I have a CI-pipeline that builds a docker image for my app for every run of the pipeline (and the pipeline is triggered by a code-push to the git repository). The docker image consists of several intermediate layers which progressively become very large in size.
Most of the intermediate images are identical for each run, hence the caching mechanism of docker is heavily utilized. However, the problem is that the final couple of layers are different for each run, as they result from a COPY statement in the dockerfile, where the built application artifacts are copied into the image. Since the artifacts are modified for every run, the already cached bottommost images will ALWAYS be invalidated. These images have a size of 800mb each.
What docker command can I use to identify (and delete) these images that get replaced by newer images, i.e. when they get invalidated?
I would like my CI pipeline to remove them at the end of the run so they don't end up dangling on the CI server and waste a lot of disk space.

How to delete cached/intermediate docker images after the cache gets invalidated?
In my understanding, in GKE, I can only have single type (instance template) of machines in each cluster.... Do I need to run separate clusters for different requirement?

Yes, this is currently true. We are working on relaxing this restriction, but in the meantime you can copy the instance template to create another set of nodes with a different size.

I am trying to deploy a web application using Kubernetes and google container engine.
My application requires different types of machine.
In my understanding, in GKE, I can only have a single type (instance template) of machine in each cluster, and mixing different pods in a single cluster wastes resources or money because I need to match the machine type to the maximum requirement. Let's say the database requires 8 CPUs and 100GB of RAM, and the application servers need 2 CPUs and 4GB of RAM.
I have to have at least an 8 CPU / 100GB machine in the cluster for the database pods to be scheduled. Kubernetes will schedule 4 application pods on each such machine, and it will waste 84GB of the machine's RAM.
Is this correct? If it is, how can I solve the problem? Do I need to run separate clusters for different requirements? Connecting services between different clusters doesn't seem to be a trivial problem either.

Kubernetes node capacity planning for various pod requirements in GKE
In the docker-compose YAML file, you can set the memory limit of a container.
You could try to increase that limit in your image's configuration section, as follows:
...
deploy:
resources:
limits:
memory: <memory size>
More info here
answered May 16, 2023 at 13:52 by A M
I have a spring boot application running on ubuntu 20 ec2 machine where I am creating around 200000 threads to write data into kafka. However it is failing repeatedly with the following error
[138.470s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f828d055000-0x00007f828d059000).
[138.470s][warning][os,thread] Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: [138.472s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
INFO: os::commit_memory(0x00007f828cf54000, 16384, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.
I have tried increasing the memory of my EC2 instance to 64 GB, which has been of no use. I am using docker stats and htop to monitor the memory footprint of the process, and when it touches around 10 GB it fails with the given error.
I have also tried increasing the heap size and max memory for the process.
docker run --rm --name test -e JAVA_OPTS=-Xmx64g -v /workspace/logs/test:/logs -t test:master
Below is my code
final int LIMIT = 200000;
ExecutorService executorService = Executors.newFixedThreadPool(LIMIT);
final CountDownLatch latch = new CountDownLatch(LIMIT);
for (int i = 1; i <= LIMIT; i++) {
final int counter = i;
executorService.execute(() -> {
try {
kafkaTemplate.send("rf-data", Integer.toString(123), "asdsadsd");
kafkaTemplate.send("rf-data", Integer.toString(123), "zczxczxczxc");
latch.countDown();
} catch (Exception e) {
logger.error("Error sending data: ", e);
}
});
}
try {
latch.await();
} catch (InterruptedException e) {
logger.error("error ltach", e);
}
os::commit_memory failed; error=Not enough space (errno=12)
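A side note on the code in the question (an illustration, not taken from the answer above): Executors.newFixedThreadPool(LIMIT) with LIMIT = 200000 asks the OS for 200,000 real threads, each with its own stack, which is exactly the kind of native-memory pressure the pthread_create error describes. A pool far smaller than the task count queues the work instead; a minimal sketch of that pattern using Python's equivalent API, with the task body as a stand-in for the Kafka sends:

```python
from concurrent.futures import ThreadPoolExecutor

TASKS = 10_000  # number of work items (analogous to LIMIT in the question)

def send(i):
    # stand-in for the two kafkaTemplate.send(...) calls
    return i

# Eight worker threads service all 10,000 tasks; the executor queues the
# rest, so only 8 OS threads (plus main) ever exist at once instead of 200k.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(send, range(TASKS)))

print(len(results))  # 10000
```

The same idea applies in Java: keeping the fixed pool small while submitting all 200,000 tasks bounds thread creation without changing the total amount of work.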
There are various ways to control access to the S3 objects:
1. Use the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done.
2. Use the S3 ACLs - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for.
3. You proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box.
4. You can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but depends on your particular situation.

answered Apr 20, 2009 at 12:28 by dar

Trying to understand S3... How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me; is there another way to do this?

Amazon S3 permissions
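For option 1 above (query-string auth with a far-future expiry), the Expires value in an S3 signed query string is a Unix timestamp in epoch seconds, so a "far in the future" expiry is just now plus some number of years - a sketch of the arithmetic (the ten-year horizon is an arbitrary example):

```python
import time

# S3 query-string auth takes an Expires parameter in Unix epoch seconds;
# a "far future" value is simply now + N years.
TEN_YEARS = 10 * 365 * 24 * 3600

now = int(time.time())
expires = now + TEN_YEARS
print(expires - now)  # 315360000 seconds, roughly ten years
```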
The solution was to update Slather to version 2.5 and also generate coverage in the SonarQube generic mode. Follow these steps for successful reproduction:
1. Build:
xcodebuild -workspace 'YourProject.xcworkspace' -scheme DEV -derivedDataPath Build/ -enableCodeCoverage YES clean build test CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO -destination 'platform=iOS Simulator,name=iPhone 8,OS=14.0'
2. Have Slather generate the SonarQube generic XML:
slather coverage --jenkins --sonarqube-xml --build-directory ./Build --output-directory ./sonar-reports --scheme DEV --workspace YourProject.xcworkspace
3. Run the SonarQube analysis:
sonar-scanner -Dsonar.sources=. -Dsonar.coverageReportPaths=./sonar-reports/sonarqube-generic-coverage.xml -Dproject.settings=sonar-project.properties -Dsonar.qualitygate.wait=true

edited Oct 10, 2020 at 18:13; answered Oct 10, 2020 at 2:30 by Renato Souza

I'm trying to perform the conversion for SonarQube to interpret coverage, and I get this error:
Error: Error Domain=XCCovErrorDomain Code=0 "Failed to load result bundle" UserInfo={NSLocalizedDescription=Failed to load result bundle, NSUnderlyingError=0x7fdaa840a8d0 {Error Domain=IDEFoundation.ResultBundleError Code=0 "This version of Xcode does not support opening result bundles created with versions of Xcode and xcodebuild using the v1 API."}}
The operation couldn’t be completed. (cococoLibrary.Bash.Error error 0.)

Has anyone converted coverage in the new Xcode 12 version - SonarQube?
The request was failing because I wasn't setting the region for the client before making the request. The default region is probably US East and my table is setup in EU West. This fixed it:
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
client.setRegion(Region.getRegion(Regions.EU_WEST_1));
I'm just getting up and running with DynamoDB using the Java SDK (v1.8). I've created a very simple table using the AWS console. My table has a primary hash key, which is a String (no range). I've put a single item into the table with 4 other attribute values (all Strings).
I'm making a simple Java request for that item in the table, but it's failing with ResourceNotFoundException. I'm absolutely positive that the table name I'm supplying is correct, as is the name of the primary hash key that I'm using to query the item. The table status is listed in the AWS console as Active and I can see the item and its values too.
This is the error I'm getting:
Requested resource not found (Service: AmazonDynamoDB; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: ...)
I've tried the following (using the dynamodbv2 versions of the classes):
Map<String, AttributeValue> key = new HashMap<String, AttributeValue>();
key.put(PRIMARY_KEY, new AttributeValue().withS(value));
GetItemRequest request = new GetItemRequest()
.withTableName(TABLE_NAME)
.withKey(key);
GetItemResult result = client.getItem(request);
I've also tried using the older, deprecated versions of all these classes, like this:
GetItemRequest request = new GetItemRequest()
.withTableName(TABLE_NAME)
.withKey(new Key().withHashKeyElement(new AttributeValue().withS(value)));
GetItemResult result = client.getItem(request);
...but it's the same result.
My understanding of the ResourceNotFoundException is that it means the table name or attribute referenced is invalid, which is not the case. It can also be thrown if the table is too early in the Creating state, but my table is Active.
Simple DynamoDB request failing with ResourceNotFoundException
Have you looked at this? http://blogs.microsoft.co.il/blogs/srlteam/archive/2006/11/27/TFS-Permission-Manager-1.0-is-Finally-out.aspx

Is there a way to export all of TFS 2008 Groups and Permissions for an audit?

Export TFS 2008 (Team Foundation Server) Groups and Permissions
I had a browse around and managed to find my answer. It is solved by the following post: Is there a way to skip password typing when using https:// on GitHub? Works a treat!!

edited May 23, 2017 at 11:43; answered May 14, 2015 at 9:22 by user1107753

I am trying to set up a Jenkins Windows slave which will pull from GitHub using Git Bash. I have installed Git Bash on my Windows server so it is available through the Windows command prompt. When I try to invoke any Git command that goes to GitHub, it always asks for my credentials.
How can I set this up so it does not ask for my credentials?
Points to note:
- I am testing it by invoking from the Windows cmd and not the Git Bash shell, as I believe this is how Jenkins will call it.
- If it isn't possible, how do you connect to GitHub from a Jenkins slave via Git Bash without it asking for credentials?

Store GitHub credentials on Windows with Git Bash
Here is the answer in case someone in the future has the same problem:
EV certificates are only supported on paid Business or Enterprise subscriptions: https://support.cloudflare.com/hc/en-us/articles/200170446-Can-I-use-an-EV-or-OV-SSL-certificate-with-CloudFlare-Business-and-Enterprise-only-

I have certified SSL from GoDaddy. It works fine, and the green address bar with the name of my company shows up when I use it without Cloudflare. However, when I change my DNS to Cloudflare and turn SSL Strict mode on, the green lock says I have SSL from Cloudflare (it shows a different SSL certificate). I don't know what to do to still show my certified SSL in the address bar.

Custom SSL doesn't show when using CloudFlare
Execute the command:
docker inspect --format="{{json .Config.ExposedPorts }}" src_python_1
Result:
{"8000/tcp":{}}
Proof (using docker ps):
e5e917b59e15 src_python:latest "start-server" 22 hours ago Up 22 hours 0.0.0.0:8000->8000/tcp src_python_1

Assuming that I start a docker container with the following command
docker run -d --name my-container -p 1234 my-image
and running docker ps shows the port binding for that image is
...80/tcp, 443/tcp, 0.0.0.0:32768->1234/tcp
Is there a way that I can use docker inspect to grab the port that is assigned to be mapped to 1234 (in this case, 32768)?
Similar to parsing and grabbing the IP address using the following command...
IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" my-container)
I want to be able to do something similar to the following
ASSIGNED_PORT=$(docker inspect -f "{{...}}" my-container)
I am not sure if there is a way to do this through Docker, but I would imagine there is some command line magic (grep, sed, etc.) that would allow me to do something like this.
When I run docker inspect my-container and look at the NetworkSettings, I see the following:
"NetworkSettings": {
...
...
...
"Ports": {
"1234/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32768"
}
],
"443/tcp": null,
"80/tcp": null
},
...
...
},
In this case, I would want it to find HostPort without me telling it anything about port 1234 (it should ignore 443 and 80 below it) and return 32768.

How can I grab the exposed port from inspecting a docker container?
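The same lookup can also be scripted outside Go templates by parsing the full docker inspect JSON. A sketch, not taken from the answers above; the sample data mirrors the NetworkSettings block in the question, whereas in real use the string would come from the output of docker inspect:

```python
import json
from typing import Optional

def host_port(inspect_json: str, container_port: str) -> Optional[str]:
    """Return the first HostPort bound to e.g. '1234/tcp', or None."""
    ports = json.loads(inspect_json)["NetworkSettings"]["Ports"]
    bindings = ports.get(container_port)  # null in the JSON for unbound ports
    return bindings[0]["HostPort"] if bindings else None

# Sample mirroring the question's NetworkSettings; 443/tcp and 80/tcp unbound.
sample = json.dumps({
    "NetworkSettings": {
        "Ports": {
            "1234/tcp": [{"HostIp": "0.0.0.0", "HostPort": "32768"}],
            "443/tcp": None,
            "80/tcp": None,
        }
    }
})

print(host_port(sample, "1234/tcp"))  # 32768
```

Feeding it only "1234/tcp" returns "32768" and the unbound 443/tcp and 80/tcp entries fall out naturally, which is the behavior the question asks for.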
You should probably just ignore it. The swap trick frees the memory from the vector, but that does not mean that the allocator (or even the malloc or equivalent implementation underneath) will yield the memory back to the system. That is, the vector is most probably not the one holding the memory up.
Consider the following code, compiled with g++ problem.cpp -o problem:
#include <vector>
using namespace std;
int main()
{
while(1){}
return 0;
}
When this code is executed, the command top reports that ~80K of memory is being consumed.
Now consider this code:
#include <vector>
using namespace std;
int main()
{
vector<int> testVec;
for(int i = 0;i<100000000;i++)testVec.push_back(i);
while(1){}
return 0;
}
As expected, top reports that around ~300MB of memory is consumed.
Now finally, consider this code:
#include <vector>
using namespace std;
int main()
{
vector<int> testVec;
for(int i = 0;i<100000000;i++)testVec.push_back(i);
testVec.clear();
vector<int>().swap(testVec);
while(1){}
return 0;
}
Now top reports that ~4196K is being consumed(!) -- why isn't it only ~80K as in the first example? How can I finally free up that last bit of memory that is presumably being consumed by the vector? I've read that in addition to .clear(), the 'swap trick' is meant to free up everything, but apparently it's not working as I expected it would. What am I missing?
Problem with memory deallocation for non-dynamically created std::vectors storing normal (i.e. non-dynamically allocated) data
That depends entirely on how many sessions are typically present (which in turn depends on how many users you have, how long they stay on the site, and the session timeout) and how much RAM your server has.
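That first point lends itself to a quick back-of-envelope check before reaching for more RAM. The numbers below are made-up assumptions, not from the question: by Little's law, concurrent sessions are roughly the arrival rate times the average session lifetime (active time plus the timeout), and total session memory is that count times the per-session payload.

```python
# Hypothetical inputs -- substitute your own measurements.
logins_per_hour = 3000
avg_session_minutes = 10     # active time on the site
timeout_minutes = 30         # session lingers this long after the last request

# Little's law: concurrent sessions = arrival rate x average lifetime.
arrival_per_min = logins_per_hour / 60
concurrent_sessions = arrival_per_min * (avg_session_minutes + timeout_minutes)

per_session_kb = 500         # e.g. a few cached result sets per user
total_mb = concurrent_sessions * per_session_kb / 1024

print(int(concurrent_sessions), round(total_mb))  # 2000 sessions, ~977 MB
```

With these invented numbers, half a megabyte of cached data per session already eats most of a 1.6 GB heap, which is why a profiler reading matters before deciding whether sessions are actually the culprit.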
But first of all: have you actually used a memory profiler to tell you that your "high memory usage" is caused by session data, or are you just guessing?
If the only problem you have is "high memory usage" on a production machine (i.e. it can handle the production load but is not performing as well as you'd like), the easiest solution is to get more RAM for the server - much quicker and cheaper than redesigning the app.
But caching entire result sets in the session is bad for a different reason as well: what if the data changes in the DB and the user expects to see that change? If you're going to cache, use one of the existing systems that do this at the DB request level - they'll allow you to cache results between users and they have facilities for cache invalidation.
We are running into unusually high memory usage issues. I have observed that in many places in our code we are pulling hundreds of records from the DB, packing them into custom data objects, adding them to an ArrayList and storing that in the session. I wish to know what the recommended upper limit is for storing data in the session. Just a good practice / bad practice kind of thing.
I am using JRockit 1.5 and 1.6GB of RAM. I did profiling with JProbe and found that some parts of the app have a very heavy memory footprint. Most of this data is being put into the session to be used later.
How much session data is too much?
Google Container Engine does not support CPU quota by default. If you'd like to use CPU quota, you can switch to using the GCI node image - https://cloud.google.com/container-engine/docs/gci.
GCI has support for CPU quota, and Container Engine would automatically start supporting CPU limits on containers.

I've set cpu limits on my Kubernetes pods, but they do not seem to cap cpu usage at all, running on Google Container Engine version 1.3.3. Reading https://github.com/kubernetes/kubernetes/tree/master/examples/runtime-constraints, this has to be enabled on the kubelet as follows:
kubelet --cpu-cfs-quota=true
However, when checking the process after logging into one of the nodes of my cluster, it seems the kubelet is missing this flag:
/usr/local/bin/kubelet --api-servers=https://xxx.xxx.xxx.xxx --enable-debugging-handlers=true --cloud-provider=gce --config=/etc/kubernetes/manifests --allow-privileged=True --v=2 --cluster-dns=10.223.240.10 --cluster-domain=cluster.local --configure-cbr0=true --cgroup-root=/ --system-cgroups=/system --runtime-cgroups=/docker-daemon --kubelet-cgroups=/kubelet --node-labels=cloud.google.com/gke-nodepool=default-pool --babysit-daemons=true --eviction-hard=memory.available<100Mi
Is any Googler able to confirm whether it's enabled or not, and if not, tell us why? Right now it seems I don't have a choice about using cpu limits, whereas if it's enabled I can just leave the cpu limit out of my spec if I don't wish to use it.

Does Google Container Engine have CFS cpu quota enabled?
Just found the answer in the docs: Relative time. With this option you can set a time range per graph.

I'm trying to set up a monitoring dashboard that contains two graphs: one that shows current-hour transaction volumes (in 1-minute intervals from the start of the current hour until now) and one that shows current-day transaction volumes (in 10-minute intervals from 00:00 until now). I can't seem to find a way to display two different x-axis timelines on the two different panels if I create them on the same dashboard. Is there a way to do what I'm looking for?
I've tried updating the queries themselves, messing with the dashboard settings, and messing with the panel settings, but I haven't found what I needed. I'm using Grafana 6.0.0.

Can you have different time ranges on different panels on the same dashboard?
Solved :
There are two parameters taken by aws-sdk :
Expression Attribute Name
Expression Attribute Value
Both provide the functionality of replacing placeholders used in the attribute list.
Here the term "attribute" is a bit ambiguous, which is where I got confused.
The wizards over at AWS mean both the key and the value when they use the term "attribute".
So in a case where you want to use a reserved keyword as a key attribute, use the ExpressionAttributeNames parameter with # (pound) to denote the placeholder.
Similarly, where you want to use placeholders for a value attribute, use the ExpressionAttributeValues parameter with : (colon) to denote the placeholder.
So finally my code (working) looks like this :
var param = {
TableName: "faasos_orders",
FilterExpression: "#order_status = :delivered OR #order_status = :void OR #order_status = :bad",
ExpressionAttributeValues: {
":delivered": "delivered",
":void": "void",
":bad": "bad"
},
ExpressionAttributeNames: {
"#order_status": "status"
}
};
dynamodb.scan(param, function (err, data) {....});
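The placeholder mechanics above are mechanical enough to generate programmatically. A sketch, not from the post, with a made-up helper name; the values are left as plain strings to mirror the document-style parameters in the working code:

```python
def build_scan_params(table, attr, values):
    """Build DynamoDB scan params, routing the attribute name through a
    '#' name placeholder and each value through a ':' value placeholder."""
    name_ph = f"#{attr}_ph"
    value_phs = {f":v{i}": v for i, v in enumerate(values)}
    return {
        "TableName": table,
        "FilterExpression": " OR ".join(f"{name_ph} = {ph}" for ph in value_phs),
        "ExpressionAttributeNames": {name_ph: attr},
        "ExpressionAttributeValues": value_phs,
    }

params = build_scan_params("faasos_orders", "status", ["delivered", "void", "bad"])
print(params["FilterExpression"])
# #status_ph = :v0 OR #status_ph = :v1 OR #status_ph = :v2
```

Because the real attribute name only ever appears as the mapped value inside ExpressionAttributeNames, reserved words like status never hit the expression parser.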
My scan function :
var tableName = 'faasos_orders',
filterExp = 'status = :delivered OR status = :void OR status = :bad',
projectionValues = '',
expressionAttr = {};
expressionAttr[":delivered"] = "delivered";
expressionAttr[":bad"] = "bad";
expressionAttr[":void"] = "void";
limit = 10;
dynamoConnector.getItemUsingScan(tableName, filterExp, projectionValues, expressionAttr, function (err, data) { ...........}
Error on running :
{ [ValidationException: Invalid FilterExpression: Attribute name is a reserved keyword; reserved keyword: status]
message: 'Invalid FilterExpression: Attribute name is a reserved keyword; reserved keyword: status',
code: 'ValidationException',
time: Mon Apr 18 2016 21:57:30 GMT+0530 (IST),
requestId: 'AV6QFHM7SPQT1QR3D4OO81ED4FVV4KQNSO5AEMVJF66Q9ASUAAJG',
statusCode: 400,
retryable: false,
retryDelay: 0 }
Now I do get the point: I am trying to use a reserved keyword in the filterExpression, which is illegal.
But if I run the same function through aws gui it returns data beautifully (check image for details):
Scan function on status through gui
So the question is: how do I add the filter expression through Node without having to change the key name?
Scan function in DynamoDB with reserved keyword as FilterExpression (NodeJS)
Seeing as each gist is in fact a git repository, you could use the git submodule feature to include them all in your primary GitHub repository.
Have a look at this page from the book, http://git-scm.com/book/en/Git-Tools-Submodules , it even has a section on so-called Superprojects.
I am developing a pretty large JavaScript library (Formula.js) of functions (450+). Most of them are pretty independent from each other and totally self-contained, or make use of well-known third-party libraries (Moment.js for example). In order to support discussions and manage contributions at the function level rather than at the library level, I created one Gist per function (Cf. CONVERT Gist), and one repository for the entire library. This makes it easy to include the code of a function in the function's documentation (Cf. CONVERT documentation).
My question is: how do I keep the master repository synchronized with the Gists?
The solution should:
allow changes to be made from the master repository and from the individual Gists
automate the inclusion of copyright headers on the individual Gists
automate the inclusion of comments related to third-party libraries on the individual Gists
Additional thoughts:
I could not find many examples of projects being managed that way. I'm also rather inexperienced with Git. Therefore, the workflow I'm suggesting might be totally flawed, or introduce unwanted complexity. Any thoughts on possible best practices for keeping things under control are much welcome.
How to synchronize a GitHub repository and multiple Gists
The problem has been resolved. I changed --cluster-cidr=10.254.0.0/16 for kube-proxy to --cluster-cidr=172.30.0.0/16, and then it worked well. The kube-proxy cluster-cidr needs to match the one used on the controller manager, and also the one used by Calico.

For every service in a k8s cluster, kubernetes does SNAT for request packets. The iptables rules are:
-A KUBE-SERVICES -d 10.254.186.26/32 -p tcp -m comment --comment "policy-demo/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3VXIGVIYYFN7DHDA
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADEIt works well in most circumstances, but not networkpolicy. Caclico uses ipset to implement networkpolicy and the matched set only contains pod ip.So when the service pod runs on node1, and access pod runs on node2. The networkpolicy will DROP the request because the src ip of the request is node2's ip or flannel.1 ip.I think there might be a method to close snat for clusterip service.But I can't find it anywhere, could anyone help me?Thank you very much! | how to avoid snat when using service type clusterip on kubernetes? |
When we don't want to modify our input messages and want to publish unmodified messages to the output, the processor can be removed; the input and output configuration alone are sufficient. Removing the processor solved my problem. Following is the working configuration:

input:
http_server:
address: ""
path: /
ws_path: /ws
allowed_verbs:
- POST
timeout: 5s
rate_limit: ""
output:
kafka:
addresses:
- abc-central-1-kafka-2-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-1-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-0-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
topic: abc_abc-central-1.abc-tyur-service-in.v1
client_id: abc-tyur-service-dev-1
tls:
enabled: true
root_cas_file: /Users/ca.crt
client_certs:
- cert_file: /Users/cert.pem
key_file: /Users/key.pem
logger:
  level: ALL

Thanks to the Benthos community on Discord.

I want to create a pipeline to read XML data from a Postman HTTP URL, consume it through the Benthos input configuration, and publish the message to a Kafka topic using a Benthos processor. Following is the configuration I was trying, but it doesn't seem to work:

input:
http_server:
address: ""
path: /
ws_path: /ws
allowed_verbs:
- POST
timeout: 5s
rate_limit: ""
pipeline:
processors:
- xml: {}
output:
kafka:
addresses:
- abc-central-1-kafka-2-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-1-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
- abc-central-1-kafka-0-client.abc.svc.prod1-us-central1.gke.kaas-prod-us.gcp.extscloud.com:16552
topic: abc_abc-central-1.abc-tyur-service-in.v1
client_id: abc-tyur-service-dev-1
tls:
enabled: true
root_cas_file: /Users/ca.crt
client_certs:
- cert_file: /Users/cert.pem
key_file: /Users/key.pem
logger:
  level: ALL

Benthos pipeline to read XML from Postman and publish to a Kafka topic
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -d 0.0.0.0/0 -j ACCEPT
sudo firewall-cmd --reload

I have to run iptables -P FORWARD ACCEPT in order to run the kubernetes cluster and communicate from pods using the service name. The problem is that I have a k8s cluster running on CentOS 7, which uses firewalld instead of iptables (that is the default in CentOS). Without the iptables service running, I can't save the rule iptables -P FORWARD ACCEPT so that it takes effect after reboot. If I can get the firewalld alternative of iptables -P FORWARD ACCEPT, then I can persist it easily across reboots.

firewalld command alternative of iptables -P FORWARD ACCEPT
Based on the comments, the exact cause of the issue is undetermined. However, the problem was solved by creating a new Service Discovery in ECS.
I have six docker containers all running in their own Tasks (6 tasks), and each task running in a separate Fargate service (6 services) on ECS.
I need the services to be able to communicate with each other, and some of them need to be publicly accessible.
I keep seeing info about using either Service Discovery or a Load Balancer assigned to each service. I would like to try and avoid having to set up 6 load balancers as it's more expensive and more effort to maintain.
This is how I have set up Service Discovery currently:
All Tasks are setup to use awsvpc
All services have been set up to use Service Discovery (set up from within the Service Creation page)
All services are sharing the same Namespace, and they're all using the A DNS Record
When I try to ping <service_discovery_name>.<namespace> from within one of the docker containers I do not get a response. However, I can successfully ping another container when pinging the private IP Address.
Can I achieve what I need to do with Service Discovery?
If so, how exactly do the containers communicate with each other?
Thanks heaps! Please let me know if I haven't provided enough info.
EDIT: Recreating the services and setting them up with a new Service Discovery seemed to resolve the issue. No idea why the old discovery didn't work.
How to communicate between Fargate services on AWS ECS?
It worked for me by using external directories provided by maven jib plugin.
<extraDirectories>
<paths>
<path>webapp</path>
<path>
<from>webapp</from>
<into>/app/webapp</into>
</path>
</paths>
</extraDirectories>
I am creating a docker image using Google's Jib Maven plugin. The image gets created successfully and the backend services are working fine, but my webapp folder is not part of that image. Before Jib, I was creating a zip containing everything (including the webapp folder in the root of that zip, along with the executable jar), which was working fine.
Now the image created by Jib has classes, libs, and resources in the app root. How and where should I copy the webapp folder?
Jib - where to copy webapp folder inside image?
The question is quite open-ended. However having gone through this thought process recently, I can come up with two options:hostPath: You mount this data (probably from NFS or some such) at/data(for example) on all your Kubernetes nodes. Then in Pod specification, you attach volume of typehostPathwithpath: /data.GlusterFS volume: This is the option we finally went for, for high availability and speed requirements. The same GlusterFS volume (having the PDFs, data, etc.) can be attached to multiple Pods. | A question that I can’t seem to find a good answer to and am so confused as to what should be an easy answer. I guess you can say I can't see the forest for the trees.I have N Pods that need access to the same shared FILE SYSTEM (contains 20Gig of pdfs, HTML) only read. I don’t want to copy it to each Pod and create a Volume in Docker. How should I handle this? I don’t see this as a Stateful App where each pod gets its own changed data.Node-0Pod-0Java-app - NEED READ ACCESS TO SHARED FILE SYSTEMNode-1Pod-1Java-app - NEED READ ACCESS TO SHARED FILE SYSTEMNode-2Pod-2Java-app - NEED READ ACCESS TO SHARED FILE SYSTEM | Kubernetes shared File System across Pods |
I don't think you can reference localhost on Heroku. Try using the Postgres URL instead, e.g. postgres://dbUserName:dbUserPassword@dbHost:dbPortNumber/dbName;
it's easier to manage. Heroku has a Postgres service too. You could follow this link to learn more about the Postgres addon on Heroku.
Comments:
– Lekanda: Thanks Mcdavid, but now the problem is different. At this moment I get this error, which is different from before; it refers to H12. Do you know what it is? Sorry, but I needed to update the question. Rookie stuff.
– Mcdavid: Can you get the full log? What you can do is open the log tab and try reproducing the error; then you can get the full log for the error with the file it's referencing.
– Lekanda: Thanks friend, but I do not understand what you mean. The log of the failure is in the question, or is that not what you mean? Thank you very much for your help.
– Mcdavid: The screenshot you sent doesn't contain the full information about where the problem comes from, because Heroku shows you only a few lines if you weren't on the log page. To get more logs, keep the log page open and try reproducing the error, so as to get the full logs.
– Mcdavid: One more thing: I see you're using ES6 async/await and you don't have Babel configured. That could also be what's causing the problem.
|
When trying to deploy my app to Heroku, I get this error:
I've been searching the net but I can't find anything definitive. I should say that it works perfectly for me locally. I use MAMP, the database is in phpMyAdmin, and I think I imported it correctly into Heroku. The .env variables that I have are:
BD_NAME = agencydeviajes
BD_USER = root
BD_PASS = root
BD_HOST = 127.0.0.1
BD_PORT = 8889
HOST = localhost
My heroku variables:
And this is my repository: Git HUB Repository
I haven't been able to find a solution for more than a week and I'm a little desperate, so any ideas are much appreciated. This is the error log.
I am new to Heroku and I am learning Java Script. Thank you very much to all
| Problems uploading an application to HEROKU, Failed with async. H12 |
Use the following:
route:
  repeat_interval: 10h
  routes:
  - match:
      severity: 'critical'
    receiver: 'email'
    repeat_interval: 200h
    continue: true
Comments:
– Charles: Thanks! What if I have multiple sorts of emails but I only want the repeat_interval set to 200h for a specific job? Could I add "job: <my_job>" within the match to make it specific to that kind of job?
– Marcelo Ávila de Oliveira: Yes, you can use any label you have on that specific metric/alert (e.g. job, instance, alertname, mountpoint, etc.).
|
We are using Prometheus to send alerts. We have a global repeat_interval route of 10 hours as default, but I would like to up this to 200 hours for a specific receiver/type of alert.
I have already upped the --data.retention to 200 hours (as it defaults to 120h), but I don't want to change the default repeat_interval, only for a specified receiver.
This is my alertmanager.yml:
global:
  resolve_timeout: 5m
route:
  repeat_interval: 10h
  routes:
  - match:
      severity: 'critical'
    receiver: 'email'
    continue: true
receivers:
- name: 'email'
  email_configs:
  - to: "[email protected]"
    from: "{{from}}"
    ...
    headers:
      subject: "SUBJECT"
- (other receivers...)
I would want to set the repeat_interval of the email receiver to 200h, but keep the default route repeat_interval at 10h. Is this possible?
| Can I set AlertManager repeat_interval for a specific receiver/alert? |
Those messages are from systemd itself, about the mount. This is addressed in systemd v249; see https://github.com/systemd/systemd/issues/6432 for more information. In a nutshell, that version of systemd allows controlling logging for that mount via its unit file using the following:
[Mount]
LogLevelMax=0
The LogLevelMax setting applies not just to the unit but also to systemd's log messages about the unit itself. That is the change introduced in v249.
|
I've googled this, but so far found no way to fix it. My syslog under /var/log is being flooded every second with messages like this:
Aug 27 20:58:27 mail-server systemd[1]: run-docker-runtime\x2drunc-moby-e4bfb13118b141bf232cf981fe9b535706243c47ae0659466b8e6667bd4feceb-runc.YHoxmJ.mount: Succeeded.
Aug 27 20:58:27 mail-server systemd[1083]: run-docker-runtime\x2drunc-moby-e4bfb13118b141bf232cf981fe9b535706243c47ae0659466b8e6667bd4feceb-runc.YHoxmJ.mount: Succeeded.
Aug 27 20:58:27 mail-server systemd[8395]: run-docker-runtime\x2drunc-moby-e4bfb13118b141bf232cf981fe9b535706243c47ae0659466b8e6667bd4feceb-runc.YHoxmJ.mount: Succeeded.
Aug 27 20:58:28 mail-server systemd[1]: run-docker-runtime\x2drunc-moby-5dc4f4e0b3cbd5e5bfbcc88b8d22f92575706b7c3603847ccb2fd4e56f188f99-runc.gt51Ek.mount: Succeeded.
Aug 27 20:58:28 mail-server systemd[1083]: run-docker-runtime\x2drunc-moby-5dc4f4e0b3cbd5e5bfbcc88b8d22f92575706b7c3603847ccb2fd4e56f188f99-runc.gt51Ek.mount: Succeeded.
Aug 27 20:58:28 mail-server systemd[8395]: run-docker-runtime\x2drunc-moby-5dc4f4e0b3cbd5e5bfbcc88b8d22f92575706b7c3603847ccb2fd4e56f188f99-runc.gt51Ek.mount: Succeeded.
I am running Ubuntu 20.04 and dockerd is run by systemd. Could anyone help me find the cause of this? It seems that every single container is generating these messages.
Best,
Francis
| Docker flooding syslog with run-docker-runtime logs |
If the job is spawned by a cronjob, then you can just delete the job resource and suspend the cronjob (see here; also see here regarding "missed jobs", which will happen while the cronjob is suspended). Then, when you are ready to resume, just unsuspend the cronjob, and the next triggered cycle will re-create the job resource.
If the job is created outright (i.e. not from a cronjob), then you can suspend the job itself if you are on Kubernetes 1.21 or later. If you aren't, then the easiest way (I think) is to dump the job yaml to disk, delete the job itself from the cluster, then recreate it when you're ready to resume.
If you are running the job as a deployment (your point #2), and not as a job resource, then yes, simply scaling down to zero and then scaling back up would work.
|
This question already has answers here: What is the difference between a Pod and a Job resources in k8s? (3 answers). Closed 1 year ago.
I wanted to stop my job for some time. What would be the recommended approach for it?
1. If I delete the job, then it will delete all pods associated with the job .yaml file.
2. Scaling down pods to zero using the deployment.
Please let me know if there is any way to get adverse results by choosing option 1 over option 2.
| What is the difference between deleting a job vs pod scale down to zero [duplicate] |
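For the cronjob case described in the answer above, the suspend/resume cycle can be driven with kubectl patch; the resource names here are hypothetical:

```shell
# Pause future runs; an already-created job still has to be deleted by hand
kubectl patch cronjob my-cron -p '{"spec":{"suspend":true}}'
kubectl delete job my-job

# When ready to resume, unsuspend; the next trigger re-creates the job
kubectl patch cronjob my-cron -p '{"spec":{"suspend":false}}'
```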
Yes. If a file is tagged in .gitattributes as export-ignore, then it will not be included in the archive. This is a feature of git archive, and GitHub uses git archive internally to generate archives.
There is no way to disable this feature, although if you were using git archive by hand, you could use --worktree-attributes to override these values and cause the files to be included.
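A quick way to see this behavior in a throwaway repository (all file names here are made up for the demo):

```shell
# Create a scratch repo with one file tagged export-ignore.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "code" > app.txt
echo "history" > CHANGELOG.md
# Files tagged export-ignore are dropped from archives (and GitHub zips):
echo "CHANGELOG.md export-ignore" > .gitattributes
git add -A
git commit -qm "init"
git tag v1.0
# The archive lists app.txt and .gitattributes, but not CHANGELOG.md:
git archive v1.0 | tar -t
```

Running the same listing with git archive --worktree-attributes makes attributes in the working tree take effect, which is how you would bring such files back when archiving by hand.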
|
I had some issues with a deploy and had to revert back to a tagged version of the code.
When reviewing the differences between the tagged code and the git code, there were some changes, such as the changelog.
Is there a list of exclusions for a zipped tag version of the code?
| What files are left out of the tagged github zip? |
A conflict is not a failure when using GitHub. Git is saying 'hey, I'll do everything; just let me know what is right when the two of you write different code in the same file at the same time'.
There are several tools that help you merge when a conflict happens (e.g. GitHub Desktop connects to Visual Studio Code to find where the conflict is and decide which code to commit).
Then, you can mark the file's conflict as resolved and commit it.
|
I have a question regarding two scenarios: We are all working on a repo. Usually I'm working in my own folder, so everything I do is OK. But what if I'm working on a change in a file while another developer works in the same file at the same time?
I check out a local branch
The other developer checks out another local branch
The other developer commits, pushes, and merges the code into remote master
I commit, push, and merge the code into remote master. But this will fail because of the conflict
What's the best way to resolve it, rebase or merge? How do we resolve a merge conflict from the GitHub website?
| Which is the best method to fix conflicts ? Rebase or Merge? |
This sample repository demonstrates the use of Finalizer and Initializer. Finalizers are used here for garbage collection.
Repository: k8s-initializer-finalizer-practice
Here, I have created a custom controller for pods, just like Deployment. I have used an Initializer to add a busybox sidecar or a finalizer to the underlying pods. See here.
When a CustomDeployment CRD object is deleted, Kubernetes sets DeletionTimestamp but does not delete it if it has a finalizer. The controller then checks whether it has the finalizer; if it does, it deletes the object's pods and removes the finalizer. Then the CRD object terminates. See here.
|
Kubernetes supports Finalizers in CRs to prevent hard deletion. I had a hard time finding sample code though. Can someone please point to a real code snippet?
| Kubernetes CRD Finalizer |
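As a minimal illustration of the mechanism (all names here are hypothetical), a finalizer is just an entry in the object's metadata. While the list is non-empty, deletion only sets deletionTimestamp; the controller is expected to do its cleanup and then remove the entry:

```yaml
apiVersion: example.com/v1
kind: CustomDeployment
metadata:
  name: demo
  finalizers:
  - example.com/cleanup-pods    # removed by the controller after it deletes the pods
spec:
  replicas: 2
```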
I had the same issue.
What I found was that the error message was misleading.
Here's what worked for me:
Try this:
protoc ./proto/hello/hello.proto --go_out=plugins=grpc:./outputDirectory -I ./proto/hello/hello.proto
Parts of the command obviously look redundant, but this was what I had to do to get it working. I recommend trying this and seeing if it runs. If it does, then you can check whether you're able to tweak it, but I don't think so.
if "." is your output, then do this:
protoc ./proto/hello/hello.proto --go_out=plugins=grpc:. -I ./proto/hello/hello.proto
Notice that you don't need a space.
|
I am trying to build a project using maven on teamcity and getting this error during maven build step.
[Step 2/4] [ERROR] protoc failed output:
[Step 2/4] [ERROR] protoc failed error: /bin/sh: 1: protoc: Permission denied
[Step 2/4] [13:03:14][Step 2/4] Failed to execute goal
com.google.protobuf.tools:maven-protoc-plugin:0.1.10:compile
(generate-sources) on project unit-protocol-lib: protoc did not exit
cleanly. Review output for more information.
Keep in mind I am using docker-compose for building the teamcity agent (agent running in container) and protoc is added to /usr/local/bin/protoc ($PATH has /usr/local/bin, /usr/local/bin/protoc has rwx permissions).
EDITED for ease
Forget everything above for a while.
I logged into the buildagent of teamcity server, access the shell using /bin/sh and execute the command protoc and it returns the error:
protoc failed error: /bin/sh: 1: protoc: Permission denied
Any help??
| Permission denied for protoc on maven build in Teamcity |
The number of contributors to a repository, as listed on the repository's front page, is the number of people who have code in that repository (or possibly in the main branch). It doesn't reflect how many people have permissions on the repository.
For example, when looking at https://github.com/git/git, there are 1366 contributors, but far, far fewer have permissions to access the repository.
Once the user has code in the project, the contributor count will increase accordingly.
|
I just invited a user to collaborate on my public repository, and she accepted my invitation. But her name is not shown in the repository title, and the repository still shows 1 contributor. In the manage section, I can see her name as one of the contributors.
What is wrong?
| New contributor is not shown in public repository |
This should work:
RewriteEngine On
RewriteCond %{THE_REQUEST} ^(GET|POST)\ /\?til=(.*)\ HTTP [OR]
RewriteCond %{THE_REQUEST} ^(GET|POST)\ /index\.php\?til=(.*)\ HTTP
RewriteRule ^ /%2? [R=301,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /index.php?til=$1 [L]
It should redirect index.php?til=main and ?til=main to /main.
|
I'm looking for an htaccess rule that can do this:
Original URL: ?til=main
Replace to: /main
And if this is the scenario:
Original URL: /index.php?til=main or ?til=main
Redirect to: /main
Till now I only have this code:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?til=$1 [L,QSA]
It loads /main, but behind it loads /index.php?til=main.
| htaccess rewrite and redirect URL |
After many hours I finally could fix it.
It turned out that I was using a Docker golang image that doesn't have git included. I should use golang:1.8.
I modified my Dockerfile like this and now it works like a charm:
FROM golang:1.8
RUN go get github.com/gin-gonic/gin
WORKDIR /go/src/app
COPY . .
RUN go install -v
CMD ["app"]
Comment:
– Nirmal Vatsyayan: It gives the error "go: missing Git command. See golang.org/s/gogetcmd". Do you have any idea how to solve this too?
|
I'm writing a simple app in GO and I have this folder structure
The docker-compose.yml file content is:
version: '2'
services:
  db:
    image: rethinkdb:latest
    ports:
      - "38080:8080"
      - "38015:28015"
      - "39015:29015"
  api:
    image: golang:1.8-alpine
    volumes:
      - .:/go/src/test_server/
    working_dir: /go/src/test_server
    command: go run server.go
    container_name: test_server
    ports:
      - "8085:8085"
    links:
      - db
    tty: true
Every time I run docker-compose up, I receive this error message:
test_server | controllers/users.go:4:3: cannot find package
"_/go/src/test_server/vendor/github.com/gin-gonic/gin" in any of:
test_server |
/usr/local/go/src/_/go/src/test_server/vendor/github.com/gin-gonic/gin
(from $GOROOT) test_server |
/go/src/_/go/src/test_server/vendor/github.com/gin-gonic/gin (from
$GOPATH)
It's referring to the controllers package. I'm using github.com/kardianos/govendor to vendor my packages. Do you know what's going on?
| docker-compose cannot find package |
In the root of your project, create a file named sonar-project.properties and set your key/value property pairs there, one per line.
|
I have built a rule in my setup.py file that allows me to call sonar-scanner from within Eclipse. To do this I have had to make use of sonar-scanner's command line arguments. I run into a problem, however, when specifying project names with spaces in them. As I'm running on a Windows PC, my command line looks like this:
['cmd', '/c', u'sonar-scanner -Dsonar.projectKey=TL:python -Dsonar.projectName=Trade Loader -Dsonar.projectVersion=1.4 -Dsonar.sources=tradeloader -Dsonar.host.url=http://tsw:9000']
This gives the error:
ERROR: Unrecognized option: Loader
i.e. it doesn't like the space.
I tried to surround the name with quotes:
['cmd', '/c', u"sonar-scanner -Dsonar.projectKey=TL:python -Dsonar.projectName='Trade Loader' -Dsonar.projectVersion=1.4 -Dsonar.sources=tradeloader -Dsonar.host.url=http://tsw:9000"]
but that also fails in the same way:
ERROR: Unrecognized option: Loader'
Does anyone know how I can specify a project name with spaces on the command line?
Edit: So, my problem came from specifying the entire command as a single string. To get this to work you need to make each argument a separate string, e.g.:
['cmd', '/c', 'sonar-scanner', '-Dsonar.projectKey=TL:python', '-Dsonar.projectName=Trade Loader', u'-Dsonar.projectVersion=1.4', '-Dsonar.sources=tradeloader', '-Dsonar.host.url=http://tsw:9000']
| Specifying a space in the sonar.ProjectName option |
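Using the values from the question above, that sonar-project.properties file would look something like this; note that no quoting is needed for the space in the project name:

```properties
sonar.projectKey=TL:python
sonar.projectName=Trade Loader
sonar.projectVersion=1.4
sonar.sources=tradeloader
sonar.host.url=http://tsw:9000
```

With this file in place, sonar-scanner can be invoked with no -D arguments at all.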
If you look at the code on the System page, you'll find your answer. Go to the /CMSModules/System/Controls/System.ascx.cs file and search for Memory.Text. You'll find several SystemHelper methods to get the values for you:
SystemHelper.GetVirtualMemorySize()
SystemHelper.GetWorkingSetSize()
SystemHelper.GetPeakWorkingSetSize()
|
Background
I recently came across an out of memory exception when users would visit a few pages of my Kentico website. Fast forward: I found that the allocated memory (System > General) was over 2 GB! I then went to Debug > Clear cache and noticed the allocated memory sitting roughly around 400 MB (phew..). Now, when users visit the page, it works without any out of memory exception.
Question
Is there a way I could get these memory statistics via code (ideally C#)? I'm thinking of being able to regularly monitor these memory statistics and trigger an alert (send an email/post to a webhook from my C# code) when the allocated memory gets too high.
Additional information
Kentico version 9.0.42, hosted in Azure, scaled to 2 instances.
The App Service Plan's (in Azure) memory usage was roughly at 50% throughout; this rules out setting an alert at that level.
Thanks!
| How to get Kentico's memory statistics via C# code? |
Add the current timestamp as a parameter of the URL, e.g. http://server.com/index.php?timestamp=125656789
Comments:
– Byron Whitlock: This is the best answer, since older browsers (I am looking at you, IE) don't follow the no-cache header in some situations.
– Nathan Long: Do this on the link? What about regular page refreshes?
– Anatoliy: Redirect the browser to a new timestamp if the timestamp passed by param is too old.
|
For a small intranet site, I have a dynamic (includes AJAX) page that is being cached incorrectly by Firefox. Is there a way to disable browser caching for a single page?
Here's the setup I'm using:
Apache under XAMPP, running on a Windows server
PHP
Clarification
The content that I'm primarily concerned about is page text and the default options in some <select>s. So I can't just add random numbers to the end of some image URLs, for example.
Update: I followed the suggestions I've gotten so far:
I'm sending nocache headers (see below)
I'm including a timestamp URL parameter and redirecting to a new one if the page is reloaded after 2 seconds, like this:
$timestamp = $_GET['timestamp'];
if ((time()-$timestamp) > 2) {
header('Location:/intranet/admin/manage_skus.php?timestamp='.time());
}
Now Firebug shows that the headers specify no cache, but the problem persists. Here are the response headers for the page:
Date Fri, 25 Sep 2009 20:41:43 GMT
Server Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i mod_autoindex_color PHP/5.2.8
X-Powered-By PHP/5.2.8
Expires Mon, 20 Dec 1998 01:00:00 GMT
Last-Modified Fri, 25 Sep 2009 20:41:43 GMT
Cache-Control no-cache, must-revalidate
Pragma no-cache
Keep-Alive timeout=5, max=100
Connection Keep-Alive
Transfer-Encoding chunked
Content-Type text/html | Is there a way to disable browser cache for a single page? |
Answer to first question: ./configure has already been found according to the answer here. It is under the source folder of tensorflow as shown here.
Answer to second question:
Actually, I have the GPU NVIDIA Corporation GK208GLM [Quadro K610M]. I also have CUDA + cuDNN installed. (Therefore, the following answer assumes you have already installed CUDA 7.0+ and cuDNN correctly, with the correct versions.) However, the problem is: I have the driver installed but the GPU is just not working. I made it work with the following steps:
At first, I did this lspci and got:
01:00.0 VGA compatible controller: NVIDIA Corporation GK208GLM [Quadro K610M] (rev ff)
The status here is rev ff. Then, I did sudo update-pciids, and check tensorflow0 again, and got:
tensorflow1
Now, the status of Nvidia GPU is correct as rev a1. But now, the tensorflow2 is not supporting GPU yet. The next steps are (the Nvidia driver I installed is version tensorflow3):
tensorflow4
in order to add the driver into correct mode. Check again:
tensorflow5
We can find that the tensorflow6 is shown and tensorflow7 is in correct mode.
Now, use the example here for testing the GPU:
tensorflow8
As you can see, the GPU is utilized.
|
When installing TensorFlow on my Ubuntu, I would like to use GPU with CUDA.
But I am stopped at this step in the Official Tutorial :
Where exactly is this ./configure? Or where is the root of my source tree?
My TensorFlow is located here /usr/local/lib/python2.7/dist-packages/tensorflow. But I still did not find ./configure.
EDIT
I have found the ./configure according to Salvador Dali's answer. But when doing the example code, I got the following error:
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
E tensorflow/stream_executor/cuda/cuda_driver.cc:466] failed call to cuInit: CUDA_ERROR_NO_DEVICE
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:86] kernel driver does not appear to be running on this host (cliu-ubuntu): /proc/driver/nvidia/version does not exist
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA:
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
The cuda device cannot be found.
Answer
See the answer about how I enabled GPU support here.
| where is the ./configure of TensorFlow and how to enable the GPU support? |
Step Functions are excellent at coordinating workflows that involve multiple predefined steps. They can do parallel tasks and error handling well, and mainly use Lambda functions to perform each task.
Based on your use-case, step functions sound like a good fit. As far as pricing, it adds a very small additional charge on top of Lambdas. Based on your description, I doubt you'd even notice the additional cost. You'd need to evaluate that based on the number of "state transitions" you would be using. Of course, you'll also have to pay for your Lambda invocations.
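For a split-then-load pipeline like the one in this thread, a skeleton state machine in Amazon States Language might look like this; all state names and ARNs are hypothetical:

```json
{
  "StartAt": "SplitFile",
  "States": {
    "SplitFile": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:split-file",
      "Next": "LoadRegions"
    },
    "LoadRegions": {
      "Type": "Parallel",
      "End": true,
      "Branches": [
        {
          "StartAt": "LoadRegionA",
          "States": {
            "LoadRegionA": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load-region-a",
              "End": true
            }
          }
        },
        {
          "StartAt": "LoadRegionB",
          "States": {
            "LoadRegionB": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load-region-b",
              "End": true
            }
          }
        }
      ]
    }
  }
}
```

The Parallel state is what gives you the fan-out to multiple regions, and retry/catch clauses can be attached per state for error handling.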
|
I am designing an application whose input is a large text file (size ranges from 1-30 GB) uploaded to an S3 bucket every 15 min. It splits the file into n smaller ones and copies these files to 3 different S3 buckets in 3 different AWS regions. Then 3 loader applications read these n files from their respective S3 buckets and load the data into their respective Aerospike clusters.
I am thinking to use AWS lambda function to split the file as well as to load the data. I recently came across AWS step function which can also serve the purpose based on what I read. I am not sure which one to go with and which will be cheaper in terms of pricing. Any help is appreciated.
Thanks in advance!
| AWS Lambda vs AWS step function |
When you deploy a Lambda@Edge function, it is deployed to all edge cache regions across the world, each with its own replica of the function. Regional edge caches are a subset of the main AWS regions and edge locations.
When a user makes a request to the nearest PoP/edge, the Lambda associated with that edge cache region gets called. All logs of the Lambdas associated with those regions will be in their edge cache region's CloudWatch logs.
For example:
If a user is hitting us-east-1 region then its associated logs will be in us-east-1.
To know exactly where (on which region) your function is logging, you can run this AWS CLI script:
FUNCTION_NAME=function_name_without_qualifiers
for region in $(aws --output text ec2 describe-regions | cut -f 3)
do
for loggroup in $(aws --output text logs describe-log-groups --log-group-name-prefix "/aws/lambda/us-east-1.$FUNCTION_NAME" --region $region --query 'logGroups[].logGroupName')
do
echo $region $loggroup
done
done
on which you have to replace "function_name_without_qualifiers" with the name of your lambda@edge. Link
Hope it helps.
|
As explained in the Docs , I set up Lambda@edge for cloudfront trigger of Viewer Response.
The lambda function code :
'use strict';
exports.handler = (event, context, callback) => {
    console.log('----EXECUTED------');
    const response = event.Records[0].cf.response;
    console.log(event.Records[0].cf_response);
    callback(null, response);
};
I have set up trigger appropriately for the Viewer Response event.
Now, when I make a request through CloudFront, it should be logged in CloudWatch, but it isn't.
If I do a simple Test Lambda Function (using Button), it is logged properly.
What might be the issue here ?
| Lambda@Edge not logging on cloudfront request |
Your RewriteCond is almost correct, but you have to capture $1 in a group in the RewriteRule, and your target needs to be out/$1.html.
You can use this rewrite rule in your site root .htaccess:
RewriteEngine On
# To internally forward /dir/file to /out/dir/file.html
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{DOCUMENT_ROOT}/out/$1.html -f
RewriteRule ^(.+?)/?$ out/$1.html [L]
|
I have a CMS that I've built myself in PHP (CodeIgniter framework). I was thinking about why PHP has to process all that code every time just to respond with a page. So instead I will create the complete HTML pages and serve them when a user asks for them.
That is why I need to rewrite a URL to a specific file only if that file exists. For this, I need to use regex captured groups, because I want to build this for all files in that folder and its subfolders.
For example, I want to put a bunch of HTML pages in %{DOCUMENT_ROOT}/out and want to access them directly with rewrite rules.
For example, if the URL is like this:
htaaatps://www.dsaoidashd.com/services/development-of-desktop-applications
I want to look at the following location:
%{DOCUMENT_ROOT}/out/services/development-of-desktop-applications.html
and if it exists, I want to return it to the client browser without continuing.
And of course, if I have yet more levels like this:
htaaatps://www.dsaoidashd.com/services/development/of-desktop-applications
then I need to check this location:
%{DOCUMENT_ROOT}/out/services/development/of-desktop-applications.html
and return that file to the calling browser.
I tried
I know that I have to use RewriteCond to check if the file exists, but how do I pass regex captured groups to the RewriteCond so I can check them based on the URL provided?
RewriteEngine on
RewriteCond %{DOCUMENT_ROOT}/out/$1.html -f
RewriteRule ^.*$ /$1.html [L]
| htaccess: rewrite URL to a file only if that file exists, but with regex captured groups |
Try:
git clone https://github.com/username/MYPROJECT
which should be the correct http address (instead of trying to access GitHub through an ssh session) for a public repo.
It will take advantage of their support for smart http.
git clone https://username@github.com/username/project.git
is for a private repo (as explained here), which should work if your id is right and your public ssh key is correctly registered on your GitHub account.
(Note: your original address was missing the /username/ part.)
The OP reports:
my RSA keys were not used when authenticating; I did a ssh-add and added them. After that it worked. I figured it out by running ssh -vT git@github.com in my terminal.
|
I'm having trouble trying to clone a GitHub repository with the following command:
git clone https://username@github.com/MYPROJECT.git
When I run it, I get this error:
fatal: cannot exec 'git-remote-https': Permission denied
How can I resolve it?
| How can I resolve a permission denied error with git-remote-https? |
This is a multi-stage build. It is used to keep the running docker container small while still being able to build/compile things that need a lot of dependencies.
For example, a Go application could be built by using:
FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
So in the first part we need a complete Go environment to compile our software. Notice the name for the first part and the alias builder:
FROM golang:1.7.3 AS builder
In the second part, beginning from the second FROM, we only need the compiled app and no other Go dependencies anymore. So we can change the base image to a much smaller Alpine Linux.
But the compiled files are still in our builder image and not part of the image we want to start.
So we need to copy files from the builder image via COPY --from=builder. You can have as many stages as you want. The last one is the one defining the image which will be the template for the docker container.
You can read more about it in the official documentation: https://docs.docker.com/develop/develop-images/multistage-build/
|
I am seeing a dockerfile whose code is given below:
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /src
COPY ["FirstDockerApp/FirstDockerApp.csproj", "FirstDockerApp/"]
RUN dotnet restore "FirstDockerApp/FirstDockerApp.csproj"
COPY . .
WORKDIR "/src/FirstDockerApp"
RUN dotnet build "FirstDockerApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "FirstDockerApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "FirstDockerApp.dll"]
On the 2nd last line there is COPY --from=publish /app/publish . and I don't understand why --from is used and what purpose it serves. Can you please help me to understand it?
| What is --from used in copy command in dockerfile |
You have to select "commit and push".
If you want to upload (push) the changes that you made, go to:
VCS -> Git -> PUSH
Only after "pushing", your changes will be uploaded to GitHub.
If you select only "commit", your changes will remain local.
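The same commit-versus-push distinction is easy to see on the command line; here is a small sketch using a local bare repository standing in for GitHub (all paths and names are made up):

```shell
# A local bare repo plays the role of GitHub.
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git
git clone -q remote.git work 2>/dev/null
cd work
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > file.txt
git add file.txt
git commit -qm "add file"     # commit: recorded locally only
git push -q origin HEAD       # push: now the commit reaches the "remote"
git ls-remote origin          # lists the branch that was just pushed
```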
|
I am trying to use git in Android Studio. If I choose to commit changes, it says that it has successfully committed the changed files, but those changes do not appear on GitHub. Instead, if I delete the repository from GitHub and choose Share Project on GitHub, it successfully creates a new repository and uploads the files into it. This means that the connection is fine. Also, I have checked the .gitignore file; the Java files are not in that list. What could be the problem?
| Android Studio not committing to GitHub |
You are correct - Elastic Beanstalk uses Amazon EC2 instances, Load Balancers and Amazon RDS databases.
From AWS Elastic Beanstalk Pricing - Amazon Web Services (AWS):
There is no additional charge for AWS Elastic Beanstalk. You pay for AWS resources (e.g. EC2 instances or S3 buckets) you create to store and run your application. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.
Therefore, if the Amazon EC2 instance(s) used by Elastic Beanstalk meet the requirements of the AWS Free Tier (eg using T2 or T3 micro instances), then they would fall under the Free Tier.
The free tier includes (in the first 12 months of your account):
750 hours per month of Linux, RHEL, or SLES t2.micro or t3.micro instance dependent on region
750 hours per month of Windows t2.micro or t3.micro instance dependent on region
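For the deployment itself, the EB CLI is the usual route. Roughly, and with hypothetical app/environment names (the exact platform string varies by CLI version and region):

```shell
eb init my-java-app --platform java --region us-east-1   # one-time project setup
eb create my-env --single   # --single: one instance, no load balancer (free-tier friendly)
eb deploy                   # redeploy after code changes
```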
|
I want to deploy or upload a Java Application in Elastic Beanstalk.
Is Elastic Beanstalk a Free Tier eligible service?
If yes, for how long? Like EC2's 750 hrs/month?
I read the pricing paragraph in the Elastic Beanstalk dashboard, but it seems like Elastic Beanstalk internally uses an EC2 instance. I am confused here.
As of now, my application does not have any database connection; it calls an API and gets the data instead.
If I deploy the application in Elastic Beanstalk as a Free Tier user (as of now), will anything be charged for keeping the Elastic Beanstalk service up and running?
| AWS Elastic Beanstalk Pricing |
docker exec will let you run commands in the container.
docker exec $target_container mkdir -p /opt/project/build/core
docker cp /opt/project/build/core/bundle $target_container:/opt/project/build/core/
Note the trailing / on the cp, which tells Docker to copy the source into the core/ directory rather than renaming bundle -> core.
Replace
If you want to completely "replace" an existing bundle directory rather than add files to it with the cp, then you need to start with removal. Note that removing a directory requires rm -rf; plain rm -f will not remove a directory:
docker exec $target_container sh -c \
    'rm -rf /opt/project/build/core && mkdir -p /opt/project/build/core'
|
Is it possible to create a folder, if it does not already exist, before copying to it?
docker cp /opt/project/build/core/bundle target_container:/opt/project/build/core
Normally only /opt/project/build/ exists.
What I want to do is copy the folder bundle and replace the existing folder and its files if it exists. If it does not exist, the folder core/bundle should be created and the files should be copied.
| Docker: Remove and create folder before doing cp |
Yes — just set up your SMTP server to run in a Docker container using a Dockerfile in the normal way. Then, when you run the container, make sure you publish the SMTP port:
docker run -p 25:25 --name yourSmtpDockerContainer yourSmtpDockerImage
Now, if the host the container is running on exposes port 25, any SMTP traffic sent to the host's domain name will reach the container.
You may need to publish other SMTP ports (e.g. 465/587) too, as required - cheers
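For testing you often don't need a full SMTP daemon in the container at all — a tiny fake server that just captures messages is enough (dedicated tools such as MailHog do exactly this). Below is a minimal, self-contained sketch of such a sink in Python; it speaks just enough SMTP to satisfy common clients (no TLS, no auth, no error handling — a toy for tests, not a real mail server):

```python
import asyncio
import smtplib

# Captured messages end up here as (sender, recipients, raw_data) tuples.
captured = []

async def handle_client(reader, writer):
    # Just enough of the SMTP conversation for typical clients.
    writer.write(b"220 fake-smtp ready\r\n")
    await writer.drain()
    sender, recipients, data_lines, in_data = None, [], [], False
    while True:
        line = await reader.readline()
        if not line:
            break
        if in_data:
            if line.rstrip(b"\r\n") == b".":  # end-of-message marker
                captured.append((sender, recipients, b"".join(data_lines)))
                sender, recipients, data_lines, in_data = None, [], [], False
                writer.write(b"250 OK\r\n")
            else:
                data_lines.append(line)
        else:
            cmd = line.rstrip(b"\r\n").upper()
            if cmd.startswith((b"EHLO", b"HELO")):
                writer.write(b"250 fake-smtp\r\n")
            elif cmd.startswith(b"MAIL FROM:"):
                sender = line.split(b":", 1)[1].strip().decode()
                writer.write(b"250 OK\r\n")
            elif cmd.startswith(b"RCPT TO:"):
                recipients = recipients + [line.split(b":", 1)[1].strip().decode()]
                writer.write(b"250 OK\r\n")
            elif cmd.startswith(b"DATA"):
                in_data = True
                writer.write(b"354 end with <CRLF>.<CRLF>\r\n")
            elif cmd.startswith(b"QUIT"):
                writer.write(b"221 bye\r\n")
                await writer.drain()
                break
            else:
                writer.write(b"250 OK\r\n")
        await writer.drain()
    writer.close()

def send_test_mail(port):
    # A stdlib SMTP client playing the role of the application under test.
    with smtplib.SMTP("127.0.0.1", port, timeout=5) as client:
        client.sendmail("[email protected]", ["[email protected]"],
                        "Subject: test\r\n\r\nhello from the test\r\n")

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]  # OS-assigned free port
    await asyncio.to_thread(send_test_mail, port)  # run blocking client off-loop
    server.close()
    await server.wait_closed()

asyncio.run(main())
print(captured[0][0], captured[0][1])
```

Run inside a container with the port published as shown above, and point your application's SMTP host at it; every "sent" mail simply lands in `captured`.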
|
I built up my development environment using Docker containers, but currently all mail is sent through my company's SMTP server, which I cannot use for testing. Is there a way to create a container that replaces the real SMTP server? Do I need a DNS entry?
Thanks.
| Docker: how to use container to replace real smtp server? |
I found the root cause:
the scheduler container has a different timezone, so it runs with a few hours' delay. | I am using Rancher 2.3.3.
When I configure a CronJob with schedule values like @hourly and @daily, it works fine,
but when I configure it with values like "6 1 * * *", it doesn't work. OS times are in sync between all cluster nodes.
My config file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure
| kubernetes cronjob doesn't work correctly when values are customized
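The effect described in the answer above is easy to reason about numerically: a schedule of "6 1 * * *" means 01:06 in the scheduler's local time, so a scheduler container whose clock is set to a different timezone fires at a different UTC wall-clock time than expected — unlike @hourly/@daily, where a shift by whole hours is much harder to notice. A small sketch using only fixed offsets (the +04:30 offset is purely an illustrative assumption):

```python
from datetime import datetime, timedelta, timezone

# "6 1 * * *" fires at 01:06 local time, whatever "local" means
# inside the scheduler container.
cron_hour, cron_minute = 1, 6

utc = timezone.utc
container_tz = timezone(timedelta(hours=4, minutes=30))  # illustrative offset

# The same calendar instant 01:06, as seen by a scheduler running in UTC
# versus one running in the offset timezone.
fire_utc = datetime(2020, 1, 2, cron_hour, cron_minute, tzinfo=utc)
fire_container = datetime(2020, 1, 2, cron_hour, cron_minute, tzinfo=container_tz)

delay = fire_utc - fire_container
print(delay)  # the "few hours delay" from the question
```

If the container's zone is behind UTC the job appears late by the offset; if it is ahead, the job appears early.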
SQS cannot publish messages to SNS. SQS can only store the messages. You have to pull the messages using the SQS APIs. Hope this helps!
| We're presently building an application using AWS and have a need to push msgs into SQS. My question is whether it is possible to have SQS publish a message to an SNS topic which will trigger a Lambda (subscribing to the SNS). The Lambda then needs to return an affirmation to SQS that it received the message, thereby removing that message from SQS.
Is the scenario outlined above possible? Or is the only way to grab a message from SQS to poll the queue via Lambda, etc.?
Thank you in advance for any help provided. Apologies for misuse of terminology, but I'm relatively new to AWS.
| Can AWS SQS publish to SNS or is polling SQS required?
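The pull model described in the answer above is usually a loop around a receive call followed by an explicit delete — the queue never removes a message just because a consumer saw it. A sketch of that loop with a hypothetical in-memory stub standing in for the real SQS client (with boto3 the corresponding calls would be `receive_message` / `delete_message`):

```python
import collections

class FakeQueue:
    """Hypothetical stand-in for an SQS queue: messages stay until deleted."""
    def __init__(self, messages):
        self._messages = collections.deque(messages)  # (receipt, body) pairs
        self.deleted = []

    def receive_message(self, max_messages=1):
        # Real SQS hides received messages for a visibility timeout and
        # redelivers them if they are not deleted; this toy just hands
        # them out so the control flow is visible.
        batch = []
        while self._messages and len(batch) < max_messages:
            batch.append(self._messages.popleft())
        return batch

    def delete_message(self, receipt):
        self.deleted.append(receipt)

def poll(queue, handler):
    """Drain the queue: process each message, then delete it explicitly."""
    while True:
        batch = queue.receive_message(max_messages=10)
        if not batch:
            break
        for receipt, body in batch:
            handler(body)
            queue.delete_message(receipt)  # the "affirmation" step

processed = []
q = FakeQueue([("r1", "order-created"), ("r2", "order-paid")])
poll(q, processed.append)
print(processed, q.deleted)
```

In the Lambda-triggered variant the service runs this loop for you, but the contract is the same: a message is only removed once it has been explicitly deleted (or the handler succeeds, in the managed case).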
A main philosophy of Docker is to have one task (or process) per container. See https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/ for more clarification on this.
I would question whether you're making the most of Docker by trying to run so much in one container. It's alright to run PHP and Apache in the same container (there's an official image for this!), but I'd advise against running nginx and PHP-FPM in the same container, because PHP-FPM is its own process and should therefore get its own container.
Secondly, I think you're misusing the multiple FROM feature. From https://docs.docker.com/engine/reference/builder/#from:
"FROM can appear multiple times within a single Dockerfile in order to create multiple images. Simply make a note of the last image ID output by the commit before each new FROM command."
The FROM keyword specifies a base image, which you build on top of. If you want a single image as an output, you need a single base image to build on. If your base image is php:7.1-fpm, you will need to manually install the other version of PHP that you want. This may prove difficult as they'll conflict in a lot of places. I'd strongly recommend rethinking your design and using two separate containers, or running your PHP 5 code with PHP 7 - it's mostly backwards compatible.
| I have started using Docker recently and was able to set up two containers: one running PHP 7.0 with Apache2 and another running MySQL. Both of them are able to talk to each other and everything is working fine. Now I want to set up a new container which should have nginx, php5.6-fpm and php7.0-fpm installed in a single container. I have been trying to achieve this for the past few hours with no luck. Following is my Dockerfile:
FROM nginx:latest
FROM php:7.1-fpm
FROM php:5.6-fpm
COPY ./src /var/www/html
RUN apt-get update && apt-get install -y \
nano \
git \
zip \
mcrypt \
&& docker-php-ext-install mcrypt \
&& docker-php-ext-install pdo_mysql \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install -j$(nproc) gd \
    && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
**EDIT:** I know that one container should have one responsibility, but I am in a situation where I need both php5.6-fpm and php7.1-fpm to be running simultaneously. I can create three containers, i.e. one with php5.6-fpm, one with php7.1-fpm and one with nginx. How would I tell nginx to look for a folder in the container which is running php5.6-fpm if someone tries to access hostone.dev, etc.?
(Note: I'm not sure I understood what you wish to achieve.)
You can easily "squash rebase" original-master as a single commit on top of master :
# from original-master :
git reset --soft master
git commit
With these commands, you would now have a commit :
with the exact content of original-master
on top of master
You can now select whatever content you want from master:
git checkout master -- fileA fileB dir1 dir2 ...
# or if you bluntly want "all the files that exist on master" :
git checkout master -- .
Once you are satisfied with the content:
git commit
# or
git commit --amend
|
I am working on a project that is large and long term. In the middle of development, I decided to dial back some of the features and roll out an MVP. I created a new branch from master and deleted all of the future features.
=====> master
+==========> deleted-and-changed-mvp
I then deployed. However, master should be my origin branch. I forked master to a new branch and merged the MVP into master.
=====> master
+==========> deleted-and-changed-mvp
+> original-master
Now, I want to stack the new features on top of master but can't, because it's considered 20-something commits behind. How can I move this branch on top of master?
================> master
+> original-master
I would prefer original-master to be considered ahead in the deleted files and behind in the modified/added files.
| How can I bump a branch ahead of master? |
Generally speaking, docker-compose can be used to deploy in a production environment. One difference you can make: instead of building the image on the server, you can push the images to AWS ECR or an alternative registry (like the GitLab registry, if you are using GitLab) and then pull the images directly on the server/instance where you are going to deploy.
AWS also has a service called ECS, which can be used to deploy containers, though without using docker-compose.
|
I have a docker-compose.yml configuration that spins up multiple services such as SQL Server, Redis and Elasticsearch. Everything is fine in local development: I run docker-compose up -d --build on a Windows machine and expose its IP and port numbers to the public. That's how I deploy my Docker containers.
But how do I deploy it to the cloud? What service offers this? I know AWS can host containers, but could it run docker-compose up -d --build?
I have been trying to search for how to deploy Docker containers, but all I could find was deploying the containers on your local machine, or using SQL Server0 to deploy, which I have no understanding about.
| How do I really deploy docker-compose.yml to the cloud? |
You can block access to the wp-admin directory using an htpasswd file. Generate an htpasswd file using this tool. Then create a new .htaccess file in the wp-admin directory with these contents:
<FilesMatch "wp-login.php">
AuthName "Admins Only"
AuthUserFile /directory/with/htpasswd/file/
AuthType basic
require user putyourusernamehere
</FilesMatch>
| We are getting brute-force attacked on our sites, and I am afraid to ban the IPs as they may be rotating IPs or legitimate users at some point in their life span.
I would like to block all unknown bots from accessing my site, specifically my /wp-login.php file.
I have spent hours trying to find the code to do this. I am open to suggestions, of course. But is there any way to ban the unknown bots without banning Google and such?
I have a captcha set up on my login form and limit login attempts to 3 fails, then lockout for 36 hours, then 2 more fails and lockout for 96 hours. This, however, is not slowing down the attacks, and they seem to have an endless pool of IPs to choose from.
What I ended up doing, on top of generally tightening WP security, is locking access to wp-login.php and the wp-admin folder.
Very easy and quick setup guide here: http://support.hostgator.com/articles/specialized-help/technical/wordpress/wordpress-login-brute-force-attack for the wp-login.php file.
Locking a folder can be done easily in any cPanel or Plesk. | How do I block unknown bots to my sites?
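The htpasswd entry the answer above generates with an online tool can also be produced locally. A sketch using Apache's legacy {SHA} password scheme, which needs only the standard library (note: {SHA} is weak by modern standards — prefer bcrypt via the `htpasswd` command-line tool when it is available; the username below is just the placeholder from the answer):

```python
import base64
import hashlib

def htpasswd_sha1(user: str, password: str) -> str:
    """Return an htpasswd line in Apache's legacy {SHA} format:
    user:{SHA}base64(sha1(password))."""
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return f"{user}:{{SHA}}{base64.b64encode(digest).decode('ascii')}"

line = htpasswd_sha1("putyourusernamehere", "s3cret")
print(line)
```

Append the printed line to the file that `AuthUserFile` points at, and Apache's basic auth will accept that username/password pair.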
You need to enable the Identity and Access Management (IAM) API for your project: https://console.cloud.google.com/apis/library/iam.googleapis.com | Setting IAM policy
Completed
Creating revision
Completed
Routing traffic
Completed
Creating Cloud Build trigger
Completed
Building and deploying from repository
Trigger execution failed: source code could not be built or deployed, no logs are found.
I am new to GCP and learning by myself. I am trying to connect my Git project to Cloud Run, and it is successfully connected. But when I try to deploy on Cloud Run, I get the error "source code could not be built or deployed, no logs are found". I checked Git and made sure I have proper access, and checked the Dockerfile, which contains all the necessary information. I don't understand what is causing this error; there is nothing in the error log that says what is causing it.
Can anyone help me to understand this error? | GCP Building and deploying from repository Trigger execution failed: source code could not be built or deployed, no logs are found