Yes, it would be consistent. The concept of a 'client' is irrelevant because each API call is independent. The us-east-1 region (formerly known as US-Standard) historically did not have read-after-write consistency, but it is now provided in all regions.
Amazon documentation (http://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel) states: "Amazon S3 provides read-after-write consistency for PUTS of new objects in your S3 bucket in all regions with one caveat." Ignoring the caveat, this means that a client issuing a GET following a PUT for a new object is guaranteed to get the correct result. My question is, would the guarantee also apply if the GET is issued from a different client, not the one which did the PUT (assuming of course the GET follows the PUT chronologically)? In other words, is read-after-write consistency simply read-your-writes consistency, or does it work for all clients? I suspect it works globally but can't find a definitive answer.
What does read-after-write consistency really mean on new object PUT in S3?
The link you posted is, sadly, the only real answer at this time (API Version 2012-08-10). The ReturnValues parameter is used by several DynamoDB operations; however, PutItem does not recognize any values other than NONE or ALL_OLD, and ALL_OLD only returns the item as it was just before it was overwritten, so it returns nothing at all for a brand-new item. In short, the only reliable way to retrieve your inserted object is GetItem, just as you surmised.
I am using the Node.js SDK to put an item into DynamoDB. The item is:

{
  "eventId": date + '-' + eventName + '-' + eventPurpose,
  "eventName": eventName,
  "eventPurpose": eventPurpose,
  "eventDates": eventDates,
  "attendees": attendees
}

The present code for putting the item into DynamoDB:

const params = {
  TableName: "event",
  Item: {
    "eventId": date + '-' + eventName + '-' + eventPurpose,
    "eventName": eventName,
    "eventPurpose": eventPurpose,
    "eventDates": eventDates,
    "attendees": attendees
  },
  ReturnValues: "ALL_OLD"
};
dynamo.put(params, (err, data) => {
  console.log("coming here");
  if (err) {
    console.log("error : " + JSON.stringify(err));
  }
  console.log("data" + JSON.stringify(data));
  cb(null, data);
});

The insertion happens correctly, and the return value is an empty object. I would like to return the inserted item. I found this doc, but it returns a value only when updating an old item. I could not find any other useful info. Is there any workaround, or do we simply need to query using the get method with the primary key?
How to return the inserted item in dynamoDB
You can't point from your Application Load Balancer to CloudFront. Instead, you can create behaviors (or behavior groups) in CloudFront that point to your load balancer, just like:

Default (*) -> S3
/xyz -> Application Load Balancer
Hi, I would like to use an AWS Application Load Balancer and create a target group that by default points to my CloudFront distribution and, based on rules, points to other apps. I could not find a resource describing how to do it. Has anyone done such a thing? Our landing page points to the CloudFront distribution (+ AWS S3), and we want /xyz to point to our EC2 instance.
Is it possible to use an AWS Application Load Balancer with a CloudFront distribution and EC2 as a target group
Are you able to clone the project? Make sure you are actually listed among the collaborators and that you can access it through GitHub. Then clone (steps here). After making changes, go to "VCS" -> "Git" -> "Commit", then "Push". Then go to "VCS" -> "Git" -> "Merge Changes", select the branch you want to merge from, resolve any conflicts, and merge.
I'm working on a project created by a friend. He created his project in IntelliJ and connected it to a repository on GitHub. Committing and pushing works, but I (a collaborator) don't understand how I can pull the project from Git and do merges/pushes/commits. Any help? I've watched plenty of tutorials, but everyone only addresses the problem of connecting a project to GitHub.
GitHub: pulling and pushing project with my collaborator
--webroot-path is the path which should be accessible via HTTP using your domain name. It is given when you first procure the certificates; at renewal time there is no need to supply it explicitly, so I think something could be wrong with the renewal configuration file.

When a certificate is issued, by default Certbot creates a renewal configuration file that tracks the options that were selected when Certbot was run. This allows Certbot to use those same options again when it comes time for renewal.

https://certbot.eff.org/docs/using.html#modifying-the-renewal-configuration-file

I would suggest trying to generate new certificates instead of renewing. That would correct the renewal configuration file.
certbot was used with NGINX to create certificates. There was only one cert created on our server for our production build, staging build, and Jenkins web server. When I run certbot renew, everything is fine until it attempts to challenge the Jenkins server. I get the following error:

Attempting to renew cert (my.domain) from /etc/letsencrypt/renewal/my.domain.conf produced an unexpected error: Missing command line flag or config entry for this setting: Select the webroot for jenkins.my.domain: Choices: ['Enter a new webroot'] (You can set this with the --webroot-path flag). Skipping.
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/my.domain/mykey.pem (failure)

I'm not sure where the Jenkins webroot is located, but I don't think it is as simple as adding it to my letsencrypt conf file at the bottom under webroot, or maybe that is it. Either way, any help is appreciated! :)
Cannot renew certificates with certbot renew/Letsencrypt
Well, everyone has their stupid moments... not precompiling my assets was actually the problem.

rake assets:clean
rake assets:precompile

and everything works again.
When I'm pushing to GitHub, the push works fine and my repo is properly updated. When deploying to Heroku, although it shows that everything works and that it pulled from master (as I understand), the files don't get updated. 76180b5..7bd1ec4 master -> master. I've been trying to get this working for hours now. I even deleted the whole folder from my computer and set everything up from scratch, but it's still not updating on Heroku.
GitHub Repo working but Heroku still the same
I believe on a Linux machine Docker doesn't need VirtualBox and can run on the Linux kernel. Is this correct?

Yes, hence the need on a Mac for a VirtualBox Linux VM (using a TinyCore distribution).

Is Docker still faster/more efficient because it uses a single VM to run multiple containers as opposed to Vagrant's new VM for every environment?

Yes, because of the lack of a hypervisor simulating the hardware and OS: here you can launch multiple containers all using the kernel directly (through direct system calls), without having to simulate an OS. (Note: May 2018, gVisor is another option: a container simulating an OS!) See more at "How is Docker different from a normal virtual machine?".

Of course, remember that Vagrant can use a Docker provider. That means you don't always have to provision a full-fledged VM with Vagrant, but rather images and containers.

Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "foo/bar"
  end
end

See the Vagrant Docker provisioner.
So I have read in many places that Docker is faster and more efficient because it uses containers instead of VMs, but when I downloaded Docker on my Mac I realized that it uses VirtualBox to run the containers. I believe on a Linux machine Docker doesn't need VirtualBox and can run on the Linux kernel. Is this correct? Back to the original question: is Docker still faster/more efficient because it uses a single VM to run multiple containers, as opposed to Vagrant's new VM for every environment?
If Docker uses a virtual machine to run on a Mac, then what is its advantage over Vagrant?
I asked the same thing on the MSDN forums. Apparently, as of this date, there is no API that can be used to pull in that information. Source: reply from MSDN forums
I'd like to consume information about the Recovery Services that I have set up in Azure. If I go to that section inside the Microsoft Azure Management Portal, I'll see a list of the Backup Vaults that I have created. I'm looking for an API that will let me pull up data about them, such as what is presented on the Dashboard:

- Name
- Status
- Location
- Storage used/left
- etc.

So far, I've only been able to find their Storage Services REST API. Thank you
Does Microsoft Azure have a REST API to view information about Backup Vaults?
Job metadata for SUCCEEDED and FAILED jobs is retained for 24 hours. Metadata for jobs in the SUBMITTED, PENDING, RUNNABLE, STARTING, and RUNNING states remains in the queue until the job completes. Your AWS Batch jobs also log STDERR/STDOUT to CloudWatch Logs, where you control the retention policy. (Source: docs.aws.amazon.com/batch/latest/userguide/batch_user.pdf)
I'm trying to understand how long the details associated with an AWS Batch job are retained. For example, the Kinesis limits page describes how each stream defaults to a 24-hour retention period that is extendable up to 7 days. The AWS Batch limits page does not include any details about either the maximum time or count allowed for jobs. It does say that one million is the limit for SUBMITTED jobs, but it's unclear if that is exclusively for SUBMITTED or includes other states as well. Does anybody know the details of Batch job retention?
Limits for AWS Batch job details retention
I don't think the SSL error is related to the firewall-listing code. I've just tested and got a response without any problems:

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)

# Name of the firewall rule to return.
firewall = 'default-allow-icmp'  # TODO: Update placeholder value.

request = service.firewalls().get(project="my-project-1", firewall=firewall)
response = request.execute()

# TODO: Change code below to process the `response` dict:
print(response)

Response:

{'id': '5299901251757818599', 'creationTimestamp': '2021-07-28T17:53:28.718-07:00', 'name': 'default-allow-icmp', 'description': 'Allow ICMP from anywhere', 'network': *redacted*}

Check your code to see where the SSL error might be coming from.
I am trying to get the firewall list, or just get specific firewall info, from GCP. I am using the GCP Python SDK. I have imported all required modules/packages. The rest of my code works fine, but I have problems with getting information about or listing firewalls from my GCP environment. I am using Python code to get firewall info; the variables project and credentials are defined in the script, and the modules are imported (from pprint import pprint, from googleapiclient import discovery, from oauth2client.client import GoogleCredentials):

service = discovery.build('compute', 'v1', credentials=credentials)

# Name of the firewall rule to return.
firewall = 'FW-name'  # TODO: Update placeholder value.

request = service.firewalls().get(project=project, firewall=firewall)
response = request.execute()

# TODO: Change code below to process the `response` dict:
pprint(response)

I am still receiving the following error:

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)
gcp - python sdk - get firewall
Try to use the following query:

(metric_1 or metric_2 * 0) / metric_2
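Why the metric_2 * 0 trick fills in the missing series can be seen with a tiny simulation (a hypothetical Python stand-in for PromQL's vector matching, not PromQL itself):

```python
# Toy instant vectors: {label_set: value}
m1 = {("label=1",): 10}
m2 = {("label=1",): 2, ("label=2",): 5}

def vec_mul_scalar(v, s):
    # metric_2 * 0: same label sets, values zeroed
    return {k: val * s for k, val in v.items()}

def vec_or(a, b):
    # 'a or b': all entries of a, plus entries of b whose label sets
    # are absent from a
    out = dict(b)
    out.update(a)
    return out

def vec_div(a, b):
    # binary '/' matches entries with identical label sets
    return {k: a[k] / b[k] for k in a if k in b}

result = vec_div(vec_or(m1, vec_mul_scalar(m2, 0)), m2)
# result maps label=1 to 5.0 and label=2 to 0.0
```

metric_2 * 0 produces a zero-valued series for every label set of metric_2; 'or' keeps metric_1 where it exists and falls back to those zeros, so the division is defined for every label set of metric_2.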
I have 2 metrics, and the first metric doesn't always exist. In cases when it doesn't exist, I want it to behave as if it had a value of 0 (or the result to have a value of 0).

Metrics:

metric_1{label=1} 10
...
metric_2{label=1} 2
metric_2{label=2} 5
...

Operation: metric_1 / metric_2

Result:

{label=1} 5

Expected:

{label=1} 5
{label=2} 0

My real-life example has many labels, so creating a static vector with {label=2} doesn't work.
Setting default value during binary operation when value doesn't exist
Alright, I found a solution without .htaccess. For everybody who might have the same problem: sudo nano /etc/apache2/apache2.conf, then scroll down to:

<Directory /var/www/>
    Options Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>

Now edit Options Indexes FollowSymLinks to Options FollowSymLinks. And by the way, AllowOverride None should be AllowOverride All. Then restart Apache: /etc/init.d/apache2 restart
I did everything that should have been necessary to prevent directory browsing. My .htaccess has this code: Options All -Indexes. It works fine when I browse into a directory where subdirs are present, for example "example.net/system"; "system" has subdirs like "main", "top", etc. But when I go into "example.net/system/main", in which no subdirs are present, I can see all PHP, HTML, etc. files. My .htaccess file is located in the /var/www/html folder.
htaccess Directory browsing works only when subdir is present
Try connecting to the instance via the serial console: https://cloud.google.com/compute/docs/instances/interacting-with-serial-console. From there you should be able to repair the firewall rules.
I logged in via SSH to my Google Cloud Linux machine; by mistake I changed a firewall rule and lost the SSH connection. Now I am not able to log in via SSH (22). Is there any way to recover that? I know I can take snapshots of my machine and create new instances using a snapshot, but is there any way to get SSH login to the same machine again, or do I have to delete it?
Google Cloud Instances Firewall Unable to Login SSH
Stream records are organized into groups, or shards. According to the Lambda documentation, concurrency is achieved at the shard level; within each shard, the stream events are processed in order:

Stream-based event sources: for Lambda functions that process Kinesis or DynamoDB streams, the number of shards is the unit of concurrency. If your stream has 100 active shards, there will be at most 100 Lambda function invocations running concurrently. This is because Lambda processes each shard's events in sequence.

And according to Limits in DynamoDB:

Do not allow more than two processes to read from the same DynamoDB Streams shard at the same time. Exceeding this limit can result in request throttling.
I'm in the process of writing a Lambda function that processes items from a DynamoDB stream. I thought part of the point behind Lambda was that if I have a large burst of events, it'll spin up enough instances to get through them concurrently, rather than feeding them sequentially through a single instance. As long as two events have a different key, I am fine with them being processed out of order. However, I just read this page on Understanding Retry Behavior, which says:

For stream-based event sources (Amazon Kinesis Data Streams and DynamoDB streams), AWS Lambda polls your stream and invokes your Lambda function. Therefore, if a Lambda function fails, AWS Lambda attempts to process the erring batch of records until the time the data expires, which can be up to seven days for Amazon Kinesis Data Streams. The exception is treated as blocking, and AWS Lambda will not read any new records from the stream until the failed batch of records either expires or is processed successfully. This ensures that AWS Lambda processes the stream events in order.

Does "AWS Lambda processes the stream events in order" mean Lambda cannot process multiple events concurrently? Is there any way to have it process events from distinct keys concurrently?
Does AWS Lambda process DynamoDB stream events strictly in order?
If those GitHub repositories were all part of the same organisation, that would be easy: see "Inviting users to join your organization".

If not, you can script it with the GitHub API:

- list all your repositories
- for each repo, add your collaborator to it.
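A sketch of that scripted approach using only the Python standard library; the endpoint is GitHub's "add a repository collaborator" API, and the org name, repo name, username, and token below are placeholders:

```python
import urllib.request

API = "https://api.github.com"

def add_collab_request(owner, repo, username, token):
    # Builds PUT /repos/{owner}/{repo}/collaborators/{username},
    # which invites `username` as a collaborator on that repo.
    return urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/collaborators/{username}",
        method="PUT",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# For each repo returned by GET /orgs/{org}/repos (paginated),
# urllib.request.urlopen(add_collab_request(...)) would send one invite.
req = add_collab_request("my-org", "repo-1", "new-employee", "TOKEN")
```

Listing repos would use GET /orgs/{org}/repos (paginated via the page parameter); looping urlopen over those results sends one invitation per repository.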
We have 150+ repositories on our GitHub for different clients. Now we hired a new employee to take care of the managed-services process, and he requires access to all the repositories. Inviting him as a collaborator to all repositories one by one is going to take plenty of time. Is there a way to add him to all repositories at once? A command or some trick in GitHub that I do not know of.
Add collaborator to many repositories at once
For registering Vault as a service, you will have to do the following steps. Create a file, name it vault.json, and write this into it:

{"service": {"name": "vault", "tags": ["vault-tag"], "port": 8200}}

Now enter this command:

consul services register vault.json

You can now see that Vault is registered as a service.
I'm running Vault and Consul as pods in Kubernetes. When I check consul catalog services, it shows consul alone. How can I register Vault as a service? I tried with the following link, but it didn't work: https://learn.hashicorp.com/consul/getting-started/services
How to register Vault (HashiCorp Vault) as a service in Consul (HashiCorp Consul) when running on Kubernetes?
Finally achieved my task of executing a Java jar using cron. Posting the solution so that it can help other beginners.

Dockerfile:

FROM openjdk:8-jre-alpine
MAINTAINER dperezcabrera
RUN apk update && apk add bash
ADD java-version-cron /temp/java-version-cron
# create the target directory before copying the jar into it
RUN mkdir -p /etc/test && chmod 777 /etc/test
ADD DockerTesting-0.0.1-SNAPSHOT.jar /etc/test
RUN cat /temp/java-version-cron >> /etc/crontabs/root
RUN rm /temp/java-version-cron
RUN touch /var/log/cron.log
CMD crond 2>&1 >/dev/null && tail -f /var/log/cron.log

java-version-cron:

* * * * * java -jar /etc/test/DockerTesting-0.0.1-SNAPSHOT.jar >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job

Place your Dockerfile, the cron file, and the jar in the same folder, or adjust according to your requirements.
I want to create a Dockerfile using Alpine (as it creates a lightweight image) with cron (to execute a task periodically). As a newbie I initially tried with Ubuntu; it worked perfectly, as I took the help from this link: Ubuntu Example with CRON. Now the problem is it creates a heavy Docker image. I want to convert this same example to Alpine but couldn't find good help; I searched a lot of websites but didn't find anything fruitful.

MAIN TASK: My main task is to execute a Java jar file through Docker and execute that jar file periodically.

What I have tried till now is to create a simple Dockerfile and a crontab file just to print a message periodically. The main issue I am facing is installing cron on Alpine.

DOCKERFILE (DockerFile)

FROM ubuntu:latest
MAINTAINER [email protected]

# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Install Cron
RUN apt-get update
RUN apt-get -y install cron

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

CRONTAB (crontab)

* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job

This worked perfectly for Ubuntu, but how do I achieve it with openjdk:8-jre-alpine?
openjdk:8-jre-alpine :- Execute a jar file periodically using docker with cron
GitHub is currently down, probably from the DDoS attack that is affecting a bunch of sites at the moment, including Twitter, Spotify, and more.
Recently I found a Palmer Drought Severity Index (PDSI) calculation package written by cszang on GitHub. According to the website https://github.com/cszang/pdsi, installation is very easy:

install.packages("devtools")
library(devtools)
install_github("cszang/pdsi")

However, when I install it, an error occurs:

Error in curl::curl_fetch_disk(url, x$path, handle = handle) :
  Couldn't resolve host name

Does anyone know about this issue? Thanks a lot.
Problem installing the PDSI package from GitHub
If you want to clone someone else's repo and make contributions to it, you need to fork the repo as your own first. Push commits to your own repo, and then you can send a "Pull Request" to the original author.

git clone git@github.com:username/file.git

Here the username must be yours, or an org that you are in. Otherwise, you don't have permission to update the repo.

By the way, a tip for you. You can:

$ git remote add origin git@github.com:username/file.git
$ git push -u origin master

Since then you can push simply with:

$ git push
I'm new to Git, so I am trying to learn it. Meanwhile, when I am working with Git, I encountered this error:

ERROR: Permission to username/file.git denied to Username1.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.

By the way, the file is only readable. How can I push?

git clone git@github.com:username/file.git
git add Files.txt
git add .
git commit -a -m "Second Commit"
git push git@github.com:username/file.git

By the way, my git config:

[core]
    repositoryformatversion = 0
    filemode = true
    bare = false
    logallrefupdates = true
    ignorecase = true
    precomposeunicode = false
[remote "origin"]
    url = git@github.com:username/files.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
    remote = origin
    merge = refs/heads/master
GitHub Pushing for read-only
Long and short: Microsoft doesn't support XP anymore, so the certificate chain won't validate on those clients. Time to upgrade.
We have some clients running Windows XP Embedded that are rejecting our new SSL cert from GoDaddy. Upgrading these systems is not an option since they are connected via a satellite link. Can anyone point me at where to go and what to look at? SSL Checker shows the cert is correct, yet the URL shows an error even though the path is correct. Thanks to anyone who can provide a direction forward. Chris
SSL cert issued by trusted CA but only works on some clients?
Create a new table called batches that has one row per batch. This table will have an auto-increment column called batchid. Your script will start by inserting a row into this table. In addition to the batchid, it can also contain the time stamp of when the batch was created, who created it, and perhaps other pertinent information. This id will then be used through the rest of the batch.
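A minimal sketch of this pattern, using SQLite's AUTOINCREMENT as a stand-in for MySQL's AUTO_INCREMENT (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE batches (
    batchid INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    created_by TEXT)""")
conn.execute("""CREATE TABLE items (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    batch INTEGER,
    payload TEXT)""")

def new_batch(conn, user):
    # Each script run inserts one row; the auto-increment id IS the
    # batch number. Concurrent callers get distinct ids without any
    # SELECT MAX(batch)+1 race.
    cur = conn.execute("INSERT INTO batches (created_by) VALUES (?)", (user,))
    return cur.lastrowid

b1 = new_batch(conn, "alice")
b2 = new_batch(conn, "bob")  # a "concurrent" user: guaranteed a different id

for payload in ("row1", "row2"):
    conn.execute("INSERT INTO items (batch, payload) VALUES (?, ?)",
                 (b1, payload))
```

Because the id is handed out by the database's auto-increment machinery inside the INSERT, two users creating batches at the same time can never collide.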
We have a script that should function like so: Script inserts n rows into database inserting the same unique number into the batch column. Script puts a job on the AWS queue with the batch number. AWS worker does some processing and fires off another script on our server. Script on our server inserts the response from the AWS worker into all rows in the batch. This is all easy except - creating the batch number. We can't just get the max batch number and add 1 due to multiple users being able to create a batch at the same time. So what is the most ideal way to do this? The batch number does not have to be an integer although it could be useful.
Generate batch number in mysql?
Well, I got a solution; it's fixed. I just had to run my app with this line:

pm2 start app.js --watch

Then it watches for file modifications and restarts the app automatically.
I just got a problem. I'm using a webhook for GitHub (I wrote one in PHP). The problem: I want to restart my Node.js app with pm2 from my PHP code, something like:

shell_exec("pm2 restart test");

but my user www-data (nginx) can't execute it. When I try it logged in as www-data, I get this:

Error: EACCES, permission denied '/.pm2'
    at Error (native)
    at Object.fs.mkdirSync (fs.js:747:18)
    at Object.CLI.pm2Init (/usr/local/lib/node_modules/pm2/lib/CLI.js:40:8)
    at Object.<anonymous> (/usr/local/lib/node_modules/pm2/bin/pm2:21:5)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)
    at startup (node.js:129:16)

And I don't want to use "sudo" because it's not really clean. Do you have an idea? Thank you very much in advance :)
Restart my nodejs app (with pm2) from php server
Your configuration is correct. I think the problem is in your ASP.NET Core application. Please check the following.

To receive the IP address:

var ipAddress = context.Connection.RemoteIpAddress;

To receive the User-Agent:

var userAgent = context.Request.Headers["User-Agent"];
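For context, the reason the nginx config sets X-Real-IP and X-Forwarded-For is that behind a proxy the TCP peer address is always the proxy itself; the backend has to read the client identity from those headers. A language-neutral sketch of that logic (Python, with a hypothetical helper name, not ASP.NET code):

```python
def client_info(headers, remote_addr):
    """Prefer proxy-set headers over the socket peer address.

    headers: dict-like of request headers; remote_addr: the TCP peer,
    which behind nginx is the proxy (e.g. 172.18.0.x), not the client.
    """
    xff = headers.get("X-Forwarded-For")
    if xff:
        # X-Forwarded-For may be a comma-separated chain of proxies;
        # the first entry is the original client.
        ip = xff.split(",")[0].strip()
    else:
        ip = headers.get("X-Real-IP", remote_addr)
    return ip, headers.get("User-Agent", "")

ip, ua = client_info(
    {"X-Forwarded-For": "203.0.113.9, 172.18.0.3", "User-Agent": "curl/8.0"},
    "172.18.0.3",
)
# ip is "203.0.113.9" (the real client), not the proxy's 172.18.0.3
```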
I have two Docker images: one for nginx, which receives requests from clients (the request contains the client's IP address and User-Agent), and one for an ASP.NET application, which receives the request sent from nginx with the client information. My problem is that the received request contains the nginx information, not the client information. The configuration of the nginx Docker image is shown below.

server {
    access_log /var/log/nginx/access-default.log;
    error_log /var/log/nginx/error-default.log;
    listen 80;
    location / {
        proxy_pass http://172.18.0.2:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Origin $http_origin;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_cache_bypass $http_upgrade;
    }
}
Forward client's request header
First of all, the way you think you're setting cache to false in your $.getJSON() call is incorrect. You're passing a key/value pair to the server, so the request URL will look like site.com/resource?cache=false. You need to make your request using the $.ajax() method so you can actually set the cache option to false. However, all this does is what you call a "quick fix hack": it adds _={current_timestamp} to the query string so that the request will not be cached.

$.ajax({
    url: 'myurl',
    type: 'GET',
    dataType: 'json',
    data: jsonDataObject,
    cache: false, // Appends _={timestamp} to the request query string
    success: function(data) {
        // data is a json object.
    }
});

In my opinion that's not a quick fix or a hack; it's the correct way to ensure you get a fresh response from the server. If you'd rather not do that every time, then you can just use your own wrapper function:

$.getJSONUncached = function (url, data, successCallback) {
    return $.ajax({
        url: url,
        type: 'GET',
        dataType: 'json',
        data: data,
        cache: false, // Appends _={timestamp} to the request query string
        success: successCallback
    });
};

Then you can just replace your calls to $.getJSON() with $.getJSONUncached().
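What cache: false actually does can be sketched in a few lines (Python here for neutrality; uncached_url is a hypothetical helper, not part of jQuery):

```python
import time
import urllib.parse

def uncached_url(url, params=None):
    # Mimics jQuery's cache: false by appending _=<millisecond timestamp>,
    # making every request URL unique so browsers and intermediaries
    # cannot serve a stale cached response for it.
    query = dict(params or {})
    query["_"] = int(time.time() * 1000)
    return url + "?" + urllib.parse.urlencode(query)

u = uncached_url("https://example.com/resource", {"month": 7})
# e.g. https://example.com/resource?month=7&_=1718000000000
```

Since the timestamp changes on every call, no two requests share a URL, which is exactly why the "unique string" approach defeats IE's aggressive GET caching.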
I have an AJAX call which returns values from an ever-changing database, based on simple standard parameters, like month of year. In IE, this function returns cached data, which it never should. I've monitored the server side, and it isn't contacted by the client.

Now, my title question has been asked in different ways many times here already. The top two solutions are:

- set cache: false
- pass a random number/string/timestamp to make the call unique

The thing is, though, that cache: false doesn't work, at least not in my application. On the other hand, passing a unique string to prevent caching seems like a quick-fix hack. I don't like it. So what is the correct way of preventing caching on AJAX calls?

Ajax call that doesn't work in regards to preventing caching:

$.getJSON("myURL", {
    json : jsonDataObject,
    cache: false
}).success(function(data) {
    //do something with data
});

I have also tried calling $.ajaxSetup({ cache: false }); on its own, before the calls happen, but to no effect...
What is the correct way to prevent caching on ajax calls?
You need to add apk add mdocml-apropos, and then for each package you need the man pages for, install its -doc subpackage (apk add curl-doc), and you are set to use man after, like you already did:

apk add man man-pages mdocml-apropos

The source for this (plus the mdocml-apropos which is missing there) is https://wiki.alpinelinux.org/wiki/Alpine_Linux:FAQ#Why_don.27t_I_have_man_pages_or_where_is_the_.27man.27_command.3F but interestingly, I cannot get it working myself. I also tried export TERM=xterm to see if it's an interactivity issue, but it is not. I also tried makewhatis /usr/share/man manually, but no success. Interestingly though:

ls -la /usr/share/man/man1/curl-config.1.gz
-rw-r--r-- 1 root root 1687 Aug 4 15:07 /usr/share/man/man1/curl-config.1.gz

So there is a man page.

(Tip from the comments: if you want to install the -doc subpackage for every installed package, run apk add docs.)
I can't get man to work in an Alpine Linux Docker container.

Pull Alpine Linux and start a container:

docker pull alpine:latest
docker run -t -i alpine /bin/ash

Update repository indexes from all remote repositories:

apk update

Install man and man-pages:

apk add man man-pages

Install a package and its documentation:

apk add curl
apk add curl-doc

Try to view the man pages:

/ # man curl
/usr/share/man/mandoc.db: No such file or directory
man: outdated mandoc.db lacks curl(1) entry, consider running # makewhatis /usr/share/man
more: -s: No such file or directory
/ #

What?

Update: Following @EugenMayer's advice to add mdocml-apropos, I can get curl --manual to work, but man curl still doesn't work at all. This behaviour is inconsistent and unexpected.
How to get 'man' working in an Alpine Linux Docker container?
I think a viable solution to this problem could be periodically checking out the Git repository of the contracting company and checking it back in to your account if there is a change. A simple script could help do that, let's say, once per day. Every commit will be mirrored, so you're not losing any data at all. Maybe this script can give you an idea of how to accomplish this job: https://gist.github.com/oweidner/6f173a9347f3b298dd0d
Our organisation has a contracting company working for them. The contracting company works on an app where the source code is on github and each release goes on bitbucket for deployment. Our organisation also has a github account, and we would like to have the latest changes synced to our organisations github account. For redundancy purposes, we would like to make sure if the contracting company disappeared with their github and bitbucket accounts, our organisation should not lose neither the code nor the artifacts generated for deployments. I am not sure what I should be reading on as this is the first time I am seeing a problem like this, I have some simple knowledge of github and can find out more about bitbucket, but I am not sure what the concept is or which keywords I should be looking for.
How to keep a copy of latest code and release with github and bitbucket?
Add node_modules/ to your .gitignore file to ignore the whole node_modules directory.
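A throwaway demonstration (assuming git is on PATH; all paths are temporary) of both halves of the fix: ignoring node_modules going forward, and untracking the copy that was already committed with git rm --cached:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# Simulate the accidentally committed node_modules
mkdir node_modules
echo "big dependency" > node_modules/dep.js
git add . && git commit -qm "oops: node_modules tracked"

# Ignore it going forward...
echo "node_modules/" > .gitignore
# ...and untrack the already-committed copy (files stay on disk)
git rm -r -q --cached node_modules
git add .gitignore
git commit -qm "stop tracking node_modules"

# node_modules is still on disk but no longer tracked
git ls-files
```

Note that blobs already in earlier commits stay in history, so if a past commit itself exceeds GitHub's limit, that history has to be rewritten as well, which is beyond this sketch.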
I have a React app which was originally bootstrapped using create-react-app in my App directory as follows:

App
|-- build
|-- node_modules
|-- etc...

I've adjusted the file structure to now look like the following and made a new commit:

App
|-- client
    |-- build
    |-- node_modules
    |-- etc...
|-- server

I moved all my create-react-app contents into a new /client directory inside that original project. Nothing is wrong with my app; it's working as intended. My issue is that when I committed this change, my file size drastically increased, exceeding GitHub's file size limit of 100MB. I'm starting to understand why this happened after researching, but I have yet to find out how to fix it. My Git knowledge isn't the strongest, but any direction would be helpful.
Adjusting file structure resulted in Github file size limit
It's normal behavior. Doctrine stores a reference to the retrieved entities in the EntityManager so it can return an entity by its id without performing another query. You can do something like:

$entityManager = $this->get('doctrine')->getEntityManager();
$repository = $entityManager->getRepository('KnowledgeShareBundle:Post');
$post = $repository->find(1);
$entityManager->detach($post);

// as the previously loaded post was detached, it loads a new one
$existingPost = $repository->find(1);

But be aware that as the $post entity was detached, you must use the ->merge() method if you want to persist it again.
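The behaviour described here is an identity map at work: repeated lookups by id return the same in-memory object, so unpersisted edits are visible, and detaching forces a fresh load. A small Python sketch of that idea (illustrative only; this is not Doctrine's actual internals, and the detach signature here is simplified):

```python
class IdentityMapRepository:
    """Return the same in-memory object for repeated find(id) calls."""

    def __init__(self, database):
        self._database = database   # stand-in for the real DB
        self._identity_map = {}

    def find(self, entity_id):
        # A cached instance short-circuits the query, so any unpersisted
        # in-memory edits are visible on the returned object.
        if entity_id not in self._identity_map:
            self._identity_map[entity_id] = dict(self._database[entity_id])
        return self._identity_map[entity_id]

    def detach(self, entity_id):
        # Forget the managed instance; the next find() reloads from the DB.
        self._identity_map.pop(entity_id, None)

db = {1: {"title": "My Title"}}
repo = IdentityMapRepository(db)

post = repo.find(1)
post["title"] = "Unpersisted new title"
print(repo.find(1)["title"])   # Unpersisted new title: same object came back

repo.detach(1)
print(repo.find(1)["title"])   # My Title: a fresh copy was loaded
```

This is why the second find() in the question returns the edited title: the repository never went back to the database.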
I want to be able to retrieve the existing version of an entity so I can compare it with the latest version. E.g. Editing a file, I want to know if the value has changed since being in the DB. $entityManager = $this->get('doctrine')->getEntityManager(); $postManager = $this->get('synth_knowledge_share.manager'); $repository = $entityManager->getRepository('KnowledgeShareBundle:Post'); $post = $repository->findOneById(1); var_dump($post->getTitle()); // This would output "My Title" $post->setTitle("Unpersisted new title"); $existingPost = $repository->findOneById(1); // Retrieve the old entity var_dump($existingPost->getTitle()); // This would output "Unpersisted new title" instead of the expected "My Title" Does anyone know how I can get around this caching?
How to stop Doctrine 2 from caching a result in Symfony 2?
"Is it acceptable to use a self-signing cert? If so, a different one per client install?" Ask your clients. Will they put up with a browser warning, or not?

"Is it best to issue a new cert (maybe a free one) for each deployment?" It is best for the client to acquire his own SSL certificate. You can't do that for him. Nobody can.

"Is it acceptable to use the same cert, signed by a proper CA, on all of the deployment VMs?" No, it entirely defeats the purpose. The certificate and the private key it wraps are supposed to uniquely identify the holder.

"A completely different approach?" Handball the whole megillah to the clients. Self-identification is their problem, not yours.
Context: I have an application that is deployed to each client as a Virtual Machine. The latter is installed by the clients wherever they want (I don't necessarily know the final domain). The application comprises a JBoss Web Server that provides access to a configuration page, protected by SSL. Right now the server is using a self-signed certificate. However, I want the browsers to stop showing the warning messages associated with self-signed certs. Moreover, I provide a free version of the application that has basic functionality.

Question: For cases where the client is using a free version (and me wanting to reduce costs), what is the best approach when using an SSL cert and not knowing the final domain (most of the time)?

- Is it acceptable to use a self-signing cert? If so, a different one per client install?
- Is it best to issue a new cert (maybe a free one) for each deployment?
- Is it acceptable to use the same cert, signed by a proper CA, on all of the deployment VMs?
- A completely different approach?

Thanks guys!
New SSL Certificate for each client deployment?
AWS Simple Monthly Calculator is an easy-to-use online tool that enables you to estimate the monthly cost of AWS services for your use case based on your expected usage. It helps you estimate your monthly AWS bill more efficiently.

AWS Cost Explorer is used to explore and analyze your historical spend and usage.

The AWS Total Cost of Ownership (TCO) Calculator, unlike the previous two tools, has a specific purpose: it is used to compare the cost of your on-premises environment to the cost of an AWS environment. Not only does it allow you to calculate the various cost savings you'll experience when moving to the cloud, it also provides detailed reports that can be used for presentation purposes.

AWS Budgets allows you to monitor your cloud costs and usage, as well as utilization and coverage for your purchased Savings Plans and Reserved Instances. You can receive notifications when costs and usage exceed (or are forecasted to exceed) your budgeted thresholds, and/or when your utilization or coverage falls under your target thresholds.
These topics are pretty confusing and different sites have different answers for them. Can somebody please explain how the options below are used? Which service should be used to estimate the costs of running a new project on AWS?

- AWS TCO Calculator
- AWS Simple Monthly Calculator
- AWS Cost Explorer API
- AWS Budgets
Difference between various cost estimation tools in AWS
It seems that the default value of 0 is missing from the documentation. The health or readiness check algorithm works like this:

1. Wait for initialDelaySeconds.
2. Perform the readiness check, waiting up to timeoutSeconds for a timeout.
3. If the number of consecutive successes reaches successThreshold, return success.
4. If the number of consecutive failures reaches failureThreshold, return failure.
5. Otherwise wait periodSeconds and start a new readiness check.
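The probe loop above can be expressed directly. Here is a small Python simulation of that state machine, with defaults mirroring the documented Kubernetes ones (initialDelaySeconds defaulting to 0); time is counted in abstract seconds rather than real waits:

```python
def probe_outcome(checks, initial_delay_seconds=0, period_seconds=10,
                  success_threshold=1, failure_threshold=3):
    """Run the probe state machine over a list of per-check results.

    checks is a sequence of booleans, one per performed check.
    Returns ("success", t) or ("failure", t), with t the elapsed time
    when the threshold was hit, or ("pending", t) if neither was hit.
    """
    t = initial_delay_seconds       # wait initialDelaySeconds first
    successes = failures = 0
    for ok in checks:
        if ok:
            successes += 1
            failures = 0
        else:
            failures += 1
            successes = 0
        if successes >= success_threshold:
            return "success", t
        if failures >= failure_threshold:
            return "failure", t
        t += period_seconds         # wait periodSeconds, then re-check
    return "pending", t

# With the defaults, one passing check is enough...
print(probe_outcome([True]))                   # ('success', 0)
# ...and three consecutive failures mark the container unready.
print(probe_outcome([False, False, False]))    # ('failure', 20)
```

Note how a default initialDelaySeconds of 0 means the first check runs as soon as the container is up, which matches the observed behaviour.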
Kubernetes' liveness and readiness probes for pods (deployments) can be configured with an initial delay, meaning the probe will start that many seconds after the container is up. If it is not specified, what is the default value? I can't seem to find it. The default value for periodSeconds is documented as 10 seconds. Thanks
What is the default value of initialDelaySeconds?
A new version of Prometheus (2.5) allows you to write tests for alerts; here is a link. You can check points 1 and 2. You have to define data and expected output (for example in test.yml):

rule_files:
  - alerts.yml
evaluation_interval: 1m
tests:
  # Test 1.
  - interval: 1m
    # Series data.
    input_series:
      - series: 'up{job="prometheus", instance="localhost:9090"}'
        values: '0 0 0 0 0 0 0 0 0 0 0 0 0 0 0'
      - series: 'up{job="node_exporter", instance="localhost:9100"}'
        values: '1+0x6 0 0 0 0 0 0 0 0'
        # 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
    # Unit test for alerting rules.
    alert_rule_test:
      # Unit test 1.
      - eval_time: 10m
        alertname: InstanceDown
        exp_alerts:
          # Alert 1.
          - exp_labels:
              severity: page
              instance: localhost:9090
              job: prometheus
            exp_annotations:
              summary: "Instance localhost:9090 down"
              description: "localhost:9090 of job prometheus has been down for more than 5 minutes."

You can run the tests using docker:

docker run \
  -v $PROJECT/testing:/tmp \
  --entrypoint "/bin/promtool" prom/prometheus:v2.5.0 \
  test rules /tmp/test.yml

promtool will validate whether your alert InstanceDown from the file alerts.yml was active. An advantage of this approach is that you don't have to start Prometheus.
We are about to set up Prometheus for monitoring and alerting for our cloud services, including a continuous integration & deployment pipeline for the Prometheus service and configuration like alerting rules / thresholds. For that I am thinking about several categories I want to write automated tests for:

1. Basic syntax checks for configuration during deployment (we already do this with promtool and amtool)
2. Tests for alert rules (what leads to alerts) during deployment
3. Tests for alert routing (who gets alerted about what) during deployment
4. Recurring check if the alerting system is working properly in production

The most important part to me right now is testing the alert rules, but I have found no tooling to do that. I could imagine setting up a Prometheus instance during deployment, feeding it with some metric samples (worrying how would I do that with the pull architecture of Prometheus?) and then running queries against it.

The only thing I found so far is a blog post about monitoring the Prometheus Alertmanager chain as a whole, related to the last category. Has anyone done something like that or is there anything I missed?
How to automatically test Prometheus alerts?
I solved the problem. It occurred because the logback.xml modifications were not applied: the docker build was done without changing the image version.
I run Spring Boot in the local environment and the log file is created fine. But when I run it on k8s, the app works normally, yet I can't find the log. I tried to go into the pod and search for it, but I couldn't find it. Please help me. Let me explain the environment I tested.

The NFS directory is "/ktnfs". The logback.xml setting is as below:

<property name="LOG_PATH" value="/ktnfs/kt/logs"/>
<property name="LOG_FILE_NAME" value="api-out"/>
<property name="ERR_LOG_FILE_NAME" value="api-err"/>

Below is the yaml file I created:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kt-nfs-pv
spec:
  capacity:
    storage: 6Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 172.30.1.80
    path: "/ktnfs"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kt-nfs-pvc
  namespace: kt2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 2Gi
  volumeName: kt-nfs-pv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kt-api-server
  namespace: kt2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kt-api-server
  strategy: {}
  template:
    metadata:
      labels:
        app: kt-api-server
    spec:
      containers:
        - image: 172.30.1.85:31113/nlu_public/kt-api-server:1.0
          name: kt-api-server
          volumeMounts:
            - mountPath: /ktnfs/kt/logs
              name: kt-nfs-volume
      volumes:
        - name: kt-nfs-volume
          persistentVolumeClaim:
            claimName: kt-nfs-pvc
status: {}
How can I save Spring Boot logs in a k8s persistent volume?
As I mentioned in a comment, AFAIK it's working as designed; if you want to see TLS you could try what is mentioned in this tutorial:

"Seeing that unencrypted communication to the QOTM service is only occurring over the loopback adapter is only one part of the TLS verification process. You ideally want to see the encrypted traffic flowing around your cluster. You can do this by removing the "http" filter, and instead adding a display filter to only show TCP traffic with a destination IP address of your QOTM Pod and a target port of 20000, which you can see that the Envoy sidecar is listening on via the earlier issued kubectl describe command."

"Hi @jt97 I can see lock badge in kiali dashboard, I read somewhere that this is a representation of encryption is happening over there."

Exactly, there is a github issue about that. Hope you find this useful.
I am running services on a Kubernetes cluster and, for security purposes, I came to know about the service mesh named Istio. Currently, I have enabled mTLS in the istio-system namespace and I can see the sidecar running inside the pods of the bookinfo service. But while capturing traffic through Wireshark between pods, I can see my context route in Wireshark is still HTTP. I supposed that it should be TLS and encrypted.

Note: I am using istio-1.6.3 and defined a Gateway and ingress (Kubernetes ingress) to the service.

Here is the screenshot: Wireshark image
Verify MTLS enabled in Istio through wireshark
As long as the repository holds the information you are looking for, it can be retrieved by using the JGit API. A RevWalk can be used to traverse the commits of a repository and a TreeWalk can be used to iterate the tree of a commit, i.e. the list of files the commit consists of. With a DiffFormatter, two commits can be compared and a list of DiffEntrys can be obtained that describe the added, removed and modified files.

Some time ago I wrote an article that might also be of interest. It touches the JGit APIs to read and write Git objects (commits, trees and blobs).

You may also want to look into gitective: a library that provides an API on top of JGit to make investigating Git repositories simpler and easier. While the project is unmaintained in the meanwhile, it may either still work or can at least be used to learn how to use certain JGit APIs.
The statistics I want to pull about a repository include those offered by the GitHub statistics API: https://developer.github.com/v3/repos/statistics/

My question is, is this possible using the library JGit? From my research using Stack Overflow and Google, there is little, if any, information that is clear about this. At this point I am doing research prior to starting a project, so example lines of code or even a guaranteed yes/no answer from experienced users is appreciated.
How to get repository statistics with JGit
Yes. Use Deny from all in .htaccess, and for the download script use an HTTP Download Class. It also has speed and resume support.
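The pattern has two halves: the web server denies all direct requests to the folder, while a server-side script reads the file only after an authorisation check. The referenced download class is PHP; as an illustration only, the script half can be sketched in Python (check_permission and the paths here are hypothetical stand-ins for the portal's own DB-backed logic):

```python
import os
import tempfile

PROTECTED_DIR = tempfile.mkdtemp()   # stands in for public_html/protected

# The web server blocks direct requests to PROTECTED_DIR ("Deny from all");
# only this script, running server-side, may read from it.
with open(os.path.join(PROTECTED_DIR, "report.pdf"), "wb") as f:
    f.write(b"%PDF-fake-content")

def check_permission(user, filename):
    # Hypothetical stand-in for the portal's database permission check.
    return (user, filename) in {("alice", "report.pdf")}

def download(user, filename):
    # Serve the file only after the permission check passes.
    # Real code must also sanitize filename against path traversal
    # (e.g. reject anything containing "/" or "..").
    if not check_permission(user, filename):
        return 403, b""
    with open(os.path.join(PROTECTED_DIR, filename), "rb") as f:
        return 200, f.read()

print(download("alice", "report.pdf")[0])    # 200
print(download("mallory", "report.pdf")[0])  # 403
```

In the PHP version, the download script would additionally emit Content-Type and Content-Disposition headers before streaming the bytes.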
I have a folder in my apache public folder and its contents must be protected at all times. Let's say the folder is called 'protected'. When someone then tries to open http://domain.com/protected/random.file, that someone shouldn't be able to open/download it. Access should be completely blocked, with no possible workarounds.

I'm building a portal for a company which wants to publish files only to authorised users. The portal I made has a custom-made PHP authentication method. When a user is logged in and has the right permissions (still PHP), that user should be allowed to download the specific file it has been granted access to (defined in the database).

I was thinking of a script at download.php: when a file is requested, PHP gets the file contents and forces a download. Is it possible to block all access to '/protected' with .htaccess, but still allow PHP to get the file contents? Thanks in advance.

PS. The protected folder has to be in the public_html folder.
How to protect all directory contents, but gain access to php script
The solution is to install hub as a regular user:

$ brew install hub

and to add /usr/local/bin to the $PATH of the root user (if it's not already the case). For this, you can use the following command:

echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bash_profile

(Note the single quotes, so $PATH is expanded at login rather than when you run echo.) This way, /usr/local/bin appears before /usr/bin, and if a command is present in both locations, the homebrew version has priority.
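The PATH edit works because command lookup scans $PATH left to right and stops at the first match. A small Python demonstration of that ordering, using throwaway directories in place of /usr/local/bin and /usr/bin (POSIX assumed):

```python
import os
import shutil
import tempfile

# Create two directories, each containing an executable named "hub",
# standing in for /usr/local/bin (homebrew) and /usr/bin (system).
local_bin = tempfile.mkdtemp(prefix="local_bin_")
usr_bin = tempfile.mkdtemp(prefix="usr_bin_")

for directory, marker in [(local_bin, "homebrew"), (usr_bin, "system")]:
    path = os.path.join(directory, "hub")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho %s\n" % marker)
    os.chmod(path, 0o755)          # must be executable to be found

# With local_bin first on PATH, lookup resolves to the homebrew copy.
search_path = os.pathsep.join([local_bin, usr_bin])
first = shutil.which("hub", path=search_path)

# Reversing the order resolves to the system copy instead.
reversed_path = os.pathsep.join([usr_bin, local_bin])
second = shutil.which("hub", path=reversed_path)

print(first.startswith(local_bin))   # True: the first PATH entry wins
print(second.startswith(usr_bin))    # True
```

This is exactly why prepending (not appending) /usr/local/bin matters in the .bash_profile line above.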
What do I need to run git hub as root? git hub works perfectly as a non-root user. To install it as root, I ran brew install hub, which led to:

# brew install hub
W: be careful as root.
========================================================================
You may want to update following environments after installed linuxbrew.
PATH, MANPATH, INFOPATH
(example: /usr/share/doc/linuxbrew-wrapper/examples/profile)
========================================================================
Don't run this as root!
/usr/lib/linuxbrew-wrapper/install:110: warning: Insecure world writable dir /root in PATH, mode 040777

It seems to be linked with brew being run as root. The alternative would be to use sudo -u, but the command is not recognized. Why is it so?

$ sudo -u user 'git hub user myuser'
sudo: git hub user myuser: command not found
Cannot use git hub extension as root
This is not an "inaccessible file", but a nested git repo. It is recorded in the main git repo (published on GitHub) as a gitlink, a special entry in the index, and is displayed as a gray icon. If you really want its content, you should transform it into a submodule. As such, that gitlink points to nothing, as there is no way to know the remote url of that repo (only its SHA1, which is what the gitlink records).
I create a new repo on GitHub and follow the instructions. Then I open Command Prompt and follow GitHub's instructions. The picture is what I get: what appears to be a file containing my project that is inaccessible. What am I doing wrong?
Visual Studio 2015 Community: Commit to github via command line produces inaccessible file...?
AQTime can help with that too.
I set up a project and ran it, and looked at it in Process Explorer, and it turns out it's using about 5x more RAM than I would have guessed, just to start up. Now if my program's going too slowly, I hook it up to a profiler and have it tell me what's using all my cycles. Is there any similar tool I can hook it up to and have it tell me what's using all my RAM?
Memory profiling tool for Delphi?
Found it! Looks like since it installs a specific commit hash, it stores it in a different folder labeled with the commit hash as the "version":

~/.rvm/gems/ruby-2.2.2@my-gemset/bundler/gems/redcarpet-135b8e16b507
I added a gem to my Rails Gemfile to be installed directly from the GitHub source:

gem 'redcarpet', github: 'tanoku/redcarpet'
...
bundle install

I use RVM and all my gems that get installed are in ~/.rvm/gems/ruby-2.2.2@my-gemset/gems. However, I'm unable to see this new GitHub gem there. Do gems from GitHub get installed elsewhere?
RVM gem install directory when installing from git/github
You can have a look at this solution to see if it works for you: https://stackoverflow.com/a/66602754/15831887

You can also try installing the plugin via Eclipse's "Install New Software..." dialog instead of the Marketplace.
Each time I want to install a new plugin from the Eclipse Marketplace or an update site, I get the error message 'PKIX path building failed'. I had some degree of success solving this issue by downloading the certificate manually from Chrome and installing it into the JDK's cacerts file with the keytool command. This ended up with 30+ certificates downloaded and installed manually. We can see that this solution is not durable:

- For some plugins, the URL references another location, which references another one, which can end up in downloading and installing 2-3 certificates for 1 single plugin.
- For the Eclipse Marketplace, the certificate seems to be valid only one day. If I download and install it one day, I have to redo the same thing the next day or I get 'PKIX path building failed' again.

I had a look at this answer: https://stackoverflow.com/a/53214663/8315843

Peter suggests putting the full path of cacerts into the eclipse.ini file as well as the keystore password:

-Djavax.net.ssl.trustStore=c:/full/path/to/cacerts
-Djavax.net.ssl.trustStorePassword=changeit

I tried that solution. In fact I expected that because I gave a password, I would have automatic certificate validation, but this is not the case. Instead I got the same error message 'PKIX path building failed' again and I had to rerun the keytool command manually after re-downloading the certificate manually. Any suggestions?
Is there a durable solution to SSL certificate validation for Eclipse plugins?
I managed to fix the issue by changing target: 'es5' instead of target: 'es2015' in tsconfig.json.
The issue is that my CSS & JS Angular production build files are not getting picked up properly. From the browser logs, what I understood is that the JS and CSS are not loaded because their MIME type, "text/html", is not "text/css". Providing a screenshot of the errors in the browser console; screenshot link below.

Below is my nginx configuration file:

server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    location / {
        try_files $uri $uri/ /index.html;
    }
}

I have deployed the below ingress yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /test
            backend:
              serviceName: testapp
              servicePort: 80
          - path: /assets
            backend:
              serviceName: testapp
              servicePort: 80

What I'm trying to achieve is that I should be able to load my application on the ingress path (/test), but it fails to display. When I try without an ingress path, it works properly. Could you please help me out with what I'm doing wrong?
Angular app giving MIME errors and failing to load in the browser after being deployed behind a Kubernetes ingress
Got an answer from an NCover representative. There is no integration between SonarQube and NCover. Full stop.
We are starting to implement code coverage in our CI process and my task is to examine NCover from this perspective. Specifically, we have SonarQube and a CI build in Azure DevOps that runs the unit tests and reports the coverage to SonarQube as described in https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-azure-devops/

Currently it works with the DotCover command line tool. I would like to check NCover. While googling for it I came across this cheerful page: https://www.ncover.com/support/docs/extras/sonar-integration and sure enough I clicked the link to the respective Jira issue: https://jira.sonarsource.com/browse/SONARCS-653

Oops. Closed with Won't Fix. According to https://docs.sonarqube.org/latest/analysis/coverage/ only the VS Coverage, DotCover and OpenCover formats are supported. So, if NCover is supported, it would be through the Generic Test Data format, or if NCover knows to produce coverage results in one of the other 3 formats. So far I do not see how NCover can play with SonarQube, but maybe I am missing something here. Anyone?
How to use NCover with SonarQube
The sad news is that if a file has already been committed to GitHub, git will continue to version that file. This means if I commit the entire bin/ and then add it to .gitignore, the files will still persist in GitHub. And, if these files in bin/ change, they will also be pushed in the commit because they are versioned.

Luckily, you can remove files and directories from GitHub completely. You need, though, to get to a command line running git. If you have the GitHub application installed, that probably means you have git.

Open Command Prompt in Windows or Terminal in Mac OS. Navigate to the directory (i.e. cd ~/Workspace/Project) and run the following:

git rm bin/* -f
git commit --amend
git push -f

This should work. Check out this article on GitHub that also outlines the process. Hope this helps you!

Disclaimer: always make sure you do your research before working with git. If you have various branches / other complicated stuff going on, this process might be different.
I've looked all around for a few days now trying to figure this out, because even though our .gitignore lists the /bin/ folder, it still keeps committing the whole folder, and it's getting annoying. Now we have a whole bunch of junk in a /bin/ folder in our GitHub repository and I have no idea how to remove it. I've tried looking at other people's examples, but they keep talking about a shell command that I don't have in Eclipse (or at least don't know how to access).
How to remove a specific directory from GitHub using Eclipse
One of many methods to realize this workflow is to use a workflow called git flow; see this link for more details.

To summarize, in this workflow you've got a production branch (the master branch) and a development branch (the develop branch). Your developers create features based on this develop branch, implementing and testing them and merging them back to the develop branch.

When the feature set is complete for a specific release, a new release branch is created where, for example, QA can test the new release. After QA is happy, the branch is merged back to the develop branch as well as to the master branch, where the client can now get the new working release. Have a look at the link above for a more detailed explanation.
I am new to Git stuff but I know how to add/commit/push/pull changes.

Situation: We have a git-managed project (currently on Bitbucket). Now we are setting up QA and Live places. So, for example, if we made feature X and pushed it to QA and the client approves it, we should then be able to push our changes to the Live/Production site. So here is what the flow would look like:

1. Changes made locally
2. Changes pushed to QA/Dev folder
3. Client okayed, changes pushed to Live/Production folder

Can somebody help with how to achieve this workflow? I am not really sure what's needed for this flow: local ---> dev ---> production
Git upload to live site after qa approved
LiipDoctrineCacheBundle provides a service wrapper around Doctrine's common Cache (documentation) that allows you to use several cache drivers like filesystem, apc, memcache, ...

I would recommend:

- loading your generic container parameters/settings (like maintenance mode, ...) from the database in a bundle extension or a compiler pass.
- loading route-specific settings (like page title, ...) in a kernel event listener. You can find a list of kernel events here.
- updating/invalidating their cache using a Doctrine postUpdate/postPersist/postRemove listener.
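The underlying pattern is language-agnostic: load the settings once, serve later reads from the cache, and drop the cached copy whenever a setting is written. A minimal Python sketch of that load-once / invalidate-on-update idea (class and method names here are illustrative, not Symfony or Doctrine APIs):

```python
class SettingsCache:
    """Load-once settings cache with explicit invalidation."""

    def __init__(self, load_from_db):
        # load_from_db is any callable returning a dict of settings rows.
        self._load_from_db = load_from_db
        self._cache = None

    def get(self, key):
        # First access triggers the single DB query; later calls hit the cache.
        if self._cache is None:
            self._cache = self._load_from_db()
        return self._cache[key]

    def invalidate(self):
        # Call this from an update hook (the analogue of Doctrine's
        # postUpdate/postPersist/postRemove listeners).
        self._cache = None


# Fake "database" standing in for the settings table.
db = {"maintenance_mode": "off", "page_title": "Home"}
queries = []

def load_from_db():
    queries.append(1)          # count how often the DB is actually hit
    return dict(db)

settings = SettingsCache(load_from_db)
settings.get("page_title")
settings.get("maintenance_mode")
assert len(queries) == 1       # one query served both reads

db["maintenance_mode"] = "on"  # a settings update happens...
settings.invalidate()          # ...and the listener purges the cache
assert settings.get("maintenance_mode") == "on"
assert len(queries) == 2
```

In the Symfony setup, the cache backend would be one of the Doctrine Cache drivers mentioned above, and invalidate() would live in the lifecycle listener for the settings entity.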
I have a db table (Doctrine entity) that I use to store some editable settings for my app, like page title, maintenance mode (on/off), and some other things. I can load the settings normally using the entity manager and repositories, but I think that's not the best solution... My questions are:

- Can I load the settings only once at some kernel event and then access them the same way I access any other setting saved in yml config files?
- How can I cache the database settings, so I would only do one DB query, and then in future page requests it would use the cached values instead of doing a DB query for each page request? (Of course, every time I change something in the settings, I would need to purge that cache so the new settings could take effect.)
Symfony2 load settings from database
The Docker Hub build service should work (https://docs.docker.com/docker-hub/builds/). You can also consider using gitlab-ci or Travis CI (GitLab will be useful for private projects; it also provides a private docker registry). You should have two Dockerfiles, one with all dependencies and a second, very minimalistic one for reports (builds will be much faster). Something like:

FROM base_image:0.1
COPY . /reports
WORKDIR /reports
RUN replace-with-required-jekyll-magic

The Dockerfile above should be in your reports repository. In the 2nd repository you can create a base image with all the tools and nginx or something for serving static files. Make sure that the nginx www-root is set to /reports. If you need to update the tools, just update the base_image tag in the Dockerfile for reports.
Currently have a pipeline that I use to build reports in R and publish in Jekyll. I keep my files under version control in github and that's been working great so far. Recently I began thinking about how I might take R, Ruby and Jekyll and build a docker image that any of my coworkers could download and run the same report without having all of the packages and gems set up on their computer. I looked at Docker Hub and found that the automated builds for git commits were a very interesting feature. I want to build an image that I could use to run this configuration and keep it under version control as well and keep it up to date in Docker Hub. How does something like this work? If I just kept my current setup I could add a dockerfile to my repo and Docker Hub would build my image for me, I just think it would be interesting to run my work on the same image. Any thoughts on how a pipeline like this may work?
Developing in a Docker image that's under version control
The solution to question 2 is, finally, that each service can be deployed together with its own ingress, so there is no need for point 1. That being said, you can have multiple ingress rules deployed.
We are currently setting up a kubernetes cluster for deploying our production workloads (mainly http rest services). In this cluster we have set up the nginx ingress controller to route traffic to our services from the outside world. Since the ingress controller will be used mainly with path routing, I do have the following questions:

Question 1: Dynamic backend routing

Is it possible to route the traffic to a backend without specifically specifying the backend name in the ingress specification? For example I have the following ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /apple
            backend:
              serviceName: apple-service
              servicePort: 8080

Is there any possibility that the /apple request is routed to the apple-service without specifically specifying it in the serviceName? So /apple is automatically routed to the apple-service service, /orange is automatically routed to the orange service, without explicitly specifying the backend name?

Question Number 2

If there is no solution to number 1 so that we can deploy based on some conventions, the question now goes further on how to manage the ingress in an automated way. Since the services are going to be deployed by an automated CI/CD pipeline, and new paths may be added as services are added to the cluster, how can the CI/CD orchestrator (e.g. Jenkins) update the ingress routes when an application is deployed? So that we are sure that no manual intervention is needed in the cluster and each route is deployed together with the respective service?

I hope that the information given is enough to understand the issue. Thank you very much for your support.
Kubernetes ingress update with deployment
I think you just have to add the remote to git with:

git remote add origin <github-url>

And to see the remotes:

git remote -v
I was trying to deploy my Angular project on github-pages; suddenly the error says:

Failed to get remote.origin.url (task must either be run in a git repository with a configured origin remote or must be configured with the "repo" option).

The command I am trying to execute in the project directory is:

angular-cli-ghpages -d dist/news-app/ --no-silent

I have pushed my latest source code to the GitHub repository as well.
I was trying to deploy my angular project on github-pages, suddenly the error says, Failed to get remote.origin.url
This is not possible. The official GKE documentation on VPC-native clusters says:

"Cluster IPs for internal Services are available only from within the cluster. If you want to access a Kubernetes Service from within the VPC, but from outside of the cluster (for example, from a Compute Engine instance), use an internal load balancer."

See here.
By default, Kubernetes services with type ClusterIP are accessible from the same cluster. Is there a way to configure a service in GKE to be accessible from the same VPC? E.g., a GCE VM in the same VPC could access the service in GKE, but I don't want to expose it to the internet.
GKE: configure service to be accessible from the same VPC
On a Windows platform the default stack limit is ~1 megabyte; that means you should definitely put bigger objects on the heap instead of changing the default values (or, worse, doing it anyway and hoping for the best). Check your environment's stack size limit before experimenting with it. Also: if your algorithm is a recursive one, bear in mind that your stack limit will also be put under pressure. Thus also pay attention to your algorithm.

One important point to bear in mind is that stack objects will be destroyed at the end of the function call while heap ones (unless you're using smart pointers, which is recommended) will not. You should plan your choice accordingly. As a rule of thumb, big long-lived objects should go on the heap, with some exceptions.

For most applications the performance differences are kinda negligible too. Don't even think of structuring your whole program around the small performance gain of stack allocations; premature optimization is the root of all evil. Furthermore, huge slowdowns usually come from excessive copying of stuff around (or allocating too many small objects), not really from stack/heap allocation choices.
This question already has answers here: How do I choose heap allocation vs. stack allocation in C++? (2 answers). Closed 9 years ago. For example: void a() { int bla; bla = 1; } vs. void b() { std::unique_ptr<int> bla( new int ); *bla = 1; } When is one or the other considered good practice? When isn't it? Or does it simply lie in the eye of the beholder? Does it only make sense when working on a large-scale project or when working with large values? Of course the heap is slightly slower.
C++ at which point does it make sense to use the heap instead of the stack? [duplicate]
If you are using Create React App, check your build/asset-manifest.json to see whether it has the correct paths to the files. If not, change them manually or - what is better - change or set the "homepage" value in package.json to the proper one. And a last tip - read carefully the whole output of npm run build - it helped me. The whole bug hunt took me 3 hours :(
I need your help please! I have made a deploy and build of the application, but when I redirect to any page I get this error: Uncaught SyntaxError: Unexpected token < I think it is because the browser expects an .html file but chunk.js is opened instead, like in this screenshot of the error. The route is fine - it finds the route - but it doesn't open the file. After some searching I found things like .htaccess and I have tried that, but it doesn't work. I have tried these two models of .htaccess in public/.htaccess: <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.html$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-l RewriteRule . /index.html [L] </IfModule> Options -MultiViews RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.html [QSA,L] but neither works. Server: const root = require('path').join(__dirname, 'client', 'build') app.use(express.static(root)); app.get("*", (req, res) => { res.sendFile('index.html', { root }); }) I don't have any webserver, sorry, but I'm a student and I'm still learning... please help, guys!
Uncaught SyntaxError: Unexpected token < in chunk.js
Try this: <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^dev(.*) app_dev.php$1 [L] RewriteRule . app.php [L] </IfModule>
This question already has answers here: symfony2 rewrite rules .htaccess app.php (5 answers). Closed 9 years ago. I have a directory called web in the root folder. In this directory there are 2 files called app.php and app_dev.php (they come from Symfony 2.5.5). I would like to redirect users from the root directory to /web/app.php. If the user enters www.mywebsite.com/hello he has to be redirected to www.mywebsite.com/web/app.php/hello, and if it is possible, www.mywebsite.com/dev/test would redirect to www.mywebsite.com/web/app_dev.php/test. I think it's possible with .htaccess but I can't find how. Please help me. PS: sorry for my bad English, it's not my native language.
How to redirect to web/app.php for symfony [duplicate]
It kind of sounds like you have an undisposed resource somewhere that ends up getting garbage collected eventually, but not quickly enough for your needs. Do you reuse any SqlConnection objects? Or MailClient objects? Or unmanaged Image objects? As for the lower-than-expected memory limit, there are two types of memory use by an ASP.NET app. One is reserved memory and the other is actually used memory. I believe the task manager tracks actual memory use, but reserved memory probably also has a limit. To find out how much reserved memory your process is taking up, go to IIS7, click on the server (the top level, above the app pools and sites folders), then click the Processes option and then click your app's process. It should show you CPU use, number of requests, and memory usage (both reserved and actual).
We're running into a strange problem. Our ASP.NET application is running on a 64-bit Windows 2008/IIS7 machine with 16 GB of RAM. When the w3wp.exe process reaches 4 GB (we track it simply via Task Manager on the server) an Out of Memory exception is thrown even though there's plenty of memory still available. Is there a known issue where the ASP.NET process is limited to 4 GB of memory on a 64-bit system (and using a 64-bit app pool)? Is there any way to lift that limit?
w3wp.exe runs out of memory even though there's still memory available
*.php is a regular script file which, like any other scripting language (e.g. Perl), requires an interpreter to run. So if you want to run your script from the command line you can either call the interpreter and give it your script file as an argument, like: $ /usr/bin/php myscript.php And that's it - it should run. Or (if working on linux/bsd) add as the very first line of your PHP script file: #!/usr/bin/php -q which tells the shell where to look for the interpreter for this script file. Please ensure your PHP is in the /usr/bin folder, as this may vary depending on the distro. You can check this using which, like this: $ which php /usr/bin/php If the path is right, you still need to set the executable bit on the script file so you'd be able to launch it: chmod a+x myscript.php This will make it behave like any other app, so you'd be able to launch it this way: /full/path/to/myscript.php or from the current folder: ./myscript.php And that's it for that approach. It should run. So your crontab line would look like this (depending on the chosen approach): 1 * * * * /full/path/to/myscript.php or 1 * * * * /usr/bin/php -q /full/path/to/myscript.php And you should rather use "0", not "1", as the first minute of the hour is zero, i.e.: 0 * * * * /usr/bin/php -q /full/path/to/myscript.php EDIT: Please note cron's working directory is the user's home directory. So you need to take that into consideration, which usually means using absolute paths. Alternatively you'd prepend your call with cd <script working path> && /usr/bin/php -q /full/....
Ok, I have been looking into cron jobs for hours, checked every post here, looked on Google, but I just do not understand how it works. I have set up a cron job using my path: 1 * * * * /home/myuser/domains/mysite/public_html/live.php I have also tried /home/myuser/public_html/live.php Nothing seems to be working. Do I have to add something in the PHP file (live.php)? That is the code that has to be executed. The code itself works. I know you will all think that I am lazy but I really can't figure this out.
what to change in my php script for the cron job to work
You need to use tcp://127.0.0.1:514 instead of tcp://elk-custom:514. Reason being this address is being used by docker and not by the container. That is why elk-custom is not reachable. So this will only work when you map the port (which you have done) and the elk-service is started first (which you have done) and the IP is reachable from the docker host, for which you would use tcp://127.0.0.1:514
I want to send logs from one container running my_service to another running the ELK stack with the syslog driver (so I will need the logstash-input-syslog plugin installed). I am tweaking this elk image (and tagging it as elk-custom) via the following Dockerfile-elk (using port 514 because this seems to be the default port): FROM sebp/elk WORKDIR /opt/logstash/bin RUN ./logstash-plugin install logstash-input-syslog EXPOSE 514 Running my services via docker-compose as follows, more or less: elk-custom: # image: elk-custom build: context: . dockerfile: Dockerfile-elk ports: - 5601:5601 - 9200:9200 - 5044:5044 - 514:514 my_service: image: some_image_from_my_local_registry depends_on: - elk-custom logging: driver: syslog options: syslog-address: "tcp://elk-custom:514" However: ERROR: for b4cd17dc1142_namespace_my_service_1 Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving ERROR: for api Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving ERROR: Encountered errors while bringing up the project. Any suggestions? UPDATE: Apparently nothing seems to be listening on port ELK0, because from within the container, the command ELK1 shows nothing on this port... no idea why...
Syslog driver not working with docker compose and elk stack
The URIs used to match patterns in a RewriteRule are canonicalized in a per-directory context (either in an htaccess file or in a <Directory> container) by removing the leading /. So if the requested URL is: http://example.com/web/permalink/123 then from within an htaccess file in the document root, the URI used to match rules is web/permalink/123. But within an htaccess file in the web folder, the URI is permalink/123, etc. Thus you can't have your patterns start with a /, because it is stripped from the URI in the context of an htaccess file.
I hope I'm missing something silly. I'm trying to redirect URLs using .htaccess on Apache 2.2 using the PHP 5.4 cartridge on OpenShift's free hosting service. This matches the URI /permalink/a123 (note the lack of a leading slash in the rule's pattern): RewriteEngine On RewriteRule permalink/a.*$ /permalink/b [R=301,L] This does not match the URI /permalink/a123 (note the leading slash in the rule's pattern): RewriteEngine On RewriteRule /permalink/a.*$ /permalink/b [R=301,L] So what stupid thing do I have wrong? Thanks.
Why does leading slash not match my RewriteRule?
Found the problem. ObjectMetadata requires the content-type / encoding to be set explicitly rather than via addUserMetadata(). Changing the following: metadata.addUserMetadata("Content-Encoding", "gzip"); metadata.addUserMetadata("Content-Type", "application/x-gzip"); to: metadata.setContentEncoding("gzip"); metadata.setContentType("application/x-gzip"); fixed this.
I'm trying to use the AWS API to set the content type of multiple objects and to add a 'content-encoding: gzip' header to them. Here's my code for doing that: for (S3ObjectSummary summary : objs.getObjectSummaries() ) { String key = summary.getKey(); if (! key.endsWith(".gz")) continue; ObjectMetadata metadata = new ObjectMetadata(); metadata.addUserMetadata("Content-Encoding", "gzip"); metadata.addUserMetadata("Content-Type", "application/x-gzip"); final CopyObjectRequest request = new CopyObjectRequest(bucket, key, bucket, key) .withSourceBucketName( bucket ) .withSourceKey(key) .withNewObjectMetadata(metadata); s3.copyObject(request); } When I run this, however, the following is the result: As you can see, the prefix x-amz-meta was added to my custom headers, and they were lower-cased. And the content-type header was ignored; instead it put www/form-encoded as the header. What can I do to make it accept my header values?
How to set the content type of an S3 object via the SDK?
As far as I remember Cache is a singleton and there is only one instance of it per app domain. OutputCache uses it too and it's nothing more than just a Response.Cache. So I think cached pages should be available through the Cache (Sorry, I can't check this at the moment). And the following articles should help you in this case:http://www.codeproject.com/KB/session/exploresessionandcache.aspxhttp://aspalliance.com/CacheManager/Default.aspx
Is there any way that I can list the pages which are currently stored in the OutputCache? Just a list of paths would do, but if there's a way to get more information about each item (expiry etc.), then all the better.
How can I view the contents of the ASP.NET OutputCache?
Try removing your existing .DS_Store file first.
remote: Counting objects: 610, done. remote: Compressing objects: 100% (352/352), done. remote: Total 610 (delta 296), reused 434 (delta 210) Receiving objects: 100% (610/610), 5.50 MiB | 2.19 MiB/s, done. Resolving deltas: 100% (296/296), done. error: Untracked working tree file '.DS_Store' would be overwritten by merge. So, then, I'm left with an empty repository. I just added .DS_Store to my .gitignore file, but it seems I can't even pull a clean copy to my local machine.
Error when cloning a git repo
(Assuming you're using JaCoCo for coverage reporting.) If you're not using Lombok, you might try adding the @Generated annotation to the methods you want skipped. I'm not sure this will work - but worth a shot! If you're using Lombok [like I was], here's a solution from Rainer Hahnekamp that marks the code as @Generated, which makes JaCoCo ignore the methods, and in turn makes SonarQube display a higher coverage percentage: "Luckily, beginning with version 0.8.0, Jacoco can detect, identify, and ignore Lombok-generated code. The only thing you as the developer have to do is to create a file named lombok.config in your directory's root and set the following flag: lombok.addLombokGeneratedAnnotation = true This adds the annotation lombok.@Generated to the relevant methods, classes and fields. Jacoco is aware of this annotation and will ignore that annotated code. Please keep in mind that you require at least version 0.8.0 of Jacoco and v1.16.14 of Lombok." https://www.rainerhahnekamp.com/en/ignoring-lombok-code-in-jacoco/
I'm wondering if it is currently possible to make the Sonar test coverage ignore the equals and hashCode methods? I have heard about the block exclusion, but it didn't work.
How to make sonarqube ignore the equals and hashcode
This thread on AWS seems to imply that RACK_ENV can only be set to one of 'development' or 'production'. Interestingly, in my own tests, when configuring the Elastic Beanstalk environment with RACK_ENV=staging, the migration will run against the staging database defined in database.yml, but Passenger still attempts to connect to the production database. The solution we came up with is to set up two distinct "environments" under the app, each with their own RDS database. Then in database.yml we use the ENV parameters to connect to the proper database at run-time: production: database: <%= ENV['RDS_DB_NAME'] %> username: <%= ENV['RDS_USERNAME'] %> password: <%= ENV['RDS_PASSWORD'] %> host: <%= ENV['RDS_HOSTNAME'] %> port: <%= ENV['RDS_PORT'] %> A commenter adds: this method for database connections is also recommended practice from AWS, both for configurability and to prevent your sensitive connection strings from being checked into source control. You can, of course, limit the holes in your RDS setup, but this is an extra precaution.
In my Elastic Beanstalk Container Options, RACK_ENV is set to staging. In fact, if I SSH into the EC2 instance, run rails console in /var/app/current/ and then type Rails.env, it returns staging. Reading http://www.modrails.com/documentation/Users guide Nginx.html#RackEnv it says to set a RACK_ENV variable, since by default the value is production. You would assume everything would work, except the Elastic Beanstalk logs say: [ 2013-11-18 14:28:26.4677 8061/7fb5fe01a700 Pool2/Implementation.cpp:1274 ]: [App 7428 stdout] PG::ConnectionBad (FATAL: database "foobar_production" does not exist The foobar_production database does not exist, but staging0 does. So why is Passenger still looking at the production environment, when it should be looking at staging?
Why is Passenger looking at the staging environment?
Get yourself on the commit where you deleted the file (the second commit on the branch, if I understood correctly). Then run git checkout --orphan somebranch and commit: that will create a "new" revision with no parent. Then you can drop the branch that is messed up and rename your current branch to whatever you like (consider cherry-picking the other revisions if you did more work on the broken branch).
So, I messed up. I accidentally committed the node_modules folder to my remote repo on GitHub. It wasn't a big deal practically, because I just removed it in the next commit; no harm, no foul. But now my contributor page looks god-awful, example: I'd like to go back and remove the culprit commit that is causing this travesty. Git context: the problem commit is the very first one of the project. Should I try to remove this commit? And if so, how do I do it?
Removing a commit from the contributors list on Github
The problem was an IIS Express port access issue. By default, IIS Express does not allow the external network to access the port, and this access needs an explicit configuration. If you are facing the same problem, you can find the code snippet and other details here: Accessing IISExpress for an asp.net core API via IP
I have a dotnet core application built on dotnet core 3.1. When I tried to deploy it on an Ubuntu 18.04 server by following the steps given in this doc, I was not able to access the app on port 80 (accessing through the public IP). Here is the updated Nginx configuration, and the dotnet application is running on ports 5000 and 5001 (for now I didn't configure a service for it). I am getting the following error when accessing through the browser (public IP). Am I missing any configuration?
How to deploy dotnet core application on Ubuntu server with Nginx server?
Yes, this is expected behavior. First of all, the marshaller is global in Ignite, as is the metadata, so destroying the cache does not affect this. Second of all, the binary format allows you to dynamically change the schema, but the changes have to be compatible. I.e., you can add and/or remove fields, but not change their types, because in that case a client that uses an older schema will not be able to deserialize the object if it wants to.
I've created the binary type with the name 'SomeType' and fields: f1:string f2:string And a cache based on this type (via CacheConfiguration.setQueryEntities). Now I want to change f1 from string to int, but I don't want to change the name of the type. So when I try ignite.destroyCache(cacheName) and then create the new cache (with the same name and binary type), I get an exception while populating the cache: org.apache.ignite.binary.BinaryObjectException: Wrong value has been set [typeName=SomeType, fieldName=f1, fieldType=String, assignedValueType=int] As I understand from http://apache-ignite-users.70518.x6.nabble.com/Ignite-client-reads-old-metadata-even-after-cache-is-destroyed-and-recreated-td5800.html it's expected behaviour. But how can I refresh my binary type metadata without creating a new one?
Apache Ignite binary type invalidation
If your Docker is running on WSL, you can take back the RAM by terminating all the running distributions with the command: wsl --shutdown
not so long ago I started to practice with writing EOS smart contracts on my windows 10 computer. For this I needed to install among others, a Linux subsystem for Windows and Docker. In the last couple of days I noticed some pretty significant performance issues, when looking for the perpetrator in my task manager I came across Vmmem using up 1.8 GB of ram (which is quite a lot considering I have only 8GB on my laptop):I Googled around some and figured out that this program handles virtual machines and such, and with that Docker. I don't have Docker Desktop or Ubuntu opened at the time of this screenshot, turned off the setting "Start Docker when you log in" and restarted my computer, but still this program is hogging up my RAM. As you may understand, it isn't worth it for me to keep this running in the background considering this EOS Development is a side-thing for me, hence I don't need to use Docker often. I would deem it a shame if I had to give up on this 'hobby' for performance issues so any help would be appreciated.
Cant stop Docker/Vmmem from running
Raspberry Pis use ARM and not x86_64 processors. You can only run images created for that architecture. Try searching for ARM or ARMv7 on Docker Hub. There is a Debian image for ARM that I know of, but there must be others as well. The underlying issue is that the binary format used by ARM is not compatible with x86_64, which is the architecture used by most desktop and server systems.
I've installed Docker on Raspbian according to the official instructions (i.e., running curl -sSL https://get.docker.com | sh) but I'm not able to run the hello-world example (I've also tried other examples without success). This is the error I'm getting: pi@raspberrypi2:~ $ docker run hello-world standard_init_linux.go:178: exec user process caused "exec format error" My environment is a Raspberry Pi 2 Model B with Raspbian GNU/Linux 8 (jessie) and Docker version 17.03.0-ce, build 60ccb22. Any hint about the problem or possible directions to solve it? Many thanks!
Raspberry-pi docker error: standard_init_linux.go:178: exec user process caused "exec format error"
As mentioned before, regex might be overkill. However, it could be useful in some cases. Here's a basic replace pattern: SELECT regexp_replace( 'abcd1234df-TEXT_I-WANT' -- use your input column here instead , '^[a-z0-9]{10}-(.*)$' -- matches whole string, captures "TEXT_I-WANT" in $1 , '$1' -- inserts $1 to return TEXT_I-WANT ) ;
I have a field in a Redshift column that looks like the following: abcd1234df-TEXT_I-WANT The characters and numbers in the first 10 digits can be either letters or numbers. If I were using a capture-group regex, I would use a poorly written expression like (\w\w\w\w\w\w\w\w\w\w\W)(.*) and grab the 2nd group. But I'm having trouble implementing this in Redshift, so I'm not sure how I can grab only the stuff after the first hyphen.
How to use a regex capture group in redshift (or alternative)
If the goal is just to prepopulate a Cache with the contents of a Map<K, V>, then you should just use Cache.putAll(Map<K, V>) to put all the entries from the specified Map in the cache.
I am using Guava's LoadingCache to bulk-load all elements at once into my eager cache. But the implementation of the loadAll method that I'm supplying does not really need an Iterable<? extends K> keys argument, since my DAO does not accept any parameters either - my DAO method returns a generic Map<K,V>. Since my implementation is generic, I'm using generics to do a call on getAll(Iterable<? extends K> keys), but because of type erasure I cannot instantiate a K key and pass it to getAll, since it does not accept any null keys. Does anyone know of any workaround for this?
Guava LoadingCache getAll - but without any arguments?
I might be wrong, but go check your IP address with myipaddress.com or whatever. Then you can deny from all and allow from <YOUR IP>.
I'm new to Apache... so be gentle with me, guys :-) I have used the following for denying access to a web folder with .htaccess: order deny,allow deny from all I'm getting the "forbidden" page, which is OK because I don't want web users, spiders or scrapers to access this folder. But I am no longer able to access the web folder through a PHP script that I have written. I thought that applications were an exception to this. Any help would be greatly appreciated by this newbie... Thx...
.htaccess deny access to folder
Your Visual Studio 2017 installation has an older version of the Git Credential Manager for Windows. Upgrade to the latest version and configure this specific installation in your global git config to ensure Visual Studio is aware of the latest GCM that's available to it. See also (different issue, same problem): Cloning repository from MSA backed Azure DevOps using Visual Studio 2017 or 2019 and AAD account
I have a repository on GitHub, and I'm trying to clone this repository to a different computer. I go to the repository -> Open in Visual Studio -> after the computer has filled in the repository address and the folder to save it, I press Clone, but the following error happens: Git failed with a fatal error. fatal: AggregateException encountered. One or more errors occurred. error: cannot spawn askpass: No such file or directory fatal: could not read Username for 'https://github.com': terminal prompts disabled I have tried the following: change the repository address to [email protected]/User_Name/project_name.git and also tried https://{username}:{password}@github.com/{username}/project.git What else should I do? Thanks!
Cloning repository via visual studio shows error: cannot spawn askpass: No such file or directory
I solved the problem by updating the java plugin to 3.5
I want to use SonarQube to analyse my project which is built on Jenkins. In my project I have some literals written in binary notation (e.g. 0b00001111). When I try to do an analysis, I obtain the following error: [ERROR] Failed to execute goal org.codehaus.mojo:sonar-maven-plugin:2.5:sonar (default-cli) on project org: SonarQube is unable to analyze file : 'whatever': For input string: "b00001111" -> [Help 1] [...] Caused by: java.lang.NumberFormatException: For input string: "b00001111" at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Long.parseLong(Long.java:589) at java.lang.Long.valueOf(Long.java:776) at java.lang.Long.decode(Long.java:928) at org.sonar.java.checks.SillyBitOperationCheck.evaluateExpression(SillyBitOperationCheck.java:101) .... Versioning information: SonarQube Jenkins plugin version: 2.2.1 SonarQube version: 5.1 SonarQube Maven plugin version: 2.5/2.6 (I've tried both of them) In the project I am using JDK 1.8. I don't know how to check if SonarQube is also using 1.8, but I've chosen "Inherit from Project" in the SonarQube configuration panel in Jenkins.
SonarQube error while analyzing code with binary literals
Absolutely. From the pre-receive section of githooks(5): "Both standard output and standard error output are forwarded to git send-pack on the other end, so you can simply echo messages for the user." As long as you ensure your script exits 0, the push should succeed.
I am able to validate commits via a GitHub pre-receive hook. But instead of blocking commits with a non-zero exit code in the pre-receive hook, is there a way to display warning messages with exit code 0?
GitHub pre-receive hook display warning message
It depends on how you want to recover. If you want to restore a specific node, you need a backup from that node. If you are rebuilding your swarm cluster from an old backup, then you only need one healthy node's backup. See the following guide for performing a backup and restore: https://docs.docker.com/engine/swarm/admin_guide/#back-up-the-swarm If you restore the cluster from a single node, you will need to reset and join the swarm again on the other managers since you are running a single node cluster. What is restored in that scenario are the services, stacks, and other definitions, but not the nodes.
From the official docker doc, there is a statement (as below) looks confusing to me. From my understanding, don't we only need to pick anyone of healthy manager nodes to backup for future restoration purpose? "You must perform a manual backup on each manager node, because logs contain node IP address information and are not transferable to other nodes. If you do not backup the raft logs, you cannot verify workloads or Swarm resource provisioning after restoring the cluster." Link: https://docs.docker.com/ee/admin/backup/back-up-swarm/
Backup Docker Swarm - How many Manager Nodes Required
You can reference the values using [ without any arguments: library(gpuR) A <- seq.int(from=0, to=999) B <- seq.int(from=1000, to=1) gpuA <- gpuVector(A) gpuB <- gpuVector(B) C <- A + B gpuC <- gpuA + gpuB all.equal(C, gpuC) #> [1] "Modes: numeric, S4" #> [2] "Attributes: < target is NULL, current is list >" #> [3] "target is numeric, current is igpuVector" all.equal(C, gpuC[]) #> [1] TRUE
library(gpuR) A <- seq.int(from=0, to=999) B <- seq.int(from=1000, to=1) gpuA <- gpuVector(A) gpuB <- gpuVector(B) C <- A + B gpuC <- gpuA + gpuB gpuC is a gpuVector. I want to see the output as numeric values, so I tried to convert it to a CPU vector. In the RCUDA package there is a gathergpu() function, but there is no similar function in the gpuR package.
Converting GPUvector back to CPU vector using gpuR package
According to docker help run: … -p, --publish list Publish a container's port(s) to the host -P, --publish-all Publish all exposed ports to random ports … Command 1 uses -P (short form of --publish-all) and after that the image name. -P has no arguments. Command 2 uses -p (short form of --publish list). -p expects an argument and I think docker mistakes the image name as the argument for -p (and expects an image name after that).
I'm learning docker, and trying to run the existing images. The first command is working fine command 1: docker run --name static-site -e AUTHOR="Mathi1" -d -P dockersamples/static-site But the below command is throwing error Command 2: docker run --name mvcdotnet -e AUTHOR="Mathi2" -d -p valkyrion/mvcdotnet Error: "docker run" requires at least 1 argument. See 'docker run --help'. Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...] Run a command in a new container
Docker run not working it says requires at least 1 argument
There is no such command. Try the command below to check all running pods: kubectl get po -n <namespace> | grep 'Running\|Completed' And the command below to check the pods that are failed, terminated, errored, etc.: kubectl get po -n <namespace> | grep -v Running | grep -v Completed
I would like to know if there is a command in Kubernetes that returns true if all resources in a namespace have the Ready status and false otherwise. Something similar to this (fictitious) command: kubectl get namespace <namespace-name> readiness If there is no such command, any help guiding me in the direction of how to retrieve this information (whether all resources are ready in a given namespace) is appreciated.
Kubectl command to check if namespace is ready
ioreg -r -k AppleClamshellState -d 4 | grep AppleClamshellState | head -1 Tested and works on 10.7.*, found here. Update 2019-02-26: still works on macOS 10.14.3 Mojave. Notes from the comments: "No" means open, "Yes" means closed. It also works on macOS 10.14 and 10.15 - on 10.14 you can check for the closed state with: ioreg -r -k AppleClamshellState -d 4 | grep AppleClamshellState | head -1 | grep Yes (if any text is printed, the lid is closed, and vice versa). Also, -d 1 is enough (-d limits tree traversal to the specified depth).
Closed. This question is off-topic and not accepting answers; it was closed 10 years ago. I'm running OS X 10.8 (Mountain Lion). I was wondering if there was a terminal command to check if the MacBook Pro's lid is currently closed. If I used grep, what would I be looking for exactly, and where? The reason I ask is because I have cron jobs scheduled to run every 30 minutes. However, crontab doesn't run when the computer is sleeping/hibernating. My solution was to use pmset to schedule wakes every 30 minutes. However, I need a way to put my computer back to sleep on the condition that the lid is currently closed. I don't want my computer to be awake for too long with the lid closed, i.e. awake all night when I'm sleeping, because that could damage the screen.
How to Check if Macbook Lid is closed via Terminal? [closed]
You should useDirectMLplugin.From tensorflow 2.11 Gpu support has been dropped for native windows.you need to use DirectML plugin. You can follow the tutorialhereto install
I'm trying to use my laptop RTX 3070 GPU for CNN model training because I have to employ a exhastive grid search to tune the hyper parameters. I tried many different methods however, I could not get it done. Can anyone kindly point me in the right direction?I followed the following procedure. The procedure:Installed the NVIDIA CUDA Toolkit 11.2Installed NVIDIA cuDNN 8.1 by downloading and pasting the files (bin,include,lib) into the NVIDIA GPU Computing Toolkit/CUDA/V11.2Setup the environment variable by including the path in the system path for both bin and libnvvm.Installed tensorflow 2.11 and python 3.8 in a new conda environment.However, I was unable to setup the system to use the GPU that is available. The code seems to be only using the CPU and when I query the following request I get the below output.query:import tensorflow as tf print("TensorFlow version:", tf.__version__) print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))Output:TensorFlow version: 2.11.0 Num GPUs Available: 0Am I missing something here or anyone has the same issue like me?
Using the RTX 3070 laptop GPU for CNN model training with a Windows system
How are you opening the process handle? From the doc:

The handle must have been opened with the PROCESS_QUERY_INFORMATION access right, which enables using the handle to read information from the process object.

Another possibility is that the target process and your process are different bitness (32 vs 64). In that case you either need to use MEMORY_BASIC_INFORMATION32 or something like VirtualQueryEx64 from the wow64ext library.

Comment from the asker: "Thanks for your answer. At the time I made this thread I meant I use Windows 8 64-bit. Anyway, I'm opening the process with PROCESS_ALL_ACCESS, so I guess that PROCESS_QUERY_INFORMATION is already there. The different bitness sounded like a good idea, so I compiled my program on Windows XP 32-bit. I ran it but similarly couldn't get access to the process. On Windows XP I couldn't get the process handle even with debug privileges."
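To make the bitness point concrete, here is a small illustrative sketch (not part of the original answer) of the 64-bit MEMORY_BASIC_INFORMATION layout declared with Python's ctypes. The field names follow the Win32 headers; the struct itself can be defined and size-checked on any platform, even though actually calling VirtualQueryEx would of course require Windows:

```python
import ctypes

# 64-bit layout of MEMORY_BASIC_INFORMATION, per the Win32 headers.
# A 32-bit process inspecting a 64-bit target needs this wider layout
# (or a helper such as VirtualQueryEx64) instead of the 32-bit struct.
class MEMORY_BASIC_INFORMATION64(ctypes.Structure):
    _fields_ = [
        ("BaseAddress",       ctypes.c_uint64),
        ("AllocationBase",    ctypes.c_uint64),
        ("AllocationProtect", ctypes.c_uint32),
        ("__alignment1",      ctypes.c_uint32),
        ("RegionSize",        ctypes.c_uint64),
        ("State",             ctypes.c_uint32),
        ("Protect",           ctypes.c_uint32),
        ("Type",              ctypes.c_uint32),
        ("__alignment2",      ctypes.c_uint32),
    ]

# The 64-bit struct is 48 bytes; the 32-bit variant is only 28,
# which is one way a mismatched VirtualQueryEx call goes wrong.
print(ctypes.sizeof(MEMORY_BASIC_INFORMATION64))  # 48
```

If the sizes (and hence the buffer you pass to VirtualQueryEx) don't match what the target process actually is, the call fails or returns garbage, which is consistent with the symptoms described.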
So, I wrote a program which is able to successfully read memory from most of processes using VirtualQueryEx. However, I've come across a process for which this function fails. It's not a system process, just a game process. Without Debug privileges I couldn't even open the process's handle. With them I am able to get the process's handle but still get access denied for VirtualQueryEx. I'm not sure but maybe the process is private? If that's the case, what should I do to successfully use VirtualQueryEx function? I've also read somewhere that I might have to suspend whole process's threads before running VirtualQueryEx, but so far I didn't need that... And when I used function Thread32First to get the first thread it gave me an error: ERROR_BAD_LENGTH... I would be very grateful for any help in this matter!
Access denied error when using VirtualQueryEx
I ended up using the react-native-aws3 library to upload the images to S3. I wish it were more straightforward to find answers on how to upload an image directly using AWS Amplify, but it wasn't working. So here is what I did (the wrapper of this function is a React component; I'm using ImagePicker from 'expo-image-picker', Permissions from 'expo-permissions' and Constants from 'expo-constants' to set up the image uploading from the camera roll):

import {identityPoolId, region, bucket, accessKey, secretKey} from '../auth';
import { RNS3 } from 'react-native-aws3';

async function s3Upload(uri) {
  const file = {
    uri,
    name: uri.match(/.{12}.jpg/)[0],
    type: "image/png"
  };
  const options = {
    keyPrefix: "public/",
    bucket,
    region,
    accessKey,
    secretKey,
    successActionStatus: 201
  };

  RNS3.put(file, options)
    .progress(event => {
      console.log(`percentage uploaded: ${event.percent}`);
    })
    .then(res => {
      if (res.status === 201) {
        console.log('response from successful upload to s3:', res.body);
        console.log('S3 URL', res.body.postResponse.location);
        setPic(res.body.postResponse.location);
      } else {
        console.log('error status code: ', res.status);
      }
    })
    .catch(err => {
      console.log('error uploading to s3', err);
    });
}

const pickImage = async () => {
  let result = await ImagePicker.launchImageLibraryAsync({
    mediaTypes: ImagePicker.MediaTypeOptions.All,
    allowsEditing: true,
    aspect: [4, 3],
    quality: 1
  });

  console.log('image picker result', result);

  if (!result.cancelled) {
    setImage(result.uri);
    s3Upload(result.uri);
  }
};
I'm trying to upload an image to S3 from React Native using Amplify. I am able to upload a text file successfully, but not an image. Here is my code:

import React from 'react';
import {View, Text, Button, Image} from 'react-native';
import {identityPoolId, region, bucket} from '../auth';
import image from '../assets/background.png';
import Amplify, {Storage} from 'aws-amplify';

Amplify.configure({
  Auth: {identityPoolId, region},
  Storage: {bucket, region}
});

const upload = () => {
  Storage.put('logo.jpg', image, {contentType: 'image/jpeg'})
    .then(result => console.log('result from successful upload: ', result))
    .catch(err => console.log('error uploading to s3:', err));
};

const get = () => {
  // this works for both putting and getting a text file
  Storage.get('amir.txt')
    .then(res => console.log('result get', res))
    .catch(err => console.log('err getting', err));
};

export default function ImageUpload(props) {
  return (
    <View style={{alignItems: 'center'}}>
      <Image style={{width: 100, height: 100}} source={image} />
      <Text>Click button to upload above image to S3</Text>
      <Button title="Upload to S3" onPress={upload}/>
      <Button title="Get from S3" onPress={get}/>
    </View>
  );
}

The error message is:

error uploading to s3: [Error: Unsupported body payload number]
Upload to S3 from React Native with AWS Amplify
Certificate Authorities have one primary job: they receive requests for certificates, validate that the person requesting a certificate for a certain name is actually authorized to use that name (if I want an examplesite.com certificate, do I own examplesite.com?), and then issue it to the authorized party. That's it.
I am looking into web security and am wondering: what if your financial, credit card or private information is misused or compromised by a website holding a valid seal from some Certificate Authority?

Does the CA take any legal action against the offending entity in addition to revoking the certificate? How does the CA learn of any irregularity? Does the CA carry out any audit of the entities they certify, or is that not part of their job?
What if security is compromised by site that is certified by valid CA?
You need to add a volumes reference in your docker-compose.yml:

version: 2
services:
  myService:
    image: serviceA from registry
    # Would like to modify a config file's content...
    volumes:
      - /path/to/host/config.conf:/path/to/container/config/to/overwrite/config.conf
  mainAPI:
    ...

Since version 3.3, you can also create your own docker config to overwrite the existing file. This is similar to the above, except the docker config is now a copy of the original instead of a bind mount of the host config file:

version: 3.3
services:
  myService:
    image: serviceA from registry
    # Would like to modify a config file's content...
    configs:
      - source: myconfigfile
        target: /path/to/container/config/to/overwrite/config.conf
  mainAPI:
    ...
configs:
  myconfigfile:
    file: ./hostconfigversion.conf

See https://docs.docker.com/compose/compose-file/#long-syntax for more info.
I am trying to create a docker-compose file for a project that requires a service to be running in the background. The image for the service exists, and it contains a config file as well as an entrypoint ["/usr/bin/java", ...].

I would like to change the content of the configuration file, but I am not sure what would be the best way to do it without creating an extra Dockerfile just to recreate the image with the appropriate files. Since the entrypoint is /usr/bin/java, I am not sure how I would be able to use docker-compose's command to modify file content.

version: 2
services:
  myService:
    image: serviceA from registry
    # Would like to modify a config file's content...
  mainAPI:
    ...

Thank you
Docker-compose copy config
Based on your description in the comment it can be done pretty simply:

I want to match doc: doc, followed by two numbers: \d{2}, and then match test: test, and then one number: \d.

But there is another doc that I do not want to match, so I added ^ to the start and $ to the end:

^ represents the start of the string: it should start at doc.
$ represents the end of the string: it should end as soon as we hit the last digit.

^doc\d{2}test\d$

https://regex101.com/r/nfB3nR/4
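A quick way to sanity-check the anchored pattern, shown here as a minimal Python sketch using the example strings from the question:

```python
import re

pattern = re.compile(r"^doc\d{2}test\d$")

# The ^ and $ anchors reject strings with anything before "doc"
# or after the final digit.
print(bool(pattern.match("doc03test1")))              # True
print(bool(pattern.match("doc10test2.prdoc10.com")))  # False
```

The same pattern works unchanged on regex101, as linked above.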
I have a regex that should match doc03test1.

Test string: doc10test2.prdoc10.com

Regex: (doc?\d{2,2})(test?\d{1,1})?

Is this correct?
Return first match with pattern
git reflog doesn't show any commits since I branched

That means you might not have committed since you branched. Maybe your files were added to the index but were since removed from disk?

As seen in this guide (or in "Recover from git reset --hard?"), check the result of:

git fsck --lost-found

(more complete command: git fsck --full --unreachable --no-reflog)
Something mysterious happened and the local git branch I was working on disappeared. I have no idea where it went. Is there any way to list all commits that have ever been made across all branches, even ones that aren't around any more?
Show all git commits over all branches
Thank you for reporting these issues. We haven't tested Unity projects with SonarQube, but it's on our radar, as the community and the Unity-based code base are huge.

At this point I can't suggest any other workaround for your first problem (convention-based reflection) than to disable that rule. I created a JIRA ticket to investigate the options: https://jira.sonarsource.com/browse/SLVS-1104

And here is the ticket for the readonly field problem: https://jira.sonarsource.com/browse/SLVS-1105. That's definitely something that can be easily fixed. It will be part of the next release.
I'm trying to set up Sonar in our organization, where we mostly make Unity projects. Our problem is that the rules provided for the C# language are not always appropriate in a Unity context.

In Unity there is a MonoBehaviour class: if you declare certain methods (Awake, Start), they are called by reflection: https://docs.unity3d.com/ScriptReference/MonoBehaviour.html. So in this case I get tons of "Remove this unused private member".

Is there a way to say: don't apply that rule if my class derives from MonoBehaviour (or AssetPostprocessor, etc.) and my method name is, for example, "Awake"? I mean, is there a way to set a custom rule and invalidate another one?

The same applies to Unity's serialization system. You can have a private field with the [SerializeField] attribute and it gets automatically initialized by Unity: https://docs.unity3d.com/ScriptReference/SerializeField.html. In this case I get tons of "Make "{FIELD}" "readonly"" if my field has a default value, because it is Unity that "fills" that value (updated/changed from the inspector), but if I make it readonly it won't work in Unity's serialization system.

Thanks.
SonarQube using it with Unity 3D tons of problems
So it seems like the developers did that intentionally. There probably used to be some setting for it, but now I can't find it. This solution is a (rather lengthy) workaround:

1. Close the target project
2. Rename the root project folder
3. Open the renamed project
4. (Optional) See that there are build.gradle and app/build.gradle in the Android view > Gradle Scripts
5. (Optional) Close back out of the project
6. (Optional) Do steps 1-4 again, renaming back to the old project name
I created a project; both build.gradle (Project) and build.gradle (Module) are listed normally in the Android folder structure, as shown in Old Image 1. Then I synced the project to remote GitHub in Android Studio 3.4.1. When I clone the project from GitHub on another PC, I find that build.gradle (Project) disappears from the Android folder structure on that PC, as shown in New Image 1, but I can find it in the Project folder structure, as shown in New Image 2. Why? By the way, the cloned project works well. It seems that GitHub causes the problem; how can I fix it? Thanks!

Old Image 1
New Image 1
New Image 2

To InsurgentPointerException: this is the project-level build.gradle:

// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    ext.kotlin_version = '1.3.40'
    ext.anko_version = '0.10.8'
    repositories {
        google()
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.4.1'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        google()
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

Again to InsurgentPointerException: you can test it by cloning from https://github.com/mycwcgr/aa

You can download the Android Studio project source code at https://www.dropbox.com/s/ko8stedl135ohnt/MyTest.zip?dl=0
Why does the build.gradle (Project) disappear when I clone the project from Github in Android Studio 3.4.1?
You can try this:

if ($http_user_agent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322)") {
    return 444; # 444 is a special nginx status code that's useful in fighting attacks
}

But that user agent string is valid, which means that you could block some legitimate visits as well.

I'd suggest you try IP-based access control. See http://wiki.nginx.org/HttpAccessModule for setting that up. It's better in my opinion.
I have been having a few problems with spam recently, bots registering, and all these anti-captcha systems do not seem to be working.

I have analyzed my access logs and discovered that the user agents are not used by humans, maybe because they are old. But I also noticed that there have been some HEAD / GET / POST attacks coming in to the web server using the exact same user agent string. Possibly booters using the same user agents as spam/ad bots.

216.151.139.172 - - [24/Mar/2013:00:58:20 +0000] "GET /index.php?action=verificationcode;vid=register;rand=12c64196f4558b2dff00db7ed3ee8ad9 HTTP/1.1" 200 2189 "index.php?action=register" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322)" "-"

In nginx, without blocking all user agents, is there any way to block just this string in the user agent so these bots stop registering and advertising?

"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322)"

Thanks for reading.
How to block specific human looking user agent in nginx
This depends on the context, but generally an in-memory cache stores some value so that it can be retrieved later instead of creating a new object. This is most often used in conjunction with databases, or really any application where the construction/retrieval of an object is expensive.

For a simple memory cache, imagine the following dummy class (which violates tons of best practices, so don't copy it!):

#include <unistd.h> // for sleep

class Integer {
    int value;
public:
    Integer(int value) : value(value) {
        sleep(1000); // Simulates an expensive constructor
    }
};

Now imagine that you need to create instances of this class:

Integer one(1);
Integer two(2);
// etc.

… but later (in another method) perhaps you need to create a new instance of 2:

Integer two(2);

This is expensive. What if you could recycle the old value? Using constructors this isn't possible, but using factory methods we can do it easily:

#include <map>
#include <unistd.h>

class Integer {
    int value;
    static std::map<int, Integer> cache;

    Integer(int value) : value(value) {
        sleep(1000); // Simulates an expensive constructor
    }

    friend Integer make_int(int);
};

std::map<int, Integer> Integer::cache; // the static member needs an out-of-class definition

Integer make_int(int value) {
    std::map<int, Integer>::iterator i = Integer::cache.find(value);
    if (i != Integer::cache.end())
        return i->second;

    Integer ret = Integer(value);
    Integer::cache.insert(std::make_pair(value, ret)); // insert, since Integer has no default constructor
    return ret;
}

Now we can use make_int to create or retrieve an integer. Each value will only be created once:

Integer one = make_int(1);
Integer two = make_int(2);
Integer other = make_int(2); // Recycles instance from above.
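For comparison, the same factory-cache idea fits in a few lines of Python. This is an illustrative translation, not part of the original answer; the dictionary plays the role of the static map above:

```python
class Integer:
    _cache = {}  # plays the role of the static std::map

    def __init__(self, value):
        self.value = value  # imagine this constructor were expensive

    @classmethod
    def make(cls, value):
        # Return the cached instance if we built one before,
        # otherwise construct it once and remember it.
        if value not in cls._cache:
            cls._cache[value] = cls(value)
        return cls._cache[value]

two = Integer.make(2)
other = Integer.make(2)
print(two is other)  # True: the second call recycled the first instance
```

The structure is identical: a private constructor path (by convention here), a class-level cache, and a factory that checks the cache before constructing.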
What is an in-memory cache? I could not find much information about it on the Web. In fact, I was asked to design an in-memory cache based on OO concepts using C++, but I just don't know how to start. Any suggestions would be appreciated.
in-memory cache design using OO concept
Given that it seems you don't have an Ingress Controller installed, if you have the aws cloud-provider configured in your K8S cluster you can follow this guide to install the Nginx Ingress Controller using Helm.

By the end of the guide you should have a load balancer created for your ingress controller. Point your Route53 record to it and create an Ingress that uses your grafana service. Example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/app-root: /
    nginx.ingress.kubernetes.io/enable-access-log: "true"
  name: grafana-ingress
  namespace: test
spec:
  rules:
  - host: grafana.something.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 80
        path: /

The final traffic path would be:

Route53 -> ELB -> Ingress -> Service -> Pods
I deployed grafana using helm and now it is running in a pod. I can access it if I proxy port 3000 to my laptop. I'm trying to point a domain grafana.something.com to that pod so I can access it externally.

I have a domain in route53 that I can attach to a load balancer (Application Load Balancer, Network Load Balancer, Classic Load Balancer). That load balancer can forward traffic from port 80 to port 80 to a group of nodes (let's leave port 443 for later). I'm really struggling with setting this up. I'm sure there is something missing but I don't know what.

The basic diagram would look like this, I imagine:

Internet
↓↓
Domain in route53 (grafana.something.com)
↓↓
Loadbalancer 80 to 80 (Application Load Balancer, Network Load Balancer, Classic Load Balancer). I guess that LB would forward traffic on port 80 to the below ingress controllers (created when Grafana was deployed using Helm)
↓↓
Group of EKS worker nodes
↓↓
Ingress resource ?????
↓↓
Ingress Controllers - created when Grafana was deployed using Helm in namespace test.

kubectl get svc grafana -n test
grafana  Type: ClusterIP  ClusterIP: 10.x.x.x  Port: 80/TCP

apiVersion: v1
kind: Service
metadata:
  creationTimestamp:
  labels:
    app: grafana
    chart: grafana-
    heritage: Tiller
    release: grafana-release
  name: grafana
  namespace: test
  resourceVersion: "xxxx"
  selfLink:
  uid:
spec:
  clusterIP: 10.x.x.x
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

↓↓
Pod

Grafana is listening on port 3000. I can access it successfully after proxying to my laptop on port 3000.
How to forward traffic from domain in route53 to a pod using nginx ingress?
You need the unless operator for this:

vector1 unless vector2 results in a vector consisting of the elements of vector1 for which there are no elements in vector2 with exactly matching label sets. All matching elements in both vectors are dropped.

For your case:

count by(system) (count_over_time({job="mrs_error_list"} |~ "Timestamp" [7d]))
unless
count by(system) (count_over_time({job="mrs_error_list"} |~ "Timestamp" [1m]))

Here, the first operand returns the full list of systems that were present over the last 7 days, and unless excludes those that were present over the last minute.
I'm using Loki to store logs and Grafana for visualization. I want to create a Grafana table that lists all systems that are considered offline. A system is considered offline if it has sent a "Timestamp" log in the mrs_error_list job in the past 7 days but not in the last minute. I am able to calculate the count of such systems using Loki queries but unable to list the actual systems.

I used the following query to count the number of offline systems:

( count(count by(system) (count_over_time({job="mrs_error_list"} |~ "Timestamp" [7d]))) )
-
( count(count by(system) (count_over_time({job="mrs_error_list"} |~ "Timestamp" [1m]))) )

However, while this gives me the number of offline systems, I want to create a table that lists out these specific systems. I was thinking of subtracting the results of one query from the other, but I'm unsure how to approach this in Grafana.
How to display systems that are offline based on Loki log queries in Grafana?
If anyone can help me understand whether my-branch will be matched with the remote one or not

Probably. But it's impossible to be certain from the info you have given. To find out, say:

git branch --all -vv

If the listing for my-branch also mentions origin/my-branch, you're all set. If it doesn't, just set the upstream yourself.

Comments: the asker added, "One question though: when I do git status, I see FETCH_HEAD (red) in untracked files, which is weird because I've never seen this before. Any idea why this shows up?" Reply: "Any time you fetch you get a FETCH_HEAD; I don't know why it would show up as a file you can see. So maybe you did some weird thing. Take a look and see what it is and where it is!"
I had to delete my git branch and now need to fetch that remote branch. I did the following steps, as I've seen in someone's post here:

git clone <repository-address>
git fetch origin
git checkout -b <branch> origin/branch

But I am not sure it worked as expected. Here's my output for git branch -a:

* my-branch (green)
  master (green)
  remotes/origin/HEAD -> origin/master (red)
  remotes/origin/my-branch (red)

Could anyone help me understand whether my-branch will be matched with the remote one or not? If not, I'd appreciate it if you could explain. Thanks!
How to fetch remote branch properly?
I used the following command to create a cluster with audit log functionality. I used volumes to provide the policy file to the cluster. I think it requires both the audit-policy-file and audit-log-path variables to be set:

k3d cluster create test-cluster \
  --k3s-arg '--kube-apiserver-arg=audit-policy-file=/var/lib/rancher/k3s/server/manifests/audit.yaml@server:*' \
  --k3s-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/audit/audit.log@server:*' \
  --volume "$(pwd)/audit/audit.yaml:/var/lib/rancher/k3s/server/manifests/audit.yaml"
I am currently trying to enable and configure audit logs in a k3s cluster. Currently, I am using k3d to set up my k3s cluster. Is there a way to configure the audit logging?

I know you can pass k3s server args when creating a cluster with k3d, so I tried this:

k3d cluster create test-cluster --k3s-server-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/apiserver/audit.log' --k3s-server-arg '--kube-apiserver-arg=audit-policy-file=/etc/kubernetes/audit-policies/policy.yaml'

The obvious problem is that the audit policy does not exist in the cluster yet, so it crashes when creating the cluster.

I also tried setting up the cluster using:

k3d cluster create test-cluster --k3s-server-arg '--kube-apiserver-arg=audit-log-path=/var/log/kubernetes/apiserver/audit.log'

Then I ssh'd onto the master node and created the policy file in the wanted dir, but then I cannot find a way to point the cluster's audit-policy-file variable at this directory, and thus the policies will not apply.

Doing this with minikube is quite simple (since it is also documented), but I couldn't get it to work with k3d; there is also nothing regarding this in the docs. But I am sure there has to be a way to configure audit logs on k3s without using a third-party app like Falco.

Does someone have an idea of how to solve the problem, or want to share some similar experiences?
Enable/Configure audit-logs in k3s cluster (using k3d to set up cluster)
I thought it silly that such an inane question go so glaringly unsolved... so, for lack of knowing how to cleverly parse strings in bash without resorting to the one sed combo I know by heart:

security find-internet-password -s github.com | grep acct | sed 's/"acct"<blob>="//g' | sed 's/"//g'

ét voila....

mralexgray

This may depend on having the GitHub Mac client installed... and yet again... it might not.
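If the Keychain route isn't available, the URL-parsing idea from the question can also be done outside the shell. Here is a small illustrative sketch in Python (the helper name is mine, and the sample URLs are the ones from the question) that handles both the ssh and git:// remote forms:

```python
import re

def github_owner(remote_url):
    """Extract the owner segment from a GitHub remote URL.

    Handles 'git@github.com:owner/repo.git' as well as
    'git://github.com/owner/repo.git' and https URLs.
    """
    m = re.search(r"github\.com[:/]([^/]+)/", remote_url)
    return m.group(1) if m else None

print(github_owner("git@github.com:mralexgray/MyGitHubRepo.git"))    # mralexgray
print(github_owner("git://github.com/someguy/SomeGuysProject.git"))  # someguy
```

Note that, as the question itself observes, this yields the remote's owner, which is only your username for repos you actually own; it does not solve the "whose clone is this" problem.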
So if...

$ git config user.name
↳ Alex Gray           # OK (my name)
$ git config user.email
↳ [email protected]    # OK (my email).

and...

GithubUserForProject() {  # in pwd
  ORIGIN=$(git config --get remote.origin.url) && echo $ORIGIN
  OWNER=${ORIGIN%/*}    && echo $OWNER   # trim URL tail
  OWNER=${OWNER#*.com/} && echo $OWNER   # trim URL head
  OWNER=${OWNER#*:}     && echo $OWNER   # trim ssh URL head
}

$ cd /local/git/MyGitHubRepo && GithubUserForProject
↓ git@github.com:mralexgray/MyGitHubRepo.git
↓ git@github.com:mralexgray
↳ mralexgray   # OK (my username, but skanky way of finding it)

but...

$ cd /local/git/SomeGuysProject && GithubUserForProject
↓ git://github.com/someguy/SomeGuysProject.git
↓ git://github.com/someguy
↳ someguy      # WRONG! (cloned repo's user!)

So, how can I determine my GitHub "short username" programmatically, either from the environment, a GitHub API request, etc., or otherwise (via a script or terminal session)?
How to obtain github "short" username at the command line?
Your approach is suitable only for very small development groups and small projects without need for a complicated deploy. But:

1) Local vs remote

I usually keep remote versions of config files in the repository and overwrite them locally with untracked files. The config loader then searches for an override but does not fail if there is none.

2) Dev + production

A rather easy way is to keep a branch for development (dev) and a branch for production (master), or as many branches as you want, in fact. In the hook you get the name of the pushed branch and decide based on that where the new code will be copied to (in the simplest case).

The post-update hook may look like:

for arg in "$@"
do
  if [ "$arg" == "refs/heads/master" ]
  then
    DEST="/path/to/production"
    git --work-tree=$DEST checkout -f
  elif [ "$arg" == "refs/heads/dev" ]
  then
    DEST="/path/to/dev"
    git --work-tree=$DEST checkout -f
  fi
done

3) External repository

If you want a backup or to share with the world, yes, you should :)
My inquiries are specific and I understand that they can be subjective; I would appreciate any input.

Here's what I was doing before git:

I run a PHP/MySQL website
I develop locally and test on WAMP
I FTP to a staging site dev.mywebsite.com
Once I'm happy with all the changes, I FTP to the live site

When I decided to start using Git:

I initialized a bare repo on my hosting server
Created a post-receive hook to deploy to dev.mywebsite.com
I cloned the dev.mywebsite.com repo to my local dev machine
I test code -> commit -> push to remote (dev site)

Here are my questions:

1) There are a few files that I need to remain different between local and remote (these are mainly config files). I am using --assume-unchanged for these files. However, I read that doing 'git reset' would undo these, so my first question is: is there a better way to never change the config files when I push from local to remote?

2) My workflow ends with me pushing to the dev site. I am not sure how to proceed from there and deploy my code to the live website in the most efficient and risk-free way.

3) A bonus question: should I integrate GitHub/Bitbucket/etc. into my workflow?

Thank you
Git and website development workflow
S3 + Glue Crawler is known for poor performance when there are a lot of small files.

What you could do is create an Amazon Kinesis Data Firehose delivery stream to append your JSON files, so that you have fewer files but each file is bigger in size. This will allow your Glue crawler to finish.

The following architecture can help:

S3 > Lambda > Firehose > S3 > Glue Crawler

Put an S3 event on your bucket containing all the JSON files that triggers a Lambda. When the Lambda is triggered, read the JSON file and send it to the endpoint of the Firehose you created. The Firehose is configured to wait x seconds and until the concatenated file reaches a size threshold before dumping the result back to S3. Consider dumping the result in a columnar format like Parquet if you are working with tabular data. Once all JSON files have passed through Firehose, you can trigger your Glue crawler.
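A minimal sketch of the Lambda step in Python. Everything specific here is an assumption for illustration: the stream name, helper name, and event shape follow the usual S3-notification conventions rather than anything in the original answer, and boto3 is the AWS SDK such a Lambda would use:

```python
import json

FIREHOSE_STREAM = "json-concatenator"  # hypothetical delivery stream name

def build_record(obj):
    """Serialize one JSON object as a newline-delimited Firehose record."""
    return {"Data": (json.dumps(obj) + "\n").encode("utf-8")}

def handler(event, context):
    # boto3 is preinstalled in the AWS Lambda runtime; imported lazily
    # here so the pure helper above stays testable without the SDK.
    import boto3
    s3 = boto3.client("s3")
    firehose = boto3.client("firehose")
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        firehose.put_record(
            DeliveryStreamName=FIREHOSE_STREAM,
            Record=build_record(json.loads(body)),
        )
```

The newline terminator matters: Firehose concatenates records as-is, so without it the appended JSON objects would run together in the output file.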
Got 11 million+ JSON files in S3. Tried to crawl and catalog them with AWS Glue.

JSON file details: each file is from 250KB to 2MB uncompressed.

Logs:

BENCHMARK : Running Start Crawl for Crawler impall
ERROR : Internal Service Exception
BENCHMARK : Crawler has finished running and is in state READY

Am I missing any step in processing this huge number of files?
AWS Glue Crawler fails with 11 million files on S3
You can use the madvise call to tell the kernel what you will likely be doing with the memory in the future. For example:

madvise(base, length, MADV_SOFT_OFFLINE);

tells the kernel that you won't need the memory in question any time soon, so it can be flushed to backing store (or just dropped if it was mapped from a file and is unchanged).

There's also MADV_DONTNEED, which allows the kernel to drop the contents even if modified (so when you next access the memory, if you do, it might be zeroed or reread from the original mapped file).
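As a rough illustration (not part of the original answer), the same advice flags can be exercised from Python via mmap.madvise, available on Python 3.8+ where the platform supports it; the mapping size here is arbitrary:

```python
import mmap

# Map one page of anonymous memory and touch it.
m = mmap.mmap(-1, mmap.PAGESIZE)
m[:5] = b"hello"

# Hint that we'll want these pages soon (harmless advice);
# guarded because the constant is platform-dependent.
if hasattr(mmap, "MADV_WILLNEED") and hasattr(m, "madvise"):
    m.madvise(mmap.MADV_WILLNEED)

# MADV_DONTNEED, by contrast, lets the kernel drop the contents:
# on Linux an anonymous private mapping reads back as zeroes after
# it, so only use it for data you can afford to lose.
# m.madvise(mmap.MADV_DONTNEED)

print(m[:5])  # b'hello' (the WILLNEED hint does not alter contents)
```

In C the call is the same shape, madvise(addr, length, advice), with addr page-aligned.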
In the case that memory is allocated and its known that it (almost certainly / probably) won't be used for a long time, it could be useful to tag this memory to be more aggressively moved into swap-space. Is there some command to tell the kernel of this? Failing that, it may be better to dump these out to temp files, but I was curious about the ability to send-to-swap (or something similar). Of course if there is no swap-space, this would do nothing, and in that case writing temp files may be better.
How to selectively put memory into swap? (Linux)
The issue here is in the function help from the base package utils. You have two packages that are both exporting a function with the same name. Specifically, DoE.base and FrF2 both export Yates, so help doesn't load an Rd file; instead, it wants you to choose between different files. But help_console doesn't know how to handle this.

This can be easily corrected by adding a package argument to help_console that passes a package name down to help. To achieve this in a particular R session, you can use utils0 to load a script editor where you can change the definition of utils1 to the following:

utils2

This will then allow you to separately capture the documentation for each version of the function:

utils3

In order to get this incorporated into utils4, I've issued a pull request for this to be changed. You can see it here on GitHub. It has now been merged in the main repo on GitHub, so you can install as you normally would.
I can get help in R 3.1.2 on the Yates function from the FrF2 package through:

?FrF2::Yates

Now I want to get help in .tex format through the help_console function from the noamtools R package. I tried this code but it is not working:

help_console(topic="Yates", format = "latex")

and

help_console(topic="FrF2:Yates", format = "latex")

The noamtools R package can be obtained from github using the following commands:

Yates0
Getting help of R function using help_console function from noamtools R package
So a Kubernetes cluster is basically set up like a network. Every node/pod gets its own internal IP address and its own entry in kube-dns! All nodes and pods can communicate with each other over the given IP addresses or hostnames (it doesn't matter whether it's a master node or not).

If you use Calico, it implements a more advanced networking model using the BGP protocol (more detailed information about Calico).

Calico also brings some other features:

more possibilities to define network policies
more advanced security features
it is especially designed for large-scale deployments
IP address management solutions can be used, for greater control over IP allocation
I executed the kubectl get nodes -o wide command, and in the results there is a column labeled INTERNAL-IP, which displays the internal IP for each node. I would like to understand whether the nodes use this IP to communicate with each other and with the control plane (master node) as well. Additionally, what role does Calico play in this particular scenario?
How do nodes communicate with each other in a Kubernetes cluster?
With that push command you are not pushing mybranch1 into master: you are pushing your local master branch into the remote master branch. Try this:

git push origin mybranch1:master
I committed some changes. git diff correctly shows differences between my branch and origin/master on GitHub (for Pages). Yet git push shows "Everything up-to-date." (See the full interaction below.) What am I doing wrong here?

$ git fetch
$ git status
On branch mybranch1
nothing to commit, working tree clean
$ git diff origin/master
diff --git a/README.md b/README.md
index 2791a21..ef702c5 100644
--- a/README.md
+++ b/README.md
@@ -4,11 +4,8 @@
...[More changes here, diff is abbreviated.]
$ git push origin master
Everything up-to-date
Why won't my git push recognize changes in the codebase?