You can either: specify a fixed IP for the service, or proxy to the service DNS name.
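Both options can be sketched concretely. The manifest below pins the service IP so an external proxy_pass target stays stable; the name and IP are illustrative, and a fixed clusterIP must lie inside the cluster's service CIDR:

```
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  clusterIP: 10.32.0.16   # pin the service IP (must be inside the service CIDR)
  selector:
    app: example
  ports:
    - port: 8080
```

Alternatively, point the external nginx at the service DNS name (example-service.default.svc.cluster.local) instead of the IP, provided the box running nginx can resolve the cluster DNS.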
I have created a service using this manual: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address-service/ The service has an IP in this example (10.32.0.16, per the kubectl describe services example-service command) and we can create a proxy_pass rule:

proxy_pass http://10.32.0.16:8080;

in an external (outside the cluster) nginx. This IP is always different (it depends on the number of services, etc.). How can I specify this service for my external nginx?
Kubernetes: how to connect to service from outside the cluster?
To my knowledge this is not possible. AWS Lambda polls the stream and invokes your Lambda function when it detects any type of stream record update. Your Lambda will have to ignore the records that you are not interested in. You can use the eventName property of the stream record, which can have the values INSERT, MODIFY, or REMOVE.
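A minimal sketch of the record-filtering approach described above. The event shape follows the standard DynamoDB stream event format; the handler name is arbitrary:

```python
def handler(event, context):
    """Process only INSERT records from a DynamoDB stream event."""
    inserts = [
        record for record in event.get("Records", [])
        if record.get("eventName") == "INSERT"
    ]
    for record in inserts:
        # NewImage holds the newly written item in DynamoDB JSON format
        new_image = record["dynamodb"]["NewImage"]
        print("inserted item:", new_image)
    return {"processed": len(inserts)}
```

MODIFY and REMOVE records still consume invocations; the handler simply returns without acting on them.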
I currently have an AWS DynamoDB stream that triggers a Lambda function. The Lambda function is triggered by both insert and update events in DynamoDB. Is there a way to change the configuration so that the Lambda function is only triggered by inserts?
Configure DynamoDB stream trigger with insert only
Check your git version (use the latest) and try this format:

git clone https://[email protected]/org/project.git
I'm trying to clone the repository project into my desktop GitHub window and it failed. I got these errors inside the log file:

2016-07-20 09:04:11.4932|INFO|thread:14|DashboardViewModel|Selected repository 'techla/webroshhayin'
2016-07-20 09:04:11.5922|INFO|thread:14|DashboardViewModel|Took 0ms to Create RepositoryViewModel for location D:\Documents\GitHub\webroshhayin
2016-07-20 09:04:12.1092|ERROR|thread:16|ComparisonGraph|JavaScript Alert from the comparison graph: http://github-app/: TypeError: Cannot read property 'selectCommit' of null
2016-07-20 09:08:11.6309|INFO|thread:76|GitLfsSmudgeTail|Cleaning up Git LFS smudge progress environment variable
2016-07-20 09:08:11.6739|WARN|thread: 1|StandardUserErrors|Showing user error Please check your log file for more details, or contact our support team if you are still having problems.

A solution on how to resolve this issue would be appreciated.
Can't do: git clone
An OutOfMemoryException is pretty common when you use the Bitmap class; bitmaps can require a lot of memory. One standard way to get into trouble is being sloppy about calling the Dispose() method. Not calling Dispose() is something you'll usually get away with in .NET, since finalizers will clean up after you. But that tends not to work well with bitmaps, because they take a lot of unmanaged memory to store the pixel data but very little managed memory. There is at least one Dispose() call missing in your code: you are not disposing the old background image. Fix:

em.SelectById();
if (pbEmp.BackgroundImage != null)
    pbEmp.BackgroundImage.Dispose();   // <== here
if (!em.EmptyPhoto)
    pbEmp.BackgroundImage = em.Picture;
else
    pbEmp.BackgroundImage = null;

And possibly in other places; we can't see how em.Picture is managed. Also, and much harder to diagnose, GDI+ is pretty poor at raising accurate exceptions. You can also get an OOM from a file with bad image data. You'll find a reason for that regrettable behavior in this answer.
I have an application that saves user information with an image into a database. An admin can access the saved information through a different form view. Clicking on a list box item displays the details, with the image retrieved from the database. UserViewDetails.cs:

private void lbEmp_SelectedIndexChanged(object sender, EventArgs e)
{
    try
    {
        if (lbEmp.SelectedIndex != -1)
        {
            em.Emp_ID = Convert.ToInt32(lbEmp.SelectedValue);
            em.SelectById();
            if (!em.EmptyPhoto)
                pbEmp.BackgroundImage = em.Picture;
            else
                pbEmp.BackgroundImage = null;
            txtEmpName.Text = em.Emp_Name;
            txtImagePath.Text = em.ImgPath;
            cmbEmpType.SelectedText = em.EmployeeType;
            cmbCountry.SelectedValue = em.CountryID;
            cmbCity.SelectedValue = em.CityID;
        }
    }
    catch (Exception) { }
}

This form is called from the parent form Form1. Form1.cs:

try
{
    var vi = new Admin.frmViewEmployeeInfo();
    vi.ShowDialog();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}

Here, an "out of memory" exception is caught. What is happening? The same code doesn't throw any exception in another application of mine.
Why does my form throw an OutOfMemory exception while trying to load image?
Since I can format better here as an answer: it might be that cron simply picks the wrong interpreter in your case. A simple solution is to provide the full path to it, which you can find with:

md@gw1:~$ type python2.7
python2.7 is /usr/bin/python2.7

Now use this inside your crontab:

0 12 * * * /usr/bin/python2.7 your_script.py

Alternatively, use a so-called shebang:

md@gw1:~$ cat your_script.py
#!/usr/bin/python2.7
print "hello"
md@gw1:~$ chmod +x your_script.py
md@gw1:~$ ./your_script.py
hello

The first line in your_script.py together with the chmod makes the script executable and ensures the correct interpreter is used.
I have a Python script using the requests module. It works on my desktop (Windows), and works when I run it manually on my VM (Ubuntu 14.04 / Python 2.7.14). However, when the exact same command is scheduled as a cron job on the same VM, it fails. The offending line seems to be:

index_response = requests.get(my_https_URL, verify=False)

The (slightly redacted) response is:

(<class 'requests.exceptions.SSLError'>, SSLError(MaxRetryError("HTTPSConnectionPool(host=my_https_URL, port=443): Max retries exceeded with url: / (Caused by SSLError(SSLError(1, '_ssl.c:510: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure'),))",),), <traceback object at 0x7f3d877ce5f0>)

HTTP URLs don't seem to be affected, and indeed some HTTPS URLs are working... (I think it's domains requiring SNI that are not working). I've tried adding verify=False, and even running the cron script as sudo, but no change. It originally failed when I was running it manually too, but I then installed Python 2.7.14 (in addition to Python 2.7.6), which resolved the manual issue. To add some further complexity, the script is being triggered by another one using exec (the test runner runs various test scripts including this one); could this be related? I didn't have this problem with urllib2, but would prefer not to move back from requests to urllib2 if I can help it...
Python SSLError on Ubuntu, but only when run from a CRON
The traditional high-availability design is:

- Data stored in Amazon RDS, preferably configured as Multi-AZ in case of failure
- Objects stored in Amazon S3
- At least two Amazon EC2 instances for the application, spread across more than one Availability Zone, preferably created with Auto Scaling
- A Load Balancer in front of the instances
- An Amazon Route 53 domain name resolving to the Load Balancer

This way, both instances are serving traffic (you can use two smaller instances if you wish). The Load Balancer performs continuous health checks. If an instance fails the health check, the load balancer stops sending it traffic, so users are minimally impacted. If Auto Scaling is configured, it can automatically replace an unhealthy instance. This can be done by providing a fully-configured AMI, or by providing a User Data script that installs and configures the software at startup (or a combination of both).

When performing a software update:

- Update the Auto Scaling Launch Configuration, which defines how new instances should start (eg different User Data or AMI)
- Tell Auto Scaling to launch a new instance, then terminate an old instance; this is a rolling update
- If you can't do a rolling update (due to code change), deploy a second Auto Scaling group and test it. If everything is okay, point the Load Balancer to the new Auto Scaling group, then terminate the old one (after a few minutes to allow connection draining).

This is very similar to what Elastic Beanstalk offers: it will create the Load Balancer and Auto Scaling group for you, and deploy code updates. The result is a highly-available, resilient architecture that can auto-recover from failure. It will also force you to use code repositories rather than manually updating servers, which leads to greater reliability and reproducibility. See: AWS Design for Web Application Hosting
I have an AWS EC2 instance called primary. I have another EC2 instance called secondary. The primary instance's IP is linked to a domain, and it contains all the hosted code and services. I want to be able to copy all the data (files/daemons/services etc.) from primary to secondary in real time. Can this be done via some service on AWS? Or if I have to write code, what kind of code/Linux script am I looking at?

Edits:
- I am expecting the secondary instance to be able to instantly run the system that is being copied. As soon as a failover is detected, I will change the IP linked to the domain to this secondary machine.
- For now the system is using a database to store data, but we will be moving it to an RDS instance.
- The system is a Linux machine.
- I looked at Load Balancer, Auto Scaling group and EFS, but they don't solve my purpose. I looked at Elastic Beanstalk, but it seemed like overkill for what I am trying to achieve. I can be wrong here too.

Any help is greatly appreciated.
Copy files and services in real time from one aws instance to another
Neither WebView nor GridView needs to be embedded in a ScrollView. The WebView can scroll by itself when the content size exceeds the screen size, and so can the GridView. Normally, the GridView creates only as many child views as are visible; once a view goes off screen, it is reused. So if you embed a GridView in a ScrollView, you may break the reuse pattern.
I have a layout memory issue. When I have a large WebView it doesn't show anything, and logcat shows "View too large to fit into drawing cache". The layout is:

<ScrollView
    android:id="@+id/scrollNoticia"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@drawable/ficha_curva"
    android:layout_below="@+id/linea"
    android:scrollbars="none" >
    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingBottom="12dp" >
        <WebView
            android:id="@+id/webViewNoticia"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:scrollbars="none" />
        <GridView
            android:id="@+id/gridGaleria"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_below="@id/webViewNoticia"
            android:horizontalSpacing="4dp"
            android:verticalSpacing="4dp"
            android:numColumns="4" >
        </GridView>
    </RelativeLayout>
</ScrollView>
WebView and GridView into ScrollView, View too large to fit into drawing cache
As you can see in t/t3406-rebase-message.sh or git-rebase.sh, the second message ("Current branch master is up to date.") occurs during a rebase:

Check if we are already based on $onto with linear history

It is possible your second repo is configured to always rebase on pull:

git config pull.rebase true

The first message ("Already up-to-date.") occurs when there is nothing to merge (since pull is by default a fetch + a merge).
Why does git give different responses to git pull?

(dev) go|c:\srv\lib\django-cms> git pull
Already up-to-date.

(dev) go|c:\srv\lib\dk> git pull
Current branch master is up to date.

Both repos come from GitHub, and I'm on the master branch in both repos:

(dev) go|c:\srv\lib\django-cms> git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean

(dev) go|c:\srv\lib\dk> git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working directory clean
What is the difference between "Already up-to-date." and "Current branch master is up to date."
In your nginx config you have:

listen 9000;
server_name test.dev;

So your domain should be resolved with:

http://test.dev:9000

But you should also add test.dev to your Windows hosts file (%windir%\system32\drivers\etc\hosts):

127.0.0.1 test.dev
I am new to WSL2 but so far it works really nicely. I have a simple HTML page I want to serve with Nginx, and I want to access it with a web browser on the host. The default nginx web page works(!), so I started to mimic the default nginx HTML page (/var/www/html/index.html). I have created:

/var/www/test.dev/index.html
/etc/nginx/sites-available/test.dev (+ symlink in sites-enabled/)

Nginx config:

server {
    listen 9000;
    listen [::]:9000;
    server_name test.dev;
    root /var/www/test.dev;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}

So the only big difference from the default config is the port 9000. I reloaded/restarted nginx and tried to curl my configs:

$ curl https://localhost
$ curl https://localhost:9000

Both requests were successful. But now I want to access the pages on my Windows host with a web browser. The first one (default) works and I can see the default Nginx HTML page. The second one does not work: site can't be reached. So my questions:

1. Why is that? Do I have to make some changes to the Windows Firewall settings?
2. I would like to have a virtual host name like example.com instead of localhost:9000. I've edited /etc/hosts... it works with curl but again not in the host browser.
Nginx running in WSL2 (Ubuntu 20.04) does not serve HTML page to Windows 10 host on another port than port 80
If I understood correctly, your frontend web application depends on the API server and sends requests to it. In that case, your API service should be available from outside of the cluster, which means it should be exposed as the NodePort or LoadBalancer service type. P.S. You can refer to a service using its ClusterIP only from inside the cluster.
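As a sketch, a NodePort service for the API might look like this (the name, labels and ports are placeholders, not taken from the question):

```
apiVersion: v1
kind: Service
metadata:
  name: api             # hypothetical service name
spec:
  type: NodePort
  selector:
    app: api            # must match the API deployment's pod labels
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # port opened on every node (30000-32767 range)
```

For the in-cluster callers (the socket.io server), use the service DNS name (api.<namespace>.svc.cluster.local) rather than the cluster IP; the DNS name is stable across redeployments, which also addresses the changing-address concern.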
Let me start this by saying I am fairly new to k8s. I'm using kops on AWS. I currently have 3 deployments on a cluster:

1. Frontend nginx image serving an Angular web app. One pod. External service.
2. socket.io server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
3. API that is requested by both the socket.io server and the web application. Internal service. (Should it be external?)

The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API. From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address? Additionally, what is the best way to configure these addresses in my files without having to change the addresses each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate the deployment.)
Kubernetes front end deployment timing out when requesting api deployment
Although the answer from DavidPi is already accepted, I don't think it will work. For more info you can check this question: Analyze Kubernetes pod OOMKilled. But I will add some info here. Unfortunately you cannot handle an OOM event inside Kubernetes or your app. Kubernetes doesn't enforce memory limits itself; it just passes those settings down to the container runtime, which actually executes and manages your payload. The events from the answer above will let you react when Kubernetes generates an event, but not when something else does. In the case of an OOM kill, Kubernetes only learns about the event after your app has already been killed and the container has been stopped, so you will not be able to run any code in your container in response, because the container is already stopped.
I noticed sometimes my containers are OOMKilled, but I'd like to print some logs before exiting. Is there a way that I can intercept the signal in my entrypoint script?
Kubernetes: print log before being OOMKilled
I think you've got one too many *'s there. And yes, you can set the PATH variable in cron, in a couple of ways. But your problem is the extra *.
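A corrected crontab for the example in the question: cron uses five time fields (minute, hour, day of month, month, day of week), so the sixth * was being treated as the start of the command. You can also set PATH at the top of the crontab; the value shown here is just an example:

```
PATH=/usr/local/bin:/usr/bin:/bin

# min hour dom mon dow  command
* * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1
```

With six fields, the shell tried to run the glob expansion of the extra * (a directory listing starting with /Applications on a Mac), which is exactly the "/bin/sh: Applications: command not found" output seen in foo.txt.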
I made a simple cron job by typing the following command:

crontab -e

Then in the vi file that opened, I typed:

* * * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1

The file foo.txt indeed gets created, but its content is:

/bin/sh: Applications: command not found

I'm guessing this has to do with the PATH value of cron. Is there any way to set the PATH in the cron file such that when I transfer it to another Mac I won't have to set the PATH manually? Is this even a PATH problem?
/bin/sh: Applications: command not found [duplicate]
In your program, you are trying to allocate n objects. Your OS allocates some space for your JVM to work with, and that space is called heap space. You get an OutOfMemoryError when all of your heap space is filled and no more space is left to allocate new objects. So what you should do is increase your heap space with -Xmx, like this:

java -Xmx1024m YourClassName

(note there is no space between -Xmx and the size). This will allocate a heap space of 1024 MB (1 GB) for your program. You may request heap space as per your requirement.
Although I know usage of Java Vectors is discouraged as deprecated, I am stuck with legacy code that I don't have the luxury to modify. I am getting an OutOfMemoryError while trying to addElement to the Vector. Following is my code snippet. Please let me know if I can improve this code.

/* objOut is the Vector object. idx is an incoming integer argument. val is some Object. */
int sz = objOut.size();
if (idx == sz) {
    objOut.addElement(val);
} else if (idx > sz) {
    for (int i = (idx - sz); i > 0; i--) {
        objOut.addElement(null); // Code throws OutOfMemory on this line
    }
    objOut.addElement(val);
} else {
    objOut.setElementAt(val, idx);
}
java.lang.OutOfMemoryError with Java Vector.addElement(Object o) method
Your test code references some files (containing the type MyService) that have not been copied to the image. This happens because your COPY . . instruction is executed after the WORKDIR /tests/Tests instruction; therefore you are copying everything into the /tests/Tests folder, and not the referenced code which, according to your description, resides in the src folder. Your problem should be solved by performing COPY . . in your second line, right after the FROM instruction. That way, all the required files will be correctly copied to the image. If you proceed this way, you can simplify your Dockerfile to something like this (not tested):

FROM microsoft/dotnet:2.2.103-sdk AS build
COPY . .              # Copy all files
WORKDIR /tests/Tests  # Go to tests directory
ENTRYPOINT ["dotnet", "test"]  # Run tests (this performs a restore + build before launching the tests)

From the follow-up comments: since the Dockerfile lives inside the tests folder and src sits next to it, COPY . . alone still won't pick up the referenced code; run docker build from the parent directory and point at the Dockerfile with the -f option.
So I have an ASP.NET project in a folder (src) and a test project in a folder right next to it (tests). What I am trying to achieve is to be able to run my tests and deploy the application using Docker; however, I am really stuck. Right now there is a Dockerfile in the src folder, which builds the application and deploys it just fine. There is also a Dockerfile for the test project in the tests folder, which should just run my tests. The tests/Dockerfile currently looks like this:

FROM microsoft/dotnet:2.2.103-sdk AS build
WORKDIR /tests
COPY ["tests.csproj", "Tests/"]
RUN dotnet restore "Tests/tests.csproj"
WORKDIR /tests/Tests
COPY . .
RUN dotnet test

But if I run docker build, the tests fail, I am guessing because the application code to be tested is missing. I am getting a lot of:

The type or namespace name 'MyService' could not be found (are you missing a using directive or an assembly reference?)

I do have a project reference in my .csproj file, so what could be the problem?
How to run tests in Dockerfile using xunit
There are many differences:

- In A you are writing to gzip, which compresses the data before writing to disk. B writes plain SQL files, which can be 5-10 times bigger (results from my database). If your performance is disk-bound, this could be the solution.
- -c ("full inserts") is not specified in A.
- -q is not specified in A.
- For large databases, INFORMATION_SCHEMA queries can be a pain with MySQL (try executing SELECT * FROM information_schema.columns). For B, every dump has to do these queries, while A has to do them only once.
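The disk-volume point above is easy to check in isolation. This small sketch uses synthetic data (not an actual dump) to show how well repetitive SQL text compresses:

```python
import gzip

# Synthetic stand-in for a dump: repetitive INSERT statements compress very well
template = b"INSERT INTO t (id, name) VALUES (%d, 'user');\n"
sql = b"".join(template % i for i in range(10_000))

compressed = gzip.compress(sql)
ratio = len(sql) / len(compressed)
print(f"raw: {len(sql)} bytes, gzipped: {len(compressed)} bytes, ratio: {ratio:.1f}x")
```

Real dumps compress less uniformly than this synthetic text, but the 5-10x figure quoted above is in the typical range for SQL dumps.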
I use two different ways to back up my MySQL database. mysqldump with --all-databases is much faster and has far better performance than a loop that dumps every database into its own file. Why? And how can I speed up the looped version?

A: /usr/bin/mysqldump --single-transaction --all-databases | gzip > /backup/all_databases.sql.gz

B: this loop over 65 databases, even with nice:

nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c xxx -q > /backup/mysql/xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx -q > /backup/mysql/dj-xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx-p -q > /backup/mysql/dj-xxx-p_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-foo -q > /backup/mysql/dj-foo_08.sql

mysqldump.cnf is only used for authentication; there are no additional options in it.
mysqldump with single tables much slower than with --all-databases
Actually I just tested this and you can use xml2js straight out of the box because of https://github.com/aws/aws-sdk-js/blob/master/lib/xml/node_parser.js. That's what the AWS JS SDK uses. Sample Lambda code used to test this, written entirely in the Lambda online editor and run against test data:

'use strict';
var xml2js = require('xml2js');
console.log('Loading function');

var options = { // options passed to xml2js parser
    explicitCharkey: false, // undocumented
    trim: false,            // trim the leading/trailing whitespace from text nodes
    normalize: false,       // trim interior whitespace inside text nodes
    explicitRoot: false,    // return the root node in the resulting object?
    emptyTag: null,         // the default value for empty nodes
    explicitArray: true,    // always put child nodes in an array
    ignoreAttrs: false,     // ignore attributes, only create text nodes
    mergeAttrs: false,      // merge attributes and child elements
    validator: null         // a callable validator
};

exports.handler = (event, context, callback) => {
    var parser = new xml2js.Parser(options);
    //console.log('Received event:', JSON.stringify(event, null, 2));
    console.log('value1 =', event.key1);
    console.log('value2 =', event.key2);
    console.log('value3 =', event.key3);
    callback(null, event.key1); // Echo back the first key value
    //callback('Something went wrong');
};

That said, if you want to avoid that route, you're going to have to go the standard package install route.
Is it possible to do XML parsing in an AWS Node.js Lambda function without using a 3rd party module like xml2js? I'm wondering if AWS has any built-in functionality for this like in the AWS SDK for Node.js.
XML parsing in AWS Lambda function
You can use a .htaccess file to deny access to it, or you can just move it out of the htdocs or public_html directory.

<Files "cron.php">
    Order deny,allow
    Allow from name.of.this.machine
    Allow from another.authorized.name.net
    Allow from 127.0.0.1
    Deny from all
</Files>

So it can only be requested from the server.
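The Order/Deny/Allow syntax above is for Apache 2.2. On Apache 2.4 the equivalent uses Require directives; a sketch, with the host names as placeholders:

```
<Files "cron.php">
    Require ip 127.0.0.1
    # Require host name.of.this.machine   # add further authorized hosts as needed
</Files>
```

Within the Files block, access is granted only when one of the listed Require conditions matches, so all other clients are denied.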
I have a cron file, monthly.php, and I want to prevent direct access to it using a web browser. It should be accessible only through the cPanel cron. Thanks.
Securing cron file [closed]
The COPY instruction in the Dockerfile copies the files in src to the dest folder. It looks like you are either missing file1, file2 and file3, or trying to build the Dockerfile from the wrong folder. Refer to the Dockerfile docs. Also, the command for building the Dockerfile should be something like:

cd into/the/folder/
docker build -t sometagname .
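A likely root cause in the question worth spelling out: docker build - < Dockerfile pipes only the Dockerfile to the daemon via stdin, with no build context at all, so COPY has nothing to copy from. Passing a directory as the context fixes this (the tag name here is arbitrary):

```
# No build context is sent: COPY fails with "no such file or directory"
docker build - < Dockerfile

# The current directory (.) is sent as the build context, so file1..file3 are available
docker build -t myimage .
```

This matches the observed behavior: the files sit next to the Dockerfile on disk, but the daemon never receives them when the Dockerfile is piped in.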
I have a Dockerfile set up in my root (~) folder. The first three lines of my file look like this:

COPY file1 /root/folder/
COPY file2 /root/folder/
COPY file3 /root/folder/

But it returns the following error for each line:

No such file or directory

The files are in the same directory as my Dockerfile, and I am running the command docker build - < Dockerfile in the same directory in the terminal as well. What am I doing wrong here exactly?
COPYing a file in a Dockerfile, no such file or directory?
Fast tokenizers are not thread-safe, apparently. AutoTokenizer is a wrapper that uses either the fast or the slow implementation internally, and its default is the fast (not thread-safe) one, so you have to switch to the slow (safe) one. That's why you add the use_fast=False flag. I was able to solve this with:

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
I'm using Flask with Gunicorn to implement an AI server. The server takes in HTTP requests and calls the algorithm (built with PyTorch). The computation is run on the Nvidia GPU. I need some input on how I can achieve concurrency/parallelism in this case. The machine has 8 vCPUs, 20 GB memory and 1 GPU with 12 GB memory. 1 worker occupies 4 GB memory and 2.2 GB GPU memory. The max workers I can give is 5 (because of GPU memory: 2.2 GB * 5 workers = 11 GB). 1 worker = 1 HTTP request (max simultaneous requests = 5). The specific questions are: How can I increase the concurrency/parallelism? Do I have to specify the number of threads for computation on the GPU? Right now my gunicorn command is:

gunicorn --bind 0.0.0.0:8002 main:app --timeout 360 --workers=5 --worker-class=gevent --worker-connections=1000
Gunicorn worker, threads for GPU tasks to increase concurrency/parallelism
Accept only func(). If you need to call a function with arguments, wrap that function call in an anonymous function.

Definition:

type BizFunc func()

func DoAndMonitor(bizFunc BizFunc) {
    // monitor begin
    defer func() {
        // monitor end
    }()
    bizFunc()
}

Usage:

// with a no-argument function
var myFunc func()
// pass it directly
DoAndMonitor(myFunc)

// function with arguments
var myFunc2 func(string, int)
// the actual function call gets wrapped in an anonymous function
DoAndMonitor(func() {
    myFunc2("hello", 1)
})
I am a Go rookie. I want to use Prometheus to monitor at the function level, and I'm trying to be as generic as possible, like Spring's transactional annotations, using aspect-oriented programming. I tried the following:

type BizFunc func()

func DoAndMonitor(bizFunc BizFunc) {
    // monitor begin
    defer func() {
        // monitor end
    }()
    bizFunc()
}

but there is no function type that's compatible with any function. So I wonder whether this can be implemented in Go?
How to use aspect-oriented programming (AOP) in Go? [duplicate]
The URL you are currently using points to the GitHub information page about the file. Changing it to the direct link to the raw file should solve your error:

https://raw.github.com/desandro/masonry/master/jquery.masonry.min.js

(How to spot this yourself: visiting https://github.com/desandro/masonry/blob/master/jquery.masonry.min.js shows that it isn't a JavaScript file, and viewing the page source confirms the cause of the error, since the HTML for the page starts on line 4, the line mentioned in the error message.)
I am trying to see if I can include the jQuery plugin Masonry in a gadget on Google Sites, created from Google Apps Script by using HtmlService to render HTML from an HTML file with:

<script src="http://code.jquery.com/jquery-1.8.2.min.js"></script>
<script src="https://github.com/desandro/masonry/blob/master/jquery.masonry.min.js"></script>

However, this only renders this result:

Invalid script or HTML content: https://github.com/desandro/masonry/blob/master/jquery.masonry.min.js:4+1-2: Unexpected token <.

I am new to Google Apps Script and testing it out, but it seems to me that it complains about an invalid token in the masonry file? Or am I interpreting this wrong? Can anybody tell me if this should be possible? I read in some places that Google Apps Script is restrictive, but I do not know what to look for to see whether that is the reason in this case. Many thanks.
google app script not loading JQuery Masonry from Github
All performance-related questions have a single answer: measure. Guesswork is always wrong when it comes to performance (usually because the performance is bad despite the design of the system, meaning you think it shouldn't be slow but it is). The obvious thing here is that something is wrong with the firewall, but that alone doesn't tell you what can be done about it. You must try several things (like setting up a server on the next developer box without a firewall) and measure how much each approach helps.
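A minimal sketch of the measure-first approach. call_service is a stand-in for the real HttpWebRequest call; swap in the real call and compare configurations (HTTP 1.0 vs 1.1, keep-alive on or off) by the numbers rather than by guesswork:

```python
import statistics
import time

def measure(call, runs=50):
    """Time repeated calls and report median and 95th-percentile latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

def call_service():
    # Placeholder for the real web-service call being benchmarked
    time.sleep(0.001)

print(measure(call_service))
```

Reporting a percentile alongside the median matters here: a buggy firewall often shows up as a long tail (high p95) rather than a uniformly slow median.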
We have a system that makes calls to a web service across a proxy. This is coded in C#, using HttpWebRequest. We've had problems with the speed of these calls for a long time, and I'd been trying to track them down. An unrelated conversation led one of the operations guys to mention that the port we had been going over used firewall software that had a less-than-optimal (read: buggy) implementation for porting HTTP 1.1 calls. Sure enough, I dropped the web request to use HTTP 1.0 instead of 1.1, and the speed instantly doubled. We had already disabled keep-alive because it was just too shaky. So, question: for the short term, are there any variables other than keep-alive and the HTTP version that could possibly further boost speed by changing aspects of the HttpWebRequest call? I guess it's hard to tell without knowing the ins and outs of the firewall software, which I don't yet. More importantly, they have a newer version of the software on a different port that apparently is much, much better and supports HTTP 1.1 fully. Should I expect a significant increase in response time by switching to HTTP 1.1 and keep-alives?
HTTP version performance over firewalls
The oldest supported SonarQube version is 7.9.3, and it provides a feature to do this out of the box. Steps:

1. Open the project.
2. Click the Issues tab.
3. Check the checkbox next to the Bulk Change button.
4. Uncheck the issues which you don't want to mark as False Positive.
5. Click the Bulk Change button.
6. In the popup, go to Transition and click Resolve as false positive.
7. Click the Apply button.

You can also add a comment and/or send an email notification for every change.
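The same transition can also be scripted against the SonarQube Web API. A sketch assuming the POST api/issues/bulk_change endpoint; verify the parameter names against the Web API docs of your server version, and note that the URL, token and issue keys below are placeholders:

```python
from urllib.parse import urlencode

def bulk_false_positive_params(issue_keys, comment=None):
    """Build the form body for POST api/issues/bulk_change."""
    params = {
        "issues": ",".join(issue_keys),       # comma-separated issue keys
        "do_transition": "falsepositive",     # the transition shown in the UI popup
    }
    if comment:
        params["comment"] = comment
    return urlencode(params)

body = bulk_false_positive_params(["AXk1", "AXk2"], comment="known false alarm")
print(body)

# The actual request would be sent with an authenticated POST, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     "https://sonar.example.com/api/issues/bulk_change",  # placeholder URL
#     data=body.encode(),
#     headers={"Authorization": "Bearer <token>"})         # placeholder token
# urllib.request.urlopen(req)
```

Scripting is mainly useful when the set of issues to resolve is produced by another tool, since the UI path above is simpler for one-off bulk changes.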
Is there a way to mark violations as False Positive in bulk in SonarQube? I know the older versions had the Switch Off Violations plugin, through which this was achieved. Please let me know how it can be done.
Is there a way to mark violations as `False Positive` in bulk?
While I'm not sure about using one of the backup files, it is possible to export an application schema to a Deluge file from one account and create an equivalent application on a different account by uploading the Deluge file to the new account: Settings -> Application IDE -> Export. Note that the data won't be included in this operation. To include the data, it will need to be exported also; I think there are several methods to do the data export/import.
Good day all. For the purposes of business continuity I have investigated whether I can download a backup file from one Zoho instance/profile and restore it to another, brand-new Zoho Creator instance/profile. The idea is to mitigate risk if my main profile is somehow ransomed or hijacked. But from what I can see, there is no option to restore a downloaded backup into Zoho. Please can someone give me some advice?
How can I restore a downloaded Zoho Creator backup file on a separate Zoho profile?
Velocity was a codename while the product was in development; I don't think there was ever an intention from Microsoft to release a standalone product called Velocity. (Apart from anything else, there's already a Java product called Velocity.) AppFabric is the output of two development projects, Velocity and Dublin (which is workflow, not WCF). Although the two parts share an installer, if you want the distributed caching parts of AppFabric you can just install them; you're not obligated to install the whole thing. I've circled the options you need to select in the installer for the distributed caching bits in red, the WF bits in blue.
I heard about Velocity several years ago when MS made a splash going head to head with MemCached. Recently I needed to try out which solution would work best in my project: MemCached or .NET Velocity. It took me a while to find Velocity again. It seems like MS merged Velocity with some WCF tools and it is now called AppFabric: http://msdn.microsoft.com/en-us/windowsserver/ee695849 I am a little worried that this will impact how quickly MS can release new features/improvements for Velocity, now that it is part of a much bigger package. Why did MS get rid of the standalone version of Velocity, after all the effort they spent promoting it?
Why did Microsoft get rid of Velocity Distributed Cache as standalone product?
This is happening because of the netcat version included in BusyBox: the -k option, which is what allows reconnects, doesn't exist in that version.
If I start a docker container like this

docker container run -it -p 9001:9001 alpine nc -p 9001 -l -k

I can then send this little dockerized netcat server some plain text from a terminal on the host using

nc localhost 9001

But, once I ^C the netcat in my host terminal, I can't make a new connection to the docker container. Redoing the command tells me the connection succeeds, but netcat closes right away:

$ nc localhost 9001 -v
Connection to localhost 9001 port [tcp/*] succeeded!

Since I'm running my container in interactive mode, I can see that the netcat inside the container is still running. So why can't I reconnect? Or at least, what can I do to resolve the issue? It works just fine if I do it all without Docker.
docker container port closes after first connection
.sdf is the file format for SQL Server Compact Edition, and that's quite a different beast from "real" SQL Server. .bak is the database backup format for a full-blown SQL Server. The way to get from .sdf to .bak would be:
1. create a temporary database in a "full" SQL Server (Express, Standard, Enterprise) with the name of the .sdf file
2. using a tool like SQL Server Compact data and schema scripting, export your table structure and data from the SQL Server CE file into "full" SQL Server
3. once you've created all tables and other DB objects and inserted all your data into the "full" SQL Server database, create a backup of that database to get your .bak file
I'm trying to publish an MVC3 .NET web project to binero.se. The problem is that my database file is an .sdf, and binero wants a .bak file. Does anyone know how to solve this problem?
Convert .sdf to .bak
I find that sometimes if, in the Leaks instrument, you click the button that looks something like this: {= and drag your app delegate file onto the screen, it will lead you in the right direction by highlighting the code that allocated that leaked block. Every time it goes into a function call, drag the source file with that function onto it. This can be hit and miss though, as sometimes these mystery leaks aren't tracked back to the delegate.
If ObjectAlloc cannot deduce type information for the block, it uses 'GeneralBlock'. Any strategies to get leaks from this block that may eliminate the need of my 'trial and error' methods that I use? The Extended Detail thing doesn't really do it for me as I just keep guessing.
Finding leaks under GeneralBlock-16?
The standard Unix approach is cron, so you could for example edit /etc/crontab and add a line like

*/5 * * * * root sphinx [whatever other options you need]

which means:
'every five minutes' (for the */5 part)
of every hour (the * in position 2)
of every day of the month (the * in position 3)
of every month (the * in position 4)
of every day of the week (the final * in position 5)

Another example: '4 5 * * 6' amounts to 'at 5:04am (four minutes after five) on every Saturday (day of the week is 6)'. You may need or want to switch the user from root to, say, www-data if sphinx runs as that, and you obviously need to adjust the arguments. Lastly, look into the directories

$ ls -1d /etc/cron.*
/etc/cron.d
/etc/cron.daily
/etc/cron.hourly
/etc/cron.monthly
/etc/cron.weekly

for examples --- other packages put their jobs there (and this mechanism is more general, and newer, than direct editing of /etc/crontab).
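A field pattern like */5 can be sanity-checked with a small script. This is a hypothetical helper (not part of cron itself), a minimal sketch handling only the wildcard, step, and literal forms used in the examples above:

```python
def matches_field(field, value):
    """Return True if `value` matches a crontab field.

    Minimal sketch: supports '*' (any), '*/N' (every N), and a
    literal number; real cron also allows ranges and lists.
    """
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return int(field) == value

# '*/5 * * * *' fires at minutes 0, 5, 10, ...
fired = [m for m in range(60) if matches_field("*/5", m)]
print(fired[:4])  # [0, 5, 10, 15]
```

For the minute field (0-59) a step of N coincides with divisibility by N, since the range starts at 0.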
I'm running a slice of Ubuntu Hardy. I've installed Sphinx, and I would like to run the Sphinx indexer every x minutes. What is the best way to go about doing this?
best way to reindex sphinx in ubuntu hardy
Got the answer from http://social.msdn.microsoft.com/forums/en-US/sqlce/thread/79d2f8a2-1366-4d14-8c61-220f47183368/ : "(...) assign the OpenFileDialog.RestoreDirectory flag to true and then after it closes the original directory will be restored prior to the open dialog... that way you don't need the Directory.SetCurrentDirectory. (...)"

fileChooser = new OpenFileDialog();
fileChooser.RestoreDirectory = true;
My application needs to back up and restore .sdf files. There is a single dataSet for the whole application, and some bindingSources and table adapters on forms using this same dataSet. Just for the sake of a test I tried to copy the .sdf at runtime to a backup folder and back to restore it, and my application could not find the file, as if it was not there anymore. How should I manage connections to open and close the database, since the dataSet does it automatically at the beginning and end of my application?
Backup and Restore a SQLCE .sdf database
This seems to be possible using the create-deployment command: http://docs.aws.amazon.com/cli/latest/reference/opsworks/create-deployment.html. Note: I haven't done this myself, currently working on it though! A commenter confirms it works as intended when sending the command with --command "{\"Name\":\"update_custom_cookbooks\"}".
Is there a way to update custom cookbooks from the command line on an OpsWorks instance? I don't see a way to do it with the AWS OpsWorks Agent CLI or with the AWS Command Line Interface. The only way I am able to do it is through the console. Thanks!
opsworks: 'update custom cookbooks' from command line
Yes, you can use read_only: in the compose file: https://docs.docker.com/compose/compose-file/#read_only
I try to apply the docker CIS (https://github.com/docker/docker-bench-security). Test 5.13 is: "Mount container's root filesystem as read only". There is an option for docker run to mount the root FS read-only: --read-only=true. But I can't find the possibility to achieve the same with docker-compose. Is there a possibility to mount the root FS read-only with docker-compose?
mount root FS read only with docker-compose
There is a difference between docker ps and docker ps -a:
docker ps: shows the running containers on your host
docker ps -a: shows running and exited containers on the host
So in your case your container has exited, meaning it is not running on the host; that's why it only shows in docker ps -a.
Hi, I am a new Ubuntu user. During my practice I installed Docker with sudo apt install docker.io, checked the version and everything was fine, so I started working. After that I pulled the Ubuntu image with sudo docker pull ubuntu and checked it with sudo docker images; the image is shown with all details. Then I created a container with sudo docker container run -it ubuntu /bin/bash. When I try to see my container with sudo docker ps the result is blank:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

But when I run sudo docker ps -a it shows me the container:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
110359388f2d ubuntu "/bin/bash" 21 hours ago Exited (0) 21 hours ago stupefied_dewdney

How do I fix this? Why is my container not showing in docker ps?
docker ps not showing my container; it shows in docker ps -a
It turned out that the problem is that we were using the portal for testing, and that takes into account user access, not AFD access. So if you apply this by using an ARM template it should work, because the AFD has the right access to it.
I'm trying to add a custom certificate on the Azure Front Door from a key vault. When the AKV has a network firewall enabled, the AFD can't access the certificate even if "Allow trusted Microsoft services to bypass this firewall" is enabled and I have a valid AFD access policy on the key vault. When I disable the network firewall, the AFD can access the certificate. It appears that the whole problem is in the firewall section. Am I missing something here? Has anyone faced this problem before?
Front door can't access the KV certificate
Yes, you can definitely use cert-manager with Kubernetes, and Let's Encrypt is also a nice way to manage the certificate. ACME has different API URLs to register a domain, and through it you can also get a wildcard (*) SSL certificate for the domain. In simple terms: install cert-manager, use the nginx ingress controller, and you will be done with it; you have to define the TLS certificate on the Ingress object. You can refer to this tutorial for the setup of cert-manager and the nginx ingress controller: https://cert-manager.io/docs/tutorials
I'm trying to apply SSL to my Kubernetes clusters (production & staging environments), but for now only on staging. I successfully installed cert-manager, and since I have 5 subdomains I want to use wildcards, so I want to configure it with dns01. The problem is that we use GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (https://github.com/jetstack/cert-manager/issues/1083) and also a PR to support this, but I was wondering if there is a workaround to use GoDaddy with cert-manager, since there is not a lot of activity on this subject. I want to use ACME so I can use Let's Encrypt for certificates. I'm fairly new to Kubernetes, so if I missed something let me know. Is it possible to use Let's Encrypt with types of issuers other than ACME? Is there any other way I can use GoDaddy DNS & Let's Encrypt with Kubernetes? For now I don't have any Ingresses, only 2 services that are external-facing: one frontend and one API gateway, as LoadBalancer services. Thanks in advance!
Kubernetes cert-manager GoDaddy
You have to have 2 SSL certificates, just like how you have set them up: one in us-east-1 for CloudFront and one in eu-west-1 for the load balancer. This is not an issue and should not cause any errors like the 502 error you are seeing. I'm not sure what you mean about the ELB "offloading" the certificate "sent" by the CDN. The load balancer should not care what certificate the CDN is serving, and the CDN shouldn't be sending that to the load balancer as part of the requests for origin data.
I used Amazon Certificate Manager (ACM) to generate an SSL certificate for my domain (ex. mydomain.com). In order to use this certificate in my CloudFront CDN, the certificate was generated in N. Virginia. My CDN is mapped to an ELB sitting in Ireland (eu-west-1). The issue is that when I want to use the generated certificate in my ELB listeners, I'm not able to do it (I can't find the certificate). Am I just missing something, or is it impossible to do? I tried to generate another certificate using ACM in the same region as the ELB and with the same domain. The certificate was issued without any problem and then I was able to attach it to the ELB. But as I expected it didn't work: I'm getting a 502 error. Of course the ELB can't offload the certificate sent by the CDN because it is based on another certificate.
Is it possible to use the same ACM generated certificate with CloudFront and an ELB based in Ireland (eu-west-1)
Dockerfile for running the docker CLI inside Alpine:

FROM alpine:3.10
RUN apk add --update docker openrc
RUN rc-update add docker boot

Build the docker image:

docker build -t docker-alpine .

Run the container (the host and the Alpine container will share the same Docker engine):

docker run -it -v "/var/run/docker.sock:/var/run/docker.sock:rw" docker-alpine:latest /bin/sh
How can I install Docker inside an Alpine container and run Docker images? I could install it, but could not start Docker, and when running I get a "docker command not found" error.
How can I install Docker inside an alpine container?
Yes, this is possible, at least for the tensorflow backend. You just have to also import tensorflow and put your code into the following with blocks:

with tf.device('/cpu:0'):
    # your code

with tf.device('/gpu:0'):
    # your code

I am unsure if this also works for the theano backend. However, switching from one backend to the other is just setting a flag beforehand, so this should not cause too much trouble.
In the context of deep neural networks training, the training works faster when it uses the GPU as the processing unit. This is done by configuring CudNN optimizations and changing the processing unit in the environment variables with the following line (Python 2.7 and Keras on Windows):os.environ["THEANO_FLAGS"] = "floatX=float32,device=gpu,optimizer_including=cudnn,gpuarray.preallocate=0.8,dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic,dnn.include_path=e:/toolkits.win/cuda-8.0.61/include,dnn.library_path=e:/toolkits.win/cuda-8.0.61/lib/x64"The output is then:Using gpu device 0: TITAN Xp (CNMeM is disabled, cuDNN 5110)The problem is that the GPU memory is limited compared to the RAM (12GB and 128GB respectively), and the training is only one phase of the whole flow. Therefore I want to change back to CPU once the training is completed. I've tried the following line, but it has no effect:os.environ["THEANO_FLAGS"] = "floatX=float32,device=cpu"My questions are:Is it possible to change from GPU to CPU and vice-versa during runtime? (technically)If yes, how can I do it programmatically in Python? (2.7, Windows, and Keras with Theano backend).
How to change the processing unit during run time (from GPU to CPU)?
On Linux it is possible to collect such information via OProfile. Each CPU has performance event counters; see here for the list of the AMD family 15h events: http://oprofile.sourceforge.net/docs/amd-family15h-events.php. OProfile regularly samples the event counter(s) together with the program counter. After a program run you can analyze how many events happened and (statistically) at what program position. OProfile has built-in Java support: it interacts with the Java JIT and creates a synthetic symbol table to look up the Java method name for a piece of generated JIT code. The initial setup is not quite easy. If interested, I can guide you through it or write a little more about it.
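The question's technique of visiting the data through a randomly shuffled index array, so the hardware prefetcher cannot predict the next access, can be sketched as follows (Python is used here only for brevity; the pattern is language-independent):

```python
import random

data = list(range(16))

# INDEX holds every position of DATA exactly once, in random order.
index = list(range(len(data)))
random.shuffle(index)

# Each element is still read exactly once, just in an order the
# prefetcher cannot anticipate.
visited = [data[i] for i in index]
assert sorted(visited) == data
```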
I'm writing a program in Java. In this program I'm reading and changing an array of data. This is an example of the code:

public double computation() {
    char c = 0;
    char target = 'a';
    int x = 0, y = 1;
    for (int i = 0; i < data.length; i++) {
        // Read Data
        c = data[index[i]];
        if (c == target)
            x++;
        else
            y++;
        // Change Value
        if (Character.isUpperCase(c))
            c = Character.toLowerCase(c);
        else
            c = Character.toUpperCase(c);
        // Write Data
        data[index[i]] = c;
    }
    return (double) x / (double) y;
}

BTW, the INDEX array contains the DATA array's indexes in random order to prevent prefetching. I'm forcing all of my cache accesses to miss by using random indexes in the INDEX array. Now I want to check the behavior of the CPU cache by collecting information about its hit ratio. Is there any tool developed for this purpose? If not, is there any technique?
How to collect AMD CPU Cache Hit Ratio with Java?
This does not require any code in a view:

{% with cache_timeout=user.is_staff|yesno:"0,300" %}
{% cache cache_timeout cacheidentifier user.is_staff %}
your content here
{% endcache %}
{% endwith %}
How would I go about caching pages for anonymous users but rendering them for authorized users in Django 1.6? There used to be a CACHE_MIDDLEWARE_ANONYMOUS_ONLY flag that sounded perfect, but that has been removed. I'm asking because every page has a menu bar that displays the logged-in user's name and a link to his/her profile. What's the correct way of doing this? It must be a common problem, but I haven't found the right way from looking through the Django documentation.
Caching for anonymous users in django
Whenever you want to merge something into master, the best approach is to create a branch where you create the end state you want on master, and merge/PR that branch. So when a developer wants to merge something from their project branch to master, they would create a new branch, merge master into it (if necessary), fix any and all conflicts and bring it to the state that makes sense with respect to master, which includes fixing the .gitmodules configuration in your case. They would then submit a PR to merge that branch into master. There should be no conflict now, since the conflicts were resolved before doing the PR. This advice holds for any and all PRs: it's good practice to make the person submitting a PR resolve the conflicts on the PR branch, rather than have the owner of the master branch do it. And even when the owner resolves the conflicts for the submitter, they can do it on the PR branch before accepting the PR. (I might do that for one-time contributors if I don't think it's worth teaching them how to resolve their conflicts, but I consider it a good learning opportunity so I generally prefer to spend the time showing the contributor how to do it instead.)
I'm working on a project which has different branches with different developers. Each branch has submodules, and .gitmodules differs in every branch, so this causes a merge conflict when the main branch holder tries to merge. I tried different ways to resolve it but with no results; note that I have no access to the main branch. Has anyone encountered the same problem? How did you resolve it? Thank you in advance.
git submodules in different branches
Right click on the instance in the AWS console. Under "Instance Lifecycle", select "Stop". Wait for the instance to stop by refreshing the console or waiting for it to refresh. Once it's in the "stopped" state, right click on the instance again, and click "Start". Note: this is not an operating-system reboot. You're actually stopping the instance in the hypervisor and bringing it back up, which should route it to new hardware. The instance will come up on new hardware, and you'll have manually "scheduled" the maintenance. This is also how you'd increase the instance size, if you ever wanted more power than a t1.micro: you'd stop the instance, "Change Instance Type", and start it again.
I am using an AWS t1.micro instance to run some web services of my application (LAMP server), and also one admin panel running with a SQLite DB. Now I have gone over my free tier limit. A scheduled event for system maintenance has been set; my instance is EBS-backed, and I want to do the maintenance manually before the scheduled time. It is shown as system maintenance. Is it an instance reboot or a system reboot? I am getting confused. Can anybody help me in achieving this manually?
how to do system maintenance scheduled event in ec2 manually?
To skip builds on a per-commit basis you can add [ci skip] to the commit message, as described in the docs. For example:

Before: Add blerb
After: Add blerb [ci skip]

To skip all non-PR builds, you can exit early if the TRAVIS_PULL_REQUEST environment variable is set to "false", from your .travis.yml:

before_install: # Earliest build step
  - if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then echo "Not a PR, skipping" && exit; fi
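The TRAVIS_PULL_REQUEST check from the snippet above can also be expressed as a tiny script; should_run_build is a hypothetical helper name, and the variable's semantics (the PR number for PR builds, the literal string "false" otherwise) are as documented by Travis:

```python
import os

def should_run_build(env):
    """Return True only for pull-request builds.

    Travis sets TRAVIS_PULL_REQUEST to the PR number on PR builds
    and to the literal string "false" on push builds.
    """
    return env.get("TRAVIS_PULL_REQUEST", "false") != "false"

if not should_run_build(os.environ):
    print("Not a PR, skipping")
```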
I am developing a library on GitHub that has Travis checks attached to it. I'd like to have a WIP pull request open to discuss ideas easily. There are a lot of tests set up for the project on Travis, so I'd like to not trigger the tests every time I push a commit (to prevent the server from being overloaded), as the code is not expected to pass anyway. Is there a way I could do this on GitHub without having access to the Travis configuration?
How to prevent travis jobs after each commit?
In your Nginx container you only need the static files, and in your PHP-FPM container you only need the PHP files. If you are capable of splitting the files, you don't need any file on both sides. "Why isn't it enough to add it just only to my webserver? A web server is a place that holds the files and handles the request..." Nginx handles requests from users. If a request is for a static file (configured in the Nginx site), it sends the contents back to the user. If the request is for a PHP file (and Nginx is correctly configured to use FPM for that location), it sends the request to the FPM server (via a socket or TCP), which knows how to execute PHP files (Nginx doesn't know that). You can use PHP-FPM or whatever other interpreter you prefer, but this one works great when configured correctly.
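The routing decision described above can be modeled with a toy dispatcher; route_request is a hypothetical helper and deliberately ignores all the detail of real Nginx location matching:

```python
def route_request(path):
    """Toy model of the split described above: Nginx serves static
    files itself and hands PHP scripts to the FPM backend."""
    if path.endswith(".php"):
        return "php-fpm"   # this backend needs the PHP sources
    return "static"        # this side needs the static files

print(route_request("/index.php"))    # php-fpm
print(route_request("/css/app.css"))  # static
```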
I am pretty new with all of this Docker stuff and I have this docker-compose.yml file:

fpm:
  build:
    context: "./php-fpm"
    dockerfile: "my-fpm-dockerfile"
  restart: "always"
  ports:
    - "9002:9000"
  volumes:
    - ./src:/var/www/src
  depends_on:
    - "db"

nginx:
  build:
    context: "./nginx"
    dockerfile: "my-nginx-dockerfile"
  restart: "always"
  ports:
    - "8084:80"
  volumes:
    - ./docker/logs/nginx/:/var/log/nginx:cached
    - ./src:/var/www/src
  depends_on:
    - "fpm"

I am curious why I need to add my project files to the fpm container as well as to the nginx one. Why isn't it enough to add them just to my web server? A web server is a place that holds the files and handles requests... I believe that this information would be useful to other Docker newbies as well. Thanks in advance.
Why do we need to map the project files in both PHP-FPM and web server container?
Sorry, metrics are not stored at that level.
I am exporting the Java data from Sonar via the web service /api/resources as described in http://docs.sonarqube.org/pages/viewpage.action?pageId=2752802. Can I obtain the metrics at the method level? For example, the complexity is also available as "function_complexity", but this is the average per class of the complexity of all methods. This average is rather meaningless, as typically the few high values of the really complex methods are combined with the many low values of all the getters and setters. Therefore, I want to obtain the complexity of each method, or at least of all methods with a complexity that exceeds a certain limit. I had expected some qualifier related to methods, like "MTH", but I cannot find anything similar.
Metric values at method level
As awkward as it seems to be, I fixed it by adding $key = null before the foreach loop!
I have this code, which works perfectly fine:

if ($array) {
    foreach ($array as $key => $value) :
        doSomething($key, $value);
    endforeach;
}

But when deployed, SonarQube gives me this "bug": "Review the data-flow - use of uninitialized value." with $key underlined. Any suggestion?
SonarQube - PHP - Review the data-flow - use of uninitialized value
Creating the backend Service object: the key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached.

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http

Creating the frontend: now that you have your backend, you can create a frontend that connects to the backend. The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is "hello", which is the value of the name field in the preceding Service configuration file.
In my front end (deployed as an AWS ECS service), I have a fetch request to an AWS Route 53 host name which is directed to a backend ECS service. Now I would like to deploy this infrastructure locally in a Kubernetes Minikube cluster. If the front-end pod and the back-end pod are connected together using a Kubernetes Service, should I replace that fetch's argument with the DNS name of the back-end pod? That is, change fetch(Route_53_Route) to fetch(DNS_name_of_backend_pod)?
Fetch Request to a Docker container
In my opinion the best eclipse plugin to find memory leaks is Memory Analyzer (MAT)
I want to know if there is any tool or Eclipse plugin for Java that lists the count of instances per class at runtime. I need it to test a memory leak issue. Thanks!
Java - Count class instances runtime [closed]
For user sites like https://maikklein.github.io/, use the master branch for your code. You currently have your site in the gh-pages branch.
I recently wanted to switch away from Jekyll to Hugo, but I ran into some trouble. I have two repositories, which are basically the same but with different names:

https://github.com/MaikKlein/maikklein.github.io
https://github.com/MaikKlein/blog

The first one should be reachable from https://maikklein.github.io/ and the second one is reachable from https://maikklein.github.io/blog. But https://maikklein.github.io/ is not reachable. Why is that?
Did Github remove support for domain names like 'username.github.io'?
Quite a complex question, but here are my thoughts on the matter. Before actually answering the questions, I must say that in my experience setting up a Kubernetes load balancer outside of public clouds is a hassle, so I wouldn't recommend the approach; but, assuming that is not an issue for you, the main difference is how things are set up. I would say that the biggest advantage of having it in Kubernetes is that you may also have a service mesh solution like Istio, which gives you other advantages than just load balancing. Furthermore, it would be easier to do canary (or other special types of) deployments with infrastructure load balancing than with Spring. The only real advantage I see is if you have different teams with responsibility for the infrastructure, deployment and coding. Say, if the Kubernetes team responsible for creating services, deployments, etc. is overloaded, you might get code out faster if your dev team has capacity and competence; but, again, there would be no point in using Kubernetes then. Or in case you cannot actually create the LoadBalancer service in Kubernetes (as mentioned, not always straightforward in a non-cloud environment). On a side note, if you are on the way to deploying Kubernetes internally and want to get load balancing to work, have a look at MetalLB.
For a microservices architecture not in the cloud: What is the difference between the load balancer of Kubernetes and the load balancer of Spring Cloud? What are the advantages of implementing Eureka and the Spring Boot load balancer when using Kubernetes for deployment, rather than using the Kubernetes load balancer and Kubernetes service discovery?
Why do we need Eureka and Spring boot load balancer when using Kubernetes not in the cloud?
After more research I found this in RFC 1035:

4.2.2. TCP usage

Messages sent over TCP connections use server port 53 (decimal). The message is prefixed with a two byte length field which gives the message length, excluding the two byte length field. This length field allows the low-level processing to assemble a complete message before beginning to parse it.

So the solution is in the code below:

from scapy.all import *
ip = IP(dst="216.239.32.10")
request = DNS(rd=1, qd=DNSQR(qname="google.be", qtype="A"))
# size = 27 (dec) = 1b (hex)
twoBytesRequestSize = "\x00\x1b"  # BIG ENDIAN
completeRequest = twoBytesRequestSize + str(request)  # length field is a prefix
SYN = ip / TCP(sport=RandNum(1024, 65535), dport=53, flags="S", seq=42)
SYNACK = sr1(SYN)
ACK = ip / TCP(sport=SYNACK.dport, dport=53, flags="A", seq=SYNACK.ack, ack=SYNACK.seq + 1)
send(ACK)
DNSRequest = ip / TCP(sport=SYNACK.dport, dport=53, flags="PA", seq=SYNACK.ack, ack=SYNACK.seq + 1) / completeRequest
DNSReply = sr1(DNSRequest, timeout=1)
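The two-byte length field can be computed instead of hard-coded. A minimal sketch using only the standard library (no scapy), with frame_dns_over_tcp as a hypothetical helper name:

```python
import struct

def frame_dns_over_tcp(message: bytes) -> bytes:
    """Prefix a DNS message with its length as a two-byte big-endian
    integer, per RFC 1035 section 4.2.2."""
    return struct.pack(">H", len(message)) + message

# A 27-byte message gets the prefix b"\x00\x1b" (0x001b == 27),
# matching the hard-coded value above.
framed = frame_dns_over_tcp(b"x" * 27)
print(framed[:2])  # b'\x00\x1b'
```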
I use Scapy and Python to build my DNS requests. No problem with UDP requests, but when I want to use TCP (with exactly the same request that I use with UDP), Wireshark says that my DNS requests are malformed. Here is my Python code:

from scapy.all import *
ip = IP(dst="130.104.254.1")
dns = DNS(rd=1, qd=DNSQR(qname="google.be", qtype="A"))
SYN = ip / TCP(sport=RandNum(1024, 65535), dport=53, flags="S", seq=42)
SYNACK = sr1(SYN)
ACK = ip / TCP(sport=SYNACK.dport, dport=53, flags="A", seq=SYNACK.ack, ack=SYNACK.seq + 1)
send(ACK)
DNSRequest = ip / TCP(sport=SYNACK.dport, dport=53, flags="PA", seq=SYNACK.ack, ack=SYNACK.seq + 1) / dns
DNSReply = sr1(DNSRequest, timeout=1)

The three-way handshake is fully completed before I send my request. Thank you very much!
Scapy DNS request malformed only with TCP
You can map your Google Cloud Storage bucket to a Google Cloud Load Balancer. The steps are really easy, as mentioned here: https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets. You can also skip the GoDaddy SSL certificate and let Google manage the SSL certificate for you free of cost, as mentioned here: https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
I have a static website hosted on Google Cloud Storage, and I also bought a GoDaddy domain. My website is working fine with the domain, but it is not secured, so I bought GoDaddy's SSL certificate. How do I add the SSL certificate to the Google Cloud Storage static website? I have tried to find solutions, but they use Cloudflare or involve Google App Engine; as my website is hosted on Google Cloud Storage and I already bought the SSL certificate, I need a solution using this SSL certificate and GCS.
How to add GoDaddy's SSL certificate to a Google Cloud Storage hosted static website with a GoDaddy domain
The option to add multiple project folders was added to the Atom editor in April 2015. See this blog post for more information. To use this feature, you have the following options:

Provide multiple folders on the Atom command line: atom ./folder1 ./folder2
Use the Application: Add Project Folder command from the command palette.
Is it possible to add multiple projects in the same window in the GitHub Atom editor, as in Sublime Text? Any options?
Multiple projects on single window in Github Atom editor
The iterator will iterate from oldest to youngest for a LinkedHashMap. If you want to shrink the LinkedHashMap to a given size you can use the following:

Map<K,V> lhm = ...
int desiredSize = ...
for (Iterator<K> iter = lhm.keySet().iterator(); iter.hasNext(); ) {
    if (lhm.size() <= desiredSize) break;
    iter.next(); // required, else IllegalStateException since current == null
    iter.remove();
}

This should take about 20 ns per entry removed.
How can you shrink a LinkedHashMap? I overrode the removeEldestEntry method, but this method is only called once when a new value is inserted, so there is no chance of making the map smaller this way. The LinkedHashMap only gives me a normal Iterator and doesn't have any removeLast or listIterator method, so how can you find the last, say, 1000 entries and remove them? The only way I can think of is iterating through the whole thing, but that can take ages... Creating a new map every time I want to remove only a few elements will also waste memory. Maybe remove the first values via the Iterator and then reinsert them once maxSize was reduced in the removeEldestEntry method; the reinserting would then kick out the oldest values. That is very ugly code... Any better ideas? EDIT: Sorry, the iteration order is oldest to youngest. So it's easy.
Shrink LinkedHashMap in Java
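For comparison, the same evict-oldest-until-small-enough idea can be sketched in Python with collections.OrderedDict, which also preserves insertion order. The names and sizes here are made up for illustration; this is not from the original answer:

```python
from collections import OrderedDict

def shrink_to(desired_size, mapping):
    """Evict the oldest (first-inserted) entries until at most desired_size remain."""
    while len(mapping) > desired_size:
        mapping.popitem(last=False)  # last=False pops the oldest entry

cache = OrderedDict((k, k * k) for k in range(10))
shrink_to(4, cache)
# cache now keeps only the 4 youngest keys: 6, 7, 8, 9
```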
Simple answer is that you can't do that. You could configure a Lambda subscriber to output the messages to a log or something and then watch that from the CLI. If you want to subscribe an arbitrary client to a queue of messages, then SQS might be more suitable.
I'm looking for a way to listen arbitrarily to my SNS topic, and in parallel trigger an SNS message from my code base. Next I need to test that the message was sent correctly.

code-that-listens-and-exits-when-it-gets-hello-world-message
aws sns publish --topic-arn arn:aws:sns:ap-southeast-1:123456789:hello --message "Hello World!"

I find plenty of information on how to subscribe to a topic from the CLI, but I am puzzled how to actually listen or test for the event coming through the topic. Which protocol should I be using? I don't want to go down the route of checking that a subscribed email endpoint contains the message in its inbox.
How to listen to SNS messages from the CLI to test they have been sent?
If you want to use several commands on one line, you have to separate them with a semicolon:0 */1 * * * PHANTOMJS_EXECUTABLE=/usr/local/bin/phantomjs;/usr/local/bin/casperjs /usr/local/share/casper-test/test.js 2>&1Or, if you need to execute commands sequentially and only progress to next if the previous has been successful, use && operator.For better readability you could just put those commands in a shell script and run that from cron.
so I have phantomJS and casperJS installed, everything is working fine, but I'm trying to add my casperJS file to cronjob (ubuntu) and I'm getting error:/bin/sh: 1: /usr/local/bin/casperjs: not foundMy crontab file:0 */1 * * * PHANTOMJS_EXECUTABLE=/usr/local/bin/phantomjs /usr/local/bin/casperjs /usr/local/share/casper-test/test.js 2>&1Any Ideas whats wrong?
CasperJS and cronjob
The sonar-custom-rules-examples you pointed at are all written in Java and use parsers written in Java for the various target languages. The sonar-dotnet analyzers for C# and VB.NET are written in C# using the Roslyn framework provided by Microsoft. If you want to write your own custom rules for C# then writing a Roslyn analyzer is definitely the easiest way to do it (Roslyn replaced FxCop, which is now obsolete). However, there are dozens of free third-party Roslyn analyzers available, so it's possible that someone has already written at least some of the rules you want. Have a look on NuGet to see what's available. Next, you want issues raised by a Roslyn analyzer to appear in SonarQube. If you are using new-ish versions of SonarQube (v7.4+), the SonarScanner for MSBuild (v4.4+) and the SonarC# plugin (v7.6+), then issues raised by third-party Roslyn analyzers will automatically be imported as generic issues. See the docs for more info. Generic issues have a couple of significant limitations, such as not being able to select which rules to run in the SonarQube UI. If you want a more full-featured experience (or if you are using an older version of SonarQube), you can use the SonarQube Roslyn SDK to generate a custom SonarQube plugin that wraps the Roslyn analyzer. Using the SDK is straightforward: it's an exe that you run against the Roslyn analyzer, and it generates a SonarQube plugin jar for you.
I've been doing some research on it. What I found is a list of quite nice samples, but for other languages (here). I also looked at sonar-dotnet, but it doesn't look similar to the other implementations. Finally, and to be honest probably my last chance, I took a quick look at FxCop Custom Rules, and I'm not sure what would be the right way. What I'm trying to do is just a basic C# rule that can be reviewed like this one predefined by Sonar, I mean with Noncompliant Code and a Compliant Solution.
How can I create my own C# custom rules for SonarQube?
Please delete the ssl folder on the puppet client too, and then try again: puppet agent --waitforcert 60 --test
Created a new Puppet Master to upgrade to Puppet6Did "rm -rf /etc/puppetlabs/puppet/ssl" to clear old certificatesAfter pointing the old client at the new master, the client cannot generate new certificates.Error received is this:Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Failed to generate additional resources using 'eval_generate': SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN={server FQDN}] Error: /File[/opt/puppetlabs/puppet/cache/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet:///pluginfacts: SSL_connect returned=1 errno=0 state=error: certificate verify failed: [unable to get local issuer certificate for /CN={server FQDN}]How do I get the Client to generate a new certificate?
Cannot generate new puppet certificates with new puppet master
Having your name in a comment is not a good way to take credit, because other people can change the file later on and they should take credit as well. Let's look at how other projects give credit to authors, taking Rails as an example.

Authors are credited in the commits themselves: https://github.com/rails/rails/commit/d57356bd5ad0d64ed3fb530d722f32107ea60cdf
Authors are credited in the changelog: https://github.com/rails/rails/blob/master/activejob/CHANGELOG.md
Authors are credited on the contributors page: https://github.com/rails/rails/graphs/contributors
Some other projects have a file with the list of contributors: https://github.com/RubyMoney/money-rails/blob/master/CONTRIBUTORS

Personally I think having myself as the commit author is credit enough. A changelog entry is good, too.
I recently made a pull request to an open source repository that I frequently contribute to (one that I am also a maintainer of), and I got a request from another maintainer to delete all credit to myself, as this is an open source project. I'm currently giving myself credit by using a comment at the top of the file:

#Created by Harsha Goli

I have seen this syntax used everywhere, so I'm confused about what the 'proper standards' are. The exact comment the other maintainer made is: "Removed Author title, this is an opensource project, authored by the community". Ethically, how do you credit an author? Or is it just on a product-by-product basis?
In a pull request to an open source project on GitHub, how should the maintainer credit the original author?
Unfortunately, AWS does not support custom domain names for ECR. You'll have to make do with the auto-generated ones for now. There are 'hacks' about, which centre around Nginx proxies, but it really isn't worth the effort.
Amazon Elastic Container Repositories (ECR) have quite human-unfriendly URIs, like 99999999999.dkr.ecr.eu-west-1.amazonaws.com. Is it at all possible to configure custom domain name for an ECR? Simplistic solution would be to create a CNAME record to point to the ECR URI, but this doesn't really work (SSL certificate doesn't match the domain name, passwords generated by aws ecr get-login don't pass, cannot push images tagged with custom domain name...). Are there other options?
How to configure custom domain name for Amazon ECR
You have two separate problems here, and they're related. The first is that you've failed to configure your the name and email used in your commits, and so Git is refusing to commit any changes. The second is that because you have no commits in your repository, trying to push the branch main or master doesn't work, because it doesn't exist. That's the message that you're getting when you see “src refspec…does not match any.” You need to configure your name and email, which are stored in user.name and user.email. Note that user.name is a personal name, not a username. So, for example, someone might run these commands: $ git config --global user.name "Pat Doe" $ git config --global user.email [email protected] Then, once you've made those changes, you can commit and it should succeed. Once you have commits, you can push them. Note that if you want to use main as the default branch but your repository is using master, you can run git branch -m main and that will rename the branch. If you want to do that, do it before you push.
I'm trying to upload a navbar file to git, but it keeps saying that some references couldn't be pushed, and I don't know where I'm going wrong. PS E:\navbar> git init Initialized empty Git repository in E:/navbar/.git/ PS E:\navbar> git add README.md fatal: pathspec 'README.md' did not match any files PS E:\navbar> git commit -m "first commit" Author identity unknown *** Please tell me who you are. to set your account's default identity. Omit --global to set the identity only in this repository. fatal: unable to auto-detect email address (got '신은영@DESKTOP-0T69V65.(no PS E:\navbar> git remote add origin https://github.com/kimdohyeon0811/learnit PS E:\navbar> git push -u origin main error: src refspec main does not match any error: failed to push some refs to 'https://github.com/kimdohyeon0811/learnit' PS E:\navbar> git remote -v origin https://github.com/kimdohyeon0811/learn_css.git (fetch) origin https://github.com/kimdohyeon0811/learn_css.git (push) PS E:\navbar> git push origin master error: src refspec master does not match any error: failed to push some refs to 'https://github.com/kimdohyeon0811/learn_css.git' PS E:\navbar>
Error failed to push some refs to git commit
The scheduling unit of GPUs is the warp / wavefront. Usually that's consecutive groups of 32 or 64 threads. The execution time of a warp is the maximum of the execution time of all threads within that warp. So if your early exit can make a whole warp terminate sooner (for example, if threads 0 to 31 all take the early exit), then yes, it is worth it, because the hardware can schedule another warp to execute and that reduces the overall kernel run time. Otherwise, it probably isn't, because even if threads 1 to 31 take the early exit, the warp still occupies the hardware until thread 0 is done.
We've written GLSL shader code to do ray tracing visualisation using the GPU. It seems to be pretty standard to put an early exit break in the ray marching loop, so if the light is extinguished, the loop breaks. But from what I know about GPU code, each render will take as long as the longest loop run. So my question is: is it worth the early exit? e.g.

for(int i = 0; i < MAX_STEPS; i++){
    //Get the voxel intensity value from the 3D texture.
    dataRGBA = getRGBAfromDataTex(dataTexture, currentPosition, dataShape, textureShape);
    //Get the contribution from the light.
    lightRayPathRGBA = getPathRGBA(currentPosition, light.position, steps, tex);
    //This is the light absorbed, so we need to take 1.0 - it to get the light transmitted.
    lightRayRGBA = (vec4(1.0) - lightRayPathRGBA) * vec4(light.color, light.intensity);
    apparentRGB = (1.0 - accumulatedAlpha) * dataRGBA.rgb * lightRayRGBA.rgb * dataRGBA.a * lightRayRGBA.a;
    //apparentRGB = (1.0 - accumulatedAlpha) * dataRGBA.rgb * dataRGBA.a * lightRayRGBA.a;
    //Perform the composition.
    accumulatedColor += apparentRGB;
    //Store the alpha accumulated so far.
    accumulatedAlpha += dataRGBA.a;
    //Advance the ray.
    currentPosition += deltaDirection;
    accumulatedLength += deltaDirectionLength;
    //If the length traversed exceeds the ray length, or the accumulated alpha reaches 1.0, exit.
    if(accumulatedLength >= rayLength || accumulatedAlpha >= 1.0){
        break;
    }
}
is early exit of loops on GPU worth doing?
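The scheduling argument above can be illustrated with a toy model (the step counts are purely illustrative, not GPU measurements): a warp's run time is the maximum over its lanes, so an early exit only pays off when every thread in the warp takes it.

```python
def warp_time(thread_steps, warp_size=32):
    """Total time: each warp costs as much as its slowest lane."""
    total = 0
    for i in range(0, len(thread_steps), warp_size):
        total += max(thread_steps[i:i + warp_size])
    return total

# Warp 0 exits early as a whole, warp 1 runs long: the early exit saves time.
uniform_exit = [10] * 32 + [100] * 32
# One straggler in warp 0 keeps the whole warp busy: no saving at all.
straggler = [100] + [10] * 31 + [100] * 32

print(warp_time(uniform_exit))  # 110
print(warp_time(straggler))     # 200
```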
This has happened because some changes have been pushed to the remote branch since you last downloaded it. Your local branch was not up-to-date with the remote branch. You have to bring them to the same level and then push your local branch to the remote repository. This can be done using either of them: 1. Fetch + Rebase: git fetch git rebase origin/branch_name There are 2 steps in this approach. You first fetch all the changes made in the remote repository. After fetching, you then rebase your branch with the remote branch. 2. Fetch + Merge git fetch git merge origin/branch_name In this, you first fetch all the changes but instead of rebasing, you merge the remote changes onto your local branch. 3. Pull git pull It is basically git fetch followed by git merge in a single command. You can use this command and git will automatically perform a fetch and then merge the remote changes into your local branch. 4. Pull with Rebase git pull --rebase origin branch_name This tells git that rebase be used instead of merge. Git will fetch first and then rebase the remote changes with your local changes. After this has been done, some conflicts may occur. Resolve this conflicts and then you can push the changes to remote repository using the below command. git push origin branch_name For merge vs rebase: When do you use git rebase instead of git merge?
I keep getting an error over one file after trying to push a repo to my github after a minor change was pushed from another computer, even after pulling the update. Here is the error I get when I try to git push origin master: To https://github.com/[me]/[project].git ! [rejected] master -> master (non-fast-forward) error: failed to push some refs to 'https://github.com/[me]/[project].git' To prevent you from losing history, non-fast-forward updates were rejected Merge the remote changes (e.g. 'git pull') before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details. what is going on here?
Can't push or pull git repo from GitHub
You could try nginx-stomp; it supports RabbitMQ etc. It is an nginx module.

# nginx.conf
server {
    ....
    location /sendqueue {
        stomp_pass stomp;
        stomp_command SEND;
        stomp_headers "destination:/amq/queue/stompqueue persistent:false content-type:text/plain content-length:38";
        stomp_body "This is new message sending from stomp";
    }
}
I've enabled RabbitMQ's Web-Stomp plugin by following this, because we need STOMP over WebSockets, and it works. Now what I need is an Nginx server as a reverse proxy in front of my RabbitMQ server. Here is the configuration part from the Nginx server:

http {
    upstream websocket {
        # this is the actual rabbitmq server address
        server 15.15.181.73:15674;
    }

    server {
        # the nginx server address is 15.15.182.108
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        location /ws/ {
            proxy_pass http://websocket;
            proxy_http_version 1.1;
            proxy_set_header Upgrade websocket;
            proxy_set_header Connection upgrade;
        }
    }
}

And here is the JavaScript code for accessing the server:

var WebSocket = require('ws');
var Stomp = require('stompjs');

var ws = new WebSocket('ws://15.15.182.108/ws', {
    protocolVersion: 8,
    origin: 'http://15.15.182.108/ws',
    rejectUnauthorized: false
});
var client = Stomp.over(ws);

var on_connect = function(){
    client.send("/queue/test", {priority: 9}, "Hello, STOMP for /queue/test");
};
var on_error = function(error){
    console.log("error");
    console.log(error.headers.message);
};

client.connect('test','test',on_connect,on_error,'/');

Now what's confusing me is that if I need to access the RabbitMQ server through WebSockets, I should append /ws after the IP address, and it works if I access it directly. However, it seems that I can't put the /ws in the upstream section after the IP address in the Nginx configuration file. So what should I do to make this work? Thanks.
How can I use Nginx as a reverse proxy to my RabbitMQ's websocket function?
This is just an idea; it may not work, so you may still need to work on it to improve it. Save it to Prometheus with the structure:

metric name: config_name_version
labels: host=h1, config_name=c1
value: 1 (integer only, not a string v1)
time: timestamp

Use math: population standard deviation = the Prometheus aggregation operator stddev. If the version values are all the same, then the std dev is 0 (e.g. stddev(100,100,100,100) = 0); if even a single value is different, it won't be 0 (e.g. stddev(101,100,100,100) = 0.433). Of course you need to write it in PromQL with grouping per config_name, e.g.:

stddev by (config_name) (config_name_version{})

Grafana will add the configured/dashboard time condition. You can translate numeric values to YES/NO strings on the Grafana level (the "value mapping" feature). You also have a host label, so you can add more filters (e.g. dashboard variables for host and config name selection) to make the dashboard more user friendly, to show hosts with old versions, to visualize updates over time, ...
We have microservices and they require a set of configurations that are broadcasted to hosts by a separate system (say publisher) whenever there is an update in the configuration.The receiving hosts are publishing the below metrics -{ "host": "h1", "configName": "c1", "configNameVersion": "v1", }There could be a delay in pushing these configs to all the hosts and hosts can be in an inconsistent state for some time. We want to capture this inconsistent state as Yes/No in grafana.This can easily be done using SQL query:(if the distinct count of configVersion across hosts for any configName is greater than 1 then inconsistent state)select distinct count configNameVersion as "version_count" from table_name group by configName having (distinct count configNameVersion)>1How can I represent the same in Prometheus and show it in the grafana dashboard?Assume the publisher system doesn't publish any metrics.Any alternative idea to solve this (with minimum criticality) or pointer to the appropriate document/example would be really nice. Feel free to comment if I can add more information :)
Observe Service consistency state in Grafana
I think you could achieve what you want with a Git post-commit hook; whether or not this is a wise thing to be doing is another matter entirely. See http://kernel.org/pub/software/scm/git/docs/v1.3.3/hooks.html for the hook documentation.
Is there anyway I can push changes from my Github Repository to a remote server automatically? I would like to deploy changes in master branch in my github repository to a remote deployment server. If possible.
GitHub repository to remote server
I didn't know this was possible with aptitude; I always used apt-cache policy to get that information (aptitude uses the same repositories as shown with apt-cache policy). You can use apt-cache policy fabric to show version and repository information about the fabric package. As pointed out in another answer, you can also use aptitude versions fabric to get the same information (in a slightly different format).
I am trying to prepare an AWS instance by installing some software, one of which is Fabric for Python, a SSH connection library. By default, AWS's yum doesn't have access to a Fabric distribution to install, so I was attempting to figure out where Aptitude would get Fabric from. I can't figure out a way to get what repo Fabric is in using Aptitude, or Yum for that matter. Also, on a similar note, if I do have the url of a specific repo, how would I go about listing all of the packages it has available?
Aptitude: Show What Repo a Package is From, Listing Contents of a Repo
Unfortunately not SNS. You can invoke a Step Function from:

Lambda
API Gateway
EventBridge
CodePipeline
IoT Rules Engine
(other) Step Functions
I want to execute my step function when an SNS message is published and consume it. What's the best solution for this? I know that one option is using a Lambda, subscribe to the SNS topic, and then trigger the SF from inside the Lambda....I was wondering if there's any (simpler) solution without this intermediate step.
Trigger a Step Function from SNS
Azure has deprecated support for managing Helm charts using the Azure CLI, so you will need Helm client version 3.7.1 to push Helm charts to ACR. To push the Helm charts to ACR, follow these steps:

Enable OCI support:
export HELM_EXPERIMENTAL_OCI=1

Save your chart to a local archive:
cd chart-dir
helm package .

Authenticate with the registry using the helm registry login command:
helm registry login $ACR_NAME.azurecr.io \
  --username $USER_NAME \
  --password $PASSWORD

Push the chart to the registry as an OCI artifact:
helm push chart-name-0.1.0.tgz oci://$ACR_NAME.azurecr.io/helm

You can use the above steps in an Azure DevOps pipeline and it will work as expected. For more info on pushing Helm charts to ACR, refer to this doc.
I get the below error when I try to push the chart to ACR. Can you suggest the steps to be done here?"This command is implicitly deprecated because command group 'acr helm' is deprecated and will be removed in a future release. Use 'helm v3' instead."I followed this article to create helm charthttps://cloudblogs.microsoft.com/opensource/2018/11/27/tutorial-azure-devops-setup-cicd-pipeline-kubernetes-docker-helm/These articles also describe the issue, but I don't understand what needs to be done to fix it.https://github.com/Azure/azure-cli/issues/14498https://gitanswer.com/azure-cli-az-acr-helm-commands-not-working-python-663770738https://github.com/Azure/azure-cli/issues/14467Here is the yaml script which throws error- bash: | cd $(projectName) chartPackage=$(ls $(projectName)-$(helmChartVersion).tgz) az acr helm push \ -n $(registryName) \ -u $(registryLogin) \ -p '$(registryPassword)' \ $chartPackage Chart.yaml apiVersion: v1 description: first helm chart create name: helmApp version: v0.3.0
pushing the helm chart to azure container registry fails
Yes, you can use the resource-level classes such as Table with both the real DynamoDB service and DynamoDB Local via the DynamoDB service resource, as follows:

resource = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
table = resource.Table(name)
I have some existing code that uses boto3 (python) DynamoDB Table objects to query the database:import boto3 resource = boto3.resource("dynamodb") table = resource.table("my_table") # Do stuff hereWe now want to run the tests for this code using DynamoDB Local instead of connecting to DynamoDB proper, to try and get them running faster and save on resources. To do that, I gather that I need to use a client object, not a table object:import boto3 session = boto3.session.Session() db_client = session.client(service_name="dynamodb", endpoint_url="http://localhost:8000") # Do slightly different stuff here, 'cos clients and tables work differentlyHowever, there's really rather a lot of the existing code, to the point that the cost of rewriting everything to work with clients rather than tables is likely to be prohibitive.Is there any way to either get a table object while specifying the endpoint_url so I can point it at DynamoDB Local on creation, or else obtain a boto3 dynamodb table object from a boto3 dynamodb client object?PS: I know I could also mock out the boto3 calls and not access the database at all. But that's also prohibitively costly, because for all of the existing tests we'd have to work out where they touch the database and what the appropriate mock setup and use is. For a couple of tests that's perfectly fine, but it's a lot of work if you've got a lot of tests.
Can I get a boto3 DynamoDB table object from a client object?
You need to derive (calculate the rate of change of) your metric on the time series database (TSDB) level. Please check the documentation of the TSDB you use. For example: the InfluxDB DERIVATIVE documentation.
I have a counter metric which I want to display as requests/time period. How can I display it in Grafana? All I was able to do was to show it as increasing value:
Group counts by time in Grafana
The biggest use is with LoadBalancer services where you want to expose something on (usually) 80 or 443, but don't want the process to run as root, so it's listening on 8080 or something internally. This lets you map things smoothly. (It is also useful for ClusterIP services within the cluster, for the same reason.)
Today I have started to learn about Kubernetes because I have to use it in a project. When I came to the Service object, I started to learn what is the difference between all the different types of ports that can be specified. I think now I undertand it.Specifically, theport(spec.ports.port) is the port from which the service can be reached inside the cluster, andtargetPort(spec.ports.targetPort) is the port that an application in a container is listening to.So, if the service will always redirect the traffic to the targetPort, why is it allowed to specify them separately? In which situations would it be necessary?
What is the advantage of allowing port and targetPort to be different in Kubernetes services?
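As a concrete sketch, a Service of the kind described above might look like this (the names and ports are invented for illustration): clients reach the Service on port 80, while the non-root container process listens on targetPort 8080.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 8080  # port the container actually listens on
```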
Check if this new feature (Jan. 2023) can help: GitHub Actions – Support for organization-wide required workflows public beta.

Today, we are announcing the public beta of required workflows in GitHub Actions. Required workflows allow DevOps teams to define and enforce standard CI/CD practices across many source code repositories within an organization without needing to configure each repository individually. Organization admins can configure required workflows to run on all or selected repositories within the organization. Required workflows will be triggered as required status checks for all pull requests opened against the default branch, which blocks the ability to merge the pull request until the required workflow succeeds. Individual development teams at the repository level will be able to see which required workflows have been applied to their repository.

(Screenshot: required workflows run at the repository level.)

In addition to reducing duplication of CI/CD configuration code, required workflows can also help companies with the following use cases:

Security: invoke external vulnerability scoring or dynamic analysis tools.
Compliance: ensure that all code meets an enterprise's quality standards.
Deployment: ensure that code is continuously deployed in a standard way.

Learn more about required workflows.
In my company we have a few hundred repositories, for at least 20 of those we want to apply linting by doing github actions. It seems not good to copy the same github action workflow into each.github/workflowsfolder for a few reasons one is that the action is duplicated, no single source of truth, there should be one file somewhere if we change it then all the other files change.How to apply one github action to multiple github repositories without copying this file into every single.github/workflowsfolder in every one of these github projects? This is a github enterprise account.
How to apply a github action across multiple projects in my organization?
You can use something like this to only check for alerts on Mondays around 04:25:

sum(count_over_time({app="my-service"} | label_format day=`{{ __timestamp__.Weekday }}` | label_format hour=`{{ __timestamp__.Hour }}` | label_format minute=`{{ __timestamp__.Minute }}` | hour = 4 and day = "Monday" and minute > 25 and minute < 35 [5m])) > 0

This creates labels for day, hour and minute and only returns anything when it is Monday, between 04:25 and 04:35 UTC. It uses golang time formatting (https://www.pauladamsmith.com/blog/2011/05/go_time.html) to do that. It's a bit finicky and there might be a different way using predefined methods.
I'm trying to trigger an alert using Grafana Loki: if no log is seen by 5 AM each day, an alert needs to fire. Here is the query I came up with. I don't think it's correct, as the offset 16h25m shifts the time range by 16 hours and 25 minutes:

count_over_time({env="dev", app="test-app"} |= "YOUR Test Keyword" [1d] offset 16h25m) == 0

Any suggestions from Grafana Loki experts would be appreciated.
Setting up Daily Grafana Loki Alert, If no log seen by 4:25 AM, each day
Your problem is that you don't call the original entrypoint to start Cassandra: you overwrote it with your own code, which just runs cqlsh without starting Cassandra. You need to modify your code to start Cassandra using the original entrypoint script (source) that is installed as /usr/local/bin/docker-entrypoint.sh, then execute your script, and then wait for the termination signal (you can't just exit from your script, because that would terminate the container).
My docker file FROM cassandra:4.0 MAINTAINER me EXPOSE 9042 I want to run something like when cassandra image is fetched and super user is made inside container. create keyspace IF NOT EXISTS XYZ WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }; I have also tried out adding a shell script but it never connects to cassandra, my modified docker file is FROM cassandra:4.0 MAINTAINER me ADD entrypoint.sh /usr/local/bin/entrypoint.sh RUN chmod 755 /usr/local/bin/entrypoint.sh RUN mkdir scripts COPY alter.cql scripts/ RUN chmod 755 scripts/alter.cql EXPOSE 9042 CMD ["entrypoint.sh"] My entrypoint looks like this #!/bin/bash export CQLVERSION=${CQLVERSION:-"4.0"} export CQLSH_HOST=${CQLSH_HOST:-"localhost"} export CQLSH_PORT=${CQLSH_PORT:-"9042"} cqlsh=( cqlsh --cqlversion ${CQLVERSION} ) # test connection to cassandra echo "Checking connection to cassandra..." for i in {1..30}; do if "${cqlsh[@]}" -e "show host;" 2> /dev/null; then break fi echo "Can't establish connection, will retry again in $i seconds" sleep $i done if [ "$i" = 30 ]; then echo >&2 "Failed to connect to cassandra at ${CQLSH_HOST}:${CQLSH_PORT}" exit 1 fi # iterate over the cql files in /scripts folder and execute each one for file in /scripts/*.cql; do [ -e "$file" ] || continue echo "Executing $file..." "${cqlsh[@]}" -f "$file" done echo "Done." exit 0 This never connects to my cassandra Any ideas please help. Thanks .
How can we run a flyway/migration script inside a Cassandra Dockerfile?
Seems like there is no straightforward way to do that, as the rd command doesn't put the folder into the Recycle Bin - it moves it straight into folder heaven. However, if you are lucky the files have not yet been overwritten by your system, and you can use a file recovery programme such as "Recuva". I'd suggest giving that a go.
I tried to upload my project to GitHub but accidentally deleted it. How can I restore it?
I accidentally removed my project using cmd command rd. git /s/q
The NSInvocation retains its target because it needs the target to still be around when the timer fires. That fact is sort of buried in the documentation for -[NSInvocation retainArguments]: If the receiver hasn’t already done so, retains the target [...] NSTimers always instruct their NSInvocations to retain their arguments, [...] because there’s usually a delay before an NSTimer fires. This is what is meant when someone says "Framework classes may be retaining things without you knowing". Don't worry about absolute retain counts. What you should perhaps be worrying about instead* is the fact that, every time you run this code (which you seem to indicate happens fairly often), you are creating a new NSInvocation and repeating NSTimer instance with exactly the same attributes as the last time, which seems like a waste of memory. *Unless this is just test code.
I have a problem regarding NSTimer. See the following code:

NSTimeInterval timeInterval = 1.0f;
SEL selector = @selector(executeDataRefresh);
NSMethodSignature *methodSignature = [[ExecuteDataRefesh class] instanceMethodSignatureForSelector:selector];
NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:methodSignature];
[invocation setTarget:executeDataRefresh];
[invocation setSelector:selector];
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:timeInterval invocation:invocation repeats:YES];

The object executeDataRefresh's retain count will now increase by 1 on each invocation of the method executeDataRefresh. So after 1 minute the retain count is 60. I know the method retainCount shouldn't be used, but is this method really this "incorrect"? How come?
NSTimer retain count increases, why?
After some more research I found a solution. The solution is to use register_shutdown_function. One drawback is that this function will get executed after PHP has run out of memory and not before (which is still fine with me, since I can just let the users know about it).
Is it possible to find out (programmatically) if the current PHP process is about to run out of memory? Some Background: I am the author of the Bulk Delete WordPress plugin, which allows people to delete posts, users etc in bulk. One common complaint I get from my plugin users is that they get a blank page when trying to delete huge amount of posts. This happens because PHP runs out of memory. If I can find out that the PHP process is about to run out of memory, then I can try to delete in batches or at least give a warning to the user, instead of just throwing a blank page.
Find out if PHP ran out of memory
Assuming that your S3 content is static and doesn't change often, I believe it makes more sense to use a one-time Job than a DaemonSet to copy the whole S3 bucket to a local disk. It's not clear how you would signal the kube-scheduler that your node is not ready until the S3 bucket is fully copied. But perhaps you can taint your node before the job is finished and remove the taint after the job finishes. Note also that S3 is inherently slow and meant to be used for processing (reading/writing) single files at a time, so if your bucket has a large amount of data it will take a long time to copy to the node disk. If your S3 content changes dynamically (constantly), it would be more challenging, since you would have to keep the files in sync. Your apps would probably need a cache architecture, where you go to the local disk to find files and, if they are not there, make a request to S3.
I wanted to copy an S3 bucket onto Kubernetes nodes as a DaemonSet, so that a new node also gets the S3 bucket copy as soon as it is launched. I prefer an S3 copy to the Kubernetes node because copying from S3 directly to each pod via the AWS API would mean multiple calls, as multiple pods require it, and it would take time to copy the content every time a pod launches.
How to copy an S3 bucket onto Kubernetes nodes
In case someone else comes along seeking answers - note that the question has been well answered via discussion in the comments. It sounds like my practice won't blow anything up but there's a better way: a "Version Control System" or "VCS". I'm going to have to do a little research and pick one before my semi-bad habit gets too ingrained. Thanks @xantos and @DanielLane!
I'm learning C# (self-teaching first real programming language other than VBA). Consistently, my text book asks me to create new project and add a bunch of existing items from an old project when I don't want to mess up my existing project. This seems to be their way of creating a backup. They never really said not to just copy folders so I've been doing that and it works fine. The IDE doesn't allow you to save a whole project with a new name (i.e. Save As: "BACKUP Of projectName") so instead I close the IDE and just copy the folder. It's been a great time saver rather than following their laborious instructions but I fear that I'm teaching myself a bad habit. Please tell me my fears are unfounded.
Is it a bad idea to copy project folders for backup? Add existing items instead?
OK, several things to note:

- didReceiveMemoryWarning will be called before an out-of-memory crash. Not other crashes. If you handle the warning properly and free up memory, then you can avoid the out-of-memory condition and not crash.
- You can manually trigger a memory warning in the simulator under the Hardware menu. Highly recommend doing this to test your handling of didReceiveMemoryWarning.
- Instruments helps you debug leaks (though not all of them) - it's not really that useful for crashes.
- No, I don't personally use NSLog - I just breakpoint the memory warnings when I'm debugging.
I'm at the part of my development process where I'm tracking down crashes and memory leaks. As a strategy, do you put any NSLog messages or notifications of some sort into didReceiveMemoryWarning:? The documentation for this method is rather sparse. Is it accurate to say that, before a crash happens, the UIViewController will trigger that method? Is that a starting point before even going forward with Instruments?
iOS: helpfulness of didReceiveMemoryWarning:
First check if the issue persists with the latest Flume agent available (release 1.7), using a recent image like mrwilson/docker-flume. You can compare its docker-compose.yml with yours. An image like gilt/docker-flume is older and still on 1.5.
I have a simple Flume agent with the following configuration:

agent.sources = http-source
agent.sinks = logger-sink
agent.channels = logger-channel

# HTTP Source ###############################
agent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
agent.sources.http-source.channels = logger-channel
agent.sources.http-source.port = 81

# Logger Sink ###############################
agent.sinks.logger-sink.type = logger
agent.sinks.logger-sink.channel = logger-channel

# Channel ###############################
agent.channels.logger-channel.type = memory
agent.channels.logger-channel.capacity = 1000

The only thing that the Flume agent does is receive the HTTP POST request through the HTTP Source and log the events using the Logger Sink. The problem I have is as follows: sometimes when I send the HTTP POST request to the Flume agent, it takes 1-5 seconds until I see the logs in the console. This is only the case for the first message sent after starting the Flume agent. After sending several messages I see the logs immediately in the console. My question is: is it a warm-up issue in Flume? It seems that if I do not send any message for a while, I will again have some delay before seeing the logs in the console. Notice that I start the Flume agent in a Docker container using a docker-compose file.
Simple Flume agent has some lag when logging to the console
To change the EIP you can just use Python boto. Something like this:

#!/usr/bin/python
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1",
    aws_access_key_id='<key>',
    aws_secret_access_key='<secret>')
reservations = conn.get_all_instances(filters={'instance-id': 'i-xxxxxxxx'})
instance = reservations[0].instances[0]
old_address = instance.ip_address
new_address = conn.allocate_address().public_ip
conn.disassociate_address(old_address)
conn.associate_address('i-xxxxxxxx', new_address)
I am running some web crawling jobs on an AWS hosted server. The crawler scrapes data from an eCommerce website but recently the crawler gets "timeout errors" from the website. The website might have limited my visiting frequency based on my IP address. Allocating a new Elastic-IP address solves the problem, but not for long.My Question: Is there any service that I can use to automatically and dynamically allocate & associate new IPs to my instance? Thanks!
AWS: dynamically allocate & associate new IP addresses to EC2 instance?
As described in this doc by phoenixnap, there are several ways to fix the "helm has no deployed releases" error. One way is by running the following command:

kubectl -n kube-system patch configmap [release name].[release version] --type=merge -p '{"metadata":{"labels":{"STATUS":"DEPLOYED"}}}'

[release name] is the name of the release you want to update. [release version] is the current version of your release. Since Helm 3 stores the deployment history as Kubernetes secrets, check the deployment secrets:

kubectl get secrets

Find the secret referring to the failed deployment, then use the following command to change the deployment status:

kubectl patch secret [name-of-secret-related-to-deployment] --type=merge -p '{"metadata":{"labels":{"status":"deployed"}}}'

You can also refer to this blog by Jacky Jiang for more information about how to upgrade helm.
After I uninstalled a release (with --keep-history), a release history with "uninstalled" status remains. If I then want to install this release again, both install and upgrade --install fail: install fails because of "cannot re-use a name that is still in use", while upgrade --install fails because of "xxx has no deployed releases". Is the only way around this to remove the history, or to uninstall without history? I tried both the install and upgrade --install commands; both failed.
How to install/upgrade an uninstalled release (--keep-history)
Yes, that's why it is called conservative. Every integer that looks like it points inside the heap will make the region non-garbage. And as a result, a memory leak may occur.
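A toy model of that misdetection may make it concrete (illustrative Python, not a real collector: heap addresses and stack words are just integers here):

```python
# Toy model of conservative marking: the collector scans raw words on the
# "stack" and treats anything that coincides with a heap address as a pointer.

heap = {0x1000: "live object", 0x2000: "unreferenced object"}

# The stack holds one real pointer (0x1000) and one plain integer that
# happens to equal the address of the unreferenced object (0x2000).
stack_words = [0x1000, 42, 0x2000]

def conservative_mark(stack, heap_addrs):
    """Mark every heap address that any stack word happens to match."""
    return {word for word in stack if word in heap_addrs}

marked = conservative_mark(stack_words, set(heap))
# 0x2000 is retained even though no real pointer refers to it - the "leak".
```

Here the integer 0x2000 is just data, but the conservative scan cannot tell, so the object at that address survives collection.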
I've looked at Conservative GC Algorithmic Overview. Can a misdetection happen in the 'marking' part? If some data is stored and by coincidence happens to be the same as the address of an allocated memory region, will the collector keep that memory?
Mark phase misdetection on garbage collection for C
I want to setup SSL certificate and HTTPS Listener for ALB at this subdomain that was provided by AWS - how I can do it? You can't do this. This is not your domain (AWS owns it) and you can't associate any SSL certificate with it. You have to have your own domain that you control. Once you obtain the domain, you can get free SSL certificate from AWS ACM.
I have the following setup at AWS ECS:

- A container with the Caddy web server at port 80 that serves static files and proxies /api/* requests to the backend
- A container with the backend at port 8000
- An EC2 instance at ECS
- An ALB at the subdomain http://some-subdomain-12345.us-east-2.elb.amazonaws.com/ (the subdomain was provided automatically by AWS) with an HTTP listener

I want to set up an SSL certificate and HTTPS listener for the ALB at this subdomain that was provided by AWS - how can I do it? P.S. I have seen an option for an ALB with an HTTPS listener when attaching a custom domain, i.e. example.com, where AWS will provide an SSL certificate for it. But this is a pet-project environment and I'm not worried about a real domain.
Can I setup SSL on an AWS provided ALB subdomain without owning a domain?
Found the answer:

select * from pg_proc where proname ilike '%<function_name>%'

In the column prosrc you can find the function definition. It seemed empty in SQLWorkbenchJ, but the code revealed itself after a double-click.
A co-worker at one point defined a function in AWS Redshift. I have some doubts about its results and would like to see the code behind the function. Is there any query I can run (or another method) that returns this code?
How to find definition of user defined function in AWS Redshift
To find the size of the cache directory, use the code below.

public void clearCache() {
    //clear memory cache
    long size = 0;
    cache.clear();
    //clear SD cache
    File[] files = cacheDir.listFiles();
    for (File f : files) {
        size = size + f.length();
        f.delete();
    }
}

This will return the number of bytes.
I'm using fedor's lazy loading list implementation in my test application where I can clear the cache with a single button click. How can I get the cache size of the loaded images in the listview and clear the cache programmatically? Here is the code for saving the cached images:

public ImageLoader(Context context){
    //Make the background thread low priority. This way it will not affect the UI performance.
    photoLoaderThread.setPriority(Thread.NORM_PRIORITY-1);
    mAssetManager = context.getAssets();
    //Find the dir to save cached images
    if (android.os.Environment.getExternalStorageState().equals(android.os.Environment.MEDIA_MOUNTED))
        cacheDir = new File(android.os.Environment.getExternalStorageDirectory(),"LazyList");
    else
        cacheDir = context.getCacheDir();
    if(!cacheDir.exists())
        cacheDir.mkdirs();
}

EDIT: So basically I added this piece of code in the clearCache() method, but I still cannot see the images start loading again when I'm scrolling.

public void clearCache() {
    //clear memory cache
    long size=0;
    cache.clear();
    //clear SD cache
    File[] files = cacheDir.listFiles();
    for (File f:files) {
        size = size+f.length();
        if(size >= 200)
            f.delete();
    }
}
How to get cache size in Android
My approach: I configured the map reduce program to use 16 reducers, so the final output consisted of 16 files (part-00000 to part-00015) of 300+ MB each, and the keys were sorted in the same order in both input files. Now at every stage I read 2 input files (around 600 MB) and did the processing. So at every stage I had to hold about 600 MB in memory, which the system could manage pretty well. The program was pretty quick - it took around 20 minutes for the complete processing. Thanks for all the suggestions! I appreciate your help.
I am working with 2 large input files of the order of 5 GB each. They are the output of a Hadoop map reduce job, but as I am not able to do dependency calculations in map reduce, I am switching to an optimized for loop for the final calculations (see my previous question on map reduce design: Recursive calculations using Mapreduce). I would like suggestions on reading such huge files in Java and doing some basic operations; finally I will be writing out data on the order of around 5 GB. I appreciate your help.
Reading large input files (10 GB) through a Java program
HTTrack website mirroring utility. Wget and scripts. Rsync and FTP login (or SFTP for security). Git can be used for backup and has security features and networking ability. 7Zip can be called from the command line to create a zip file. In any case you will need to implement either secure FTP (SSH-secured) or a password-secured upload form. If you feel clever you might use WebDAV.
Does anyone know of a script or program that can be used for backing up multiple websites? Ideally, I would like to have it set up on a server where the backups will be stored. I would like to be able to add the website login info and have it connect and create a zip file or similar, which would then be sent back to the remote server to be saved as a backup. It would also need to be able to be set up as a cron job so it backs up at least once a day. I can find PC-to-server backups that are similar, but no server-to-server remote backup scripts. It would be heavily used and needs to have a GUI so the less techy can use it too. Does anyone know of anything similar to what we need?
Multiple Website Backup
I'm using the upstream nginx ingress and the helm controller to install it. BTW, I have carefully gone through the values and overridden them as below using the helm release. Now it is working fine: all of my ingresses came online to serve traffic even without the annotation. No errors appeared in the logs. I suppose my previous values may have caused the issue. I'm sharing the updated and fixed values below; I hope they will help someone who hits a similar issue.

controller:
  kind: DaemonSet
  hostNetwork: true
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    role: minion
  extraArgs:
    "default-server-port": 8182
  service:
    enabled: false
  publishService:
    enabled: false
I have upgraded my nginx controller from the old stable repository to the new ingress-nginx repository, version 3.3.0. The upgrade succeeded without an issue. My ingress resources stopped working after the upgrade, and after annotating kubernetes.io/ingress.class: nginx on the existing resources, I could see the below message in the nginx pods. This is the output for my kiali ingress resource.

I1008 10:53:00.046817 9 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"istio-system", Name:"istio-kiali", UID:"058a7b68-191a-4cdf-a0dd-023faffbb6a5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"26912", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress istio-system/istio-kiali

Still I'm not able to access it. Does anyone have any idea about the issue? Your valuable thoughts and suggestions would be appreciated.
nginx ingress resources not working after upgrading to ingress-nginx repo from stable repo
If you make the methods final, then SonarQube will no longer trigger the warning.
My entities (using Spring Boot with Spring Data JPA and Hibernate) all extend from an AbstractEntity class I have defined. This class implements equals() and hashCode() in such a way that subclasses do not need to handle this anymore. SonarQube will now report a violation about "Subclasses that add fields should override equals" for each subclass. I can suppress this by adding @SuppressWarnings("squid:S2160") on each subclass. But I was wondering if there is a way to tell SonarQube that this rule should not trigger for subclasses of AbstractEntity, so I don't need to repeat the suppression of the warning in each subclass.
Ignore sonarqube rule 'Subclasses that add fields should override "equals"' for all subclasses?
The root option has to point to the public directory: server { server_name lumen.dev; root /var/www/lumen/public; The error appears because it's trying to call /index.php?$query_string which is relative to the root. So it tries to find /var/www/lumen/index.php in an endless loop.
I'm trying to setup Lumen - "micro-framework" built on top of Laravel's components. On server-side there's nginx + php-fpm. Here's my nginx config: server { server_name lumen.dev; root /var/www/lumen; location / { include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_NAME /index.php; fastcgi_param SCRIPT_FILENAME /var/www/lumen/public/index.php; try_files $uri $uri/ /index.php?$query_string; } } This config work fine when I'm calling defined route, e.g. I see "Lumen." response when opening http://lumen.dev. But when I try to open undefined route like http://lumen.dev/404 I see "500 Internal Server Error" in browser and this message in nginx error log: rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: lumen.dev How can I fix my nginx conf to make it working?
Lumen + nginx = error 500, rewrite or internal redirection cycle while internally redirecting to "/index.php"
This isn't CORS related -- it's S3 itself. The S3 website endpoints are only equipped for GET and HEAD. Anything else should be denied before the redirection rules are checked. "Website Endpoint: Supports only GET and HEAD requests on objects." — http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
I have the following CORS configuration for S3 in order to use one of my buckets as a static website hosting:<?xml version="1.0" encoding="UTF-8"?> <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"> <CORSRule> <AllowedOrigin>*</AllowedOrigin> <AllowedMethod>GET</AllowedMethod> <AllowedMethod>POST</AllowedMethod> <MaxAgeSeconds>3000</MaxAgeSeconds> </CORSRule> </CORSConfiguration>Then I have the following Edit Redirection Rules:<RoutingRules> <RoutingRule> <Condition> <KeyPrefixEquals>abc</KeyPrefixEquals> </Condition> <Redirect> <HostName>myec2instance.com</HostName> </Redirect> </RoutingRule>What I want to do is when S3 receives a POST to /abc redirects the request and request body to my ec2 instance. The redirection rule is working properly (I was able to test this by switching POST to a GET request) but for any reason S3 is returning HTTPResponse 405 when the request is a POST. Any ideas why?
S3 POST request to S3 with response 405
Can't be done with the current version (3) of the GitHub API. You will have to deal with the mix of files until they add a parameter (much like how the listing method has the path flag) that allows you to specify a file to limit the view/compare commit method to.
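Until then, the post-processing step the question mentions is straightforward: split the unified diff on its `diff --git` headers and keep only the section for the wanted path (a sketch; it assumes git's standard `diff --git a/... b/...` header format):

```python
def filter_diff(unified_diff, path):
    """Return only the per-file section of a unified diff matching `path`."""
    sections = []
    current = None
    for line in unified_diff.splitlines(keepends=True):
        if line.startswith("diff --git "):
            current = []
            # keep this section only if the header is for the requested file
            if line.rstrip("\n").endswith(f"b/{path}"):
                sections.append(current)
            else:
                current = None  # lines until the next header are dropped
        if current is not None:
            current.append(line)
    return "".join(line for section in sections for line in section)
```

Note this still downloads the full diff once, so the API-size concern for very large diffs remains.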
Using the GitHub API's compare endpoint, I can request the unified diff between two commits:

curl -H 'Accept: application/vnd.github.3.diff' \
  'https://api.github.com/repos/danvk/dygraphs/compare/01275da4...335011f'

Using the git command line tool, I can filter that diff to just a single file:

git diff 01275da4..335011f dygraph.js

Is there any way to do this with the GitHub API? I realize that I can filter down to just that diff as a post-processing step, but this could run into API restrictions if the diff contains a large file in addition to a small file.
Is it possible to get the diff for just one file using the github API?
Navigate to Task Manager and end all running Docker Desktop processes, then:

docker logout
docker login

This worked for me.
I keep getting this error in my terminal using WSL2:

C:\windows\system32>docker run -d -p 80:80 docker/getting-started
Unable to find image 'docker/getting-started:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": Service Unavailable.
See 'docker run --help'.
docker: Error "Get https://registry-1.docker.io/v2/: Service Unavailable" in Docker on WSL2
It is not strictly related to GitHub; it is emoji support, which a lot of other sites provide as well. See the following link for the GitHub markdown: github markdown. See the following link for the complete list of emoji that can be used: emoji-cheat-sheet.
Some repositories have icons in their description. For example, https://github.com/Leaflet/Leaflet is using :leaves:. Where is that documented?
using icons in github repository description field
There is no reason to "get socket.io working through nginx". Instead, just route HAProxy directly to Socket.IO (without Nginx in the middle). I recommend you check out the following links: https://gist.github.com/1014904 http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
I've been trying for hours and have read what this site and the internet have to offer. I just can't quite seem to get Socket.IO working properly here. I know nginx by default can't handle Socket.IO; however, HAProxy can. I want nginx to serve the Node apps through unix sockets, and that works great. Each has a sub-directory location set by nginx; however, now I need Socket.IO for the last app and I'm at a loss with the configuration at this point. I have the latest socket.io, HAProxy 1.4.8 and nginx 1.2.1, running Ubuntu. So reiterating, I need to get Socket.IO working through nginx to a Node app in a subdirectory, e.g. localhost/app/. Diagram: WEB => HAproxy => Nginx => {/app1 app1, /app2 app2, /app3 app3} Let me know if I can offer anything else!
Node.JS, HAproxy and Socket.IO through NGINX, app sits in subdirectory
A few things to note:

- You can use EXISTS to check if the key already exists. This is better because you can now cache users that actually have 0 transactions.
- The INCR and INCRBY commands will create the key if it doesn't already exist.

So, in pseudocode, here's what you should do:

if EXISTS user:<userid>:transcount
    return GET user:<userid>:transcount
else
    int transCountFromDB = readFromDB();
    INCRBY user:<userid>:transcount transCountFromDB
    return transCountFromDB

You may also want to execute an EXPIRE command on the key right after you do INCRBY, so that you only cache records for an acceptable time.
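The same get-or-initialize-then-increment flow, sketched in runnable form (illustrative Python with a plain dict standing in for Redis and the DB lookup stubbed as a callback; with a real client you'd swap in EXISTS/GET/INCRBY and add the EXPIRE):

```python
def get_transaction_count(cache, user_id, read_count_from_db):
    """Return the cached count, seeding the cache from the DB on a miss."""
    key = f"user:{user_id}:transcount"
    if key in cache:                      # EXISTS
        return cache[key]                 # GET
    count = read_count_from_db(user_id)   # fall back to the database
    cache[key] = count                    # INCRBY on a missing key creates it
    return count

def record_transaction(cache, user_id):
    """After inserting the DB row, bump the counter only if it is cached.

    If the key is absent (expired or never read), the next read will
    seed it from the DB, so skipping the increment here stays correct.
    """
    key = f"user:{user_id}:transcount"
    if key in cache:
        cache[key] += 1                   # INCR
```

The important property is that the counter is never incremented from zero blindly - either it was seeded from the DB first, or the increment is skipped.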
I am using ServiceStacks CacheClient and Redis libraries. I want to cache counts of the number of certain transactions that users perform. I am using the following to GET the cached value or create it from the DB if the key does not exist: public int GetTransactionCountForUser(int userID) { int count; //get count cached in redis... count = cacheClient.Get<int>("user:" + userID.ToString() + ":transCount"); if (count == 0) { //if it doent exists get count from db and cache value in redis var result = transactionRepository.GetQueryable(); result = result.Where(x => x.UserID == userID); count = result.Count(); //cache in redis for next time... cacheClient.Increment("user:" + userID.ToString() + ":transCount", Convert.ToUInt32(count)); } return count; } Now, in another operation(when the transaction occurs) I will add a row to the DB and I would like to increment my Redis counter by 1. Do I first need to check to see if the particular key exists before incrementing? I know that the Increment method of cache client will create the record if it does not exists, but in this case the counter would start at 0 even if there are transaction records in the DB. What is the proper way to handle this situation? Get key, if null, query db to get count and create the key with this number?
How to use ServiceStack CacheClient and Redis to increment or create counters
You can try running this command in each branch:

git ls-files | grep YOURFILE

and see if the file appears. Or you can use a more elaborate script like this:

for i in $(git branch | sed 's/\*//'); do
    git checkout $i
    git ls-files | grep -q YOURFILE
    if [ $? -eq 0 ]; then
        echo "File found in branch: $i"
    fi
done

It loops over each branch and runs the same check as the previous command.
Is there a way in git to find whether a file is present in multiple branches or not? If so list the branch names. Please provide help on this.
How to find whether a file is present in multiple branches in git