There are two different ways of writing it: one with the = sign and the other with the : sign. Check the following examples for more information.
Docker Compose environments with the = sign:
    version: '3'
    services:
      webserver:
        environment:
          - USER=john
          - EMAIL=[email protected]
Docker Compose environments with the : sign:
    version: '3'
    services:
      webserver:
        environment:
          USER: john
          EMAIL: [email protected]
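A sketch of the pitfall that mixes the two forms (this is my addition, not part of the original answer): a list item written with a colon is parsed by YAML as a one-key mapping rather than a string, which is exactly the "invalid type, it should be a string" error in the question below.

```yaml
environment:
  - REDIS_HOST: redis    # invalid: the item is parsed as the mapping {"REDIS_HOST": "redis"}
# either drop the dash (mapping form) ...
environment:
  REDIS_HOST: redis
# ... or keep the dash and use = (list form)
environment:
  - REDIS_HOST=redis
```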
answered May 13, 2020 at 15:53 by Suman Kharel
|
I'm having an issue running Docker Compose. Specifically, I'm getting this error:
    ERROR: The Compose file './docker-compose.yml' is invalid because:
    services.login-service.environment contains {"REDIS_HOST": "redis-server"}, which is an invalid type, it should be a string
And here's my yml file:
    version: '3'
    services:
      redis:
        image: redis
        ports:
          - 6379:6379
        networks:
          - my-network
      login-service:
        tty: true
        build: .
        volumes:
          - ./:/usr/src/app
        ports:
          - 3001:3001
        depends_on:
          - redis
        networks:
          - my-network
        environment:
          - REDIS_HOST: redis
        command: bash -c "./wait-for-it.sh redis:6379 -- npm install && npm run dev"
    networks:
      my-network:
Clearly the issue is where I set my environment variable, even though I've seen multiple tutorials that use the same syntax. The purpose of it is to set REDIS_HOST to whatever IP address Docker assigns to Redis when building the image. Any insights on what I may need to change to get this working?
| Docker Compose throws invalid type error |
You should specify a CachePolicy:
    enum
    {
        NSURLRequestUseProtocolCachePolicy = 0,
        NSURLRequestReloadIgnoringLocalCacheData = 1,
        NSURLRequestReloadIgnoringLocalAndRemoteCacheData = 4, // Unimplemented
        NSURLRequestReloadIgnoringCacheData = NSURLRequestReloadIgnoringLocalCacheData,
        NSURLRequestReturnCacheDataElseLoad = 2,
        NSURLRequestReturnCacheDataDontLoad = 3,
        NSURLRequestReloadRevalidatingCacheData = 5, // Unimplemented
    };
    typedef NSUInteger NSURLRequestCachePolicy;
Try this (timeoutInterval is an NSTimeInterval, i.e. a double, so pass a number of seconds rather than nil):
    [webView loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:myURLString] cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData timeoutInterval:60.0]];
|
I'm developing a native app for iPad.
On the initial screen, in viewDidLoad, a web call is made to read a file on the web, get the results, and show a list.
Problem 1: when I change the content of the file on the web, it doesn't reflect in my app; even if I kill the app, the result is still the old one. Can anyone help me with this issue?
After selecting from this list, it lands on a WebView.
Problem 2: when I change anything in the server-side JavaScript, it doesn't reflect in the native app; it still gives me the old response only (i.e. JavaScript and CSS changes are not reflected in the app). Can anyone please help me through this part?
iOS 7 native app on iPad. If you need code I can post it.
| Ios Application Caching the Webview & native calls |
It looks like a misconfiguration:
You didn't set the namespace in the second YAML, and you apply it with the command kubectl apply -f ray-test-svc.yaml. It will not update the old service; it will create a new one in the namespace default. You can run the command kubectl apply -f ray-test-svc.yaml -n ray-test-ns and it will update your service. Alternatively, you can add namespace: ray-test-ns to the second YAML. | Right now I have a service. kubectl get svc ray-test-svc shows:
    apiVersion: v1
    items:
    - apiVersion: v1
      kind: Service
      metadata:
        annotations:
          ....
        labels:
          app: ray-test-app
          service: ray-test-svc
        name: ray-test-svc
        namespace: ray-test-ns
      spec:
        ports:
        - nodePort: 30198
          port: 80
          protocol: TCP
          targetPort: 8000
        selector:
          app: ray-test-app
          service: ray-test-svc
          version: v2
        type: LoadBalancer
After I edit my yaml file, deleting the version: v2 in the selector, and do kubectl apply -f ray-test-svc.yaml, the version: v2 is still in the selector! Here is my yaml file:
    kind: Service
    apiVersion: v1
    metadata:
      name: ray-test-svc
      annotations:
        ....
      labels:
        app: ray-test-app
        service: ray-test-svc
    spec:
      selector:
        app: ray-test-app
        service: ray-test-svc
      type: LoadBalancer
      ports:
      - port: 80
        targetPort: 8000
I checked the log using -v=9 and saw that kubectl uses PATCH to do the update. Is this a bug in the kube API, or is there any way to just delete partial labels? Thanks!! | kubectl apply doesn't update service selector label |
Note: this can run on a local web server but does not always run successfully on a real server.
In CodeIgniter 3 the code is case-sensitive: you must Capitalize Each Word in every class name, like this.
.htaccess:
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule .* index.php/$0 [PT,L]
Config:
    $config['base_url'] = 'http://heloo.cmsapp.com/';
    $config['index_page'] = "";
Routing:
    <?php defined('BASEPATH') OR exit('No direct script access allowed');
    $route['default_controller'] = "Page_controller/home";
    $route['page/contact'] = "Page_controller/contact";
Controller:
    <?php defined('BASEPATH') OR exit('No direct script access allowed');
    class Page_controller extends CI_Controller {
        public function __construct() {
            parent::__construct();
        }
        public function home() {
            //load your home page
        }
        public function contact() {
            //load your contact page
        }
    }
Read this document: www.codeigniter.com/user_guide/general/styleguide.html
answered Apr 29, 2015 at 3:41 by denyptw
| I have a fresh copy of CodeIgniter configured on my local server and it works fine, but when I upload it to http://subdomain.mydomain.com it shows an error:
    An Error Was Encountered
    Unable to determine what should be displayed. A default route has not been specified in the routing file.
Is there a problem with my hosting provider, or can I fix this using a .htaccess file? I have no clue what I am dealing with; any help is appreciated.
error_log:
    [28-Apr-2015 19:53:56 America/New_York] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/timezonedb.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/timezonedb.so: cannot open shared object file: No such file or directory in Unknown on line 0
Thanks | Codeigniter not working on subdomain |
GitHub has an API for issues; that's going to be your best bet.
http://developer.github.com/v3/issues/
|
This is the same question as this one, but for Github instead of JIRA
Hi,
I'm developing FreedomSponsors - a crowdfunding platform for open source projects.
I want to improve the "Sponsor new issue" screen by pre-filling some information, based on the issue's URL.
My next "target" is the Github issue tracking system.
Given a URL like https://github.com/whit537/www.gittip.com/issues/14, What's the best way to extract information like:
1: issue key: 14
2: project issue tracker URL: https://github.com/whit537/www.gittip.com/issues
3: project title: www.gittip.com
(ok, these are easy, it's all in the URL)
and
4: issue title: pay with bitcoin, litecoin
I'm using Python.
| What is the best strategy to extract information from a Github issue? |
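A quick sketch of items 1–3 in Python, standard library only (the function name is mine, not from the question). Item 4, the issue title, needs a call to the Issues API the answer links to (GET /repos/:owner/:repo/issues/:number returns a JSON object with a "title" field); the network call is omitted here to keep the example offline:

```python
from urllib.parse import urlparse

def parse_issue_url(url):
    """Split a GitHub issue URL into (project title, tracker URL, issue key)."""
    owner, repo, _, number = urlparse(url).path.strip("/").split("/")[:4]
    tracker = f"https://github.com/{owner}/{repo}/issues"
    return repo, tracker, int(number)

print(parse_issue_url("https://github.com/whit537/www.gittip.com/issues/14"))
# ('www.gittip.com', 'https://github.com/whit537/www.gittip.com/issues', 14)
```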
The webapp behind Gitorious is open-source. You can have an interface exactly like it from your web server. It doesn't have all the GitHub bells and whistles, but it has source browsing, revision history, commits, etc. It's Rails, which might not be optimal for you, but it's also free :-)
| As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 10 years ago.
I know there are plenty of ways to run git on my server, but I quite like the functionality of git with repo browsing - the fact that I can look at previous versions in the web interface.
Now, were I able to, I'd use GitHub, but the problem is our source control rules are very strict and we aren't allowed to put files on other servers, even if they are encrypted.
Is there a script that allows us to run a GitHub-like interface, or rather one that allows me to browse the revision history of the git project through a web interface?
I'm running a LAMP server, but would consider alternate languages like Python, Perl, etc. should nothing in PHP be available.
I'm interested in both paid and open source software. | Is there a Github clone in PHP that I can run on my own server? [closed] |
You need to add jaxrpc-api to resolve this issue. You can find the dependency below:
    <dependency>
        <groupId>javax.xml</groupId>
        <artifactId>jaxrpc-api</artifactId>
        <version>1.1</version>
    </dependency>
You can also use javax.xml.rpc to resolve this issue. You can find the dependency below:
    <dependency>
        <groupId>org.glassfish</groupId>
        <artifactId>javax.xml.rpc</artifactId>
        <version>3.0-b74b</version>
    </dependency>
| Using SonarQube version 4.5.2.
Apache Maven 3.0.2
Java version: 1.8.0_65
My Maven build process comes up with several errors such as: Class not found: javax.xml.rpc.handler.MessageContext
How do I fix this problem? Where is Sonar trying to find this package? | SonarQube "Class not found: javax.xml.rpc.handler.MessageContext" during maven sonar build |
Perl to the rescue:
    perl -le '$d=shift;chomp($f=(`ls -t $d/*`)[0]);print 24*60*60*-M$f' /path/to/dir
| I need a bash script to get the age of the newest file in a given directory (in hours or seconds). For example:
    -rw-r--r-- 1 root root 3.0M 2012-12-31 12:36 2012_12_31_1236_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 2.8M 2013-01-01 11:33 2013_01_01_1133_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 2.4M 2013-01-04 14:17 2013_01_04_1417_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 2.7M 2013-01-05 12:26 2013_01_05_1226_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 54M 2013-01-06 14:16 2013_01_06_1415_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 3.7M 2013-01-07 16:42 2013_01_07_1642_redis_dump_encrypted.tgz
    -rw-r--r-- 1 root root 3.4M 2013-01-08 12:36 2013_01_08_1236_redis_dump_encrypted.tgz
The command should accept a path to a directory and return how many seconds have passed since the newest file (2013_01_08_1236_redis_dump_encrypted.tgz) was created.
I need this in order to monitor the age of the latest backup with Zabbix (I want an alert in case the backup mechanism breaks). A one-liner would be great, because it is more convenient to use as a Zabbix user parameter, but not necessary.
Thank you! | Bash: calculate age of the newest file in directory |
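If Perl isn't available on the monitored host, the same computation is a short Python script (a sketch; the function name and the throwaway demo directory are mine, not from the question):

```python
import os
import tempfile
import time

def newest_file_age(directory):
    """Seconds since the most recently modified regular file in `directory`."""
    paths = (os.path.join(directory, f) for f in os.listdir(directory))
    mtimes = [os.path.getmtime(p) for p in paths if os.path.isfile(p)]
    return time.time() - max(mtimes)

# demo against a throwaway directory containing one fresh file
demo = tempfile.mkdtemp()
open(os.path.join(demo, "backup.tgz"), "w").close()
print(int(newest_file_age(demo)))  # 0 (the file was just created)
```

For Zabbix, the printed integer is exactly what a UserParameter expects on stdout.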
You are missing the ingress class in the spec:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-ingress
    spec:
      ingressClassName: nginx # (or the class you configured)
Using NodePort on your service may also be problematic. At least it's not required, since you want to use the ingress controller to route traffic via the ClusterIP and not use the NodePort directly. | I have 2 services and deployments deployed on minikube on local dev. Both are accessible when I run minikube service. For the sake of simplicity I have attached the code with only one service. However, ingress routing is not working.
CoffeeApi deployment:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coffeeapi-deployment
      labels:
        app: coffeeapi
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: coffeeapi
      template:
        metadata:
          labels:
            app: coffeeapi
        spec:
          containers:
          - name: coffeeapi
            image: manigupta31286/coffeeapi:latest
            env:
            - name: ASPNETCORE_URLS
              value: "http://+"
            - name: ASPNETCORE_ENVIRONMENT
              value: "Development"
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: coffeeapi-service
    spec:
      selector:
        app: coffeeapi
      type: NodePort
      ports:
      - protocol: TCP
        port: 8080
        targetPort: 80
        nodePort: 30036
Ingress.yaml:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp-ingress
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
    spec:
      rules:
      - http:
          paths:
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffeeapi-service
                port:
                  number: 8080 | Kubernetes ingress not routing |
Unfortunately, that getting-started guide isn't nearly as up to date as the kube-up implementation. For instance, I don't see a --cloud-provider=aws flag anywhere, and the kubernetes-controller-manager would need that in order to know to call the AWS APIs.
You may want to check out the official CoreOS on AWS guide: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
If you hit a dead end or find a problem, I recommend asking in the AWS Special Interest Group forum: https://groups.google.com/forum/#!forum/kubernetes-sig-aws
answered Mar 2, 2016 at 6:25 by briangrant
| The CoreOS Multinode Cluster guide appears to have a problem. When I create a cluster and configure connectivity, everything appears fine -- however, I'm unable to create an ELB through service exposing:
    $ kubectl expose rc my-nginx --port 80 --type=LoadBalancer
    service "my-nginx" exposed
    $ kubectl describe services
    Name: my-nginx
    Namespace: temp
    Labels: run=my-nginx
    Selector: run=my-nginx
    Type: LoadBalancer
    IP: 10.100.6.247
    Port: <unnamed> 80/TCP
    NodePort: <unnamed> 32224/TCP
    Endpoints: 10.244.37.2:80,10.244.73.2:80
    Session Affinity: None
    No events.
The IP line that says 10.100.6.247 looks promising, but no ELB is actually created in my account. I can otherwise interact with the cluster just fine, so it seems bizarre. A "kubectl get services" listing is similar -- it shows the private IP (same as above) but the EXTERNAL_IP column is empty.
Ultimately, my goal is a solution that allows me to easily configure my VPC (i.e. private subnets with NAT instances), and if I can get this working, it'd be easy enough to drop into CloudFormation since it's based on user-data. The official method of kube-up doesn't leave room for VPC-level customization in a repeatable way. | Kubernetes Multinode CoreOS guide doesn't create ELBs in AWS |
This is a little faster without the parallel for loop:
    def complexPow(inData, power):
        theta = af.atan(af.imag(inData) / af.real(inData))
        r = af.pow(af.pow(af.real(inData), 2.0) +
                   af.pow(af.imag(inData), 2.0), .5)
        inData = af.pow(r, power) * (af.cos(theta * power) +
                                     1j * af.sin(theta * power))
        return inData
Tested for 4000 iterations over a dtype=complex array with dimensions (1, 2**18) using an Nvidia Quadro K4200, Spyder 3, Python 2.7, Windows 7:
Using af.ParallelRange: 7.64 sec (1.91 msec per iteration).
Method above: 5.94 sec (1.49 msec per iteration).
Speed increase: 28%. | According to the arrayfire pow documentation, af.pow() currently only supports powers (and roots...) of real arrays. No error is thrown, but I found that using af.pow() with complex input can cause a huge memory leak, especially if other functions are used as input (for example, af.pow(af.ifft(array), 2)).
To get around this, I have written the function complexPow below. This seems to run for complex arrays without the memory leak, and a quick comparison showed that my complexPow function returns the same values as numpy.sqrt() and the ** operator, for example.
    def complexPow(inData, power):
        for i in af.ParallelRange(inData.shape[0]):
            theta = af.atan(af.imag(inData[i]) / af.real(inData[i]))
            rSquared = af.pow(af.real(inData[i]), 2.0) + \
                       af.pow(af.imag(inData[i]), 2.0)
            r = af.pow(rSquared, .5)
            inData[i] = af.pow(r, power) * (af.cos(theta * power) + \
                                            1j * af.sin(theta * power))
        return inData
Is there a faster way of doing parallel element-wise exponentiation than this? I haven't found one, but I'm scared I'm missing a trick here... | Faster exponentiation of complex arrays in Python using Arrayfire |
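As a plain-Python cross-check of the de Moivre identity both versions rely on (my addition, not from the thread): note that atan(imag/real) only recovers the phase correctly when the real part is positive, whereas atan2 — cmath.phase here — handles all quadrants.

```python
import cmath
import math

def complex_pow(z, power):
    """r**p * (cos(p*theta) + i*sin(p*theta)) -- the same math as the ArrayFire code."""
    r, theta = abs(z), cmath.phase(z)  # phase() uses atan2, safe for Re(z) <= 0
    return r ** power * (math.cos(theta * power) + 1j * math.sin(theta * power))

z = 3 + 4j
print(complex_pow(z, 2))  # close to z**2 == (-7+24j)
```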
Why? Isn't that an SEO-friendly and user-friendly URL? Why would you want to make it un-user-friendly? But anyway, my guess is that you want /user to point to /edit:
    RewriteRule ^/?user/*$ /edit.php?type=user&%{QUERY_STRING}
Though you will be making your site worse by using this, adding unneeded complexity not only to site maintenance but also to user navigation.
answered Aug 10, 2012 at 10:38 by Sammaye
Comments:
- I tried it but it didn't work... my htaccess looks like you said and the link is "<a href='108.166.92.199/edit.php?type=user'>click</a>" and the URL I get is also the same, that is 108.166.92.199/edit.php?type=user – Satish, Aug 10, 2012 at 10:59
- @Satish Wait, what are you trying to do? Your question states you are trying to take edit out of edit/user/2/1 but your comment says otherwise... so what ARE you trying to do? Are you actually trying to change edit.php?type=user into /edit/user? – Sammaye, Aug 10, 2012 at 11:05
- No, I just want to remove edit... so the final URL becomes like "mysite.com/user/" + the other query string – Satish, Aug 10, 2012 at 11:19
- @Satish Then change your HTML link to go: 108.166.92.199/user/blah and you should see some magic happen. – Sammaye, Aug 10, 2012 at 11:24
- I did it but got a page not found error... and the other thing is that I want to do it for all the files of the site; I want to remove all the file names from the URL. – Satish, Aug 10, 2012 at 11:32
| I want to make an SEF URL, and in that I want to remove the file name from the URL as well. My site is in core PHP. I managed to remove the extension from the URL but have no idea how to remove the whole file name from the URL.
As I searched on Google, I got this code for the .htaccess:
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php/$1 [L]
I have no idea how to write the header in PHP so it goes to the related file but doesn't show the file name in the URL.
    <a href='http://mysite.com/edit/user/2/1'>click</a>
Now if I click on the above link it does go to edit.php with the URL "mysite.com/edit/user/2/1", so from this how can I remove the file name, that is "edit"? Thanks in advance | SEF url without name with core php and .htaccess |
I have done such work in one of my applications. I have shared the code over here... You have to replace the TextView with the ImageView used in my code. Let me know if you get stuck somewhere... Happy coding :)
| Closed. This question needs to be more focused. It is not currently accepting answers. Want to improve this question? Update the question so it focuses on one problem only by editing this post. Closed 7 years ago.
Text over bitmap, with some functionality like changing the text color and size over the bitmap. Please give me any suggestion on how to do it, or any GitHub reference link. | Text over bitmap in android [closed] |
It looks like 17.11.0 has the problem. Could you try to install the old one as below?
    $ sudo apt install docker-ce=17.09.0~ce-0~raspbian
Or wait for the fix.
(2017.12.5) It looks like 2017-11-29-raspbian-stretch has the same issue. To avoid upgrading it via apt upgrade, do: sudo apt-mark hold docker-ce. And unhold when it is fixed.
| I am new to Docker. I plugged in my Pi 3 to test some stuff and I'm already facing an error I can't figure out myself.
So I freshly installed Raspbian and Docker.
That's my install log.
Then I try the classic hello-world test,
and there is the log.
| Raspbian docker: Error response from daemon: cgroups: memory cgroup not supported on this system: unknown |
To clarify, I am posting a Community Wiki answer. To resolve that problem you used internal instead of localhost. As you wrote in the comments section: "It seems Docker uses 'internal' as their top-level domain naming for local stuff, instead of localhost. So we thought we'd follow that workaround."
In this documentation one can find information about networking features in Docker Desktop for Windows. See also this documentation about DNS for Services and Pods.
answered Dec 8, 2021 at 19:40, community wiki, kkopczak
| I'm currently working on local e2e tests. Setup:
- My Windows machine
- Working Kubernetes cluster
- Deployed service
- etc/hosts entries
- Selenium tests with Cucumber
For the sake of clarity, and because we will use different stages, we want to change the redirect URL endings from "testing" to "localhost" in all our e2e projects.
hosts-file entry, current:
    [MY IP ADDRESS] e2e.myproject.testing
to be:
    [MY IP ADDRESS] e2e.myproject.localhost
The ingress:
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      labels:
        app: my-webclient
      name: my-webclient
      namespace: somenamespace
      annotations:
        nginx.ingress.kubernetes.io/proxy-body-size: "100m"
        nginx.ingress.kubernetes.io/app-root: "/app/"
    spec:
      rules:
      - host: e2e.myproject.localhost
        http:
          paths:
          - backend:
              serviceName: my-myproject
              servicePort: 8080
            pathType: ImplementationSpecific
    status:
      loadBalancer:
        ingress:
        - hostname: localhost
Fun fact: if I run my tests with the suffix "testing", "foobar", "deadbeef" or whatever in the URL and deployment files, things work fine. If I replace it with "localhost", it doesn't work and I get a connection error.
Theory: using anything other than "localhost" for the redirect URIs in the hosts file, the ingress points to the IP of my machine. But if I use localhost, the ingress or deployments point to the IP of the virtual machine they are running in.
Can anyone verify that, and does anyone know a solution? Might also solve some related issues we have ^^' | Kubernetes Serenety Tests Ingress does not work with localhost |
"Without this folder, anyone who forks my project would have to set up Laravel first, which for the ease of my other developers I would like to avoid."
Yes, that should be an acceptable process, provided you can include in your repo a script or command which automates that process: the README would explicitly indicate that anyone cloning the repo is supposed to execute said script/command.
As commented, a composer.json and the composer command should help: a declarative approach (declaring the dependencies in composer.json) is better here than a vendoring approach (where you copy everything into a Git repo). | I am currently trying to upload my Laravel project to my GitHub account, but I am running into the issue of not being able to upload the large vendor folder that "powers" the Laravel project.
Without this folder, anyone who forks my project would have to set up Laravel first, which for the ease of my other developers I would like to avoid.
Is there some kind of workaround for this? | Work around to uploading large folders onto Github |
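A minimal sketch of the declarative approach described above (my addition, not part of the original answer): commit composer.json (and composer.lock) but ignore the generated directory, so the .gitignore entry would be:

```gitignore
/vendor
```

Anyone forking the project then rebuilds the folder with a single `composer install`, which reads composer.json/composer.lock and fetches the same dependency tree.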
I changed the link from https://storage.googleapis.com/subdomain.mysite.com/... to https://subdomain.mysite.com/... (simply removing "storage.googleapis.com") and it works!
Hope it helps others that are stuck as well.
answered May 25, 2018 at 16:53 by Jo Moran
|
a. https://support.cloudflare.com/hc/en-us/articles/200168926-How-do-I-use-Cloudflare-with-Amazon-s-S3-Service-
b. https://cloud.google.com/storage/docs/hosting-static-website
c. https://cloud.google.com/storage/docs/static-website#tip-dynamic
Let say my website is example.com, here's what I did:
I created a bucket on GCS "img.example.com"
On Cloudflare I set CNAME with the following:
Name: img.example.com
Value: c.storage.googleapis.com
I set all object in GCS bucket 'readable by public' (https://cloud.google.com/storage/docs/access-control/making-data-public#buckets)
The image is still not cached by Cloudflare and the header status still not showing CF-Status. Am I missing something? Any help would be greatly appreciated.
Thank you.
| How to cache google cloud storage (GCS) with cloudflare? |
I recently found a solution to make it work, so I thought it might be helpful if I post it here. So basically I did this:
I created a folder named ssl.crt in /etc/apache2/ and placed inside it the certificate files webmail.domain.com.bundle, webmail.domain.com.crt and webmail.domain.com.key.
Then I entered the conf-enabled folder placed in /etc/apache2/, created a vhost configuration file named after my website domain name, webmail-domain-ssl.conf, and entered this code inside it:
    <VirtualHost *:80>
        ServerAdmin [email protected]
        DocumentRoot /opt/roundcube
        ServerName webmail.domain.com
        RewriteEngine On
        RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
    </VirtualHost>
    <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin [email protected]
            DocumentRoot /opt/roundcube
            ServerName webmail.domain.com
            ErrorLog ${APACHE_LOG_DIR}/error.log
            SSLEngine On
            SSLProtocol All -SSLv2 -SSLv3
            SSLHonorCipherOrder on
            SSLCertificateFile /etc/apache2/ssl.crt/webmail.domain.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl.crt/webmail.domain.com.key
            SSLCertificateChainFile /etc/apache2/ssl.crt/webmail.domain.com.bundle
        </VirtualHost>
    </IfModule>
answered Dec 11, 2020 at 10:04 by Sotmir Laci
| I am trying to implement a paid SSL certificate for a specific domain like webmail.domain.com on a mail server. I know it has something to do with Apache but I am not sure where specifically. Please can somebody help me? | How to implement ssl certificate for webmail in debian server 9.12 (stretch)? |
You can find some more information on making CodeIgniter CLI-accessible here: http://phpstarter.net/2008/12/run-codeigniter-from-the-command-line-ssh/
The next step is just using crontab -e to set up the cron job.
answered Feb 10, 2010 at 12:46 by Zack Effr
| I am using CodeIgniter. I want to know how to set up a cron job to check a table for expiring users and insert data into another table with the list of expiring users. How do I do that? When I tried to write a script with a controller and model to insert into the table:
    Fatal error: Class 'Controller' not found in /home/content/html/test/live/application/controllers/cron.php on line 2 | CRON job for codeigniter |
Option 1: This will install the latest version of phpMyAdmin from a shell script I've written. You are welcome to check it out on GitHub. Run the following command from your code/projects directory:
    curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | bash
Option 2: This will install phpMyAdmin (not the latest version) from Ubuntu's repositories. Assuming that your projects live in /home/vagrant/Code:
1. sudo apt-get install phpmyadmin — do not select apache2 nor lighttpd when prompted; just hit tab and enter.
2. sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/code/phpmyadmin
3. cd ~/Code && serve phpmyadmin.test /home/vagrant/code/phpmyadmin
Note: if you encounter issues creating the symbolic link on step 2, try the first option or see Lyndon Watkins' answer below.
Final steps: open the /etc/hosts file on your main machine and add:
    127.0.0.1 phpmyadmin.test
Go to http://phpmyadmin.test:8000
| I installed it by running sudo apt-get install phpmyadmin and then running sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html and sudo service nginx restart, but it's not working.
Note: I didn't select any of the apache2 or lighttpd options when installing. | How do I set up phpMyAdmin on a Laravel Homestead box? |
The result of an svn export command is a simple directory tree without any version control data. You cannot commit in it — not with git, not even with svn. svn export is only suitable if you want to read files of a sub-directory and never intend to commit and push back.
If you want to work with the repository, make changes and push back, you can use the --depth option of git clone. It takes an integer parameter, and it will clone only the specified number of recent commits of a single branch. It will get all the files of the latest revision; you cannot filter to a specific sub-directory only. So it's not exactly what you want to do, but hey, no pain no gain. | I have downloaded a specific folder of a git repository using svn export:
    svn export https://github.com/user/repo.git/trunk/doc/myFolder
Now I have the myFolder folder locally, but after making changes I want to push it to the git repo in the same directory from where I have downloaded it. I also need to include a .gitignore file with the list of files not to be included.
Currently if I type git status inside the downloaded folder it says:
    fatal: Not a git repository (or any of the parent directories): .git
I want that when I push to the repo, the files in the folder on git will get replaced with my local version. How can I do this? Thanks! | Pushing to git a folder downloaded with svn export |
You need to define volumes, which will be used in the Grafana/Prometheus containers to store data persistently.
Doc: https://docs.mesosphere.com/1.7/administration/storage/mount-disk-resources/
| I have used this documentation in order to deploy Prometheus with Grafana on the cluster. The problem arises whenever we restart our Prometheus and Grafana with some changed configuration: all our dashboards and visualizations are gone. Is there a workaround where we can persist the dashboards and visualizations? | Dashboards and Visualisations gets lost when Grafana restarts on DC/OS |
If you look at the wp-load.php file, it has the below line:
    error_reporting( E_CORE_ERROR | E_CORE_WARNING | E_COMPILE_ERROR | E_ERROR | E_WARNING | E_PARSE | E_USER_ERROR | E_USER_WARNING | E_RECOVERABLE_ERROR );
This basically disables all the error reporting that you want. So edit the file, comment that line out, and add:
    error_reporting(E_ALL);
Then you will know what the issue is. Also, not all PHP files are compatible with the CLI, as they may use code that is only valid when running under a web context.
So if you are not able to fix the error in the CLI, you can run a curl call to do the migration:
    */1 * * * * root curl http://localhost/path/to/url
The issue with this is that you will not get logs, only the output that your script gives. Also, in case your script runs for long, you will need to add set_time_limit(0): https://www.php.net/manual/en/function.set-time-limit.php
| I have this code:
    echo "1 - is_readable: " . is_readable("/var/www/docroot/wp-load.php") . "\r\n";
    echo "2 - file_exists: " . file_exists("/var/www/docroot/wp-load.php") . "\r\n";
    echo "3 - before require\r\n";
    require("/var/www/docroot/wp-load.php");
    echo "4 - after require\r\n";
But the output is strange:
    1 - is_readable: 1
    2 - file_exists: 1
    3 - before require
This situation appears when I start the script from the CLI or cron; when I start it directly in the browser, all is fine.
What has happened that echo 4 doesn't display? I've also tried to require another file; the result is the same.
upd. Task in crontab:
    */1 * * * * root php -f /mypath/fetch_data.php >> /mypath/results.out.log 2>&1 | PHP require() or include() stops the script without errors |
"How can I clone the code of the module from the GitHub repo and start making improvements to it?"
You need to fork it first, and then make some pull requests.
You can add each of those repos to your Eclipse workspace (see "adding a repository manually"). However, you would need to manage branches and pushing for each and every repo.
I would also advise adding, for each repo, the remote address of the original upstream repo. That way, you can easily rebase your work on top of the latest of the original repo you have forked.
| I am new to Git/GitHub. I have an Eclipse Indigo Joomla 2.5 workspace on a freshly installed Joomla instance. I can launch an HTTP request, set a breakpoint somewhere in the Joomla code and start stepping into the code in a debug session. So, a working environment!
I would like to contribute to a third-party Joomla module hosted on GitHub. How can I clone the code of the module from the GitHub repo and start making improvements to it? I also plan to work on other third-party modules or plugins later, from the same workspace if this is possible.
My concern is: how do I set up EGit so that I can have several independent local Git repos in the same workspace and add, branch, merge, push on them independently? Thanks for your answer! Jean-Pierre | How to setup EGit in an Eclipse Joomla development project to work on a third party module hosted on GitHub? |
Try:
    H H(0-7) * * *
which seems to give it a random time between 12 and 7. Refer to: https://stackoverflow.com/a/47779783/8236311 | Is there any way to run at a random time? I know about aliases, but I haven't found a random alias. | How to trigger a build Jenkins Job in random time? |
You need to have something run the script on a timer. This is typically going to be cron (on UNIX-based systems such as Linux, OS X, BSD, etc.) or Windows Task Scheduler (on Windows). | From the basics of PHP I know that PHP needs some action/request to execute, so I am a little confused about how to do it. I know it can be done but don't know how. I want to write a PHP script which will run on the server every 6 hours and update the database info from an API. More Info: The server I am currently working on is Linux, but I want to know how I can do it on both Linux and Windows. UPDATE: Cron does not find my script; I don't know where the problem is. I have used this command in my cPanel: 0 */6 * * * php public_html/path_to_dir/file_to_run.php I have set up the cron so cPanel sends me email. The email I am getting shows an error: /bin/sh: 0: command not found Looking forward to your help. | How to write a auto executable script in php?
I also read about the nginx+zeromq module and I immediately spotted a considerable difference. The ZeroMQ nginx module uses REQ/REP sockets to communicate with the backend processes. On the other hand, mongrel2 uses two sockets: one PUSH/PULL to send messages downstream (to the handlers) and one PUB/SUB (to receive responses from handlers). This makes it totally asynchronous. When mongrel2 sends a request to the backend handlers it returns immediately from the zmq_send() call, and the response will be received on another socket, any time later. Another difference is that mongrel2 is capable of sending the same response to more than one client. Your handler can tell mongrel2 something like this: "Deliver this response to connections 4, 5, 6 and 10, please". Mongrel2 sends the connection ID within the message to the handlers. Hope this helps! =) | I see this new NGINX+ZeroMQ project on GitHub and am now confused. What are the feature and scalability differences between Mongrel2 and NGINX+ZeroMQ? (The reason why I ask is because I'm under the impression Mongrel2 was solely created since NGINX didn't support ZeroMQ.) | Mongrel2 vs. NGINX+ZeroMQ?
Finally this problem is solved. I changed .env from DB_HOST=127.0.0.1 to DB_HOST=db and then it works!! (DB_HOST = the service name of the mysql container in docker-compose.yml; this time my mysql container name is db, so DB_HOST needed to be db.) | I'm very new to Laravel and Docker and trying to connect MySQL to the PHP (Laravel) container. I thought I had set up my docker-compose.yml and env file in the Laravel project correctly. Also, I can connect to the MySQL DB inside the container. Here is the error when I ran php artisan migrate: SQLSTATE[HY000] [2002] Connection refused (SQL: select * from information_schema.tables where table_schema = myapp and table_name = migrations and table_type = 'BASE TABLE') Does anyone know what happened? docker-compose.yml
version: '3'
services:
php:
container_name: php
build: ./docker/php
volumes:
- ./myapp/:/var/www
nginx:
image: nginx:latest
container_name: nginx
ports:
- 80:80
volumes:
- ./myapp/:/var/www
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php
db:
image: mysql:8.0
container_name: db
environment:
MYSQL_ROOT_PASSWORD: root1234
MYSQL_DATABASE: myapp
MYSQL_USER: docker
MYSQL_PASSWORD: docker
TZ: 'Asia/Tokyo'
command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
ports:
      - 3306:3306
env
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=myapp
DB_USERNAME=docker
DB_PASSWORD=docker | Can not connect mysql with laravel (docker) |
Q: "What is the problem?" The system needs to see the cards. Validate the current state of the server using a call to the (hwloc tool) lstopo:
$ lstopo --only osdev
GPU L#0 "card1"
GPU L#1 "renderD128"
GPU L#2 "card2"
GPU L#3 "renderD129"
GPU L#4 "card3"
GPU L#5 "renderD130"
GPU L#6 "card0"
GPU L#7 "controlD64"
Block(Disk) L#8 "sda"
Block(Disk) L#9 "sdb"
Net L#10 "eth0"
Net L#11 "eno1"
GPU L#12 "card4"
GPU L#13 "renderD131"
GPU L#14 "card5"
GPU L#15 "renderD132"
If showing more than just the above-mentioned card0, proceed with proper naming / id#-s, and be sure to set it before doing any other import-s, like those of pycuda and tensorflow.
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1' # MUST PRECEDE ANY IMPORT-s
#---------------------------------------------------------------------
import pycuda as pyCUDA
import tensorflow as tf
...
#----------------------------------------------------------------------
 | I have 2 GPUs on my server on which I want to run different training tasks. On the first task, trying to force TensorFlow to use only one GPU, I added the following code at the top of my script:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
After running the first task, when I try to run the second task on the other GPU (with the same 2 lines of code) I get the error "No device GPU:1". What is the problem? | How to run multiple training tasks on different GPUs?
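A small Python sketch of the per-process approach implied by the answer above: give each training task its own copy of the environment with CUDA_VISIBLE_DEVICES set before the process (and hence TensorFlow) starts. The training script names are hypothetical.

```python
import os
import subprocess

def gpu_env(gpu_id):
    # Build a child environment pinning the process to a single GPU;
    # inside the child, that GPU shows up as device 0.
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    return env

# Hypothetical usage: one process per task, one GPU per process.
# p0 = subprocess.Popen(["python", "train_task_a.py"], env=gpu_env(0))
# p1 = subprocess.Popen(["python", "train_task_b.py"], env=gpu_env(1))
```

Setting the variable in the child environment avoids the original pitfall, where a value set after a framework import (or in an already-running process) is not picked up.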
It looks like it was caused by urls for images and vtt files that had links added in the WordPress admin with http. Updating for https seems to have fixed it.
answered Oct 31, 2019 at 20:56 ChrisM
|
I have a WordPress site running in Docker behind nginx with HTTPS (Letsencrypt), but I am having trouble with some essential scripts that won't load because the browser claims they are unauthenticated. I also see the dreaded 'Skip to content' link on the homepage. I set things up with jwilder/nginx-proxy and the letsencrypt companion. All my site data is loaded from a MySQL dump, and my initial assumption was that I had to change all the http://example.org entries in the dump file to https://example.org. However I was getting a 301 redirect with that ('too many redirects' error in the browser), so I changed all the links back to http. Now the site loads, but with the unathenticated error (if I accept the unauthenticated links the site loads, of course).
I have seen several solutions to this, or what I think might be solutions, which all seem to involve adding entries to .htaccess and/or wp-config.php. Indeed adding the following to my wp-config.php seems to solve the 'Skip to content' issue:
/** SSL */
define('FORCE_SSL_ADMIN', true);
// in some setups HTTP_X_FORWARDED_PROTO might contain
// a comma-separated list e.g. http,https
// so check for https existence
if (strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false)
$_SERVER['HTTPS']='on';
However some pages are still complaining about unauthenticated content, and that the site is not fully secure... Not sure what else to do. Do I need to modify .htaccess too?
| Getting mixed content warning - WordPress behind nginx-proxy with letsencrypt ssl |
You do not generally want to backup .metadata directory because its content is not portable. When you create a new workspace and re-import your projects, you will notice that your workspace preferences will be missing (stuff set under Window -> Preferences). Everything from code style preferences, to path variables to target runtimes. Mitigate your risk by keeping good notes on how you configure your workspace preferences and you will have no problem recovering. Some preferences (like user spelling dictionary and code style) allow you to reference external files. Take advantage of this and put those files in a directory that will be backed up. | What do I lose if I skip the.metadata/directory when doing the back-up of my Eclipse workspace? (Is there some documentation describing what Eclipse stores in this directory)? I've noticed that it changes very often (essentially every time that I use Eclipse (Galileo).I've seen thisquestion, but I'm not interested in doing a back-up of plug-ins and settings (also because I'm not sure that they would work properly when restored after a re-installation of my PC or on a new PC). I'm just interested in doing a back-up of my projects (source code, libraries, possible data,.svnand.gitdirectories). So, can I safely ignore the.metadata/directory? | Eclipse workspace backup |
I'd recommend that you point Visual VM 1.3.2, with all plugins installed, at your application. It'll show you what's happening in your generational heap memory and all the threads that are started. I can't give you the answer, but Visual VM will make the process more transparent. | I asked this before but got no response - maybe it was too long - so I'm rephrasing the question: After about 3 days from starting an application that uses Apache Axis2 v.1.5.4, OutOfLangMemoryError starts to occur (heap size = 2048 MB), resulting either in degrading the application server (WAS v.7.0.0.7) performance or stopping the logical server (process still exists). For some reasons, I have to put a timer = 1 second on the web service invocation process; in peak time, timeouts occur (either in establishment or reading). Looking in the javacores and the heapdumps thrown by the server, it seems that there are hung Axis2 threads:
"Axis2 Task" TID:0x00000000E4076200, j9thread_t:0x0000000122C2B100, state:P, prio=5.
at sun/misc/Unsafe.park(Native Method)
at java/util/concurrent/locks/LockSupport.park(LockSupport.java:173)
at java/util/concurrent/SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:433)
at java/util/concurrent/SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:334)
at java/util/concurrent/SynchronousQueue.take(SynchronousQueue.java:868)
at java/util/concurrent/ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java/lang/Thread.run(Thread.java:735)
How to ensure that Axis2 threads are terminated, whether a response was returned or not, i.e., an exception occurred? | OutOfLangMemoryError caused by Apache Axis2
The container is in the waiting state because the image crashes or fails when run. Kubernetes then restarts the container, which keeps it in the waiting state while the restart is in progress. For pod status: kubectl get pods. If the status is "CrashLoopBackOff", the container is being restarted. To check a container's logs inside a pod: kubectl logs [pod] [container]
answered Mar 2, 2016 at 3:44 Billy Sutomo
 | this is my .yaml content:
apiVersion: v1
kind: Pod
metadata:
name: mysql
labels:
name: mysql
spec:
containers:
- resources:
limits :
cpu: 0.5
image: imagelingga
name: imagelingga
ports:
- containerPort: 80
name: imagelingga
- resources:
limits :
cpu: 0.5
image: mysql
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
# change this
value: pass
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysqlkuber
mountPath: /var/lib/mysql
readOnly: false
volumes:
- name: mysqlkuber
hostPath:
        path: /home/mysqlkuber
I have two images:
- mysql
- imagelingga = a microservice server for Java
The mysql logs show that it is already running, but the imagelingga logs show: Pod "mysql" in namespace "default": container "imagelingga" is in waiting state.
trial
The connection between these two images: imagelingga needs a connection to mysql as its DB. I already ran both images in Docker containers without Kubernetes and they run normally, but when I run them inside Kubernetes this problem appears. How do I trigger the imagelingga container to start the service? thx before!! | container is in waiting state, kubernetes, docker container
Any location block that processes PHP files needs to contain all of the fastcgi parameters and directives. See this document on request processing.
I have not tested this, but you should be able to use a map directive to select the appropriate value for the add_header statement.
For example:
map $request_uri $cc {
~^/wp-admin "no-store";
default "public";
}
server {
...
location ~ \.php$ {
add_header Cache-Control $cc;
...
}
}
See this document for details.
|
I want to keep the exact same settings as below in all folders, except for one. In the folder /wp-admin the setting add_header Cache-Control "no-store"; should be added only for .php files.
Question:
How to change a setting only in a specific folder in Nginx?
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
fastcgi_index index.php;
include fastcgi.conf;
add_header Cache-Control "public";
access_log off;
}
location [IF PHP FILE IN wp-admin FOLDER] {
add_header Cache-Control "no-store";
}
location ~* \.(js|css|svg|png|jpg|jpeg|gif|ico|eot|otf|ttf|woff)$ {
add_header Access-Control-Allow-Origin *;
add_header Cache-Control "public";
access_log off;
log_not_found off;
expires 1y;
}
| How to set a specific header in a single folder with Nginx? |
Reassign the range variables in the loop:
for integ, spell := range mapp {
integ, spell := integ, spell
cr.AddFunc("@every 2s", func() {
fmt.Println("Running Cron Spell:", spell, "Integer:",integ)})
}
The range variable is the same one, reused on each iteration. If you close over it, the closure (function literal) will see the last value of the iteration. | I am working on a cron jobs service using the robfig/cron module. The issue I am facing is that it is not able to dynamically run the cron job functions. For example, refer to the code below:
mapp := map[int]string{1: "one", 2: "two", 3: "three"}
cr := cron.New()
for integ, spell := range mapp {
cr.AddFunc("@every 2s", func() {
fmt.Println("Running Cron Spell:", spell, "Integer:",integ)})
}
cr.Start()The output is as below for every 2 secondsRunning Cron Spell: three Integer: 3
Running Cron Spell: three Integer: 3
Running Cron Spell: three Integer: 3The output is same for all the 3 cron jobs. I was expecting it to give output something like this.Running Cron Spell: one Integer: 1
Running Cron Spell: two Integer: 2
Running Cron Spell: three Integer: 3I am not sure whether it is a bug or I am doing it wrong. My goal is to let the cron jobs run dynamically based on configured value. Is there any workaround that I can make it as my desired output? | Golang Robfig cron AddFunc does not dynamically run the jobs |
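The same late-binding pitfall exists in Python closures created in a loop; a minimal illustration of the bug and of the per-iteration rebinding fix (analogous to `integ, spell := integ, spell` in the Go answer above):

```python
# Buggy: every lambda closes over the same loop variable `i`,
# so all of them see its final value.
fns = [lambda: i for i in range(3)]
print([f() for f in fns])  # [2, 2, 2]

# Fixed: rebind the value per iteration via a default argument.
fns = [lambda i=i: i for i in range(3)]
print([f() for f in fns])  # [0, 1, 2]
```

The default argument is evaluated at lambda-creation time, freezing the current value, exactly as the Go shadowing does.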
If you can do this at all, it certainly won't be easy. The GitHub web interface is proprietary software that does not provide a setting for you to make this change. You might be able to create a UserScript in your browser using TamperMonkey (or similar) that detects the fields on the page and modifies them, but you'll have to write all the logic and then keep changing it whenever GitHub makes changes to their code. Note that your suggested commit message that contains emoji is not actually more descriptive than their default message; in fact many people will find it less readable. | I've been wondering: is it possible to change the message that appears in the input field when you commit on GitHub? Refer to the picture below: I like my commits to be as descriptive as possible and enjoyable to read. I want GitHub to suggest 📝 Create README instead of Create README.md. When I search for the solution, the Google search shows irrelevant answers. Is it somehow possible? Note: This does not answer my question. | How to change Github's default commit messages?
You can add to your crontab something like: 0 * * * * /bin/bash -l -c 'cd /path/to/your/project && bundle exec rake foo:bar >> log/cron.log 2>&1' This will run the foo:bar task every hour and write stdout and stderr to log/cron.log. Please notice bundle exec before the rake command; using Bundler ensures that the task will fetch the correct environment. To specify RAILS_ENV you can do ... && RAILS_ENV=production bundle exec rake foo:bar | I have a Rails application and a rake task which I'm going to execute via cron around once an hour. The thing is that the task uses the Rails environment and some classes of my Rails application. If I run it as a plain Ruby script, I'll have to include all the dependencies it uses, and I think it's not possible to do that correctly and in a simple way. So I'll have to run it as a rake task because it'll preserve all the dependencies, right? Then how can I run a rake task from cron? Note that I prefer not to use any third-party solution when there's no necessity; in this case I don't want to use the whenever gem or the like. | How can I run a rake task via cron?
I've found this approach, which may not be compatible with Magento: You have a subscription example.tld with Magento on IP1. Create a new subscription on IP2 with the name of your new front domain. In the subscription of the new front, place in /httpdocs/index.html this content:
<HEAD>
<TITLE>!!!! replace it with your page title!!!!</TITLE>
</HEAD>
<FRAMESET>
<FRAME SRC="https://your-new-front.tld/" NORESIZE>
<NOFRAMES>
Your browser does not support frames.
</NOFRAMES>
</FRAMESET>
</HTML>There is another approach to create custom virtual host for web server(nginx or apache) with new fron domain name and IP2 which web root points to old front but if you have separate SSL certificate for new front you need to maintain this certificates files manually, it will be quite hard. Plus extra maintenance of backup/restore of this configuration.Third approach with customizing web server configs template from/usr/local/psa/admin/conf/templates/(as I understand there just need additional IP and server name in your current virtual server config). It's gives you no extra backup/restore.P.S.
We have this pain just because current Plesk design doesn't allow 2 IPv4 address for single subscription. | We're adding a second storefront to our existing Magento instance. The two store fronts will be accessed by different domain names and would have to have separate IP address to differentiate the payment charges on the customer's CC (so I am told by Authorize.net) and add SSL certificate to both.My server support has no idea how to point a different domain name over to the magento instance since it's on a different IP address and the hosting company support team has said it can't be done.I am being told to build a new Magento instance, but I find that hard to believe. There must be other multisite instances on different IP addresses.How do I set up multiple IP addresses on one server which share the same document root in PLESK? | Magento Multi site instance with 2 IP addresses |
Assuming you want to run everything on the same IP address, you have multiple options: Use a wildcard certificate *.example.com. This will be valid for www1.example.com and www2.example.com, but not for www.example1.com and www.example2.com (the wildcard must be in the left-most label; wildcard certificate usage is discouraged). Use a single certificate with multiple Subject Alternative Names (SANs). You could then have a SAN for www.example1.com and another one for www.example2.com (the names don't need to be related). Use multiple certificates and Server Name Indication (SNI). The downside is that not all clients support it (especially IE on Windows XP, as far as I'm aware). If you can use multiple IP addresses, simply use one certificate per IP address. | Is it not possible to use Name-Based Virtual Hosting to identify different SSL virtual hosts? I am trying to implement HTTPS for multiple websites on my Ubuntu 10.04 server, but I came across this resource which tells me it cannot be done: http://www.linuxpoweruser.com/?p=121 The workaround given in this HowTo suggests that I have a site structure like this:
www.abc.com/site1
www.abc.com/site2
www.abc.com/site3
This is not a satisfactory workaround for me. Can someone tell me whether there is a better workaround for this issue? Thank you. | Name-Based Virtual Hosting to identify different SSL virtual hosts on Ubuntu Apache LAMP Server?
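The single-label wildcard rule from the answer above can be sketched in Python. This matcher is illustrative only; real certificate name validation has more rules than this.

```python
# Illustrative-only matcher for the "left-most label" wildcard rule;
# the "*" covers exactly one DNS label.
def wildcard_matches(pattern, hostname):
    p_labels = pattern.split(".")
    h_labels = hostname.split(".")
    if len(p_labels) != len(h_labels):   # "*" never spans multiple labels
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_matches("*.example.com", "www1.example.com"))  # True
print(wildcard_matches("*.example.com", "www.example1.com"))  # False
print(wildcard_matches("*.example.com", "a.b.example.com"))   # False
```

This shows why `*.example.com` covers sibling hostnames under one domain but never unrelated domains or deeper subdomains.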
It depends on whether you want to accept INCOMING connections or just want to use port 80 for OUTGOING connections. Most firewalls block any incoming connections by default (plus most home routers are configured to do so, too). For outgoing connections, however, the default behavior of the most popular firewalls is to block and ask the user for permission for the program, unless it is run with administrative privileges (in which case the user already had to grant the program almost full control over the computer anyway). But it depends on the firewall in question.
answered Nov 19, 2012 at 16:08 tndz
 | My game communicates with the server through port 4567 using a custom TCP binary protocol, and some clients cannot play the game. I think that is because of firewalls. Later I will use port 80 and want to know: does a firewall intrude into the transmitted data, and is there a possibility that it will block non-HTTP data? If so, how can I send my binary data within HTTP so that a firewall will not block even such data? Thank you. | Send binary data as HTTP and pass firewall
When using the browser, when I navigate to the https site the browser downloads the certificate (without the private key) and then continues to send the data over https? No, the browser and the server establish an SSL/TLS secure channel with a symmetric encryption key. During the handshake process the server presents the https certificate and digitally signs some data with the private key as a proof of authenticity. I have come across some sites (especially when developing) that require you to install the certificate on the local machine before making a service call. What is the purpose here and how does it work? The client must trust the server certificate. Usually it has a list of the Certification Authorities for which certificates are accepted. Other certificates need to be added to the trust list; if not, the communication will be rejected. I have also seen some scenarios where you need to install the certificate on the client machine for authentication purposes, for example if you are using an email client. How does this work? Probably the same case as the previous one. Also, the public part of a user's certificate can be used to encrypt a message for him. | I am trying to wrap my head around certificates and any help is appreciated. So far this is what I understand; please correct me if I am wrong. When using the browser, when I navigate to the https site the browser downloads the certificate (without the private key) and then continues to send the data over https? I have come across some sites (especially when developing) that require you to install the certificate on the local machine before making a service call. What is the purpose here and how does it work? I have also seen some scenarios where you need to install the certificate on the client machine for authentication purposes, for example if you are using an email client. How does this work? | Certificates, install in local machine before calling a service
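The "client verifies the server's certificate, but never sees the private key" behavior can be seen in Python's standard ssl module; a minimal sketch (the dev-server.pem path is hypothetical):

```python
import ssl

# A default *client* context verifies the server's certificate chain
# against the local trusted CA store; the server's private key is never
# transmitted, the server only proves possession of it by signing
# handshake data.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate checking is on
print(ctx.check_hostname)                    # hostname checking is on

# Trusting an extra certificate (e.g. a self-signed development server)
# means adding its *public* certificate to the trust list, e.g.:
# ctx.load_verify_locations(cafile="dev-server.pem")  # hypothetical path
```

This mirrors the answer: installing a certificate on a client machine just extends that trust list; no private key changes hands.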
Luckily, the fix is super simple: just change the line location /post/ { to location /post {; the extra slash matches only requests to /post/<something else>, which, based on your description, isn't what you want.
In fact, you may even want to change that line to location =/post { if you want to match only requests to /post, not requests to /post<some other string> or /post/<some other string> as well.
|
I'm currently using nginx as a forward proxy for websockets, and it has been working fine up to now. Client connects with:
var ws = new WebSocket('ws://10.30.0.142:8020/');
I want to forward a post request as well. In this case, client adds /post to the ws address, so that the address is extended to 'ws://10.30.0.142:8020/post'. However requests to that address return:
http://10.30.0.142/post 404 (Not Found)
I'm using the following configuration file (nginx.conf), which most probably is wrong for the post request (location /post/):
upstream websocket {
server 127.0.0.1:8010;
}
server {
listen 8020;
server_name server;
root /var/www/html;
expires off;
keepalive_timeout 0;
access_log /dev/null;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
location /post/ {
proxy_pass http://127.0.0.1:8010;
}
location ~* \.(?:css|js|map|jpe?g|gif|png)$ { }
}
}
How should I configure this file correctly to solve this issue?
| Nginx as a forward proxy for websockets and post requests |
While you did not provide your code, I will answer your question based on your explanation. First, regarding DoFn.start_bundle(): this function is called for every bundle, and it is up to Dataflow to decide the size of these, based on the metrics gathered during execution. Second, DoFn.setup() is called once per worker; it will only be called again if the worker is restarted. Moreover, as a comparison, DoFn.processElement() is called once per element. Since you need to refresh your query twice per week, it would be the perfect use for a SideInput using the "Slowly-changing lookup cache" pattern. You can use this approach when you have a lookup table which changes from time to time, so you need to update the result of the lookup. However, instead of using a single query in batch mode, you can use streaming mode. It allows you to update the result of the lookup (in your case the query's result) based on a GlobalWindow. Afterwards, having this side input, you can use it within your main stream PCollection. Note: I must point out that, as a limitation, side inputs won't work properly with huge amounts of data (many GBs or TBs). Furthermore, the linked explanation is very informative. | I have a streaming pipeline where I need to query BigQuery as a reference for my pipeline transform. Since the BigQuery tables only change every 2 weeks, I put the query cache in setup() instead of start_bundle(). From observing logs, I saw that start_bundle() will refresh its value in the DoFn life cycle around every 50-100 elements processed, but setup() will never be refreshed. Is there any way to deal with this problem? | How long beam setup() refresh in python Dataflow DoFn life cycle?
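A stdlib-only sketch of the time-based refresh idea discussed above. This is illustrative lifecycle pseudocode, not real Apache Beam code; the two-week max_age mirrors the question's refresh interval, and the query itself is a placeholder.

```python
import time

class CachedLookupDoFn:
    """Illustrative lifecycle sketch only (NOT real Apache Beam code):
    cache an expensive lookup in setup(), and in start_bundle() refresh
    it only when it is older than max_age_seconds."""

    def __init__(self, max_age_seconds=14 * 24 * 3600):  # ~2 weeks
        self.max_age = max_age_seconds
        self.cache = None
        self.loaded_at = None

    def _refresh(self):
        # Placeholder for the real BigQuery read.
        self.cache = {"rows": "query result placeholder"}
        self.loaded_at = time.monotonic()

    def setup(self):            # called once per worker
        self._refresh()

    def start_bundle(self):     # called once per bundle
        if time.monotonic() - self.loaded_at > self.max_age:
            self._refresh()
```

Refreshing conditionally in start_bundle() avoids re-querying on every small bundle while still bounding the staleness of the cached result.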
Using a TypedArena may also be a solution.
let arena = Arena::new();
loop {
let example = arena.alloc(Very_Complicated_Struct::new());
dealing_with(example);
}
time_demanding_steps();
// Arena and all contained values are dropped
There are a few limitations you have to adhere to, notably the fact that you won't own the structure directly; you only get a &mut T.
This is a specialized case of the "garbage holder" pattern described by Matthieu M..
|
Rust developed a clever memory management system, but I have the following situation:
loop {
let mut example = Very_Complicated_Struct::new();
// very complicated data structure created
dealing_with(&mut example);
}
// End of scope, so Rust is supposed to automatically drop the
// memory of example here, which is time consuming
time_demanding_steps();
// But I want Rust to drop memory here,
// after the time_demanding_steps()
Is there a way in Rust to do so?
| How can I delay Rust's memory auto-management? |
First, you need to run a pod inside your cluster and then put that pod's IP and port inside the Endpoints yaml. Because Services expose pods within or outside the cluster, we must use either a selector or the address of the pod so that the Service can attach itself to a particular pod.
apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: <ip address of the pod>
ports:
- port: <port of the pod>
One more thing: use a StatefulSet in place of a Deployment to run the pods.
|
I am studying services in k8s from here
I have created a service without a selector and with one endpoint. What I am trying to do: I have installed Apache and it's running on port 80, and I have created a NodePort service on port 31000. Now this service should redirect ip:31000 to ip:80.
It works for the internal IP of the service but not for the external IP.
my-service.yaml
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- protocol: TCP
port: 9376
targetPort: 80
nodePort: 31000
type: NodePort
my-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: my-service
subsets:
- addresses:
- ip: <IP>
ports:
- port: 80
Output for kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m <none>
my-service NodePort 10.111.205.207 <none> 9376:31000/TCP 30m <none>
| Understanding services in kubernetes? |
Check the ports of the server by any tool.
For example: nmap <IP>
Starting Nmap 5.21 ( http://nmap.org ) at 2015-05-05 09:33 IST
Nmap scan report for <IP>
Host is up (0.00036s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open *****
139/tcp open *****
443/tcp open openssl
MAC Address: 18:03:73:DF:DC:62 (Unknown)
Check that the port is in the open state. | I am trying to connect to one Linux server from a client: openssl s_client -connect <IP of Server>:443 I am getting the following error: socket: Connection refused connect:errno=111 | OpenSSL: socket: Connection refused connect:errno=111
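Errno 111 is ECONNREFUSED on Linux: nothing is listening (or a firewall actively rejects) on that host:port. A small stdlib Python reproduction that finds a free local port, closes it, and then connects to it:

```python
import errno
import socket

# Find a local port with no listener: bind to an OS-chosen free port,
# then close it so nothing is listening there anymore.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=2)
    print("connected: something was listening after all")
except ConnectionRefusedError as exc:
    # The same failure openssl s_client reports as errno=111 on Linux.
    print("refused, errno =", exc.errno)
```

If the scan instead shows the port open and the connection is still refused, look at firewall rules between the client and the server.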
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
its a tool to help track down the lazy copiers who just copy the source code as-is. this is not preventative, nor is it a deterrent. – Ian 12 hours ago
Going by your above comment you are happy with users copying your content, just not without the formatting etc. So what you could do is provide the users an embed type of source code for that particular content just like YouTube does with videos. Into that embed source code you could add your own links back to your site, utilize your own CSS etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
answered Aug 2, 2009 at 3:22 mlevit, edited May 23, 2017 at 12:30
Comment by Ian (Aug 3, 2009 at 17:08): No, we aren't happy with people copying our content either with or without formatting. This is simply a tool to help us track down those who have already copied it.
|
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to ‘watermark’ our content by including each customer’s user id in a fake css class, for example <p class='userid_1234'> (except not so obvious, of course :), that would help us track the source of the copying, and then we place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~.8ms to ~2.5sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? ( ha, ha, that there’s just a tiny topic i’m sure.. )
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
| Content Water Marking |
This definitely looks like a DNS issue: you can access sites via IP but not through the URL.
You need to configure your router's DNS settings; personally I would set them to Google's DNS servers:
8.8.8.8 (primary)
8.8.4.4 (secondary)
|
Though I have the same problem as in this question, I am facing another problem: even https://github.com/ is not loading in the browser, showing "this webpage is not available", and this happens after I installed Heroku.
Cannot access github from terminal and not even from browser.
After diagnosing, I got to know that this problem is related to my WiFi: when I try to open https://github.com/ on mobile using mobile data it opens, but when I connect my mobile to WiFi the same problem occurs, i.e., "this webpage is not available". When I use mobile-data internet on my laptop I am able to access the site, but when using WiFi the webpage is not available.
I am running Ubuntu (Linux), dual boot with Windows 7.
Diagnosis Results:
@Aroll605
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=57 time=144 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=57 time=71.4 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=57 time=192 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=57 time=112 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=57 time=137 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=57 time=58.0 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=57 time=83.6 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=57 time=104 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=57 time=128 ms
@epascarello
unknown host 192.30.252.131
| Could Not Resolve Host github.com |
"that requires me to be running the script on the cluster itself"? No, it should not: the kubernetes-client python performs operations similar to kubectl calls (as detailed here). And kubectl calls can be done from any client with a properly set .kube/config file.
Once you get the image name from a kubectl describe po/mypod, you might need to docker pull that image locally if you want more (like a docker history).
The OP Raks adds in the comments: "I wanted to know if there is a python client API that actually gives me an option to do docker pull/save/load of an image as such." The docker-py library can pull/load/save images.
|
I am currently using the Kubernetes Python SDK to fetch relevant information from my k8s cluster. I am running this from outside the cluster. I have a requirement of fetching the images of all the PODs running within a namespace. I did look at the Docker python SDK, but that requires me to be running the script on the cluster itself, which I want to avoid. Is there a way to get this done? TIA
| Fetching Docker image information using python SDK
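As a sketch of the extraction step described above, assuming pod data shaped like the Python client's V1Pod objects (the dicts below are made-up stand-ins, not real API output):

```python
# Hypothetical pod data mimicking the shape of the Kubernetes V1Pod model;
# a real script would obtain this via kubernetes.client.CoreV1Api().
pods = [
    {"metadata": {"name": "web-1"},
     "spec": {"containers": [{"name": "web", "image": "nginx:1.21"}]}},
    {"metadata": {"name": "cache-1"},
     "spec": {"containers": [{"name": "redis", "image": "redis:6"},
                             {"name": "sidecar", "image": "busybox:1.34"}]}},
]

# Collect the distinct images used by all containers in the namespace.
images = sorted({c["image"] for p in pods for c in p["spec"]["containers"]})
print(images)  # ['busybox:1.34', 'nginx:1.21', 'redis:6']
```

The same comprehension works unchanged on real `list_namespaced_pod` results if you use attribute access (`c.image`) instead of dict keys.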
Is it trying to extract the download link from the HTML page? That's error-prone and may break at any time. For such operations, check if they offer an API first. They do: https://docs.github.com/en/rest/reference/releases#get-the-latest-release

You could write something like (pseudo code):

curl \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/Atmosphere-NX/Atmosphere/releases/latest \
  | jq .assets[0].browser_download_url \
  | xargs wget -qi -

Like suggested in the comments, test each command (pipe separated) individually.
|
I want to download the two (.bin and .zip) binaries from the latest releases. I tried using the following command:

curl -s https://github.com/Atmosphere-NX/Atmosphere/releases/latest | grep "browser_download_url.*zip" | cut -d : -f 2,3 | tr -d "\" | wget -qi -

but nothing happens, the output being:

SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc

I'm open to using any other (wget, curl etc) commands.
| How to download the latest binary release from github?
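The asset-selection step of the API approach above can be tested offline. Here is a hedged sketch that filters a release payload for the .bin/.zip download URLs; the dict is a stand-in for the JSON the /releases/latest endpoint returns (field names per the GitHub REST docs, URLs are placeholders):

```python
# Stand-in for the JSON returned by /releases/latest; only the fields
# used here are included, and the URLs are placeholders.
release = {
    "tag_name": "v1.0.0",
    "assets": [
        {"name": "fusee.bin",
         "browser_download_url": "https://example.com/fusee.bin"},
        {"name": "atmosphere.zip",
         "browser_download_url": "https://example.com/atmosphere.zip"},
        {"name": "checksums.txt",
         "browser_download_url": "https://example.com/checksums.txt"},
    ],
}

# Keep only the .bin and .zip assets the question asks for.
urls = [a["browser_download_url"]
        for a in release["assets"]
        if a["name"].endswith((".bin", ".zip"))]
print(urls)  # ['https://example.com/fusee.bin', 'https://example.com/atmosphere.zip']
```

Filtering by asset name this way is more robust than taking `.assets[0]`, which silently depends on the ordering of the assets in the release.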
Put the truststore file in the resource directory of the Spring Boot project. All files in the resource directory are added to the JAR file when it is built and will be on the classpath.
|
I am accessing a database in the cloud. In their "How To" page, it is mentioned that you can put these ca.pem and service.key files in your local path. In my Spring Boot project, in application properties, I have put:

ssl.truststore.location=/Users/Me-myself/local/path/keys/client.truststore.jks

I have used commands to create the stores and I can access my remote cloud database. My question is: how can other people who clone my project be able to run it successfully on their local machines? Is it a good practice to embed these files into my spring-boot project?
| How to manage certificate, truststore and keystore key files in a Java project
Since the above solution hasn't really helped me, I decided to write an AKS cron job which syncs certificates to Azure Key Vault. If anyone is interested, I would be able to open source it.

From the comments: "I have exactly the same situation with a single AGW and AKS clusters over multiple subscriptions. If you could share or contact I would appreciate it very much!" (Stephan). "This is what I made and still use. It works like a charm. The only thing is I did not have time to do a helm chart and proper documentation for it yet. Nevertheless I did try to write some points" (Michele).
|
I have an AKS cluster running Internal nginx ingress + cert-manager, which generates Let's Encrypt certificates for SSL termination. I would like to include Application Gateway as an entry point, where I expect SSL internet traffic to hit Application Gateway and be forwarded to the nginx ingress, then to my application. I do not mind if SSL offloading is done at the App GW level or on the AKS cluster itself.

One of my biggest headaches is that Application Gateway requires a certificate when an HTTPS listener is created. Since the certificate is generated automatically on the AKS cluster, I do not see the benefit of supplying an SSL certificate to the Application Gateway, nor do I want to go through the extra work of generating a certificate and storing it in Key Vault, etc.

What is the neatest way to tackle this problem?
Potential solutions I have considered are:
Configure Application Gateway to pass SSL through to the AKS cluster
Somehow configure cert-manager to store the certificate in Key Vault
The only options I see are (but I like neither):
Purchase a certificate and store it in Key Vault (however I prefer using Let's Encrypt)
Generate the SSL certificate on the cluster and then write a script which scrapes the certificate and stores it in Azure Key Vault
Any help will be appreciated.
| Azure Application gateway with lets encrypt
Technically, the PHP script runs with the working directory of the cron process rather than the script's own directory; if that working directory were, say, /bin, then require '../includes/common.php' would look for /includes/common.php.
So yeah, you'll probably have to use full paths or use set_include_path:
set_include_path('/home/username123/public_html/includes/');
require 'common.php';
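The resolution rule is language-agnostic; this small sketch (in Python, purely to make the path arithmetic concrete) shows how the same relative path lands in different places depending on the working directory. The /bin directory stands in for a hypothetical cron working directory:

```python
import os

# A relative path resolves against the working directory of the process,
# not against the directory of the script that contains the require.
relative = os.path.join("..", "includes", "common.php")

# Hypothetical cron working directory vs. the script's own directory.
from_cron = os.path.normpath(os.path.join("/bin", relative))
from_web = os.path.normpath(
    os.path.join("/home/username123/public_html/cron", relative))

print(from_cron)  # /includes/common.php  (not where the file lives)
print(from_web)   # /home/username123/public_html/includes/common.php
```

This is why the absolute require (or set_include_path) works under cron while the relative one only works when the working directory happens to be the script's directory.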
|
I have a cron job that needs to include this file:
require '../includes/common.php';
however, when it is run via the cron job (and not my local testing), the relative path does not work.
the cron job runs the following file (on the live server):
/home/username123/public_html/cron/mycronjob.php
and here's the error:
Fatal error: require(): Failed opening required '../includes/common.php'
(include_path='.:/usr/lib/php:/usr/local/lib/php') in
/home/username123/public_html/cron/mycronjob.php on line 2
using the same absolute format as the cron job, common.php would be located at
/home/username123/public_html/includes/common.php
does that mean i have to replace my line 2 with:
require '/home/username123/public_html/includes/common.php';
?
thanks!
| PHP: Require path does not work for cron job? |
I found the issue: the appId was wrong somehow. Quite embarrassing, but by testing on a new machine I double-checked all possible changes and found the issue.
|
I'm having issues with an octokit-based nodejs app that was working just a couple of weeks ago. From nowhere, I'm getting auth errors that I've managed to debug to the point of getting this error message:
A JSON web token could not be decoded
This happens when I instantiate the Octokit for an installation App that works on Github and then ask anything of it (creating a PR, adding an issue, etc).
However, I'm not sure what does this mean. So far:
APP_ID and PRIVATE_KEY are variables stored in process.env.
The InstallationId I get from the Github app URL, and the other data (OWNER, REPO, etc) I've verified is correct.
This is a summarized version of my code:
const {Octokit} = require("@octokit/rest");
const { createAppAuth } = require("@octokit/auth-app");
const octokit = new Octokit({
authStrategy: createAppAuth,
auth: {
appId: APP_ID,
privateKey: PRIVATE_KEY,
// optional: this will make appOctokit authenticate as app (JWT)
// or installation (access token), depending on the request URL
installationId: process.env.INSTALLATION_ID,
},
});
const test = async()=>
await octokit.issues.create({
owner: OWNER,
repo: REPO,
title: "Hello world from me",
});
test()
What could be happening? Any help would be greatly appreciated.
Update:
I just tested this code on a different machine and... it works. So, I'm baffled as to why it is not running in the original machine.
| Octokit-js based app throws JSON web token error |
No. After you upload the deployment package, it's saved in the function and layer storage of your AWS Lambda account, which has a default limit of 75 GB. On each invocation of the Lambda function, the deployment package will be pulled from there.
Since the deployment package is not pulled from S3, it will not incur any data transfer cost.
|
I'm well aware that the Lambda function deployment package size limit is 50 MB (in the case of a compressed .zip/.jar) with a direct upload, and that there is a 250 MB limit (uncompressed) via upload from S3. What I'm not clear on is how Lambda deploys the package from S3. Is it on each invocation of the Lambda function? Will there be any cost associated with data transfer between S3 and the Lambda function?
| Does AWS Lambda use S3 during invocation or only during deployment?
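As a sketch of how the limits quoted above interact, here is a small helper; the numbers are the ones stated in this question and may differ from current AWS limits:

```python
# Limits as quoted in the question (current AWS values may differ):
# 50 MB zipped for direct upload, 250 MB uncompressed via S3.
ZIP_DIRECT_LIMIT = 50 * 10**6
UNCOMPRESSED_LIMIT = 250 * 10**6

def upload_path(zipped_bytes, uncompressed_bytes):
    """Return which upload route a package can take, if any."""
    if zipped_bytes <= ZIP_DIRECT_LIMIT:
        return "direct"
    if uncompressed_bytes <= UNCOMPRESSED_LIMIT:
        return "s3"
    return "too large"

print(upload_path(40 * 10**6, 120 * 10**6))  # direct
print(upload_path(80 * 10**6, 200 * 10**6))  # s3
print(upload_path(90 * 10**6, 300 * 10**6))  # too large
```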
Thanks everyone for your help and suggestions! It turns out that WP's built-in RewriteRule functions are what need to be used (at least as far as I can tell).
This solution worked for me, thanks to a coworker (James) who discovered these WP Codex examples:
http://codex.wordpress.org/Rewrite_API/add_rewrite_tag
http://codex.wordpress.org/Rewrite_API/add_rewrite_rule

add_action('init', 'dealerRewrite');
function dealerRewrite(){
add_rewrite_tag('%username%','([^&]+)');
add_rewrite_rule('^dealers/dealers-info/([^/]*)/?','index.php?page_id=112&username=$matches[1]','top');
}
|
I've been poring over every Stack Overflow topic I can find on htaccess & vanity URLs within Wordpress, but I'm completely stuck as to why mine isn't working. I'm a complete noob with htaccess, so I'm sure that has a lot to do with it.

I am trying to format all URLs pointing to /dealers/dealers-info/username to the same Wordpress page (id 112 - aka 'dealers-info') with the username as a parameter. The vanity URL code added is right after # vanity urls.

For example, passing URL.com/dealers/dealers-info/watergallery, where 'watergallery' is the username, displays a basic 404:

Not Found
The requested URL /dealers/dealers-info/watergallery was not found on this server.

Any insight is greatly appreciated - thanks in advance for your help!

[EDIT - removed leading / and moved the rule - now seeing a WP 404 page]

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# add a trailing slash to /wp-admin
RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]
RewriteRule ^uvmax/blog/$ /blog [R=301,L]
RewriteRule ^dealer-finder/$ /dealers [R=301,L]
RewriteRule ^sterilight/blog/$ /blog [R=301,L]
# vanity urls
RewriteRule ^/dealers/dealers-info/(.*)$ index.php?p=112&username=$1 [L]
RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
RewriteRule . index.php [L]
| .htaccess, Wordpress & vanity URLs
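As a quick sanity check of the capture pattern used in the add_rewrite_rule() call above (Python's re flavor is close enough to the PCRE used by WordPress/Apache for this simple case, which is an assumption worth noting):

```python
import re

# Same pattern as in the add_rewrite_rule() call above.
pattern = re.compile(r"^dealers/dealers-info/([^/]*)/?")

match = pattern.match("dealers/dealers-info/watergallery/")
print(match.group(1))  # watergallery
```

The first capture group is what ends up in the %username% query var.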
First create the sub-domain blog.abc.com, then use the following .htaccess code to turn abc.com/blog into blog.abc.com:

RewriteCond %{HTTP_HOST} ^blog.abc.com$ [NC]
RewriteCond %{REQUEST_URI} !^/blog/
RewriteRule ^(.*)$ /blog/$1 [L]

Finally, go to blog.abc.com.
|
Suppose I have a domain as abc.com, which is inside the /htdocs/ directory. Now I am adding a blog to that as /htdocs/blog/. I want this blog to be browsed as blog.abc.com instead of abc.com/blog, but I don't want to do it for all directories. How do I do it using .htaccess or some PHP code? I want to create a subdomain only for this blog, not for the other directories present inside /htdocs/. Which is the correct way to do it, and what are the possible ways to do it?
| How to create directory based subdomain
There are two good guides on setting up Kubernetes manually:
Kelsey Hightower's Kubernetes the Hard Way
The Kubernetes guide on getting started from scratch
Kelsey's guide assumes you are using GCP or AWS as the infrastructure, while the Kubernetes guide is a bit more agnostic.
I wouldn't recommend running either of these in production unless you really know what you're doing. However, they are great for learning what is going on under the hood. Even if you just read the guides and don't use them to set up any infrastructure, you should gain a better understanding of the pieces that make up a Kubernetes cluster. You can then use one of the helpful setup tools to create your cluster, but now you will understand what it is actually doing and can debug when things go wrong.
|
While getting familiar with kubernetes I see tons of tools that should help me install kubernetes anywhere, but I don't understand exactly what they do inside, and as a result I don't understand how to troubleshoot issues. Can someone provide me a link with a tutorial on how to install kubernetes without any tools?
| how to install kubernetes manually?
To display them on screen, iOS has to uncompress your images and that's where your spike comes from.
2048 * 1536 = 3145728 pixels. At 4 bytes per pixel that is 12 MB.
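Spelling out that arithmetic (4 bytes per pixel, i.e. 32-bit RGBA, is the usual decoded format, though the exact layout is an assumption):

```python
width, height, bytes_per_pixel = 2048, 1536, 4  # retina iPad, 32-bit RGBA

# Decoded (uncompressed) in-memory size of the full-screen image.
uncompressed = width * height * bytes_per_pixel
print(uncompressed)          # 12582912 bytes
print(uncompressed / 2**20)  # 12.0 (MB)
```

The PNG on disk is much smaller, which is why the spike only appears when the image is decoded for display.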
|
My app runs in Instruments taking up approximately 700 KB of Live Bytes on average while running. However, every time it loads a new full-screen image, the memory allocations jump about 10 MB for a second, and then recover to the normal 700 KB level.
This is okay at the beginning, but once it has happened a few times I receive memory warnings and the app quits, even though the total Live Bytes stabilises well under the 1 MB mark.
I have created a test project to see why this is happening. It is a Single View Application with only the following code in the View Controller:
- (void)viewDidLoad
{
[super viewDidLoad];
NSString *imgFile = [[NSBundle mainBundle] pathForResource:@"00-bg" ofType:@"png"];
UIImage *img = [[UIImage alloc] initWithContentsOfFile:imgFile];
UIImageView *backgroundImageView = [[UIImageView alloc] initWithImage:img];
[img release];
[self.view addSubview:backgroundImageView];
[backgroundImageView release];
}
The output from Instruments (Leaks) looks like this:
I have tried both ARC and non-ARC, and the only difference is the length of the spike (ARC seems to hold onto the memory for longer).
I have also tried both UIImage imageNamed: and initWithContentsOfFile: but the results are the same.
Why is this spike happening? And is there anything I can do to avoid it?
| Loading full-screen Retina image on iPad causes massive 10 MB spike |
This behaviour is caused by Apache and is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.

It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:

# Run container interactively:
docker run -it
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp :/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully

Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]

When running the container with the command docker run -p 8080:80 , the container starts and remains running, allowing the default Apache web page to be accessed on https://localhost:8080 from the host as expected. With this run command, however, I am not able to quit the container using Ctrl+C, also as expected, since the container was not launched with the -it option. Now, if the -it option is added to the run command, the container exits immediately after startup. Why is that? Is there an elegant way to have Apache run in the foreground while exiting on Ctrl+C?
| Docker container exits when using -it option
Analysis of C# projects must be done on a Windows environment. Here it's failing because the project is analysed on Linux. The thing is that the SonarQube C# Plugin really is targeted to the Windows .NET ecosystem, which is where you'd anyhow build/maintain .NET projects. It must be used with the SonarQube Scanner for MSBuild, which requires MSBuild 14.0.
ERROR: java.io.IOException: Cannot run program "/opt/sonarqube-5.4/projects/ArturiCSharpSort/.sonar/SonarLint.Runner/SonarLint.Runner.exe": error=13, Permission denied
ERROR: Caused by: Cannot run program "/opt/sonarqube-5.4/projects/ArturiCSharpSort/.sonar/SonarLint.Runner/SonarLint.Runner.exe": error=13, Permission denied
ERROR: Caused by: error=13, Permission denied

Effectively:

-rw-r--r-- 1 root root 19456 Apr 5 11:14 .sonar/Lint.Runner/SonarLint.Runner.exe

The following is my sonar-project.properties:

# Root project information
sonar.projectKey=ArturiCSharpSort
sonar.projectName=ArturiCSharpSort
sonar.projectVersion=1.0
# Some properties that will be inherited by the modules
sonar.sources=.
#List of the module identifiers
#sonar.modules=
# Properties can obviously be overriden for
# each module - just prefix them with the module ID
#module1.sonar.projectName=
#module2.sonar.projectName=I try to act a chmod on SonarLint.Runner.exe but it is rebuil at every analysis.
How can I run analysis on C# project? | C# project in sonarqube |
Ensure that 7946/tcp, 7946/udp, and 4789/udp are open and available to all nodes in the cluster BEFORE docker swarm init.
Not sure why, but if they are not open PRIOR to creating the swarm, they will not properly load balance.
https://docs.docker.com/engine/swarm/ingress/
|
When I deploy a service on a swarm using:
docker service create --replicas 1 --publish published=80,target=80 tutum/hello-world
I can access the service only from the ip of the node running the container. If I scale the service to run on both nodes, I can access the service from both ips, but it will never run from a container on the other node. (as confirmed by the tutum/hello-world image).
The documentation suggests that load balancing should work when it says:
Three tasks will run on up to three nodes. You don’t need to know which nodes are running the tasks; connecting to port 8080 on any of the 10 nodes will connect you to one of the three nginx tasks.
The swarm was created using swarm init and swarm join.
Using docker network ls the ingress swarm network is found on both nodes:
NETWORK ID NAME DRIVER SCOPE
cv6hk9wce8bf ingress overlay swarm
Edit:
Manager node runs linux, worker node runs OSX. Running modinfo ip_vs on the manager nodes returns:
filename: /lib/modules/4.4.0-109-generic/kernel/net/netfilter/ipvs/ip_vs.ko
license: GPL
srcversion: D856EAE372F4DAF27045C82
depends: nf_conntrack,libcrc32c
intree: Y
vermagic: 4.4.0-109-generic SMP mod_unload modversions
parm: conn_tab_bits:Set connections' hash size (int)
Running modinfo ip_vs_rr returns:
filename: /lib/modules/4.4.0-109-generic/kernel/net/netfilter/ipvs/ip_vs_rr.ko
license: GPL
srcversion: F21F7372F5E2331EF5F4F73
depends: ip_vs
intree: Y
vermagic: 4.4.0-109-generic SMP mod_unload modversions
Edit 2:
I tried adding a linux worker to the swarm, and it worked as advertised, so the problem appears to be related to the OSX machine.
The problem is solved for me, however, I'll let the question stay for future reference.
| Docker swarm mode routing mesh not working |
This GitHub issue discussion might be helpful: https://github.com/citation-file-format/citation-file-format/issues/339
The intent behind the citations.cff file is to cite the software and hence, there can only be one citation (by design). The preferred citation is intended as an override so that the user can customise the citation.
|
I want to contribute to making citation easier by uploading CITATION.cff files to the packages I have used in a project. One such package, Pyomo, has specified two sources to which to refer if you used their work. CITATIONS.cff have a "preferred citation" category which fits this quite well but I am not sure how to include two sources or if that is at all possible. I have tried simply appending the second source but it seems to override certain fields rather than citing two sources (the latest year and DOI are applied for example). This is how far I have gotten:
cff-version: 1.2.0
title: Pyomo
message: >-
If you use Pyomo in your research, please cite the
Pyomo book and the Pyomo paper.
type: software
authors:
- given-names: Michael
family-names: Bynum
- given-names: Gabriel
family-names: Hackebeil
- given-names: William
family-names: Hart
- given-names: Carl
family-names: Laird
identifiers:
- type: doi
value: 10.1007/978-3-319-58821-6
description: Pyomo — Optimization Modeling in Python
- type: doi
value: 10.1007/s12532-011-0026-8
description: >-
"Pyomo: modeling and solving mathematical
programs in Python"
repository-code: 'https://github.com/Pyomo/pyomo'
url: 'http://www.pyomo.org/'
preferred-citation:
type: article
authors:
- given-names: Michael
family-names: Bynum
- given-names: Gabriel
family-names: Hackebeil
- given-names: William
family-names: Hart
- given-names: Carl
family-names: Laird
doi: "10.1007/s12532-011-0026-8"
journal: "Mathematical Programming Computation"
month: 9
start: 219 # First page number
end: 260 # Last page number
title: "Pyomo: modeling and solving mathematical programs in Python"
issue: 3
volume: 3
year: 2011
Is it possible to include another preferred citation? If so, how?
| Having two preferred citations when implementing a CITATION.cff to a project |
The best way to do it is to use .htaccess. Here is a website that will generate htaccess country-block codes: http://www.ip2location.com/blockvisitorsbycountry.aspx
|
I want to restrict which countries can access my webpage, but I can't get it to work. I tried following this guide. It gives me lots of errors when it is hosted on my localhost (XAMPP); check the error from the screenshot. And if it is uploaded to some free hosting, it gives me an infinite "LOOP" when accessing domain.com/index.php or domain.com/redirect.php. But on this webpage it works, and shows my details perfectly (country, country code, latitude, longitude etc.). I've pasted those two files in the same folder and am trying to access it on my computer at: http://localhost/geoplugin.class/index.php
| Country based redirection
You may need to think of solutions other than dynamically changing the memory size, including:
allocating a reasonable amount of memory to start with.
decreasing the resolution of your captured images (which is what I did for a similar problem by decreasing image size)
caching your images to disk when not immediately needed vs. a combination.
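The second option pays off quadratically; halving each dimension cuts the decoded size by a factor of four (a sketch using the 3-byte RGB format mentioned in this question):

```python
def bitmap_bytes(width, height, bytes_per_pixel=3):
    # Decoded in-memory size of an uncompressed RGB bitmap.
    return width * height * bytes_per_pixel

full = bitmap_bytes(1280, 800)  # 3072000 bytes per screenshot
half = bitmap_bytes(640, 400)   # 768000 bytes per screenshot
print(full // half)             # 4
```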
|
Possible Duplicate:
Setting JVM heap size at runtime
Is it possible to prevent a program from crashing when it encounters a OutOfMemoryError by increasing the memory allowed to the program?
Can it be done at run time?
Reason for increasing the memory
I was talking a lot of screen shots using java.awt.Robot and after some time my Vector ran out of memory. At 60 BufferedImage it was out.
so 1280 x 800 resolution, 3 byte RGB BufferedImage and 60 images later, the vector was out.
So I guess the memory consumed was
1280 x 800 x 60 x 3 = do the math bytes
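Doing that math explicitly (a quick sketch with the numbers given above):

```python
width, height, bytes_per_pixel, images = 1280, 800, 3, 60

# Total decoded size of 60 uncompressed RGB screenshots.
total = width * height * bytes_per_pixel * images
print(total)          # 184320000 bytes
print(total / 2**20)  # ~175.8 MiB, easily past a small default heap
```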
| Dynamically increase the memory of Java programs [duplicate] |
No, there is currently no way to set the number of replicas per container in Kubernetes, and this is also unnecessary.

Pods are the smallest, atomic unit of k8s; the Pod is the most basic deployable object in Kubernetes. Pods contain one or more containers; basically, the concept of a pod came about as a wrapper around the container. You can scale a pod up or down by giving the number of replicas; it is not done at the container level.

As per the k8s pod doc: A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
|
Is there a way for me to set the number of replicas by container instead of by pod? Thanks!
| Managing replicas per-container rather than per-pod
In your nginx config, change the line proxy_pass http://13.65.148.35:8080; to proxy_pass http://127.0.0.1:8080;
You're providing the externally accessible IP to the proxy pass, so nginx will conform to the firewall settings in the same way that an external user would; that is, not be able to access port 8080. Make sure it's communicating within the local scope of the server.
|
I am testing a sample node app using nginx.
But I get 504 Gateway Time-out. nginx/1.4.6 (Ubuntu 14.04)
I saw others posts related to the same topic but its of no use.
Below is the procedure which I followed for installing node, nginx on Azure.
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo apt-get install -y build-essential
curl -Lo hello.js http://do.co/node-hello
sudo nano app.js
app.js file
var http = require('http');
http.createServer(function (req, res) {
console.log('Came here');
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'localhost');
console.log('Server running at http://localhost:8080/');
ls -l
-rwxrwxrwx 1 root root 265 Mar 12 15:52 app.js
sudo npm install pm2 -g
pm2 startup
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup upstart -u azureuser --hp /home/azureuser
pm2 start app.js
Nginx Server
sudo apt-get update
sudo apt-get install nginx
sudo nano /etc/nginx/sites-available/default
sudo nano /etc/nginx/sites-available/default file
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name testingnode.cloudapp.net;
location / {
proxy_pass http://13.65.148.35:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
sudo service nginx restart
http port 80 is opened in azure dashboard
So after all these configurations, trying to open http://13.65.148.35/ or testingnode.cloudapp.net gives a 504 timeout.
Please let me know if anything else needs to be configured to run Node with nginx.
| 504 Gateway Time-out nginx/1.4.6 (Ubuntu) in Node.js and nginx |
In some cases, it could be a squash-merged PR.
The commit authorship then belongs entirely to the person who merged the PR. GitHub seems to credit you with a contribution on your profile, but since it's not counted as a commit, there's a possibility that the changes don't make you a contributor.
When a pull request is merged and commits are squashed, only the user that merged the pull request and the user that opened the pull request receive contribution credit. No other contributors to the pull request will receive contribution credit. link
Update: only the "top" 100 contributors are crawled; it's a strict limit of 100, and the rest are cut off. You only contributed once, so you end up hidden in the cut-off part.
The best way to confirm whether you have the contributor status is to use the recently implemented badge feature in the comments. You seem to have it.
|
I had a PR merged to the master branch of an open source project on GitHub. The project is not a fork, and if I view the file in the master tree that I edited in my PR, I am listed there as a contributor to that file.
For some reason, however, I am not appearing in the project's main "contributors" page. Does anyone know why this would be? I've already viewed this GitHub help page about contributions showing up on my own profile but my question is about showing up in the "contributors" page of the actual project.
For reference, here is the PR in question: https://github.com/octokit/octokit.rb/pull/780
| Why am I not listed as a contributor to a project that merged my PR? |
If you omit the second argument to listen(), node will listen on all IP addresses. That way you can run the same code locally to test and also on your EC2 instance.
In your catch block, you might also want to send back an HTTP error response to the client.
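The binding distinction can be demonstrated with a plain socket; this Python sketch mirrors Node's behaviour of listening on all interfaces when no host argument is given:

```python
import socket

# Bind with an empty host: listens on all interfaces (like omitting the
# second argument to Node's listen()); port 0 asks the OS for a free port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("", 0))
host, port = s.getsockname()
print(host)      # 0.0.0.0  (wildcard address, reachable externally)
print(port > 0)  # True
s.close()
```

Binding to "127.0.0.1" instead would restrict the server to loopback, which is why the EC2 instance was unreachable from outside.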
|
I have a long procedure I have written in node.js, but I'd like the PHP side of my application control kicking it off. My node is something like:
var http = require('http');
http.createServer(function (req, res) {
console.log('Got request')
try{
doProcedure()
} catch(e) {
console.log('Really bad error while trying to do the procedure:')
console.error(e.stack ? e.stack : e)
}
}).listen(8124, "127.0.0.1");
When I run this on my local machine, http://localhost:8124 will trigger things correctly. On aws, I've added the 8124 port but requests to mydomain.com:8124 aren't picked up by node.
I tried stopping httpd and then letting node listen on port 80, to rule out ports not being forwarded correctly, but it still saw nothing.
So two questions, I guess:
How to get Node listening as a daemon, so I can pass over requests? ("update user x", "update user y", "update all users", etc)
How do I ping that daemon from php to start these procedures in an AWS evironment?
Bonus question: Is there a better way I should be doing this?
Thanks all,
~Jordan
| Pinging a Node.js server from PHP on AWS |
When you create a trigger you have the option to specify credentials: see "Execute As" (http://msdn.microsoft.com/en-us/library/ms189799.aspx and http://msdn.microsoft.com/en-us/library/ms188354.aspx).
|
I have a SQL Server 2008 database with 10 Windows users who all have permissions to insert, update and delete on tables. Each table has a trigger that writes to an audit table in a different database. Currently, for this to work, I have to give each user write permissions on the audit database as well, otherwise the trigger will throw an error. I could give insert permission to each individual user, but I was hoping there might be a more elegant solution for this problem, especially from the standpoint that users get deleted/added, which would mean setting them up in two databases rather than one. Ideally I would like to use one account that does all the audit work.
| How do I write to an external audit database in SQL Server 2008?
git push doesn't mix (merge in git-speak) anything. It takes your local copy, and tells the remote side that this is now the current version. The old version will still be available using its commit ID, or you can create a tag or branch to refer to it.
git log will show you all previous versions of your repo.
git commit -m "Old stuff"
(change loads of files)
git commit -m "New changes"
git log --oneline
10a3fe2 New changes
7f0ceaa Old stuff
You can get back to any old committed version of your code with git checkout:
git checkout 7f0ceaa
And back to the current version with
git checkout master
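For instance (a throwaway sketch; the repo, file and tag names here are made up), you can tag the old version before pushing the remake, and then check either version out at any time:

```shell
#!/bin/sh
# Two "versions" of a site in one repo, both reachable forever.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email you@example.com
git config user.name you

echo "version 1" > index.html
git add . && git commit -qm "Old stuff"
git tag v1                      # permanent, human-readable name for version 1

echo "version 2" > index.html
git add . && git commit -qm "New changes"
git tag v2

git checkout -q v1              # back to the first version
cat index.html
git checkout -q master          # forward to the current one
cat index.html
```

git clone always fetches the whole history, so anyone cloning the repo can check out v1 or v2 the same way.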
|
I am trying to understand how version control works in git/github. I can clone, pull, merge or add and remove remotes. What I don't understand is how to separate versions.
Here is an example. Suppose I have an entire site stored in github.com/user/foobar_site. Now, assume I have completely remade my local copy of foobar_site, but I don't want to push this and have it get mixed with my remote copy. So, what I would like is to keep my remote intact and push my local to it without mixing the two. The end result would be, I will be able to git clone the first version of foobar_site and the second version at any time.
Is such thing possible? if so, then how?
| Making versions in git |
Git is telling you that you must first:
git fetch
and later
git add / commit / push
fetch is similar to pull, but pull also merges the fetched data into the files of your local branch, while fetch only updates the branch structure and commit ids.
The push is rejected because someone else has committed changes in the meantime, so your branch must be brought up to date (merge the fetched work) before your work can be pushed.
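As a self-contained sketch of that cycle (all repo names and paths here are made up; remote.git stands in for GitHub, alice and bob for two developers):

```shell
#!/bin/sh
# Reproduce the rejected push, then fix it with fetch + merge + push.
set -e
work=$(mktemp -d)
cd "$work"

git init -q --bare remote.git                     # stands in for GitHub
git -C remote.git symbolic-ref HEAD refs/heads/master

git clone -q remote.git alice 2>/dev/null         # first developer
git -C alice config user.email alice@example.com
git -C alice config user.name alice
git -C alice symbolic-ref HEAD refs/heads/master
( cd alice && echo one > a.txt && git add . && git commit -qm "first" \
  && git push -q -u origin master )

git clone -q remote.git bob                       # second developer
git -C bob config user.email bob@example.com
git -C bob config user.name bob

# alice pushes more work that bob does not have yet
( cd alice && echo two > a.txt && git commit -q -a -m "second" && git push -q )

# bob commits locally; his push is now rejected ("fetch first")
( cd bob && echo three > b.txt && git add . && git commit -qm "bobs work" )
( cd bob && ! git push -q 2>/dev/null )

# fetch the remote work, merge it in, and the push goes through
( cd bob && git fetch -q && git merge -q -m "merge remote work" origin/master \
  && git push -q )
```

After the last push, the remote history contains both developers' commits plus the merge commit.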
|
I keep getting an error saying: rejected master-> master (fetch first), failed to push some refs.... because remote contains work you do not have locally.
I just want git to overwrite the files currently in the repository with the new uploads so I've been trying to use git push -u origin master, but this error keeps popping up. I'm brand new to git/github. Why is this happening?
I've tried to merge the existing files in the repo with the files on my desktop, but I keep getting merge conflicts. Not sure how to deal with these.
| Why does Git require me to pull before I push? |
Your best bet is to maximize register usage so that when you read a temporary you don't end up with extra (likely cached) memory accesses. The number of registers depends on the system, and register allocation (the logic that maps your variables onto actual registers) depends on the compiler. So your best bet is, I guess, to expect only one register and expect its size to be the same as the pointer's. Which boils down to a simple for-loop dealing with blocks interpreted as arrays of size_t.
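A portable sketch of that for-loop idea, using the prototype from the question (hedged: it copies through a small stack buffer with memcpy rather than casting to size_t*, which would risk alignment and strict-aliasing problems):

```c
#include <stddef.h>
#include <string.h>

/* Swap elements a and b of the array starting at `base`.
 * Works for any element size, with no heap allocation. */
void swap_elements_of_array(void *base, size_t size_of_element, int a, int b)
{
    unsigned char *pa = (unsigned char *)base + (size_t)a * size_of_element;
    unsigned char *pb = (unsigned char *)base + (size_t)b * size_of_element;
    unsigned char tmp[64];      /* chunk size: the space/time tuning knob */
    size_t left = size_of_element;

    while (left > 0) {
        size_t n = left < sizeof tmp ? left : sizeof tmp;
        memcpy(tmp, pa, n);     /* tmp <- A   */
        memcpy(pa, pb, n);      /* A   <- B   */
        memcpy(pb, tmp, n);     /* B   <- tmp */
        pa += n;
        pb += n;
        left -= n;
    }
}
```

Compilers typically turn fixed-size memcpy calls into wide register moves, so this tends to optimize well without any non-portable tricks.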
|
What is the fastest way to swap two non-overlapping memory areas of equal size? Say, I need to swap (t_Some *a) with (t_Some *b). Considering space-time trade-off, will increased temporary space improve the speed? For example, (char *tmp) vs (int *tmp)? I am looking for a portable solution.
Prototype:
void swap_elements_of_array(void* base, size_t size_of_element, int a, int b);
| C - fastest method to swap two memory blocks of equal size? |
You can resize your GKE cluster to "0" when you don't need it, with the below command:

gcloud container clusters resize CLUSTERNAME --size=0

Then you won't be charged; GKE charges only for worker nodes and not for master nodes.

And if you want to make sure your data is persistent each time you scale your cluster, then you will need to use gcePersistentDisk.
You can create the PD using gcloud before mounting it to your deployment:

gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk

Then you can configure your Pod like in the example here. Just make sure to mount all necessary container paths on the Persistent Disk. For more information, check the Kubernetes Engine pricing. | I am hosting a jupyterhub with kubernetes on my google cloud account. I noticed that google cloud charges me for the runtime that the jupyterhub instance is running. I am wondering if I can sorta shut down the jupyterhub instance or the kubernetes cluster when we are not using the jupyterhub, to save money? If I restart the instance, will the data be wiped out? I want to do an experiment on this but I am afraid of doing something irreversible. Also, where can I learn more about administration tips for using google cloud?
Thanks! | Pricing problem regarding running Jupyterhub with Kubernetes on Google Cloud |
You shouldn't be throwing a DomainException for authorization errors. Due to the way Silverlight handles faults, these responses can still be cached by your browser. Instead, throw an UnauthorizedAccessException from your DomainService and that should fix the caching error on the client. | We have a problem with HTTP response caching when using WCF RIA Services with Silverlight. Server side, we have a simple DomainService GET method with no caching specified, like this:

[OutputCache(OutputCacheLocation.None)]
public IQueryable<SearchResults> GetSearchResults(string searchText);

This throws a DomainException when the user is not authenticated (i.e. when the FormsAuthenticationCookie expires). This is as designed.

But when the user is re-authenticated, and the query is called again with the same 'searchText' parameter, then the query never gets to the server (no breakpoint hit; Fiddler shows no HTTP request sent).

I think this is because when the exception is thrown on the server, the HTTP response has the 'Cache-Control' property set to 'private', and when the client wants to perform the same query later (once the user is logged in), then the browser does not even send the request to the server.

If we enter a different search parameter, then the query is re-executed no problem.

Is there any way of ensuring the HTTP response always has 'no-caching', even when it does not return normally?

UPDATE1: The problem only occurs when deployed to IIS; when testing from Visual Studio with either Cassini or IIS Express it works fine.

UPDATE2: I updated the question to reflect new knowledge. | WCF RIA Services queries that throw exceptions have caching problems |
<div class="s-prose js-post-body" itemprop="text">
<p>You can, with the <strong>multi-stage builds</strong> feature introduced in Docker 17.05.</p>
<p>Take a look at this:</p>
<pre><code>FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
</code></pre>
<p>Then build the image normally:</p>
<pre><code>docker build -t alexellis2/href-counter:latest .
</code></pre>
<p>From : <a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="noreferrer">https://docs.docker.com/develop/develop-images/multistage-build/</a></p>
<blockquote>
<p>The end result is the same tiny production image as before, with a significant reduction in complexity. You don’t need to create any intermediate images and you don’t need to extract any artifacts to your local system at all.</p>
<p>How does it work? The second FROM instruction starts a new build stage with the alpine:latest image as its base. The COPY --from=0 line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.</p>
</blockquote>
</div> | <div class="s-prose js-post-body" itemprop="text">
<p>I have a few Dockerfiles right now.</p>
<p>One is for Cassandra 3.5, and it is <code>FROM cassandra:3.5</code></p>
<p>I also have a Dockerfile for Kafka, but it is quite a bit more complex. It is <code>FROM java:openjdk-8-jre</code> and it runs a long command to install Kafka and Zookeeper.</p>
<p>Finally, I have an application written in Scala that uses SBT. </p>
<p>For that Dockerfile, it is <code>FROM broadinstitute/scala-baseimage</code>, which gets me Java 8, Scala 2.11.7, and SBT 0.13.9, which are what I need.</p>
<p>Perhaps, I don't understand how Docker works, but my Scala program has Cassandra and Kafka as dependencies and for development purposes, I want others to be able to simply clone my repo with the <code>Dockerfile</code> and then be able to build it with Cassandra, Kafka, Scala, Java and SBT all baked in so that they can just compile the source. I'm having a lot of issues with this though. </p>
<p>How do I combine these Dockerfiles? How do I simply make an environment with those things baked in?</p>
</div> | Is there a way to combine Docker images into 1 container? |
If you have a Tesla board or high-end Quadro and running on Windows Server 2008 R2 64bit, Windows 7 64bit (or 32/64bit Linux) then you can use NVML to do that.
Download the latest NVML SDK (Tesla Deployment Kit) and take a look at these two functions:
nvmlReturn_t nvmlDeviceGetComputeRunningProcesses (nvmlDevice_t device,
unsigned int infoCount,
nvmlProcessInfo_t * infos)
nvmlReturn_t nvmlDeviceGetTemperature (nvmlDevice_t device,
nvmlTemperatureSensors_t sensorType,
unsigned int * temp)
Watch out for:
nvmlReturn_t nvmlDeviceGetFanSpeed (nvmlDevice_t device, unsigned int * speed)
It "retrieves the intended operating speed of the device’s fan" not real fan speed. So you can't use it for checking fan failures.
I'm not aware of nvmlDeviceGetComputeRunningProcesses replacement that'd work on GeForce boards, but Windows NvAPI (which also works on GeForce) allows to query both Fan Speed and Temperature.
answered Oct 23, 2012 at 10:51 by Przemyslaw Zych
|
|
I have a temperature monitor program I wrote a while back which monitors the temperatures and fans on my AMD Graphics cards, checking for fan failure OR overheats.
The problem with it, is that it needs to know in advance which process will be using the GPU(Graphics Processing Unit), in order to kill it or gracefully make it stop to avoid overheating.
To make my program more dynamic, I needed a way to find which process is using the GPU, much like which process is using CPU time(Task Manager). One such application is Process Explorer from SysInternals.
I am asking, how may I do this in Windows in C? I am aware that if there is such a way, it would target Vista and above.
| How do I get GPU usage per process? |
Let me explain a few things first.
Backing up the previous version
Firstly, you need to identify your current application's installation folder. You can create a registry key to save where the application is installed (you need to do this in the first-time installer of your application). For that you can use Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ProductName\appPathHere"). Then, in your new installer, you can read the registry key to get the path of the application. Then, what you can do is create a ZIP of that path/folder. For that you can use:
System.IO.Compression.ZipFile.CreateFromDirectory(pathofApp, zipFilePath);
This will back up the current application. You can even modify the file type/extension to give it your custom type/extension.
Installing the application
Read the registry key to get the path of the installed file. Delete it using System.IO.Directory.Delete(path, true). You can ZIP all your files and then make your installer extract it to the specific location. You can simply use:
System.IO.Compression.ZipFile.ExtractToDirectory(zipPath, extractPath);
Creating the installer
I suggest you create a WinForms or WPF application, design the UI and implement the above methods.
This is not the ideal way, but it will give you an idea of how to get it done with basic knowledge. Hope it helps you.
|
I am trying to create an installation program that will backup the previous version of a C# program before updating it. I'm using VS 2015, and have looked at the installer, advanced installer and InstallShield LE. I don't really know what I'm looking at, how to use custom actions, pretty much anything. Any advice or help would be appreciated.
| Creating a C# installation package that backs up the previous version before updating. |
You need to generate the certificate on the iOS Provisioning Portal, according to the instructions on the docs (also available on the portal). You don't need to worry about the details of the certificate; the Provisioning Portal takes care of all the details for you. You need to do this for each of you App IDs and you need to be the Agent of your organization to do this. | I need to integrate apple push notifications. I am not clear about the certificate needed for the server. As I know this is a generated certificate form server end and does it needed to be signed by a valid certificate authority? If so how does APNS going to validate this?For the apple sandbox cant we proceed with a test certificate?Is there any chance to test this APNS process in the simulator?Thank you. | Certificate creation for Apple push notifications |
Helm will connect to the same cluster that kubectl is pointing to. By setting multiple kubectl contexts and changing them with kubectl config use-context [environment] you can achieve what you want. Of course you will need to set appropriate HELM_ environment values in your shell for each cluster, including TLS certificates if you have them enabled. Also it's worth taking steps so that you don't accidentally deploy to the wrong cluster by mistake. | I want to manage different clusters of k8s: one called production for prod deployments, and another one called staging for other deployments and configurations. How can I connect helm to the tiller in those 2 different clusters? Assume that I already have tiller installed and I have a configured CI pipeline. | Connect helm to multiple tiller in different k8s clusters |
Have you tried to log in to your ACR using the Azure CLI?

az acr login --name acrServer

| We are using docker swarm for windows and have several swarms. Most of them work great, but when making a new one, we are now currently failing the docker login. The code used to log in is:

echo "$(acrPassword)" | docker login --username $(acrUsername) --password-stdin $(acrServer)

This line works perfectly well on other swarms, but on this new one, it fails with the following error:

[error]docker : Error response from daemon: Get https://myaccount.azurecr.io/v2/: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.

Here is the result from docker version:

Server: Docker Engine - Enterprise
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.24)
Go version: go1.12.12
Git commit: 2ee0c57608
Built: 11/13/2019 07:58:51
OS/Arch: windows/amd64
Experimental: false

Edit: Found the issue, it was an older version of Windows. Changed the Windows version and fixed the issue. | Unable to docker login into azure container registry |
From https://www.metricfire.com/blog/tips-for-monitoring-kubernetes-applications/:

CPU request commitment is the ratio of CPU requests for all pods running on a node to the total CPU available on that node. In the same way, memory commitment is the ratio of pod memory requests to the total memory capacity of that node. We can also calculate cluster-level request commitment by comparing CPU/memory requests on a cluster level to total cluster capacity. | Can't find a good article on Google that explains this well or at all, so turning to the dev community to help me understand. My guess is that if CPU Request Commitment/Memory Request Commitment says it's over 100%, then it means the total resources.requests.cpu/resources.requests.memory in all of my yaml files exceeds the total available CPU/memory on my node. Is that a correct assumption? I'm talking about these settings in my yaml files:

resources:
requests:
cpu: <Your value>
memory: <Your value> | What is CPU Request Commitment and Memory Request Commitment |
If you specify the scale action for this task, please note that there are other required parameters for the Kubernetes manifest task, as below. See: Scale action for details. | I'm trying to run the below ADO task but am getting an error.
Writing this to facilitate automating tasks for a few people on the team.

- ${{ if eq(parameters.BringDown, 'true')}}:
- task: KubernetesManifest@0
displayName: Scale down
inputs:
action: scale
arguments: deployment mydeployment-name --replicas=0
namespace: ${{ parameters.Environment }}Error:##[warning]Resource file has already set to: /home/vsts/work/_tasks/KubernetesManifest_dee316a2-586f-4def-be79-488a1f503dfe/0.181.0/node_modules/azure-pipelines-tasks-kubernetes-common-v2/module.jsonKubectl Client Version: v1.19.0
Kubectl Server Version: v1.17.9==============================================================================
##[error]Input required: kindThe other task I tried worked well:- ${{ if eq(parameters.Restart, 'true')}}:
- task: KubernetesManifest@0
displayName: Delete POD
inputs:
action: delete
arguments: pod -l app="${{ parameters.service }}"
namespace: ${{ parameters.Environment }} | KubernetesManifest@0 error- Input required: kind |
The sudo cron will run without a tty and display; that is why your command won't work. Try having xvfb installed and use:

0 18 * * * cd /home/pi/gui && xvfb-run python3 gui.py

Update-1: 22-Jun-18

If you want to use your actual display, then you need to make sure that you use the below command:

XAUTHORITY=/home/<user>/.Xauthority DISPLAY=:0 python3 gui.py

And also make sure the cron is for your user. The default DISPLAY is :0.

When you have an XServer (GUI display), you cannot just connect to it without authorization. When the system starts, it creates a file and that location is stored in the environment variable XAUTHORITY. When you run a cron job, you have limited environment variables. There is no existing XAUTHORITY or DISPLAY defined, so to be able to connect to the display you need to define every environment variable that would be required by your program. So you define DISPLAY=:0 to select the default display, and you need to set XAUTHORITY=/home/<user>/.Xauthority to prove that you are authorized to connect to the display. | I have a simple GUI (created using tkinter) that I want to run at a specific time of day on a Raspberry Pi 3. Below is the code snippet I used in crontab. I invoked the crontab manager using sudo crontab -e.

0 18 * * * cd /home/pi/gui && python3 gui.py

For the moment, I can execute the GUI by invoking it directly via the Pi's command line. However, it doesn't work when I try to do it using cron. I also tried to switch to a basic python script (writing to a file) and that worked. Is there a specific weird interaction that I need to be aware of?

My setup: Raspberry Pi 3, python 3, raspi-screen, tkinter (latest version as far as I know) | Running a tkinter GUI using crontab |
There is not a store. You can pass a ca option to the https request to tell it what CAs you do trust.

From the docs:

The following options from tls.connect() can also be specified. However, a globalAgent silently ignores these.

ca: An authority certificate or array of authority certificates to check the remote host against.

In order to specify these options, use a custom Agent.

var options = {
  ...
  ca: CA or [array of CAs]
  ...
};
options.agent = new https.Agent(options);
var req = https.request(options, function(res) {

Ref: http://nodejs.org/api/https.html#https_https_request_options_callback | I am making an https request (using the request module) to a server with a self-signed cert. It throws an error if I don't specify strictSSL: false as an option.

This cert is already trusted on my OS (OSX), such that Chrome doesn't throw an error while accessing a webpage from that server.

I understand different applications/environments may have their own certificate stores. Firefox has its own, and the JVM, for example, is usually at $JAVA_HOME/jre/lib/security/cacerts (on OSX).

My question is, where does node look for its trusted CA's? Is there such a concept? I'd like to add my self-signed cert there for development purposes. | Where is node's certificate store? |
At present, support for pipeline type projects in delivery-pipeline-plugin is in active development. Refer to the JIRA ticket for information & progress: JENKINS-34040 | I try to show some delivery pipeline instances in the Jenkins Delivery Pipeline View.

If the delivery pipeline instance is defined as 'Free Style' or 'MultiJob Project' everything works fine, but the job does not appear in the Delivery Pipeline View when defined as 'Pipeline'.

I tried the following:
my_pipeline-job as a Post-Build-action -> Build other projects (manual step) ->Downstream Project Names->my_pipeline_job
The result was an error message: my_pipeline_job cannot be built!

The message disappears when I tried to build it as:
my_pipeline-job as Post-Build-action ->Trigger parameterized build on other projects-> Build Triggers-> Projects to build->my_pipeline_job
But the results will not be shown in Delivery Pipeline View. | Jenkins Delivery Pipeline View doesn't show pipeline jobs |
You are using the file name, not the path for RESTORE. Try something like the following - only specify the path:
db2 restore database gyczpas from "/home/db2inst1/GYCZPAS/PAS_BACKUP" taken at 20170109092932 into gyczpas
|
I try to restore a DB2 database, but it says the return path is not valid.
This is what I tried:
db2 restore database gyczpas from "/home/db2inst1/GYCZPAS/PAS_BACKUP/GYCZPAS.0.db2inst1.NODE0000.CATN0000.20170109092932.001" taken at 20170109092932 into gyczpas
SQL2036N The path for the file or device "/home/db2inst1/GYCZPAS/PAS_BACKUP/GYCZPAS.0.db2inst1.NODE0000.CATN000" is not valid.
I used the same path during RESTORE that I used for the BACKUP command, but it fails. What could be the reason?
DB2 version: v9.7
| DB2: restore database returns error SQL2036N on Linux |
Here is a bash script (delete.sh) with which you can delete all images from your ECR repository:

#!/bin/bash
aws ecr batch-delete-image --region $1 \
--repository-name $2 \
--image-ids "$(aws ecr list-images --region $1 --repository-name $2 --query 'imageIds[*]' --output json
)" || true

You can execute it with a single command like this:

./delete.sh ap-southeast-1 my-ecr-repo

with the following values: ap-southeast-1 is my AWS Region, and my-ecr-repo is my ECR repo name.

References:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/list-images.html
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/batch-delete-image.html

answered Aug 11, 2022 at 4:00 by Binh Nguyen
| I would like to know the CLI command to delete all images in an ECR repo. | How to delete all images in an ECR repository? |
The behavior I am looking for is configurable on the Service itself via the publishNotReadyAddresses option: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#servicespec-v1-core | In my Kubernetes cluster, I have a single pod (i.e. one replica) with two containers: server and cache.

I also have a Kubernetes Service that matches my pod.

If cache is crashing, when I try to send an HTTP request to server via my Service, I get a "503 Service Temporarily Unavailable".

The HTTP request is going into the cluster via Nginx Ingress, and I suspect that the problem is that when cache is crashing, Kubernetes removes my one pod from the Service load balancers, as promised in the Kubernetes documentation:

The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

I don't prefer this behavior, since I still want server to be able to respond to requests even if cache has failed. Is there any way to get this desired behavior? | Kubernetes Service unavailable when container crashes |
You have to browse to the Redshift dashboard and create a Subnet Group that contains the VPC's subnet: https://eu-west-1.console.aws.amazon.com/redshiftv2

Config -> Subnet Group

Select your subnet to be part of the group. This caught me out too; I was looking everywhere in the VPC dashboard. | I am trying to launch a Redshift cluster. I have created my own VPC (180.18.16.0/16) with two subnets:

180.18.16.0/20
180.18.0.0/20

These are all in the Ohio region. I then try to create a Redshift cluster in the same region. However, when I try to list the VPCs, they are listed but they are disabled and I cannot select any VPC, either the default or my own one. What am I doing wrong? | Can't access my VPC in AWS console for creating my redshift cluster |
Did you create a new default.ctp layout file and then "URL rewriting is not properly configured on your server." appeared? If that is the case, it happened to me. It's working just fine. I think Cake is throwing a bad error message here. | I've been trying to set up CakePHP on a development section of my server and I can't seem to solve the "URL rewriting is not properly configured on your server" error. I suspect I'm not configuring the .htaccess files with the correct RewriteBase. I've tried a wide variety of different RewriteBase values for each file, but I can't seem to hit the right ones, and Cake doesn't give me any information other than "not working" (URL rewrite errors don't end up in Cake's error log).

I do not have access to my httpd.conf file, but I've used .htaccess and mod_rewrite with other frameworks (Wordpress and CodeIgniter) without a problem.

My base url for the site is: http://dev.domain.com/cake/
My base server path is: /home/username/public_html/dev/cake/

Cake root .htaccess:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /cake
RewriteRule ^$ app/webroot/ [L]
RewriteRule (.*) app/webroot/$1 [L]
</IfModule>

Cake app directory .htaccess:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /cake/app
RewriteRule ^$ webroot/ [L]
RewriteRule (.*) webroot/$1 [L]
</IfModule>

Cake webroot directory .htaccess:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /cake/app/webroot
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php?/$1 [QSA,L]
</IfModule> | CakePHP 2.0.4 "URL rewriting is not properly configured on your server" |
Put your socket file in /var/run instead of /tmp
And you are welcome.
This answer cost me two hours, fml...
I find it in https://serverfault.com/questions/463993/nginx-unix-domain-socket-error/464025#464025
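Applied to the config in the question below, only the socket path changes on both sides (a sketch; note that writing to /var/run typically requires the gunicorn user to have write access there, so a dedicated subdirectory it owns is common):

```nginx
location / {
    proxy_set_header Host $host;
    proxy_pass http://unix:/var/run/mydjsuperlist-staging.tk.socket;
}
```

and the matching gunicorn invocation becomes gunicorn --bind unix:/var/run/mydjsuperlist-staging.tk.socket superlists.wsgi:application.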
answered Apr 24, 2019 at 8:35 by 王江浩
|
|
I'm following the http://www.obeythetestinggoat.com/book/chapter_08.html book, and it says to add a unix socket to run nginx server with gunicorn, which i did.
This is my nginx file
server {
listen 80;
server_name mydjsuperlist-staging.tk;
location /static {
alias /home/elspeth/sites/mydjsuperlist-staging.tk/static;
}
location / {
proxy_set_header Host $host;
proxy_pass http://unix:/tmp/mydjsuperlist-staging.tk.socket;
}
}
Nginx reloads without any failure, and I checked it with nginx -t.
When i run:
gunicorn --bind unix:/tmp/mydjsuperlist-staging.tk.socket superlists.wsgi:application
It successfully creates the mydjsuperlist-staging.tk.socket file in the tmp folder and I get this on my terminal:
2016-09-01 18:56:01 [15449] [INFO] Starting gunicorn 18.0
2016-09-01 18:56:01 [15449] [INFO] Listening at: unix:/tmp/mydjsuperlist-staging.tk.socket (15449)
2016-09-01 18:56:01 [15449] [INFO] Using worker: sync
2016-09-01 18:56:01 [15452] [INFO] Booting worker with pid: 15452
Everything seems fine, but when i go to my site mydjsuperlist-staging.tk it gives a (502) bad gateway error.
When i was using a port my site was running perfectly. What am i doing wrong over here ?
| Gunicorn with unix socket not working gives 502 bad gateway |
Tried to add comments to explain the role of the labels:

apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
labels:
app: my-nginx # LABEL-A: <--this label is to manage the deployment itself. this may be used to filter the deployment based on this label.
spec:
replicas: 2
selector:
matchLabels:
app: my-nginx #LABEL-B: <-- field defines how the Deployment finds which Pods to manage.
template:
metadata:
labels:
app: my-nginx #LABEL-C: <--this is the label of the pod, this must be same as LABEL-B
spec:
containers:
- name: my-nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
limits:
memory: "128Mi" #128 MB
cpu: "200m" #200 millicpu (.2 cpu or 20% of the cpu)

LABEL-A: <-- this label is to manage the deployment itself; it may be used to filter the deployment based on this label. Example usage of LABEL-A is for deployment management, such as filtering:

k get deployments.apps -L app=my-nginx

LABEL-B: <-- there must be some place where we tell the replication controller to manage the pods. This field defines how the Deployment finds which Pods to manage. Based on these labels of the pod, the replication controller ensures they are ready.

LABEL-C: <-- this is the label of the pod, which LABEL-B uses to monitor. This must be the same as LABEL-B. | In the below yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
name: my-nginx
labels:
app: my-nginx # Line 6
spec: # Line 7
replicas: 2
selector:
matchLabels:
app: my-nginx # line 11
template:
metadata:
labels:
app: my-nginx # Line 15
spec: # Line 16
containers:
- name: my-nginx
image: nginx:alpine
ports:
- containerPort: 80
resources:
limits:
memory: "128Mi" #128 MB
cpu: "200m" #200 millicpu (.2 cpu or 20% of the cpu)

Deployment is given a label (app: nginx) at Line 6. The Deployment spec at Line 7 uses the Pod spec mentioned in Line 16.

What is the purpose of the selector field with matchLabels?
What is the purpose of the template field with labels? | Labels in Deployment Spec & template |
Your runner is not able to check for jobs. Can you double check the endpoint URL?
If your repository is on gitlab.com, you should be using the endpoint https://gitlab.com/
In your GitLab Web UI, go to Settings -> CI/CD -> Runners -> Set up a specific Runner manually
You'll see the endpoint URL and the token you'll need to register your runner.
This is covered in my GitLab CI tutorial at https://gitpitch.com/atsaloli/cicd/master?grs=gitlab#/41 (it takes a few seconds to load)
Let me know if that helps?
answered Apr 10, 2020 at 23:01 by Aleksey Tsalolikhin
|
|
I am trying to run gitlab-ci on a local runner using the docker executor.
This is the config.toml
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
listen_address = "0.0.0.0:8093"
[[runners]]
url = "https://gitlab.com/<ACCOUNT>/my-static-website"
token = XXXXXX
executor = "docker"
builds_dir = ""
clone_url = "https://gitlab.com/<ACCOUNT>/my-static-website.git"
[runners.docker]
tls_verify = false
image = "docker:latest"
privileged = true
disable_cache = false
volumes = ["/cache"]
[runners.cache]
Insecure = false
My .gitlab-ci.yml:
image: node
stages:
- build
- test
build website:
stage: build
script:
- npm install
- npm install -g gatsby-cli
- gatsby build
artifacts:
paths:
- ./public
tags:
- trials
test artifacts:
image: alpine
stage: test
script:
- grep -q "Gatsby" ./public/index.html
Here is the error I am getting:
Runtime platform arch=amd64 os=linux
pid=28815 revision=4c96e5ad version=12.9.0
Starting multi-runner from ./config.toml... builds=0
Running in system-mode.
Configuration loaded builds=0
listen_address not defined, metrics & debug endpoints disabled builds=0
Session server listening address=0.0.0.0:8093
builds=0
WARNING: Checking for jobs... failed runner=kYtFEV-i
status=404 Not Found
WARNING: Checking for jobs... failed runner=kYtFEV-i
status=404 Not Found
WARNING: Checking for jobs... failed runner=kYtFEV-i
status=404 Not Found
I am using gitlab-runner version 12.9 and gitlab server: 12.10.0-pre
I have my runner on the server as follows:
I am running the command: gitlab-runner run -c ./config.toml
What did I miss here?
| WARNING: Checking for jobs... failed in docker executer in gitlab-runner |
This script will build the image only if it does not already exist remotely.

Update for V2:

function docker_tag_exists() {
    curl --silent -f -lSL "https://hub.docker.com/v2/repositories/$1/tags/$2" > /dev/null
}

Use the function above for v2. The original (v1) version:

#!/bin/bash
function docker_tag_exists() {
    curl --silent -f -lSL "https://index.docker.io/v1/repositories/$1/tags/$2" > /dev/null
}

if docker_tag_exists library/node 9.11.2-jessie; then
    echo "Docker image exists..."
    echo "Pulling existing Docker image..."
    # The image exists remotely, so pull it
    docker pull node:9.11.2-jessie
else
    echo "Docker image does not exist remotely..."
    echo "Building Docker image..."
    # Build the Docker image here (with an absolute or relative path)
    docker build -t nodejs .
fi

With little modification from the link below.
If the registry is private, check this link (with username and password).
|
Say I have this image tag "node:9.2" as in
FROM node:9.2
...
Is there an API I can hit to see if an image with tag "node:9.2" exists and can be retrieved, before I actually try docker build ...?
| Check if Docker image exists in cloud repo |
Even without redirecting pwp/index.html to pwp/src/index.html, you could simply change your publication folder to src, as seen here (for the docs folder, but the same idea applies).
|
I'm publishing my own profile site on GitHub at https://yilmazhasan.github.io/pwp
It was working before some changes; now it gives a 404 for https://yilmazhasan.github.io/pwp/src/index.html although there is an index.html file.
I'm redirecting pwp/index.html to pwp/src/index.html; it sees the first one but not the second one.
Since it is public, the files can be seen at https://github.com/yilmazhasan/pwp
What could be causing this? (Note: it works on localhost.)
| Github not detecting an inner index.html |
Raspbian stable is listed at https://wiki.alpinelinux.org/wiki/Release_Notes_for_Alpine_3.13.0#time64_requirements as having an outdated version of libseccomp (quoting: "... [requiring] host libseccomp to be version 2.4.2 or greater ..."). Note that for Raspbian, libseccomp is known as libseccomp2. In this case: either update libseccomp and Docker, or use an older image.

The issue with a non-functioning clock seems to apply to all containers based on Alpine Linux built in the last couple of weeks. In my own experience this includes PostgreSQL and Python. Both of these fail: PostgreSQL experiences a segmentation fault, and Python fails to initialize its clock. Given that Redis is database-like, I would not be surprised if the lack of a working clock breaks it as well.

(This issue seems to be resolved.) The arm-v7 images of Alpine Linux seem to have been built with a non-functioning time component; see https://gitlab.alpinelinux.org/alpine/aports/-/issues/12346. This should be resolved by using an older image (e.g. redis:6.0.6-alpine3.12 seems to be 6 months old), waiting for a fixed build to appear, or using a build that does not use Alpine.
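To tell which case applies, you can compare the installed libseccomp2 version against the 2.4.2 requirement. A minimal sketch — the dpkg-query call is shown as a comment so the snippet stays self-contained, and the version value is a made-up example, not a measurement:

```shell
# Returns success if version $1 >= version $2 (relies on GNU sort -V).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# On Raspbian the real value would come from:
#   installed=$(dpkg-query -W -f='${Version}' libseccomp2)
installed="2.3.3-4"   # hypothetical example value

if version_ge "$installed" "2.4.2"; then
  echo "libseccomp2 is new enough for Alpine 3.13 based images"
else
  echo "libseccomp2 is too old -- update it (and Docker) or pin an older image"
fi
```

With the example value above this takes the "too old" branch, which matches the symptom described in the question below.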
Since a few days redis is using 100% CPU time.I now saw that the date/time in the container is corrupt. Redis seems to start normally, but the log saispi@tsht2:/data/nextcloud $ docker logs nextcloud_redis_1
1:C 03 May 2071 14:21:28.000 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 03 May 2071 14:21:28.000 # Redis version=6.0.10, bits=32, commit=00000000, modified=0, pid=1, just started
1:C 03 May 2071 14:21:28.000 # Configuration loaded
1:M 03 May 2071 14:18:00.000 # Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.
1:M 03 May 2071 14:20:40.000 * Running mode=standalone, port=6379.
1:M 03 May 2071 14:21:28.000 # Server initialized
1:M 03 May 2071 14:21:20.000 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 03 May 2071 14:21:28.000 * Ready to accept connections
Watch the date!
When I look at the date in the container, I get:
Sun Jan 0 00:100:4174038 1900I tried to stop the container, remove the image and restart anything, but I have the same problem.What happens there?
Does the 100% CPU usage have something to do with the date problem? BTW: the other containers show the correct date/time.
| corrupt date with redis:6-alpine on RasPi |
I found the answer to this question. My URL was 'https://185.305.1.13/index.image.png'. We added a domain for the images, and the URL changed to 'https://image.com/index.image.png', which resolved the issue.
|
This question already has answers here: React-native POST request in android over https return network error (3 answers). Closed 3 years ago.
In a React Native app we work with 'http' URLs in axios and everything is OK. Now we changed 'http' to 'https', but the APIs fail and the app doesn't work. Why? Please help me.
| How work https api in React-native-android? [duplicate] |
Docker Compose doesn't replace your Dockerfile, but you can use Docker Compose to build an image from your Dockerfile:
version: '3'
services:
myservice:
build:
context: /path/to/Dockerfile/dir
dockerfile: Dockerfile
image: result/latest
Now you can build it with:
docker-compose build
And start it with:
docker-compose up -d
|
This is how I'm creating a docker image with nodeJS and meteorJS based on an ubuntu image. I'll use this image to do some testing.
Now I'm thinking of doing this via docker compose. But is this possible at all? Can I convert those commands into a docker compose yml file?
FROM ubuntu:16.04
COPY package.json ./
RUN apt-get update -y && \
apt-get install -yqq \
python \
build-essential \
apt-transport-https \
ca-certificates \
curl \
locales \
nodejs \
npm \
nodejs-legacy \
sudo \
git
## NodeJS and MeteorJS
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN curl https://install.meteor.com/ | sh
## Dependencies
RUN npm install -g eslint eslint-plugin-react
RUN npm install
## Locale
ENV OS_LOCALE="en_US.UTF-8"
RUN locale-gen ${OS_LOCALE}
ENV LANG=${OS_LOCALE} LANGUAGE=en_US:en LC_ALL=${OS_LOCALE}
## User
RUN useradd ubuntu && \
usermod -aG sudo ubuntu && \
mkdir -p /builds/core/.meteor /home/ubuntu && \
chown -Rh ubuntu:ubuntu /builds/core/.meteor && \
chown -Rh ubuntu:ubuntu /home/ubuntu
USER ubuntu
| How to convert a Dockerfile to a docker compose image? |
It seems you may be interested in this link:
https://developer.github.com/v3/repos/#get
Using curl you can send a GET request to
GET /repos/:username/:repo
This describes exactly what you are looking for.
example: curl -i https://api.github.com/repos/octocat/Hello-World
This will return a json document including "name" and "description".
edit: Added better information and an example.
edited Aug 18, 2015 at 22:30
answered Aug 18, 2015 at 19:39
aaronott
|
|
I want to get a specific repository's information using curl.
Suppose my GitHub repo url is like:
https://github.com/githubexample/Example-Repository
Now, when I put this URL in my input form and submit it, the result should show the repository's title and description.
How can I do it?
| How to get specific repository info from github url in PHP? |