| Response | Instruction | Prompt |
|---|---|---|
I figured it out. I had the arguments in the wrong order. This works:
scp -i mykey.pem somefile.txt [email protected]:/
|
I have an EC2 instance running (FreeBSD 9 AMI ami-8cce3fe5), and I can ssh into it using my amazon-created key file without password prompt, no problem.
However, when I want to copy a file to the instance using scp I am asked to enter a password:
scp somefile.txt -i mykey.pem [email protected]:/
Password:
Any ideas why this is happening/how it can be prevented?
| scp (secure copy) to ec2 instance without password |
It seems you're using the Windows XP SP2 COM API, which is known to have issues on Windows Vista/7 and newer versions. It's recommended that you use the newer API (I have not tested this):
Type netFwPolicy2Type = Type.GetTypeFromProgID("HNetCfg.FwPolicy2");
INetFwPolicy2 mgr = (INetFwPolicy2)Activator.CreateInstance(netFwPolicy2Type);
// Gets the current firewall profile (domain, public, private, etc.)
NET_FW_PROFILE_TYPE2_ fwCurrentProfileTypes = (NET_FW_PROFILE_TYPE2_)mgr.CurrentProfileTypes;
// Get current status
bool firewallEnabled = mgr.get_FirewallEnabled(fwCurrentProfileTypes);
string frw_status = "Windows Firewall is " + (firewallEnabled ?
"enabled" : "disabled");
// Disables Firewall
mgr.set_FirewallEnabled(fwCurrentProfileTypes, false);
|
Windows 7, 8.1. I get an exception when I try to disable Windows Firewall. I try to do it with admin rights. But I don't have the same problem with enabling Windows Firewall.
Type NetFwMgrType = Type.GetTypeFromProgID("HNetCfg.FwMgr", false);
INetFwMgr mgr = (INetFwMgr)Activator.CreateInstance(NetFwMgrType);
// Get the Windows Firewall status
bool firewallEnabled = mgr.LocalPolicy.CurrentProfile.FirewallEnabled;
// it works fine...
String frw_status = "Windows Firewall is " + (firewallEnabled ?
"enabled" : "disabled");
// Enable or disable firewall.
// I get the exception here when I try to disable Windows Firewall.
// I have not problem when I try to enable Windows Firewall (it works fine).
//
// Exception message:
// An unhandled exception of type 'System.NotImplementedException'
// occurred in net_sandbox.exe
// Additional information: Method or operation is not emplemented yet..
mgr.LocalPolicy.CurrentProfile.FirewallEnabled = false;
How can I disable Windows Firewall? | How can I disable Windows Firewall? |
I got an answer for the question. I'll post it here in case anyone else has trouble.
Apparently, overhead is a more general term in Computer Science that I'd never heard before, referring to extraneous resources - in this case, bits.
When referring to cache overhead, the question was referring to bits that are necessary for the cache, but that do not include the data itself.
In this particular case, the cache included the validity bit and the tag. In order to calculate the overhead as a percentage, I had to take the sum of all validity bits and tag bits and divide them by the total cache size.
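The calculation above can be sketched numerically for this cache (assuming 32-bit words, hence a 12-bit address for the 4KB byte-addressable space — an assumption, since the word size isn't stated):

```python
import math

address_bits = 12            # 4KB byte-addressable address space -> 2^12 bytes
words_per_block = 4
bytes_per_word = 4           # assuming 32-bit words
num_blocks = 2               # direct-mapped, 2 blocks

block_bytes = words_per_block * bytes_per_word          # 16 bytes per block
offset_bits = int(math.log2(block_bytes))               # 4 offset bits
index_bits = int(math.log2(num_blocks))                 # 1 index bit
tag_bits = address_bits - offset_bits - index_bits      # 7 tag bits

data_bits = block_bytes * 8                             # 128 data bits per block
overhead_bits = 1 + tag_bits                            # valid bit + tag = 8 bits per block

# Overhead as a fraction of total cache storage (data + overhead), over all blocks.
overhead_pct = 100 * (num_blocks * overhead_bits) / (num_blocks * (overhead_bits + data_bits))
print(round(overhead_pct, 2))  # 5.88
```

So roughly 5.88% of the cache storage is overhead under these assumptions.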
|
I've been given a problem by my computer architecture professor but it is using some terminology that I cannot find in our textbook.
Basically, I'm given a cache with the following parameters:
- 4KB address space
- Byte-addressable memory
- Direct-Mapped
- 2 blocks in cache
- 4-word blocks
I have no problem drawing out this cache and modeling what would happen with various inputs. However there is one question I'm being asked:
"The cache stores overhead information. What percentage of the total
cache storage is this overhead?"
I have no idea what this means. I've been searching "cache overhead" on Google and StackOverflow and I haven't been able to find anything that places those two words together in a helpful context for me. I don't see it in my textbook either.
Any insight would be greatly appreciated.
Thank You!
| How to Calculate Cache Overhead? |
You could forward environment variables etc. from the parent image by writing them to a file and then copying it into the next image. Then in your entrypoint read it and export the variables. But I would say it's a bit of an exotic design.
But in your case there seem to be quite a few dependencies on variables and packages, so maybe it is easier to not use multi-stage at all?
|
|
I have an application packages as .bin and it runs on rhel7-init base image. The following is the Dockerfile with the parent image and child image.
FROM registry.access.redhat.com/rhel7-init:7.3 as base
COPY yum.repos.d/ /etc/yum.repos.d/
RUN yum -y install sudo systemd
RUN yum install https://download.postgresql.org/pub/repos/yum/9.4/redhat/rhel-7-x86_64/pgdg-redhat93-9.4-3.noarch.rpm -y && \
yum install -y postgresql94
RUN export key=value && \
installer.bin &> /root/install.log
FROM registry.access.redhat.com/rhel7-init:7.3
COPY --from=base /opt/app/ /opt/app
COPY start_app /root/
RUN chmod +x /root/start_app
ENTRYPOINT [ "/root/start_app" ]
It has a start script given as ENTRYPOINT which configures a few things at runtime as it brings up the container. I copied the installed directory location to the new image from the parent image.
Now when I start my container, it shows dependencies on which sudo packages which were installed in the parent image.
How do I carry forward the installed packages of my parent base image to my new base image without adding too much size?
Can I also carry forward any env variables present in the installer used in the parent base image?
| How to create Docker multistage build with Linux dependencies and environment vars from parent image? |
On Mac, if the above-mentioned tricks don't work, do the following:
Open Keychain Access
Search for CodeCommit. You should find an entry like 'git-codecommit....'.
Select 'git-codecommit....' and press delete
Confirm the delete.
Now try again. It should work. You may have to do it again next time you face the 403 error as well.
One of the possible reasons for this issue is the keychain password being different from the login password on your Mac.
|
|
My local laptop is a Mac.
The ssh key is configured properly. This is the content of ~/.ssh/config
Host barthea
Hostname git-codecommit.us-east-1.amazonaws.com
User AVVVVVVVVVVVVVVVVVQ
IdentityFile ~/.ssh/aws-aws.pem
Running ssh barthea gets me
You have successfully authenticated over SSH. You can use Git to interact with AWS CodeCommit. Interactive shells are not supported.
Connection to git-codecommit.us-east-1.amazonaws.com closed by remote host.
I created an IAM user bruce666 complete with password and access keys, and made this user part of the "team" group. Then I created a policy that includes "AWSCodeCommitPowerUsers" and assigned this policy to "team". And finally assigned bruce666 to "team". At this point, bruce666 can access any repo in CodeCommit through the management console.
I ran aws config --profile bruce666, fed in his access and secret key, his region, and specified the format as json. At this point, I was able to create the rekha repo in CodeCommit by running aws codecommit get-repository --repository-name rekha --profile bruce666
I can create a couple of dummy files, run git init, git add . , git commit -m "1", git add origin https://git-gitcode.amzonaws.com/repos/v1/rekha , git push -u origin master And that operation will be successful.
However, when I run git clone ssh://git-gitcode.amazonaws.com/repos/v1/rekha , I get "fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/barthia/': The requested URL returned error: 403" What gives?
| running git clone against AWS CodeCommits gets me a 403 error |
Cronjob commands should contain minute (m), hour (h), day of month (dom), month (mon) and day of week (dow). You should write them using the format below:
# (m) (h) (dom) (mon) (dow) command
You're missing one parameter in your case. For example, if you want to run your code at 08:55 every day, you can use:
55 08 * * * /usr/bin/python2 /home/user/file.py
| I am using Putty and it does not have Python3; it has python2 or python, so I tried both to run the python file by using the command
55 08 * * * /usr/bin/python2 /home/user/file.py
and a couple of other commands BUT nothing is working. The python file I have runs totally fine with the spark2-submit command. This is a pyspark file converted to python. When I use /usr/bin/python2 I get an error for the line "from pyspark.sql import sparksession" - error -> No module named pyspark.sql. I think spark2-submit is not supported in a crontab job, and /usr/bin/python2 is giving an error for the pyspark converted python file. Can anyone please help me out here. | Not able to run python file in Cron Job in putty |
You might consider:
making sure your submodule B, in repository A, follows a branch (main, for instance)
using a multi-project pipeline
You can set up GitLab CI/CD across multiple projects, so that a pipeline in one project can trigger a pipeline in another project. You can visualize the entire pipeline in one place, including all cross-project inter-dependencies.
Your first pipeline would be triggered only on pushes.
And it includes a trigger to your project A, which would then execute a git submodule update --remote, in order to update the submodule B.
|
I have a project A and I have added a submodule to another project B.
I see that the submodule used in A is referring to a particular commit, and I would like this to be updated every time a new commit appears in B.
Is it possible to trigger an automatic pull/push of the submodule in A when B is updated?
Or this procedure has to be done always by hand?
| Trigger automatic pull/push for submodule after new commit appear |
I believe the proper way to do this in symfony 1.2 is as follows:
sfContext::switchTo('frontend'); //switch to the environment you wish to clear
sfContext::getInstance()->getViewCacheManager()->getCache()->clean(sfCache::ALL);
sfContext::switchTo('backend'); //switch back to the environment you started from
| I would like to clear my frontend application's cache from an action in my backend application. How can I achieve this? | Clearing Symfony cache for another application |
In your other PC the file .gitconfig should not be the same as on the first one. This file is your git config for the local user. It can contain color settings, alias settings, and more importantly, regarding your problem, user settings:
[color]
diff = auto
status = auto
branch = auto
[user]
name = Exemple
email = [email protected]
[alias]
ci = commit
co = checkout
st = status
br = branch
Yes, by changing this file you can get a commit from Linus on your project. You can see who committed a commit with git show commit_hash
| I work alone in a repo at Github. Then I saw two collaborators working on this project. Both share my username but only one account is linked to my profile. The other one, without any profile link, has made over 80% of the commits.
As a note: I switched to my other PC, cloned the repo with git clone https://github.com/myaccount/foobar.git and made a few commits that I've pushed to origin.
Did I do something wrong? My account isn't compromised, since all commits are mine.
Update: Github uses the email address to identify users. I created the initial commit of the repo online to insert a license. Simultaneously Github added a custom email because my registered one isn't public. And the email defined in my .gitconfig was not the same as Github's custom email. Concluded: Github thought we were two different users.
I used the script from the section 'Changing E-Mail Addresses Globally' which you can find here. The solution was found during a chat.
Be careful: This script will cause a new SHA1-hash for each commit that matches the given email in the script. | Github shows me twice as collaborator |
Helm supports passing arguments to dependent sub-charts. You can override the architecture of your redis sub-chart by adding this to your values.yaml file:
redis:
  architecture: standalone
| In Chart.yaml I specified a dependency:
dependencies:
- name: redis
version: 15.0.3
repository: https://charts.bitnami.com/bitnami
In deployment.yaml I specify the service:
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis-svc
spec:
clusterIP: None
ports:
- port: 6355
selector:
app: redis
But what I see after kubectl get all:
service/redis-svc ClusterIP None <none> 6355/TCP 36s
statefulset.apps/myapp-redis-master 0/1 37s
statefulset.apps/myapp-redis-replicas 0/3 37s
I want a single redis instance as a Service. What am I doing wrong? | Helm: install single redis instance from chart dependency |
Although it is not possible to set an alias for the exec mode when using a container, it is possible to do it for run mode using the script below:
%runscript
alias python3='python3.6'
eval ${@}
The difference between exec and run is that exec runs the command you write directly but run passes whatever you write to the script you've written in %runscript.
Source
|
|
I have been trying to set some aliases in my container but I haven't been able to do it successfully. While building the container I put alias python3=python3.6 in %post and things work fine; the alias is correctly declared and is used throughout the container building process.
However, after the container is built and I execute it using singularity exec, the alias declaration in %environment or %runscript does not work. I also tried putting the alias declaration command in a bash script in the container and running the bash script, but it still does not work. Basically, I'm looking for something like Docker's ENTRYPOINT for Singularity. Does anyone know what I'm doing wrong and how I can set aliases within a container?
I'm using Singularity 2.6.
Here's the definition file I'm using:
BootStrap: docker
From: ubuntu:16.04
%post
# Set up some required environment defaults
apt-get -y update && apt-get -y install software-properties-common && yes '' | add-apt-repository ppa:deadsnakes/ppa
apt-get -y update && apt-get -y install make \
cmake \
vim \
curl \
python3.6 \
python3.6-dev \
curl https://bootstrap.pypa.io/get-pip.py | python3.6
alias python3=python3.6 #Here's where I declare the alias
python3 -m pip install -U pip
python3 -m pip install --upgrade pip
python3 -m pip install -U setuptools
python3 -m pip install scipy \
numpy \
transforms3d \
matplotlib \
Pillow
# I also create a file containing a bash script to declare the alias
cd /
mkdir bash_aliases && cd bash_aliases
echo "alias python3=python3.6">bash_aliases.sh
chmod +x bash_aliases.sh
%runscript
alias python3=python3.6
# bash /bash_aliases/bash_aliases.sh # You may uncomment this as well
| How to set a Python alias in a Singularity container upon execution? |
As told by mario, the problem is that the S3's resolution is higher than those other models', so images are bigger in dimensions and, therefore, also in memory consumption.
Although it is worth saying that the S3 seems to have a rather small maximum heap size given its resolution, since I have also had out of memory problems with it but not with other devices with the same resolution that have a higher heap limit.
|
|
Samsung Galaxy S3 uses 32mb heapsize almost instantly on my app, where on almost any other android device it starts at +- 5mb (saw this in logcat, can send screenshots from two different devices if necessary). Think this is the reason for my app crashing with "OutOfMemory" Exception's on only the Galaxy s3, works perfectly on the galaxy Y Duos and Pocket.
Any Help/Advice on why this is happening would be greatly appreciated thanks.
| "OutOfMemory" Exception only on Samsung Galaxy S3 |
The native pragma autouse will load modules as needed when plain subroutines are called:
use autouse 'My::Module::A' => qw(a_sub);
# ... later ...
a_sub "this will dynamically load My::Module::A";
For proper OO methods, Class::Autouse will load modules (classes) when methods are called:
use Class::Autouse;
Class::Autouse->autouse( 'My::Module::A' );
# ... later ...
print My::Module::A->a_method('this will dynamically load My::Module::A');
| Given the following module:
package My::Object;
use strict;
use warnings;
use My::Module::A;
use My::Module::B;
use My::Module::C;
use My::Module::D;
...
1;
I would like to be able to call My::Object in the next 2 scenarios:
Normal use:
use My::Object;
My::Module->new();
Reduced memory use. Call the same object but with a condition or a flag telling the object to skip the use modules to reduce memory usage. Somehow like:
use My::Object -noUse;
My::Module->new();
I tried the Perl if condition without success.
The problem I'm having is with big objects with a lot of uses; just loading such an object consumes a lot of RAM. I know I can refactor them, but it would be wonderful if somehow I could avoid these uses when I'm sure none of them is used in the given scenario.
One solution would be to replace all uses with requires in all the places where the modules are needed, but I don't see that as convenient when some of them are used in a lot of methods.
Any ideas?
Thanks | How to dynamically avoid 'use module' to reduce memory footprint |
#redirect empty user agent, UNLESS it's accessing the RSS feed
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteCond %{REQUEST_URI} !^/rss.php # <-- path to rss.php
RewriteRule .* http://%{REMOTE_ADDR}/ [R,L]
Archived Source: http://wiki.e107.org/index.php?title=Htaccessexample
| Is there any solution for preventing an http request that has an empty user agent string, preferably using .htaccess? | How to block an empty user agent request |
In CloudFormation you create an AWS::EC2::Instance. To have the latest AMI of Amazon Linux 2, you can use dynamic references. A basic example:
Parameters:
LatestAmiId:
Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
Resources:
Instance:
Type: 'AWS::EC2::Instance'
Properties:
ImageId: !Ref LatestAmiId
InstanceType: t2.micro
| Is it possible to use an AWS CloudFormation stack with Amazon Linux 2? Currently, I have only found Amazon docs pointing to the Amazon Linux AMI. Unfortunately, that AMI would stop being supported in 2023 (and is marked as deprecated already). | (AWS) CloudFormation stack with Amazon Linux 2 |
I removed the known_hosts file from my ~/.ssh folder, which did the trick. Everything works now.
|
OS - Windows 7 professional 64 bit
GIT for windows - Git-1.9.0 - Using Git bash
I started having problems with "git fetch" suddenly out of nowhere.
Sometimes git.exe would error out and sometimes the "git fetch" would just hang.
So I decided to start everything from scratch.
I uninstalled git for windows and reinstalled it (accepting all defaults), restarted the machine. Created a brand new folder and did the following
$ git clone [email protected]:[email protected]/myproject.git
Cloning into 'myproject'...
Enter passphrase for key '/c/Users/myid/.ssh/id_rsa':
remote: Counting objects: 287209, done.
remote: Compressing objects: 100% (86467/86467), done.
remote: Total 287209 (delta 188451), reused 287209 (delta 188451)
Receiving objects: 100% (287209/287209), 168.89 MiB | 328.00 KiB/s, done.
Resolving deltas: 100% (188451/188451), done.
Checking connectivity...
It consistently just hangs at "checking connectivity"
I have scanned the machine for viruses/trojans what have you and no threats were found.
This is happening both at work location and from home - So its probably not the internet.
I'm not sure how to proceed or what to try next.
| git clone hangs at "checking connectivity" |
Your analysis is indeed correct, but I guess your professor is looking for an explanation like this: Suppose the single cycle processor also has the stages that you have mentioned, namely IF, ID, EX, MA and WB, and that the instruction spends roughly the same time in each stage as compared to the pipelined processor version. Now you can draw a pipeline diagram for this single cycle processor, and see that it would take 50 cycles on a single cycle processor (which can work on 1 instruction at a time) compared to the 19 cycles on a pipelined processor.
Again, I prefer the way you have analyzed it (as the single cycle processor wouldn't really have each of those stages in a different clock cycle, it would just have a very long clock cycle to cover all the stages). Also, you've not mentioned whether this is a stalling-only MIPS pipeline (for which your answer is correct) or if this is a bypassed MIPS pipeline. If this is the latter, you can shave off a few more cycles and get it down to 15 cycles.
| I have to compare the speed of execution of the following code (see picture) using a DLX pipeline and a single-cycle processor.
Given:
- an instruction in the single-cycle model takes 800 ps
- a stage in the pipeline model takes 200 ps (based on MA)
My approach was as follows.
CPU time = CPI * CC * IC
Single-cycle: CPU time = 1 * 800 ps * 10 instr. = 8000 ps.
Pipeline: CPI = 21 cycles / 10 instr. = 2.1 cycles per instruction; CPU time = 2.1 * 200 ps * 10 = 4200 ps.
CPU time single-cycle / CPU time pipeline = 8000/4200 = 1.9, so the pipelined code runs 1.9 times faster.
But I was told I have to work with clock cycles and not with time -- "It doesn't matter how much time a CC takes". I don't see how to make a comparison otherwise. Could you please help me? | Pipeline processor vs. Single-cycle processor |
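The single-cycle vs. pipeline arithmetic in the row above can be checked numerically (a sketch; the 21-cycle pipeline count is taken from the question as given):

```python
ic = 10                  # instruction count
single_cc = 800          # single-cycle clock period in ps
single_time = 1 * single_cc * ic      # CPI = 1 -> 8000 ps

pipe_cycles = 21         # total pipeline cycles for the code (from the question)
pipe_cc = 200            # pipeline clock period in ps
pipe_cpi = pipe_cycles / ic           # 2.1 cycles per instruction
pipe_time = pipe_cycles * pipe_cc     # 4200 ps

speedup = single_time / pipe_time     # ~1.9
print(single_time, pipe_time, round(speedup, 1))  # 8000 4200 1.9
```

Note that the same ratio falls out of cycle counts alone: with a 50-cycle single-cycle diagram vs. 19 pipeline cycles (from the answer), the comparison no longer depends on the absolute clock period.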
Just wrap the series selector into the last_over_time() function with some pre-defined lookbehind window in square brackets. For example:
sum(last_over_time(my_metric_counter[5m]))
In this case the query result shouldn't depend on the interval between points on the graph in Grafana.
| I'm quite a beginner with Grafana and Prometheus. I'm facing a strange situation with a stat widget showing a number; the prometheus query is simply this:
sum(my_metric_counter)
I tried to compose a screenshot where you can see the steps I do to reproduce what I think is a problem, but I understand that the interval query option works that way: it considers the resolution of the panel and the time-range, so the interval changes as the panel size varies. Well, that number, when my users see it in the dashboard, where it is very small, doesn't show the right number because of its big interval of 15m which cuts off some points. How can I set a static interval, or how should I change, if possible, Grafana settings to accomplish my task, in your opinion? Thanks in advance, best regards | Interval in query options and the resulting number changing from resolution to resolution |
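Conceptually, the last_over_time() in the answer above returns, at each evaluation timestamp, the most recent sample inside the lookbehind window — a rough sketch of that semantics in plain Python (not PromQL; function and sample values are illustrative):

```python
def last_over_time(samples, eval_ts, window):
    """samples: list of (timestamp, value) pairs sorted by timestamp.
    Returns the value of the most recent sample in (eval_ts - window, eval_ts]."""
    in_window = [v for t, v in samples if eval_ts - window < t <= eval_ts]
    return in_window[-1] if in_window else None

# One counter sample per minute; timestamps in seconds, window of 5m = 300s.
samples = [(0, 1), (60, 2), (120, 3)]
print(last_over_time(samples, 180, 300))  # 3: latest sample within the window
print(last_over_time(samples, 600, 300))  # None: no samples in the last 5 minutes
```

This is why the result stops depending on Grafana's auto-computed step: the lookbehind window is fixed by the query, not by the panel resolution.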
Re: will this execute in Executor or Driver?
Once you call tableList.collect(), the contents of 'tables.txt' will be brought to the Driver application. If it is well within the Driver Memory it should be alright.
However the save operation on the Dataframe would be executed on an executor.
Re: will this throw an Out of Memory Error?
Have you faced one? IMO, unless your tables.txt is too huge you should be alright. I am assuming the input data size of 45 GB is the data in the tables mentioned in tables.txt.
Hope this helps. | I'm trying to read a config file with spark read.textFile which basically contains my tables list. My task is to iterate through the table list and convert Avro to ORC format. Please find my below code snippet which does the logic.
val tableList = spark.read.textFile("tables.txt")
tableList.collect().foreach(tblName => {
val df = spark.read.format("avro").load(inputPath+ "/" + tblName)
df.write.format("orc").mode("overwrite").save(outputPath+"/"+tblName)})Please find my configurations belowDriverMemory: 4GBExecutorMemory: 10GBNoOfExecutors: 5Input DataSize: 45GBMy question here is this will execute in Executor or Driver? This will throw Out of Memory Error ? Please comment your suggestions.val tableList = spark.read.textFile('tables.txt')
tableList.collect().foreach(tblName => {
val df = spark.read.format("avro").load(inputPath+ "/" + tblName)
df.write.format("orc").mode("overwrite").save(outputPath+"/"+tblName)}
) | Use RDD.foreach to Create a Dataframe and execute actions on the Dataframe in Spark scala |
You can use a self-signed certificate, but it might work only with Edge; the latest versions of major browsers don't trust self-signed certificates. You can navigate to the "Security" section in "Developer Tools" to see the exact error. Want to see an example? See this: https://www.youtube.com/watch?v=0kvOQJj7gVk but it won't work if you use an updated browser. | Is it possible to have a verified certificate for xampp localhost? I am able to navigate to https in localhost but I got this error: "Connection is not secure". I want something like this when I navigate to localhost. Thanks. | Possibility of implementing trusted HTTPS in localhost (xampp) |
... Connection reset by peer at /usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/Net/SSL.pm line 145
You are running a very old version of Perl (from 2004) together with an old version of the SSL libraries (i.e. Crypt::SSLeay instead of IO::Socket::SSL) and my guess is that this goes together with using a very old version of the OpenSSL libraries for TLS support. This combination means that there is no support for SNI, no support for TLS 1.2 and no support for ECDHE ciphers. Many modern servers need at least one of these things supported. But "connection reset by peer" could also mean that some firewall is blocking connections or that there is no server listening on the endpoint you've specified. Or it could mean that the server is expecting you to authorize with a client certificate. Hard to tell, but a packet capture of the connection might provide more information. And, if the URL is publicly accessible, publishing it would help too in debugging the problem.
| I am trying to call a web service using ssl. It gives the following error:
500 SSL negotiation failed:
I searched forums and applied the offered methods but none of them worked. The 2 methods I applied are listed below:
1-) setting the environment before the call:
$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME} = 0;
2-) passing parameter ssl_opts => [ SSL_verify_mode => 0 ] to proxy:
my $soap = SOAP::Lite
-> on_action( .... )
-> uri($uri)
-> proxy($proxy, ssl_opts => [ SSL_verify_mode => 0 ])
-> ns("http://schemas.xmlsoap.org/soap/envelope/","soapenv")
-> ns("http://tempuri.org/","tem");
$soap->serializer()->encodingStyle(undef);
Is there any solution for this? | Perl Webservice SSL Negotiation Failure |
Google Colab has the latest stable tensorflow and python versions installed by default. You do not need to install tensorflow-gpu to have GPU support enabled in Google Colab unless you need a specific tensorflow version for your code. You can access the tensorflow package with GPU enabled directly by importing tensorflow and setting the runtime to GPU as below. You can follow the below steps to install and access a specific tensorflow version (suppose 2.10) with GPU support in Google Colab:
!pip install tensorflow==2.10 #restart the kernel
import tensorflow as tf
tf.__version__ # '2.10.0'
#to check, if GPU enabled
tf.config.list_physical_devices()
# Output - [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]
#if no GPU device shows up, change the Colab runtime to GPU mode and re-run the same code
tf.config.list_physical_devices()
#Output - [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
# PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
| At first I chose the runtime to be GPU and selected the available T4 GPU. Then I tried to install TensorFlow GPU in Google Colab by running the command:
!pip install tensorflow-gpu
wheels/public/simple/
Collecting tensorflow-gpu
Using cached tensorflow-gpu-2.12.0.tar.gz (2.6 kB)
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Preparing metadata (setup.py) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
I have already tried upgrading pip, clearing the pip cache, and restarting the Colab runtime, but the issue persists. I suspect there might be compatibility or network-related problems. Can someone help me understand the cause of this error and suggest possible solutions to resolve it? Are there any specific steps I should take or alternative installation approaches I can try to successfully install TensorFlow GPU in Google Colab?
Apache cannot do this. You can use jquery but it might not be accurate unless you perform long-polling on the page. Your question has been answered here in full on StackOverflow:
How to measure a time spent on a page?
|
Is there a library or solution for quick apache/nginx log parsing and finding out how much time a user has spent on each page?
UPDATE: spent on every 3rd level domain.
For example AWStats can do that (a full log analysis enables AWStats to show you the following information: visits duration and last visits).
| time user spent on the page |
You're essentially correct, though there are a few ways that it was typically done to make it look less awkward:
Foo* someFoo = [[Foo alloc] init];
self.foo = someFoo;
[someFoo release];
Or more succinctly:
self.foo = [[[Foo alloc] init] autorelease];
|
I am using ARC but reading the MRR part of Objective-C, and it seems like if a property of ViewController is (for non-ARC):
@property (retain, nonatomic) Foo *foo;
then the viewDidLoad of ViewController will need to do a release right after alloc and init:
- (void)viewDidLoad
{
[super viewDidLoad];
self.foo = [[Foo alloc] init];
[self.foo release];
}
Otherwise, the retain will increment the reference count of the Foo object once when it is assigned to _foo (the instance variable), and alloc also increments the reference count once, so it is claiming ownership twice, and therefore, there needs to be a release right after the alloc and init?
I just feel it is a bit weird looking because an init is immediately followed by a release this way.
(If we do an autorelease, then one ownership is claimed by the autorelease pool, and one claimed by ViewController, and at the end of the event loop, the autorelease pool drains and unclaims one ownership, and therefore the Foo object is correctly owned once only. (but if Foo doesn't have such convenience methods and only has alloc and init, then the immediate release is needed.))
| In Objective-C MRR, if foo is a retain property, then self.foo = [[Foo alloc] init] will need to be released immediately? |
Use resolver in nginx config
The nginx resolver directive is required.
Nginx is a multiplexing server (many connections in one OS process), so each call of system resolver will stop processing all connections till the resolver answer is received. That's why Nginx implemented its own internal non-blocking resolver.
If your config file has static DNS names (not generated), and you do not care about tracking IP changes without an nginx reload, you don't need nginx's resolver. In this case all DNS names will be resolved on startup.
Nginx's resolver
Nginx's resolver directive should be used if you want to resolve a domain name at runtime without an nginx reload.
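A minimal sketch of such a runtime-resolution block (the resolver address 127.0.0.11 is Docker's embedded DNS and is an assumption here; in Kubernetes you would point it at your cluster DNS instead, and the upstream name is taken from the question below):

```nginx
server {
    listen 8080;
    location / {
        # re-resolve the name at request time instead of once at startup
        resolver 127.0.0.11 valid=30s;   # assumption: Docker's embedded DNS
        # using a variable in proxy_pass forces nginx to use the resolver
        set $backend http://udagram-users:8080;
        proxy_pass $backend;
    }
}
```

Note that proxy_pass with a variable bypasses the startup-time resolution, which is the design choice that makes the resolver directive take effect.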
Use the same network (not your case, but still worth noting)
Containers you are trying to link may not be on the same network.
You may want to put them all on the same network.
In your case the subnets are the same, so it's OK.
|
After deploying to AWS EKS I get this error
GitHub repo: https://github.com/oussamabouchikhi/udagram-microservices
Steps to reproduce
Create AWS EKS cluster and node groups
Configure EKS cluster with kubectl
Deploy to EKS cluster (secrets first, then other services, then reverseproxy)
kubectl apply -f env-secret.yaml
kubectl apply -f aws-secret.yaml
kubectl apply -f env-configmap.yaml
...
kubectl apply -f reverseproxy-deployment.yaml
kubectl apply -f reverseproxy-service.yaml
The relevant files:
nginx-config
docker-compose
reverseproxy-deployment
reverseproxy-service
| nginx: [emerg] host not found in upstream "udagram-users:8080" in /etc/nginx/nginx.conf:11 |
TeamCity has an 'Automatic Merge' feature. Please have a look at the documentation and the related blog post
answered Mar 27, 2016 at 8:34 by Oleg Rybak
Does anyone know if it is possible to update source files during a build (from an external source for e.g. like checking if there are new translations and merging those in) and then merge those changes to a git branch via a Pull request with TeamCity as part of the build steps?
| TeamCity automatically update sources and merge to git repo |
This issue can be resolved by setting options.secure to true, as mentioned here. | I have a simple web application using peerjs here: https://github.com/chakradarraju/bingo. I was planning to use github.io to put up a demo, and github.io is served only over HTTPS; the default PeerServer that is used by the peerjs library doesn't support HTTPS. Is there any public HTTPS PeerServer that I can use? | Is there any public PeerServer over HTTPS? |
Docker images can, at the moment, be served ONLY from a private registry/hub, not as standalone files, even if you just want to use Amazon S3 as the backend. So you have to run this registry somewhere. I believe you can use any cheap VPS other than an Amazon EC2 micro if you want to run it more cost efficiently, but you HAVE to run it somewhere :) | I am setting up an EC2 Container Service task but am looking to use some private container images. In the Docker Registry container, you can choose an interface to S3 as a storage backend. Is it possible to point a task definition image reference to an S3 bucket instead of running a full private registry/hub? That would save me from having to run a little micro instance with a registry on it. It seems like this would be a thing, since AWS services usually reference each other really well, but I can't find any info on it. Thanks. | AWS ECS - Images from S3 |
Wordpress doesn't check the rewritten url (/category/1), but parses the original request url (/hello), and so it doesn't know what to do with /hello. To fix this use the proxy flag, so [L] would become [L,P]. | I'm trying to add some rewrite rules to a Wordpress site, to go a little further than WP's own rewrite capabilities, but I am having trouble when I place those rules BEFORE Wordpress' own rules: if my rule rewrites to something that Wordpress will rewrite later, it doesn't work. E.g.:
www.domain.com/hello -> www.domain.com/image.jpg : this works, as Wordpress doesn't interfere
www.domain.com/hello -> www.domain.com/category/1 : doesn't work, even if www.domain.com/category/1 works by itself
I've tried to remove the L tag from my rewrite rules, to allow further rewriting, but it doesn't seem to work... maybe Wordpress does strange things with htaccess? Does this sound familiar to anyone? Thank you | htaccess rules before wordpress own rules |
A host path volume is bind mounted into a container. So anything available on the host will become available in the container. A FUSE volume is not directly supported by Kubernetes, as in provisioning a PVC on a node via some FUSE driver, but hostPath should work fine. | If a filesystem written in FUSE is mounted on the host, can containers started by Docker use it? I use k8s to deploy containers and want to do something with FUSE. | Does Docker support filesystems written in FUSE mounted on host OS? |
According to the Jenkins documentation you should use, on:
Linux/Unix: ${MY_GLOBAL_VARIABLE} or $MY_GLOBAL_VARIABLE
Windows: %MY_GLOBAL_VARIABLE%
SonarQube Scanner for Jenkins has supported using environment variables since version 2.5 (see SONARJNKNS-267 Support using env variable to configure Scanner for MSBuild), so it should be possible to use environment variables.
Are you sure this environment variable is set? Could you execute an "Execute Windows batch command" step before "SonarScanner for MSBuild - Begin Analysis" and verify that the variable is set properly:
echo %MY_GLOBAL_VARIABLE%
answered May 17, 2019 at 13:26 by agabrys
| I'm using the "SonarScanner for MSBuild for Jenkins" and I'm trying to change the SonarQube projectVersion number automatically. I have a Windows system variable that contains the number that I want to put in this field.
I have tried many different ways (with no success):
ProjectVersion: %MY_GLOBAL_VARIABLE%
ProjectVersion: MY_GLOBAL_VARIABLE
ProjectVersion: ${%MY_GLOBAL_VARIABLE%}
ProjectVersion: ${MY_GLOBAL_VARIABLE}
Does anyone know how I can call this variable? Is it possible to do that using this Jenkins block? It works if I put the function in a Windows batch command, but I lose the links that are shown on Jenkins' main page.
CODE:
G:\jenkins-slave\tools\hudson.plugins.sonar.MsBuildSQRunnerInstallation\sonar\SonarScanner.MSBuild.exe begin
/k:"key"
/n:"name"
/v:%MY_GLOBAL_VARIABLE%
Image shows the variable that I'm telling you about, and the link that I lose if I use a Windows batch command.
[UPDATE]
SonarQube Version: | How to change sonarqube projectversion number automatically in jenkins |
Git cares only about the executable bit. The rest is left to the system. Unix systems have a per-process umask value, set and maintained by the umask command, that says which permissions not to grant. 0022 and 0066 and 0077 are common settings for that. My Arch linux default is 0022, don't grant write privilege to anyone but me. It looks like your linux distro defaults to 0002, don't grant write privilege to the general public, but allow it for you and your group.
So put umask 0022 in your shell startup. If you want it system-wide, look in /etc/profile; the Arch one runs any hooks in /etc/profile.d, so you could add an override there.
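To see the effect concretely, here is a quick self-contained demonstration (a sketch that assumes GNU coreutils, since BSD/macOS stat uses different flags):

```shell
set -e
cd "$(mktemp -d)"
umask 0022                 # mask group/other write
touch file_a
umask 0002                 # mask only other write
touch file_b
stat -c '%A %n' file_a file_b
# -rw-r--r-- file_a
# -rw-rw-r-- file_b
```

New files start from mode 0666, and each bit set in the umask is withheld, which is why the same touch yields two different modes.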
|
I cloned the same project from GitHub on my MacBook (Sierra) and Linux machine (Ubuntu 16.04), and I see the file mode on my MacBook is "-rw-r--r--", but on my Linux machine it is "-rw-rw-r--".
| Why the file modes on OSX and Linux are different when cloning code with git |
I had the same problem.
My approach was to sanitize the repo with BFG and then push the clean repo to the new server.
Here an example:
git clone --mirror git://some-server.com/some-dirty-repo.git
java -jar bfg.jar --delete-files /some-dirty-path/some-dirty-files-pattern
cd some-dirty-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push
Please, tell me if it worked for you too.
answered Sep 24, 2021 at 18:56 by Antonio Petricca
Thank you, I'll try it tonight. Does this push all the branches and history to the new repo?
– minhtuanta
Sep 24, 2021 at 23:26
Yes, you have to issue git push --all and git push --tags.
– Antonio Petricca
Sep 25, 2021 at 8:47
Interesting. I did git push new-origin --all but the new repo still have issue with the missing lfs objects. At this point, I'm thinking I'll find a way to remove lfs and download the files then push them to the new repo.
– minhtuanta
Sep 25, 2021 at 13:50
You have to push the repository pruned by BFG in order to get a clean copy. I think you should do some training with this tool. :) Good luck!
– Antonio Petricca
Sep 25, 2021 at 20:16
My company has a Git repo that supports LFS.
The LFS was set up before I joined.
I believe person set it up didn't tell other people. So over time, there are missing LFS objects here and there.
That brings it to today.
I was tasked with moving the current repo to a new repo.
However, when I try to fetch and push the LFS objects, there are missing LFS objects. Nobody knows where those are anymore.
Now I don't care about LFS in the new repo.
Is there anyway I can migrate my current repo to the new repo without LFS?
I want the actual files to be moved, not the LFS pointers.
| Git LFS Missing Files |
This looks like a git submodule to me:
✓ Folder is a double grayed-out folder icon on GitHub
✓ When the repo is cloned, files from the folder are not copied locally
✓ Can commit files in the folder
Looks like you may have accidentally created a git submodule. The way to fix this would be to look for a .gitmodules file in the root of your local repository. Delete the .gitmodules file and it should "free up" all of the files in that folder so you can commit them normally into your repository. Commit, push, and it should be fixed!
In response to your comment (which makes it clear that you are using submodules on purpose): to clone a repository including all submodules, use git clone --recursive instead of the normal git clone. This will cause all the files (and indexes) from your submodules to be downloaded as well, so you no longer get empty folders. | So I have a blog up and running fine using Jekyll and GitHub. The weird thing is that when I view my repo via the GitHub website and navigate to the folder containing my blog, it is greyed out (the icon is a double grayed-out folder, one on top of the other). Also, when I clone my repo, the folder doesn't copy any files locally. Everything else copies just fine except the folder containing my blog. Is this because it is being hosted from that folder? I think this is not the case and I did something wrong. The strange thing is I can commit to the folder and the blog works perfectly. For obvious reasons I want to be able to make posts via browser sometimes, and when I clone as a backup I want my blog to be downloaded as well. Can anyone help? | Jekyll GitHub Blog Folder Not Accessible |
As for my experience, You should use multiple levels of cache. Implement both of Your solutions (provided that it's not the only code that uses "SELECT title, body FROM posts WHERE id=%%". If it is use only the first one).
In the second version of code, You memcache.get(query.hash()), but memcache.put("feed_%s" % id, query_result). This might not work as You want it to (unless You have an unusual version of hash() ;) ).
I would avoid query.hash(). It's better to use something like posts-title-body-%id. Try deleting a video when it's stored in cache as query.hash(). It can hang there for months as a zombie-video.
By the way:
id = GET_PARMS['id']
query = query("SELECT title, body FROM posts WHERE id=%%", id)
You take something from GET and put it right into the sql query? That's bad (will result in SQL injection attacks).
|
Something I'm curious about.. What would be "most efficient" to cache the generation of, say, an RSS feed? Or an API response (like the response to /api/films/info/a12345).
For example, should I cache the entire feed, and try and return that? As pseudocode:
id = GET_PARAMS['id']
cached = memcache.get("feed_%s" % id)
if cached is not None:
return cached
else:
feed = generate_feed(id)
memcache.put("feed_%s" % id, feed)
return feed
Or cache the query's result, and generate the document each time?
id = sanitise(GET_PARMS['id'])
query = query("SELECT title, body FROM posts WHERE id=%%", id)
cached_query_result = memcache.get(query.hash())
if cached_query_result:
feed = generate_feed(cached_query_result)
return feed
else:
query_result = query.execute()
memcache.put("feed_%s" % id, query_result)
feed = generate_feed(query_result)
(Or, some other way I'm missing?)
| How granular should data in memcached be? |
It is a constraint from etcd. The limit is 1MB because that's the limit for etcd (see its request size limit). | I am trying to upload a jar file in a configmap. I am able to upload a small jar file of size 1 MB but not a file of size 4 MB. I am using the below command to create the configmap:
kubectl create configmap configmap-test --from-file=jarfile.jar
Is there a way to increase this size limit? | Can we increase the size of configmap to store binary data of size more than 1 MB? |
From the Mystik-RPG folder:
git rm -rf mapeditor
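If it helps to see the whole cycle, here is a self-contained sketch in a throwaway repository (the folder name matches the question; the user details and commit messages are made up). In the real repo you would follow the commit with a git push:

```shell
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
mkdir mapeditor && echo data > mapeditor/file.txt
git add . && git commit -qm "initial"
# remove the folder from the working tree and stage the deletion
git rm -r mapeditor
git commit -qm "Remove mapeditor folder"
test ! -d mapeditor && echo "mapeditor removed"
# in the real repo, follow with: git push
```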
|
I'm trying to delete the folder mapeditor from my Java engine on GitHub here (https://github.com/UrbanTwitch/Mystik-RPG)
I'm on Git Bash, messing around with rm and -rf, but I can't seem to do it. How do I remove the mapeditor folder from Mystik-RPG completely?
Thanks.
| Deleting a folder from GitHub |
A root as defined in RFC 5280, Section 3.2:
(a) Internet Policy Registration Authority (IPRA): This
authority, operated under the auspices of the Internet
Society, acts as the root of the PEM certification hierarchy
at level 1. It issues certificates only for the next level
of authorities, PCAs. All certification paths start with the
IPRA.
Therefore, a "root store", even if it is a generic, non-specified description (as President James K. Polk pointed out in his comment), should only contain root CA certificates, which means they have signed themselves.
If you do, there might be unwanted side effects... On Windows this breaks mutual TLS with Internet Information Services (IIS). This is also not specified by the RFC, but the issue is documented as Cause 2 here: https://learn.microsoft.com/en-us/troubleshoot/iis/http-403-forbidden-access-website | May the root store contain non-self-signed certificates, i.e. where the issuer and subject are different? If so, will certificate chain validations return "success" upon encountering a non-self-signed certificate in the root store, or will the validation continue in the root store until either a self-signed certificate ("success") or none ("fail") is encountered? I suspect this behavior is implementation dependent, but I can't find any reference. | May the certificate root store contain non-self-signed certificates? |
I do confirm the issue. The ticket has been created (planned for SonarQube 5.2). | I have SonarQube 5.1.1 installed and have several plugins as well. I'm testing out the sonar.web.context parameter and it seems to be working just fine for the most part, but when I try to load or apply my saved Issues Filters, nothing loads and it gives me a 404 error in the console, since the web context is missing. Is anyone else having these problems, or does anyone know a workaround for this issue? Removing the context param would get things back to normal, but having that web context would be nice to have.
Also, I'm seeing the same issue with the Views plugin. From the View's Settings page, after selecting a view and clicking "Open Dashboard", the page does not properly load and is missing the sonar.web.context. Typing in the missing web context string will allow it to load. | SonarQube sonar.web.context causing issue filters to fail |
These variables are probably going to be read from your environment, so just define them:
export JAVA_PERM_MEM...
About the PermGen exception: do you maybe use Spring DM, or some Spring-related OSGi bundles? There are a couple of issues around that. | I am trying to figure out how to increase Karaf's PermGen memory. In Karaf's startup script I see that there is:
if not "%JAVA_PERM_MEM%" == "" (
set DEFAULT_JAVA_OPTS=%DEFAULT_JAVA_OPTS% -XX:PermSize=%JAVA_PERM_MEM%
)
if not "%JAVA_MAX_PERM_MEM%" == "" (
set DEFAULT_JAVA_OPTS=%DEFAULT_JAVA_OPTS% -XX:MaxPermSize=%JAVA_MAX_PERM_MEM%
)
I understand that JAVA_PERM_MEM and JAVA_MAX_PERM_MEM are the variables, but they are not defined anywhere in the startup script except here. Karaf is running on a live machine, so I do not want to make any experiments on it; I need to be sure that doing it like this:
if not "%JAVA_PERM_MEM%" == "" (
set DEFAULT_JAVA_OPTS=%DEFAULT_JAVA_OPTS% -XX:PermSize=512M
)
if not "%JAVA_MAX_PERM_MEM%" == "" (
set DEFAULT_JAVA_OPTS=%DEFAULT_JAVA_OPTS% -XX:MaxPermSize=1024M
)
will increase the PermGen memory? The reason I need to do it is because I keep getting:
Caused by: java.lang.OutOfMemoryError: PermGen space
EDIT: All the bundles deployed on Karaf are Spring related; they have Camel routes, CXF endpoints, and OpenJPA persistence configuration, all managed via Spring. But I do not think that is where the problem lies, because as far as I know OutOfMemory PermGen space means there is not enough memory for all deployed applications. If someone knows where the issue might be, it would be very helpful. | Apache Karaf startup script. How to set more perm memory? |
This would better be a comment; however, I've not got enough reputation to do so.
Could you clarify your problem? You wrote that nginx is serving content from /usr/local/var/www. What do you expect instead? Did you specify a different folder? If so, which one, and in which config did you do this?
Normally, nginx.conf is used to specify general configuration directives. Server blocks are normally specified in configs located in sites-available/, and symlinked from there to sites-enabled/. Upon startup, nginx reads these configs, where you would specify a different folder than /usr/local/var/www, for example. | I am using nginx on Mac; I installed it using Homebrew, and it used to work fine until this morning. Now, after a reboot, it doesn't read my nginx.conf (from /usr/local/etc/nginx/nginx.conf) anymore and loads its default index.html from /usr/local/var/www. If I force pass the config file explicitly using the -c switch (sudo nginx -c /usr/local/etc/nginx/nginx.conf), nginx runs fine. Yesterday, I ran (Python's) SimpleHTTPServer to serve a file and test something, then quit, and shut down. SimpleHTTPServer is not running anymore; I just ran it a couple of times yesterday. I don't know if the issue is related to that, but just in case. lsof does not show any process running on ports 80 or 443 except nginx.
Update: In my nginx.conf, I serve files from different directories in a specified order (some directories are sym-linked):
location ~ ^/(js/|css/|img/|audio/|fonts/|files/|images/|video/) {
root /Users/me/development/myCompany;
try_files /staticFrontend/$uri /angularjsSPA/develop/$uri =404;
}
Again, sudo nginx -c /usr/local/etc/nginx/nginx.conf works fine, but sudo nginx does not. | Nginx does not read my config file |
Yes, it can be done using Prometheus and Alertmanager, but you will need something to export the metric that you want to monitor to Prometheus. In your case script_exporter would work. You would have to set up the exporter inside that container and configure it to execute something like ls | wc -l in the folder you want to monitor. | I'm looking to write a Prometheus rule to constantly check the message queue length (Exim mail relay), which is the total number of files in a directory in an app's container, and alert a Slack channel via Alertmanager. Is this possible at all with Prometheus/Alertmanager? | Prometheus rules - check file count inside a directory of an app container |
As I explained in "Git submodules: Specify a branch/tag", you can use a git config command to edit the .gitmodules file:
git config -f .gitmodules submodule.<path>.branch <branch>
But then, don't forget to do:
git submodule update --remote
That will update the submodules' content to the latest of their assigned branch.
How 3.0.1 version set?
A submodule is a way to record a SHA1 in the parent repo. If that SHA1 is the commit tagged 3.0.1, that is a way for a parent repo to reference the 3.0.1 tag of a submodule. | From ffmpeg-android, I find the repo depends on the ffmpeg repo. I find the version of ffmpeg is 3.0.1. I have tried to edit .gitmodules.
path = ffmpeg
url = https://github.com/FFmpeg/FFmpeg.git
branch = release/3.4
[submodule "x264"]
path = x264
url = git://git.videolan.org/x264.git
[submodule "freetype2"]
path = freetype2
url = git://git.sv.nongnu.org/freetype/freetype2.git
[submodule "libass"]
path = libass
url = https://github.com/libass/libass.git
[submodule "fontconfig"]
path = fontconfig
url = git://anongit.freedesktop.org/fontconfig
As you see, I add branch = release/3.4. So I have two questions.
How 3.0.1 version set?
How to change the version to 3.4.2? | Update gitmodule |
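For what it's worth, the answer's two commands can be exercised end-to-end in throwaway local repositories. Everything here is made up for the demo: the local ffmpeg-src repo stands in for the real ffmpeg remote, and protocol.file.allow is only needed because newer git blocks file-based submodules by default:

```shell
set -e
work=$(mktemp -d)
# stand-in repo playing the role of the real ffmpeg remote
git init -q "$work/ffmpeg-src"
git -C "$work/ffmpeg-src" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C "$work/ffmpeg-src" branch release/3.4

git init -q "$work/parent"
cd "$work/parent"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git -c protocol.file.allow=always submodule add "$work/ffmpeg-src" ffmpeg
# the two commands from the answer:
git config -f .gitmodules submodule.ffmpeg.branch release/3.4
git -c protocol.file.allow=always submodule update --remote -- ffmpeg
cat .gitmodules
```

After this, .gitmodules carries the branch = release/3.4 line, and update --remote tracks that branch's tip.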
If you don't appear in a repository's contributors graph, it may be because:
You aren't one of the top 100 contributors.
Your commits haven't been merged into the default branch.
The email address you used to author the commits isn't connected to your account on GitHub.
source
answered Dec 17, 2021 at 19:52 by Mike
Why are my GitHub commits like this? Is this the reason that I am not showing up as a contributor?
| Why am I not showing up as contributor in github? |
I found something interesting; it is kind of a hack:
count_values without() ("my_new_label", start_metric) | Some query returns:
{id="cart"} 0.014961101137043577
{id="payment"} 0.014961101137043577
{id="products"} 0.013670539986329524But I would like to have:{id="cart", a="0.014961101137043577"} 1
{id="payment", a="0.014961101137043577"} 1
{id="products", a="0.013670539986329524"} 1
I've looked everywhere, but I am worried that this is not possible ;/ | PromQL - Prometheus - query value as label |
It turned out that this was due to a recent Docker update which caused problems with the older 3.x kernel found by default on Ubuntu 14.04 LTS. Helpfully, it is possible to upgrade the kernel version on 14.04 rather than upgrading the whole OS. It can be done as described in this Ask Ubuntu article, but in short:
sudo apt-get install linux-generic-lts-xenial
sudo reboot
NB: searching the received error message revealed no other current articles online, but searching parts of it sourced a few app-specific forum posts discussing it. For this reason I felt it useful to create a more easily locatable version on here, given it will cover use cases of development, testing or even prod running containers on 14.04.
answered Feb 19, 2019 at 14:59 by M1ke
Imagine on the day I've started using Linux I get this error. Thanks for the fix. – machazthegamer Sep 19, 2019 at 19:36
@machazthegamer if this is the day you've started using Linux, get yourself on 18.04 rather than 14.04! – M1ke Sep 20, 2019 at 10:33
| With Jenkins running on a Ubuntu 14.04 LTS server we began getting crashes on startup of test containers with the following error:
OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:297: copying bootstrap data to pipe caused \"write init-p: broken pipe\"": unknown
Initially it was suspected that this could be due to misconfiguration with local Dockerfiles or the Jenkins server itself, however running:
docker run --rm -i -a stdin -a stdout ubuntu echo 1
should still work, and it produced the same issue | Docker crashing on Ubuntu 14.04 for any container |
kubectl get all shows you the resources you created; in this case each entry starts with the kind and then the resource name.
You can easily type kubectl delete pod/my-ngnix to delete the pod. Your command kubectl run my-ngnix --image nginx created just the pod, without a deployment.
answered Aug 17, 2021 at 20:15 by Manuel
One more thing, to understand resources a bit better: read about it in the kubernetes.io docs. It helps a lot to understand the resources and how to create them. If you want to check it via a kubectl command you can use kubectl api-resources. It shows you all supported resources in your cluster. kubectl explain deployment describes and shows you the fields you have to set to create a resource. You can also do kubectl explain deployment.spec and further. Follow the attribute paths... – Manuel Aug 17, 2021 at 20:22
| I ran this command to create my pod:
kubectl run my-ngnix --image nginx
Now, I'm trying to delete the pod/deployment with the following command:
kubectl delete deployment my-nginx
The problem is that my terminal is telling me that it is not possible, since it couldn't find the right info:
Error from server (NotFound): deployments.apps "my-nginx" not found
If I ask for all, this is what I see:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-ngnix 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 159m
root@aharo003:~# kubectl stop pods,services -l pod/m-ngnix
Does someone know what else I should do? | Kubectl is not letting me delete anything - How do I delete all the current deployments? |
You can use buildx in docker-compose by setting the env variable COMPOSE_DOCKER_CLI_BUILD=1; also, if buildx is not set as the default builder, you should add DOCKER_BUILDKIT=1:
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build | I can build my Dockerfile separately using the following command:
docker buildx build --platform linux/arm64 -t testbuild .
Now I want to use buildx in the docker-compose file, but how, and how do I say that I want the arm64 architecture? This is the structure when I use the normal build.
build: …/testbuild
image: testbuild
Does anybody know? | Use buildx build linux/arm64 in docker-compose file |
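As a footnote to the answer above: Compose implementations that follow the Compose Specification also accept a platforms list under build, so the target architecture can be declared in the compose file itself. A hedged sketch (the context path is a placeholder, since the question elides it, and support depends on the docker-compose version in use):

```yaml
services:
  testbuild:
    image: testbuild
    build:
      context: ./testbuild   # placeholder for the elided path
      platforms:
        - linux/arm64
```

With that in place, the COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build invocation from the answer should build the arm64 image.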
1.0 is the current release. The documentation for the upcoming next release (1.1) is available from the options and settings menu (gear icon) in the top right corner of the istio.io site. On the Kubernetes setup page it says that it's been tested on Kubernetes 1.10, 1.11, and 1.12: https://preliminary.istio.io/docs/setup/kubernetes/quick-start/ | Is there a link that shows the mapping of Istio and supported Kubernetes versions? The FAQ section just mentions this: "For our 1.0 release, Istio supports environments running container orchestration platforms such as Kubernetes (v1.9 or greater) and Nomad (with Consul)." What about the releases after 1.0? | Istio on Kubernetes - Version compatibility mapping |
You can achieve this by using Set-ItResult, which is a Pester cmdlet that allows you to force a specific result. For example:
Describe 'tests' {
context 'list with content' {
BeforeAll {
$List = @('Harry', 'Hanne', 'Hans')
$newlist = @()
foreach ($name in $List) {
if (($name -eq "Jens")) {
$newlist += $name
}
}
}
It "The maximum name length is 10 characters" {
if (-not $newlist) {
Set-ItResult -Skipped
}
else {
$newlist | ForEach-Object { $_.length | Should -BeIn (1..10) -Because "The maximum name length is 10 characters" }
}
}
}
}
Note that there was an error in your example ($newlist wasn't being updated with $name; you were doing the reverse) which I've corrected above, but your test doesn't actually fail for me in this example (before adding the Set-ItResult logic). I think this is because by using ForEach-Object with an empty array as input, the Should never gets executed when it's empty, so with this approach your test would just pass because it never evaluates anything. | I would like to be able to skip tests if the list is empty.
A very simplified example: no name is -eq to "Jens", therefore $newlist would be empty, and of course the test will fail, but how do I prevent it from going through this test if the list is empty?
context {
BeforeAll{
$List = @(Harry, Hanne, Hans)
$newlist = @()
foreach ($name in $List) {
if (($name -eq "Jens")) {
$name += $newlist
}
}
}
It "The maximum name length is 10 characters" {
$newlist |ForEach-Object {$_.length | Should -BeIn (1..10) -Because "The maximum name length is 10 characters"}
}
}fail message:Expected collection @(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) to contain 0, because The maximum name length is 10 characters, but it was not found. | Pester test if list is empty do not run the tests |
How about this? You need to run:
sudo umount /dev/xvdf | I have created an EBS drive, attached it to the instance, and created a file system using mkfs.ext3.
Now I want to unmount and delete the drive; I've tried many things but nothing seems to work. Although I am able to detach the drive from the instance and delete it using the EC2 Console,
when I check the partitions using df -hk it is still showing the drive.
[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 1075740 7097356 14% /
tmpfs 304368 0 304368 0% /dev/shm
/dev/xvdf 30963708 176196 29214648 1% /media/newdriveAnd more over when i try to use any other command like "fdisk -l" or and all or trying to browse the drive's folders, the putty session hangs.I am new to EC2 cloud and also to Linux. | Amazon EC2: Unable to unmount and remove EBS drive file system |
Leaving aside issues of exactly-once semantics, one way to handle this would be to have a parallel source function that emits the SQL queries (one per sub-task), and a downstream FlatMapFunction that executes the query (one per sub-task). Your source could then send out updates to the query without forcing you to restart the workflow. | I need help with Flink application deployment on K8s.
We have 3 sources that will send trigger conditions in the form of SQL queries. In total there are ~3-6k queries, which is effectively a heavy load on a Flink instance. I tried to execute it, but it was very slow and takes a lot of time to start.
Because of the high volume of queries, we decided to create multiple Flink app instances per source, so effectively one Flink instance will execute only ~1-2k queries.
Example: the SQL query sources are A, B, C.
Flink instances:
App A --> will be responsible to handle source A queries only
App B --> will be responsible to handle source B queries only
App C --> will be responsible to handle source C queries only
I want to deploy these instances on Kubernetes.
Questions:
a) Is it possible to deploy a standalone Flink jar with a (built-in) mini cluster? Like just starting the main method: java -cp mainMethod (sourceName is a command line argument A/B/C).
b) If one of the K8s pods or Flink instances is down, then how can we manage it in another pod or another Flink instance? Is it possible to give the work to another pod or another Flink instance?
Sorry if I mixed up two or more things together :(
Appreciate your help. Thanks. | Flink - multiple instances of flink application deployment on kubernetes |
The only way to update a pull request is to push to the branch that's been PRed - so even the original repo's owner can't amend the PR, by default. It does make sense - for traceability's sake, at least.
So if you want to finish that work, the best thing you can do is to fork the original repo, clone it on your machine, add the PR's repo as a remote, check out the PR'ed branch, commit on top of that, push those changes to your own fork, and make a new PR, stating in the comments that it continues and fixes the other PR, so the original PR gets closed when yours is merged.
In this case, something like:
$ # Go to https://github.com/cheeriojs/cheerio/ and fork it
$ git clone https://github.com/Delgan/cheerio/ && cd cheerio # assuming that's your GH username :)
$ git remote add pr-base https://github.com/digihaven/cheerio/
$ git fetch pr-base
$ git checkout pr-base/master -b 641-appendTo_prependTo
$ # work work work
$ git add #...
$ git commit -m 'Fixed all the things! See #641, fixes #726'
$ git push origin 641-appendTo_prependTo
$ # Go to your repo and make the PR
$ # ...
$ # SUCCESS! (??!)
|
In a repository which is not mine, a third person opened a pull request. One of the owners suggested some changes to make before being able to merge it. However, the author of the pull request has not made them, and it has remained open for several months without the modifications having been implemented.
Actually I am refering to a situation like this one.
I would make the requested improvements myself.
What is the cleanest and best way to do this? Can I add my commits on top of it, or do I need to open a new pull request?
| Is it possible to continue a pull request opened by someone else on Github? |
You can set the fragment cache store in your environment.rb: ActionController::Base.cache_store = ActiveSupport::Cache::MemCacheStore.new() See http://api.rubyonrails.org/classes/ActionController/Caching.html#M000628 | Is there any way of using Memcached for fragment caching in Rails? | Fragment Caching with Memcached |
There are several problems with your code:
you store all characters into the first byte of allocated memory
since you read characters into a char variable, you cannot correctly test for EOF.
you will run an infinite loop, allocating all available memory and finally crash if standard input does not contain a '\n', such as redirecting from an empty file.
less important, you reallocate the buffer for each byte read, inefficient but can be optimized later.
Here is a corrected version:
#include <stdio.h>
#include <stdlib.h>
int main() {
char *string;
int c;
int len = 0;
string = malloc(1);
if (string == NULL) {
printf("Error.\n");
return -1;
}
printf("Enter a string:");
while ((c = getchar()) != EOF && c != '\n') {
string[len++] = c;
string = realloc(string, len + 1);
if (string == NULL) {
printf("cannot allocate %d bytes\n", len + 1);
return -1;
}
}
string[len] = '\0';
printf("Input string: %s\n", string);
free(string);
return 0;
}
Regarding your question about the difference with the linked code, it uses the same method, has one less bug but also one more bug:
it stores the character in the appropriate offset in str.
it runs an infinite loop if the input file does not contain a '\n', same as yours.
it invokes undefined behavior because c is not initialized for the first test.
|
I'm learning C programming and I have to implement a program that reads an input string of unknown size.
I wrote this code:
int main() {
char *string;
char c;
int size = 1;
string = (char*)malloc(sizeof(char));
if (string == NULL) {
printf("Error.\n");
return -1;
}
printf("Enter a string:");
while ((c = getchar()) != '\n') {
*string = c;
string = (char*)realloc(string, sizeof(char) * (size + 1));
size++;
}
string[size - 1] = '\0';
printf("Input string: %s\n", string);
free(string);
return 0;
}
But the last printf doesn't show the whole string but only the last char.
So if I enter hello, world the last printf prints d.
After a little research I tried this code and it works! But I don't get the difference with mine.
I hope I made myself clear, thank you for your attention.
| What is the difference between these two methods to get string input in C? |
The sample below might help you. I think this is a duplicate of Nginx multiple ports.
server {
listen 80;
listen 8000;
server_name example.org;
root /var/www/;
}
|
I want to listen on ports with Nginx and set the proxy.
Here is the conf of the server:
server{
listen 8080;
location / {
proxy_pass http://127.0.0.1:82;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-live;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server{
listen 8081;
location / {
proxy_pass http://127.0.0.1:83;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-live;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Port 8080 is OK, but port 8081 cannot be connected to.
| How to listen 2 ports in nginx? |
Try https://sourceforge.net/projects/nbuexplorer/ - it should be able to open .nbf files. It is/was a C# project, and it only provides a Windows exe as a release download; however, I could get this .exe to run on Ubuntu 18.04 using mono, and I could also get the project sources to compile on Ubuntu using xbuild (the result again being an .exe file that can be run with Mono), see comments in my build script get-nbuexplorer.svn.sh. | I am using a Mac and have a Nokia phone. Therefore I cannot sync it with my computer, but I found out that making a backup creates a .nbf file, which contains all the data I want (contacts and messages).
The contacts are stored easily accessible as vCards, so that's cool. Unfortunately, the messages are each stored separately in one file, which looks pretty weird when I open it with a text editor (for example TextWrangler). I can see the numbers and the text, but no information about the date.
I uploaded the file here: http://www.4shared.com/file/7LNsuPbF/00000A123EB640F500002010005000.html I already tried out different encodings, but it never looks good.
Maybe someone has a clue how to read that file? Could it be encrypted or something? | Encoding of SMS-files from Nokia backup (.nbf)? |
They do solve the same problem. Let me start off with pros/cons, then I'll move into technical differences. git-annex Pros: Supports multiple remotes that you can store the binaries on. Can be used without support from the hosting provider (for more details see here). Cons: Windows support is in beta, and has been for a long time. Users need to learn separate commands for day-to-day work. Not supported by GitHub and Bitbucket. git-lfs Pros: Supported by GitHub, Bitbucket and GitLab. Most supported on all OSes. Easy to use. Automated based on filters. Cons: Requires a custom server implementation to work. A simple ssh remote is not sufficient. A reference server is under development at https://github.com/git-lfs/lfs-test-server. Technical: git-annex works by creating a symlink in your repo that gets committed. The actual data gets stored in a separate backend (S3, rsync, and MANY others). It is written in Haskell. Since it uses symlinks, Windows users are forced to use annex in a much different manner, which makes the learning curve higher. git-lfs: Pointer files are written. A git-lfs API is used to write the BLOBs to LFS. A special LFS server is required due to this. Git LFS uses filters, so you only have to set up LFS once, and again when you want to specify which types of files you want to push to LFS. | git-annex has been around for quite some time, but never really gained momentum. Git LFS is rather young and is already supported by GitHub, Bitbucket and GitLab. Both tools handle binary files in git repositories. On the other hand, GitLab seems to have replaced git-annex with Git LFS within one year. What are the technical differences? Do they solve the same problem? | How do Git LFS and git-annex differ? |
Firebase Hosting by default uses a shared certificate from Let's Encrypt. While your domain name propagates, it is normal that your browser may show the site as not secure. This should resolve itself as your domain propagates, which typically happens within a few hours. If you don't want to use a shared certificate, have your own certificate ready to go, and your project is on the Firebase Blaze plan, you can reach out to Firebase support to get your site set up with your own certificate. | I use Firebase Hosting for my website and it created an SSL certificate for me. And my website is seen as secure. But now I checked, and the certificate is for a domain I don't know, called "www.cheesusburger.at". Why is it not for the name of my website? It says that it's by "Let's Encrypt Authority X3".
Also, now I noticed that the default Firebase-provided app domains have a different certificate. I use Namecheap for my custom domain. So is Namecheap responsible for the weird SSL certificate? Thank you | SSL certificate for weird domain [duplicate] |
Possible cluster-wide policies are listed here: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ You can set pod-level security policies or you can limit resource usage, but neither of these includes the revisionHistoryLimit parameter. I am not aware of any other alternatives, so the answer to your question is that you have to include the parameter in every deployment definition. (answered Jul 31, 2019 by AYA) | I'm setting up a new server with Kubernetes, and because of storage limitations I need to change revisionHistoryLimit for all our existing and new projects to 2 or 3. I know I can change it in each deployment with spec.revisionHistoryLimit, but I need to change it globally. Thank you for answers and tips. | Is there a way/config where I could change default revisionHistoryLimit from 10 to another limit? |
Unfortunately the error message is misleading; the problem is that Resource-Level Permissions for EC2 and RDS Resources aren't yet available for all API actions. See this note from Amazon Resource Names for Amazon EC2: Important: Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions. In particular, all ec2:Describe* actions are still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing, which implies that you cannot use anything but "Resource": ["*"] for ec2:DescribeImages. The referenced page on Granting IAM Users Required Permissions for Amazon EC2 Resources also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014 - they have indeed regularly expanded resource-level permission coverage over the last year or so already, but so far only for actions which create or modify resources, not for any which require read access only, something many users desire and expect for obvious reasons, including myself. | I'm trying to constrain the images which a specific IAM group can describe. If I have the following policy for my group, users in the group can describe any EC2 image:{
"Effect": "Allow",
"Action": ["ec2:DescribeImages"],
"Resource": ["*"]
} I'd like to only allow the group to describe a single image, but when I try setting "Resource": ["arn:aws:ec2:eu-west-1::image/ami-c37474b7"], I get exceptions when trying to describe the image as a member of the group: AmazonServiceException Status Code: 403,
AWS Service: AmazonEC2,
AWS Request ID: 911a5ed9-37d1-4324-8493-84fba97bf9b6,
AWS Error Code: UnauthorizedOperation,
AWS Error Message: You are not authorized to perform this operation. I got the ARN format for EC2 images from IAM Policies for EC2, but perhaps something is wrong with my ARN? I have verified that the describe image request works just fine when my resource value is "*". | How can I limit EC2 describe images permissions? |
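To illustrate the conclusion above, here is a minimal stdlib-only sketch that builds the only policy shape that works for ec2:DescribeImages. It merely constructs the JSON document and does not talk to AWS:

```python
import json

def describe_images_policy():
    # "Resource" must stay ["*"]: ec2:Describe* actions do not support
    # resource-level permissions (individual image ARNs) at this time.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["ec2:DescribeImages"],
                "Resource": ["*"],
            }
        ],
    }

print(json.dumps(describe_images_policy(), indent=2))
```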
I suggest you use Azure IaaS VM backup to back up and restore your VMs. Azure backups can be created through the Azure portal. This method provides a browser-based user interface to create and configure Azure backups and all related resources. You can protect your data by taking backups at regular intervals. Azure Backup creates recovery points that can be stored in geo-redundant recovery vaults. This article details how to back up a virtual machine (VM) with the Azure portal.
answered Apr 28, 2018 at 14:28 by Vikranth S
Comment: Thanks Vikranth. We currently use Azure snapshots. However, it takes several hours to restore a snapshot. If we are recovering from a crashed server, that is too long to wait. So I was trying to find some backup solution with faster recovery times.
– Daryl Rinaldi
Apr 30, 2018 at 10:50
|
I currently use Azure snapshots to back up my Azure-hosted Windows servers. The problem is that if my Azure-hosted Windows VM fails, restoring a snapshot can take hours. That is way too much downtime. Is there a solution that will let me back up an Azure VM and restore it directly to Azure that is faster and/or better than the built-in Azure snapshotting?
| Azure VM Backup strategy |
My first guess is that it's dying on "dot" directories. In Unix there are two directories in every directory/folder: "." and "..". You'll either need to specifically skip those in your script: next if File.directory?(x) # OR
next if x.match(/^\.+$/) -- OR -- Look specifically for whatever file types you are wanting: Dir[SOURCE_FOLDER + LOCATION_SOURCE + "*.wav"].each do |file|
convert(file)
end Update (20110401): Add Unix redirects to the crontab entry to see what the output is: * * * * * /your/program/location/file.rb 1> /some/output/file.txt 2>&1 | I have a Ruby program to convert video to MP4 format using ffmpeg, and I'm using crontab to run the Ruby program every 15 minutes. crontab actually runs the Ruby program, but the conversion of the file is not complete. The process is stopped before completing the conversion. My sample code for testing is below. def convert(y)
system "ffmpeg -i #{SOURCE_FOLDER + LOCATION_SOURCE}/#{y} -acodec libfaac -ar 44100 -ab 96k -vcodec libx264 #{DEST_FOLDER + LOCATION_DEST}/#{y}"
end
SOURCE_FOLDER = "/home/someone/work/videoapp/public/"
DEST_FOLDER = "/home/someone/work/videoapp/public/"
LOCATION_SOURCE = "source"
LOCATION_DEST = "dest"
files = Dir.new(SOURCE_FOLDER + LOCATION_SOURCE)
files.each do |x|
convert(x)
end This code works fine if I run it manually in the console. | cron job is not completing the process? |
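A small Ruby sketch of the answer's second suggestion (select by file type rather than iterating everything; Dir.glob also never yields the "." and ".." entries that Dir.new does). The helper name and the temporary files are illustrative:

```ruby
require "tmpdir"

# Only regular .wav files in dir; directories and other extensions are skipped.
def wav_files(dir)
  Dir.glob(File.join(dir, "*.wav")).select { |f| File.file?(f) }
end

Dir.mktmpdir do |d|
  File.write(File.join(d, "song.wav"), "")
  File.write(File.join(d, "notes.txt"), "")
  puts wav_files(d).map { |f| File.basename(f) }.inspect
end
```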
Dedicated GPU memory is basically the VRAM on board the GPU.
System GPU memory is system memory that the graphics card driver uses, via the GART (Graphics Address Remapping Table), to store resources... AGP and PCI Express both provide regions of memory set aside for this purpose (sometimes referred to as aperture segments).
Committed GPU memory refers to the amount of memory mapped into a display device's address space by the display driver. It is a difficult concept to explain, but this number typically does not represent anything worthwhile to anyone but driver developers.
I suggest you look into the following documentation on MSDN, as well as this overview of GPU address space segmentation; while they are somewhat technical, they give a general overview of what is going on.
|
I am trying to hunt down a possible memory leak in my Sharpdx / DirectX application.
I am getting the following information from process explorer which I do not know how to interpret.
What is Dedicated GPU Memory?
What is System GPU Memory?
What is Committed GPU Memory?
| Interpreting GPU information from Process Explorer |
You might try this instead: RewriteBase /
RewriteCond %{QUERY_STRING} ^target=value1$
RewriteRule ^php_file.php$ http://www.website.com/newpage/? [R=301,L]
RewriteCond %{QUERY_STRING} ^target=value2$
RewriteRule ^php_file.php$ http://www.website.com/OtherNewPage/? [R=301,L] The key here is to use ? at the end of your redirection to prevent the old query string from being added. | For my .htaccess file, I am creating a 301 redirection for my new website. I want this: from http://www.website.com/php_file.php?target=value1 to http://www.website.com/NewPage/ AND from http://www.website.com/php_file.php?target=value2 to http://www.website.com/OtherNewPage/ I was trying: Redirect 301 /php_file.php?target=value1 http://www.website.com/newpage/ But I'm getting this result: http://www.website.com/?target=value1 | Htaccess Redirection 301 FROM "with parameter" to "without parameter" |
Looks like the iframes acting as a browser are receiving the hostname instead of the full path to the resources. Can you set up the following reverse-proxy headers and give it a go: proxy_set_header X-Forwarded-Host $host:$server_port;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; Basically you have a proxy at the moment, and we want a reverse proxy too. Let me know if this works. | We are trying to update our internal server infrastructure and to proxy all accesses to our R Shiny webservers through an Nginx server. I'm able to get a response from the Shiny server, but I'm not able to get related files like CSS/JS through the Nginx server. Setup: 2 Docker containers (1 hosting Nginx, 1 running R for a Shiny application); both containers are members of a Docker network; the Shiny server listens on port 7676 (internal IP address 172.18.0.3); the Nginx server hosts a few static HTML files with iframes (legacy, can't get rid of them), which should show content from the Shiny server. Accessing nginx-server/QueryLandscape.html loads the page with the iframe. The iframe works: it loads the static part of the R Shiny application, but it doesn't load the related JS/CSS (e.g. http://nginx-server:8001/ilandscape/shared/shiny.css). Within the Nginx Docker container I can access this CSS file: wget 172.18.0.3:7676/shared/shiny.css Nginx.conf: location /ilandscape/ {
proxy_pass http://172.18.0.3:7676/;
#proxy_redirect http://172.18.0.3:7676/ $scheme://$host/;
# websocket headers
proxy_set_header Upgrade $http_upgrade;
proxy_http_version 1.1;
proxy_read_timeout 20d;
proxy_set_header Host $host;
} What am I missing in my Nginx conf to proxy/redirect http://nginx-server:8001/ilandscape/shared/shiny.css --> 172.18.0.3:7676/shared/shiny.css? Thanks for your help,
Tobi | Nginx: Proxy pass / proxy redirect to shiny web applications |
Create a .htaccess under the website root with this rule: RewriteEngine On
RewriteRule ^(.*)$ subfolder1/subfolder2/subfolder3/$1 [L] | Let's say my domain is mydomain.com
My structure is root>subfolder1>subfolder2>subfolder3>index.php
How can I redirect the domain so that mydomain.com points to subfolder1>subfolder2>subfolder3>index.php without changing the URL? I have a .htaccess in subfolder3 which takes care of parsing the URL, so it has to be unchanged. .htaccess code in subfolder3: RewriteEngine on
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?path=$1 [L,QSA] Thanks. | Redirect main domain to level 3 subfolder without changing url |
As stated in the docs for the docker-compose file network_mode option: Notes
This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
network_mode: "host" cannot be mixed with links.Thenetwork_modecannot be used when deploying on docker swarm usingdocker stack deploy. This is not new with version 18.04 but is rather older.Thenetwork_modecan only be used with docker-compose when deploying the container on the local machine usingdocker-compose up. | For a small project, I want an application in a docker container to connect to the localhost of the machine. An answer to this question:From inside of a Docker container, how do I connect to the localhost of the machine?tells me the preferred way is to use--net="host"in the docker run command.I use a compose file to start the container. Andthis questiontold me thenetoption was renamed tonetwork_mode: "host".Here is the beginning of the compose fileversion: '3.6'
services:
shiny:
image: paulrougieux/eutradeflows
deploy:
restart_policy:
condition: on-failure
network_mode: "host"
ports:
- "3838:3838"When I start this filesudo docker stack deploy -c stackshiny.yml shinyI get the error:Ignoring unsupported options: network_modeFor information$ sudo docker version
Client:
Version: 18.04.0-ce
Server:
Engine:
  Version:      18.04.0-ce How do I enable connection to a database on the host from a docker container? | Docker Version 18.04.0-ce ignores unsupported options: network_mode |
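Given the limitation described in the answer, here is a hedged sketch of the same service written for plain docker-compose, where network_mode is honored (note that host networking makes a ports: mapping unnecessary, and the two cannot be combined):

```yaml
# run with: docker-compose up -d   (not: docker stack deploy)
version: '3.6'
services:
  shiny:
    image: paulrougieux/eutradeflows
    network_mode: "host"   # honored by docker-compose, ignored in swarm mode
    # no "ports:" section: host networking exposes 3838 directly
```

If swarm features are actually required, network_mode has to be dropped entirely.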
As @Ajan comments, Java 8 doesn't have a "permgen" heap space anymore, and that option will be ignored.
But this isn't a permgen problem at all. In fact, it is most likely a sign that the main Java heap is full. This exception gets thrown if the JVM detects that the GC is taking too large a proportion of the total CPU time over the last few GC cycles. This generally happens because the heap is getting close to full, and the GC is being run more and more frequently.
So, the "quick fix" for the problem would be to increase the main heap size using an -Xmx... option. However, if the real problem is that you have a memory leak, then that is only putting off the inevitable. Unless you already understand why your application is using a lot of memory, you should probably start looking for memory leaks.
|
I am running my application using Java 8; however, I have been getting the following error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
I have tried to increase MaxPermSize from 512m to 768m, but I am still getting the same error. How can I solve this?
| Java 8: OutOfMemory Error, change MaxPermSize? [duplicate] |
From the dashboard, it is not possible to trigger a cross-region Lambda. When you create a CloudWatch event rule and select a target to invoke, under "Lambda function" only the Lambdas in the current region are shown.
|
Can an AWS CloudWatch event in region us-east-1 trigger a lambda in us-west-2? Or do I have to deploy my lambda in both regions?
| Can a CloudWatch Event in one region trigger a Lambda in another region of AWS? |
There are two theoretically possible ways to solve your issue: Nginx with the ngx_stream_ssl_preread module, or HAProxy (for balancing) -> proxy_protocol -> Nginx (with SSL certs). | Host A has an https service, serviceA, and provides two IPs for high availability, e.g. both [ip1:443] and [ip2:443] are routed to serviceA.
Host B (which does not have ssl_certificate and ssl_certificate_key) uses the Nginx proxy module to proxy the requests towards the actual serviceA.
How can I simply forward port 443 traffic to serviceA without SSL verification? Here is my config: http {
upstream backend {
server [ip1]:443;
server [ip2]:443;
}
server {
listen 443;
listen [::]:443;
location / {
proxy_pass https://backend;
}
}
} | How to disable ssl certificate and only used to forward traffic on port 443 in Nginx? |
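A hedged sketch of the first option from the answer: plain TCP passthrough with the stream module, so Host B forwards the TLS bytes untouched and needs no certificates. The stream block belongs at the top level of nginx.conf (not inside http), and nginx must be built with stream support; ssl_preread is only needed if you want to route by SNI:

```nginx
# top level of nginx.conf, alongside (not inside) the http {} block
stream {
    upstream backend {
        server ip1:443;   # replace with serviceA's real addresses
        server ip2:443;
    }
    server {
        listen 443;
        proxy_pass backend;   # raw TLS is forwarded: no local certs, no verification
    }
}
```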
Did you do apt-get update before that?
And then
apt-get -y install python-pip
answered Apr 1, 2015 at 15:24 by mrh
Comment: I sure did: Reading package lists... Done / Building dependency tree / Reading state information... Done / All packages are up to date. But I'm still getting "unable to locate package python-pip".
– Mike B
May 12, 2020 at 5:27
|
Unable to install pip in Docker running Ubuntu 14.04. See the log below.
root@57da7dd8a590:/usr/bin# apt-get install pip
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package pip
root@57da7dd8a590:/usr/bin#
| Unable to install pip in Docker |
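Putting the answer's two commands together as a Dockerfile fragment (a sketch; the base image tag matches the question):

```dockerfile
FROM ubuntu:14.04
# Refresh the package index first; without it apt cannot locate python-pip
RUN apt-get update && apt-get install -y python-pip
```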
Your error message points to a bug in the OpenSSL version you use. See https://bugzilla.redhat.com/show_bug.cgi?id=1022468. In short: the client advertises capabilities it does not have, and if the server picks such a capability you get this error message. It needs to be fixed by upgrading your local OpenSSL installation. A workaround on the server side should be possible too, if you have control over the server. | I have the following code to use: def createCon(host,auth):
con = httplib.HTTPSConnection(host)
return con
def _readJson(con,url):
con.putrequest("GET",url)
...
r = con.getresponse() It is working on a specific server, but on another I'm getting an SSLError. If I open the URL from a browser, I need to accept the certificate and it works well. But how can I either accept the certificate or disable its validation? This is a self-signed certificate stored in a Java keystore, so I would prefer to disable the verification...
The code is meant to reuse the connection between the requests, so I would prefer not to modify it deeply. How could I do this? I tried to modify the examples but haven't succeeded: con.putrequest("GET", url, verify=False)
or
con.request.verify=False I do not know how I could access the session or request objects or modify the default settings. UPDATE: this does not help: socket.ssl.cert_reqs='CERT_NONE'. Well, the actual error message is weird...: SSLError: '[Errno 1] _ssl.c:492: error:100AE081:elliptic curve routines:EC_GROUP_new_by_curve_name:unknown group' Regards:
Bence | Python httplib disable certificate validation |
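Separately from the OpenSSL bug diagnosed above, certificate validation itself can be switched off on Python 2.7.9+/3.x by handing the connection an SSL context. A sketch using the Python 3 module name http.client (httplib on Python 2); example.com is a placeholder host:

```python
import ssl
import http.client

def create_con(host):
    # Context that skips certificate and hostname checks; acceptable for a
    # known self-signed certificate, but insecure for anything else.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE
    return http.client.HTTPSConnection(host, context=ctx)

con = create_con("example.com")  # nothing is sent until a request is made
```

The connection object can then be reused across requests exactly as before, since only its construction changes.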
You can create a new service and leverage the interpreterSettingName label of the spark master pod. When Zeppelin creates a master spark pod it adds this label, and its value is spark. I am not sure if it will work for more than one pod in a per-user, per-interpreter setting. Below is the code for the service; do let me know how it behaves per user per interpreter. kind: Service
apiVersion: v1
metadata:
name: sparkUI
spec:
ports:
- name: spark-ui
protocol: TCP
port: 4040
targetPort: 4040
selector:
interpreterSettingName: spark
clusterIP: None
  type: ClusterIP And then you can have your ingress as: apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-zeppelin-server-http
http
spec:
rules:
- host: my-zeppelin.my-domain
http:
paths:
- backend:
serviceName: zeppelin-server
servicePort: 8080
- host: '*.my-zeppelin.my-domain'
http:
paths:
- backend:
serviceName: sparkUI
servicePort: 4040
status:
  loadBalancer: {} Also do check out this repo: https://github.com/cuebook/cuelake. It is still in an early stage of development, but I would love to hear your feedback. | First of all, I'm pretty new at all this (Kubernetes, ingress, Spark/Zeppelin...), so my apologies if this is obvious. I tried searching here, documentation, etc., but couldn't find anything. I am trying to make the Spark interpreter UI accessible from my Zeppelin notebook running on Kubernetes.
Following what I understood from here: http://zeppelin.apache.org/docs/0.9.0-preview1/quickstart/kubernetes.html, my ingress yaml looks something like this: Ingress.yaml apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-zeppelin-server-http
http
spec:
rules:
- host: my-zeppelin.my-domain
http:
paths:
- backend:
serviceName: zeppelin-server
servicePort: 8080
- host: '*.my-zeppelin.my-domain'
http:
paths:
- backend:
serviceName: spark-guovyx
servicePort: 4040
status:
  loadBalancer: {} My issue here is that I need to rely on the service name (in this case spark-guovyx) being set to the interpreter pod name in order to have the UI show up. However, since this name is bound to change / have different values (i.e. I have one interpreter per user, and interpreters are frequently restarted), obviously I cannot rely on setting it manually. My initial thought was to use some kind of wildcard naming for the serviceName, but it turns out ingress/Kubernetes doesn't support that. Any ideas please? Thanks. | Expose spark-ui with zeppelin on kubernetes |
The answer I was waiting for. I decided to give Ruby a try and it is okay. I like how it is compact, but it isn't pretty looking :(. This works: #!/usr/bin/env ruby
require "yaml"
require "open-uri"
time = Time.new
backupDirectory = "/storage/backups/github.com/#{time.year}.#{time.month}.#{time.day}"
username = "walterjwhite"
#repositories =
# .map{|r| %Q[#{r[:name]}] }
#FileUtils.mkdir_p #{backupDirectory}
YAML.load(open("http://github.com/api/v2/yaml/repos/show/#{username}"))['repositories'].map{|repository|
puts "found repository: #{repository[:name]} ... downloading ..."
#exec
system "git clone[email protected]:#{username}/#{repository[:name]}.git #{backupDirectory}/#{repository[:name]}"
} Walter | I'd like to periodically create a backup of my GitHub repositories. Is there a quick way to pull all of them without knowing what the entire list is? Walter | Backup / Mirror Github repositories |
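The v2 YAML endpoint used in the script above was retired long ago; here is a hedged modern sketch against the v3 REST API (api.github.com) that keeps the command building pure so it can be exercised without network access. The username and backup path are taken from the original script:

```python
import json
import urllib.request

def clone_commands(repo_names, username, dest):
    # Build one "git clone" command per repository name.
    return [
        f"git clone git@github.com:{username}/{name}.git {dest}/{name}"
        for name in repo_names
    ]

def fetch_repo_names(username):
    # Public repos of a user, first page only (add pagination for >100 repos).
    url = f"https://api.github.com/users/{username}/repos?per_page=100"
    with urllib.request.urlopen(url) as resp:
        return [repo["name"] for repo in json.load(resp)]

if __name__ == "__main__":
    names = fetch_repo_names("walterjwhite")
    for cmd in clone_commands(names, "walterjwhite", "/storage/backups/github.com"):
        print(cmd)
```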
I'd say there is a benefit to using an additional counter: naming. Say you have a histogram named http_request_duration_seconds; the associated counter is named http_request_duration_seconds_count, whereas http_requests_total would be a much better name. | For example, imagine that we need to measure two metrics: request count (counter) and request duration (histogram). A histogram already has an inner counter, so when I want to query the request rate I can use the histogram metric instead of a counter. Are there some benefits to using a separate counter metric? | prometheus: are there some benefits to use counter if I also use histogram for the same kind of metric? |
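A toy illustration of the point in the answer: a histogram already carries its own count (exported with a _count suffix), so an additional counter mostly buys a better metric name. This is a simplified stand-in, not the prometheus_client API:

```python
class Histogram:
    def __init__(self, buckets=(0.1, 0.5, 1.0)):
        self.buckets = {b: 0 for b in buckets}  # cumulative "le" buckets
        self.count = 0    # exported as <name>_count
        self.sum = 0.0    # exported as <name>_sum

    def observe(self, value):
        self.count += 1
        self.sum += value
        for b in self.buckets:
            if value <= b:
                self.buckets[b] += 1

h = Histogram()
for duration in (0.05, 0.3, 2.0):
    h.observe(duration)
# h.count now plays the role of a request counter
```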
Sorry, it's just not a feature. You could implement some sort of semaphore thing by using incrond to monitor the local filesystem, then do something (touch a file, trigger a script, etc.) on the remote machine to tell it there's been an update, but there's no native functionality in NFS. | Can I trigger a filesystem event on Linux without an actual file change?
Is there some system call that acts like the file was written? Is that even possible? I have an NFS share mounted and want to get inotify events in the virtual machine when a file changes on the server side.
It seems inotify doesn't work with NFS.
Is there any network filesystem that supports inotify? It's easy to monitor the events on the server side, but how can I trigger the events on the client? At the moment I do a simple touch, but that's not ideal. (The use case is for local development with Docker (boot2docker, OS X).) | trigger inotify event over NFS on Linux? |
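Since inotify events are not delivered for changes made on the NFS server, a portable (if cruder) client-side fallback is to poll the file's mtime. A minimal sketch; the demo fakes a remote write by bumping the mtime:

```python
import os
import tempfile
import time

def poll_change(path, last_mtime):
    """Return (changed, new_mtime); call this periodically, e.g. once a second."""
    mtime = os.stat(path).st_mtime
    return (mtime != last_mtime, mtime)

# Demo: pretend a remote writer bumped the file's mtime.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
_, baseline = poll_change(path, None)
os.utime(path, (time.time(), baseline + 5))   # simulate a write on the server
changed, _ = poll_change(path, baseline)
os.unlink(path)
```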
On Windows, you have to create a new thread in order to increase the stack size: import sys
import threading
def my_function(*args):
    # write your function here
    pass
if __name__ == "__main__":
sys.setrecursionlimit(5000)
threading.stack_size(2 ** 22)
thread = threading.Thread(target=my_function,
args=("arg1", "arg2"))
thread.start() | I want to set a higher stack size in a Python script, as I am working with a larger dataset. I found this link useful, but it only applies to Linux. Could someone provide a package with proper instructions on how to implement that? So, for clarity, this is the code that gets the job done on Linux & BSD: import resource
resource.setrlimit(resource.RLIMIT_STACK, (2**29,-1)) As a side note, probably the psutil package can do this, but I have been searching for setting the stack size in their official docs (I actually found something, but it is again for Linux) and could not find anything. Any recommendation? | Setting stacksize in python script(Windows) |
Download the Library from Github.
Unzip.
In Android Studio, go to File > New > Import Module and set the directory to the "Library" folder inside "segcontrol-master".
|
I want to add this repository to my Android Studio project, and I tried multiple dependencies but it just doesn't work. Any help, guys?
My build.gradle (app) file:
apply plugin: 'com.android.application'
android {
compileSdkVersion 24
buildToolsVersion "24.0.2"
defaultConfig {
applicationId "com.example.myapp"
minSdkVersion 21
targetSdkVersion 23
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.android.support:support-v4:24+'
compile 'com.7heaven.widgets:segmentcontrol:1.16'
compile 'at.bookworm:segcontrol:1' //THIS GUY IS MY PROBLEM
}
My build.gradle (project) file:
buildscript {
repositories {
jcenter()
maven {
url "https://jitpack.io"
}
}
dependencies {
classpath 'com.android.tools.build:gradle:2.1.3'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
jcenter()
}
}
| Add library to Android Studio project |
From Actions, Resources, and Condition Keys for Amazon S3 - AWS Identity and Access Management:
ListBucketVersions: Use the versions subresource to list metadata about all of the versions of objects in a bucket.
I tested this as follows:
Created an IAM User
Assigned the policy below
Ran the command: aws s3api list-object-versions --bucket my-bucket
It worked successfully.
The policy was:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucketVersions",
"Resource": "*"
}
]
}
So, while the naming seems a bit strange (List Object Versions vs List Bucket Versions), it is the correct permission to use.
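For reference, the policy document used in the test above can also be generated programmatically; this sketch just serializes the same JSON (the bucket ARN is a placeholder):

```python
import json

def list_versions_policy(resource="*"):
    """Minimal IAM policy allowing s3:ListBucketVersions
    (the permission `aws s3api list-object-versions` needs)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListBucketVersions",
                "Resource": resource,
            }
        ],
    }

doc = json.dumps(list_versions_policy("arn:aws:s3:::my-bucket"), indent=2)
print(doc)
```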
|
I currently have a lambda that uses the node sdk call listObjectVersions to list all the versions of a specific file. However, I can't figure out what permission in my policy will grant the lambda permission to make this call. I've searched the AWS documentation and I can not find any information.
Here are the current permissions in my policy:
- PolicyName: S3Policy
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- s3:PutObject
- s3:PutObjectAcl
- s3:GetObject
- s3:GetObjectVersion
- s3:ListObjectVersions
- s3:DeleteObject
- s3:ListBucket
When I execute the lambda I get an Access Denied when making the call. I've changed my policy to allow the action s3:* and the lambda works. However, I do not want to grant full access to s3.
What Action do I need to add to allow?
| What permission is needed to use S3 listObjectVersions in AWS? |
sizeof(Rz3DContourNode) == 6*4 = 24 bytes ... not 12!
The stride is the # of bytes between the start of each vertex, not the padding. Although 0 is a special value that indicates tightly packed data.
So, if you're using 3 floats as vertex data, there's no difference between a stride of 0 and 12 (because 3 floats are 12 bytes). In your case, your struct is 24 bytes, so you should put that.
This convention (stride = step size, not padding) allows the simple use of sizeof, so you ended up doing the the right thing intuitively. That's good API design for you. :)
See glVertexPointer docs:
stride
Specifies the byte offset between consecutive vertices. If stride
is 0, the vertices are understood to be tightly packed in the array.
The initial value is 0.
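The stride arithmetic is easy to double-check outside of C++; for example, mirroring the struct with Python's ctypes shows the 24-byte step and what goes wrong when you advance by the wrong amount:

```python
import ctypes

class ContourNode(ctypes.Structure):
    # mirrors the C++ Rz3DContourNode: position followed by normal
    _fields_ = [
        ("x", ctypes.c_float), ("y", ctypes.c_float), ("z", ctypes.c_float),
        ("nx", ctypes.c_float), ("ny", ctypes.c_float), ("nz", ctypes.c_float),
    ]

stride = ctypes.sizeof(ContourNode)
print(stride)  # 24: six 4-byte floats, no padding

nodes = (ContourNode * 2)(ContourNode(1, 2, 3, 0, 0, 1),
                          ContourNode(4, 5, 6, 0, 1, 0))
raw = ctypes.cast(nodes, ctypes.POINTER(ctypes.c_float))
floats_per_vertex = stride // 4          # 6, not 3
second_vertex_x = raw[floats_per_vertex]
print(second_vertex_x)  # 4.0 - stepping by 3 floats would land on a normal
```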
|
I have the following struct to store my vertex data.
struct Rz3DContourNode {
float x; //pos x
float y; //pos y
float z; //pos z
float nx; //normal x
float ny; //normal y
float nz; //normal z
};
I store list of vertices in STL vector as follows :
std::vector < Rz3DContourNode > nodes;
When I try to use this as a vertex array in OpenGL, it does not render correctly.
glVertexPointer(3, GL_FLOAT, 12, &nodes[0]);
glDrawArrays(GL_POINTS,0,nodes.size());
So, I tried to confirm the values using pointer arithmetic (assuming that's the way OpenGL handles data) as follows:
float *ptr=(float*) &nodes[0];
for(int i=0;i<nodes.size();i++)
{
Rz3DContourNode confirmNode=nodes[i];
float x=*ptr;
ptr++;
float y=*ptr;
ptr++;
float z=*ptr;
ptr++;
//Confirm values !!! Do not equal ??
qDebug("x=%4.2f y=%4.2f z=%4.2f | nx=%4.2f ny=%4.2f nz=%4.2f
",confirmNode.x,confirmNode.y,confirmNode.z,x,y,z);
//Skip normal positions
ptr++;
ptr++;
ptr++;
}
The values does not equal if I directly access values from the struct.
Does this mean the struct does not save the values contiguously?
[EDIT] I just noticed that using sizeof() instead of 12 fixes the problem, as follows:
glVertexPointer(3, GL_FLOAT, sizeof(Rz3DContourNode), &nodes[0]);
But I am still confused why my hack didn't traverse memory correctly. (Why doesn't qDebug print identical values?)
| C++ struct memory layout and OpenGL glVertexPointer? |
It was actually just a matter of doing a:
php composer.phar self-update
|
I'm trying to include a repository I created on github, using Composer.
In my composer.json i have:
"repositories": [
...
{
"type": "vcs",
"url": "https://github.com/unknownfrequency/zendservice_discogs"
}
],
"require": {
"unknownfrequency/zendservice_discogs": "dev-master",
}
When i run
$>composer install -vvv i get:
@unknown-2~/workspace/imusic $ composer install -vvv
Downloading composer.json
Loading composer repositories with package information
...
Downloading ..//packages.zendframework/packages.json
Downloading ...api.github/repos/unknownfrequency/zendservice_discogs
Downloading ...github/repos/unknownfrequency/zendservice_discogs/contents/composer.json?ref=master
Downloading ../api.github/repos/unknownfrequency/zendservice_discogs/commits/master
Downloading ...//api.github/repos/unknownfrequency/zendservice_discogs/tags
Downloading ...//api.github/repos/unknownfrequency/zendservice_discogs/git/refs/heads
Reading composer.json of zendframework/zendservice-discogs (master)
Importing branch master (dev-master)
Downloading https://packagist.org/packages.json
Downloading https://packagist.org/p/provider-latest$cf8f23c1297b4c86275ae395aed6402ba4f5cc186e587b80f8dd5ecca7d60e3f.json
Installing dependencies
Your requirements could not be resolved to an installable set of packages.
Problem 1
- The requested package unknownfrequency/zendservice_discogs could not be found in any version, there may be a typo in the package name.
Potential causes:
- A typo in the package name
- The package is not available in a stable-enough version according to your minimum-stability setting
I've tried to fix this for so many hours now. Hope someone can help!
| Can't use Composer to install own git repository
If it shows you non-fast-forward updates were rejected, that means your local and remote repository are out of sync because you or someone else made changes to the upstream repo, and this could be a possible cause of your problem. Try to fetch and then rebase, or just use the pull --rebase command. Source: https://help.github.com/articles/dealing-with-non-fast-forward-errors If this isn't the problem, does it show errors when pushing? And does your project have multiple branches? | I have a rudimentary understanding of Github: I know how to create, add, commit, push and clone repositories. I've also started exploring Github pages to host my projects. My latest project I started in March and pushed it to a gh-page. I have since then refactored and improved the code and made quite a few changes. On my local server the changes are shown, but after pushing to Github and making a new gh-page several times, I still see the old project. Could someone please help. | Github pages showing old code
no, not through a direct link.
"Loading" a folder from a git repo only means sparse checkout (partial clone).
Any other solution would indeed mean building an artifact and upload it.
Update August 2016 (2 years later): you can have a look at this answer and the DownGit project, by Minhas Kamal.
|
I have a repository that has several folders of code. I'd like to be able to provide a link to the code in a single folder so another user could download just the relevant bits of code without being bloated by the rest of the codebase and without requiring that they have git installed on their machine.
Of course, they can browse the code files inside of the folder online, but that isn't very helpful if they want to run a single project.
Here are several other similar questions, and why I don't think they address my particular issues:
How to download source in .zip format from GitHub?
Only provides a way to download the entire project, otherwise perfect.
Github download folder as zip
This answer is for build artifacts. I don't want to upload the source code twice just so I can provide a download link to it.
Download a single folder or directory from a GitHub repo
Requires using git commands to get a single folder. I'd rather have the link accessible to multiple people without requiring they have git installed.
GitHub - Download single files
Only provides mechanism for downloading single files, not folders
In case it helps provide a concrete example, here's a folder that I would like to be able to download via a link:
https://github.com/KyleMit/CodingEverything/tree/master/MVCBootstrapNavbar/Source%20Code
Is there any way to do this?
| Generate download link for a single folder in GitHub |
When you create a service of type "load balancer", a new load balancer is created by your cloud provider (here AWS). Load balancer naming is the responsibility of the cloud provider. I don't think you can tell Amazon how to generate the load balancer name, and it may depend on the load balancer type (ALB, internal ALB, NLB, ...). But you can use "external-dns" (https://github.com/kubernetes-incubator/external-dns). When configured for a DNS provider (for example AWS Route53) it can automatically create DNS aliases for your load balancer. But you won't be able to create a name in the amazonaws.com domain ... | Say I have the following YAML representing a service:

apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:91371:certificate/0a389f-4086-4db6-9106-b587c90a3
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
labels:
app: main-api-prod
name: main-api-prod
spec:
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 80
selector:
    app: main-api-prod

after I run:

kubectl apply -f <file>

I run:

kbc get svc -o json | grep hostname

and we see:

"hostname":
"a392f200796b8b0279bdd390c-228227293.us-west-2.elb.amazonaws.com"

my question is - is there a way to tell kubectl to use my own id in the hostname? In other words, I would like to tell it to use "abc" instead of "a392..", so it would be:

"abc-228227293.us-west-2.elb.amazonaws.com"

As an aside, if anyone knows what "228227293" represents, please lmk; I know that it is not our AWS account id, that's for sure. | Give EKS service a name (instead of being auto-generated)
The code in a PR is run "as is". If there was an "environmental" issue that was fixed, rerunning the jobs would indeed solve it.
In this case, another PR was merged in order to fix the problem. You should rebase your PR on top of the fix (probably on top of the main or master branch it was merged into) in order to be able to run the PR with the fixed tests.
answered Jan 29, 2022 at 13:18
Mureinik
Would a "rebase and merge" do this (not sure if that merges whatever, or would redo the tests) ?
– Ian
Jan 29, 2022 at 15:08
@Ian that should work
– Mureinik
Jan 29, 2022 at 15:15
@Ian if you're referring to the PR completion strategy of "rebase and merge", selecting that won't necessarily rebase your branch for you before you complete the PR. (I think...) If you want the test to pass before you complete the PR, you may need to rebase the branch yourself and force push it back out.
– TTT
Jan 29, 2022 at 15:33
|
We have some pull requests people have made, which fail an automated analyze test/check (the failure was not related to any change the PR did, but I think a change in the analyze or formatting checks).
Another person has since fixed the issue with a new PR which has been merged which was causing the other PRs to fail the tests.
I was expecting if I "re-run all jobs", for the original PRs to now pass, but it's still failing with the same error (that the new PR has fixed). So I'm guessing it's still looking at the previous point before the new PR was merged.
What is the flow to get around this problem? (I could possibly just merge anyway and ignore the failed analyze test, but that doesn't feel quite right ?)
| GitHub PR still fails test after another merged PR fixed the failing problem |
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Load Balancers.
Select your load balancer.
On the Listeners tab, for SSL Certificate, choose Change.
On the Select Certificate page, do one of the following:
If you created or imported a certificate using AWS Certificate Manager, select Choose an existing certificate from AWS Certificate Manager (ACM), select the certificate from Certificate, and then choose Save.
If you imported a certificate using IAM, select Choose an existing certificate from AWS Identity and Access Management (IAM), select the certificate from Certificate, and then choose Save.
If you have a certificate to import but ACM is not supported in the region, select Upload a new SSL Certificate to AWS Identity and Access Management (IAM). Type a name for the certificate, copy the required information to the form, and then choose Save. Note that the certificate chain is not required if the certificate is a self-signed certificate.
If you want further details, you can study the relevant AWS documentation here. | Godaddy SSL certificate is already installed on my server. But now I want to change it to namecheap and I don't know where to put my new certificate files, as last time it was installed by another developer. Can somebody please help? | How can I renew my SSL certificate on aws normal ec2 instance
The main difference between the two is, PutItem will Replace an entire item while UpdateItem will Update it.
Eg.
I have an item like:
userId = 1
Name= ABC
Gender= Male
If I use PutItem with
UserId = 1
Country = India
This will replace Name and Gender and now new Item is UserId and Country.
While if you want to update an item from Name = ABC to Name = 123 you have to use UpdateItem.
You can use PutItem to update it, but you need to send all the parameters instead of just the parameter you want to update, because it replaces the item with the new attributes (internally it deletes the item and adds a new item).
Hope this makes sense.
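The replace-vs-merge semantics can be modelled with plain dictionaries; this toy sketch (not the AWS SDK) mirrors the example above:

```python
def put_item(table, key, attrs):
    """PutItem semantics: replace the whole item."""
    table[key] = dict(attrs)

def update_item(table, key, attrs):
    """UpdateItem semantics: merge attributes into the existing item."""
    table.setdefault(key, {}).update(attrs)

table = {}
put_item(table, 1, {"Name": "ABC", "Gender": "Male"})

update_item(table, 1, {"Country": "India"})
merged = dict(table[1])
print(merged)    # {'Name': 'ABC', 'Gender': 'Male', 'Country': 'India'}

put_item(table, 1, {"Country": "India"})
replaced = dict(table[1])
print(replaced)  # {'Country': 'India'} - Name and Gender are gone
```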
|
Based on DynamoDb documentation why would anyone use updateItem instead of putItem?
PutItem - Writes a single item to a table. If an item with the same primary key exists in the table, the operation replaces the item. For calculating provisioned throughput consumption, the item size that matters is the larger of the two.
UpdateItem - Modifies a single item in the table. DynamoDB considers the size of the item as it appears before and after the update. The provisioned throughput consumed reflects the larger of these item sizes. Even if you update just a subset of the item's attributes, UpdateItem will still consume the full amount of provisioned throughput (the larger of the "before" and "after" item sizes).
| Difference between DynamoDb PutItem vs UpdateItem? |
Although the @amzn-main repo doesn't have PHP 7.2 yet (as far as I know), you can use remi-php72. According to his release blog you can install the EPEL and Remi repositories via:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm
rpm -Uvh epel-release-latest-6.noarch.rpm

And then enable remi-php72 using yum-config-manager:

yum install yum-utils
yum-config-manager --enable remi-php72

After that, you can simply search for and install php and all the needed extensions like normal:

sudo yum install --disablerepo="*" --enablerepo="remi,remi-php72" php php-fpm php-cli ... | I haven't seen any yum packages for php 7.2 on AWS EC2 and the release has been out over a month. I have tried yum list | grep php7 and am only able to see php70 and php71 packages. Has anyone installed php72 on AWS EC2? Is there another yum repo to connect to? Does AWS have a delivery scheduled? | need help to install php 7.2 yum packages on aws ec2
Option 1: add the www-data user to the my-user group:

sudo adduser www-data my-user

Option 2: change the user of php-fpm into my-user (ref): find the options user and group in www.conf, and change them to your own user and group (e.g. user = my-user, group = mygroup). | I have followed this upvoted answer and did the following:

sudo chown -R my-user:www-data /var/www/domain.com/
sudo find /var/www/domain.com/ -type f -exec chmod 664 {} \;
sudo find /var/www/domain.com/ -type d -exec chmod 775 {} \;
sudo chgrp -R www-data /var/www/domain.com/storage /var/www/domain.com/bootstrap/cache
sudo chmod -R ug+rwx /var/www/domain.com/storage /var/www/domain.com/bootstrap/cache

Everything works fine, but whenever a directory (within the storage directory) is created by my-user and not the www-data user, the webserver can't write to it, or vice versa, unless I rerun those commands after the directory has been created.

Notes: sometimes I run commands with my-user that create directories, and sometimes the www-data user creates directories (within the storage directory).

Also, my-user is already within the www-data group.

How can I avoid permission errors without running all those commands again? | How to setup laravel file permission once and for all
Solution
Make a .gitattributes file in your working directory and add the following line to it:
*.docx binary
Why not just set core.autocrlf=false ?
This is useful too. But configuring .docx as a binary format solves not only this problem, but also potential merge issues.
What is the origin of this problem?
From http://git-scm.com/docs/gitattributes , section "Marking files as binary". Note the italicized section.
Git usually guesses correctly whether a blob contains text or binary data by examining the beginning of the contents. However, sometimes you may want to override its decision, either because a blob contains binary data later in the file, or because the content, while technically composed of text characters, is opaque to a human reader.
.docx format is a zip folder containing xml and binary data, such as images.
Git treated your .docx as a text (and not binary) file and replaced endline characters. As a Microsoft-developed format, .docx is probably using CRLF, which might have been replaced with LF in the remote repository. When you downloaded that file directly from remote, it still had LFs.
In a binary file Git never replaces endline chars, so even the files on remote repository will have proper CRLFs.
Applicable formats
This is applicable to any file format which is a zipped package with text and binary data. This includes:
OpenDocument: .odt, .ods, .odp and others.
OpenOffice.org XML: .sxw, .sxc, .sxi and others.
Open Packaging Conventions: .docx, .xlsx, .pptx and others.
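To see why newline normalization destroys such formats, note that a zip archive's bytes may contain CRLF sequences that are structural, not textual; a tiny illustration (the payload is fake stand-in data, not a real zip):

```python
# Fake stand-in bytes for a zipped format; real .docx data also contains
# b"\r\n" sequences that are part of the compressed stream, not line endings.
payload = b"PK\x03\x04" + b"\x00\x01\r\n\x02" * 4
normalized = payload.replace(b"\r\n", b"\n")  # what CRLF -> LF conversion does

print(len(payload), len(normalized))  # 24 20 - four bytes silently lost
```

After such a rewrite the archive's offsets and checksums no longer match, which is exactly why Word refuses to open the round-tripped file.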
|
I put several .docx, .txt and .pdf files into a .git repository. I can open, edit, and save the local .docx file; however, when I push it to github and download it back to my computer, Word complains that it cannot open it.
In order to store .docx file on github, is there some essential steps I should do to the git settings?
| What should I do if I put MS Office (e.g. .docx) or OpenOffice( e.g. .odt) document into a git repository? |
Set the privilege to Read only for the Backup folder (~/Library/Application Support/MobileSync/Backup). | In OSX before Catalina we could use:
defaults write com.apple.iTunes DeviceBackupsDisabled -bool true
to disable iPhone backups
Does anyone know how to disable in Catalina? | Disable iPhone backup in Catalina |
There are two variants. The first one is:

php_value date.timezone 'Region/Zone'

The second one is:

SetEnv TZ America/Washington

And you can use the time for a redirect, something like this:

RewriteRule ^$ /news/%{TIME_YEAR}%{TIME_MON}%{TIME_DAY}-%{TIME_HOUR}.mp3 [R=301,L]

I did not test it and am not sure it works. | I want to redirect to an external dynamic url, one which contains today's month, date, day, hour. E.g. http://url.com/20170706-2200-048.mp3 I read that using the .htaccess file and mod_rewrite I could use something like http://url.com/news/%{TIME_YEAR}%{TIME_MON}%{TIME_DAY}-%{TIME_HOUR}-048.mp3. How would I set this up in htaccess? And is there a way I could set a custom timezone? I.e. to EST or NZT. | URL redirect based on date and time
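The URL the rewrite rule emits is just a date format; as a sanity check, the same string can be produced with strftime-style formatting (the "-048" suffix is copied verbatim from the question's example URL and is presumably a fixed station id):

```python
from datetime import datetime

def mp3_url(now, base="http://url.com/news"):
    # Same fields as %{TIME_YEAR}%{TIME_MON}%{TIME_DAY}-%{TIME_HOUR}%{TIME_MIN};
    # the "-048" suffix is taken from the question's example URL.
    return f"{base}/{now:%Y%m%d}-{now:%H%M}-048.mp3"

url = mp3_url(datetime(2017, 7, 6, 22, 0))
print(url)  # http://url.com/news/20170706-2200-048.mp3
```

Note that Apache's %{TIME_*} variables use the server's timezone, which is what the SetEnv TZ line above is meant to change.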
From Python 3.10, OpenSSL 1.1.1 or newer is required. (Ref: PEP 644, Python 3.10 release.) I have tried changing some of your code like below and it worked.

delete: openssl-devel
add: openssl11 openssl11-devel

answered Oct 24, 2021 at 8:34 by shimo | I'm trying to compile and install Python 3.10 on Amazon Linux 2, but I'm not able to get it with https support. Here are the commands that I'm using to compile it:

sudo yum -y update
sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel bzip2-devel libffi-devel
wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz
tar xzf Python-3.10.0.tgz
cd Python-3.10.0
sudo ./configure --enable-optimizations
sudo make altinstallThe binary works, but when I try to use it with for reach an https endpoint, I get this message:Traceback (most recent call last):
File "<stdin>", line 1113, in <module>
File "<stdin>", line 1087, in main
File "/usr/local/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.10/urllib/request.py", line 519, in open
response = self._open(req, data)
File "/usr/local/lib/python3.10/urllib/request.py", line 541, in _open
return self._call_chain(self.handle_open, 'unknown',
File "/usr/local/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.10/urllib/request.py", line 1419, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>

I'm not sure what I'm missing :/ | Compiling python 3.10 at Amazon Linux 2
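Once the rebuild succeeds, it's worth verifying which OpenSSL the interpreter actually linked against; this quick check should report 1.1.1 or newer on a working 3.10 build:

```python
import ssl

print(ssl.OPENSSL_VERSION)            # e.g. "OpenSSL 1.1.1k  FIPS 25 Mar 2021"
new_enough = ssl.OPENSSL_VERSION_INFO[:3] >= (1, 1, 1)
print(new_enough)                     # True on a working 3.10 build
```

If `import ssl` itself fails, the configure step did not find the openssl11-devel headers at all.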
The main issue with your code is that looping through all files in the directory with ls * without some sort of filter is a dangerous thing to do.
Instead, I've used for i in $(seq 9 -1 1) to loop through files from *_9 to *_1 to move them. This ensures we only move backup files, and nothing else that may have accidentally got into the backup directory.
Additionally, relying on the sequence number to be the 18th character in the filename is also destined to break. What happens if you want more than 10 backups in the future? With this design, you can change 9 to be any number you like, even if it's more than 2 digits.
Finally, I added a check before moving site_com_${DATE}.tar in case it doesn't exist.
#!/bin/bash
DATE=`date "+%Y%m%d"`
cd "/home/user/backup/com"
if [ -f "site_com_*_10.tar" ]
then
rm "site_com_*_10.tar"
fi
# Instead of wildcarding all files in the directory
# this method picks out only the expected files so non-backup
# files are not changed. The renumbering is also made easier
# this way.
# Loop through from 9 to 1 in descending order otherwise
# the same file will be moved on each iteration
for i in $(seq 9 -1 1)
do
# Find and expand the requested file
file=$(find . -maxdepth 1 -name "site_com_*_${i}.tar")
if [ -f "$file" ]
then
echo "$file"
# Create new file name
new_str=$((i + 1))
to_rename=${file%_${i}.tar}
mv "${file}" "${to_rename}_${new_str}.tar"
fi
done
# Check for latest backup file
# and only move it if it exists.
file=site_com_${DATE}.tar
if [ -f $file ]
then
filename=${file%.tar}
mv "${file}" "${filename}_1.tar"
fi
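The renumbering logic in the script above can be sanity-checked with this Python sketch (file names are simplified to site_com_N.tar, without the date part used by the real script):

```python
import os
import tempfile

def rotate(directory, prefix="site_com", keep=10):
    """Shift prefix_N.tar up by one slot, dropping the oldest at N=keep."""
    oldest = os.path.join(directory, f"{prefix}_{keep}.tar")
    if os.path.exists(oldest):
        os.remove(oldest)
    for i in range(keep - 1, 0, -1):          # 9 down to 1, like the loop above
        src = os.path.join(directory, f"{prefix}_{i}.tar")
        if os.path.exists(src):
            os.rename(src, os.path.join(directory, f"{prefix}_{i + 1}.tar"))

with tempfile.TemporaryDirectory() as d:
    for i in (1, 2, 10):
        open(os.path.join(d, f"site_com_{i}.tar"), "w").close()
    rotate(d)
    final = sorted(os.listdir(d))

print(final)  # ['site_com_2.tar', 'site_com_3.tar']
```

Iterating from 9 down to 1 matters: going upward would rename the same file repeatedly on each pass.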
|
I was able to script the backup process, but I want to make an another script for my storage server for a basic file rotation.
What I want to make:
I want to store my files in my /home/user/backup folder. Only want to store the 10 most fresh backup files and name them like this:
site_foo_date_1.tar site_foo_date_2.tar ... site_foo_date_10.tar
site_foo_date_1.tar being the most recent backup file.
Past num10 the file will be deleted.
My incoming files from the other server are simply named like this: site_foo_date.tar
How can I do this?
I tried:
DATE=`date "+%Y%m%d"`
cd /home/user/backup/com
if [ -f site_com_*_10.tar ]
then
rm site_com_*_10.tar
fi
FILES=$(ls)
for file in $FILES
do
echo "$file"
if [ "$file" != "site_com_${DATE}.tar" ]
then
str_new=${file:18:1}
new_str=$((str_new + 1))
to_rename=${file::18}
mv "${file}" "$to_rename$new_str.tar"
fi
done
file=$(ls | grep site_com_${DATE}.tar)
filename=`echo "$file" | cut -d'.' -f1`
mv "${file}" "${filename}_1.tar"
| shell backup script renaming |
To see how to make this work, have a look at a working example, such as:
https://github.com/sonatype/sonatype-aether
However, this won't help if you'd like to release the individual pieces. In that case, you have to just copy the <scm> elements into all the poms.
This is an active topic of discussion on the maven dev list, but don't hold your breath for a solution from there; it's a big deal.
|
I'm trying to release a multi-module maven project that uses git as the SCM, and among the first problems I've encountered is the way in which the maven release plugin builds the release.properties scm.url. My parent POM looks something like this:
<packaging>pom</packaging>
<groupId>org.project</groupId>
<artifactId>project-parent</artifactId>
<version>1.0.0-SNAPSHOT</version>
<scm>
<connection>scm:git:git://github.com/username/project.git</connection>
<developerConnection>scm:git:[email protected]:username/project.git</developerConnection>
<url>http://github.com/username/project</url>
</scm>
<modules>
<module>api</module>
<module>spi</module>
</modules>
And the module POMs are straightforward:
<parent>
<groupId>org.project</groupId>
<artifactId>project-parent</artifactId>
<version>1.0.0-SNAPSHOT</version>
</parent>
<artifactId>api</artifactId>
<version>0.2.2</version>
My goal is to be able to release individual modules since they each have different versions and I don't want to increment all of the versions together each time I do a release.
When I change to the api directory and do a mvn release:clean release:prepare I'm met with the following output:
[INFO] Executing: cmd.exe /X /C "git push [email protected]:username/project.git/api master:master"
[INFO] ------------------------------------------------------------------------
[ERROR] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Unable to commit files
Provider message:
The git-push command failed.
Command output:
ERROR: Repository not found.
It looks like the maven release plugin creates the scm.url by appending the module name to the developerConnection, which ends up not being a valid repository at github. I'm not sure what the right way to set this up is. It might be the case that Maven + git + releasing an individual child module simply won't work? Any input is appreciated.
| Releasing a multi-module maven project with Git |
No. Celery offers no way to run anything on GPU. However, nothing prevents you
from using Keras, TensorFlow, or PyTorch in your Celery tasks (as a matter of fact I see many questions here about these projects and Celery).
Improve this answer
Follow
answered Jan 31, 2021 at 18:45
DejanLekicDejanLekic
19.2k44 gold badges4949 silver badges8080 bronze badges
2
My problem is that I have too many tasks. One task takes a few seconds so I try to increase the number of workers.
– OceanFire
Feb 2, 2021 at 12:53
If tasks are not heavy on CPU you can increase them to few hundred. Also, keep in mind that you can change the concurrency type to thread or eventlet/gevent.
– DejanLekic
Nov 1, 2023 at 16:05
Add a comment
|
|
The question is - It is possible to run celery on gpu?
Currently in my project I have settings like below:
celery -A projectname worker -l error --concurrency=8 --autoscale=16,8
--max-tasks-per-child=1 --prefetch-multiplier=1 --without-gossip --without-mingle --without-heartbeat
This is django project.
One single task take around 12 s to execute (large insert to postgresql).
What I want to achive is multiply workers as many as possible.
| It is possible to run celery on gpu? |
As mentioned in this stackoverflow answer by ttfreeman
Have you tried Kaniko? It can save the cache in gcr.io and if
you have built your Dockerfile with the right steps (see
https://cloud.google.com/blog/products/gcp/7-best-practices-for-building-containers),
it should save you a lot of time. Here is the cloudbuild.yaml
example:
- name: 'gcr.io/kaniko-project/executor:latest'
args:
- --destination=gcr.io/$PROJECT_ID/image
- --cache=true
- --cache-ttl=XXh
More info: https://cloud.google.com/cloud-build/docs/kaniko-cache
edited May 9, 2023 at 12:25
answered May 9, 2023 at 11:32
Sathi Aiswarya
|
I want to cache the previously stored google cloud image while creating a new build for the project.
I have followed the official documentation : https://cloud.google.com/build/docs/optimize-builds/speeding-up-builds but while building it doesn't seems to be caching the data.
The yaml file I am using is:
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build','--cache-from', 'gcr.io/PROJECT_ID/node-app:latest', '-t', 'gcr.io/PROJECT_ID/node-app:latest', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/PROJECT_ID/node-app:latest']
images:
- 'gcr.io/PROJECT_ID/node-app:latest'
I also tried to first pull the docker image as:
steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: ['-c', 'docker pull gcr.io/PROJECT_ID/node-app:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
args: ['build','--cache-from', 'gcr.io/PROJECT_ID/node-app:latest', '-t', 'gcr.io/PROJECT_ID/node-app:latest', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/PROJECT_ID/node-app:latest']
images:
- 'gcr.io/PROJECT_ID/node-app:latest'
I checked that the image is already present and it is also pulling the image, but it's not caching it.
How can I cache the image, or is there any other way to do so in Google Cloud?
| How can I use cache to optimise google cloud build? |
The solution is to not use legacy unsupported versions of nginx. Starting from version 1.3.15 (a pretty old one), nginx does not log the 400 errors in such cases. See the changelog for information: http://nginx.org/en/CHANGES

*) Change: opening and closing a connection without sending any data in
it is no longer logged to access_log with error code 400. | Amazon Elastic Load Balancer (ELB) performs periodic health checks: In addition to the health check you configure for your load balancer,
a second health check is performed by the service to protect against
potential side-effects caused by instances being terminated without
being deregistered. To perform this check, the load balancer opens a
TCP connection on the same port that the health check is configured to
use, and then closes the connection after the health check is
completed.

nginx logs these events with a 400 error, which happens many times per minute:

[07/Aug/2013:18:32:27 +0000] "-" 0.000 400 0 "-" "-" "-"

how can I configure nginx to not log these events? | Configure nginx to not log ELB secondary healthcheck
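For reference, the probe that triggered those 400s is nothing more than a TCP connect followed by an immediate close; this sketch reproduces it against a throwaway local listener standing in for nginx:

```python
import socket
import threading

def tcp_health_check(host, port, timeout=2.0):
    """What the ELB probe does: open a TCP connection, send nothing, close."""
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connected successfully; close without sending any data
    return True

# Throwaway local listener standing in for nginx.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def accept_once():
    conn, _ = server.accept()
    conn.close()

t = threading.Thread(target=accept_once)
t.start()
ok = tcp_health_check("127.0.0.1", port)
t.join()
server.close()
print(ok)  # True - a patched nginx would not log this connection as a 400
```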
Yes, it may cause some conflict in your web server - for example, you could end up with two X-Frame-Options values in your response header, and I suggest you not do that. Handle your headers at just one level: do whatever you can with Django and do the rest with your web server (as far as I know, for example, Feature-Policy can't be handled cleanly in Django, so do that with your web server; a nonce hash is not easy in a web server, so you should do that with Django!)
I don't know how accurate this is, but the Django deployment checklist suggests doing SSL redirection with your web server, and
you can use django-csp to generate the nonce hash and configure your CSP policy. Also, you can add Feature-Policy to the Nginx config with:
add_header Feature-Policy "accelerometer 'none'; camera 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; payment 'none'; usb 'none'";
The best practices are available in the OWASP Secure Headers Project and my git repo, and here is a link to a Django secure-header config.
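The conflict mentioned above (the same header being configured with different values at both levels) can be spotted with a small helper; this is a toy sketch, not Django or nginx code:

```python
def merged_headers(app_headers, server_headers):
    """Combine headers set at the application and web-server levels and
    report names configured differently at both - the conflict to avoid."""
    combined, duplicates = {}, []
    for source in (app_headers, server_headers):
        for name, value in source.items():
            key = name.lower()  # header names are case-insensitive
            if key in combined and combined[key] != value:
                duplicates.append(name)
            combined[key] = value
    return combined, duplicates

django_side = {"X-Frame-Options": "DENY"}        # e.g. set by middleware
nginx_side = {"X-Frame-Options": "SAMEORIGIN"}   # e.g. set by add_header
_, dupes = merged_headers(django_side, nginx_side)
print(dupes)  # ['X-Frame-Options']
```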
|
Now which is better for adding security headers configuration
at the application level or the nginx level like
add_header X-Frame-Options "SAMEORIGIN";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header 'Referrer-Policy' 'origin';
Or at django settings
SECURE_BROWSER_XSS_FILTER=True
SECURE_CONTENT_TYPE_NOSNIFF=True
SECURE_HSTS_INCLUDE_SUBDOMAINS=True
SECURE_HSTS_SECONDS=36000
SECURE_SSL_REDIRECT=True
And what if I added them at the two levels - will it make any conflict or any problems in the future?
Sure, what about any framework that provides its own security middleware, not django specifically?
| Which is best practice to add security headers for django application? |