If the program runs out of memory, it looks like an issue with the overcommit handling of your operating system. If you are on Linux, you can try running the following command to enable "always overcommit" mode, which can help you load the 3.63 GiB npy file with numpy: $ echo 1...
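Beyond the overcommit setting, a sketch of a workaround: numpy can memory-map the .npy file instead of reading it into RAM, so only the slices you touch are paged in. The tiny array written below is just a stand-in for the real 3.63 GiB file from the question.

```python
import numpy as np

# Stand-in for the real userMovieMatrixAction.npy from the question;
# mmap_mode works the same way for a 3.63 GiB file.
np.save("userMovieMatrixAction.npy", np.arange(12).reshape(3, 4))

# mmap_mode='r' maps the file lazily instead of allocating the whole
# array in RAM, sidestepping the MemoryError.
userMovie = np.load("userMovieMatrixAction.npy", mmap_mode="r")
numberUsers, numberGenreMovies = userMovie.shape
print(numberUsers, numberGenreMovies)  # 3 4
print(int(userMovie[1, 2]))            # 6
```

Note that a memmapped array is read lazily, so operations that touch every element will still stream the whole file through memory, just not all at once.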
I was loading this file, but it raises an error. import pandas as pd import numpy as np userMovie = np.load('userMovieMatrixAction.npy') numberUsers, numberGenreMovies = userMovie.shape genreFilename = 'Action.csv' genre = pd.read_csv(genreFilename) MemoryError: Unable to allocate 3.63 GiB for an array with shape (487495360...
How to solve memory error? Should I increase memory limit?
1) No - not for the general case of C# - obviously anything can be created for some subset of the language 2) Yes - HLSL using DirectX or OpenGL 3) Not generally possible - CPU and GPU coding are fundamentally different Basically you can't think of CPU and GPU coding as being comparable. A GPU is a highly specialis...
I have no knowledge of GPU programming concepts and APIs. I have a few questions: Is it possible to write a piece of managed C# code and compile/translate it to some kind of module, which can be executed on the GPU? Or am I doomed to have two implementations, one for managed on the CPU and one for the GPU (I understa...
Run C# code on GPU
After a lot of fiddling, I discovered that the issue was due to the presence of a .htaccess file in the repo. It appears the file in question was messing with the Apache localhost server, which was causing the folder to be inaccessible.
I have my Apache server running under localhost on OS X, as per this guide. I recently tried cloning the private repository for my website from my GitHub account to my ~/Sites/ folder, but it does not show up when I navigate to localhost/~USER/ in Chrome. At first I thought it was a problem with Git in general, but I ...
Can't see a cloned GitHub repository in OS X localhost directory
OS X has crontab support for running scheduled tasks. Type man crontab in a terminal for more information. I also found this link.
how do I create a cron / cronjob (I am not quite sure about the correct terminology ^^ ) on XAMPP for Mac OS X running Snow Leopard? Or how do I make a cron(job) on Snow Leopard, whether XAMPP or not?
Making a cron(job) on XAMPP for Mac OS X
entrypoint.sh (in here I get createdb: command not found). Running createdb in the nodejs container will not work because it is a postgres-specific command and it is not installed by default in the nodejs image. If you specify the POSTGRES_DB: pg_development env var on the postgres container, the database will be created automatically wh...
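A minimal docker-compose sketch of the POSTGRES_DB approach described above (service names and credentials are illustrative, not from the question):

```yaml
# docker-compose.yml (sketch)
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: pg_development   # created automatically on first start
  api:
    build: .
    depends_on:
      - db
```

The official postgres image only creates POSTGRES_DB when the data volume is empty, so if you already have a volume from a previous run you need to remove it (docker-compose down -v) for the variable to take effect.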
I'm setting up a simple backend that performs CRUD actions with a postgres database and want to create the database and run migrations automatically when docker-compose up runs. I have already tried to add the below code to the Dockerfile or entrypoint.sh but neither of them works. createdb --host=localhost -p 5432 --username=postgres --no-passw...
How to create postgres database and run migration when docker-compose up
I see zero issues in doing an EXPOSE on the port, as long as we don't publish the port. EXPOSE will not allow communication via the defined ports to containers outside of the same network or to the host machine. To allow this to happen you need to publish the ports. But it's doable at the cost of adding security holes by...
In short, I'm trying to set up an nginx container to proxy_pass to other containers on port 80. I was following along with this tutorial: https://dev.to/domysee/setting-up-a-reverse-proxy-with-nginx-and-docker-compose-29jg They describe having a docker compose file that looks something like: version: '3' services: ...
How to proxy_pass to a node docker container on port 80 with nginx container
Docker stats inherently takes a little while, a large part of this is waiting for the next value to come through the stream$ time docker stats 1339f13154aa --no-stream CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS ... real 0m1.556s user 0m0.020s sys 0m0...
I tried to use Python to get the docker stats, using Python's docker module. The code is: import docker cli = docker.from_env() for container in cli.containers.list(): stream = container.stats() print(next(stream)) I run 6 docker containers, but when I run the code, it needs a few seconds to get all containers...
A problem getting docker stats with Python
As Amal commented below your question, the dot in regex means any character, and since RewriteRule uses regular expressions for the URI pattern, you will need to escape the dot to make it match a literal dot. However, in mod_rewrite rules there is a way you can use RewriteCond to make it match literal strings without regular expressions u...
I have the following htaccess to rewrite a precise URL (views/index.php to views/index.xml): RewriteEngine On RewriteRule ^/views/index\.php$ /views/index.xml [L] It's way too easy to forget the \ and type ^/views/index.php$, allowing /views/indexXphp and /views/index/php instead of only /views/index.php. Using an exact match inst...
Exact match with RewriteRule without regex
To trust the layers of a pulled image that wasn't built locally, you need --cache-from, e.g.: docker build --cache-from=<registry>/test-docker-image:latest -t newimg:latest . Docker won't trust pulled images by default to avoid a malicious image that claims to provide layers for an image you may build while actually i...
I want to reuse the layers from a docker image on two different machines in the following way: Build image (1) Push image to registry (1) Pull image from registry (2) Build same docker image and reuse layers from pulled image (2) Therefore, Machine 1: I build this following image: FROM node:13-slim COPY package.jso...
Layers between docker builds can't be shared
I found some time to experiment with this and here's what I found. >>> import boto >>> c = boto.connect_s3() >>> fp = open('myfiletoupload.txt') >>> content_length = len(fp.read()) >>> c.generate_url(300, 'PUT', 'test-1332789015', 'foobar', headers={'Content-Length': str(content_length)}, force_http=True) 'http://test-...
I know how to download a file in this way: key.generate_url(3600) But when I tried to upload: key.generate_url(3600, method='PUT'), the url didn't work. I was told: The request signature we calculated does not match the signature you provided. Check your key and signing method. I cannot find example code on the boto h...
How to generate a temporary url to upload file to Amazon S3 with boto library?
This is actually quite reasonable, and is one of the use cases of Audit. You just need to make sure audit is enabled and spec.enforcementAction: dryrun is set in the Constraint. Here is an example of what the ConstraintTemplate's Rego would look like (OPA Playground): deny[msg] { value := input.request.object.status.disr...
We are looking to use OPA Gatekeeper to audit K8s PodDisruptionBudget (PDB) objects. In particular, we are looking to audit the number of disruptionsAllowed within the status field. I believe this field will not be available at the point of admission since it is calculated and added by the apiserver once the PDB has been applie...
Can OPA Gatekeeper be used to audit K8s PodDisruptionBudget status fields?
To separate dev vs. production environments, given a choice of these options: (1) prefix table names, (2) use separate regions, (3) use separate AWS accounts: use option 3. Prefixing table names is just asking for trouble. Your code would have to be "environment-aware" to know to talk to Table_dev or Table_prod. Don't do this. Using separate ...
I'm using dynamodb, and I want to separate my development environment from production. I've seen two ways of doing this: one by prefixing the tables, e.g MyTable_Dev vs. MyTable_Prod, and the other by opening separate account and using consolidated billing. But I wanted to hear your opinion about a third way: separatin...
dynamodb - splitting development and production by region
Most implementations of cron pass a command string to /bin/sh, so depending on what it is on your system and what implementation of date you have, you may have luck with this: compute_monthly_rate -e $(date +%Y.%m.%d) Try it in a terminal first: $ date +%Y.%m.%d 2016.02.02
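One caveat worth sketching: inside a crontab entry, % is a special character (an unescaped % ends the command and everything after it becomes stdin), so the date format must be backslash-escaped when it is written directly into the crontab rather than typed in a shell. The schedule below is illustrative, not from the question:

```
# crontab entry (sketch): run at 23:55 on the 28th of each month.
# NOTE: % is special in crontab, so each % in the command field
# must be escaped as \% or the entry will misbehave.
55 23 28 * * compute_monthly_rate -e $(date +\%Y.\%m.\%d)
```

Alternatively, wrap the command in a small shell script and call that from cron, which avoids the escaping issue entirely.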
I have a cronjob that will be launched at the end of each month to generate a monthly report. Here I found how to launch the job at the end of each month: Cron job to run on the last day of the month. Now, I just want the date of the last day of the month as an argument for the script. For example: compute_monthly_rate -e **...
Pass today's date as argument for a cron job
Within the same state machine execution, you can use a Map state to run these tasks in parallel, and use the maximum concurrency setting to reduce excessive lambda executions. The Map state ("Type": "Map") can be used to run a set of steps for each element of an input a...
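A minimal Amazon States Language sketch of the Map state with MaxConcurrency described above (state names, ItemsPath, and the function ARN are illustrative placeholders):

```json
{
  "ProcessItems": {
    "Type": "Map",
    "ItemsPath": "$.items",
    "MaxConcurrency": 2,
    "Iterator": {
      "StartAt": "CallDownstream",
      "States": {
        "CallDownstream": {
          "Type": "Task",
          "Resource": "arn:aws:states:::lambda:invoke",
          "Parameters": {
            "FunctionName": "my-throttled-function",
            "Payload.$": "$"
          },
          "End": true
        }
      }
    },
    "End": true
  }
}
```

With "MaxConcurrency": 2, at most two iterations run at a time, so the downstream API sees at most two concurrent lambda invocations from this state.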
I have a state machine in AWS. I want to limit concurrency of a task (created via lambda) to reduce traffic to one of my downstream API. I can restrict the lambda concurrency, but the task fails with "Lambda.TooManyExecutions" failure. Can someone please share a simple approach to limit concurrency of a lambda task? ...
How to limit concurrency of a step in step functions
You can look into this Docker Hub image: docker run -it -p 80:80 --entrypoint "streamlit" marcskovmadsen/awesome-streamlit:latest run app.py Not sure about the streamlit version, but you can create one based on this Dockerfile. Or you can explore streamlit-docker, working ...
I want to run streamlit through docker. I did not find any official image. Can someone please guide me with the steps required to achieve this, or a Docker image for streamlit? Here are the details: Operating System: Windows 10 Home Docker version 19.03.1 Streamlit, version 0.61.0
How to run streamlit through docker?
When you access your pods by service name, you get an IP address for one of the pods and use it in subsequent requests.To solve this problem, you can create an ingress and use the url instead of the service name, in this case you will get an IP address on each request, and the load will be distributed between the pods.
I have created deployment which has a service. I set it to run it in 5 replicas. When I call the service it always uses the same replica (pod). Is there a way how to use round robin instead, so all pods will be used?
Kubernetes always uses the same replica
The FallbackResource directive wasn't introduced until 2.2.16, as described here. Upgrading Apache should solve your problem.
Context: I have Apache 2.2.15 configured for mass virtual hosting as follows: <VirtualHost *:80> # ...irrelevant lines omitted VirtualDocumentRoot /srv/www/%-3+ ServerName example.com ServerAlias localhost </VirtualHost> mkdir /srv/www/foo makes foo.example.com available. Problem: HTTP 500 from all of foo.example...
HTTP 500 using FallbackResource and mass vhost config
It will do load balancing, but it's not application aware, so if your pod cannot handle the request due to load, the request would be lost or an error would be returned. You can use readiness probes to mark pods as not ready; they will not receive traffic in that case.
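A sketch of the readiness probe mentioned above, as it would appear in a pod spec (the container name, image, path, and port are illustrative):

```yaml
# Pod spec fragment (sketch): the pod only receives Service traffic
# while GET /healthz on port 8080 returns a 2xx/3xx status.
containers:
  - name: app
    image: my-app:latest
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```

When the probe fails, the pod is removed from the Service endpoints until it passes again, so overloaded pods can shed traffic by failing their health check.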
Assuming this scenario: Service A (ClusterIP): Pod1 (Image1), Pod2 (Image1), Pod3 (Image1); Service B (ClusterIP): Pod1 (Image2), Pod2 (Image2), Pod3 (Image2). Assuming that I have an Ingress Controller: /svcA > this will redirect to Service A; /svcB > this will redirect to Service B. So my question is, is the Service still doing a Load ...
In Azure Kubernetes Service (AKS) I have a Service of type "ClusterIP", does it perform the pod's load balancing internally?
Maybe use QSA in your RewriteRules, like this: RewriteRule .* index.php?kohana_uri=$0 [PT,QSA,L] See the manual of mod_rewrite (quoting): 'qsappend|QSA' (query string append): This flag forces the rewrite engine to append a query string part of the substitution string to the existing string, instead of replacing it...
After uploading my Kohana project to my GoDaddy server, I noticed my standard .htaccess file wasn't working sufficiently to provide the clean URLs. After some guidance, I ended up with the following rule: RewriteRule .* index.php?kohana_uri=$0 [PT,L] This got my nice URLs working again, but today I find out that it may b...
Kohana, .htaccess, and $_GET
You could make /dev/log a symlink to a directory where syslog-ng has write permission, something like this: source s_local { unix-dgram("/var/run/syslog-ng/log-socket" ...); }; With this you would need to create the /dev/log symlink when the image is created. I assume that the image is in your control.
I am trying to use rsyslog or syslog-ng inside a non-privileged container in Kubernetes. I have managed to make most of it work, but the only place I am stuck is with the /dev/log socket. rsyslog/syslog-ng fails to create this socket without privilege, which is kind of expected as /dev is owned by root. Error b...
Using rsyslog/syslog-ng in non-privileged Kubernetes pod
I am from the AWS Device Farm team. One possible failure point is using special characters (characters whose encodings are different in UTF-8 and ISO-8859-1) in the suite and test names in your test package. This is currently broken; a fix is in the works and will be released soon. Can you verify that you are not using sp...
I'm trying to run Calabash tests for my app on Amazon Device Farm, but a very simple check-for-text test always yields the following error, across all possible devices (yes, I tried all of them): This device was unavailable and skipped. No other information is provided. I made a simple app that just shows some static text ...
Amazon Device Farm with Calabash says devices are all unavailable
As per my understanding, you want logs of all activities and pipelines in a file. You can log all the pipeline runs, trigger runs, and activity runs in a segregated fashion using "Azure Monitor", all at one go. No need for any extra pipeline, stored procedures, or set variables. All the info will be logged in yyyy-->MM-->dd-->hh-->mm-->ss fa...
I have this ADF pipeline. I want to save the output with Name, Type, Status, Duration (same as in the pipeline debug output) for each run to Azure Data Lake Storage. The process should be automatic; I don't want to use 'Export to CSV' manually since I don't know when the job will run. Is there any way I can achieve this? I tried m...
Saving log of each Activity in ADF pipeline to ADLS
Update: figured it out, and I'll leave this question up since it seems relevant due to this recent change. For people on a MacBook, go to Keychain Access from the Finder and search for github. Double click the github.com option, press the show password button on the menu that pops up, and swap that out for your p...
As of today it seems GitHub has disabled passwords through the command line and instead requires personal access tokens, as you get this error when trying to push a commit: "remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead." I went and generated a pers...
GitHub change from password to personal access token without re-cloning the repo
At heavy (DOM) operations where performance issues are encountered, you should completely drop jQuery, and go back to vanilla JavaScript (especially if the no-jQuery alternative works across all modern browsers). var fragment = document.createDocumentFragment(); // Temporary storage for (var $idx=0, len = gameArray.le...
I have a parent div with around 300 divs inside it, all containing an image and some text. I have an array which contains all the information I need to reorder the divs, using references. If I then loop through the array after I have ordered it and move the elements accordingly, I get severe memory leaks of up to...
jQuery order elements, remove, detach, clone, append memory leaks
To start with you want to know where that memory is actually being used. There are a lot of complex programs to do memory analysis/profiling, but if you want something more detailed than Task Manager but still fairly simple and free, Sysinternals vmmap is great. http://technet.microsoft.com/en-us/sysinternals/dd535533...
I am writing a text editor application. As an experiment I ran the application and monitored its memory usage in Task Manager as I performed different actions. When I first launched the application, it used 3000 kB. It stayed roughly the same when I typed. When I clicked on save, it shot up to 9000 kB and then it ju...
Does my text editor application have a memory leak? Why does it consume 3x more memory than Notepad
* * * * *  Your application file or command
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month in numbers (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- ...
How to run a php file only once using cron jobs, i.e., schedule a task to run at 20/06/2015 10:30:00. I tried something like this: 30 10 20 6 ? 2015 /usr/bin/php /path/to/my/file/application.php but it is not working. I just want to schedule it once in the future and not repeat it. How do I do it? Thank you fo...
Scheduling a cronjob to run a php file only once
result = [list(someListOfElements) for _ in xrange(x)] This will make x distinct lists, each with a copy of the someListOfElements list (each item in that list is by reference, but the list it's in is a copy). If it makes more sense, consider using copy.deepcopy(someListOfElements). Generators and list comprehensions and things ar...
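The distinction matters because the tempting shortcut [someList] * x repeats one reference rather than making copies. A small demonstration of both, written for Python 3 (range instead of the xrange used above):

```python
someListOfElements = [1, 2, 3]

# A list comprehension builds x distinct inner lists (shallow copies).
distinct = [list(someListOfElements) for _ in range(3)]
distinct[0].append(99)
print(distinct[0])  # [1, 2, 3, 99]
print(distinct[1])  # [1, 2, 3]  -- unaffected, it is a separate copy

# Multiplication repeats the SAME reference x times.
shared = [someListOfElements] * 3
shared[0].append(99)
print(shared[1])    # [1, 2, 3, 99] -- same object mutated everywhere
```

This is why the comprehension form is the idiomatic way to pre-build a list of empty (or prefilled) lists: [[] for _ in range(x)], never [[]] * x.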
I need to incrementally fill a list or a tuple of lists. Something that looks like this: result = [] firstTime = True for i in range(x): for j in someListOfElements: if firstTime: result.append([f(j)]) else: result[i].append(j) In order to make it less verbose and more elegant, ...
How to create a list or tuple of empty lists in Python?
The "Run Code Analysis" menu items only apply to the legacy FxCop rules. You don't need to use those menu items for Roslyn-based analyzers (like the C# and VB.NET rules in SonarLint): Visual Studio will automatically trigger the analysis in the background. See the Microsoft docs for more info. If you are not seeing Sxxx ...
I installed the SonarLint extension for Visual Studio and connected successfully to our SonarQube server and successfully ran Code Analysis to display sonar issues in VS. So it was working OK but for some reason I am now no longer getting any sonar Sxxx warnings and instead now see the following 2 warnings:> Warning CA...
Visual Studio SonarLint extension connected to SonarQube is generating warnings CA0507 and CA0064 and no sonar Sxxx warnings
You merged a pull request (here: #3) before: fatal: Couldn't find remote ref refs/pull/3/merge. Try to open the PR again or create a new commit!
I'm having trouble with a build on Travis CI. I'm getting these git errors and that's blocking me. I've tried to restart the build and things like that, but it didn't work. My .travis.yml: language: node_js node_js: - "0.12" - "0.10" branches: only: - v1.0.0_dev - v1.0.0_stable before_script: - npm install -g bo...
Error in Travis CI build
Found out the problem: I needed to encode the message. Used this: message = "1" message_bytes = message.encode("ascii") content = base64.b64encode(message_bytes)
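Putting the fix together as a runnable sketch: the GitHub contents API expects the "content" field to be base64-encoded text, so the string must be encoded to bytes first and the result decoded back to a string before placing it in the JSON payload.

```python
import base64

message = "1"

# Encode str -> bytes, then bytes -> base64; decode back to str so it
# can be embedded in a JSON payload for the GitHub contents API.
content = base64.b64encode(message.encode("ascii")).decode("ascii")
print(content)  # MQ==

# Round-trip check: decoding recovers the original text.
assert base64.b64decode(content).decode("ascii") == "1"
```

The payload from the question would then use "content": content instead of "content": "1".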
I haven't seen many good resources on this topic, but from what I found I managed to make this script to update a simple text file to have just a 1:payload = { "message": "update file.txt", "committer": { "name": "<name>", "email": "<email>" }, "content": "1", "sha": "<sha>" } url = "https://api.gith...
Simple python request to update GitHub file
Try https://github.com/moby/docker-ci-zap. Just download docker-ci-zap.exe from that repo and run it: .\docker-ci-zap.exe -folder "C:\ProgramData\docker". Worked for me and it is much faster than reinstalling Docker.
I am using Docker for Windows and I noticed my C drive was getting full. When I looked, I noticed that there is 15 GB of data here: Docker/windowsfilter. I use docker sporadically so I do not need to keep any images or containers. So I googled some and tried suggestions like docker system prune and docker image prune and the sam...
Docker/windowsfilter takes huge amount of diskspace
Because the node does not come empty: it has to run some core apps like kubelet, kube-proxy, a container runtime (docker, gVisor, or other), and other daemonsets. Sometimes, 3 large VMs are better than 4 medium VMs in terms of the best usage of capacity. However, the main decider is the type of your workload (your apps): If your apps ...
When it comes to running Express (NodeJS) in something like Kubernetes, would it be more cost effective to run with more cores and fewer nodes, or more nodes with fewer cores each? (Assuming the cost per core is linear, e.g. 1 node with 4 cores = 2 nodes with 2 cores.) In terms of redundancy, more nodes seems the obvious answe...
Express (NodeJS) more cores vs. more nodes? (With Analysis and Examples)
Take a Lookup activity in the Azure Data Factory/Synapse pipeline, and in the source dataset of the Lookup activity, take the table that has the required parameter details. Make sure to uncheck the first row only check box. Then take the ForEach activity and connect it with the Lookup activity. In the settings of the ForEach activity, click add ...
I'm working on an ETL pipeline in Azure Synapse. In the previous version I used an Array set as a parameter of the pipeline and it contained JSON objects. For example: [{"source":{"table":"Address"},"destination":{"filename":"Address.parquet"},"source_system":"SQL","loadtype":"full"}] This was later used as the item() and ...
ForEach activity to loop through an SQL parameters table?
A few things: 1) Did the container exit? If you do a docker ps, does it seem to be running? 2) Did you check docker logs {container id}? 3) Does /go/src/github.com/mygithubname/ actually reflect the location of the build in the first stage of the docker container? Sample docker...
I have a Dockerfile that builds a golang project (that listens to the Twitter stream and lists the tweets by some filter) from the latest golang docker image, right now 1.10.3, like so: FROM golang:1.10.3 COPY . /destination/ WORKDIR /destination/ RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main . C...
Multistage build image not working, while normal build does
Cluster API is probably what you need. It is a concept of creating Clusters with Machine objects. These Machine objects are then provisioned using a Provider. This provider can be the Bare Metal Operator provider for your bare metal nodes and Cluster API Provider AWS for your AWS nodes, all resting in a single cluster (see the docs be...
I am now running two Kubernetes clusters. The first cluster is running on bare metal, and the second cluster is running on EKS. But since maintaining EKS costs a lot, I am finding ways to change this service into a single cluster that autoscales on AWS. I did try to consider several solutions such as RHACM, Rancher and Anthos. ...
Hybrid nodes on single kubernetes cluster
So how can I fix this warning? You can use a type parameter for your class: public class GridModelHolder<T> { private List<T> gridModel; public List<T> getGridModel() { return gridModel; } } The client code can then decide what type of List GridModelHolder holds: GridModelHolder<String> gridModelHolder = new...
private List gridModel; public List getGridModel() { return gridModel; } Eclipse shows a warning: List is a raw type. References to generic type List should be parameterized. Changing the code to the below will remove the warning: private List<?> gridModel; public List<?> getGridModel() { return gridModel; } Ho...
Java wildcard generic as return warning in Eclipse and SonarQube
Service: A deployment consists of one or more pods and replicas of pods. Let's say we have 3 replicas of pods running in a deployment. Now let's assume there is no service. How do other pods in the cluster access these pods? Through the IP addresses of these pods. What happens if, say, one of the pods goes down? Kubernet...
After reading through Kubernetes documents like this, deployment, service and this, I still do not have a clear idea of what the purpose of a service is. It seems that the service is used for 2 purposes: expose the deployment to the outside world (e.g. using LoadBalancer), and expose one deployment to another deployment (e.g. using ClusterI...
What exactly Kubernetes Services are and how they are different from Deployments
This is an example SonarQube pipeline configuration which is executed on every merge to the master branch. Example pipeline steps: Node is installed for building purposes; Prepare Analysis is initiated, which downloads the configurations and rulesets necessary for scanning; the NuGet package manager is installed; NuGet restore ...
We have set up pull request analysis for C# .NET code. It is observed that old (unmodified) code is being considered for analysis, which is not expected; this is blocking us from using quality gates. The new code condition is set based on the "number of days" condition, which is set to 1. Even then the PR/short branch analysis re...
Analyse new\updated code only for dotnet projects with sonarcloud
You shouldn't release the app delegate. In short, unless you alloc, copy or retain an object you don't need to release it.
I'm still facing the problem when I launch my application on iPhone. It shows a stack overflow from presentModalViewController because I'm using a number of view controllers and calling the same view controller from other view controllers, and it gets terminated. Here I'm showing the code which I'm using in the whole program to ...
Stack overflow from presentModalViewController on iPhone
It's not currently possible in the core framework because of CloudFormation behavior, but you can use this plugin: https://github.com/matt-filion/serverless-external-s3-event After installing serverless-plugin-existing-s3 by npm install serverless-plugin-existin...
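As an alternative worth noting: more recent versions of the Serverless Framework (v1.47 and later, if I recall correctly) added native support for attaching events to pre-existing buckets via an existing flag, which removes the need for the plugin. A sketch using the bucket and rules from the question:

```yaml
functions:
  processPdf:               # function name is illustrative
    handler: handler.process
    events:
      - s3:
          bucket: serverlesstest
          event: s3:ObjectCreated:*
          existing: true    # attach to the already-existing bucket
          rules:
            - prefix: uploads/
            - suffix: .pdf
```

Behind the scenes this uses a custom resource rather than CloudFormation's bucket-owned notification configuration, which is why it sidesteps the limitation described above.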
I want to add trigger event on a Lambda function on an already existing bucket and for that I am using below configuration: events: - s3: bucket: serverlesstest event: s3:ObjectCreated:* rules: - prefix: uploads/ - suffix: .pdf where bucket serverlesstest ...
How to add S3 trigger event on AWS Lambda function using Serverless framework?
It depends where /data is for you: already in the image, or on your host disk. A Dockerfile RUN command executes any commands in a new layer on top of the current image and commits the results. That means /data is the one found in the image as built so far, not the /data on...
RUN cp /data/ /data/db: this command does not copy the files in /data to /data/db. Is there an alternate way to do this?
How to copy a folder from docker to other folder?
Here's the result: https://github.com/danielwertheim/kiwi/wiki/Use-with-Asp.Net-MVC //D
Is there a way to get MarkdownSharp (I'm using the NuGet package) to handle 'GitHub Flavored Markdown (GFM)' and especially syntax highlighting of C# code, which (in GFM) is written like this: ```c# //my code..... ``` So, if I pass Markdown formatted content to MarkdownSharp, containing a C# code block (as above) I wan...
MarkdownSharp & GitHub syntax for C# code
In case you want to run many commands at entrypoint, the best idea is to create a bash file. For example commands.sh like this: #!/bin/bash mkdir /root/.ssh echo "Something" cd tmp ls ... And then, in your Dockerfile, set...
I'm trying to build a custom tcserver docker image. But I'm having some problems starting the webserver and the tomcat. As far as I understand I should use ENTRYPOINT to run the commands I want. The question is, is it possible to run multiple commands with...
Multiple commands on docker ENTRYPOINT
The Github repo (user/repo) text field should be filled with the GitHub "user/repo" (example: myGithubUser/myGithubRepository). This will import the GitHub README file and RELEASE to Bintray under the Readme and Release Notes tabs. You can also provide the full GitHub URL path in the VCS field located in the package details. I am with...
I want to store an Android library on jcenter. Before this, I created a repository on Bintray. After creating the repository, I had to fill in package details. In the package details there was a text field GitHub repo (user/repo) in which I had given the GitHub link of my id, https://github.com/kishlayk. But after updating the package, it...
No repository found under this GitHub path
I found this one in a generator example: $start_time=microtime(true); //do something you want to measure $end_time=microtime(true); echo "time: ", bcsub($end_time, $start_time, 4), "\n"; echo "memory (byte): ", memory_get_peak_usage(true), "\n"; http://php.net/ma...
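For comparison, the same measurement idea expressed in Python (the workload here is an illustrative placeholder, not from the question): time.perf_counter gives wall-clock time and tracemalloc tracks peak allocated memory.

```python
import time
import tracemalloc

tracemalloc.start()
start = time.perf_counter()

# ...the code you want to measure (illustrative workload)
data = [i * i for i in range(100_000)]

elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"time: {elapsed:.4f}s")
print(f"peak memory: {peak} bytes")
```

Like PHP's memory_get_peak_usage, tracemalloc reports the peak of what the interpreter allocated while tracing, not the process's total footprint.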
I am a PHP coder. I can't find a way to measure how much time is taken during execution of my code (execution time) and how much memory is used during execution. I know about the PHP INI settings, but they don't give me a solution. How can I get the execution time and memory usage from within my code?
Execution time and memory used by the code
Try this:RewriteEngine On # first, remove redirect to www by default if no subdomain RewriteCond %{HTTP_HOST} ^domain.com [NC] RewriteRule (.*) http://www.domain.com/$1 [R=301,L] RewriteCond %{HTTP_HOST} ^www\.domain\.com [NC] RewriteRule ^([^/\.]+)/?$ index.php?page=$1 [NC,L] RewriteCond %{HTTP_HOST} ^www\.domain\.c...
My current htaccess looks like this:RewriteEngine On RewriteCond %{HTTP_HOST} ^(www\.)?site\.com [NC] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([^/\.]+)/?$ index.php?page=$1 RewriteRule ^([^/\.]+)/([^/\.]+)?$ index.php?page=$1&subsectie=$2 RewriteRule ^([^/\.]+)/([^/\.]+)/([^...
htaccess not working properly
Zip files contain CRC32 checksums and you can read them with the Python zipfile module: http://docs.python.org/2/library/zipfile.html. You can get a list of ZipInfo objects with CRC members from ZipFile.infolist(). There are also modification dates in the ZipInfo objects. You can compare the zip checksum with calculated c...
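A small sketch of the comparison described above (file and archive names are illustrative): read the CRC stored in the archive via ZipInfo.CRC and compare it against zlib.crc32 of the file on disk.

```python
import zipfile
import zlib

# Create a sample file and zip it (stand-ins for the backup folder).
with open("report.txt", "wb") as f:
    f.write(b"hello backup")
with zipfile.ZipFile("backup.zip", "w") as zf:
    zf.write("report.txt")

# Compare the CRC stored in the archive against the file on disk.
with zipfile.ZipFile("backup.zip") as zf:
    info = zf.getinfo("report.txt")
    with open("report.txt", "rb") as f:
        disk_crc = zlib.crc32(f.read()) & 0xFFFFFFFF
    print(info.CRC == disk_crc)  # True when the contents match
```

If the CRCs match for every member of infolist() and no extra files exist on disk, the folder is unchanged and the backup can be skipped.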
This is the scenario. I want to be able to backup the contents of a folder using a python script. However, I want my backups to be stored in a zipped format, possibly bz2.The problem comes from the fact that I don’t want to bother backing up the folder if the contents in the “current” folder are exactly the same as wha...
How to elegantly compare zip folder contents to unzipped folder contents
After searching a few links and doing a few trials, I was able to resolve this issue. As given in the container runtime setup, the Docker cgroup driver is systemd, but the default cgroup driver of kubelet is cgroupfs. Since kubelet alone cannot identify the cgroup driver automatically (as given in the kubernetes.io docs), we have to provid...
I'm trying to set up a Kubernetes cluster with multiple masters and an external etcd cluster. I followed these steps as described on kubernetes.io. I was able to create the static manifest pod files in all 3 hosts in the /etc/kubernetes/manifests folder after executing Step 7. After that, when I executed the command 'sudo kubeadm init', th...
Unable to setup external etcd cluster in Kubernetes v1.15 using kubeadm
This may be considered a very manual/hacky way of restoring it, but you have several options: Git clone the repo from GitHub into a new folder. Delete .git in the old folder and cp -r .git from the new folder to the old one. You can then commit the new files as desired. Note: by using this method, you lose all your c...
I usually use git from the GUI in my IDE, but I wanted to do something through the command line. However, this messed up the local .git repository that I had for my project, and now I am unable to commit and push files to my remote repository on GitHub. My project files are still intact and safe, it's only the .git re...
Recover .git folder from an older commit on GitHub
Commits have two dates, and GitHub is showing you different ones than your GUI picked. Ordinary git commit and git merge set both the author and committer dates to right now. git rebase and kin set the committer date to right now (and you), leaving the author date (and the author name/email) as they are in the origin...
I have rebased my commits to change their dates, and right now it is showing the correct dates locally. Then, I decided to delete the remote repository and publish it again from the beginning with the correct dates. But, I was surprised to see that all the commits I have rebased show the same date of the ...
GitHub doesn't show the correct dates of my commits as they are stored locally
On Cloud Run, Google already implements a proxy front end (named GFE: Google Front End). One of its first assignments is to expose an HTTPS endpoint and proxy requests through to your Flask service exposed over HTTP. I personally don't know if this front end is based on Nginx or not. In any case, the Cloud Run python sampl...
I've followed the Google Cloud Run Quickstart which shows how to deploy a Flask app to Cloud Run, served with Gunicorn. However, many places online (including Gunicorn's own documentation) say that you should always put a proxy in front of Gunicorn, and specifically recommending Nginx. Is nginx necessary when serving ...
Google Cloud Run w/ Flask and Gunicorn: Nginx needed?
Yes it will shrink the allocated block, but it will lead to fragmentation (on a Windows system) over time.
I'm trying to squeeze execution time on a script avoiding useless big-matrix reallocation. An operation like B = A; causes little overhead since B will point at the same structure of A, and Matlab won't allocate a new one until an update occurs. But what about an operation like this? longVector = longVector(1:n); Wi...
matrix reallocation
You should add the IPs and domains of the API server to certSANs in the ClusterConfiguration of your kubeadm config, then run kubeadm init --config=<kubeadm-config-file>:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ['localhost','127.0.0.1']
  ...
I am deploying a kubernetes v1.22.3 cluster using kubeadm. Today when I access the api-server from a public domain, it shows an error like this: 2021/08/04 11:20:19 http: proxy error: x509: certificate is valid for 10.96.0.1, 172.29.217.209, not 107.124.83.31. 10.96.0.1 is my kubernetes cluster ip address, 172.29.217.209 is my host int...
how to add the public ip to X509 certificate when access kubernetes api server
You can do that using list-objects. list-objects will return the StorageClass; in your case you want to filter for values where it is GLACIER: aws s3api list-objects --bucket %bucket_name% --query 'Contents[?StorageClass==`GLACIER`]' What you want then is to get only t...
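The same filter the JMESPath query applies server-side can be done client-side in Python. The helper below is a pure function over the `Contents` entries that an S3 listing returns, so it runs without AWS access; the boto3 calls in the comment (and the bucket name) are an illustrative sketch, not tested here:

```python
def glacier_keys(objects):
    """Return the keys of listed S3 objects whose storage class is GLACIER.

    `objects` is a list of dicts shaped like the `Contents` entries of an
    S3 listing, e.g. {"Key": "logs/2020-01.gz", "StorageClass": "GLACIER"}.
    """
    return [o["Key"] for o in objects if o.get("StorageClass") == "GLACIER"]

# Hedged boto3 sketch (bucket name is a placeholder):
#   import boto3
#   s3 = boto3.client("s3")
#   contents = []
#   for page in s3.get_paginator("list_objects_v2").paginate(Bucket="my-bucket"):
#       contents.extend(page.get("Contents", []))
#   for key in glacier_keys(contents):
#       print(key)
```

Using a paginator matters for large buckets, since a single list call caps out at 1000 objects.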
It's very time consuming to get objects from Glacier so I decided to use S3 IA storage class instead. I need to list all the objects in my bucket that have Glacier storage class (I configured it via LifeCycle policy) and to change it to S3 IA. Is there any script or a tool for that?
Listing S3 bucket objects with specific storage class
Basically, Unicorn or Thin are single-threaded servers, which in a way means they handle a single request at a time, using deferring and other techniques. For a production setup you would generally run many instances of Unicorn or Thin (depending on your load etc.); you would need to load balance between those Rails ...
Currently I've already read a lot of tutorials teaching about Rails app deployment, and almost every one of them uses Nginx or other alternatives like Apache to serve static pages. Also in this Q&A, Why do we need nginx with thin on production setup?, they said Nginx is used for load balancing. I can understand the reasons mentione...
Is it necessary to use Nginx when deploy a Rails API ONLY app?
You could redirect them directly using: <head> <meta http-equiv="Refresh" content="0; URL=https://example.com/"> </head> If they need to be logged in before you redirect them, you can turn this into a JSX condition such as: loggedIn ? <meta http-equiv="Refresh" content="0; URL=https://example.com/"> : null This meta...
I would like to redirect users from https://danskii.github.io/Toronto-Bike-Fixit-Map/ to danielpuiatti.com/Toronto-Bike-Fixit-Map/ I was able to set up a custom domain to redirect to danielpuiatti.com with a CNAME, but I can't figure out how to get it to redirect to danielpuiatti.com/Toronto-Bike-Fixit-Map/ My current...
Is it possible to redirect a GitHub pages site to a host with a url-path?
Perhaps GKE usage metering might be of interest to you. A step-by-step guide can be found here. GKE metering will fetch resource usage/consumption from metrics servers, convert the consumption data to usage records and send the records to a different BigQuery table in the same dataset.
I have the following situation: I have a bunch of deployed things in multiple GKE clusters. I would like to generate billing for my customers who use those deployments. However, I don't want to bill them for network traffic they aren't generating, but my cluster is generating, so inter region / AZ communication is proba...
GKE (Google Kubernetes Engine) network traffic monitoring of PODs for detailed byte based billing
You would have to use delete_object(): import boto3 s3_client = boto3.client('s3') response = s3_client.delete_object( Bucket='my-bucket', Key='invoices/January.pdf' ) If you are asking how to delete ALL files within a folder, then you would need to loop through all objects with a given Prefix: import boto3...
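Extending the loop idea above: if the "folder" was created in the S3 console, it exists as a zero-byte placeholder object whose key is the prefix itself (e.g. "invoices/"). The selection logic below is a pure helper of my own naming, shown separately from the hedged boto3 sketch in the comment so it can be demonstrated without AWS access:

```python
def keys_to_delete(keys, prefix):
    """From a listing, pick the keys under `prefix` to delete, but keep the
    zero-byte "folder" placeholder (the key equal to the prefix itself,
    e.g. "invoices/") so the folder still shows up in the console."""
    if not prefix.endswith("/"):
        prefix += "/"
    return [k for k in keys if k.startswith(prefix) and k != prefix]

# Hedged boto3 sketch (bucket and prefix are placeholders):
#   import boto3
#   s3 = boto3.client("s3")
#   listing = s3.list_objects_v2(Bucket="my-bucket", Prefix="invoices/")
#   keys = [o["Key"] for o in listing.get("Contents", [])]
#   for key in keys_to_delete(keys, "invoices/"):
#       s3.delete_object(Bucket="my-bucket", Key=key)
```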
How can we delete files inside an S3 Folder using boto3? P.S - Only files should be deleted, folder should remain.
S3 Delete files inside a folder using boto3
Application Map groups based on RoleName property. If you include (through, for instance, TelemetryInitializer) a deployment id then you'll see 26 different nodes.
Take a look at this development environment map generated by Application Insights: What you are looking at are actually 26 deployments on the same AKS cluster and namespace, but the map leads you into believing that there are 26 pods of the same deployment! The map should look like this: How can I "break" that central no...
Application insights grouping deployments when generating map
You can do this using If:

Parameters:
  environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prd
Conditions:
  isDev: !Equals [ !Ref environment, dev ]
Resources:
  StandAlonePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: "s3-policy"
      ...
I am creating some IAM roles and policies via CloudFormation, but I would like to add policies based on a condition I have; say if it is dev then I would like to add a certain policy statement. Any suggestions? Parameters: environment: Type: String Default: dev AllowedValues: - dev ...
how to add a condition when writing a aws policy via cloudformation?
This is because when you call FileSystem.get(new Configuration()), the file system resolved is the default file system, which in this case is hdfs. You first need to obtain the right file system by providing a URI from a path which contains the s3 scheme and your bucket. It would also be better to use the Hadoop configuratio...
How to check if an S3 directory exists or not before reading it? I was trying this, as given here http://bigdatatech.taleia.software/2015/12/21/check-if-exists-a-amazon-s3-path-from-apache-spark/ import org.apache.hadoop.fs.{FileSystem, Path} import org.apache.hadoop.conf.Configuration val fs = FileSystem.get(new Configura...
Spark-scala : Check whether a S3 directory exists or not before reading it
Check out Lambda Extensions. These give a way to do something like what you describe, and they are deployed as layers, but it is the extension aspect that allows them to interact differently than dependency code in a typical layer. Also see https://aws.amazon.com/blogs/compute/introducing-aws-lambda-extensions-in-pre...
I'm researching the abilities of AWS Lambda Layers and trying to confirm whether the Layer can add behaviors without the Lambda Function having any knowledge / interaction with the layer. My understanding from the docs is that Layers are effectively a .zip file that is unpacked to the Lambda instance and is primarily ...
Can an AWS Lambda Layer intercept a Lambda Function Handler, without the Function / Handler invoking the layer?
I was about to give a less detailed account of the answer you refer to in your question until I read that. I would refer you to this, seems spot on to me. No better way than seeing the physical size on the server, anything else might not be accurate. You might want to set up some monitoring, for which a Powershell s...
I'm in active development of an ASP.NET web application that is using server side caching and I'm trying to understand how I can monitor the size this cache during some scale testing. The cache stores XML documents of various sizes, some of which are multi-megabyte. On the System.Web.Caching.Cache object of System.We...
How to determine the size in bytes of the ASP.NET Cache?
You are using RewriteBase incorrectly. You can't use the [L] flag because it's not a RewriteRule, hence the 500 error you are getting. Also, you can only have 1 RewriteBase in your rules. If the file has multiple bases, it will use the last one. So it will start to cause problems if you actually tried to use this in production ...
I'm creating an .htaccess file for my needs on my server: <IfModule mod_rewrite.c> #Enable the Rewrite Engine RewriteEngine On #Rewrite the base to / if this is not local host RewriteBase / #Set the base in case of local development RewriteCond %{HTTP_HOST} ^(.*)localhost(.*)$ RewriteBase /~asafnev...
.htaccess scope of RewriteCond
You can use named templates to define re-usable helper templates. E.g. in templates/_helpers.tpl: {{- define "myChart.someParam" -}}someval-{{ .Release.Namespace }}{{- end -}} In templates/configmap.yaml (for example): apiVersion: v1 kind: ConfigMap metadata: name: something data: foo: {{ template "myChart.someParam" . }} T...
I would like to be able to reference the current namespace in values.yaml to use it to suffix some values like this: # in values.yaml someParam: someval-{{ .Release.Namespace }} It's much nicer to define it this way instead of going into all my templates and adding {{ .Release.Namespace }}. If I can do it in values.yaml it's muc...
how can i reference the namespace in values.yaml?
It seems that the problem is with the GITHUB_TOKEN you informed. GitHub automatically creates a GITHUB_TOKEN secret to use in your workflow (you can find more information about it here). Therefore in your case, you can follow the specifications informed on the action repository you're using: pull-request: needs: r...
I have a github actions job which is failing on the last job. The build, unit test and regression test jobs are working fine but the pull-request job fails. This is the code for the failing job, the token has been replaced. pull-request: needs: regression name: PullRequest runs-on: ubuntu-latest step...
Github actions pull request builder returns error
You can push branches to a public repo if you are added as a contributor to that project. Otherwise, the process to contribute would be to create and clone a fork, make your changes and push to the fork. Then you can create pull requests from the fork to the main project for the author to review and take action.
I'm attempting to push a bug fix to a public project on GitHub using https: git clone <repo's https url> git checkout -b <branch> git add <modified file> git commit -m "message" git push --set-upstream origin <branch> I get: remote: Permission to <repo> denied to <user>. fatal: unable to access <repo>: The requested URL re...
How to push a new branch to a public project
The problem here is that you're not really comparing like with like (OS and physical memory). The worker process on the server with more memory is probably being more aggressive at reserving memory upon startup because there's more available.
I have configured and deployed an identical web application to 2 separate servers. Server1: Virtual Server, windows 2008 r2 enterprise edition, 1GB ram. Server2: Virtual Server, windows 2008 r2 data center edition, 4GB ram. When the web application is started on Server1 it acquires approximately 11MB of ram. When the...
iis 7.5 application pool memory usage
Yes, this is possible. X-FORWARDED-FOR is used by most servers (CDNs, proxies) to send the originating IP (the user's IP). So this should do the job: SetEnvIf X-FORWARDED-FOR 1.1.1.1 allow order deny,allow deny from all allow from env=allow If your CDN has a different variable name, just edit the first line. EDIT: If you want ...
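The precedence logic those htaccess lines implement (trust the CDN-supplied header when present, otherwise fall back to the socket peer address) can be sketched in Python. This is a minimal illustration with my own function names, not a drop-in for Apache:

```python
def client_ip(headers, remote_addr):
    """Resolve the real client IP: prefer the CDN-supplied header.

    X-Forwarded-For may carry a comma-separated chain of proxies; the
    left-most entry is the original client. Fall back to the socket
    peer address when the header is absent.
    """
    forwarded = headers.get("X-Forwarded-For", "").strip()
    if forwarded:
        return forwarded.split(",")[0].strip()
    return remote_addr

def is_allowed(headers, remote_addr, allowlist):
    """Mimic the allow-from rule against the resolved IP."""
    return client_ip(headers, remote_addr) in allowlist
```

Note the caveat that applies to the htaccess version too: only trust X-Forwarded-For when the request demonstrably came through your CDN, since clients can forge the header otherwise.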
Okay so we use some simple IP authentication with htaccess for a folder on our site: order deny,allow deny from all allow from 1.1.1.1 But with the new CDN we are on, the actual user's IP address comes through in a different server variable, let's say: $_SERVER['absolutely_true_ip']; whereas $_SERVER['remote_addr']; is not the...
How to specify where to get IP from in htaccess
dig team-mate.app shows two A records (I guess the second one is the "Parked" one from your screenshot): ;; ANSWER SECTION: team-mate.app. 443 IN A 18.157.238.183 team-mate.app. 443 IN A 34.102.136.180 The second one obviously doesn't reach your server, and is at the moment not listening to port 8000....
There is such a config server { listen 8080; server_name 18.157.238.183 team-mate.app www.team-mate.app; location / { proxy_pass http://127.0.0.1:8000; } location /static/ { root /app; } } The website opens by ip address 18.157.238.183:8000 but not the domain name Hosting has the ...
How do I properly configure nginx for domain name access?
The issue here is that you haven't specified an encoding for the file, which means that the file will be read with your system's default encoding. This means that the behaviour of the code could vary from system to system. You should explicitly state the file's encoding, for example: new InputStreamReader( new FileIn...
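The same pitfall exists outside Java: any API that falls back to a platform default charset gives system-dependent results. A small Python demonstration of why declaring the encoding matters (helper names are mine):

```python
import os
import tempfile

def read_with(path, encoding):
    """Read a text file with an explicitly declared encoding."""
    with open(path, "r", encoding=encoding) as f:
        return f.read()

def demo_encoding_matters():
    """Write 'café' as UTF-8 bytes, then read it back with two different
    declared encodings; only the matching declaration recovers the text."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, "wb") as f:
            f.write("café".encode("utf-8"))
        return read_with(path, "utf-8"), read_with(path, "latin-1")
    finally:
        os.remove(path)
```

Reading the UTF-8 bytes as Latin-1 silently produces mojibake ("cafÃ©") rather than an error, which is exactly the class of bug the Sonar rule is trying to prevent.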
The below-mentioned code snippet gives a Sonar comment with the following squid rule: squid:S1943 try (BufferedReader reader = new BufferedReader(**new FileReader**(properties.get(FILE_BASED_CONFIGURATION).toString()))) { //some code } catch (IOException | ArrayIndexOutOfBoundsException e) ...
SonarQube issue with New FileReader
I was confused about what it exactly means to add another security group in Source (Inbound Rules) and Destination (Outbound Rules) when adding a new rule. I found the explanation given below (source: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html#SecurityGroupRules) very useful. "When you sp...
From the doc: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#security-group-rules Source or destination: The source (inbound rules) or destination (outbound rules) for the traffic. Specify one of these options: (...) Another security group. This allows instances associated with the spe...
AWS Security group include another Security Group
drawable is equivalent of drawable-mdpi If you put your images in that folder they will get up-sampled for higher resolutions devices and that up-sampling can trigger OOM if images are large. If you put same sized images in drawable-xhdpi you will have upsampled images only on larger xxhdpi devices, and downsampled ...
This question already has an answer here: Bitmap too large to be uploaded into a texture (1 answer) Closed 8 years ago. In my android app, have all the images in the drawable folde...
Android Out of Memory error on drawable folder [duplicate]
I have it on good authority that the (relatively new) "Organizations" feature allows you to add people with read-only access to a private repository.
I am developing some private projects on GitHub, and I would like to add nightly cronjobs to my deployment servers to pull the latest version from GitHub. I am currently doing this by generating keypairs on every deployment server and adding the public key to the GitHub project as a 'Deployment key'. However, I recently ...
Github: readonly access to a private repo
Hyper-Q cannot be turned on/off. This is a hardware feature of Kepler cc 3.5 and newer GPUs. The CUDA MPS server can be turned on/off. The method of turning it on and off is described in section 4.1.1 of the documentation. In a nutshell, excerpting: nvidia-cuda-mps-control -d # Start daemon as a background process. ech...
As you know, since CUDA 5.5, Hyper-Q (on NVIDIA GPUs) allows multiple MPI processes to run simultaneously on a single GPU and share its resources, upon resource availability.Hyper-Q can be activated by a driver command (i.e., nvidia-cuda-mps-control -d ) before running the application.Considering that Hyper-Q does not ...
Is Hyper-Q activation/deactivation possible during application runtime
You can do: COPY ./ /code/ It will copy everything from the current folder into the /code folder of your image. You can then create a .dockerignore file to prevent adding files/directories other than a, b and c. For example, d, e and f are other directories in the cu...
How would you copy several directories to a destination directory in docker? I do not want to copy the directory contents, but the whole directory structure. The COPY and ADD commands copy the directory contents, flattening the structure, which I do not want. That is, if these are my sources: . ├── a │   ├── aaa.txt │...
Copy several directories to another directory
According to RFC 2818, a wildcard certificate matches only one level of subdomains, not deeper: E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com. In this case what you should do is use ports for mapping repositories instead of subdomains, so the docker repository...
I'm currently trying to set up a private Docker registry in Artifactory (v4.7.4). I've set up a local, remote and virtual docker repository, and added Apache as a reverse proxy. Added a DNS entry for the virtual "docker" repo. The reverse proxy is working, but if I try something like: docker pull docker.my.company.com/ubuntu:16.04 ...
Problems setting up artifactory as a docker registry
I would recommend that you don't try to delete them but instead git revert them. You will then create a new commit which removes the content of those few commits, and the operation will remain visible in the history (helping everyone understand what's going on). $ git rev...
This question already has answers here: How to delete the last n commits on Github and locally? (5 answers) Closed 4 years ago. I want to delete the last 10 commits (and pushed too)...
How to delete last n commits of a branch from git after I have pushed? [duplicate]
The trouble is that the map is saved as a local HTML file (rChart_map.html) and is hence not accessible to nbviewer when you are trying to view it online. Even if you upload rChart_map.html to the gist, it won't show up due to path issues. Locally, you need to refer to it as /files/rChart_map.html in your IPython notebook, ...
I have been able to embed this map in an IPython Notebook (which is sweet), but I am not clear on how I can share this with folks not using the Notebook. I am familiar with the bl.ocks.org viewer. It's great for standalone examples, but I am looking to share the rest of the analysis in the Notebook along with interactive c...
Share rCharts via IPython Notebook
If you desire your image to be on a separate line by itself, then you need to have it surrounded by blank lines. And if you want to nest an item in a list item, then you must indent that item one level (4 spaces): 1. element 1 2. element 2 ![](imagesurl) 3. element 3 The above renders as: element 1 element 2...
I am doing a list and in the middle in the list I need to put an image. The problem is, that it messes my list up. I have something like this 1. element 1 2. element 2 ![](imagesurl) 3. element 3 but it displays something like this element 1 element 2 image element 3 I need it to display something like this: ...
Images are messing with my lists
A single branch of another repository can be easily placed under a subdirectory while retaining its history. For example: git subtree add --prefix=rails git://github.com/rails/rails.git master This will appear as a single commit where all files of the Rails master branch are added into the "rails" directory. However the commit's title...
Consider the following scenario:I have developed a small experimental project A in its own Git repo. It has now matured, and I'd like A to be part of larger project B, which has its own big repository. I'd now like to add A as a subdirectory of B.How do I merge A into B, without losing history on any side?
Will moving files or folders into another file in github mess contribution history? [duplicate]
Your origin repository is ahead of your local repository. You'll need to pull down changes from the origin repository as follows before you can push. This can be executed between your commit and push: git pull origin development Here, development refers to the branch you want to pull from. If you want to pull from the master branch t...
I ran these commands below: git add . git commit -m 't' Then, when running the command below: git push origin development I got the error below: To [email protected]:myrepo.git ! [rejected] development -> development (non-fast-forward) error: failed to push some refs to '[email protected]:myrepo.git' To prevent you fr...
GitHub - error: failed to push some refs to '[email protected]:myrepo.git'
You can use a YAML linter in a GitHub Action that runs for every pull request. yamllint is already installed on Ubuntu-based GitHub runners according to the docs. Here is a basic workflow:

name: Validate-YAML
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  validate-yaml:
    runs-on: ubuntu...
I am currently working on a Ruby on Rails app. I have a directory filled with different YAML files that get edited from time to time. Any time a developer accidentally merges invalid YAML syntax to the main branch, the entire application breaks. Is there any way I can set up a YAML validator on GitHub that checks for v...
Is there a way I can validate YAML files on Github?
I do not think that Actors would be the right solution for this problem. The RunAsync() method is hard to simulate in an Actor. You could use Timers and Reminders for that, but it feels unnatural. So I would go with a service for this one.
While trying to implement Service Fabric's Reliable Services pipeline, I had these three approaches to choose from, and it looks like C is a good way to go. Details here. In this case I need to implement a kind of message pump between worker services. For example, I have 2 kinds of worker services. The first one is IO-bound and scal...
Service Fabric: Reliable Services pipeline with partitions load balancing
You can store images in a local SharedObject. By default you're limited to 100KB per site though. http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/net/SharedObject.html You can rely on the browser's own caching but even with that the browser will still make a request to the server to see if it's cache ...
I have a media player, which rotates images for the artist it plays. I load the images dynamically into the flash. The flash downloads the same images from the server over and over, how can i cache the images, so flash grabs them from a local cache and not from the server?
how can i browser cache an image loaded dynamically in flash
The memkind library (which provides easy access to MCDRAM) already provides a C++ allocator for you. See this manual entry for more information. Its usage is quite simple, as shown in this example: #include <hbw_allocator.h> #include <vector> #include <assert.h> int main(int argc, char*argv[]) { std::vector<u...
I would like to be able to allocate C++ objects like vectors directly on MCDRAM using the hbwmalloc library. The problem is that only C mallocs are implemented. Thus I thought about coding a subclass of vector implementing resize and reserve with dynamic allocation using hbw_malloc. This would allow the programmer to choose ...
How to use hbw_malloc library within C++ program?
You can try iptables -F, which flushes all rules.
I used the following commands: service iptables save, service iptables stop, chkconfig iptables off. But after some time, when I run the command service iptables status, it shows me a list of rules. How to disable iptables permanently?
Disable iptables permanently in CentOS
You may be following the guide on Building Internet Connectivity for private VMs and this part on Configuring IAP tunnels for interacting with instances and the use of TCP Forwarding in IAP. By Tunneling other TCP connections: "The local port tunnels data traffic from the local machine to the remote machine in an HTTPS stream...
Here is what I have: a GCP instance without an external IP (on a VPC, with NAT), and it accepts HTTP/HTTPS requests. The firewall allows ingress TCP for 0.0.0.0 and also for IAP's range 35.235.240.0/20 on all ports for all instances. I ssh to the instance via IAP and run the application in the terminal on port 5000 and 0.0.0.0 host and le...
Can ssh to GCP Private instance but cant access application interface through cloud shell
All great recommendations, and I thought I'd add this article I found, which relates to expanding a Windows Amazon EC2 EBS instance using the Amazon Web UI tools to perform the necessary changes. If you're not comfortable using CLI, this will make your upgrade much easier. http://www.tekgoblin.com/2012/08/27/aws-guide...
Closed. This question is off-topic and is not currently accepting answers. Closed 10 years ago.
Growing Amazon EBS Volume sizes [closed]
The way to solve it:// Login to container docker-compose exec phpmyadmin bash // Install vim apt-get update && apt-get install -y vim // Update ini(s) php --ini // Check results and update phpmyadmin conf vim /usr/local/etc/php/php.ini-development vim /usr/local/etc/php/php.ini-production // Update fields post_max...
My problem is I want to load a custom php.ini file in the phpmyadmin container inside Docker, because I want to change max_execution_time and upload_temp_dir in the php config file used by phpmyadmin. Why I want to change it: each time I import an sql dump file (*.sql) inside phpmyadmin, it always says: No data was r...
How to set php config file (php.ini) to be used in phpmyadmin container in laradock
This is likely a caching issue in your browser. When you go to http://localhost/ try pressing Ctrl+F5.
I just installed Ubuntu 17.04 and set up my LAMP server w/ PHP7 and the PHP modules enabled for Apache2. When I go to http://localhost/ it defaults to the index.html that is present in /var/www/html and not the index.php that is there. When I go to http://localhost/index.php the php file loads just fine and the php script execute...
index.php is not loading by default in apache2
THIS ONLY APPLIES TO NON-MASTER BRANCHES: If you are a newbie to git, simply don't try to do the git part in R at all. Instead, use GitHub Desktop or SourceTree. Point that tool to the desired repo and switch to the desired branch. Start RStudio and do any development. Close RStudio and use that external tool to perform any git steps...
I have followed every piece of advice on http://r-pkgs.had.co.nz/git.html and on the subsection http://r-pkgs.had.co.nz/git.html#git-branch and I am still getting an error. The steps I need/did (different from what Hadley's page dictates): grab the URL of the GitHub repo (e.g., https://github.com/OHDSI/Achilles.git); create a versioned project in RSt...
R: RStudio: How to check out an existing branch, modify it and commit back to GitHub (Windows machine)
Firstly, base64 encoding is required in your example. Although the docs state that this is done for you automatically, I always need it in my lambda functions creating ec2 instances with user data. Secondly, as of ES6, multi-line strings can make your life easier as long as you add scripts within your lambda function....
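The same idea (assemble a multi-line script, then base64-encode it before passing it as UserData) can be sketched in Python. One caveat worth hedging: SDKs differ on who does the encoding — the Node.js SDK in the question expects a pre-encoded string, while boto3's run_instances documents that it base64-encodes UserData for you, so this helper would only be needed where the API expects the raw base64 form. The function name is my own:

```python
import base64

def build_user_data(script_lines):
    """Join shell script lines and base64-encode the result, producing the
    encoded form the EC2 API ultimately stores in the UserData field."""
    script = "\n".join(script_lines) + "\n"
    return base64.b64encode(script.encode("utf-8")).decode("ascii")

# Example: an encoded two-line bootstrap script.
user_data = build_user_data(["#!/bin/bash", "echo hello > /tmp/hello.txt"])
```

Building the script as a list of lines keeps the Lambda source readable without relying on template-literal support.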
I'm trying to pass a script in Userdata field of a new EC2 instance created by an AWS Lambda (using AWS SDK for Javascript, Node.js 6.10): ... var paramsEC2 = { ImageId: 'ami-28c90151', InstanceType: 't1.micro', KeyName: 'myawesomekwy', MinCount: 1, MaxCount: 1, SecurityGroups: [groupname], Use...
How to pass script to UserData field in EC2 creation on AWS Lambda?
When you decide where to cache the results of complex queries, you should consider throughput as well as latency. If you put it in the database, you get a simpler solution, although it is unlikely to be able to handle as many requests per second as if you instead cached t...
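Whichever backing store wins (memcached or a key-value table), the access pattern under discussion is the same cache-aside loop: check the cache, on a miss run the slow query and remember the result with a TTL. A minimal in-process sketch of that pattern (class and names are mine, purely illustrative):

```python
import time

class CacheAside:
    """Minimal cache-aside: consult the cache first; on a miss run the
    expensive query and remember the result for `ttl` seconds."""

    def __init__(self, ttl=60.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._store = {}          # key -> (expires_at, value)
        self.hits = self.misses = 0

    def get(self, key, compute):
        entry = self._store.get(key)
        now = self.clock()
        if entry and entry[0] > now:
            self.hits += 1
            return entry[1]
        self.misses += 1
        value = compute()         # the "slow" db query
        self._store[key] = (now + self.ttl, value)
        return value
```

Swapping `self._store` for a memcached or database client changes the throughput/latency trade-off without changing the calling code.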
Which is faster? A two column select to a traditional db or a query to memcached? If the db query is roughly as fast, why bother with adding another layer to your stack (assuming you don't care about expiring entries)? Wouldn't it be easier to add a two column table (key varchar, value text) which can be used for all ...
memcached vs a db based key value table?
This answer has an explanation of the difference between "assume yes" and a non-interactive mode. I also found an example of a Dockerfile that installs jackd2 here, and it's setting DEBIAN_FRONTEND to 'noninteractive' before installing jackd2.
In my Dockerfile, I am trying to install the jackd2 package: RUN apt-get install -y jackd2 It installs properly, but after installation, I can see the following prompt: If you want to run jackd with realtime priorities, the user starting jackd needs realtime permissions. Accept this option to create the file /etc/security/lim...
Dockerfile - How to pass an answer to a prompt post apt-get install?
Like others have said, what you want is a so-called zone transfer. If it is your own domain you can configure the DNS server to give it to you. If it is for some other domain you probably won't get it, since most DNS admins consider it a security threat. Even if an individual record isn't a problem (that's what the DNS i...
Closed. This question is off-topic and is not currently accepting answers. Closed 12 years ago. This question exists because it has historical significance, but it is not considered a good, on-topic question for thi...
Is it possible to find all DNS subdomains for a given domain name? [closed]
Your existing directives specifically avoid rewriting requests for existing files, so they would still let you visit files in the root directory. They will also rewrite static resources to public/index.php?path=, which will presumably fail. Try the following instead: RewriteEngine On # Stop processing if already in the...
I recently moved my index.php (the file that handles routing) and CSS, JavaScript, and font assets to a public/ folder. I only want items in this public/ folder to be accessible, for security purposes. I ran into this problem when I realized I could visit mysite.com/composer.lock and view the composer lock file with my old .htac...
MVC public folder htaccess is not working
It looks like your working directory contains both .csproj and .sln files. Try to specify the .sln file in the command. Run: dotnet publish your-solution-file.sln -c Release -o out I had the same error message with dotnet build and this solves it. By the way, since .NET Core 2.0 the dotnet restore command is run implicitly, s...
I am new to Docker and trying to create a Dockerfile for an ASP.NET Core application. Can someone suggest what changes I require? Here is my Dockerfile: FROM microsoft/dotnet:2.1-sdk WORKDIR /app COPY Presentation/ECCP.Web/ *.csproj ./ RUN dotnet restore COPY . ./ RUN dotnet publish -c Release -o out FROM mi...
MSBUILD : error MSB1011: Specify which project or solution file to use because this folder contains more than one project or solution file
It depends on how you are using those libraries. For example, as shown in the Quick start of the library you linked, you can start an additional server on a separate port just for the purpose of exposing metrics. But you can also expose metrics using your current routing, for example as shown in the ASP.NET Web API exporter part of the docume...
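The "additional server on a separate port" option can be sketched with the standard library alone, which makes the threading question concrete: the exporter serves from its own daemon thread while the main application keeps running. This is an illustrative toy, not how prometheus_client is implemented internally; all names are mine:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    """Serve a tiny Prometheus-style text exposition at /metrics."""

    counters = {"requests_total": 0}   # shared, updated by the app

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = "".join(f"{k} {v}\n" for k, v in self.counters.items()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # keep the demo quiet
        pass

def start_metrics_server(port=0):
    """Start the exporter on a daemon thread; returns (server, bound_port).
    Port 0 lets the OS pick a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), MetricsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Because the thread is a daemon, it dies with the process; a scrape hitting the bound port sees whatever counter values the application thread last wrote.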
When adding Prometheus instrumentation with a client library for Java (https://github.com/prometheus/client_java) or .NET (https://github.com/prometheus-net/prometheus-net), does the instrumentation spin up a metrics web server in a separate thread of the microservice? What if the instrumented microservice is already running ...
When adding Prometheus instrumentation with Java or .NET, is the web server for metrics running in a separate thread?
This can be done using the rawBuild state: import hudson.model.Result currentBuild.rawBuild.@result = hudson.model.Result.SUCCESS Found the answer in this question: How to manipulate the build result of a Jenkins pipeline job?
This question already has answers here: How to manipulate the build result of a Jenkins pipeline job (back to 'SUCCESS')? (5 answers) Closed 5 years ago. I'm using the env variable "currentBuild.result" to modify the overall job status of a Jenkins job. I can set it to a failure using currentBuild.result = 'FAILURE' and I can se...
Cannot Set the Job status back to Success in Jenkins Pipeline [duplicate]